When I studied for my pilot's license, I learned that any instrument, any measure in isolation, can be misleading.
Flight airspeed alone, for example, doesn't indicate whether you're heading for your goal -- or a dive. You need context -- other instruments and an understanding of what these measures are saying -- to know.
So, how about measures of customer experience? Consider a contact center industry darling: first-call resolution (FCR) (also termed first contact resolution). Higher is better, right? It depends. I've seen too many cases where organizations with high FCR rates (e.g., in the 95% to 100% range) are resolving calls they shouldn't be handling in the first place. Here are some common scenarios that artificially drive up FCR scores:
- Problems with automation. The center handles contacts that could be automated. A high volume of interactions that could be serviced through interactive voice response, Web-based services or mobile apps suggests that the center lacks such systems, or that they exist but are difficult to use or operate poorly. Customers may also be unaware of these options or loath to use them. When agents handle calls that comparable centers service through automation, FCR percentages may be higher, but overall contact center efficiency is not.
- Communication with customers is unclear. When promotional pieces, product documentation, and other types of customer communication are unclear or incomplete, the contact center gets more work. These calls are usually straightforward -- "Yes, I apologize for the confusion and I can help you with that …" -- but they drive up costs and consume precious resources, even as they boost FCR. Ditto products and services that have glitches that lead to customer contacts. We get lots of practice playing cleanup for issues that should be resolved at a deeper level.
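To make the distortion concrete, here is a minimal sketch of the arithmetic behind the scenarios above. The record structure and field names (`resolved_first_contact`, `automatable`) are illustrative assumptions, not a standard schema: the point is simply that a raw FCR rate can look strong even when many of the "resolved" contacts never should have reached an agent.

```python
# Hypothetical contact log: each record notes whether the contact was
# resolved on first contact and whether it could have been automated
# (or avoided entirely through clearer communication).
contacts = [
    {"resolved_first_contact": True,  "automatable": True},   # e.g., a balance inquiry an IVR could handle
    {"resolved_first_contact": True,  "automatable": True},   # e.g., confusion from an unclear mailer
    {"resolved_first_contact": True,  "automatable": False},
    {"resolved_first_contact": False, "automatable": False},
]

def fcr_rate(records):
    """Share of contacts resolved on the first contact."""
    return sum(r["resolved_first_contact"] for r in records) / len(records)

raw = fcr_rate(contacts)                                      # 3/4 = 0.75
agent_worthy = [r for r in contacts if not r["automatable"]]
adjusted = fcr_rate(agent_worthy)                             # 1/2 = 0.50

print(f"Raw FCR: {raw:.0%}; FCR on agent-worthy contacts: {adjusted:.0%}")
```

The raw rate (75%) flatters the center; restricted to contacts that genuinely needed an agent, the rate drops to 50%, which is closer to the question FCR is supposed to answer.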
First-call resolution is just one example. Virtually any measure, when viewed in isolation, can be misleading. Consider service level. I once discovered a large utility that employed two people in workforce management just to monitor incoming traffic. If service levels began to slip, they would (with a few clicks) take blocks of queued calls that had already waited beyond the service-level threshold and put them into a holding pattern, allowing customers just entering the system to go directly to agents. As the queue settled down and service levels improved, they would release the calls from the holding pattern and allow them to reach agents (unless they had already been abandoned). In this way, they could literally control their service-level reports (the percent of calls answered within X seconds). This is an extreme example, but there are plenty of subtler ways to manipulate service-level results.
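The metric that utility was gaming is simple to state in code. This is a minimal sketch with made-up wait times; the 20-second threshold is an illustrative assumption, not a standard.

```python
# Service level: percent of answered calls that waited no longer than a
# threshold (here, an assumed 20 seconds). Wait times are invented examples.
THRESHOLD_SECONDS = 20

answer_waits = [5, 12, 18, 25, 40, 8, 60, 15]  # seconds each answered call waited

def service_level(waits, threshold=THRESHOLD_SECONDS):
    """Percent of answered calls at or under the threshold."""
    within = sum(1 for w in waits if w <= threshold)
    return 100.0 * within / len(waits)

print(f"Service level: {service_level(answer_waits):.1f}% within {THRESHOLD_SECONDS}s")
```

The loophole the utility exploited follows directly from the formula: a call that has already breached the threshold can never improve the percentage, so "parking" such calls to answer fresh arrivals first flatters the report without helping the customers who have waited longest.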
How about customer satisfaction (customer loyalty, Net Promoter Scores and so on)? Yep, we'd have to know more. Consider an extreme: handing out ski vacations to compensate customers for $5 inconveniences would earn you some great PR but would probably break the bank. And there are subtler examples. When we sample, whom we sample, and whether satisfaction scores reflect the cost of heroic efforts necessitated by poor product or service design -- these are all important questions in determining customer service effectiveness.
Now, low scores are not a good thing either. But the point is that we need to know more about what these measures are telling us: whether we're achieving customer service effectiveness. A clear understanding of our most important goals -- e.g., sustainable customer loyalty, market share, profitability, true cost effectiveness -- provides the context required to make sense of it all.
Back to the cockpit: You sacrifice some speed as you climb or turn, and that's OK, as long as you know where you're going and are making adjustments that get you there in the most effective manner.
Keep your eyes on the prize: Customer service effectiveness is what truly matters.
About the author:
Brad Cleveland is a speaker and consultant who has worked throughout the U.S. and in more than 60 countries for American Express, Apple, USAA and others. He is the author of eight books and publishes the popular (free) newsletter, The Edge of Service. Cleveland was one of two initial partners in the International Customer Management Institute, or ICMI, where he now serves in an advisory role. He can be reached at [email protected]