Closer to the Customer
Originally published in Contact Center Pipeline, August 2016
One of the best things to happen to the contact center in the last 10 to 15 years is the attention placed on customer feedback. Surveying customers used to be an informal activity done by a small minority of centers; today it is a best practice followed by the vast majority of us. Customers now have a voice, and contact center leaders have direct insight into the service improvements most valued by customers.
And yet, at the back of my mind, the attention focused on satisfaction studies reminds me that too much of a good thing is a bad thing. Not bad because it is wrong, or because it is a wasted activity. In this case, the risk is that we have put all our eggs in one basket, and in so doing, we might be missing a lot more than we thought. Yes, the data is valuable, and when it is used properly it helps us deliver better service. But just how well can we know our customers when we lean so heavily on data and graphs?
Part of a toolbox
Today’s contact centers squeeze a lot of activity from the data tied to a sampling of customer surveys. These metrics are used to drive improvement activities, evaluate agent performance, and portion out incentive compensation at all levels throughout the organization. With so much riding on it, the data needs to tell a full and accurate story. That’s a tall order, and our satisfaction survey results may not be up to the task.
Why? Quite simply, satisfaction studies are a tool, but they are not the whole toolbox. A hammer is indispensable to a carpenter, but it isn’t much help when a piece of wood needs to be cut to size. Likewise, satisfaction data can help us in many areas, but it doesn’t replace the other tools we have.
It seems unconventional for a contact center consultant to be questioning the value of customer satisfaction surveys, so let’s be clear – there is substantial value to them, and no one should ever consider stopping them. But I am saying they should not be the only way to connect with the customer, and here’s my argument:
Sampling bias – customers self-select into satisfaction surveys, and anytime this happens you have sampling bias. Those who opt in may not be representative of all customers, and that skew carries through to the data.
Survey fatigue – customers are not compensated for their input (and for companies that do compensate them, the sampling bias becomes an even bigger problem). They are routinely asked for feedback by all the companies they do business with, and many are quite frankly tired of all the requests. Some opt out, and others offer half-hearted input with little to no thought behind it.
Lack of expertise – even those customers who want to provide great input often cannot do it. Customers do not know what process should have taken place during a transaction, whether they got accurate information, or whether they got a complete answer. They can accurately tell you what they think, but on matters of fact their answers simply may not be right.
Disconnection from the source – it is great data, without a doubt. But thinking you understand your customers by only studying the data can be dangerous.
Does this mean we should eliminate customer satisfaction surveys? Not at all. We simply need to recognize they are part of our toolbox, and if we really want to get close to the customer we need a few more tools.
So what other options are available? Keep in mind, we want to plug the gaps left by our customer satisfaction surveys, and get closer to the interaction than a set of data will typically allow. Here are some ideas:
Listen to calls (live or recorded). Yes, this is far more anecdotal than customer satisfaction surveys, but listening to a customer provides a greater level of insight into a service transaction than you will get from numbers on a scoresheet. There is simply no substitute for hearing caller emotion, be it positive or negative. While you need to be careful about over-reacting to a single instance, the value of listening to the transaction is that it encourages a response much more effectively than a single low rating in a satisfaction report.
Group calibration sessions. The deepest dig into a service transaction happens during calibration sessions. With many sets of eyes and ears on the service, no stone is left unturned. The focus of calibration, of course, is getting everyone aligned on ratings. But a great addition is to talk about the customer experience, and discuss items like the driver of the call and how the customer perceived the interaction.
Elevate your monitoring results. Monitoring data is a great complement to satisfaction data. It is not subject to sampling bias, and it is generated by the experts you chose to do the evaluating. For some elements of the transaction (those dealing with the knowledge level of the rep, for example), the monitoring data is more accurate and complete than what a customer could possibly provide. We often focus only on individual monitoring data, and that’s a mistake. Summary monitoring data should ride side-by-side with customer satisfaction data in monthly reports.
Focus groups. Yes, this is typically a marketing activity that centers on products, prices and messaging. But there is no reason why focus groups can’t be assembled regarding service. While there is still sampling bias, you put customers in an environment where they will be more thoughtful about input, and that lends more credibility to their perceptions.
Many leaders like the thought of a simple, solitary solution. The one metric that matters. The only activity that counts. But the reality is that delivering great service across an entire customer base is a complex task, and you will never get the job done with only one tool. Using the right tool at the right time is the better way to go.
#Customersatisfaction #Satisfactionsurveys #Customersurveys #Satisfactionstudies #Qualitymonitoring #Samplingbias #Surveyfatigue