Originally published in Contact Center Pipeline, August 2016
One of the best things to happen to the contact center in the last 10 to 15 years is the attention placed on customer feedback. Surveying customers used to be an informal activity done by a small minority of centers; today it is a best practice followed by the vast majority of us. Customers now have a voice, and contact center leaders have direct insight into the service improvements most valued by customers.
And yet, at the back of my mind, the attention focused on satisfaction studies reminds me that too much of a good thing is a bad thing. Not bad because it is wrong, or because it is a wasted activity. In this case, the risk is that we have put all our eggs in one basket, and in so doing, we might be missing a lot more than we thought. Yes, the data is valuable, and when it is used properly it helps us deliver better service. But just how well can we know our customers when we lean so heavily on data and graphs?
Part of a toolbox
Today’s contact centers squeeze a lot of activity from the data tied to a sampling of customer surveys. These metrics are used to drive improvement activities, evaluate agent performance, and portion out incentive compensation at all levels of the organization. With so much riding on it, the data needs to tell a full and accurate story. That’s a tall order, and our satisfaction survey results may not be up to the task.
Why? Quite simply, satisfaction studies are a tool, but they are not the whole toolbox. A hammer is indispensable to a carpenter, but it isn’t much help when a piece of wood needs to be cut to size. Likewise, satisfaction data can help us in many areas, but it doesn’t replace the other tools we have.
It seems unconventional for a contact center consultant to be questioning the value of customer satisfaction surveys, so let’s be clear: there is substantial value in them, and no one should consider stopping them. But I am saying they should not be the only way to connect with the customer, and here’s my argument:
Sampling bias – customers self-select into satisfaction surveys, and anytime this happens you have sampling bias. Those who choose to respond may not be representative of all customers, and that skew carries through into the data.
Survey fatigue – customers are not compensated for their input (and in surveys that do offer compensation, the sampling bias is an even bigger problem). They are routinely asked for feedback by every company they do business with, and many are, quite frankly, tired of all the requests. Some opt out, and others offer half-hearted input with little to no thought behind it.