To NPS or Not NPS
Originally published in Contact Center Pipeline, February 2015
If by chance you are unfamiliar with the phrase Net Promoter Score (NPS), it is a way of scoring customer loyalty based on the customer’s 0-10 rating on the question “How likely is it that you would recommend our company/product/service to a friend or colleague?” The concept was presented in a Harvard Business Review article in 2003 and continues to spark debate across the business world. In many organizations it has worked its way deep into the contact center scoreboard.
Many years later, it should come as no surprise that the original design has sometimes been modified. Many organizations have taken the concept and altered the wording, adjusted the scoring, or applied it more like a satisfaction rating than a loyalty score. It should be noted that the originators of NPS (Fred Reichheld, Bain & Company and Satmetrix) have criticized the variants of the original design, though their objections have done little to stem the tide.
Given that it affects so many of us running contact centers today, it seems appropriate to determine what value, if any, we are getting from it. Does it help in any format, does the value depend on the use, or is it simply hype? There are many factors associated with NPS, and to determine value it is easier to break the analysis down into the segments of the NPS system and the ways it is used.
Asking the Question on an Enterprise Customer Satisfaction Survey
An enterprise customer survey is focused on the relationship between the customer and the entire enterprise. Questions can relate to overall satisfaction, and will also delve into sub-topics such as pricing, product quality, speed of delivery and customer support. When properly designed, the answers provided on the sub-topic questions will help explain the ratings on the higher-level questions, of which NPS is one.
In this format, I’m not sure why I would not want to ask the NPS question. I know some have criticized the value of the question, believing that a simpler overall satisfaction question (or something similar) may be more accurate or more predictive of future loyalty. While that might turn out to be true, there is nothing that says you cannot ask more than one higher-level question. And if a desire for brevity limits the number of questions, there is nothing that says you cannot have more than one type of survey out there. The NPS question seems on its surface to have merit, and if I were the CEO of a company, I believe I would like to know how a random sample of customers answered it.
Scoring the Question
Ah, we love to know the score, don’t we? Part of the allure of NPS is that it has its own self-made scoring mechanism – ratings of 9 and 10 are positive (promoters), 7 and 8 are passive, and anything below 7 is negative (detractors). Subtract the percentage of negative ratings from the percentage of positives, and you have yourself a score.
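For readers who want the arithmetic spelled out, here is a minimal sketch of that calculation in Python, assuming nothing more than a list of 0-10 ratings; the function name and sample numbers are illustrative only, not anyone’s production survey code.

```python
def net_promoter_score(ratings):
    """Return NPS: percentage of promoters minus percentage of detractors."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)    # ratings of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)   # ratings of 0 through 6
    # Passives (7 and 8) count toward the total but toward neither group.
    return 100 * (promoters - detractors) / len(ratings)

# Ten responses: five promoters, three passives, two detractors.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

With ten responses split five promoters, three passives and two detractors, the score works out to 50% minus 20%, or an NPS of 30.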
And that score makes the whole thing a bit uglier. The NPS question already suffers from the problem of being a perceptual measure of something that might happen in the future. Now we add to that all the issues related to scoring, and the tarnish becomes more noticeable. An 11-point scale that functions as a 3-point scale. One of the three points does not count in the score. A rating that is below the top box still counts as a promoter. These and other legitimate concerns raise questions regarding the precision of the score (see Call Out #1 – Is There a Better Option?).
Worse than the scoring accuracy issues, though, is the illusion that there is now one single score that tells me all I need to know about customer loyalty. The concept of one grand, all-knowing number is tempting to those in the C-wing, especially when that number can be used to compare your organization to others. Customer loyalty, though, is far too important and complex to be summed up neatly into one magic score. Even if the number were accurate and meaningful, it would be so only within the confines of today. In our modern times, tomorrow comes much too swiftly to lean heavily on today’s results.
Asking the Question on a Transactional Customer Satisfaction Survey
If scoring the question makes NPS a bit shaky, re-configuring the NPS question to meet the needs of a transactional survey makes it much worse. The transactional customer satisfaction survey is the one taken shortly after a customer completes an inquiry with us (usually, but not always, an agent-assisted inbound call). Since the original wording of the NPS question does not quite fit this format, it is generally preceded in a transactional survey with wording such as “Based on your last interaction with us…”
So in an attempt to find that magic number and apply it to the contact center, we have made the mistake of throwing logic out the window. We are trying to determine whether a customer is so strongly aligned with our company/product/service that they would promote us to the people they care about most. How could someone possibly base that decision on one four-minute transaction? The modifying phrase at the beginning of the question is supposed to direct the customer to reconsider their level of passion for our product/service based on a single interaction, and that is too tall a task to accomplish. Regardless of how it is worded, an NPS score derived from a transactional survey cannot be used with any degree of confidence.
Using NPS as an Agent Metric
Since we are now on the logic train, it should come as no surprise that the practice of calculating an NPS score by agent and using it as a performance metric is deeply flawed. Nothing upsets agents more than being rated on a metric over which they have little control, and that is exactly what transpires when a summary rating like NPS is used at the individual level. The argument made for this practice is that the other factors will “even out” over time, so differences in NPS scoring can then be attributed solely to the agent. I can assure you that the low number of transactions analyzed per agent, the inability to get a truly random sample, and the high number of factors escaping analysis will more than quash the “evening out” theory. Regardless of where you stand on NPS as a predictor of loyalty, it should not be (and was never intended to be) a measure of agent performance.
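To make the small-sample problem concrete, here is a brief simulation sketch (my illustration, not part of the NPS methodology): twenty hypothetical agents all serve the exact same customer population, yet their monthly per-agent NPS figures scatter widely simply because each agent collects only a handful of completed surveys. The promoter/detractor percentages and the survey count of 30 are assumptions chosen for illustration.

```python
import random

def simulated_nps(n_surveys, p_promoter=0.45, p_detractor=0.25):
    """Draw n_surveys responses from one fixed population and compute its NPS."""
    promoters = detractors = 0
    for _ in range(n_surveys):
        draw = random.random()
        if draw < p_promoter:
            promoters += 1
        elif draw < p_promoter + p_detractor:
            detractors += 1
    return 100 * (promoters - detractors) / n_surveys

random.seed(1)
# Twenty identical "agents," each with only 30 completed surveys this month.
scores = sorted(round(simulated_nps(30)) for _ in range(20))
print(scores)  # wide scatter even though underlying performance is identical
```

Change the seed and the ranking of these identical agents reshuffles completely – which is exactly the kind of noise that gets mistaken for performance differences.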
Summary
We obsess over customer satisfaction in contact centers, and that’s a great thing. Too much of a great thing, though, can sometimes be a bad thing. If you like NPS, keep it casual, keep it high level, and keep it away from transactions and individual performance statistics. Used correctly, it can help paint the picture – but it is not the only subject in the frame.
Call Out #1 - Is There a Better Option?
If I am interviewing a job candidate, I am much better off asking how he/she turned around a difficult caller in the past, rather than asking how such a scenario might play out in the future if the job is offered. The former deals with facts, while the latter deals with a perception of a supposed future. Why, then, does NPS not deal with facts? Others have suggested changing the wording to ask if the customer did, in fact, recommend our product/service to others. That seems like more valuable data.