Conventional wisdom says that the best way to measure the effectiveness of a learning experience is by measuring the Net Promoter Score (NPS), i.e., asking participants how likely they are to recommend the learning session to a friend. If people are willing to promote the learning session, they must have learned something new.
This may seem like an intuitive approach, but from a scientific perspective it is highly flawed. As NLI has found, the smarter way to gauge effectiveness isn't to track intentions to change, but to track the changes themselves.
How NPS works — and why it falls short
The NPS, one of the most widely used key performance indicators across industries, asks respondents to answer on a 0-10 scale, with those responding 9 or 10 categorized as Promoters, those responding 7 or 8 as Passives, and everyone else as Detractors.
The NPS is calculated by subtracting the percentage of Detractors from the percentage of Promoters (and ignoring the Passives):
NPS = % of Promoters – % of Detractors
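The calculation above can be sketched in a few lines of Python. This is a minimal illustration of the standard NPS arithmetic; the function name and sample data are invented for the example.

```python
def nps(responses):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)   # 9s and 10s
    detractors = sum(1 for r in responses if r <= 6)  # 0 through 6
    # Passives (7-8) appear in the denominator but are otherwise ignored.
    return 100 * (promoters - detractors) / n

scores = [10, 9, 9, 8, 7, 6, 5, 10]
print(nps(scores))  # 4 Promoters, 2 Detractors, n=8 -> 25.0
```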
Despite how easy the score is to administer (and how valid NPS seems on its face), the methodology runs into several problems.
To start with, an 11-point scale is inherently difficult for respondents to interpret. For example, how should one interpret the difference between a 6 and a 7? Even if two people have the exact same experience and the exact same opinion, they may arbitrarily choose one or the other.
From the NPS point of view, this is a huge difference, since a 6 registers as a Detractor while a 7 registers as a Passive. In these cases, the NPS can actually ignore data: every Passive response is dropped from the calculation, which means a whole swath of insightful data (often as much as 20-30% of the sample) gets systematically wiped out.
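To make the boundary effect concrete, the sketch below compares two hypothetical samples that are identical except for a single respondent who chose 6 rather than 7. The data and function are illustrative, not drawn from any real survey.

```python
def nps(responses):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / n

sample_a = [10, 9, 8, 6]  # the 6 counts as a Detractor
sample_b = [10, 9, 8, 7]  # the 7 counts as a Passive
print(nps(sample_a))  # 100 * (2 - 1) / 4 = 25.0
print(nps(sample_b))  # 100 * (2 - 0) / 4 = 50.0
```

One respondent arbitrarily choosing 6 over 7 swings the score by 25 points, even though nothing about the underlying experience changed.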
The science of memory also shows how the hallmark NPS question — “How likely are you to recommend this learning session to a friend or colleague?” — provides unreliable data. When interpreting the question, for instance, individuals might focus on whether or not the learning session was fun. However, as NLI has found, the most effective learning comes from exerting effort to understand difficult concepts — an experience that might not be the most enjoyable.
Finally, when people are asked to make decisions about their future behavior, respondents have a strong tendency to over-report the likelihood of “good behavior.” As a result, the NPS tends to be more reflective of the person answering the question than the learning session itself.
The NLI Solution
At NLI, we take a science-based approach to measuring behavior change and can offer some broad advice the next time you want to evaluate behavior change at your organization:
Ask About the Behavior
If the goal of a learning session is to change behavior, then ask questions about the desired behavior. When possible, try to get perspectives from other people.
Reduce the Need for Inference and Estimation
Ask questions of behavior recall with a short, recent reference period. Short reference periods — for example, “in the past week” — make it more likely that respondents will try to recall relevant episodes, whereas longer reference periods encourage guessing and estimation.
Also, try to provide behavioral cues or concrete examples of the desired behaviors in order to minimize the room for multiple interpretations.
Ask About the Behavior Again
A single data point can give you a snapshot of where your organization currently stands, but not where it is going. By examining behaviors over time, your organization can measure the impact and sustainability of your learning efforts – and ultimately the return on investment.
Time spent on good question design is a negligible cost in the overall budget of any study – and bad questions will result in bad data. Please consider these issues – and NLI – the next time you need to collect people data at your organization.