Customer satisfaction: the lead metric (3)

This follows on from my two previous posts on this subject.

I think I’ve got somewhere with this. It has been an interesting ride. The questions boil down to:

  • How can you measure the quality of your customer experience, across multiple channels (web, store, call centre, WAP)?
  • Is measuring symptoms of customer experience, rather than the experience itself, “dangerous”?
  • Are those metrics the only data you need to help you deliver a good customer experience?
  • Why would you measure customer experience related factors anyway?

And here are the answers.

How can you measure the quality of your customer experience, across multiple channels (web, store, call centre, WAP)?
Observing users, rather than gathering their opinions, is one of the cornerstones of usability and UCD methodology. It’s effective. So to measure customer experience, it makes sense to follow the same approach: observe what users do, rather than asking them what they think.

In the case of the telecoms operator I first spoke about, I think their decision to measure ARPU and churn is the right one. The observed action of loyal, satisfied customers is to stick with them and spend more. Customers vote with their wallets.

And conversely, customer satisfaction surveys are not a good measure – they are opinion-based, and opinion can’t be trusted.

I discovered a different telecoms operator that measures customer dissatisfaction on a monthly basis. This is an odd half-way measure. If it’s done by survey, it’s not much good. If it’s done by measuring actual customer complaints across all channels, it’s better. But it’s not great. Even if you got customer complaints down to zero, that doesn’t really tell you your customers are happy – just that they’ve stopped complaining.

If we concentrate on the web channel for a moment, there are some unique methods of measuring customer satisfaction. Tools like Relevant View and WebIQ allow you to intercept customers on your site, track them around, and ask them whether they are getting what they want at key moments. These tools do yield some wonderful information, but for a truly multi-channel business they are not enough. If you’re working across multiple channels you need metrics that are relevant across all channels. ARPU and churn, for a telecoms operator, fit the bill.
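To make the two headline metrics concrete, here’s a minimal sketch of how ARPU and monthly churn are calculated. These are the standard textbook definitions; the function names and example figures are my own illustration, not any operator’s actual schema or numbers.

```python
# Sketch: computing ARPU and monthly churn rate for a period.
# Function names and example figures are hypothetical illustrations.

def arpu(total_revenue, active_users):
    """Average revenue per user for the period."""
    return total_revenue / active_users

def churn_rate(customers_at_start, customers_lost):
    """Fraction of customers who left during the period."""
    return customers_lost / customers_at_start

# Example month: 1,200,000 in revenue across 50,000 active customers,
# of whom 1,500 cancelled during the month.
print(arpu(1_200_000, 50_000))    # 24.0 per user
print(churn_rate(50_000, 1_500))  # 0.03, i.e. 3% monthly churn
```

Both numbers are channel-agnostic, which is exactly why they work for a multi-channel business: revenue and cancellations accumulate the same way whether the customer touched the web, a store, or the call centre.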

Is measuring symptoms of customer experience, rather than the experience itself “dangerous”?
The possible issue here was that measuring a “symptom” of the customer experience (e.g. ARPU) might lead the business to focus on the revenue itself, rather than the cause of the revenue (i.e. customer experience).

There are three counter-examples, which suggests to me that there’s no real problem here.

  • The original telco I’ve been talking about set about a program of study and experimentation to find and prove the factors that were affecting ARPU, and change them. They didn’t get too hung up on the metric, but successfully went looking for the causes.
  • Nokia measures the sales of each mobile device it launches, and pays each product team’s bonus based on that. Nokia makes very good phones, and you can see why. The engineers know that if they make something people like, it will sell well, and they will benefit personally.
  • Apple do the same.

Are those metrics the only data you need to help you deliver a good customer experience?
No. ‘Course not. But lots of businesses have made the mistake of thinking they are.

To design a new product or service, you need to understand the motivations, abilities and desires of the target user group and deliver something that addresses those needs. UCD is great at that: start with ethnography or contextual enquiry, and later when you’re in the thick of concept and detailed design, get more user input from usability testing.

If you’re fixing an existing customer experience, you need to understand where it’s broken. Mystery shopping, diary studies, expert reviews, call log and search log analysis, web analytics and usability tests can all help.

Why would you measure customer experience-related factors anyway?
Two key reasons that I can see.

1. Measuring hard numbers and linking them to the quality of customer experience is a great way of demonstrating the value of CX initiatives to the business. (Proof-based businesses.)

2. Using those numbers as the basis for team reward, like Apple and Nokia do, is a great way to drive certain kinds of positive behaviours in the business. Supply the incentive and watch the business gravitate towards good UCD practice. You won’t have to force designers to design for user needs or conduct usability tests. When their bonuses are riding on customer satisfaction, they’ll do everything they can to engage with their customers during design. (Faith-based businesses.)

More about faith and proof in a future posting.
