Stated vs. Derived Importance in Key Drivers Analysis

Date: 11/30/2023

Author: Dino Fire

Congratulations, somebody shopped in your store.  Maybe somebody else will shop in your store, too.  Or maybe they’ll shop in someone else’s store.  Or maybe they’ll just say, “the heck with it” and go to a movie instead.

Retailers keep us researchers in business for the purpose of answering one question: “why?”  Why our store?  Why that other guy’s store?  How was the movie?

A lot of money is spent on research designed to learn and ultimately quantify the reasons consumers do something. In this case, we’re considering the issue in the context of shopping. Specifically, we want to know not only the reasons a consumer chooses one retailer over another, but how important each of those reasons is in making the choice.

Frequently, studies are designed to measure the importance of those features explicitly and in isolation; no further analysis is necessary beyond comparing top-2 box summary scores to see which features are more important to customers than others. Those scores are founded, of course, on the attributes consumers rate a 10 or a 9 (the top two boxes) rather than an 8, which is not among them.
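For concreteness, here is a minimal sketch of the arithmetic behind a top-2 box score on a 0-to-10 scale; the ratings below are made up for illustration:

```python
import pandas as pd

# Hypothetical 0-10 importance ratings for a single attribute
ratings = pd.Series([10, 9, 8, 7, 10, 6, 9, 8, 10, 5])

# Top-2 box: the share of respondents answering 9 or 10
top2_box = (ratings >= 9).mean()
print(f"Top-2 box score: {top2_box:.0%}")  # 50%
```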

In other cases, the importance metrics will be used to determine what, if anything, could or should be changed to improve the consumer’s shopping experience, which theoretically draws them into the store next time.  That’s where key drivers analysis comes in, but more about that later.

Despite its popularity, the traditional Likert scale is not the method I recommend for measuring importance. There are a couple of fundamental reasons for this.

First, importance scales often do not provide adequate discrimination and differentiation between product features or retail experiences.

Key Drivers

Imagine this interview:

Q: Now let’s talk about the last time you went to Dino’s House of Statistics. How important were prices?
A: Oh, very important.

Q: How important was the convenience of the place?
A: Oh, that was very important.

Q: How important were helpful store employees?
A: Oh, that was very important too.

Second, people use scales differently. This problem is not limited to importance scales. Respondents tend to calibrate their responses to the scores they have already given. For example, here’s Bob, rating the three attributes in our survey.

Q: How important is price? 
A: Let’s give it a 9.

Q: Now, how important is convenience? 
A: Well, not as important as price, so let’s say 8.

Q: How important are helpful employees?
A: Less important than price, and about as important as convenience. 8 for that one too.

Mary may follow precisely the same response pattern (9/8/8) but start her ratings at 6 instead, yielding 6/5/5. Should we view these three features as more important for Bob than for Mary? No. Do any of Mary’s answers qualify for top-2 box summaries? No.

The problem is that Bob’s 9 rating may be Mary’s 6 rating. The very nature of scales—that the values are relative, not absolute—can cause misinterpretation of the results.
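One common remedy, sketched below on the assumption that every respondent rated every attribute, is to center each person’s ratings on their own average before comparing across people; after centering, Bob’s 9/8/8 and Mary’s 6/5/5 show the identical pattern:

```python
import pandas as pd

# Bob and Mary give the same pattern at different scale levels
ratings = pd.DataFrame(
    {"price": [9, 6], "convenience": [8, 5], "employees": [8, 5]},
    index=["Bob", "Mary"],
)

# Subtract each respondent's own mean rating from their scores,
# removing individual differences in how they use the scale
centered = ratings.sub(ratings.mean(axis=1), axis=0)
print(centered)
# Both rows come out the same (rounded):
# price +0.67, convenience -0.33, employees -0.33
```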

There are occasions when stated importance is appropriate and useful. When it is, there are far better ways than Likert scales to measure it, but that’s a subject for another day. Hint: Google “discrete choice modeling.”

Measuring Derived Importance

Key drivers analysis yields importance in a derived manner, by measuring the relative impact of product features and retailer attributes on critical performance metrics like overall satisfaction, likelihood to shop again, likelihood to recommend the store to others, or some combination of those. The structure of a key drivers questionnaire looks like this:

Q. This next question is about your satisfaction with Dino’s House of Statistics in general. Please rate the store on how satisfied you are with it overall. 10 means you are “completely satisfied” and 0 means you are “not at all satisfied.”

This question—overall satisfaction—becomes the dependent variable for our analysis.

Q. Now, consider these specific statements. Using the same scale, how satisfied are you with Dino’s on…


      • Variety of products and services

      • Professional appearance of staff

      • Length of wait time

      • Ease of finding things in store

      • Length of transaction time

      • Convenient parking

      • Convenient store location

With correlations, we get a statistic called r, the Pearson correlation coefficient, which measures the strength of the relationship between the scores of one item and another. An r of 1.0 means a perfect, positive correlation and -1.0 reflects a perfect, negative correlation. An r of 0.0 means no correlation at all. (Its square, r², tells us how much of the variance in one item is accounted for by the other.)
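As a sketch of the mechanics (the column names and responses below are hypothetical), deriving importance amounts to correlating each attribute’s satisfaction scores with overall satisfaction:

```python
import pandas as pd

# Hypothetical responses, all on the 0-10 satisfaction scale
df = pd.DataFrame({
    "overall_satisfaction": [9, 7, 8, 4, 10, 6, 5, 8],
    "wait_time":            [8, 6, 8, 3, 10, 5, 4, 7],
    "store_location":       [7, 7, 6, 5,  8, 6, 6, 7],
})

# Derived importance: each attribute's Pearson r with the dependent variable
importance = (
    df.corr(method="pearson")["overall_satisfaction"]
      .drop("overall_satisfaction")
      .sort_values(ascending=False)
)
print(importance)
```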

Tangential note: have you noticed how fond statisticians are of naming obscure methods after themselves?

In a key drivers analysis, the higher the correlation between a specific attribute and overall satisfaction, the more influence that attribute has on satisfaction, and thus the more important it is. Notice that we never have to ask “how important is…”; the derived importance tells us everything we need to know. But that’s only half of the equation.

As a result of the question structure, we also get explicit satisfaction metrics on each of the individual attributes. This data tells us how well we perform on each attribute. The resulting output looks something like this:

[Graph: derived importance (correlation with overall satisfaction) on the horizontal axis plotted against attribute performance on the vertical axis]

In our example, “professional appearance” and “wait times” are the most important attributes; they have the highest correlations to overall satisfaction. Notice how their correlations are higher than 0.750: a high correlation indeed!


Now compare those attributes to “store location,” scoring just over 0.520. The correlation is still positive, but not nearly as powerful as the first two examples. Remember, derived importance measures the importance of individual attributes in relative, not absolute, terms.

The second part of our analysis shows that our store’s employees look very professional. In fact, it’s the highest-performing attribute of all (while importance is viewed on the X, or horizontal, axis, performance is viewed on the Y, or vertical, axis).


This means that our store does well on this important attribute and it should be considered a core strength. This is not the case with the other important attribute, wait times, however. Our store gets the lowest performance rating on that very important feature.
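Here is a sketch of how that quadrant logic might be coded, assuming we have already computed an importance score (the correlation with overall satisfaction) and a performance score (the mean rating) for each attribute; the numbers and the median cut points are illustrative:

```python
import pandas as pd

# Hypothetical importance (correlation with overall satisfaction)
# and performance (mean satisfaction rating) for four attributes
scores = pd.DataFrame(
    {"importance": [0.78, 0.76, 0.52, 0.61],
     "performance": [8.9, 5.1, 7.5, 6.0]},
    index=["professional appearance", "wait time",
           "store location", "ease of finding things"],
)

# Split each axis at its median: high importance + high performance
# is a core strength; high importance + low performance is a priority
hi_imp = scores["importance"] >= scores["importance"].median()
hi_perf = scores["performance"] >= scores["performance"].median()

print("Core strengths:", list(scores.index[hi_imp & hi_perf]))
print("Priorities:", list(scores.index[hi_imp & ~hi_perf]))
# -> Core strengths: ['professional appearance']
# -> Priorities: ['wait time']
```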


From our survey results, management can quickly see that resources should be directed toward reducing wait times (more cashiers), speeding transaction times (again, more cashiers and registers), and helping customers find things (better signage in the store).


The Bottom Line: We’ve precisely and reliably identified the few specific items that need to be prioritized, as improvement in satisfaction with those particular things will have a direct and measurable impact on overall satisfaction.
