Global marketers conduct much of their market research for the same reason other marketers do: to cover their backsides.
In case anyone ever asks, they can whip out the professionally executed study replete with significant significance tests. It shows incontestably (at p<.05) that the reason they altered the packaging, the scent, and the color of the shampoo for the Nicaraguan market was that consumers there were significantly different from those in neighboring Costa Rica.
Never mind that the additional costs of making, storing, and selling the unique Nicaraguan shampoo wipe out the gross margin.
The global market research industry spends inordinate amounts of time and money demonstrating that national boundaries are an excellent segmentation variable.
But what if they’re not?
What if consumers are actually more similar across countries than the research shows?
More similar than the research shows? How can consumers be more similar than the research shows? If they were more similar, the research would show it, wouldn’t it?
Actually – brace yourself – it would not.
If you look for differences, you'll find them. Similarities, on the other hand, are elusive.* Our current research methods are biased toward demonstrating differences rather than similarities. Most market researchers know this, but they don't shout it from the rooftops.
So here's the secret.
If you wanted to test the hypothesis that consumers in Michigan are different from those in bordering Ontario, you could design a study that would give you the answer. In fact, if you had a large enough sample on both sides of the border, you'd know the answer in advance: it would almost always show that the two populations are different. That is how significance tests work: with enough data, even trivially small differences register as statistically significant.
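To see that concretely, here is a minimal simulation, using hypothetical numbers (a commercially meaningless true gap of 0.05 points on, say, a 7-point purchase-intent scale, with a standard deviation of 1), showing how a huge sample makes a trivial difference come out "statistically significant" on a two-sample z-test:

```python
import math
import random

def z_stat(a, b):
    """Two-sample z statistic for the difference in means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(42)

# Hypothetical populations: a true gap of only 0.05 points,
# far too small to matter commercially, but 100,000 respondents per side.
michigan = [random.gauss(5.05, 1.0) for _ in range(100_000)]
ontario  = [random.gauss(5.00, 1.0) for _ in range(100_000)]

z = z_stat(michigan, ontario)
print(f"z = {z:.2f}, significant at p < .05: {abs(z) > 1.96}")
```

With samples that large, the z statistic lands far beyond the 1.96 cutoff, so the study would "prove" the two markets differ, even though the gap is too small to act on.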
The significance test tells you that, if there were truly no difference, a gap this large would turn up by chance less than one time in twenty. And that is taken as pretty strong evidence of difference. But suppose you wanted to examine whether consumers in Michigan and Ontario are the same or similar. What test would you use?
You’re out of luck. The only tests are ones designed to show differences.
Okay, but surely the lack of difference on a statistical significance test is evidence of similarity, right?
Sorry, out of luck again (you can't demonstrate the absence of difference). A non-significant result only means that no difference was detected in this particular study, perhaps because the sample was too small or the measurement too noisy. And there is no guarantee you would not find differences if you ran the study a few more times.
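The flip side can be simulated too. In this sketch, again with hypothetical numbers, the two populations really do differ by a modest 0.2 standard deviations, but with only 50 respondents per group most studies fail to reach significance, so any single "no difference found" result proves nothing, and re-running the study can flip the verdict:

```python
import math
import random

random.seed(1)

def detection_rate(n, diff, reps=1000):
    """Fraction of simulated studies (n per group, true mean gap
    `diff`, sd = 1) that reject 'no difference' at p < .05."""
    hits = 0
    for _ in range(reps):
        a = [random.gauss(diff, 1.0) for _ in range(n)]
        b = [random.gauss(0.0, 1.0) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        z = (ma - mb) / math.sqrt(va / n + vb / n)
        if abs(z) > 1.96:
            hits += 1
    return hits / reps

# A real but modest difference (0.2 sd) with only 50 people per group:
rate = detection_rate(n=50, diff=0.2)
print(f"Share of studies that find the (real) difference: {rate:.0%}")
```

Most runs of this underpowered study miss a difference that genuinely exists, which is why a single non-significant result cannot be read as evidence of similarity.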
One outcome of this asymmetry is that findings of cultural differences abound, while similarities cannot even be tested. In other words, consumers may be far more similar than we think across national and cultural boundaries.
Now think about how much time, effort, and money goes into localizing products, positioning, prices, and so on, based on findings of significant difference in market research studies.
Managers are generally not held accountable for the wasted effort of localization – at least they tried, the reasoning goes. And they had the market research to support it!
*Weijters, Bert and Niraj Dawar, "Testing for Marketing Universals: A Practical Methodological Proposal and Application to a Pan-European Consumer Survey," Working Paper.
See also: Dawar, Niraj and Philip Parker (1994), "Marketing Universals: Consumers' Use of Brand Name, Price, Physical Appearance, and Retailer Reputation as Signals of Product Quality," Journal of Marketing, 58 (2): 81-95.
4 comments:
An excellent piece, Niraj, uncovering one of the marketing profession's darkest secrets: that hardly anyone who works in it understands the mountains of research commissioned annually.
I had many issues with my product development colleagues, who were forever testing new, cheaper ingredients in the product against the existing recipe. If the research failed to show a difference at the 95% confidence level, which was the case most of the time, they would trumpet the research as proving that there was no difference between the two recipes. Which, as you correctly point out, cannot be proven. Absence of proof of difference is not the same as proof of similarity. This of course is how incremental product degradation happens.
I was lucky in that my career involved several years working in and then running a major packaged goods market research department. When I was VP marketing, I had more experience of market research than the rest of my department put together, so I made sure I sat in every research presentation just to make sure people didn't run off with polar opposite conclusions to the real learnings.
As a marketer, you ultimately stand or fall by the hit rate of the initiatives you champion. If managers want to improve that hit rate, they'd better make sure they really understand research methodologies, design and statistics, because virtually everyone they work with doesn't.
Great post, Niraj, and a terrific reminder that a metric isn't necessarily important just because it's measurable.
Hi Niraj,
I always enjoy reading your posts and this one is particularly thought-provoking, but please clarify something for us: Are you arguing against localization and in favor of standardization? And to what degree, or under what circumstances, is localization a good idea? These days, many marketers and gurus trumpet about increasingly finer segmentation and "micro-marketing", but your post makes us think twice, particularly when one considers the costs of customization wiping out the incremental profits. I hope you will post again about how and when we should localize and customize. I look forward to more discussion on this topic!
Cheers,
Nigel Goodwin
@John: you're right to point to the incremental changes that add up to large changes over time -- all justified on the basis of inaccurate conclusions from inconclusive significance tests!
@Jay: thanks Jay! You're right -- and you could add that statistical significance is not necessarily strategic significance.
@Nigel: segmentation remains important (crucial!) but the dimension of national boundaries may not be a great segmentation variable. There are so many other segmentation bases that cut across geography and culture. I like the idea of writing more posts on this topic -- I'll post them later in the year.
Cheers - Niraj