Presidential pollsters got it wrong: what are the implications for consumer research?
If political polls were so off, might consumer research be equally flawed?
Most of the major presidential polls were off – some way off. And while Joe Biden appears to be headed for victory, as predicted, it’s by margins considerably smaller than expected. In short, the pollsters blew it, and in a bigger way than in 2016.
Every day at Search Engine Land our inboxes are filled with story pitches built on consumer research: surveys about intended holiday spending, about device usage, about privacy and many other topics. These surveys are fielded with varying degrees of rigor, but most claim to be sound research. Brands such as Harris, IPSOS and Forrester are often used to bolster the credibility of findings, which are then used for PR and marketing purposes.
Political and consumer surveys use similar methodology
The “polling debacle of 2020,” as it’s now being called, made me think about the validity of the consumer research we see every day. How related are the two? And is consumer research similarly “off” in many (or even most) cases?
A late October ABC/Washington Post survey gave Biden a 17-point lead in Wisconsin; he won by less than a point. South Carolina Democratic senatorial candidate Jaime Harrison was reportedly trailing by two points, within the margin of error, on the eve of the election. Republican incumbent Lindsey Graham won by 15 points. And there are many other such examples.
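The “margin of error” these polls cite covers only sampling error, not the panel bias and non-response problems that produced misses like the ones above. A quick sketch of the standard formula makes the gap concrete (the sample size below is hypothetical, and real polls also apply design effects this simple version ignores):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a simple random sample of size n.

    p = 0.5 is the worst case (maximum variance); z = 1.96 corresponds
    to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical n = 1,000 poll claims roughly +/- 3 points of sampling error.
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} points")  # prints "+/- 3.1 points"
```

Note that halving the margin requires quadrupling the sample, and none of this accounts for a sample that is unrepresentative to begin with, which is how a race “within the margin of error” can still miss by double digits.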
The methods used to conduct online political polling are basically the same as those typically used for consumer market research: panels and samples that are “census weighted” to reflect age, gender, education, marital status and other demographic variables.
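“Census weighting” (post-stratification) can be sketched in a few lines. All numbers below are hypothetical and illustrate a single demographic variable; real weighting schemes balance across many variables at once:

```python
# Hypothetical shares of each education group in the census (population)
# versus in the survey panel. Online panels often skew more educated.
population_share = {"college": 0.35, "no_college": 0.65}
sample_share = {"college": 0.55, "no_college": 0.45}

# Each respondent's weight = their group's population share divided by
# its sample share, so overrepresented groups count for less.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical raw support for some choice within each group.
support = {"college": 0.60, "no_college": 0.40}

# Unweighted estimate: groups counted at their (skewed) sampled shares.
unweighted = sum(sample_share[g] * support[g] for g in support)

# Weighted estimate: groups rebalanced to their census shares.
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.2f}")  # prints "unweighted: 0.51"
print(f"weighted:   {weighted:.2f}")    # prints "weighted:   0.47"
```

The catch, for political polls and consumer surveys alike, is that weighting can only correct for the variables you weight on; if the people willing to join a panel differ from everyone else in ways the demographics don't capture, the weighted number is still off.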
Surveys and real-world behavior often diverge
I recently received a story pitch, based on consumer research, that claimed, “Roughly 1 in 3 online shoppers use their voice assistants to make purchases at least once a month.” While I’ve searched for products with virtual assistants (on smartphones and smart speakers), I’ve never bought anything via virtual assistant, nor do I know anyone in my extended network who has.
Compare the “1 in 3” claim with a report in 2018, based on internal Amazon data, asserting that only 2% of Alexa users have made a voice purchase. While that number could have grown in two years, especially during the pandemic, it’s extremely unlikely — if not impossible — it would have reached 33% of online shoppers.
The survey report indicated it polled 1,000 “frequent online shoppers.” Upon further questioning, the survey source revealed that respondents were predominantly Millennials and mostly re-ordering or completing saved carts, initiated presumably from other devices. While this makes it slightly more plausible, the numbers are still way too high.
Another example: surveys over the years have shown that a majority of consumers prefer to buy from brands that support social and political causes they agree with. However, behavioral data often show little relationship between these survey-based attitudes and actual purchase patterns.
I could go on.
Look behind the curtain
Surveys are used by brands and marketers for different purposes. Some use them for market validation and to make product and pricing decisions. Much more often they’re used for PR and content marketing. In the latter case, methodological rigor is somewhat less important. Whether 60% or 70% of consumers intend to spend more money online this holiday season is of limited consequence — consumers will be spending more. But when product and go-to-market decisions are based on assumptions from flawed data, that can be a problem.
No single survey should be treated as conclusive or even entirely accurate. And we should be very cautious when small samples are extrapolated to the entire population on the basis of the claim that they’re “census balanced.” There are also phantom data points that get repeated and take on an aura of truth because they’re so widely circulated. This was the case with the now infamous projection that “50% of all searches will be voice searches by 2020,” which is both a misstatement and wrongly attributed. (I still see this cited.)
People need to look behind the curtain and not blithely accept such things at face value. I’ve seen many “20 stats about [topic]” roundups that are full of inaccurate information or incorrect source citations. Yet this doesn’t mean we should disregard market research. Instead, surveys should, at best, be seen as “directional” and potentially indicative of broader consumer sentiment — but not as 1:1 representations of reality.
Consumer surveys are a useful tool, but they must be reality checked with behavioral data for a more complete and accurate picture of consumer activity. Going forward, we should receive any individual piece of data or survey with healthy skepticism and a clear understanding of the underlying methodology.
Opinions expressed in this article are those of the guest author and not necessarily those of Search Engine Land. Staff authors are listed here.