The total and utter failure of polling to predict the outcome of yesterday's UK general election is another example of the shortcomings of conventional research.
The polls predicted that the Conservative and Labour parties would be neck and neck on election day, leading to a "hung parliament" where neither big party has a majority. Multiple research companies ran multiple polls and they all told the same story. Nice work if you can get it, right?
Well, it turns out that what must have been thousands, if not millions, of pounds of quant research was a complete waste of money. The Conservatives did much better than the polls suggested and won an overall majority. So, what can we learn from this?
The limits of rational research
Back in 2010 I posted on the limits of rational research, writing that "Asking consumers what they 'think' is flawed, as it engages explicit thought, not auto-pilot reaction." As Neil Davidson says in an interesting Marketing article, "People don't really know what they're going to do, and aren't even always sure what they've already done." He gives some great examples of this:
- 70% of people who stated their affinity for and intention to fly British Airways were actually flying EasyJet.
- 60% of people who stated they didn't and wouldn't eat at McDonald's proved themselves wrong when they kept their own behavioural diaries.
Use more "real world" research
Rather than asking people directly and rationally about brands, products and services, there are other, more real world techniques you can use. For example, on food projects we often ask people to keep diaries in which they write down and photograph what they actually do. Ethnography is another powerful tool: you follow and video people in their everyday lives. Doing this on a coffee shop project showed that the cup and saucer a coffee was served in were as important as the coffee itself, something that hadn't come out in qual research. Digital technology also means you can use "social listening" to understand what people are really saying to their friends, not what they say when sitting behind a mirror, being paid to answer questions.
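To make "social listening" concrete, here is a minimal Python sketch of the kind of keyword pass such tools run over collected posts. Everything in it (the posts, the keyword lists and the scoring rule) is invented for illustration; real tools use far richer language analysis and live data feeds.

```python
import re
from collections import Counter

# Toy "social listening" pass: scan already-collected posts for crude
# sentiment cues. Posts and keyword lists are invented for illustration.
POSITIVE = {"love", "great", "brilliant"}
NEGATIVE = {"awful", "hate", "never"}

posts = [
    "Love the flat white at my local coffee shop",
    "That station coffee was awful, never again",
    "Great cup and saucer, felt like a proper cafe",
]

def classify(post: str) -> str:
    """Label a post positive, negative or neutral from keyword cues."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

print(Counter(classify(p) for p in posts))
# Counter({'positive': 2, 'negative': 1})
```

The point is less the code than the data source: these are unprompted comments, not answers given behind a mirror.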
Implicit quant research
There are also things you can do to make your quant research more real-life. In the pilot of our IcAT study (Iconic Asset Tracker), which measures the impact of brand properties, we used "implicit research", as I posted on here. This involves asking people to respond quickly to image questions, so that they answer with their auto-pilot, system 1 thinking rather than deliberating. This should give a truer read on what is really in their heads. I wonder whether it would have predicted the election result more accurately.
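To show the basic mechanic, here is a hedged Python sketch of a speeded-response task. It is not the actual IcAT method: the questions and the 1.5-second cut-off are assumptions made up for this example.

```python
import time

# Illustrative speeded-response task: answers given quickly are treated as
# auto-pilot "system 1" reactions, slow ones as deliberate "system 2" thought.
# The questions and cut-off below are invented, not the IcAT method itself.
QUESTIONS = [
    "Does this logo feel 'premium'? (y/n): ",
    "Does this logo feel 'trustworthy'? (y/n): ",
]
CUTOFF_SECONDS = 1.5  # assumed boundary between fast and deliberate answers

def run_speeded_task(questions: list[str]) -> list[tuple[str, str, float]]:
    """Ask each question, time the response, and keep only fast answers."""
    kept = []
    for question in questions:
        start = time.monotonic()
        answer = input(question).strip().lower()
        elapsed = time.monotonic() - start
        if elapsed <= CUTOFF_SECONDS:
            kept.append((question, answer, elapsed))
        # Slow answers are discarded: by then explicit thought has kicked in.
    return kept

if __name__ == "__main__":
    for question, answer, elapsed in run_speeded_task(QUESTIONS):
        print(f"{elapsed:.2f}s  {answer!r}  {question}")
```

In a real study the stimuli would be images of the brand properties themselves, and the cut-off would presumably be calibrated rather than fixed at an arbitrary value.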
In conclusion, the election shows just how unreliable conventional research can be. It is often used the way a drunk uses a lamppost: for support, not illumination. My suggestion: do less conventional research and more real world studies instead. Even better, prototype your ideas and try them out for real on a small scale, learning and then refining.