How cognitive biases can influence market research results
Let’s assume you are looking to invest some money and come across two investment options:
A portfolio that promises an 80% chance of earning $4,000, but a 20% chance of no return
A portfolio with a 100% chance of earning $3,000
Which one would you choose? If you are like most people, you would choose option two. However, when we analyze each of the prospects, we realize that we are better off with option one: it has an expected value of 0.8 × $4,000 = $3,200, versus a certain $3,000 with the second portfolio.
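The arithmetic behind that comparison is a simple expected-value calculation; here is a minimal sketch in Python (the function name and data layout are illustrative, not from the original post):

```python
def expected_value(outcomes):
    """Expected value of a gamble, given (probability, payoff) pairs."""
    return sum(prob * payoff for prob, payoff in outcomes)

# Option 1: 80% chance of $4,000, 20% chance of nothing.
option_one = expected_value([(0.80, 4000), (0.20, 0)])  # 3200.0

# Option 2: a guaranteed $3,000.
option_two = expected_value([(1.00, 3000)])  # 3000.0

print(option_one > option_two)  # the risky portfolio wins on expected value
```

Even so, most people pick the sure thing, which is exactly the gap between expected-utility theory and actual behavior that the rest of this post explores.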
We all want to believe that our decisions are rational and based on a careful evaluation of all possible outcomes, especially when it comes to money. In reality, though, emotions, intuition and cognitive biases play a bigger role in our decisions than we care to admit, both in financial decision-making and in market research. Because of these biases, we often deviate from the conventional economic model of decisions based on expected utility: we like the idea of winning $100 and dread the idea of losing $100, and not because $100 has a major impact on our wealth. We simply dislike losing more than we like winning.
B2B audiences are a hot target for market research and marketing communications, which can make it difficult to get their attention for surveys.
Market Strategies has conducted research among hard-to-reach B2B audiences—including IT managers, physicians and executives—for more than 25 years. In our experience, securing healthy response rates boils down to:
Providing just compensation
Managing respondent fatigue
Following up appropriately
To help, I’ve identified ten ways to boost response rates. Many of the suggestions increase costs, and some are not appropriate for certain study designs, so the approach that best balances response rate, budget, timing and analytic needs depends on your research requirements.
I’ve been known to ask a lot of questions. Just ask my team at work, my daughter or my wife, for that matter. (Turns out she doesn’t appreciate being subjected to the Socratic method…go figure.) But even I’ve learned that there are diminishing returns when it comes to asking questions—the 10th one just makes them mad.
And so it is with survey research. Dr. Reg Baker, AKA the “Survey Geek,” has posted on the topic no fewer than nine times in the last few years, quoting many definitive sources including Galesic and Bosnjak’s important study on the impact of survey length on data quality in web studies.
My visit to the ESOMAR Congress in Amsterdam provoked plenty of thought about how we can accommodate different kinds of respondents by offering different survey experiences. Historically, we have been a “one size fits all” industry: survey design templates are pretty much the same regardless of who is being surveyed, the topic, or the study goals.
I can’t beat us up too badly for having one size fits all. After all, Office Depot has the same website for business buyers and consumers. The desktop of Windows Home Edition looks pretty much the same as Windows Enterprise Edition minus a few bells and whistles (and Mac OS X looks exactly the same, as there is only one edition). Companies produce one size because it’s cost efficient, which certainly benefits them but also benefits us as consumers. So it goes with research – if we have too many survey SKUs, as it were, prices will rise for the sponsoring client and incentives will shrink for panel members.
However, recent experimentation with game-ish surveys opens up a path to – and a conversation about – a different model for surveys. Most of those pursuing game-ish surveys seem to recognize that this is not the next size that fits all, but rather another tool in our kit. I have to believe that having more options for how surveys can be designed can only benefit us in the long run.