The Risks of AI in Primary Market Research Studies
By Angela Burtch, Vice President, Merrill Research
4-Minute Read
Artificial Intelligence (AI) has revolutionized many industries, including market research. AI can automate data collection, analyze large sets of data, and often provide insights faster than more traditional methods. However, using AI in primary market research studies is not without risks. Here are some potential pitfalls to consider:
1. Data bias in analysis
AI systems learn from the data they are trained on. If the training data is biased, the AI system will also be biased, leading to skewed results. This is particularly concerning in market research, where unbiased data is critical for accurate insights.
In addition, the results may be biased by your very query. I recently asked ChatGPT-4 “why door-to-door research is superior to an online methodology in market research”. The generated content argued, convincingly, that door-to-door is superior to online (human interaction, eliminating “bad actors”, etc.), with no mention of the obvious drawbacks: time, cost, zip code bias, and so on.
2. Trust your hunches
While AI can process raw data faster than humans, it lacks the ability to understand context and nuance in the same way humans do. This could lead to misinterpretation of data and inaccurate conclusions.
Sometimes we collect data that doesn’t intuitively square with what we believe to be true market conditions. What do we do? We compare against previous research studies we’ve completed, conduct secondary research to see whether the results are truly corroborated in the industry, and examine the data forensically to ensure there are no data integrity issues – often AI-generated fraud. We cover that topic in a separate blog.
3. Dependence on technology
Over-reliance on AI could erode critical thinking and problem-solving skills among researchers. It’s important to remember that AI is a tool to aid market research, and it still needs a human hand.

4. Who owns your AI generated marketing research content?
In a working paper from the National University of Singapore, the author notes that “knowledge worker”, as coined by Peter Drucker, is a title for people who “think for a living” and earn their wages as analytical problem solvers. Clearly, we are increasingly using AI for both routine and non-routine problem-solving. What will become of the knowledge workers? And what is their ownership of the content? Questions remain about who owns content produced by generative AI and what will be required to prove that content was produced by a human rather than AI. [1]
This has significant implications for companies conducting market research that includes sensitive individual or corporate information. Who actually owns this data, and how and with whom will it be shared? We don’t have NDAs with AI. We expect this will continue to keep IP legal experts busy and market researchers understandably concerned about legal and reputational consequences for the foreseeable future.

While AI has the potential to greatly enhance primary market research studies, it’s important to be aware of these risks and take steps to mitigate them. As with any tool, the key is to be cautious and judicious, and to apply HI – Human Intelligence.
Do you have research questions that need answers delivered by skilled “knowledge workers”? Please connect with us to discuss solutions.
[1] Chesterman, Simon, AI-Generated Content is Taking over the World. But Who Owns it? (January 11, 2023). NUS Law Working Paper No. 2023/002, Available at SSRN: https://ssrn.com/abstract=4321596 or http://dx.doi.org/10.2139/ssrn.4321596