By Angela Burtch, Vice President, Merrill Research
In Part 1 of this series we touched on a few tips on how to ensure the integrity of data when conducting a market research study.
A lot has changed in the past few years, and we thought it was time to add more content. Bots, fraudsters, and other illegitimate respondents are increasingly common and increasingly able to circumvent the safety nets we’ve cast to catch poor data. Everyone in the industry is working hard to mitigate bad survey data, so with this blog we’d like to continue offering suggestions to that end.
Before you even see a respondent in a survey, make sure your sample provider has done all the pre-conditioning needed to ensure participant integrity. Your respondents need to be squeaky clean, and these are just a few of the myriad factors that impact respondent quality. Here are some questions to ask your vendor:
- How are they sourcing their sample? Proprietary panel? APIs? If so, which ones? Affiliate publishers? River/intercept sampling? Some other way? Make sure you know the source.
- If panels are used, what tools are used in the registration process to mitigate fraud? Digital fingerprinting? Third-party validation? Double opt-in validation? IP validation?
We could write an entire series of blogs just on this first lather. Go to the sample provider’s website and see what they say they do!
Even when the legwork is done up-front by your sample provider to vet your respondents, as researchers we are responsible for ensuring that quality checks continue post-recruitment.
So, it’s time to rinse and get rid of anything that wasn’t taken out in the suds by your sample vendor. How do you do that? Make sure your questionnaire can stand up to fraud.
It used to be that we would sprinkle a “gotcha” question or two into the respondent screening to rinse out the duds. Not anymore. For a typical 10-minute survey we will include all types of trap questions, starting in the screening section and running all the way through the main questionnaire, and we use elaborate decision trees to decide what flags a respondent and what terminates them outright.
Here are a few tips that we highly recommend you incorporate into your next questionnaire:
✔ Red Herring questions. Most researchers know about these. In a branding study, for example, you could include a fake brand in an aided awareness list and flag or terminate respondents who select it. Be careful with this one: do your due diligence to ensure that your red herring brand is not too similar to a real-life brand. If the subject matter is awareness of social media sites, an artificial brand like “Tweeter” could be selected simply because of its close similarity to “Twitter”. Always remember, you don’t want to entrap or confuse respondents; you just want to rinse away the unengaged and the potentially bad data they leave behind.
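If your survey platform exports responses, a red herring flag is easy to automate downstream. A minimal sketch in Python, assuming a hypothetical decoy name (“Brandexa”) and a list of the brands a respondent selected in the aided awareness question:

```python
def flag_red_herring(selected_brands, decoy="Brandexa"):
    """Return True if the respondent 'recognized' the fake brand.

    `decoy` is a hypothetical placeholder; substitute the fake
    brand used in your own aided-awareness list.
    """
    return decoy in selected_brands
```

A flagged respondent can then feed into the same decision tree as your other trap questions rather than triggering an immediate terminate.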
✔ Skill testing questions. These are an actual legal requirement for sweepstakes in Canada. A simple mathematical problem slows respondents down and adds another level of security. Don’t ask respondents to solve a differential calculus problem (unless they are engineers!); instead, ask something simple like (10 x 10) – 5. BTW – that’s 95. Give yourself a pat on the back if you got it right. This may not work against AI-created bots, but it can help detect human or click-farm respondents. Stay tuned for my next blog on that.
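Scoring the skill-testing answer is trivial to automate. A sketch, assuming the (10 x 10) – 5 example above and a free-text numeric answer box:

```python
def passes_skill_test(raw_answer, expected=95):
    """Check a free-text answer to (10 x 10) - 5; non-numeric input fails."""
    try:
        return int(str(raw_answer).strip()) == expected
    except ValueError:
        return False
```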
✔ Attention check questions. These work really well to wash out speeders. In this instance, you tell a respondent to pick a specific response option. For example: “We want to make sure you are taking your time to read all the survey questions; for this question, please select ‘Not Very Familiar’ below.” If they didn’t follow the instruction, what else were they speeding through? Rinse them out of your hair.
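Checking an attention question reduces to a single comparison. A sketch, using the “Not Very Familiar” instruction from the example above:

```python
def fails_attention_check(selected, required="Not Very Familiar"):
    """Return True if the respondent ignored the explicit instruction."""
    return selected.strip() != required
```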
✔ Look at the open ends. If you have any open-text questions in your survey, these often provide tell-tale indicators of the junk you want to flush out: nonsense answers (FDSKALFDAKJFSA), obvious copy-and-paste repetition, or out-of-context answers (e.g., you ask about car brands and the responses are about bicycles). Even if you didn’t budget time or expense for coding an open end, including one purely to monitor data quality can be a good idea. Example: “In a few words, please type below the kind of movies you like to watch.” Limit the answer to 50 characters so you aren’t using up valuable survey-taking time, but you can still gauge whether the respondent is actually present and engaged.
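Some of these open-end checks can be automated before a human ever reads the verbatims. A heuristic sketch (the length and vowel-ratio thresholds are assumptions, so treat hits as flags for review, not automatic terminates):

```python
import re

def looks_like_junk(text):
    """Flag open-ends that are blank, letter-free, or vowel-poor.

    Keyboard mashing like 'FDSKALFDAKJFSA' tends to be vowel-poor,
    while real words contain roughly 35-40% vowels.
    """
    t = text.strip()
    if len(t) < 3:          # effectively blank
        return True
    letters = re.findall(r"[A-Za-z]", t)
    if not letters:         # digits/punctuation only
        return True
    vowel_ratio = sum(ch.lower() in "aeiou" for ch in letters) / len(letters)
    return vowel_ratio < 0.25
```

A heuristic like this won’t catch copy-pasted or out-of-context answers; those still need eyes on the verbatims.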
✔ Just way too positive. If, in a battery of questions, everything is “Extremely Favorable”, “Extremely Important”, and so on, consider that you might have an over-endorsing respondent seeking to please the research sponsor and stay in the respondent pool. This is somewhat similar to straight-lining, which we addressed in this previous blog.
✔ Speaking of straight-lining answers, make it harder to do. Instead of long, boring batteries of attributes to rate on a scale, consider mixing up your question structure. How about a drag-and-drop exercise asking respondents to sort the attributes under Like, Dislike, and Neutral emojis instead of having them cruise down a page clicking numbers?
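Both over-endorsement and straight-lining are easy to screen for once a grid’s answers are tabulated per respondent. A minimal sketch (the five-item minimum and top-box code of 5 are assumptions; tune them to your own grids and scales):

```python
def is_straight_liner(ratings, min_items=5):
    """Flag a rating grid where every answer is identical."""
    return len(ratings) >= min_items and len(set(ratings)) == 1

def is_over_endorser(ratings, top_code=5, min_items=5):
    """Flag a grid where every answer is the top-box response."""
    return len(ratings) >= min_items and all(r == top_code for r in ratings)
```

Short grids are excluded on purpose: agreeing with three items in a row is unremarkable, while ten identical answers rarely are.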
✔ Look for contradictions. Do the survey answers contradict themselves or do they make sense? Often this means deciding to ditch a respondent on a case-by-case basis. For example, if someone answers that they drink beer once a month and subsequently answers that they buy beer daily, is this valid? Cross-check with some of the other tips listed above.
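Contradiction checks like the beer example can be encoded as simple ordinal comparisons. A sketch with hypothetical frequency codes (higher = more often; the one-step tolerance is an assumption, since someone who drinks monthly might plausibly buy weekly to stock up):

```python
# Hypothetical frequency codes; map your survey's answer options here.
FREQ = {"never": 0, "monthly": 1, "weekly": 2, "daily": 3}

def contradicts(drink_freq, buy_freq, tolerance=1):
    """Flag respondents who claim to buy far more often than they drink."""
    return FREQ[buy_freq] - FREQ[drink_freq] > tolerance
```

As with the other traps, a hit here is best reviewed case by case rather than terminated automatically.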
Yes, repeat. Repeat with every prospective sample partner you are considering. Repeat with every questionnaire you design. Lather, rinse, repeat.
Need help with designing research mousetraps? Let Merrill Research guide you through your next research study!