Dead Data Tell No Tales: Can Your Survey Results Survive?

By David M. Schneer, Ph.D.

Sour Sample, Part 2: How to Prevent DOA Data

This blog entry picks up where our last post left off. If you haven't read it yet, you can find it here: When Data Goes Bad: How Sour Survey Samples Nearly Ruined the Research Industry

Yeah, we know. The subject of data cleaning is a real yawner. But dirty data done dirt cheap can kill any study. Good data starts with a well-crafted survey – that part is absolutely critical. Poorly constructed surveys – not so much the respondents themselves – are mostly to blame for the rotten data that results. A clever survey, thoughtfully designed with the respondent experience in mind, is everyone's primary defense against deadly bad data. But that is a topic for another blog.

In this post, we seek to heed the words of Donato Diorio, renowned data scientist, who posited: “Without a systematic way to start and keep data clean, bad data will happen.”

Bad data is no good. We at Merrill Research identify it by evaluating individual responses and marking "flags." When people ask what kind of business we are in, we tell them we're in the Red Flag business. Depending on the length and complexity of the survey, we may program as few as three or as many as eight or more flags. We then monitor respondent flags throughout fielding and replace bad data along the way.

Some of our flags are old-school – things everyone should already be looking for. The power is in leveraging a variety of flags in clever ways to identify the zombies. Here are just a few flags we use to ensure our data are sound:

  1. Speeders: Individual respondents must take at least 40% of the average overall length of interview (LOI) to complete the survey. Of course, there are exceptions to this rule – one needs to consider the timing impact of skip patterns, open-ends, image loading time, etc. – but this is a good baseline.
  2. Straight-Liners: We check whether a respondent straight-lines (gives the same response to) a series of rating-scale questions. We run a distribution of the response codes to determine what percentage of straight-lining will be used to remove a respondent (e.g., terminate a respondent who straight-lined a certain percentage of the statements).
  3. Inconsistent Responses: We may ask in the screener which brands respondents have consumed in the past three months. Later in the survey, a follow-up question may ask exactly when they last consumed that brand. If a respondent answers that it was longer than three months ago, we flag them. Another way to use this method is to craft survey questions that test a respondent's knowledge of the topic at hand. If they cannot accurately answer a question designed to test basic knowledge of that topic, they earn yet another flag.
  4. Responses to Verbatim Questions: We review the open-ends to verify that the respondent is answering the survey thoughtfully. Short answers or garbage responses are flagged.
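The four flags above can be sketched in code. The snippet below is a minimal illustration, not our production system: the record field names (`duration_sec`, `grid_answers`, and so on), the 90-day recency cutoff, and the 90% straight-lining threshold are all hypothetical stand-ins you would tune per study.

```python
SPEEDER_THRESHOLD = 0.40  # flag completes faster than 40% of the average LOI

def flag_respondent(resp, avg_duration_sec, straightline_cutoff=0.90):
    """Return the set of red flags raised by one (hypothetical) respondent record."""
    flags = set()

    # 1. Speeder: finished in less than 40% of the average interview length.
    if resp["duration_sec"] < SPEEDER_THRESHOLD * avg_duration_sec:
        flags.add("speeder")

    # 2. Straight-liner: one response code dominates too large a share
    #    of the rating-grid answers.
    grid = resp["grid_answers"]
    if grid:
        top_share = max(grid.count(code) for code in set(grid)) / len(grid)
        if top_share >= straightline_cutoff:
            flags.add("straight_liner")

    # 3. Inconsistent: claimed past-3-month use in the screener, but later
    #    reports last consumption longer ago than that (assumed 90-day cutoff).
    if resp["claimed_past_3_months"] and resp["days_since_last_use"] > 90:
        flags.add("inconsistent")

    # 4. Garbage open-end: verbatim text too short or too low-effort.
    text = resp["open_end"].strip()
    if len(text) < 10 or len(set(text.lower())) <= 3:
        flags.add("garbage_open_end")

    return flags
```

In practice, flags like these are tallied per respondent throughout fielding, and anyone exceeding an agreed flag count is removed and replaced.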

We may never achieve 100% confidence that every respondent is genuinely engaged with our survey instrument, but without stringent data-cleaning measures, conclusions drawn from the data become increasingly questionable – think of the "IT professional" who just finished your survey but was actually a 15-year-old in New Jersey. At Merrill Research, we employ the most effective measures to make sure our data are as clean as possible, and we are always on the lookout for the latest methods to ensure data quality.

We all know data research isn’t sexy, but neither is staking your reputation on bad results. Let us help you keep your data from getting zombified.

Merrill Research, Experience You Can Count On