Surveys

The Blueprints of a Proper Questionnaire

November 5, 2018 by Rich Stimbra

By David M. Schneer, Ph.D.

All too often, we hear about research efforts—DIY or otherwise—that did not yield helpful results. When digging into those efforts to identify what went wrong, we find a combination of contributing factors—perhaps the worst of which is poor questionnaire design. Nothing foreshadows the early death of a research project better than a crummy questionnaire.

And while some presume the questionnaire is the first phase of a project, nothing could be further from the truth.  Every marketing research project starts with a clear definition of the marketing and research objectives.

For example, the marketing objective may be to sell more Chardonnay wine and the research objective may be to determine which of 3 new label designs will best accomplish that.  For a tech company, the marketing objective may be to increase profitability for the next gen mobile phone while the research objective is to determine which subset of potential new features will maximize profitability.

Often this step is ignored or assumed, and that can lead to an unfortunate disconnect between the actionability of a study and the client’s decision-making process.

Crafting the questionnaire begins only after the marketing and research objectives are aligned. You'll notice we chose the word "craft" rather than "develop" or "write." That's because questionnaire preparation is a craft—a seamless union of art and science. It is critical to have an experienced, objective researcher craft your questionnaire. Why?

  • We’ve developed the optimum survey methodology to answer your marketing and research objectives.
  • We thoughtfully craft the questionnaire so that it provides the raw data required for specific statistical analyses needed to address those objectives.
  • We go to great lengths to ensure the questionnaire is both valid and reliable, and respectful of the respondent experience.

So what’s the deal with reliability and validity? Outside statistical research, these two terms are often used interchangeably but they actually mean very different things. Reliability refers to consistency.  In the case of a questionnaire, reliability means the degree to which measurements provide consistent outcomes.  Validity, on the other hand, represents the degree to which a question or scale measures what it is intended to measure. A good way to remember this is the example of a clock. A clock measures “true” time, and does so continuously.  If the clock were to show the wrong time, we would say it is invalid.  If it were sometimes fast and sometimes slow, we would say it is unreliable.  It is possible, however, to have a measure that is highly reliable but of poor validity (e.g., a clock that is precisely 20 minutes fast consistently).
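The clock analogy can also be sketched numerically. Treating each "measurement" as minutes of error from true time, validity corresponds to low average error (bias) and reliability to low spread. A minimal, hypothetical Python illustration (the data values are invented for the example):

```python
import statistics

# Each list holds "clock errors" in minutes relative to true time.
reliable_but_invalid = [20, 20, 20, 20]   # always exactly 20 minutes fast
unreliable = [-7, 12, -3, 9]              # sometimes fast, sometimes slow
valid_and_reliable = [0, 1, -1, 0]        # close to true time, consistently

def bias(errors):
    """Average error: how far, on average, from the true value (validity)."""
    return statistics.mean(errors)

def spread(errors):
    """Standard deviation: how consistent the measurements are (reliability)."""
    return statistics.stdev(errors)

for name, errors in [("reliable but invalid", reliable_but_invalid),
                     ("unreliable", unreliable),
                     ("valid and reliable", valid_and_reliable)]:
    print(f"{name}: bias={bias(errors):.1f} min, spread={spread(errors):.1f} min")
```

The consistently fast clock shows a large bias but zero spread: highly reliable, poor validity, exactly as described above.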

As it relates to questionnaire development, reliability is somewhat easier to manage than validity. To maximize reliability, the questionnaire should include easy-to-understand questions, clear instructions, and unambiguous scales. As for validity, the sequencing of questions is critical. Except for the very first question, every question is potentially biased by the questions that precede it. For this reason, care must be taken not to "tip off the witness" by inadvertently educating the respondent or creating awareness through the questions asked. For example, we would not want to list brands and ask respondents which they were aware of, and then later ask for their favorite brand, if determining the latter was a key study objective.

To make a questionnaire valid, care must be taken to ensure that the way a question is asked maps as closely as possible to what it is intended to measure.

At its best, a questionnaire is crafted in a perfectly clear, unambiguous manner, with carefully designed questions that yield results usable for their intended purpose. At its worst, a questionnaire ignores these critical components and yields invalid or unreliable results.

So, don’t try this at home, folks. We seasoned researchers live and breathe this stuff. We have the experience required to ensure survey results are valid and reliable, and provide you the sound guidance you need to drive your critical business decisions.

Merrill Research, Experience You Can Count On

Filed Under: David Schneer, Research Tagged With: blog, David Schneer, Questionnaire, Research, Surveys

Dead Data Tell no Tales: Can Your Survey Results Survive?

September 17, 2018 by Rich Stimbra

By David M. Schneer, Ph.D.

Sour Sample, Part 2: How to Prevent DOA Data

This blog entry continues the thoughts of our last post.  If you haven’t read it yet, you can see it here: When Data Goes Bad: How Sour Survey Samples Nearly Ruined the Research Industry

Yeah, we know. The subject of data cleaning is a real yawner. But dirty data done dirt cheap can result in the death of any study. Good-quality data begins with a well-crafted survey – that part is absolutely critical. Poorly constructed surveys – and not so much the respondents themselves – are mostly to blame for the rotten data that results. A well-crafted and clever survey, thoughtfully designed with the respondent experience in mind, should be everyone's primary defense against deadly bad data. But that is a topic for another blog.

In this post, we seek to heed the words of Donato Diorio, renowned data scientist, who posited: “Without a systematic way to start and keep data clean, bad data will happen.”

Bad data is no good. At Merrill Research, we identify bad data by evaluating individual responses and looking for "flags." When people ask us what kind of business we are in, we tell them that we're in the Red Flag business. Depending on the length and complexity of the survey, we may program as few as three or as many as eight or more flags. We then monitor respondent flags throughout fielding and replace bad data along the way.

Some of our flags are old-school – things everyone should already be looking for. The power is in leveraging a variety of flags in clever ways to identify the zombies. Here are just a few flags we use to ensure our data are sound:

  1. Speeders: Individual respondents must take at least 40% of the average overall interview length to complete the survey. Of course, there are exceptions to this rule, as one needs to consider the timing impact of skip patterns, open-ends, loading time for images, etc., but this is a good baseline.
  2. Straight-Liners: We check whether a respondent straight-lines (gives the same response to) a series of rating-scale questions. We run a distribution of the response codes to determine what percentage of straight-lining warrants removing a respondent (e.g., terminating a respondent who straight-lined a certain percentage of the statements).
  3. Inconsistent Responses: We may ask in the screener which brands respondents have consumed in the past 3 months. Later in the survey, a follow-up question may ask exactly when they last consumed that brand. If a respondent's answer falls outside the past 3 months, we flag that respondent. Another way to use this method is to craft survey questions that test a respondent's knowledge of the topic in question. If they are unable to accurately answer a question designed to test basic knowledge of the topic, they earn yet another flag.
  4. Responses to Verbatim Questions: We review the open-ends to verify that the respondent is answering the survey in a thoughtful manner. Short answers or garbage responses are flagged.
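The first three flag checks above can be sketched in a few lines of Python. This is a hypothetical illustration, not Merrill's actual pipeline; the field names (`duration_sec`, `grid`, `recency`) and thresholds are invented for the example:

```python
# Hypothetical red-flag checks for a single survey respondent.

def flag_speeder(duration_sec, avg_duration_sec, threshold=0.40):
    """Flag respondents who finished in under 40% of the average interview length."""
    return duration_sec < threshold * avg_duration_sec

def flag_straight_liner(grid_answers, max_same_share=0.90):
    """Flag respondents who gave the same answer to (nearly) every grid item."""
    most_common = max(grid_answers.count(a) for a in set(grid_answers))
    return most_common / len(grid_answers) >= max_same_share

def flag_inconsistent(brands_past_3mo, last_consumed_days):
    """Flag respondents whose recency answer contradicts the screener (past 3 months ~ 90 days)."""
    return any(last_consumed_days.get(b, 0) > 90 for b in brands_past_3mo)

def count_flags(resp, avg_duration_sec):
    """Total the flags earned by one respondent."""
    flags = 0
    flags += flag_speeder(resp["duration_sec"], avg_duration_sec)
    flags += flag_straight_liner(resp["grid"])
    flags += flag_inconsistent(resp["brands"], resp["recency"])
    return flags

# Example respondent: fast, straight-lined a 10-item grid, and contradicted the screener.
resp = {"duration_sec": 110,
        "grid": [5, 5, 5, 5, 5, 5, 5, 5, 5, 4],
        "brands": ["A", "B"],
        "recency": {"A": 30, "B": 120}}
print(count_flags(resp, avg_duration_sec=300))  # prints 3
```

In practice, a respondent accumulating some number of flags would be terminated or replaced during fielding, as described above.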

We may never achieve 100% confidence that every respondent is genuinely engaged with our survey instrument, but unless stringent data cleaning measures are employed, conclusions drawn from the data will be increasingly questionable – think of the "IT professional" who just finished your survey but was actually a 15-year-old in New Jersey. At Merrill Research, we employ the most effective measures to make sure our data is as clean as possible, and we are always on the lookout for the latest methods to ensure data quality.

We all know data research isn’t sexy, but neither is staking your reputation on bad results. Let us help you keep your data from getting zombified.

Merrill Research, Experience You Can Count On

Filed Under: David Schneer, Research Tagged With: blog, Data, David Schneer, Donato Diorio, Red Flags, Surveys


© 2023 Merrill Research. All Rights Reserved.