By David M. Schneer, Ph.D./CEO
11-Minute Read
I’ll never forget my very first research methods class in college. Two concepts emerged that shaped my entire career: 1) Quantitative research is AMAZING, and I must do this for the rest of my life, and 2) The fundamental responsibility of any researcher is to continually improve the quality of research. That includes sharing expertise and constantly seeking to learn new or improved methods.
Market research professionals are dedicated to providing you with accurate, unbiased, valid, reliable, and ethical data, so that you are armed with the best information to drive critical business decisions. We have staked our professional reputations on our ability to do just that.
In a huge win for market research, DIY survey tools have democratized research and exposed countless professionals to the power and influence that research results carry. I once managed a killer team of professional researchers at a huge technology company, where we leveraged these tools daily. Coupled with our expertise, these tools empowered us to conduct a tremendous amount of research and drive countless business decisions across our global organizations.
But there’s a less rosy flip side to the democratization of market research. The fact is, a person simply does not know what they do not know. Even with the best of intentions, DIY tools in the wrong hands lead to bad data and highly risky decision-making. It is critically important that users of these tools take an honest inventory of their skills and weaknesses with regard to their ability to design, execute, interpret, report, and defend the research they conduct.
You wouldn’t do DIY electrical work on your house if you didn’t have the required expertise, would you? So, don’t be ashamed to call professional researchers for your market research projects. Even seasoned researchers seek the advice of industry colleagues when in need of a particular expertise.

There are countless considerations to weigh before deciding whether to take the DIY route. We do not raise these considerations to shame anyone. We raise them because there exists a sentiment that since anyone can use DIY tools, anyone should. As we discussed earlier, well-intentioned people cannot be faulted for not knowing what they do not know. But in the case of DIY research to support important business decisions, what you don’t know will hurt you. Here, we’ve limited our focus to just a handful of the most critical considerations:
- Defining methodology: Is a survey the most appropriate methodology to address your objectives? (Because it isn’t always.) Who is your target audience? What segments within that audience need to be analyzed separately (e.g., by gender, job role, revenue, etc.), and how does that impact total sample size and quota groups? How long should the interview be? Are respondent incentives warranted? If so, what kind of incentive is appropriate based on respondent spec, length of survey, and complexity of feedback you seek, and are you ready to manage incentive fulfillment? Are you aware of any legal requirements in your state or country relating to incentive distribution (especially drawings)?
- Sample source: Where are you getting your respondents? Is it a trusted provider, or just the cheapest one who claims they can get you whoever you need? If your sample is being fed to your survey directly through your survey tool, do you know and trust how the participants were recruited? (Expert researchers have been around the block long enough to know who to trust – and who not to trust – for survey sample.) Will they stand behind their quality and happily replace respondents who need to be replaced during your data cleaning process? If you’re asked to send invitations to a database of potential respondents, are you confident they are legally contactable? Are you knowledgeable about location-specific data protection laws, GDPR, general best practices regarding how to receive or send that internal contact list, and how to protect that data once you have it? Do you know what response and completion rates to expect from that database, and therefore how many contacts you need in order to achieve the final sample size you seek?
Check out our earlier blog post on sample quality.
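To make the sample math in the two bullets above concrete, here is a minimal sketch. The margin-of-error formula for a proportion is standard; the response, completion, and cleaning-loss rates are hypothetical assumptions you would replace with figures from your own sample provider.

```python
import math

def sample_size_for_moe(moe, z=1.96, p=0.5):
    """Completes needed for a target margin of error on a proportion,
    at 95% confidence (z=1.96), using the conservative p=0.5."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

def invites_needed(target_completes, response_rate, completion_rate,
                   cleaning_loss=0.15):
    """Contacts to invite so that, after non-response, drop-off, and
    data cleaning, you still end with target_completes usable records.
    All three rates here are illustrative assumptions, not benchmarks."""
    usable_per_invite = response_rate * completion_rate * (1 - cleaning_loss)
    return math.ceil(target_completes / usable_per_invite)

n = sample_size_for_moe(0.05)  # +/-5 points at 95% confidence -> 385 completes
print(n)

# If 20% of invitees start, 80% of starters finish, and ~15% of completes
# are later removed during cleaning:
print(invites_needed(n, response_rate=0.20, completion_rate=0.80))  # 2831
```

And remember that every quota group you intend to analyze separately needs an adequate base size of its own, which can multiply these numbers quickly.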
- Questionnaire Design: Are you skilled in the art and science of crafting concise, unambiguous, unbiased, and non-leading survey questions? Are you confident in your ability to define reasonable screening questions to accurately identify who you do – and do not – want to participate in your survey? Are you knowledgeable about what question types/setups are best suited to meet your research objectives? How about the types of variables required for basic analysis, or any potential multivariate analysis you may need to conduct? Do you understand the principles around providing a good respondent experience, in order to encourage participants to complete the entire survey without bailing out of boredom, frustration, or fatigue? Are you able to respect your respondents’ time by holding your internal stakeholders to a reasonable survey length? Do you know that if you do not, you will suffer from higher drop-off rates, longer field time, negative impacts on feasibility, and higher sample costs?
Here are some helpful blog posts on questionnaire design.
- Programming: Are you confident in your technical ability to program, QA test, launch, and manage the survey via your chosen DIY tool? Are you skilled in optimizing the look and feel of your survey, so it both encourages respondents to complete it and prevents visual burn-out? Are you considerate of the fact that many participants will be taking your survey on a mobile device, and that there are ways to make the survey experience appropriate for them? Are you skilled at designing logic statements to ensure your skip patterns, “show if” logic, and other survey flow elements are designed properly, function as intended, and are maximized to provide valid data? Are you prepared to perform an exhaustive QA check on your functional link, trying everything possible to “break” it or identify other issues before launch? Do you understand why soft launches are critical, and how to execute and analyze one?
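Skip patterns and “show if” rules live inside your DIY tool, but the discipline of testing them can be sketched outside it. Below is a toy illustration, with invented question IDs and rules, of expressing routing logic as data and asserting that edge cases behave as intended before launch.

```python
# Hypothetical routing rules: a question is shown only if its condition
# on earlier answers is true. Question IDs and rules are invented.
RULES = {
    "Q3": lambda a: a.get("Q1") == "Yes",   # ask Q3 only if Q1 == "Yes"
    "Q5": lambda a: a.get("Q2", 0) >= 3,    # ask Q5 only if Q2 rating >= 3
}

QUESTIONS = ["Q1", "Q2", "Q3", "Q4", "Q5"]

def visible_questions(answers):
    """Return the questions a respondent should see, given answers so far."""
    return [q for q in QUESTIONS if q not in RULES or RULES[q](answers)]

# QA: deliberately try to "break" the logic with edge cases.
assert "Q3" not in visible_questions({"Q1": "No"})
assert "Q5" in visible_questions({"Q1": "Yes", "Q2": 4})
assert "Q5" not in visible_questions({})  # a missing answer must not show Q5
print("routing checks passed")
```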
- Data cleaning and processing: Are you prepared to review your data in an effort to identify respondents who need to be removed and replaced due to inadequate subject matter knowledge, straight-lining, speeding, or providing contradictory responses? Are you familiar with methods of identifying bots or participants who are not fully engaged? (Special question types, IP address identification, time stamps, and other respondent integrity measures are absolutely critical!) Are you ready to go back into field after removing 10-20% of your completes via data cleaning? Are you prepared to clean, manage, and analyze verbatim, open-ended comments?
For more on data cleaning, check out our two earlier posts on the topic.
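As a minimal illustration of two of the flags named above – speeding and straight-lining – here is a short pandas sketch. The column names, thresholds, and data are hypothetical; in practice, these flags should prompt human review, not automatic deletion.

```python
import pandas as pd

# Illustrative completes: respondent id, seconds to finish, and answers
# to a grid of five 1-5 rating questions. Column names are hypothetical.
df = pd.DataFrame({
    "resp_id":  [1, 2, 3, 4],
    "duration": [412, 95, 388, 705],  # seconds to complete
    "g1": [4, 3, 3, 5], "g2": [2, 3, 4, 1],
    "g3": [5, 3, 2, 4], "g4": [3, 3, 5, 2], "g5": [4, 3, 1, 5],
})

grid = ["g1", "g2", "g3", "g4", "g5"]

# Speeders: finished in under some fraction of the median time.
# The 0.4 threshold is an assumption; pick one that fits your survey.
speeders = df["duration"] < 0.4 * df["duration"].median()

# Straight-liners: zero variation across the entire rating grid.
straightliners = df[grid].nunique(axis=1) == 1

flagged = df[speeders | straightliners]
print(flagged["resp_id"].tolist())  # candidates for review and replacement
```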
- Analysis, interpretation, and reporting: Do you understand the principles of data analysis? Are you skilled at connecting the dots between various datapoints? How about critically analyzing conflicting data? Do you understand how to identify whether a result is meaningful, statistically significant, both, or neither? (Not all statistically significant differences are meaningful!) Are you competent in objectively analyzing data, and defending the voice of your respondents when challenged by internal stakeholders? (Such challenges should and will happen.) Are you confident that you will not inadvertently misinterpret the data? Are you able to spot when others do, and willing to correct them to ensure they have an accurate understanding of what the data is – and is not – saying? Are you confident enough in your methodological approach, execution, and data analysis to defend them when an internal stakeholder who doesn’t like what the data say tries to punch holes in them? (Because that often happens when stakeholders feel threatened by results.)
Here’s a blog on the importance of a solid report.
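To make the point about significance versus meaningfulness concrete, here is a self-contained sketch of a standard two-sided z-test for the difference between two proportions. The counts are invented: with large enough samples, even a 1.5-point gap clears p < 0.05, yet it may be far too small to justify a business decision.

```python
from math import sqrt, erfc

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return p1 - p2, z, p_value

# Invented example: 51.5% vs. 50.0% agreement, 10,000 completes per group.
diff, z, p = two_prop_ztest(5150, 10000, 5000, 10000)
print(f"diff={diff:.3f}  z={z:.2f}  p={p:.4f}")  # p ~= 0.034: "significant",
# but a 1.5-point difference may be meaningless for the decision at hand.
```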
In one of our earlier blog posts, we said: “We all know data processing isn’t sexy, but neither is staking your reputation on bad results.” We strongly advise you to honestly assess your skill level in regard to executing DIY survey research, and consider the weight and cost of your business decision before jumping into the DIY deep end. If the resulting business decision is not terribly significant, then the risk to the business is lower, and a DIY approach is less likely to have a negative impact. But if the results are intended to influence business decisions that involve significant resource expenditures (budget, staff, effort), and the cost of a flawed decision is meaningful, then please call in professional reinforcements. The cost of doing so is far less than that of a failed business plan and damage to your credibility.
So, if there is any doubt in your mind that any piece of this multi-faceted process may fall through the cracks under your watch, then please, PLEASE, be careful, or call an expert. We’re here to help.
Additional reading:
What type of research firm is best for you?
How to best work with a research partner
Project management fundamentals
How to set up a research retainer so you are always poised to get expert help when you need it, whether it’s for consulting on questionnaire design or running an entire project.