A survey is only as good as the answers it receives, and participants who lose interest or give careless answers can skew the data and make a survey unreliable. So how do you ensure you're getting reliable data at the individual level? The answer differs across methodologies, but in online research it pays to be vigilant when writing your survey, programming it, and reviewing your data set.
Here are six quality checks you can use to help ensure quality answers from your respondents:
Mobile First Design
A strong, mobile-first design is critical to achieving the highest quality answers because it ensures that your survey is accessible and usable across all devices and browsers. Today, more than half of all web activity happens on mobile devices, and that share keeps growing. When possible, our recommendation is to always offer an engaging, respondent-centric survey experience designed with mobile users in mind.
Honey Pots
When you're designing a survey and want to verify that your respondents are actually human, one easy way to do so is with honey pots: a method designed to identify and act against unauthorized use of information systems. Our programming template uses honey pots to aid in the detection of fraudulent activity.
Honey pots can be used in two ways:
- To detect automated responses (i.e., spam). If the answers provided by one or more respondents appear automated, those responses are flagged as suspicious and excluded from your survey results.
- To embed hidden questions in the survey. Human participants cannot see these questions, but bots will see and answer them, revealing themselves.
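The hidden-question variant above can be screened for automatically. This is a minimal sketch, assuming each response arrives as a dict and that the honey pot field name (`hp_question` here) is hypothetical and set by your survey platform:

```python
def flag_honeypot_hits(responses, honeypot_field="hp_question"):
    """Flag respondents who answered a hidden honey pot question.

    The field is invisible to human participants (e.g., hidden via CSS),
    so any non-empty value suggests an automated submission.
    """
    flagged = []
    for resp in responses:
        if resp.get(honeypot_field):  # humans leave it blank or never see it
            flagged.append(resp["id"])
    return flagged
```

Flagged IDs can then be routed to a manual review queue rather than removed outright.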
Red Herring or Trap Questions
There are a few different types of trap questions that can be used. Some of these focus on consistency of answers, whereas others focus on honesty or the ability to follow instructions. These techniques can also be used to test how familiar a respondent is with a subject or industry.
We can suggest effective, non-offensive ways to implement red herring questions to help identify inattentive respondents.
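An instructed-response trap (e.g., "Please select 'Somewhat agree' for this row") can be checked programmatically. A minimal sketch, assuming question IDs like `q7` are placeholders for your own questionnaire:

```python
def failed_trap_questions(response, traps):
    """Return the trap-question IDs a respondent answered incorrectly.

    `traps` maps a question ID to its single correct answer, e.g. the
    answer a respondent was explicitly instructed to select.
    """
    return [qid for qid, expected in traps.items()
            if response.get(qid) != expected]
```

A respondent who fails one trap may simply have slipped; failing several is a stronger signal of inattention.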
Straight-Lining
This flag is triggered when a respondent gives identical or nearly identical answers to questions that share the same response scale. Straight-lining can be a result of fatigue, so it is highly encouraged to design the survey to minimize that possibility.
Which questions should or shouldn't be flagged for straight-lining? The question type and the number of attributes influence whether a straight-lining check is appropriate. Our standard QC review takes these factors into account to ensure your data is representative and not diluted by respondents who ignored the evaluative criteria.
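One common way to quantify straight-lining on a grid is the share of items given the respondent's most frequent answer. A minimal sketch; the flagging threshold is an assumption you would tune to the grid length and question type:

```python
from collections import Counter

def straightline_ratio(grid_answers):
    """Fraction of grid items given the most common answer.

    1.0 means a perfect straight line. High values on long grids are
    worth flagging; on short grids they can occur by chance.
    """
    if not grid_answers:
        return 0.0
    top_count = Counter(grid_answers).most_common(1)[0][1]
    return top_count / len(grid_answers)
```

For example, a 10-item grid answered `3` every time scores 1.0, while varied answers score much lower.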
Speeder Checks
The goal of identifying speeders is to find respondents who spent so little time that it would have been impossible to give the questionnaire the attention it needed, not simply people who move through a questionnaire faster than average. Setting the proper thresholds is key in the review process. The industry standard for speeder checks recommends flagging any completion time under one-third of the survey's median length.
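The one-third-of-median rule translates directly into code. A minimal sketch, assuming completion times are already available in seconds per respondent:

```python
from statistics import median

def flag_speeders(durations, fraction=1 / 3):
    """Flag respondents whose completion time falls below a fraction
    of the median completion time (one-third by default).

    `durations` maps respondent ID -> completion time in seconds.
    """
    cutoff = median(durations.values()) * fraction
    return [rid for rid, secs in durations.items() if secs < cutoff]
```

Because the cutoff is relative to the median, it adapts automatically as the survey's typical length changes.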
Open-Ended Responses
Open-ended questions give respondents more room to elaborate on their answers and give researchers more in-depth data. Unlike the other criteria, however, open-ended question quality should be evaluated manually. Areas to monitor include identical responses across respondents, internal inconsistencies, gibberish, foul language, and irrelevant answers.
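While the final judgment stays with a human reviewer, a pre-screen can surface the most likely problem cases. A minimal sketch covering two of the signals above, verbatim duplicates and very short answers; the `min_length` threshold is an assumption, not a standard:

```python
from collections import Counter

def prescreen_open_ends(answers, min_length=5):
    """Surface open-ended answers worth manual review.

    Flags verbatim duplicates across respondents and very short text.
    `answers` maps respondent ID -> open-ended response string.
    """
    normalized = {rid: text.strip().lower() for rid, text in answers.items()}
    counts = Counter(normalized.values())
    flagged = {}
    for rid, text in normalized.items():
        reasons = []
        if counts[text] > 1:
            reasons.append("duplicate")
        if len(text) < min_length:
            reasons.append("too short")
        if reasons:
            flagged[rid] = reasons
    return flagged
```

Gibberish and relevance checks are harder to automate reliably, which is why this step only ranks candidates for the manual review rather than replacing it.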
Expert Market Research Project Management, Survey Programming, and Data Processing Services
These are not the only areas to monitor, but you will want to partner with a data collection company knowledgeable enough to navigate these issues and address data quality proactively. Our project management team has 100+ years of combined experience in sampling and data collection, and we would be delighted to provide the consultation you need to feel confident in the data you deliver. Contact us today to get started.