Getting the Most from Online Surveys
Do you wonder about online surveys? Are they reliable? Is there bias? A recent Pew Research Center article, “For Weighting Online Opt-In Samples, What Matters Most?”1, provides guidance to allay those concerns.
Pew experimented with different procedures for weighting results from surveys with online opt-in samples to discover which techniques best reduced bias in estimates. They compared the online results with results for 24 benchmark questions drawn from “high-quality federal surveys,” public surveys conducted using more traditional methods.
This was not a small study: it was based on over 30,000 online opt-in surveys, conducted by three different vendors, and focused on national (U.S.) estimates. Pew evaluated three statistical weighting techniques: raking, propensity weighting, and matching, used singly and in combination. Each procedure was performed on simulated samples ranging in size from n=2,000 to n=8,000 to see whether results differed by sample size.
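Of the three techniques, raking (iterative proportional fitting) is the easiest to illustrate: respondent weights are repeatedly rescaled until the weighted sample margins match known population margins. The sketch below is a minimal illustration only, using made-up toy data and assumed target margins, not the actual data or procedure from the Pew study.

```python
# Minimal sketch of raking (iterative proportional fitting).
# All data and target margins here are hypothetical, for illustration only.

def rake(rows, targets, iterations=50):
    """rows: list of dicts of categorical variables per respondent.
    targets: {variable: {category: population proportion}}.
    Returns one weight per row whose weighted marginals match the targets."""
    weights = [1.0] * len(rows)
    for _ in range(iterations):
        for var, target in targets.items():
            # Current weighted total for each category of this variable.
            totals = {}
            for w, row in zip(weights, rows):
                totals[row[var]] = totals.get(row[var], 0.0) + w
            grand = sum(totals.values())
            # Scale each respondent's weight so margins match the target.
            for i, row in enumerate(rows):
                cat = row[var]
                weights[i] *= (target[cat] * grand) / totals[cat]
    return weights

# Toy opt-in sample in which young respondents are overrepresented.
sample = (
    [{"age": "18-29", "sex": "F"}] * 5
    + [{"age": "18-29", "sex": "M"}] * 3
    + [{"age": "30+", "sex": "F"}] * 1
    + [{"age": "30+", "sex": "M"}] * 1
)
targets = {
    "age": {"18-29": 0.2, "30+": 0.8},  # assumed population margins
    "sex": {"F": 0.5, "M": 0.5},
}
w = rake(sample, targets)
```

After raking, the weighted share of each age and sex category in the toy sample matches the stated population margins, even though the raw sample was heavily skewed toward younger respondents.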
They found that the statistical procedures used to reduce bias mattered less than choosing the correct variables for weighting. Moreover, they found that who makes up the sample, and how respondents relate to the survey topic, matter most.
What does this have to do with pharmaceutical marketing? Those two factors are the core of any survey, whether it is conducted online or in person:
- What distinguishes this sample from the population?
- What is their association with the survey topic?
In surveys where the core topics are health care and treatment, the “factor that differentiates the sample from the population” is a shared condition. When “the [respondent’s] association with the survey topic” is personal, the results come directly from the genuine patient and caregiver experience. The sponsor’s goal is to find a vendor who knows their sample.
The issue is not whether online surveys distort results more or less than traditional survey methods. As the authors of an article2 published in the Journal of Strategic Information Systems defending the data quality of online surveys pointed out, many of the common objections to the accuracy of online polls, such as “respondents may lie and cheat” and “respondents may not give tasks their full attention,” apply equally to traditional methods. In-person respondents can rush, misread, and misrepresent too. All survey instruments have known weaknesses, and most employ methods to compensate for bias.
Online surveys are less expensive and faster to field than traditional surveys. Further, they overcome the limitations of geography, portability, and reproducibility associated with traditional studies. The key factor is the vendor’s access to, and relationship with, the subgroup(s) being studied.
Ultimately, the quality and accuracy of a survey or focus group study, whether online or in person, depend on factors related to the vendor:
- the skill of the vendor designing the survey, including a deep understanding of the target;
- the suitability of the vendor’s selected respondent group; and
- the skills and tools used by the analysts performing the interpretation.
As the authors of a 2016 paper published in the Journal of Business Research titled “A multi-group analysis of online survey respondent data quality: Comparing a regular USA consumer panel to MTurk samples” concluded, “The choice of an Internet survey sample vendor is critical, as it can impact sample composition, respondent integrity, data quality, data structure and substantive results.”3
Inspire offers a trusted community to patients and caregivers. Our goal with this blog, this website, and our content is to provide the life science industry with access to the true, authentic patient voice. In so doing, we support faithful operationalization of patient-centricity. Take a look at our case studies, eBooks, and news outlet coverage.