Unless I specify otherwise in a particular post, I program and host all Survey Says surveys in a SurveyGizmo account.
Unless I specify otherwise in a particular post, I field all surveys through Fulcrum (Lucid), a panel platform exchange I’ve used to survey more than 100,000 people over the past two years. These panelists have signed up, by double opt-in, to receive and take online surveys. Every respondent is compensated on the terms he or she agreed to, whether in cash, points, or credits toward some kind of pecuniary reward. I recognize and share some analysts’ concerns with online surveys. But in my professional opinion, these shortcomings are similar in impact to those of traditional phone surveys, and I take measures to mitigate them wherever possible.
Because Fulcrum is a panel provider exchange, I can mix and match recruiting methodologies (that is, the ways panelists are recruited into my sample) to mitigate the risk of bias. In practice, each Survey Says survey is fielded by a handful of different panel providers, each of which recruits and engages its respondents differently.
I specify each survey’s sample size in that survey’s post. My target sample size for each survey depends on a number of considerations, including the size of my population of interest and the statistical significance of the differences I see between key segments. In general, I try to keep my sample size to a minimum without compromising the legitimacy of my findings, as my costs are a function of survey length and sample size.
Unless I specify otherwise in a particular post, I control each survey’s sample to roughly match the 2010 US Census with respect to age, gender, and race. I will always publish each survey’s topline findings for these demographic questions.
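To make "roughly match" concrete, a sample-control check can be sketched as comparing each demographic group's share of the sample against a census target, within some tolerance. The target proportions, group labels, and tolerance below are illustrative assumptions, not the actual 2010 Census figures or the exact quotas I use.

```python
# Hypothetical check that a sample roughly matches census targets.
# Targets and tolerance are illustrative, not actual 2010 Census figures.
census_targets = {"18-34": 0.30, "35-54": 0.36, "55+": 0.34}

# A toy sample of 100 respondents' age brackets.
sample_ages = ["18-34"] * 28 + ["35-54"] * 38 + ["55+"] * 34

def within_tolerance(values, targets, tol=0.05):
    """True if each group's sample share is within tol of its target share."""
    n = len(values)
    return all(abs(values.count(group) / n - share) <= tol
               for group, share in targets.items())

print(within_tolerance(sample_ages, census_targets))
```

In practice quotas are usually enforced during fielding (closing a survey to a group once its quota fills) rather than checked after the fact, but the comparison logic is the same.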
Finally, I clean each dataset for speeders (respondents who complete the survey implausibly fast), flatliners (those who select the same answer option for every question), and contradictory answers (those who report two opposing facts about themselves or their perceptions). I catch contradictions by including one or more “foils” throughout each survey, designed to flag respondents who are not reading and answering each question carefully.
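The three cleaning rules above can be sketched as boolean flags over a respondent-level table. This is a minimal illustration, not my actual cleaning script: the column names, the half-the-median speeder cutoff, and the `owns_car` / `never_owned_car` foil pair are all invented for the example.

```python
import pandas as pd

# Hypothetical respondent-level data; columns are illustrative only.
df = pd.DataFrame({
    "respondent_id":   [1, 2, 3, 4],
    "duration_sec":    [420, 45, 390, 400],   # time to complete the survey
    "q1":              [3, 2, 4, 4],
    "q2":              [1, 2, 4, 3],
    "q3":              [5, 4, 4, 2],
    "owns_car":        [True, True, False, True],   # foil pair: both True
    "never_owned_car": [False, False, False, True], # is a contradiction
})

grid_cols = ["q1", "q2", "q3"]
median_time = df["duration_sec"].median()

speeder = df["duration_sec"] < 0.5 * median_time         # faster than half the median
flatliner = df[grid_cols].nunique(axis=1) == 1           # same answer on every item
contradiction = df["owns_car"] & df["never_owned_car"]   # failed the foil

clean = df[~(speeder | flatliner | contradiction)]
print(clean["respondent_id"].tolist())
```

Here respondent 2 is dropped as a speeder, 3 as a flatliner, and 4 for the contradiction, leaving only respondent 1 in the cleaned data.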