What We’re Saying
Adjusting the bias dials: What makes an online survey different?
What is really important is understanding the context in which data are collected
As anyone who has tried to watch No Country for Old Men on an mp3 player in a packed train carriage knows, device matters. If you’ve ever abandoned an online survey after tapping half a grid-full of inaccurate radio buttons, you’ve been scratched by the cutting edge of technological innovation. As researchers try to maximise respondent attention and honesty, and to minimise potential sources of bias, they find answers in new technology, but in their enthusiasm to adopt it they can create new problems.
The evolution of the survey started with face-to-face interviews and, driven by changes in technology, progressed to telephone, online, and most recently mobile methods. At each stage something was gained and something else lost. Face-to-face surveys have particular advantages, like the warmth of a good interviewer and the conversational tone that helps respondents engage. Their main source of bias is social desirability: we all generally want people to think we’re good eggs, so in our responses we tend towards what we think is acceptable rather than admit to anything shady.
The next stage in the evolution was telephone interviewing, driven largely by the cost of having someone stand on a street or knock on doors asking questions, compared with the opportunities afforded by the expanding landline user-base. The survey scripts were the same; it was a matter of persuading someone not to hang up rather than to stand still or not close the front door. In telephone interviewing you keep the personal interaction but lose the eye contact, and it’s easier for people to lie when they’re not looking at you. There was also the problem of late adopters, and a transition period when telephone surveys were supplemented by face-to-face interviews for certain demographic groups.
We moved to online surveys when the internet started to go faster, but coverage bias remains: online samples are restricted to people with an email address. That’s an increasing number of people, but it’s not everybody, so if you’re especially interested in the older cohort it’s important to know that you’re getting either the most tech-savvy of them or the ones with the kindest offspring, but not all of them. There are also new types of bias in online surveys. There’s no interviewer to control the pace of responding or to query unusual patterns, so watch out for straight-liners (people who give the same response to every item on a scale) and for people who complete the survey in an unfeasibly short time. Similarly, there’s no way of monitoring respondent fatigue (how bored and disengaged they’re getting) and no one to help them stay on task, so data quality towards the end of a survey can suffer.
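For readers who work with the raw data, here is a minimal sketch of how an analyst might flag straight-liners and speeders before analysis. It is illustrative only, not part of the original article: the column names (`q1`–`q3`, `duration_secs`) and the minimum-duration threshold are assumptions chosen for the example, and any real cut-offs would depend on the questionnaire.

```python
# Illustrative sketch: flag straight-liners and speeders in survey data.
# Column names and thresholds are assumptions for demonstration purposes.
import pandas as pd


def flag_suspect_respondents(df: pd.DataFrame,
                             scale_items: list,
                             min_duration_secs: float = 120.0) -> pd.DataFrame:
    """Add boolean flags for straight-lining and unfeasibly fast completion."""
    out = df.copy()
    # Straight-liner: identical answer to every item in the scale or grid.
    out["straight_liner"] = out[scale_items].nunique(axis=1) == 1
    # Speeder: finished faster than a plausible minimum completion time.
    out["speeder"] = out["duration_secs"] < min_duration_secs
    return out


if __name__ == "__main__":
    # Made-up example data: three respondents, three scale items.
    data = pd.DataFrame({
        "q1": [3, 5, 2],
        "q2": [3, 4, 2],
        "q3": [3, 5, 1],
        "duration_secs": [45, 300, 180],
    })
    flagged = flag_suspect_respondents(data, ["q1", "q2", "q3"],
                                       min_duration_secs=60)
    print(flagged[["straight_liner", "speeder"]])
```

In practice these flags are usually reviewed rather than used to delete cases automatically, since a fast, consistent respondent is not always a careless one.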
Turning to the user experience of online surveys, change blindness is the notion that people sometimes fail to notice that something in their environment has changed; applied to an online survey, this means consecutive questions need to look as different as possible to make sure respondents know they’ve moved on. Anywhere that uses the Roman alphabet has a left-to-right response bias, meaning that respondents are more likely to tick the first box regardless of whether it is labelled ‘Strongly agree’ or ‘Strongly disagree’. (The reverse applies for scripts read right to left; this is the kind of thing people working on large-scale international studies add to the study design just for kicks.) This bias makes a difference of about one-third of a standard deviation, and there is not much we can do about it except acknowledge it.
We’re at a time of transition again, from answering online surveys on big-screen devices to answering on small-screen devices. The first casualty is the grid question, and programmers have been hard at work designing new ways of asking those types of questions. As with every other evolutionary stage, mobile phones have introduced a new source of response bias, and this time it’s device optimisation. The current trend is to specify in the cover email whether the survey is optimised for a specific device, and reports indicate that about three-quarters of surveys make this clear from the start. Problems arise when you ask respondents to switch devices after they’ve already started: they might just drop out and never come back. Attention on mobile is different too, though we’re not yet sure how it will change things, so the recent trend is to keep mobile surveys short to be on the safe side.
What is really important here is not battling to overcome the biases inherent in any data collection process, but making sure to understand the context in which information is collected and to interpret it accordingly. We’re always on the look-out for which problems could be solved by technology, thereby dialling down certain biases, but we’re also aware of which ones are dialled up in the process. The more we know about what it feels like to complete a survey, the better we can make sense of the responses we get. At MCCP, we think about this stuff all the time. We control the controllables and we know what effect the uncontrollables are having.
**As seen in the Irish Marketing Journal, June 2016**