
New Zealand’s Leading Independent Fieldwork Company


New Service - Like a WOF for your Survey

Article by: Duncan Stuart FMRSNZ 


Years ago I used to work at Heylen Research, and learned a lot from colleagues and mentors like Colin Ingram, Roz Calder, Richard Dunbar and Michael Cook.  But I think I learned most from the head of the field department: Ngaire Reid. In terms of the sheer crunch of getting sampling and methodology right, and of using a questionnaire that absolutely works, Ngaire Reid was, and is, a formidable advocate for quality.  She was the bastion between the draft-quality questionnaires we’d take up to Field, and the final product.

In Ngaire-proofed questionnaires the skips actually skip to the right places, all respondent possibilities (such as “Don’t Know or Not Applicable”) are accounted for, and the language itself – often confusing – is cleaned up, made grammatically correct and made understandable for the respondent.

Working with her was a great experience for me. I thought it was funny how, when you typed her name into a Word document (this was in the early 1990s, when that annoying Microsoft paperclip would pop up and make suggestions), the software would helpfully suggest that instead of Ngaire what I really meant was Nagger. Fair call!

She is a precise person, but always with a sense of humour. She would pick up every little thing and, in so doing, take my lumps of well-intentioned coal and turn them into diamonds.

Ngaire has since become director of Reid Research, and in conversation the other week I learned that she’s offering a specific Questionnaire Quality Check service, available to anyone who wants their questionnaires – whether CATI, online or face-to-face – proofed, tweaked and readied before going into field.


Ngaire, an interesting service. By implication you’re seeing a lot of questionnaires that really do need an extra quality check. Has the standard of questionnaire writing been falling – or has there always been this room for improvement?

NR: It has always been an issue. Standards vary greatly, and I do think there has been a drop in overall quality in recent years as costs have been squeezed. When a researcher creates a questionnaire, they get so close to it they can sometimes miss some of the fine-tuning intricacies. And sometimes the researcher’s expectations of what a question will produce can make them somewhat tunnel-visioned. I think today, in such a price-driven market, clients can demand too much input, and we accept it in an attempt to please them. In those circumstances we are really not taking the professional lead and giving them the benefit of our expertise, which is, after all, what they are paying for.

It is always a good idea to get someone who is not involved in the project to go through the questionnaire and “proof” it.  We often forget that the language we use comfortably may be quite unintelligible to a large proportion of the population. There’s a need to put things in plain English.

There will never be a perfect questionnaire, as there’s always going to be someone out there whose scenario was outside what we thought about when writing the questions.   But every questionnaire should be as good as we can make it, for the benefit of both the data we collect, and also for our pool of willing respondents.  Keep alienating them, and we’re all in big trouble.     

CATI surveys have always been tightly managed – with online it’s pretty easy to just keep on chucking extra questions into the mix. Has the online medium led to a drop in standards?

NR: To some degree, I believe so. I guess we’ve all been in the situation where we’ve been conscientiously completing an online survey and we get to a question we can’t answer – the appropriate option isn’t there, or there is nowhere to write a comment. So, to continue, we take the nearest option, tell an outright lie, or escape out of the questionnaire. I think the “cheap and quick” image of online work sometimes means surveys go live without adequate checking.

Of course, the length of the questionnaire has a huge bearing on the quality of the data collected. Once respondent fatigue or boredom sets in, any old response can be coded. Being asked to rate long batteries of statements becomes extremely tedious, and data quality takes a dive.

It isn’t just questionnaire quality, either. I think a big overlooked factor is the whole issue of sampling: a fully representative sample of the total population versus a sample of only the population with a computer. And what sort of auditing takes place with online surveys? One difference between CATI and online is that if there’s a bug or issue with the questionnaire, CATI lets you pick it up on the fly – whereas with online, problems are often not seen until the survey is closed off.

What are three common mistakes made in questionnaire writing? 

NR: There are obviously more than three! But the most common mistakes I see are:
  • Missing the technical details – i.e. skips, adequate “none” or “don’t know” options, and all the necessary precodes (a sketch of this kind of check follows the list).
  • Ambiguity, and language within the survey that may not be comprehensible to the respondent.
  • Choosing the wrong methodology for the information you need to elicit – for example, there are good ways and bad ways of asking pricing questions.
  • Questionnaires that are too long, and introductions that aren’t catchy enough to hook respondents into the survey.
  • Questions which are critical to quotas being placed at the end of the questionnaire rather than at the beginning.
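
To make that first point concrete, here is a minimal sketch in Python of the kind of mechanical check a proofing pass covers. The questionnaire structure and the check_questionnaire helper are purely hypothetical illustrations – this is not Reid Research’s actual tooling, and every survey platform has its own format – but the two checks it runs (every skip points at a real, later question; every question offers a “don’t know”-style escape) match the items above.

    # Hypothetical, simplified questionnaire definition - purely illustrative.
    QUESTIONNAIRE = [
        {"id": "Q1", "options": ["Yes", "No", "Don't know"],
         "skips": {"No": "Q3"}},                    # non-consumers skip ahead
        {"id": "Q2", "options": ["Daily", "Weekly", "Monthly"],
         "skips": {}},                              # problem: no escape option
        {"id": "Q3", "options": ["Under 30", "30-49", "50+", "Prefer not to say"],
         "skips": {"50+": "Q9"}},                   # problem: Q9 doesn't exist
    ]

    ESCAPE_PREFIXES = ("don't know", "not applicable", "none", "prefer not")

    def check_questionnaire(questions):
        """Return human-readable problems: bad skips and missing escape options."""
        order = {q["id"]: i for i, q in enumerate(questions)}
        problems = []
        for i, q in enumerate(questions):
            # Every question needs a way out for respondents it doesn't fit.
            if not any(opt.lower().startswith(ESCAPE_PREFIXES) for opt in q["options"]):
                problems.append(f'{q["id"]}: no "don\'t know" / "not applicable" option')
            for answer, target in q["skips"].items():
                # Skips must be keyed on real answers and land on real, later questions.
                if answer not in q["options"]:
                    problems.append(f'{q["id"]}: skip keyed on unknown answer "{answer}"')
                if target not in order:
                    problems.append(f'{q["id"]}: skip targets missing question {target}')
                elif order[target] <= i:
                    problems.append(f'{q["id"]}: skip loops back to {target}')
        return problems

    for problem in check_questionnaire(QUESTIONNAIRE):
        print(problem)

Run it and it flags Q2 for offering no escape option and Q3 for skipping to a question that doesn’t exist – exactly the sort of bug that, in an online survey, may not surface until the fieldwork is already done.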

How does your Questionnaire Quality Check service work? Let’s suppose I drafted a pretty good 12-minute survey for use online and sent you the Word doc – how would the process work?

NR: We will have a brief look at it and give you an idea of what the cost will be. Then we go through it with a fine-toothed comb, mark up a copy for your consideration, and talk you through the items we’ve found. In that respect we’re not being didactic about the changes; rather, you’re getting a sounding board, and some very good suggestions. It’s still your questionnaire.

And a rough cost?

NR: It obviously depends upon how “good” your questionnaire is, but between one and two hours should be plenty for a “pretty good”, straightforward 12-minute survey. So you’re talking about $300. The whole idea is to protect you from the risk of having mistakes get through.

Is that risk getting worse?

NR: One of the problems in questionnaire writing is that there’s never enough time – the project is due out in field yesterday. So the checking processes that all research companies had 20 years ago just aren’t in place. These days it’s so easy for a really important oversight to occur: not something you want to happen when it’s a $50,000 project and you have a client breathing down your neck.

How long would turnaround take, typically?

NR: We will always work to your deadline. We have sufficient skilled staff that we should always be able to put someone onto the proofing same-day or next-day if need be. You’ll get a quicker turnaround if you send us the ‘final draft’ for polishing rather than a sketchy first draft – but if you need help with your draft, we’re here to offer it. Really, most surveys should only take a few hours; it depends upon the quality and length of the questionnaire we receive. If we can’t deliver in time, we’ll tell you.

These days a lot of researchers are still going to say: “I can’t afford that extra 24 hours.”

NR: It’s a simple trade-off: save a few hours and take risks, or slow things down just a little and err on the side of quality. You must always consider the implications of putting the survey into field, or going live, when there is a problem with the questionnaire. That’s the point: almost every unchecked questionnaire has bugs or problems – that’s just reality. Questionnaires are complex documents serving different stakeholders, so the chances of getting it right first time are pretty slim. For that extra layer of checking, you may be saving yourself much bigger, more expensive problems.

Such as…?

NR: Oh, a typical one is when a logic error causes fieldwork to stop whilst it is corrected. In a situation like that, some of the completed interviews may need to be deleted from the data file, or respondents may have to be re-contacted to correct errors or collect missing data. Another consequence is getting data that doesn’t work the way your analysts need it to. Things like that add days to the project, and cost somebody – your own firm, probably – a lot of time and money.

There’s a bigger issue as well. If we go out with mistakes, or the respondent isn’t given space to share their opinions, then they will not be impressed by our professionalism. Most respondents who agree to participate in a survey really want to help, and really want to give the right answers. But if they feel uncomfortable about the interview – because their answers didn’t really fit, or they were asked questions that were irrelevant to them, or they feel dumb because they’re not sure they’re providing helpful, useful answers – then these people will frequently be lost to us in the future. Our small pool of respondents in NZ is being diminished because we aren’t being as professional as we should be.

Reminds me of a survey I did where the skip logic didn’t work. And a pile of people who said they didn’t consume Beverage X were then asked a raft of questions about that beverage.

NR: And how did they feel?

Some got really angry. You could see it in the open enders. I felt bad; it was a technical hitch…

NR: And I bet some vowed they’d never do another survey.

Three of them emailed me and that’s what they said.

NR: Our industry conducts hundreds upon hundreds of surveys every year, and you can see how the problem multiplies. Long or error-prone questionnaires will lead to higher refusal rates and, for all of us, higher costs.

Then there is the possibility of complaints – to the MRSNZ, AMRO, the client company, government departments.

Sooner or later there will be issues that aren’t black and white. Does the service include discussion time?  Does someone like me get the chance to hear your advice?

NR: Yes, as mentioned earlier, this is critical. Obviously we won’t know all the intricacies of the subject matter, and we may need to ask some questions. Our comments will often create some re-thinking on your part. But at the end of the day, we will make the point, and it is your decision as to whether or not you take it. Our aim is to provide a service which not only helps professionals develop their own questionnaire-writing skills, but also, we hope, improves the overall standard of our industry.

So on a $30-50,000 project, where everything’s riding on the questionnaire, this is like getting a WOF on the project before it hits the road. Who do you envisage will most need the service – researchers, or clients who want to protect their research investment?

NR: Yes, of course, both the researcher and the client will benefit from this service – but don’t forget the respondent. These guys are precious to us, and we must make their experience of completing an interview as pleasant as possible, or else we’ll be shooting ourselves in the foot in the long term.


Duncan Stuart FMRSNZ

CONTACT US

+64 9 815 0320

ngaire@reidresearch.co.nz
