September 30, 2024

10 mistakes to avoid when drafting your market research survey

From leading questions to inconsistent answer scales, here’s what to keep in mind.

Let the great Coca-Cola disaster of 1985 be a reminder of the importance of finding ways to truly understand one’s audience. 

As it was losing ground to Pepsi, the soda giant discovered its customers preferred a sweeter flavor, so it introduced New Coke, with a new corn syrup-enhanced recipe—and it backfired royally. 

Customers protested and signed petitions demanding the old formula back, until finally the company’s president appeared in a commercial announcing the return of Coca-Cola Classic. In the end, Coca-Cola realized that customers had an emotional attachment to the original product—and that while they preferred a sweeter taste, they only wanted that in small quantities.

One of the tried-and-true tools for gaining insight into the needs, desires, values, and opinions of a broad customer base is, of course, the survey. 

But what if you don’t ask the right questions? 

Below, we’ll explain some of the most common mistakes market researchers make when designing surveys, and how to fix them so they yield more reliable and actionable data. (Spoiler alert: This is an area where AI assistants can truly come in handy.) 

1. Leading questions

Brands are used to promoting their products and presenting them in a positive light. But that habit can cause problems in surveys if the questions subtly suggest a desired answer. If people follow the question’s lead, they may give responses that make you feel good but aren’t truthful. 

Example: "Don't you think our new espresso-flavored bath bombs are amazing?" 

Instead: To avoid eliciting a false positive, avoid language like “great” or “amazing.” Even though you love the fizzy fun of your espresso-flavored bath bombs, try to frame questions in neutral terms that don’t imply the answer you necessarily want. 

For instance: "How do you feel about our new espresso-flavored bath bombs?"

2. Biased answer choices

The integrity of your survey depends on thoughtful questions and answer options. Even if your question isn’t leading or biased, your answers might tilt too heavily toward one end of the spectrum.

Example: You ask your respondents “How would you rate the new Cosmic Wonder Burger?” and offer them four answer options: “Excellent / Great / Good / Fair.”

Instead: What if they think the burger is terrible? Or haven’t tried it at all? By providing a balanced range of answer choices, including a neutral midpoint and an opt-out, you can ensure less biased responses.

For instance: “How would you rate the new Cosmic Wonder Burger?” > “Excellent / Good / Average / Not Great / Poor / Don’t Know”

3. Double-barreled questions

Double-barreled questions ask about two things at once, making it unclear which part the respondent is actually answering. 

Example: "How satisfied are you with our customer service and the quality of our rubber Halloween masks?"

Instead: Separate the questions to address one issue at a time. It might seem repetitive, but it’s the only way to discern which factor is driving the response: it’s possible that a respondent adores their novelty Elon Musk mask but is intensely dissatisfied with the company’s 1-800 help line. 

For instance: "How satisfied are you with our customer service?" followed by "How satisfied are you with the quality of our Halloween masks?"

4. Ambiguous timeframes

Asking a question about how frequently someone uses a product may seem straightforward at first. But sometimes the timelines aren’t sufficiently specific, leaving the question open to multiple interpretations. 

Example: "How often do you use our organic antacids?"

Instead: Clearly define what you mean by “how often.” Maybe some people have daily heartburn, and others just once a year. A specific timeline ensures all respondents understand the question the same way.

For instance: "How many times per week do you use our organic antacids?" (With answers ranging from “I don’t use the organic antacids” through “5+ times a week.”)

5. Overly complex questions

When drafting a survey, it can help to put yourself in the respondent’s shoes. How will they interpret these questions? Will they feel overwhelmed?

Don’t barrage these poor strangers with questions that cram in too much at once, or that lean on jargon that isn’t accessible to everyone.

Example: "How would you rate the efficacy of our omnichannel marketing strategy in delivering holistic customer experiences across multiple touchpoints?"

Instead: Simplify your language and avoid insider terms like “omnichannel” and “touchpoints,” which might give an everyday survey respondent a headache. You want them to get to the end of the survey, so make it easy by breaking down complex ideas into digestible parts.

For instance: "How effective do you think our marketing is across different channels, including things like Facebook or streaming TV ads?" 

7. Assumptive questions

Sometimes questions falsely take for granted that respondents will be familiar with a particular experience or program when they’re not. Making assumptions can leave a person unable to answer the question. 

Example: "How satisfied are you with our VIP loyalty program?"

Instead: What if the respondent’s not in the VIP program? What if now that they’ve heard about the VIP program they’re annoyed they weren’t invited? Instead of assuming that everyone’s a VIP, direct the follow-up question only to those who are.

For instance: Ask "Have you used our VIP loyalty program?” first, and set your survey’s logic so that only those who answer “Yes” will be steered to the follow-up question about satisfaction.
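If you’re building the survey flow programmatically, the branching described above can be expressed as a small routing function. This is a minimal sketch in Python; the question IDs and flow are hypothetical and not tied to any particular survey tool:

```python
def next_question(answers):
    """Return the next question ID given the answers collected so far.

    Sketch of skip logic: the satisfaction follow-up is shown only to
    respondents who answered "Yes" to the screener question.
    """
    if "used_vip" not in answers:
        # Screener: "Have you used our VIP loyalty program?"
        return "used_vip"
    if answers["used_vip"] == "Yes" and "vip_satisfaction" not in answers:
        # Follow-up shown only to respondents who are in the program
        return "vip_satisfaction"
    # "No" respondents (and finished "Yes" respondents) exit this branch
    return None

# A "Yes" respondent is routed to the satisfaction question...
print(next_question({"used_vip": "Yes"}))  # vip_satisfaction
# ...while a "No" respondent skips it entirely.
print(next_question({"used_vip": "No"}))   # None
```

Most survey platforms let you configure this kind of routing without code, but the underlying logic is the same: a screener question gates who sees the follow-up.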

8. Too many open-ended questions

Open-ended questions are ones that allow the respondent to share their thoughts in an unstructured way (reined in only by whatever space limitations you set). This can be a great way to glean deep insights and tease out themes and sentiments, especially if you layer on an AI tool that can help perform some of that analysis for you.

Example: "What do you think about our product? How would you improve it? What features do you like the most?"

Instead: Think carefully about areas in which you really need unstructured responses, and use open-ended questions sparingly. Otherwise, you’ll tire out respondents (who might in turn begin giving monosyllabic answers).

For instance: “Which features of your hot-dog toaster do you like the most? a) mini tongs, b) drip tray, c) bun slots, d) hot dog slot, e) other (specify).” In this case, only (e) would allow an open-ended response. 

This can be followed by specific open-ended follow-up questions, such as “Do you see a way to improve any of the features? If yes, please explain.” 

9. Overloading with rating scales

Applying the same type of rating scale to question after question can bore respondents. A common culprit is repetitive use of the Likert scale (a multiple-choice scale used to track opinions, with options generally spanning from “strongly agree” to “strongly disagree”).

If respondents don’t give up altogether, they may start to glaze over and give less thoughtful answers. 

Example: “How satisfied are you with our canned meat? a) very satisfied, b) somewhat satisfied, c) neither satisfied nor dissatisfied, d) somewhat dissatisfied, e) very dissatisfied”...repeated ad infinitum for the next dozen questions.

Instead: Vary the format of your questions and mix in different types of answer choices, like open-ended questions or multiple-choice answers, to maintain the respondent’s engagement.

For instance: Follow up a Likert scale question with something like, “How many times a month do you eat our canned meat? a) I don’t eat it every month, b) 1-3 times a month, c) 4-6 times a month, d) more than 6 times a month.”

10. Inconsistent answer scales

While a little variety is a good thing, you don’t want to vary answer choices so much that they confuse people. Inconsistent numeric scales, for example, can make a respondent feel like they’re doing a math problem, which no one wants.

Example: “On a scale of 1-5, how much do you like the texture of our canned meat?” followed by, “On a scale of 1-10, how much do you like the taste of our meat?” 

Instead: Feel free to change up question types in a survey overall, but standardize answer scales across questions of a similar type so that readers aren’t using their energy sorting out numbers instead of providing meaningful answers.

For instance: “On a scale of 1-5, how much do you like the texture of our canned meat?” followed by, “On a scale of 1-5, how much do you like the taste of our meat?” 
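For surveys assembled programmatically, a quick consistency check can catch mismatched scales before the survey ships. A minimal sketch, using hypothetical question data:

```python
# Standard numeric scale for all rating questions in this survey (1-5)
STANDARD_SCALE = (1, 5)

def inconsistent_scales(questions, standard=STANDARD_SCALE):
    """Return the text of any rating question whose scale deviates
    from the survey's standard range."""
    return [q["text"] for q in questions if q["scale"] != standard]

questions = [
    {"text": "How much do you like the texture of our canned meat?",
     "scale": (1, 5)},
    {"text": "How much do you like the taste of our meat?",
     "scale": (1, 10)},  # deviates from the 1-5 standard
]

# Flags the 1-10 question so it can be rewritten on a 1-5 scale
print(inconsistent_scales(questions))
```

A check like this keeps similar questions on the same scale, so respondents spend their energy answering rather than recalibrating.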

How AI can help write surveys

Now that we’ve covered some common mistakes people make when drafting market-research surveys, we might as well mention that Harris QuestDIY can smooth some of this friction.

The self-service tool helps you draft and deploy surveys with confidence, thanks to an AI assistant that will suggest improved question phrasing and answer choices. We'd love to show you how it works.
