How to write great survey questions (with examples)

Learning how to write survey questions is both art and science. The wording you choose can make the difference between accurate, useful data and just the opposite. Fortunately, we’ve got a raft of tips to help.

Figuring out how to make a good survey that yields actionable insights is all about sweating the details. And writing effective questionnaire questions is the first step.

Essential for success is understanding the different types of survey questions and how they work. Each format needs a slightly different approach to question-writing.

In this article, we’ll share how to write survey questionnaires and list some common errors to avoid so you can improve your surveys and the data they provide.

Survey question types

Did you know that Qualtrics provides 23 question types you can use in your surveys? Some are very popular and used frequently by a wide range of people from students to market researchers, while others are more specialist and used to explore complex topics. Here’s an introduction to some basic survey question formats, and how to write them well.

Multiple choice

Familiar to many, multiple choice questions ask a respondent to pick from a range of options. You can set up the question so that only one selection is possible, or allow more than one to be ticked.

When writing a multiple choice question…

  • Be clear about whether the survey taker should choose one (“pick only one”) or several (“select all that apply”).
  • Think carefully about the options you provide, since these will shape your results data.
  • The phrase “of the following” can be helpful for setting expectations. For example, if you ask “What is your favorite meal?” and provide only the options “hamburger and fries” and “spaghetti and meatballs”, there’s a good chance your respondent’s true favorite won’t be included. If you add “of the following”, the question makes more sense.

Rank order

Asking participants to rank things in order, whether it’s order of preference, frequency or perceived value, is done using a rank order structure. There can be a variety of interfaces, including drag-and-drop, radio buttons, text boxes and more.

When writing a rank order question…

  • Explain how the interface works and what the respondent should do to indicate their choice. For example “drag and drop the items in this list to show your order of preference.”
  • Be clear about which end of the scale is which. For example, “With the best at the top, rank these items from best to worst.”
  • Be as specific as you can about how the respondent should consider the options and how to rank them. For example, “Thinking about the last 3 months’ viewing, rank these TV streaming services in order of quality, starting with the best.”

Slider

Slider structures ask the respondent to move a pointer or button along a scale, usually a numerical one, to indicate their answers.

When writing a slider question…

  • Consider whether the question format will be intuitive to your respondents, and whether you should add help text such as “click/tap and drag on the bar to select your answer.”
  • Qualtrics includes the option for an open field where your respondent can type their answer instead of using a slider. If you offer this, make sure to reference it in the survey question so the respondent understands its purpose.

Text entry

Also known as an open field question, this format allows survey-takers to answer in their own words by typing into the comments box.

When writing a text entry question…

  • Use open-ended question structures like “How do you feel about…” “If you said x, why?” or “What makes a good x?”
  • Open-ended questions take more effort to answer, so use these types of questions sparingly.
  • Be as clear and specific as possible in how you frame the question. Give them as much context as you can to help make answering easier. For example, rather than “How is our customer service?”, write “Thinking about your experience with us today, in what areas could we do better?”

Matrix table

Matrix structures allow you to address several topics using the same rating system, for example a Likert scale (Very satisfied / satisfied / neither satisfied nor dissatisfied / dissatisfied / very dissatisfied).

When writing a matrix table question…

  • Make sure the topics are clearly differentiated from each other, so that participants don’t get confused by similar questions placed side by side and answer the wrong one.
  • Keep text brief and focused. A matrix includes a lot of information already, so make it easier for your survey-taker by using plain language and short, clear phrases in your matrix text.
  • Add detail to the introductory static text if necessary to help keep the labels short. For example, if your introductory text says “In the Philadelphia store, how satisfied were you with the…” you can make the topic labels very brief, for example “staff friendliness”, “signage”, “price labeling”, etc.

Now that you know your rating scales from your open fields, here are the 7 most common mistakes to avoid when you write questions. We’ve also added plenty of survey question examples to help illustrate the points.

Likert Scale Questions

Likert scales are commonly used in market research when dealing with single-topic surveys. They’re simple and are most reliable when combating survey bias. For each question or statement, subjects choose from a range of possible responses. For a five-point agreement scale, for example, the responses typically include:

  • Strongly agree
  • Agree
  • Neither agree nor disagree
  • Disagree
  • Strongly disagree

7 survey question examples to avoid

There are countless great examples of well-written survey questions, but how do you know if your survey questions will perform well? We’ve highlighted the 7 most common mistakes made when gathering customer feedback with online surveys.

Survey question mistake #1: Failing to avoid leading words / questions

Subtle wording differences can produce great differences in results. For example, non-specific words and ideas can introduce ambiguity into your survey. “Could,” “should,” and “might” all sound about the same, but may produce a 20% difference in agreement with a question.

In addition, strong words such as “force” and “prohibit” represent control or action and can bias your results.

Example: The government should force you to pay higher taxes.

No one likes to be forced, and no one likes higher taxes. This agreement-scale question makes raising taxes sound doubly bad. When survey questions read more like normative statements than questions seeking objective feedback, measuring that feedback becomes difficult.

Neutral wording alternatives are easy to develop. How about simple statements such as: “The government should increase taxes” or “The government needs to increase taxes.”

Example: How would you rate the career of legendary outfielder Joe DiMaggio?

This survey question tells you Joe DiMaggio is a legendary outfielder. This type of wording can bias respondents.

How about replacing the word “legendary” with “baseball,” as in: How would you rate the career of baseball outfielder Joe DiMaggio? A rating scale question like this gets more accurate answers from the start.

Survey question mistake #2: Failing to give mutually exclusive choices

Multiple choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents.

Review your survey and identify ways respondents could get stuck with either too many or no single, correct answers to choose from.

Example: What is your age group? (e.g., with overlapping options such as 0–10, 10–20, 20–30, 30–40)

What answer would you select if you were 10, 20, or 30? Survey questions like this will frustrate a respondent and invalidate your results.

Example: What type of vehicle do you own?

This question has the same problem. What if the respondent owns a truck, hybrid, convertible, cross-over, motorcycle, or no vehicle at all?

Survey question mistake #3: Not asking direct questions

Questions that are vague and do not communicate your intent can limit the usefulness of your results. Make sure respondents know what you’re asking.

Example: What suggestions do you have for improving Tom’s Tomato Juice?

This question may be intended to obtain suggestions about improving taste, but respondents will offer suggestions about texture, the type of can or bottle, mixing juices, or even using tomato juice as a mixer or in recipes.

Example: What do you like to do for fun?

Finding out that respondents like to play Scrabble isn’t what the researcher is looking for, but it may be the response received. Nothing in the question communicates that the researcher is asking about movies versus other forms of paid entertainment, so a respondent could take this question in many directions.

Survey question mistake #4: Forgetting to add a “prefer not to answer” option

Sometimes respondents may not want you to collect certain types of information or may not want to provide you with the types of information requested.

Questions about income, occupation, personal health, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent.

Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.

While current research does not show that PNA (Prefer Not to Answer) options increase data quality or response rates, many respondents appreciate having this non-disclosure option.

Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely. Sensitive questions include:

  • What is your race?
  • What is your age?
  • Did you vote in the last election?
  • What are your religious beliefs?
  • What are your political beliefs?
  • What is your annual household income?

These types of questions should be asked only when absolutely necessary. In addition, they should always include an option to not answer (e.g., “Prefer not to answer”).

Survey question mistake #5: Failing to cover all possible answer choices

Do you have all of the options covered? If you are unsure, pretest your survey using “Other (please specify)” as an option.

If more than 10% of respondents (in a pretest or otherwise) select “other,” you are probably missing an answer. Review the “Other” text your test respondents have provided and add the most frequently mentioned new options to the list.
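During a pretest, this check is simple to automate: compute the share of “Other” picks and flag the question if it crosses the 10% threshold. A minimal sketch with made-up responses:

```python
from collections import Counter

# Hypothetical pretest answers to one multiple choice question.
answers = ["Price", "Taste", "Other", "Price", "Other", "Convenience",
           "Taste", "Other", "Price", "Taste", "Price", "Other"]

counts = Counter(answers)
other_share = counts["Other"] / len(answers)
print(f"'Other' share: {other_share:.0%}")  # 33% here -> well above 10%

if other_share > 0.10:
    print("Review the 'Other (please specify)' text and add frequent answers.")
```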

Example: You indicated that you eat at Joe's fast food once every 3 months. Why don't you eat at Joe's more often?

  • There isn't a location near my house
  • I don't like the taste of the food
  • Never heard of it

This question doesn’t include other options, such as healthiness of the food, price/value or some “other” reason. More than 10% of respondents would probably have a problem answering this question.

Survey question mistake #6: Not using unbalanced scales carefully

Unbalanced scales may be appropriate for some situations and promote bias in others.

For instance, a hospital might use an Excellent - Very Good - Good - Fair scale where “Fair” is the lowest customer satisfaction point because they believe “Fair” is absolutely unacceptable and requires correction.

The key is to correctly interpret your analysis of the scale. If “Fair” is the lowest point on a scale, then a result slightly better than fair is probably not a good one.

Additionally, scale points should represent equi-distant points on a scale. That is, they should have the same conceptual distance from one point to the next.

For example, researchers have shown the points to be nearly equi-distant on the strongly disagree–disagree–neutral–agree–strongly agree scale.

Set your bottom point as the worst possible situation and top point as the best possible, then evenly spread the labels for your scale points in-between.

Example: What is your opinion of Crazy Justin’s auto-repair?

  • Pretty good
  • The Best Ever

This question puts the center of the scale at fantastic, and the lowest possible rating as “Pretty Good.” This question is not capable of collecting true opinions of respondents.

Survey question mistake #7: Not asking only one question at a time

There is often a temptation to ask multiple questions at once. This can cause problems for respondents and influence their responses.

Review each question and make sure it asks only one clear question.

Example: What is the fastest and most economical internet service for you?

This is really asking two questions. The fastest is often not the most economical.

Example: How likely are you to go out for dinner and a movie this weekend?

Even though “dinner and a movie” is a common term, this is two questions as well. It is best to separate the activities into different questions or give respondents options that cover each combination:

  • Dinner and movie
  • Dinner only
  • Movie only
  • Neither

5 more tips on how to write a survey

Here are 5 easy ways to help ensure your survey results are unbiased and actionable.

1. Use the Funnel Technique

Structure your questionnaire using the “funnel” technique. Start with broad, general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey before giving them a challenge. Place the most difficult questions in the middle – those that take time to think about and those that are of less general interest. At the end, return to general questions that are easier to answer and of broad interest and application. Typically, these last questions include demographic and other classification questions.

2. Use “Ringer” questions

In social settings, are you more introverted or more extroverted?

That was a ringer question and its purpose was to recapture your attention if you happened to lose focus earlier in this article.

Questionnaires often include “ringer” or “throw away” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey. While these questions will definitely spice up a boring survey, they require valuable space that could be devoted to the main topic of interest. Use this type of question sparingly.

3. Keep your questionnaire short

Questionnaires should be kept short and to the point. Most long surveys are not completed, and the ones that are completed are often answered hastily. A quick look at a survey containing page after page of boring questions produces a response of, “there is no way I’m going to complete this thing”. If a questionnaire is long, the person must either be very interested in the topic, an employee, or paid for their time. Web surveys have some advantages because the respondent often can't view all of the survey questions at once. However, if your survey's navigation sends them page after page of questions, your response rate will drop off dramatically.

How long is too long? The sweet spot is to keep the survey to less than five minutes, which translates into about 15 questions. The average respondent can complete about three multiple choice questions per minute, and an open-ended text response question counts for about three multiple choice questions, depending, of course, on the difficulty of the question. While only a rule of thumb, this formula gives a reliable estimate of your survey’s limits.
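That arithmetic is easy to sanity-check in code. A minimal sketch of the rule of thumb (the function name and question counts are ours, not part of any survey tool):

```python
# Rule of thumb from above: respondents answer ~3 multiple choice questions
# per minute, and one open-ended question "costs" about 3 multiple choice ones.

def estimated_minutes(multiple_choice: int, open_ended: int) -> float:
    """Rough completion time in minutes for a survey."""
    effective_questions = multiple_choice + 3 * open_ended
    return effective_questions / 3

# 12 multiple choice questions + 1 open-ended question:
print(f"{estimated_minutes(12, 1):.1f} minutes")  # (12 + 3) / 3 = 5.0 minutes
```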

4. Watch your writing style

The best survey questions are always easy to read and understand. As a rule of thumb, the level of sophistication in your survey writing should be at the 9th to 11th grade level. Don’t use big words. Use simple sentences and simple choices for the answers. Simplicity is always best.

5. Use randomization

We know that being listed first on an election ballot increases a candidate’s chance of being elected. Similar bias occurs in questionnaires when the same answer option appears at the top of the list for every respondent. Randomization corrects this bias by rotating the order of answer options in multiple choice and matrix questions for each respondent.
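In practice, survey tools implement this by shuffling the option list independently for each respondent, usually pinning options like “Other” to the bottom. A minimal sketch of that behavior (the option list and function are hypothetical):

```python
import random

def randomized_options(options, pinned=("Other (please specify)",)):
    """Return answer options in a fresh random order, keeping pinned options last."""
    shuffled = [opt for opt in options if opt not in pinned]
    random.shuffle(shuffled)  # a new order for each respondent
    return shuffled + [opt for opt in options if opt in pinned]

options = ["Price", "Taste", "Convenience", "Health", "Other (please specify)"]
print(randomized_options(options))
```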

While not exhaustive, these seven mistakes are the most common offenders when writing survey questions, and the five tips above should steer you in the right direction.

Focus on creating clear questions and having an understandable, appropriate, and complete set of answer choices. Great questions and great answer choices lead to great research success. To learn more about survey question design, download our eBook, The Qualtrics survey template guide, or get started with a free survey account with our world-class survey software.

Sarah Fisher


Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes. Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyze the survey results
  • Step 6: Write up the survey results
  • Other interesting articles
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: investigating the experiences and characteristics of different social groups
  • Market research: finding out what customers think about products, services, and companies
  • Health research: collecting data from patients about symptoms and treatments
  • Politics: measuring public opinion about parties and policies
  • Psychology: researching personality traits, preferences and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and in longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias. The presence of these biases has serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias, nonresponse bias, undercoverage bias, and survivorship bias.
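To see what an online sample size calculator typically does under the hood, here is Cochran’s formula with a finite population correction. This is a sketch for a simple random sample, not a substitute for a proper power analysis:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Approximate sample size: Cochran's formula plus finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                # correct for finite population
    return math.ceil(n)

# A population of 20,000 at 95% confidence (z = 1.96) and a ±5% margin of error:
print(sample_size(20_000))  # 377 responses
```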

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by mail, online or in person, and respondents fill it out themselves.
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Mail surveys

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and at risk for biases like self-selection bias.

Online surveys

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias.

In-person surveys

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk for sampling bias.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree)
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk for biases like social desirability bias, the Hawthorne effect, or demand characteristics. It’s critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you’d prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

Step 5: Analyze the survey results

There are many methods of analyzing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analyzing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
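As an illustration of the processing and cleaning step, here is a minimal sketch in Python with pandas; the file name and column names are hypothetical:

```python
import pandas as pd

# Load raw responses (hypothetical file and columns).
df = pd.read_csv("survey_responses.csv")

# Clean: drop incomplete rows and responses outside the valid 1-5 rating range.
df = df.dropna(subset=["age_group", "satisfaction"])
df = df[df["satisfaction"].between(1, 5)]

# Analyze: response counts and mean satisfaction by age group.
summary = df.groupby("age_group")["satisfaction"].agg(["count", "mean"])
print(summary.round(2))
```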

Step 6: Write up the survey results

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion, you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about surveys

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.
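To make the scoring concrete, here is a sketch of how individual Likert items are typically combined into an overall scale score, with negatively worded items reverse-coded first (the item names are hypothetical):

```python
# One respondent's answers to four 5-point Likert items (1 = strongly disagree).
responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 4}
reverse_coded = {"q3"}  # q3 is negatively worded, so its score is flipped

def scale_score(answers: dict, reversed_items: set, points: int = 5) -> int:
    """Sum item scores, flipping reverse-coded items (1<->5, 2<->4, ...)."""
    return sum((points + 1 - s) if item in reversed_items else s
               for item, s in answers.items())

print(scale_score(responses, reverse_coded))  # 4 + 5 + 4 + 4 = 17 out of 20
```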

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)



Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the American Trends Panel (ATP).

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized so that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
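A standard way to check whether the two forms really produced different answers is a chi-square test on the two response distributions. A sketch with made-up counts (this is our illustration, not the Center’s analysis code):

```python
from scipy.stats import chi2_contingency

# Hypothetical favor/oppose counts under two wordings,
# each asked of a random half of the sample.
form_a = [340, 160]  # wording A: favor, oppose
form_b = [290, 210]  # wording B: favor, oppose

chi2, p_value, dof, _ = chi2_contingency([form_a, form_b])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value means the wording difference, not chance, explains the gap.
```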


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Survey questions 101: 70+ survey question examples, types of surveys, and FAQs

How well do you understand your prospects and customers—who they are, what keeps them awake at night, and what brought them to your business in search of a solution? Asking the right survey questions at the right point in their customer journey is the most effective way to put yourself in your customers’ shoes.


This comprehensive intro to survey questions contains over 70 examples of effective questions, an overview of different types of survey questions, and advice on how to word them for maximum effect. Plus, we’ll toss in our pre-built survey templates, expert survey insights, and tips to make the most of AI for Surveys in Hotjar. ✨

Surveying your users is the simplest way to understand their pain points, needs, and motivations. But first, you need to know how to set up surveys that give you the answers you—and your business—truly need. Impactful surveys start here:

❓ The main types of survey questions: most survey questions are classified as open-ended, closed-ended, nominal, Likert scale, rating scale, and yes/no. The best surveys often use a combination of questions.

💡 70+ good survey question examples: our top 70+ survey questions, categorized across ecommerce, SaaS, and publishing, will help you find answers to your business’s most burning questions.

✅ What makes a survey question ‘good’: a good survey question is anything that helps you get clear insights and business-critical information about your customers.

❌ The dos and don’ts of writing good survey questions: remember to be concise and polite, use the foot-in-the-door principle, alternate questions, and test your surveys. But don’t ask leading or loaded questions, overwhelm respondents with too many questions, or neglect other tools that can get you the answers you need.

👍 How to run your surveys the right way: use a versatile survey tool like Hotjar Surveys that allows you to create on-site surveys at specific points in the customer journey or send surveys via a link.

🛠️ 10 use cases for good survey questions: use your survey insights to create user personas, understand pain points, measure product-market fit, get valuable testimonials, measure customer satisfaction, and more.

Use Hotjar to build your survey and get the customer insight you need to grow your business.

6 main types of survey questions

Let’s dive into our list of survey question examples, starting with a breakdown of the six main categories your questions will fall into:

Open-ended questions

Closed-ended questions

Nominal questions

Likert scale questions

Rating scale questions

'Yes' or 'no' questions

1. Open-ended survey questions

Open-ended questions give your respondents the freedom to answer in their own words, instead of limiting their response to a set of pre-selected choices (such as multiple-choice answers, yes/no answers, 0–10 ratings, etc.).

Examples of open-ended questions:

What other products would you like to see us offer?

If you could change just one thing about our product, what would it be?

When to use open-ended questions in a survey

The majority of example questions included in this post are open-ended, and there are some good reasons for that:

Open-ended questions help you learn about customer needs you didn’t know existed, and they shine a light on areas for improvement that you may not have considered before. If you limit your respondents’ answers, you risk cutting yourself off from key insights.

Open-ended questions are very useful when you first begin surveying your customers and collecting their feedback. If you don't yet have a good amount of insight, answers to open-ended questions will go a long way toward educating you about who your customers are and what they're looking for.

There are, however, a few downsides to open-ended questions:

First, people tend to be less likely to respond to open-ended questions in general because they take comparatively more effort to answer than, say, a yes/no one

Second, but connected: if you ask consecutive open-ended questions during your survey, people will get tired of answering them, and their answers might become less helpful the more you ask

Finally, the data you receive from open-ended questions will take longer to analyze compared to easy 1-5 or yes/no answers—but don’t let that stop you. There are plenty of shortcuts that make it easier than it looks (we explain it all in our post about how to analyze open-ended questions, which includes a free analysis template).

💡 Pro tip: if you’re using Hotjar Surveys, let our AI for Surveys feature analyze your open-ended survey responses for you. Hotjar AI reviews all your survey responses and provides an automated summary report of key findings, including supporting quotes and actionable recommendations for next steps.

2. Closed-ended survey questions

Closed-ended questions limit a user’s response options to a set of pre-selected choices. This broad category of questions includes:

Nominal questions

Likert scale questions

Rating scale questions

‘Yes’ or ‘no’ questions

When to use closed-ended questions

Closed-ended questions work brilliantly in two scenarios:

To open a survey, because they require little time and effort and are therefore easy for people to answer. This is called the foot-in-the-door principle: once someone commits to answering the first question, they may be more likely to answer the open-ended questions that follow.

When you need to create graphs and trends based on people’s answers. Responses to closed-ended questions are easy to measure and use as benchmarks. Rating scale questions, in particular (e.g. where people rate customer service on a scale of 1-10), allow you to gather customer sentiment and compare your progress over time.

3. Nominal questions

A nominal question is a type of survey question that presents people with multiple answer choices; the answers are non-numerical in nature and don't overlap (unless you include an ‘all of the above’ option).

Example of nominal question:

What are you using [product name] for?

Personal use

Both business and personal use

When to use nominal questions

Nominal questions work well when there is a limited number of categories for a given question (see the example above). They’re easy to create graphs and trends from, but the downside is that you may not offer enough categories for every respondent to answer accurately.

For example, if you ask people what type of browser they’re using and only give them three options to choose from, you may inadvertently alienate everybody who uses a fourth type and now can’t tell you about it.

That said, you can add an open-ended component to a nominal question with an expandable ’other’ category, where respondents can write in an answer that isn’t on the list. This way, you essentially ask an open-ended question that doesn’t limit them to the options you’ve picked.

4. Likert scale questions

The Likert scale is typically a 5- or 7-point scale that evaluates a respondent’s level of agreement with a statement or the intensity of their reaction toward something.

The scale develops symmetrically: the median number (e.g. a 3 on a 5-point scale) indicates a point of neutrality, the lowest number (always 1) indicates an extreme view, and the highest number (e.g. a 5 on a 5-point scale) indicates the opposite extreme view.
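If you plan to analyze Likert responses numerically, the first step is mapping each label to its position on the symmetric scale. Here's a minimal Python sketch of that scoring step; the label wording below is an assumption, since Likert items come in many phrasings:

```python
# A minimal sketch of scoring a symmetric 5-point Likert item.
# The exact labels are an assumption; real items vary in wording.
LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,  # the neutral midpoint
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [LIKERT_5[r] for r in responses]
print(sum(scores) / len(scores))  # 4.0, i.e. above the neutral midpoint of 3
```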

Example of a Likert scale question:

Example: The British Museum uses a Likert scale Hotjar survey to gauge visitors’ reactions to its website optimizations.

When to use Likert scale questions

Likert-type questions are also known as ordinal questions because the answers are presented in a specific order. Like other multiple-choice questions, Likert scale questions come in handy when you already have some sense of what your customers are thinking. For example, if your open-ended questions uncover a complaint about a recent change to your ordering process, you could use a Likert scale question to determine how the average user felt about the change.

A series of Likert scale questions can also be turned into a matrix question. Since they share identical response options, they are easily combined into a single matrix, which breaks up the pattern of answering one standalone question after another.

5. Rating scale questions

Rating scale questions are questions where the answers map onto a numeric scale (such as rating customer support on a scale of 1-5, or likelihood to recommend a product from 0-10).

Examples of rating questions:

How likely are you to recommend us to a friend or colleague on a scale of 0-10?

How would you rate our customer service on a scale of 1-5?

When to use rating questions

Whenever you want to assign a numerical value to your survey or visualize and compare trends, a rating question is the way to go.

A typical rating question is used to determine Net Promoter Score® (NPS®): the question asks customers to rate their likelihood of recommending products or services to their friends or colleagues, and allows you to look at the results historically and see if you're improving or getting worse. Rating questions are also used for customer satisfaction (CSAT) surveys and product reviews.
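For reference, the NPS arithmetic itself is simple: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). Here's a minimal Python sketch, with made-up sample ratings:

```python
# A minimal sketch of the standard NPS calculation from 0-10 ratings:
# promoters score 9-10, detractors 0-6, passives 7-8 count only in the total.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # ≈ 14.3
```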

When you use a rating question in a survey, be sure to explain what the scale means (e.g. 1 for ‘Poor’, 5 for ‘Amazing’). And consider adding a follow-up open-ended question to understand why the user left that score.

Example of a rating question (NPS):

Example: Hotjar's Net Promoter Score® (NPS®) survey template lets you add open-ended follow-up questions so you can understand the reasons behind users' ratings.

6. ‘Yes’ or ‘no’ questions

These dichotomous questions are super straightforward, requiring a simple ‘yes’ or ‘no’ reply.

Examples of yes/no questions:

Was this article useful? (Yes/No)

Did you find what you were looking for today? (Yes/No)

When to use ‘yes’ or ‘no’ questions

‘Yes’ and ‘no’ questions are a good way to quickly segment your respondents. For example, say you’re trying to understand what obstacles or objections prevent people from trying your product. You can place a survey on your pricing page asking people if something is stopping them, and follow up with the segment who replied ‘yes’ by asking them to elaborate further.

These questions are also effective for getting your foot in the door: a ‘yes’ or ‘no’ question requires very little effort to answer. Once a user commits to answering the first question, they tend to become more willing to answer the questions that follow, or even leave you their contact information.
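If your survey tool supports branching, the pricing-page example above boils down to a single conditional: show the open-ended follow-up only to the ‘yes’ segment. Here's a minimal sketch of that logic, with illustrative question wording:

```python
# A minimal sketch of yes/no segmentation with a conditional follow-up.
# The question wording is illustrative only.
def pricing_page_followup(answered_yes):
    """Show the open-ended follow-up only to respondents who answered 'yes'."""
    if answered_yes:
        return "What is stopping you? Please be as specific as you can."
    return None  # 'no' respondents finish the survey here

print(pricing_page_followup(True))   # follow-up question
print(pricing_page_followup(False))  # None
```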

Example: Web design agency NerdCow used Hotjar Surveys to add a yes/no survey on The Transport Library’s website, and followed it up with an open-ended question for more insights.

70+ more survey question examples

Below is a list of good survey questions, categorized across ecommerce, software as a service (SaaS), and publishing. You don't have to use them word-for-word, but hopefully, this list will spark some extra-good ideas for the surveys you’ll run immediately after reading this article. (Plus, you can create all of them with Hotjar Surveys—stick with us a little longer to find out how. 😉)

📊 9 basic demographic survey questions

Ask these questions when you want context about your respondents and target audience, so you can segment them later. Consider including demographic information questions in your survey when conducting user or market research as well. 

But don’t ask demographic questions just for the sake of it—if you're not going to use some of the data points from these sometimes sensitive questions (e.g. if gender is irrelevant to the result of your survey), move on to the ones that are truly useful for you, business-wise. 

Take a look at the selection of examples below, and keep in mind that you can convert most of them to multiple choice questions:

What is your name?

What is your age?

What is your gender?

What company do you work for?

What vertical/industry best describes your company?

What best describes your role?

In which department do you work?

What is the total number of employees in your company (including all locations where your employer operates)?

What is your company's annual revenue?

🚀 Get started: gather more info about your users with our product-market fit survey template.

👥 20+ effective customer questions

These questions are particularly recommended for ecommerce companies:

Before purchase

What information is missing or would make your decision to buy easier?

What is your biggest fear or concern about purchasing this item?

Were you able to complete the purpose of your visit today?

If you did not make a purchase today, what stopped you?

After purchase

Was there anything about this checkout process we could improve?

What was your biggest fear or concern about purchasing from us?

What persuaded you to complete the purchase of the item(s) in your cart today?

If you could no longer use [product name], what’s the one thing you would miss the most?

What’s the one thing that nearly stopped you from buying from us?

👉 Check out our 7-step guide to setting up an ecommerce post-purchase survey.

Other useful customer questions

Do you have any questions before you complete your purchase?

What other information would you like to see on this page?

What were the three main things that persuaded you to create an account today?

What nearly stopped you from creating an account today?

Which other options did you consider before choosing [product name]?

What would persuade you to use us more often?

What was your biggest challenge, frustration, or problem in finding the right [product type] online?

Please list the top three things that persuaded you to use us rather than a competitor.

Were you able to find the information you were looking for?

How satisfied are you with our support?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

How likely are you to recommend us to a friend or colleague? (NPS question)

Is there anything preventing you from purchasing at this point?

🚀 Get started: learn how satisfied customers are with our expert-built customer satisfaction and NPS survey templates.

Set up a survey in seconds

Use Hotjar's free survey templates to build virtually any type of survey, and start gathering valuable insights in moments.

🛍 30+ product survey questions

These questions are particularly recommended for SaaS companies:

Questions for new or trial users

What nearly stopped you from signing up today?

How likely are you to recommend us to a friend or colleague on a scale of 0-10? (NPS question)

Is our pricing clear? If not, what would you change?

Questions for paying customers

What convinced you to pay for this service?

What’s the one thing we are missing in [product type]?

What's one feature we can add that would make our product indispensable for you?

If you could no longer use [name of product], what’s the one thing you would miss the most?

🚀 Get started: find out what your buyers really think with our pricing plan feedback survey template.

Questions for former/churned customers

What is the main reason you're canceling your account? Please be blunt and direct.

If you could have changed one thing in [product name], what would it have been?

If you had a magic wand and could change anything in [product name], what would it be?

🚀 Get started: find out why customers churn with our free-to-use churn analysis survey template.

Other useful product questions

What were the three main things that persuaded you to sign up today?

Do you have any questions before starting a free trial?

What persuaded you to start a trial?

Was this help section useful?

Was this article useful?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

Is there anything preventing you from upgrading at this point?

Is there anything on this page that doesn't work the way you expected it to?

What could we change to make you want to continue using us?

If you did not upgrade today, what stopped you?

What's the next thing you think we should build?

How would you feel if we discontinued this feature?

What's the next feature or functionality we should build?

🚀 Get started: gather feedback on your product with our free-to-use product feedback survey template.

🖋 20+ effective questions for publishers and bloggers

Questions to help improve content

If you could change just one thing in [publication name], what would it be?

What other content would you like to see us offer?

How would you rate this article on a scale of 1–10?

If you could change anything on this page, what would you have us do?

If you did not subscribe to [publication name] today, what was it that stopped you?

🚀 Get started: find ways to improve your website copy and messaging with our content feedback survey template.

New subscriptions

What convinced you to subscribe to [publication] today?

What almost stopped you from subscribing?

What were the three main things that persuaded you to join our list today?

Cancellations

What is the main reason you're unsubscribing? Please be specific.

Other useful content-related questions

What’s the one thing we are missing in [publication name]?

What would persuade you to visit us more often?

How likely are you to recommend us to someone with similar interests? (NPS question)

What’s missing on this page?

What topics would you like to see us write about next?

How useful was this article?

What could we do to make this page more useful?

Is there anything on this site that doesn't work the way you expected it to?

What's one thing we can add that would make [publication name] indispensable for you?

If you could no longer read [publication name], what’s the one thing you would miss the most?

💡 Pro tip: do you have a general survey goal in mind, but are struggling to pin down the right questions to ask? Give Hotjar’s AI for Surveys a go and watch as it generates a survey for you in seconds with questions tailored to the exact purpose of the survey you want to run.

What makes a good survey question?

We’ve run through more than 70 of our favorite survey questions—but what is it that makes a good survey question, well, good? An effective question is anything that helps you get clear insights and business-critical information about your customers, including

Who your target market is

How you should price your products

What’s stopping people from buying from you

Why visitors leave your website

With this information, you can tailor your website, products, landing pages, and messaging to improve the user experience and, ultimately, maximize conversions .

How to write good survey questions: the DOs and DON’Ts

To help you understand the basics and avoid some rookie mistakes, we asked a few experts to give us their thoughts on what makes a good and effective survey question.

Survey question DOs

✅ DO focus your questions on the customer

It may be tempting to focus on your company or products, but it’s usually more effective to put the focus back on the customer. Get to know their needs, drivers, pain points, and barriers to purchase by asking about their experience. That’s what you’re after: you want to know what it’s like inside their heads and how they feel when they use your website and products.

Rather than asking, “Why did you buy our product?” ask, “What was happening in your life that led you to search for this solution?” Instead of asking, “What's the one feature you love about [product],” ask, “If our company were to close tomorrow, what would be the one thing you’d miss the most?” These types of surveys have helped me double and triple my clients.

✅ DO be polite and concise (without skimping on micro-copy)

Put time into your micro-copy—those tiny bits of written content that go into surveys. Explain why you’re asking the questions, and when people reach the end of the survey, remember to thank them for their time. After all, they’re giving you free labor!

✅ DO consider the foot-in-the-door principle

One way to increase your response rate is to ask an easy question upfront, such as a ‘yes’ or ‘no’ question, because once people commit to taking a survey—even just the first question—they’re more likely to finish it.

✅ DO consider asking your questions from the first-person perspective

Disclaimer: we don’t do this here at Hotjar. You’ll notice all our sample questions are listed in second-person (i.e. ‘you’ format), but it’s worth testing to determine which approach gives you better answers. Some experts prefer the first-person approach (i.e. ‘I’ format) because they believe it encourages users to talk about themselves—but only you can decide which approach works best for your business.

I strongly recommend that the questions be worded in the first person. This helps create a more visceral reaction from people and encourages them to tell stories from their actual experiences, rather than making up hypothetical scenarios. For example, here’s a similar question, asked two ways: “What do you think is the hardest thing about creating a UX portfolio?” versus “My biggest problem with creating my UX portfolio is…” 

The second version helps get people thinking about their experiences. The best survey responses come from respondents who provide personal accounts of past events that give us specific and real insight into their lives.

✅ DO alternate your questions often

Shake up the questions you ask on a regular basis. Asking a wide variety of questions will help you and your team get a complete view of what your customers are thinking.

✅ DO test your surveys before sending them out

A few years ago, Hotjar created a survey we sent to 2,000 CX professionals via email. Before officially sending it out, we wanted to make sure the questions really worked. 

We decided to test them out on internal staff and external people by sending out three rounds of test surveys to 100 respondents each time. Their feedback helped us perfect the questions and clear up any confusing language.

Survey question DON’Ts

❌ DON’T ask closed-ended questions if you’ve never done research before

If you’ve just begun asking questions, make them open-ended questions since you have no idea what your customers think about you at this stage. When you limit their answers, you just reinforce your own assumptions.

There are two exceptions to this rule:

Using a closed-ended question to get your foot in the door at the beginning of a survey

Using rating scale questions to gather customer sentiment (like an NPS survey)

❌ DON’T ask a lot of questions if you’re just getting started

Having to answer too many questions can overwhelm your users. Stick with the most important points and discard the rest.

Try starting off with a single question to see how your audience responds, then move on to two questions once you feel like you know what you’re doing.

How many questions should you ask? There’s really no perfect answer, but we recommend asking as few as you need to ask to get the information you want. In the beginning, focus on the big things:

Who are your users?

What do potential customers want?

How are they using your product?

What would win their loyalty?

❌ DON’T just ask a question when you can combine it with other tools

Don’t just use surveys to answer questions that other tools (such as analytics) can also answer. If you want to learn whether people find a new website feature helpful, you can also observe how they’re using it through traditional analytics, session recordings, and other user testing tools for a more complete picture.

Don’t use surveys to ask people questions that other tools are better equipped to answer. I’m thinking of questions like “What do you think of the search feature?” with pre-set answer options like ‘Very easy to use,’ ‘Easy to use,’ etc. That’s not a good question to ask. 

Why should you care about what people ‘think’ about the search feature? You should find out whether it helps people find what they need and whether it helps drive conversions for you. Analytics, user session recordings, and user testing can tell you whether it does that or not.

❌ DON’T ask leading questions

A leading question is one that prompts a specific answer. Avoid asking leading questions because they’ll give you bad data. For example, asking, “What makes our product better than our competitors’ products?” might boost your self-esteem, but it won’t get you good information. Why? You’re effectively planting the idea that your own product is the best on the market.

❌ DON’T ask loaded questions

A loaded question is similar to a leading question, but it does more than just push a bias—it phrases the question such that it’s impossible to answer without confirming an underlying assumption.

A common (and subtle) form of loaded survey question would be, “What do you find useful about this article?” If we haven’t first asked you whether you found the article useful at all, then we’re asking a loaded question.

❌ DON’T ask about more than one topic at once

For example, “Do you believe our product can help you increase sales and improve cross-collaboration?”

This complex question, also known as a ‘double-barreled question’, requires a complex answer because it forces the respondent to address two separate questions at once:

Do you believe our product can help you increase sales?

Do you believe our product can help you improve cross-collaboration?

Respondents may very well answer 'yes' but mean it only for the first part of the question, not the second. The result? Your survey data is inaccurate, and you’ve missed out on actionable insights.

Instead, ask two specific questions to gather customer feedback on each concept.

How to run your surveys

The format you pick for your survey depends on what you want to achieve and also on how much budget or resources you have. You can

Use an on-site survey tool, like Hotjar Surveys, to set up a website survey that pops up whenever people visit a specific page: this is useful when you want to investigate website- and product-specific topics quickly. This format is relatively inexpensive—with Hotjar’s free forever plan, you can even run up to 3 surveys with unlimited questions for free.


Use Hotjar Surveys to embed a survey as an element directly on a page: this is useful when you want to grab your audience’s attention and connect with customers at relevant moments, without interrupting their browsing. (Scroll to the bottom of this page to see an embedded survey in action!) This format is included on Hotjar’s Business and Scale plans—try it out for 15 days with a free Ask Business trial.

Use a survey builder and create a survey people can access in their own time: this is useful when you want to reach out to your mailing list or a wider audience with an email survey (you just need to share the URL the survey lives at). Sending in-depth questionnaires this way allows for more space for people to elaborate on their answers. This format is also relatively inexpensive, depending on the tool you use.

Place survey kiosks in a physical location where people can give their feedback by pressing a button: this is useful for quick feedback on specific aspects of a customer's experience (there are usually plenty of these in airports and waiting rooms). This format is relatively expensive to maintain due to the material upkeep.

Run in-person surveys with your existing or prospective customers: in-person questionnaires help you dig deep into your interviewees’ answers. This format is relatively cheap if you do it online with a user interview tool or over the phone, but it’s more expensive and time-consuming if done in a physical location.

💡 Pro tip: looking for an easy, cost-efficient way to connect with your users? Run effortless, automated user interviews with Engage, Hotjar’s user interview tool. Get instant access to a pool of 200,000+ participants (or invite your own), and take notes while Engage records and transcribes your interview.

10 survey use cases: what you can do with good survey questions

Effective survey questions can help improve your business in many different ways. We’ve written in detail about most of these ideas in other blog posts, so we’ve rounded them up for you below.

1. Create user personas

A user persona is a character based on the people who currently use your website or product. A persona combines psychographics and demographics and reflects who they are, what they need, and what may stop them from getting it.

Examples of questions to ask:

Describe yourself in one sentence, e.g. “I am a 30-year-old marketer based in Dublin who enjoys writing articles about user personas.”

What is your main goal for using this website/product?

What, if anything, is preventing you from doing it?

👉 Our post about creating simple and effective user personas in four steps highlights some great survey questions to ask when creating a user persona.

🚀 Get started: use our user persona survey template or AI for Surveys to inform your user persona.

2. Understand why your product is not selling

Few things are more frightening than stagnant sales. When the pressure is mounting, you’ve got to get to the bottom of it, and good survey questions can help you do just that.

What made you buy the product? What challenges are you trying to solve?

What did you like most about the product? What did you dislike the most?

What nearly stopped you from buying?

👉 Here’s a detailed piece about the best survey questions to ask your customers when your product isn’t selling, and why they work so well.

🚀 Get started: our product feedback survey template helps you find out whether your product satisfies your users. Or build your surveys in the blink of an eye with Hotjar AI.

3. Understand why people leave your website

If you want to figure out why people are leaving your website, you’ll have to ask questions.

A good format for that is an exit-intent pop-up survey, which appears when a user clicks to leave the page, giving them the chance to leave website feedback before they go.

Another way is to focus on the people who did convert, but just barely—something Hotjar founder David Darmanin considers essential for taking conversions to the next level. By focusing on customers who bought your product (but almost didn’t), you can learn how to win over another set of users who are similar to them: those who almost bought your products, but backed out in the end.

Example of questions to ask:

Not for you? Tell us why. (Exit-intent pop-up—ask this when a user leaves without buying.)

What almost stopped you from buying? (Ask this post-conversion.)

👉 Find out how HubSpot Academy increased its conversion rate by adding an exit-intent survey that asked one simple question when users left their website: “Not for you? Tell us why.”

🚀 Get started: place an exit-intent survey on your site. Let Hotjar AI draft the survey questions by telling it what you want to learn.

I spent the better half of my career focusing on the 95% who don’t convert, but it’s better to focus on the 5% who do. Get to know them really well, deliver value to them, and really wow them. That’s how you’re going to take that 5% to 10%.

4. Understand your customers’ fears and concerns

Buying a new product can be scary: nobody wants to make a bad purchase. Your job is to address your prospective customers’ concerns, counter their objections, and calm their fears, which should lead to more conversions.

👉 Take a look at our no-nonsense guide to increasing conversions for a comprehensive write-up about discovering the drivers, barriers, and hooks that lead people to convert on your website.

🚀 Get started: understand why your users are tempted to leave and discover potential barriers with a customer retention survey.

5. Drive your pricing strategy

Are your products overpriced and scaring away potential buyers? Or are you underpricing and leaving money on the table?

Asking the right questions will help you develop a pricing structure that maximizes profit, but you have to be delicate about how you ask. Don’t ask directly about price, or you’ll seem unsure of the value you offer. Instead, ask questions that uncover how your products serve your customers and what would inspire them to buy more.

How do you use our product/service?

What would persuade you to use our product more often?

What’s the one thing our product is missing?

👉 We wrote a series of blog posts about managing the early stage of a SaaS startup, which included a post about developing the right pricing strategy—something businesses in all sectors could benefit from.

🚀 Get started: find the sweet spot in how to price your product or service with a Van Westendorp price sensitivity survey, or get feedback on your pricing plan.

6. Measure and understand product-market fit

Product-market fit (PMF) is about understanding demand and creating a product that your customers want, need, and will actually pay money for. A combination of online survey questions and one-on-one interviews can help you figure this out.

What's one thing we can add that would make [product name] indispensable for you?

If you could change just one thing in [product name], what would it be?

👉 In our series of blog posts about managing the early stage of a SaaS startup, we covered a section on product-market fit, which has relevant information for all industries.

🚀 Get started: discover if you’re delivering the best products to your market with our product-market fit survey.

7. Choose effective testimonials

Human beings are social creatures—we’re influenced by people who are similar to us. Testimonials that explain how your product solved a problem for someone are the ultimate form of social proof. The following survey questions can help you get some great testimonials.

What changed for you after you got our product?

How does our product help you get your job done?

How would you feel if you couldn’t use our product anymore?

👉 In our post about positioning and branding your products, we cover the type of questions that help you get effective testimonials.

🚀 Get started: add a question asking respondents whether you can use their answers as testimonials in your surveys, or conduct user interviews to gather quotes from your users.

8. Measure customer satisfaction

It’s important to continually track your overall customer satisfaction so you can address any issues before they start to impact your brand’s reputation. You can do this with rating scale questions.

For example, at Hotjar, we ask for feedback after each customer support interaction (which is one important measure of customer satisfaction). We begin with a simple, foot-in-the-door question to encourage a response, and use the information to improve our customer support, which is strongly tied to overall customer satisfaction.

How would you rate the support you received? (1-5 scale)

If 1-3: How could we improve?

If 4-5: What did you love about the experience?

👉 Our beginner’s guide to website feedback goes into great detail about how to measure customer service, NPS, and other important success metrics.

🚀 Get started: gauge short-term satisfaction levels with a CSAT survey.

9. Measure word-of-mouth recommendations

Net Promoter Score is a measure of how likely your customers are to recommend your products or services to their friends or colleagues. NPS is a higher bar than customer satisfaction because customers have to be really impressed with your product to recommend you.

Example of NPS questions (to be asked in the same survey):

How likely are you to recommend this company to a friend or colleague? (0-10 scale)

What’s the main reason for your score?

What should we do to WOW you?

👉 We created an NPS guide with ecommerce companies in mind, but it has plenty of information that will help companies in other industries as well.

🚀 Get started: measure whether your users would refer you to a friend or colleague with an NPS survey. Then, use our free NPS calculator to crunch the numbers.

10. Redefine your messaging

How effective is your messaging? Does it speak to your clients' needs, drives, and fears? Does it speak to your strongest selling points?

Asking the right survey questions can help you figure out what marketing messages work best, so you can double down on them.

What attracted you to [brand or product name]?

Did you have any concerns before buying [product name]?

Since you purchased [product name], what has been the biggest benefit to you?

If you could describe [brand or product name] in one sentence, what would you say?

What is your favorite thing about [brand or product name]?

How likely are you to recommend this product to a friend or colleague? (NPS question)

👉 We talk about positioning and branding your products in a post that’s part of a series written for SaaS startups, but even if you’re not in SaaS (or you’re not a startup), you’ll still find it helpful.

Have a question for your customers? Ask!

Feedback is at the heart of deeper empathy for your customers and a more holistic understanding of their behaviors and motivations. And luckily, people are more than ready to share their thoughts about your business—they're just waiting for you to ask them. Deeper customer insights start right here, with a simple tool like Hotjar Surveys.

Build surveys faster with AI🔥

Use AI in Hotjar Surveys to build your survey, place it on your website or send it via email, and get the customer insight you need to grow your business.

FAQs about survey questions

How many people should I survey? What should my sample size be?

A good rule of thumb is to aim for at least 100 replies that you can work with.

You can use our sample size calculator to get a more precise answer, but understand that collecting feedback is research, not experimentation. Unlike experimentation (such as A/B testing), all is not lost if you can’t get a statistically significant sample size. In fact, as few as ten replies can give you actionable information about what your users want.
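If you do want a statistically grounded target, a common starting point is Cochran's formula with a finite population correction. Here's a minimal Python sketch, assuming the typical defaults of a 95% confidence level (z = 1.96) and a 5% margin of error:

```python
# A minimal sketch of Cochran's sample-size formula with a finite population
# correction. Defaults assume 95% confidence (z = 1.96) and a 5% margin of error.
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

print(sample_size(10_000))  # about 370 respondents
```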

How many questions should my survey have?

There’s no perfect answer to this question, but we recommend asking as few as you need to ask in order to get the information you want. Remember, you’re essentially asking someone to work for free, so be respectful of their time.

Why is it important to ask good survey questions?

A good survey question is asked in a precise way at the right stage in the customer journey to give you insight into your customers’ needs and drives. The qualitative data you get from survey responses can supplement the insight you can capture through other traditional analytics tools (think Google Analytics) and behavior analytics tools (think heatmaps and session recordings, which visualize user behavior on specific pages or across an entire website).

The format you choose for your survey—in-person, email, on-page, etc.—is important, but if the questions themselves are poorly worded, you could waste hours trying to fix minimal problems while ignoring major ones a different question could have uncovered.

How do I analyze open-ended survey questions?

A big pile of qualitative data can seem intimidating, but there are some shortcuts that make it much easier to analyze. We put together a guide for analyzing open-ended questions in 5 simple steps, which should answer all your questions.

But the fastest way to analyze open questions is to use the automated summary report with Hotjar AI in Surveys. AI turns complex survey data into:

Key findings

Actionable insights

Will sending a survey annoy my customers?

Honestly, the real danger is not collecting feedback. Without knowing what users think about your page and why they do what they do, you’ll never create a user experience that maximizes conversions. The truth is, you’re probably already doing something that bugs them more than any survey or feedback button would.

If you’re worried that adding an on-page survey might hurt your conversion rate, start small and survey just 10% of your visitors. You can stop surveying once you have enough replies.



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First, you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
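As an illustration, the cleansing step might look like the following Python sketch using pandas; the file and column names are placeholders for your own data:

```python
# A minimal sketch of cleansing survey data with pandas.
# The file name and column names are placeholders.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Drop responses where required questions were left blank
df = df.dropna(subset=["q1_satisfaction", "q2_recommend"])

# Drop responses outside the valid range of a 0-10 rating item
df = df[df["q2_recommend"].between(0, 10)]

print(len(df), "usable responses")
```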

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
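A simple first pass at coding can be automated with keyword matching before the categories are refined by hand. The following Python sketch illustrates the idea; the themes and keywords are purely illustrative:

```python
# A minimal sketch of first-pass coding for open-ended answers via keyword
# matching. The themes and keywords are illustrative; labels should be
# reviewed (and refined) by a human coder.
THEMES = {
    "pricing": ["price", "expensive", "cost", "cheap"],
    "usability": ["confusing", "easy", "hard to use", "intuitive"],
    "support": ["support", "help", "response time"],
}

def code_response(text):
    text = text.lower()
    labels = [theme for theme, keywords in THEMES.items()
              if any(k in text for k in keywords)]
    return labels or ["uncategorised"]

print(code_response("Great product but far too expensive for a small team"))
# ['pricing']
```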

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

