12. Survey design

Chapter Outline

  1. What is survey research? (15 minute read time)
  2. Conducting a survey (18 minute read time)
  3. Creating a questionnaire (16 minute read time)
  4. Strengths and challenges of survey research (11 minute read time)

Content warning: examples in this chapter contain references to racial inequity, mental health treatment/symptoms/diagnosis, sex work, burnout and compassion fatigue, involuntary hospitalization, terrorism, religious beliefs and attitudes, drug use, physical (chronic) pain, workplace experience and discrimination.

12.1 What is survey research?

Learning Objectives

Learners will be able to…

  • Demonstrate an understanding of survey research as a type of research design
  • Think about the potential uses of survey research in their student research project

Surveys are a type of design

Congratulations! Your knowledge of social work research has evolved. You have learned new terminology and the processes needed to develop good questions and to select the best measurement tools to answer your questions. Now, we will transition to a discussion of research design.

We are in Part 3: Using quantitative methods of this research text; therefore, the first designs we will discuss are those that focus on collecting data for quantitative analysis. The first of these is survey design. Note: even though survey design is featured in the quantitative methods section of this text, survey research may also be used to collect qualitative data or a combination of qualitative and quantitative data. Part 4: Qualitative Methods, which begins about six chapters from now, will provide a more detailed focus on collecting qualitative data.

So, what do we mean when we use the term “research design?” When we think of research designs, we are thinking about an overall strategy or approach used to conduct research projects.[1] This chapter discusses survey design which involves strategies for conducting research that utilize a set of questions (contained in a questionnaire) to gain specific information from participants about their opinions, perceptions, reactions, knowledge, beliefs, values, or behaviors.


Photo by Hampton Lamoureux

Caution: It is important to preface this chapter with a statement about the distinction between a questionnaire and survey design. Many people use these terms interchangeably; however, they are quite different. The term “survey” denotes the overall research strategy or approach: asking questions, collecting responses, and using tools to analyze the resulting data.[2] Conversely, a questionnaire is the actual tool used to collect the data. So, in essence, researchers use a questionnaire to engage in survey research. This chapter will teach you how to employ a research approach that uses questionnaires to collect information.

The good news is that we have all been exposed to survey research. At the end of the semester when you complete your course evaluations, you are engaging in survey research. If you have ever completed any type of satisfaction questionnaire, you have completed survey research. In fact, every ten years, households across the United States are asked to participate in a large-scale data collection effort conducted by the United States Census Bureau. So, survey research is widespread and familiar to many people, even those who do not have a formal understanding of research terminology.

This section further defines elements of survey research and provides an overview of the characteristics that distinguish survey research from other types of research. As you read this section, please think about your research project and how survey research might be used to help you answer your research question.


Photo by andibreit



Survey research is frequently employed by social work researchers because we often seek to develop an understanding of how groups of people, communities, organizations, and populations feel about a certain topic. Social workers might seek to gather survey data from:

  • Neighborhood residents
  • People who possess certain characteristics or experiences
  • Family members or people affected by a particular condition or experience
  • Staff at an agency
  • Service recipients
  • The general public
  • People with specialized knowledge in a given area
  • Members of an organization or group

As you think about your research topic, you will likely select one (or maybe two) of these viewpoints to survey as you collect your data. However, it can be helpful to think about how these various perspectives might contribute to research in your given area. As a thought activity, try to fill out as many examples as you can of who you might consider collecting survey data from for your topic.

For example, suppose I am interested in researching the topic of perceptions of racial inequity.

  • Neighborhood residents: I could survey two different neighborhoods, one that is more racially diverse and one that is more racially similar (homogeneous)
  • People who possess certain characteristics or experiences: I could specifically survey people who are part of an interracial family
  • Family members or people affected by a particular condition or experience: I could survey people who have a loved one that has been incarcerated 
  • Staff at an agency: I could survey staff from agencies that serve predominately communities of color, but where the agency staff makeup is predominately white
  • Service recipients: I could survey service recipients from agencies that serve predominately communities of color, but where the agency staff makeup is predominately white
  • The general public: I could survey people at a large local shopping mall 
  • People with specialized knowledge in a given area: I could survey state legislators   
  • Members of an organization or group: I could survey members of racial justice advocacy organizations 

These are just a small sample of groups that could be surveyed. For each category, we could go in many different directions with many perspectives that can make valuable contributions to this topic.  That is what makes research so exciting…the possibilities are limitless!

Characteristics of survey research

Quite simply, survey research is a type of research design that has two important characteristics. First, the variables of interest are measured using self-reports. These self-reports are gathered by questionnaires, either completed independently by a participant or administered by a member of a research team. Researchers ask their participants, the people who have opted to participate in the research, to report directly on their own thoughts, feelings, and behaviors. Second, often survey research is conducted to understand something about a larger population; remember, this is known as generalizing results. Consequently, considerable attention is paid to the type of sampling and the number of cases used. In general, researchers using a survey design have a preference for large randomly selected samples because they provide the most accurate estimates of what is true in the population.
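The preference for large randomly selected samples can be illustrated with a minimal sketch. The sampling frame below is entirely hypothetical (invented names and sizes, not drawn from any real study); the point is simply that in a simple random sample every member of the frame has an equal chance of selection, which is what supports generalizing results to the population:

```python
import random

# Hypothetical sampling frame: a list identifying everyone in the
# population of interest (here, 10,000 made-up residents).
sampling_frame = [f"resident_{i}" for i in range(10_000)]

# Draw a simple random sample without replacement. A fixed seed
# makes this particular draw reproducible for the example.
random.seed(42)
sample = random.sample(sampling_frame, k=400)

print(len(sample))       # 400 participants selected
print(len(set(sample)))  # 400: no duplicates, since we sampled without replacement
```

In practice, the hard part for social work researchers is not this selection step but obtaining the sampling frame itself, as discussed later in this section.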

In previous chapters, we learned about the purposes of research (exploratory, descriptive, and explanatory). Survey research can be used for all of these types of research; however, it may be a little challenging to use with exploratory research. Why? The purpose of exploratory research is to uncover experiences about which little is known. Therefore, you may lack the knowledge base needed to develop your questionnaire.

Survey research is best suited for studies that have individual people as the unit of analysis. However, other units of analysis, such as families, groups, organizations, or communities may also be used in survey research. If researchers use a family, group, organization, or community as the unit of analysis,  they usually denote a specific person who is identified as a key informant or a “proxy” to complete the actual research tool. Researchers must be intentional with these choices, as they may introduce measurement error if the informant chosen does not have adequate knowledge or has a biased opinion about the phenomenon of interest.

For instance, many schools of social work are very interested in the school of social work rankings that are published annually by US News and World Report. For a full description of the methodology used in this process, please visit https://www.usnews.com/education/best-colleges/articles/how-us-news-calculated-the-rankings. Many students are not aware that these rankings are actually composite scores created by analyzing a variety of data sources. One type of data used in this process is known as peer review data, or data in which schools provide feedback on their perceptions of similar schools. A questionnaire is sent to several key informants at each school. Each key informant is asked to rank the other schools of social work on a variety of dimensions. These data are then collected and combined with other indicators to calculate the school rankings. However, what if an informant is unfamiliar with a school or has a personal bias against a school? This could significantly skew results. In summary, if you are not using individuals as the unit of analysis, it is important that you choose the right key informant who is knowledgeable about the topic about which you are asking, and who can provide an unbiased perspective.
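The idea of a composite score can be sketched in a few lines. The indicator names, values, and weights below are invented for illustration; they are not US News’s actual methodology, which is described at the link above:

```python
# Hypothetical indicators for one school, each already scaled 0-5.
indicators = {
    "peer_review": 3.8,  # mean rating from key informants at other schools
    "selectivity": 4.1,
    "resources": 3.2,
}

# Hypothetical weights reflecting each indicator's importance;
# they must sum to 1.0 so the composite stays on the same 0-5 scale.
weights = {"peer_review": 0.5, "selectivity": 0.3, "resources": 0.2}

composite = sum(indicators[name] * weights[name] for name in indicators)
print(round(composite, 2))  # 3.77
```

Notice that the peer review indicator carries the largest weight in this toy example, so a biased or uninformed key informant would pull the whole composite up or down, which is exactly the measurement-error concern raised above.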

Finally, most survey research is used to describe single variables (e.g., voter preferences, motivation, or social support) and to assess statistical relationships between variables (e.g., the relationship between income and health). For instance, Nesje (2016) used a survey design to understand the relationship between profession and personality traits. The author was interested in studying the relationship between two variables, personality (empathy and care) and selected profession (social work, nursing, or education). Specifically, Nesje sought to understand whether a certain field of study had practitioners with higher levels of empathy and care than others. The author administered two tools, Blau’s Career Commitment Scale and Orlinsky and Rønnestad’s Interpersonal Adjective Scale, to 1,765 students. The results showed no statistically significant difference between the groups in levels of empathy and care.[3]

The above example illustrates several characteristics of a survey research design. Please complete the following interactive exercise to see if you can identify the characteristics of survey research design that are found in this study.

History of survey research

Survey research has roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the proliferation of social problems such as poverty (Converse, 1987).[4] By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research studying consumer preferences for American businesses turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was. Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies, which have measured opinions of Canadian voters around federal elections since 1965. Anyone can access the data and read about the results of the experiments in these studies (see http://ces-eec.arts.ubc.ca/).

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. We will discuss Likert scales later in this chapter.  Survey research has a strong historical association with the social psychological studies of attitudes, stereotypes, and prejudice. Survey research has also been used by social workers to understand a variety of conditions and experiences. 

In summary, survey research is a valuable research design, and one that may be used to study a variety of concepts. This flexibility of survey research allows it to be applied to many research projects, making it appealing for a variety of disciplines. Furthermore, its potential to gather information from a large number of people with a relatively low commitment of resources (compared to other methods) can also make it quite attractive to social science researchers.   


Photo by Craig Adderley

Survey research in social work

The above section mentioned sample size and type of sampling as important considerations for survey research. In general, many studies using survey research have the goal of generalizing findings from a sample to a population. That said, if you conduct a literature search for studies using survey research, you will find that most large survey research studies utilizing random sampling are conducted by psychologists or sponsored by large non-profit or government research organizations such as the Pew Research Center (https://www.pewresearch.org/) or the United States Census Bureau (https://www.census.gov/). For example, each year, the Pew Research Center randomly selects and interviews thousands of people in order to study a variety of social attitudes and beliefs. Additionally, every ten years, the U.S. Census Bureau implements a large-scale data collection process to understand population characteristics and changes. Both of these organizations seek to generalize sample results to the larger US population. Finally, since 1984 the Centers for Disease Control and Prevention (CDC) (https://www.cdc.gov/) has maintained the Behavioral Risk Factor Surveillance System, “the nation’s premier system of telephone surveys that collect state-level data about health risk behaviors, chronic health conditions, and use of preventive services”[5]. While often gathered by professionals in other disciplines, all of these sources of survey data can be very useful for social workers seeking to look at quantitative data across a variety of topics.

So, why are social work researchers less likely to utilize large probability sampling techniques? Due to the nature of the client systems with which we work, collecting large random samples may not be feasible. Remember that in order to utilize a probability sample, you need access to a comprehensive sampling frame. Many of the populations with which we work are “hidden” or harder to access, so securing a list of all possible cases would be challenging, if not impossible. For example, think about a researcher wanting to study sex workers operating in a certain neighborhood. The researcher may have difficulty finding a list of all of the persons engaging in sex work in that neighborhood. The researcher could look at arrest records and seek to find all sex workers with an arrest record; however, having this list does not mean that the researcher would have access to sex workers. Next, sometimes social workers want to understand individual experiences so that they can bring the perspectives of marginalized groups into the mainstream scholarly literature. These social workers may be less concerned with generalizing results and more concerned with “uncovering or discovering knowledge from oppressed groups”. For those social workers, a smaller-scale qualitative research project may be more feasible and allow the researcher to meet their goals.

As previously mentioned, social work practitioners are less likely to use large-scale probability samples, but there are situations where they do. For example, university-affiliated social work academics who have received federal grants may conduct multi-site projects. Additionally, professional organizations such as the NASW may utilize questionnaires to collect information about members’ practice experiences. Furthermore, social work researchers are often part of interdisciplinary teams that can extend resources and access to larger sampling frames.

Social work student projects and survey design research

Within social work schools, students are usually required to demonstrate their proficiency in basic research by implementing an empirical study. Many students end up implementing a project that utilizes survey design, often selected due to convenience. In addition, sometimes agencies have existing questionnaires they want students to use for their research projects. Agencies may feel more comfortable with students using survey design research instead of other designs. For example, interviewing clients may be seen as part of students’ existing responsibilities, whereas implementing an experimental or quasi-experimental design may seem more time-consuming and labor-intensive for the agency. Further, my students have found survey research projects to be interesting, intellectually rewarding, and feasible. Below is a list of past social work research projects that were conducted by second-year MSW students. Can you see how each of these studies involves students asking participants to provide information (orally or in writing) that is then analyzed?


Past Student Research Projects

  1. What is the level of interpersonal relationship satisfaction among those diagnosed with an eating disorder?
  2. Does age, gender, and/or DSM-5 diagnosis indicate the level of mental health support that clients receive?
  3. For those seen at XXX, is there a difference in IPV injury patterns by gender?
  4. Does worker burnout rate differ between departments within social service agencies?
  5. Is there a correlation between poor physical health and poor mental health functioning in college freshmen at XXX?
  6. Is there a relationship between burnout and compassion satisfaction among healthcare professionals who work in a mental health facility?
  7. Is there a difference in the levels of compassion fatigue and compassion satisfaction among the different types of direct service employees at the XXX agency?
  8. Is there a difference in the length of stay at XXX Hospital between individuals admitted voluntarily and those admitted involuntarily?
  9. What are the primary concerns that cause college students to present for services at their university’s counseling center?
  10. Does an individual’s level of stress influence treatment decisions?

Key Takeaways

  • Survey research is common and used to gather a variety of information.
  • Survey research is a design/approach, and a questionnaire is an actual tool used to collect data. While these words are often used interchangeably, they are different things.
  • Two characteristics define survey research: variables are measured using participant self-reports, and considerable attention is paid to sampling and sample size.
  • Large random samples provide the opportunity to generalize results from your sample to the population from which it was drawn; however, this is often not possible for social work researchers.
  • Successful questionnaire development takes time and requires feedback from multiple sources.


Think about your research project at this point.

  • If you are planning to use a survey for your research:
    • Why do you think this is the most appropriate way to gather data?
    • Begin thinking about how you will access your population. What are some barriers you might experience to administering a survey?
  • If you are not planning to use a survey:
    • What made you decide not to use a survey? This is not to say you should use one!
    • Are there related research questions to the one you chose that you could use a survey to answer?

12.2 Conducting a survey

Learning Objectives

Learners will be able to…

  • Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research
  • Describe the three types of longitudinal surveys
  • Describe retrospective surveys and identify their strengths and weaknesses
  • Discuss the benefits and drawbacks of the various methods of administering surveys

There is immense variety when it comes to surveys. This variety includes both how the survey is intended to reflect time and how the survey is administered or delivered to participants. In this section, we’ll look at variations across these two dimensions.


With respect to time, survey design is generally divided into two types: cross-sectional and longitudinal. Cross-sectional surveys are those that reflect responses given at just one point in time. These surveys offer researchers a snapshot in time and an idea about how things are for the respondents at the particular point in time that the survey is administered.

An example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011)[1] of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of one’s life and health. The researchers found from analysis of their cross-sectional data that anxiety and depression were highest among those who had both strong religious beliefs and some doubts about religion.

Yet another recent example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011)[2] of how the perceived ‘publicness’ of social networking sites influences users’ self-disclosures. These researchers administered an online survey to undergraduate and graduate business students to understand perceptions and behaviors on this topic. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.

One problem with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain stagnant. They change over time and may be influenced by any number of things. Thus, generalizing from a cross-sectional survey about the way things are can be tricky; perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey. Think, for example, about how Americans might have responded if they received a survey asking for their opinions on terrorism on September 12, 2000. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. The point is not that cross-sectional surveys are useless; they have many important uses. But researchers must remember what they have captured by administering a cross-sectional survey—that is, as previously noted, a snapshot of life as it was at the time that the survey was administered.

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys are those that enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time the researchers gather data, they ask different people from the group they are studying because their concern is capturing the sentiment of the group, not the individual people they survey. Let’s look at an example.

The Monitoring the Future Study (http://www.monitoringthefuture.org/) is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population changes over time. Perhaps surprisingly, fewer high school students reported using alcohol in the past month than at any point over the last 20 years. Recent data also reflected an increased use of e-cigarettes and the popularity of e-cigarettes with no nicotine over those with nicotine. These data points provide insight into targeting substance abuse prevention programs and resources. As you will note, this study is looking at general trends for this age group; it is not interested in tracking the changing attitudes or behaviors of specific students over time.

Unlike in a trend survey, in a panel survey the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, 5 years in a row. Keeping track of where people live, when they move, how to contact them and when they die, etc. takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). [3] Contrary to popular beliefs about the impact of work on adolescents’ performance in school and transition to adulthood, work in fact increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people. You can read more about the Youth Development Study at its website: https://cla.umn.edu/sociology/graduate/collaboration-opportunities/youth-development-study.

Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining characteristic that the researcher is interested in studying. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that may be of interest to researchers include people of particular generations or those who were born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common.

An example of this sort of research can be seen in Christine Percheski’s work (2008) [4] on cohort differences in women’s employment. Percheski compared women’s employment rates across seven generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. She found, among other patterns, that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003). [5]

All three types of longitudinal surveys share the strength of permitting a researcher to make observations over time. This means that if whatever behavior or other phenomenon the researcher is interested in changes, either because of some world event or because people age, the researcher will be able to capture those changes. Table 12.1 summarizes these three types of longitudinal surveys.

Table 12.1 Longitudinal survey types

  • Trend: Researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once.
  • Panel: Researcher surveys the exact same sample several times over a period of time.
  • Cohort: Researcher identifies a defining characteristic and then regularly surveys people who have that characteristic.

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the highly likely possibility that people’s recollections of their pasts may be faulty, incomplete, or slightly modified by the passage of time. Imagine, for example, that you’re asked in a survey to respond to questions about where, how, and with whom you spent last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, chances are good that you might be able to respond accurately to some survey questions about it. But now let’s say the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so she asks you to report on where, how, and with whom you spent the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been had you been asked the question each year over the past 6 years, rather than asked to report on all years today?

In sum, when or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. Furthermore, by maintaining and accessing contact information for participants over long periods of time, we are increasing the opportunities for their privacy to be compromised. The issues of time described here are not necessarily unique to survey research. Other methods of data collection can be cross-sectional or longitudinal—these are larger matters of research design that really apply to all types of research. But we’ve placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey design deals with how surveys are administered. We’ll examine that next.


Surveys vary not just in terms of the way they deal with time, but also in terms of how they are administered. One common way to administer surveys is through self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond autonomously.  These questionnaires can be hard copy or virtual. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or via snail mail. Perhaps you’ve taken a survey that was given to you in person; on many college campuses, it is not uncommon for researchers to administer surveys in large social science classes (as you might recall from the chapter on sampling). If you are ever asked to complete a survey in a similar setting, it might be interesting to note how your perspective on the survey and its questions could be shaped by the new knowledge you’re gaining about survey research in this chapter.

Researchers may also deliver surveys in person, going door-to-door or approaching people in public spaces. In these cases, they may ask people to fill out the survey on the spot, arrange to return later to pick up completed surveys, or have completed surveys dropped off or mailed (with a self-addressed stamped envelope provided) to a designated location. The advent of online survey tools and greater widespread internet access has made door-to-door and snail mail delivery of surveys much less common, although I still see an occasional survey researcher at my door, especially around election time. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

While choosing snail mail to disseminate your survey may not be ideal (imagine how much less likely you’d be to return a survey that didn’t come with the researcher standing on your doorstep waiting to take it from you), sometimes it is the only available or the most practical option. It can be difficult to convince people to take the time to complete and return a mailed survey, and mail from an unrecognized sender may be regarded with suspicion or ignored altogether.  If you are choosing to mail out your survey by post, be very thoughtful about the materials, including the envelope.  They should look professional, but also personalized whenever possible, to help engage the participant quickly.  Chances are you worked hard on your study; the last thing you want is for a potential participant to receive your survey in the mail and chuck it in the waste bin without even opening it!

Often survey researchers who deliver their surveys via snail mail may provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). [6] Other helpful tools to increase response rate are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope.

Earlier, I mentioned online delivery as another way to administer a survey. This delivery mechanism is becoming increasingly common, no doubt because it is easy to use, relatively cheap, and may be more efficient than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, the most frequent method employed by researchers is to use an online survey management service or application.  These might be paid subscription services, like SurveyMonkey (https://www.surveymonkey.com) or Qualtrics (https://www.qualtrics.com), or free applications, like Google Forms. With any of these options you will design your survey online and then be provided a link to send out to your potential participants either via email or by posting the link in a virtually accessible space, like a forum, group, or webpage.  Wherever you choose to share the link, you will need to consider how you will gain permission to do so, which may mean getting permission to use a distribution list of emails or gaining permission from a group forum administrator to post a link in the forum for members to access.

Many of the suggestions provided for improving the response rate on a hard copy questionnaire apply to online questionnaires as well. One difference of course is that the sort of incentives one can provide in an online format differ from those that can be given in person or sent through the mail. But this doesn’t mean that online survey researchers cannot offer completion incentives to their respondents. I’ve taken a number of online surveys; many of these did not come with an incentive other than the joy of knowing that I’d helped a fellow social scientist do their job. However, for participating in one survey, I was given a coupon code to use for $30 off any order at a major online retailer. I’ve taken other online surveys where on completion I could provide my name and contact information if I wished to be entered into a lottery together with other study participants to win a larger gift, such as a $50 gift card or an iPad.

Online surveys, however, may not be accessible to individuals who have limited, unreliable, or no access to the internet, or who are less skilled at using a computer. If those issues are common in your target population, online surveys may not work as well for your research study. While online surveys may be faster and cheaper than mailed surveys, mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The choice of which delivery mechanism is best depends on a number of factors, including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

Sometimes surveys are administered by having a researcher pose questions verbally to respondents, rather than having respondents read the questions on their own. Researchers using phone or in-person surveys use an interview schedule, which contains the list of questions and answer options that the researcher will read to respondents. Consistency in the way that questions and answer options are presented is very important with an interview schedule. The aim is to pose every question-and-answer option in the same way to every respondent. This is done to minimize interviewer effect, or possible changes in the way an interviewee responds based on how or when questions and answer options are presented by the interviewer. In-person surveys may be recorded, but because questions tend to be closed-ended, taking notes during the interview is less disruptive than it can be during a qualitative interview.

Interview schedules are used in phone or in-person surveys and are also called quantitative interviews. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers pose questions verbally to participants. As someone who has poor research karma, I often decline to participate in phone studies when I am called. It is easy, socially acceptable even, to hang up abruptly on an unwanted caller. Additionally, a distracted participant who is cooking dinner, tending to troublesome children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. Another challenge comes from the increasing number of people who only have cell phones and do not use landlines (Pew Research, n.d.). [7] Unlike landlines, cell phone numbers are portable across carriers, associated with individuals rather than households, and keep their area codes when people move to a new geographical area. Computer-assisted telephone interviewing (CATI) programs have also been developed to assist quantitative survey researchers. These programs allow an interviewer to enter responses directly into a computer as they are provided, thus saving hours of time that would otherwise have to be spent entering data into an analysis program by hand.
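To make that time savings concrete, here is a minimal, hypothetical sketch in Python (not any real CATI product) of how an interviewer's keyed-in answer codes might be validated against the interview schedule and written straight to a data file, skipping a separate hand-entry step. The question names, prompts, and codes are invented for illustration:

```python
import csv

# Hypothetical interview schedule: each question has a fixed prompt,
# read the same way to every respondent, and a fixed set of answer codes.
SCHEDULE = [
    ("q1_support", "Do you support the proposal? (1=Yes, 2=No, 8=Don't know)", {"1", "2", "8"}),
    ("q2_vote", "How likely are you to vote? (1=Very, 2=Somewhat, 3=Not at all)", {"1", "2", "3"}),
]

def record_interview(respondent_id, keyed_answers):
    """Check each keyed-in code against the allowed codes for its
    question, and return one row ready to append to the data file."""
    row = {"respondent_id": respondent_id}
    for name, prompt, valid_codes in SCHEDULE:
        code = keyed_answers[name]
        if code not in valid_codes:
            raise ValueError(f"{name}: {code!r} is not a valid answer code")
        row[name] = code
    return row

# The interviewer keys in codes as the respondent answers; the row
# goes directly into a CSV that analysis software can read.
row = record_interview("R001", {"q1_support": "1", "q2_vote": "2"})
with open("responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=row.keys())
    writer.writeheader()
    writer.writerow(row)
```

Validating codes at entry time, as sketched here, is part of why CATI reduces both data-entry hours and transcription errors.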

Quantitative interviews must also be administered in such a way that the researcher asks the same question the same way each time. While questions on hard copy questionnaires may create an impression based on the way they are presented, having a person administer questions introduces a slew of additional variables that might influence a respondent. Even a slight shift in emphasis on a word may bias the respondent to answer differently. As I’ve mentioned earlier, consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. On the positive side, quantitative interviews can help reduce a respondent’s confusion. If a respondent is unsure about the meaning of a question or answer option on a self-administered questionnaire, they probably won’t have the opportunity to get clarification from the researcher. An interview, on the other hand, gives the researcher an opportunity to clarify or explain any items that may be confusing. If a participant asks for clarification, the researcher often uses pre-determined responses to make sure each quantitative interview is exactly the same as the others.

In-person surveys are conducted in the same way as phone surveys but must also account for non-verbal expressions and behaviors. In-person surveys do carry one distinct benefit—they are more difficult to say “no” to. Because the participant is already in the room and sitting across from the researcher, they are less likely to decline than if they clicked “delete” for an emailed online survey or pressed “hang up” during a phone survey.  In-person surveys are also much more time-consuming and expensive than mailing questionnaires. Thus, quantitative researchers may opt for self-administered questionnaires over in-person surveys on the grounds that they will be able to reach a large sample at a much lower cost than were they to interact personally with each and every respondent.

Key Takeaways

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one time, and longitudinal surveys are administered over time.
  • Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
  • Self-administered questionnaires may be delivered to participants in hard copy (in person or via snail mail) or online.
  • Interview schedules are used with in-person or phone surveys.
  • Each method of survey administration comes with benefits and drawbacks.


Think about the population you want to research.

  • Which type of survey (i.e., in-person, telephone, web-based, by mail) do you think would most effectively reach your population? Why?
  • Are there elements of your population you could miss by choosing one of these ways to administer your survey? How might this affect your results?

12.3 Creating a questionnaire

Learning Objectives

Learners will be able to…

  • Define different formats of questions
  • Describe the principles of a good survey question
  • Discuss the importance of pilot testing questions
  • Understand principles of question development
  • Evaluate questionnaire and interview questions
[Image: man seated at a desk typing on a computer. Photo by Nicole Lee]

How are questionnaires developed? Developing an effective questionnaire takes a long time and is both a science and an art. It is a science because the questionnaire should be developed based on accepted principles of questionnaire development that have evolved over time and practice. For instance, you must be attentive to issues of conceptual development, as well as reliability and validity. On the other hand, questionnaire development is also an art because it must take into account things such as color, font, use of white space, etc. that will make a written questionnaire aesthetically pleasing. Researchers who develop questionnaires rely on colleagues and pilot testing to refine their measurement tools.

When implementing a survey, conduct an initial literature search to determine if there are existing questionnaires or interview questions you may use for your study. If not, you must create your own tool or tools, which may be a challenging process. You must have a strong understanding of what you want to ask, why you want to ask it, and how you want to ask it. You need to be able to understand the potential barriers to your project and take these into account as you design your instrument(s). As discussed above, surveys are often self-administered. This means they must stand on their own so that they can be correctly understood and interpreted by your research participants.  While this may seem like an easy task, you would be surprised how quickly things get misinterpreted!

How to ask the right questions

How are items for questionnaires and interviews developed? Questions should be developed based on existing principles concerning item development. Remember that a questionnaire is developed to measure some variable or concept. We are often going to develop a series of questions that will help us to gather data about various aspects of that variable.  These questions should be grounded in the existing literature on your topic and should comprehensively assess the variable you are seeking to understand. For instance, if I develop a questionnaire about depression, but I don’t ask any questions about loss of interest in doing things, it would be a major gap in the information I am collecting about this variable. A good literature search will help me to identify the various areas that I will need to ask about in my questionnaire so that I can get the most complete picture of depression from participants. Questionnaire items must take into account idiosyncrasies regarding language, meaning that we need to anticipate the variety of ways that people might read and process the meaning of a question and its responses. Continuing with the depression questionnaire example, we might ask a question about whether people feel blue much of the time. While it might be evident to you or me that the phrase “feeling blue” means experiencing low mood or sadness, that might not be interpreted the same way by everyone, especially across cultural groups. Remember, being attentive to the way in which you ask questions is critical.

The next few sections will discuss the different characteristics of questionnaires and interviews and provide guidance on writing effective questions. Please note that this section discusses “guidelines.”  There may be times when these guidelines are not relevant. It is up to you as the researcher to read each guideline and determine if your study requires exceptions to them.

Guidelines for creating good questions

Crafting good questions is hard and requires thoughtful attention, feedback and revision. Below are some resources that will aid you in these tasks.

Resource: Here is a link to a short video from the Pew Research Center that discusses important considerations as you are crafting your questionnaire. It offers some good examples, mostly in the context of political polling, but they can be applied to many different topics.

Participants in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with little value. Dillman (1978) provides several “rules” or guidelines for creating good questions: 

Every question should be carefully scrutinized for the following issues:

  • Is the question clear and understandable? Questions should use very simple language, preferably in the active voice and without complicated words or jargon that may not be understood by a typical participant. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your questionnaire is targeted at a specialized group of respondents, such as doctors, lawyers, and researchers, who use such jargon in their everyday work environment.
  • Is the question worded in a negative manner? Negatively worded questions, such as “Should your local government not raise taxes?” tend to confuse participants and lead to inaccurate responses. Such questions should be avoided, and in all cases, avoid double-negatives.
  • Is the question ambiguous? Questions should not use words or expressions that may be interpreted differently by different participants (e.g., words like “any” or “just”). For instance, if you ask a respondent, “What is your annual income?”, it is unclear whether you are referring to salary/wages alone or also to dividend, rental, and other income, and whether you mean personal income, family income (including a spouse’s wages), or personal and business income. Different interpretations will lead to incomparable responses that cannot be interpreted correctly.
  • Does the question have biased or value-laden words? Bias refers to any property of a question that encourages participants to answer in a certain way. As social workers, we understand how we must be intentional with language. For instance, Kenneth Rasinski (1989) examined several studies on people’s attitudes toward government spending and observed that respondents tend to indicate stronger support for “assistance to the poor” and less for “welfare,” even though both terms had the same meaning. Remember the difference in public perception between “Obamacare” and the “Affordable Care Act?” Biased language or tone tends to skew observed responses. In summary, questions should be carefully evaluated to avoid biased language.
  • Is the question double-barreled? Double-barreled questions ask about two or more issues in a single question, so one answer cannot cover them all. For example, are you satisfied with your professor’s grading style and lecturing? In this example, how should a respondent answer if they are satisfied with the grading style but not the lecturing and vice versa? It is always advisable to separate double-barreled questions into separate questions: (1) are you satisfied with your professor’s grading? and (2) are you satisfied with your professor’s lecturing? Another example: does your family favor public television? Some people may favor public television for themselves, but favor certain cable television programs such as Sesame Street for their children.
  • Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provide a response scale ranging from “not at all” to “extremely well”, and if that person selected “extremely well,” what do they mean? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?” 
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of detail rather than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? Is the question imaginary? A popular question on many television game shows is “if you won a million dollars on this show, how would you plan to spend it?” Most participants have never been faced with this large amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.

Another way to examine questions is to use the BRUSO model (Peterson, 2000). [6] Note: Here this model is focused on questionnaires; however, it is also relevant for interview questions. An acronym, BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes it easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items requesting information on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. For example, an item asking respondents how many alcoholic drinks they consume on “a typical day” is ambiguous, because different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double-barreled.” They ask about two conceptually distinct issues but allow only one response. For example, “Please rate the extent to which you have been feeling anxious and depressed.” This item should probably be split into two separate items—one about anxiety and one about depression. Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. 

Table 12.2 The BRUSO model of writing effective questionnaire items, with examples from a perceptions of gun ownership questionnaire

  • B- Brief. Poor: “Are you now or have you ever been the possessor of a firearm?” Effective: “Have you ever possessed a firearm?”
  • R- Relevant. Poor: “Who did you vote for in the last election?” Note: Only include items that are relevant to your study.
  • U- Unambiguous. Poor: “Are you a gun person?” Effective: “Do you currently own a gun?”
  • S- Specific. Poor: “How much have you read about the new gun control measure and sales tax?” Effective: “How much have you read about the new sales tax on firearm purchases?”
  • O- Objective. Poor: “How much do you support the beneficial new gun control measure?” Effective: “What is your view of the new gun control measure?”

Response formats

Questions may be found on questionnaires and in interview guides in a variety of formats. When developing questions, it is important to think about the type of data you will collect and how useful it will be to your project. Remember our discussion on levels of measurement?  When you think about the format of your questions, it is also important to think about the level of measurement. Are you concerned with yes/no answers? Dichotomous response questions would work well for you. Do you have items where you really want participants to explain feelings or experiences? Perhaps open-ended items are best.  Is computing an overall score important? You might want to consider using interval-ratio response items or continuous response questions.

Below is a list of some of the different question formats. Remember, questions may be more than one type of format. For instance, you may have a filter question that is a dichotomous response item. As you look at this list, think about the questions that you have been asked in questionnaires or interviews. Which were the most common?


Question Formats

Based on Level of Measurement

  • Nominal response question-Participants are presented with more than two un-ordered options, such as: What is your social work track (Children and Families, Mental Health, Medical Social Work, International Social Work, Planning and Administration)?
  • Ordinal response question-Participants have more than two ordered options, such as: what is your highest level of social work education (AS, BSW, MSW, PhD)?
  • Interval response question-Participants are presented with an opportunity to indicate a numerical response on a scale that has no true zero point, such as a 1-to-10 rating of agreement. This type of format can also include answers from a semantic differential scale or Guttman scale. Each of these scale types was discussed in the previous chapter.
  • Continuous or ratio response question-Participants enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blanks type.

Other Types of Questions

  • Dichotomous response question-Participants are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think those who receive public assistance should be drug tested (Yes or No)?
  • Filter or Screening Questions–Questions that screen out/identify a certain type of respondent. For instance, let’s pretend that you want to survey your research class to determine how those with a letter of accommodation (for a disability) are navigating their field placement. One of the first questions is a filter question that asks students if they have a letter of accommodation. In other words, everyone receives the tool but you have a way to “screen in” those who can answer your research question. 
  • Closed-ended questions– Question type where participants are asked to choose their response from a list of existing responses. For instance, how many semesters of research should MSW students take: one, two, or three?
  • Open-ended question–Question type in which participants are asked to provide a detailed answer to a question. For example, “How do you feel about the new medication-assisted recovery center?”
  • Matrix question–Matrix questions are used to gather data across a number of variables that all have the same response categories. For example, I might be interested in knowing “How likely are you to agree with the following statements: I prefer to study in the morning, I prefer to study with music playing, I prefer to study alone, I prefer to study in my room, I prefer to study in a coffee shop.” These are all separate questions, but the response categories for all of them will be “Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree.” When I set this question up, I will develop a table or matrix, where the questions form the rows and the response categories form the columns.
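Because every statement in a matrix question shares one set of response categories, each statement typically becomes its own variable in the data, with the shared labels converted to numeric codes for analysis. Here is a minimal Python sketch of that coding step, using the study-preference statements above; the variable names and the 1-to-5 coding are illustrative assumptions, not a fixed standard:

```python
# Shared response categories for every row of the matrix question,
# coded 1-5 so the items can be analyzed numerically (and summed
# into an overall score, if that is appropriate for the measure).
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One participant's answers: one variable per matrix row.
answers = {
    "study_morning": "Agree",
    "study_music": "Strongly Disagree",
    "study_alone": "Strongly Agree",
    "study_room": "Neither Agree nor Disagree",
    "study_coffee_shop": "Disagree",
}

# Convert each label to its numeric code.
coded = {item: LIKERT[label] for item, label in answers.items()}
print(coded)
# {'study_morning': 4, 'study_music': 1, 'study_alone': 5,
#  'study_room': 3, 'study_coffee_shop': 2}
```

Survey platforms like Qualtrics or Google Forms usually perform this label-to-number conversion for you when you export responses, but it is worth understanding what the exported codes mean.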

For visual examples, please see this book chapter on types of survey questions which includes some helpful diagrams.

A note about closed-ended questions

Closed-ended questions are used when researchers have a good idea of the different responses participants might make. They are more quantitative in nature, so they are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.

For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are mutually exclusive. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an Other category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply. However, note that when you allow a participant to select more than one category, you need to realize that it may make analyzing your data more complicated. 
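One reason “choose all that apply” items complicate analysis is that a single question yields several variables. A common solution is to store each category as its own yes/no column. The short Python sketch below illustrates this under an invented set of categories (the service names are hypothetical, not from the text):

```python
# Hypothetical "choose all that apply" item: which agency services
# has the respondent used? Categories need not be mutually exclusive.
CATEGORIES = ["Counseling", "Case management", "Housing support", "Other"]

def dummy_code(selected):
    """Turn one respondent's set of checked boxes into a 0/1 column
    per category, the usual way multi-select answers are stored."""
    return {category: int(category in selected) for category in CATEGORIES}

print(dummy_code({"Counseling", "Housing support"}))
# {'Counseling': 1, 'Case management': 0, 'Housing support': 1, 'Other': 0}
```

Coding each category separately this way lets you report, for example, the percentage of respondents who used each service, even though the categories overlap.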

For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. 

Putting your questions together

An additional consideration is the “flow” of questions. Imagine being a participant in an interview. In the first scenario, the interviewer begins by asking you to answer questions that are very sensitive. Now imagine another scenario, one in which the interviewer begins with less intrusive questions. Which scenario sounds more appealing? In the first scenario, you might feel caught off guard and uncomfortable. In the second situation, you have time to develop rapport before moving into more sensitive questions.  The order in which you structure your questions matters. Generally,  questions should flow from the least sensitive to the most sensitive and from the general to the specific. A few other considerations are identified in the box below. 

General Rules for Question Sequencing And Other Important Considerations

  • Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and ‘firmographics’ (employee count, annual revenues, industry) for firm-level surveys.
  • Never start with an open-ended question.
  • If following a historical sequence of events, follow a chronological order from earliest to latest.
  • Ask about one topic at a time. When switching topics, use a transition, such as “The next section examines your opinions about …”
  • Use filter or contingency questions as needed, such as: “If you answered “yes” to question 5, please proceed to Section 2. If you answered “no” go to Section 3.”  


  • People’s time is valuable. Be respectful of their time. Keep your questionnaire as short as possible and limit it to what is absolutely necessary. Participants do not like spending more than 10-15 minutes on any questionnaire, no matter how important or interesting the topic. Longer surveys tend to dramatically lower response rates.
  • Always assure participants of the confidentiality of their responses, how you will use their data (e.g., for academic research), and how the results will be reported (usually in the aggregate). Your informed consent materials should make these points clear.
  • For organizational questionnaires, assure participants that you will send a copy of the final results to the organization (and follow through!). 
  • Thank respondents for their participation in your study. 
  • Finally, and perhaps most importantly, pretest your questionnaire, at least with a convenience sample, before administering it to your participants. Pretesting may uncover ambiguity, lack of clarity, or bias in question wording, all of which should be corrected before you administer the questionnaire to your intended sample. As a student, you might pretest with classmates, friends, or other people at your field agency.
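The filter or contingency questions mentioned above are essentially simple branching logic, which online survey tools implement as “skip logic.” The sketch below illustrates that routing in Python; the question ID and section names are hypothetical and purely for illustration.

```python
# A minimal sketch of filter/contingency question routing, assuming a
# hypothetical questionnaire in which question 5 is a yes/no filter.
# The question ID and section names are illustrative, not from any
# real survey platform.

def next_section(answers: dict) -> str:
    """Route a respondent based on their answer to filter question 5."""
    if answers.get("q5") == "yes":
        return "Section 2"   # follow-up questions for 'yes' respondents
    return "Section 3"       # everyone else skips ahead

print(next_section({"q5": "yes"}))  # Section 2
print(next_section({"q5": "no"}))   # Section 3
```

The same pattern scales up: each filter question maps an answer to the next block of questions, which is exactly what “If you answered ‘yes’ to question 5, please proceed to Section 2” asks a respondent to do on paper.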


Key Takeaways

  • Evaluating questions to be used in a questionnaire or interview is critical to the research project. There are many ways to examine your questions.
  • There are different types of question formats. The researcher must select the type of question that is consistent with the type of data that they need to collect.


Exercises

  • Draft a few potential questions you might include on a questionnaire as part of a survey for your topic.

12.4 Strengths and challenges of survey research

Learning Objectives

Learners will be able to…

  • Understand the benefits of surveys as a raw data collection method
  • Understand the drawbacks of surveys as a raw data collection method

Strengths of survey methods

Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In a study of older people’s experiences in the workplace, researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. I realize that $1,000 is nothing to sneeze at, but just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate a few weeks of your life at least, drive around the state, and pay for meals and lodging to interview each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus, surveys are relatively cost-effective.

Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 10. Of all the data collection methods described in this textbook, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry because surveys are standardized: the same questions, phrased in exactly the same way, are posed to every participant. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 18, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which reduces that question’s reliability. Assuming well-constructed questions and survey design, however, one strength of this methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries, social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts, businesses use them to learn how to market their products, governments use them to understand community opinions and needs, and politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility

Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics in them, the fact is that the survey researcher is generally stuck with a single instrument for collecting data: the questionnaire. Surveys are in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American woman but not an African American man? [1]

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth

Potential for bias

If you choose to use a survey design in your research project, you will have to weigh the pros and cons of that approach and make sure that it is appropriate to your research question. In addition, as you implement your survey, you should be aware of some potential issues that may arise in the data that result from conducting survey research.

Non-Response Bias

Survey research is notorious for its low response rates. A response rate of 15-20% is typical in a mail survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, then a legitimate concern is whether non-respondents failed to respond for some systematic reason, which may raise questions about the validity of the study’s results, especially the representativeness of the sample. This is known as non-response bias. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to satisfaction questionnaires. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalizability, but the observed outcomes may also be an artifact of the biased sample. Several strategies that can be employed to improve response rates are discussed in the box below.
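The customer-satisfaction example can be made concrete with a toy simulation. The sketch below assumes, purely for illustration, that dissatisfied customers respond at 40% while everyone else responds at 10%; all numbers are made up, but the mechanism is the one described above: the respondent mean ends up below the true population mean.

```python
# A toy simulation of non-response bias. The satisfaction distribution
# and response rates are invented for illustration only.
import random

random.seed(42)

# Population: satisfaction scores 1-5; most customers are satisfied.
population = [random.choice([4, 5, 5, 4, 3, 2, 1]) for _ in range(10_000)]

# Assumed response rates: dissatisfied customers (score <= 2) respond
# 40% of the time; everyone else responds only 10% of the time.
respondents = [score for score in population
               if random.random() < (0.40 if score <= 2 else 0.10)]

pop_mean = sum(population) / len(population)
sample_mean = sum(respondents) / len(respondents)
print(f"population mean: {pop_mean:.2f}")
print(f"respondent mean: {sample_mean:.2f}")
# The respondent mean understates true satisfaction because
# dissatisfied customers are over-represented among respondents.
```

Running this shows the respondent sample skewing toward dissatisfaction even though the underlying population is mostly satisfied, which is exactly the artifact non-response bias produces.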


Strategies to Improve Response Rate

  • Advance notification: A short letter sent in advance to the targeted respondents soliciting their participation in an upcoming survey can prepare them and improve likelihood of response. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their cooperation. A variation of this technique may request the respondent to return a postage-paid postcard indicating whether or not they are willing to participate in the study.
  • Ensuring that content is relevant: If a survey examines issues of relevance or importance to respondents, then they are more likely to respond.
  • Creating a respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, inoffensive, and easy to respond to tend to get higher response rates.
  • Having the project endorsed: For organizational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organization. Such endorsements can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.
  • Providing follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.
  • Ensuring that interviewers are properly trained: Response rates for interviews can be improved with skilled interviewers trained on how to request interviews, use computerized dialing techniques to identify potential respondents, and schedule callbacks for respondents who could not be reached.
  • Providing incentives: Response rates, at least with certain populations, may increase with the use of incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, the promise of contribution to charity, and so forth.
  • Providing non-monetary incentives: Businesses in particular are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.
  • Making participants fully aware of confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.


Sampling bias

Sampling bias is present when our sampling process results in a sample that does not represent our population in some way. Telephone surveys conducted by calling a random sample of publicly listed telephone numbers will systematically exclude people with unlisted numbers and people who rely on mobile phones, and will include a disproportionate number of respondents who have listed land-line service and who stay home during much of the day, such as people who are unemployed, disabled, or elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are frequently on the Internet, and to systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, written questionnaires tend to exclude children and people who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias arises from sampling the incorrect or incomplete population, such as asking teachers (or parents) about the academic learning of their students (or children) or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and can undermine generalizability claims about inferences drawn from it.

Social desirability bias

Social desirability bias occurs when respondents “spin the truth” in order to portray themselves in a socially desirable manner, producing answers that do not reflect their genuine thoughts, feelings, or behaviors. With negative questions such as “do you think that your project team is dysfunctional?”, “is there a lot of office politics in your workplace?”, or “have you ever illegally downloaded music files from the Internet?”, the researcher may not get truthful responses. This bias hurts the validity of responses obtained from survey research. There is practically no way of overcoming social desirability bias in a questionnaire survey outside of designing questions that minimize the opportunity for it to arise. In an interview setting, however, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias

Responses to survey questions often depend on respondents’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviors, or their memory of such events may have evolved with time and may no longer be accurate. This phenomenon is known as recall bias. For instance, if a respondent is asked to describe their use of computer technology one year ago, their response may not be accurate due to difficulties with recall. One possible way of overcoming recall bias is to anchor the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias

Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artifacts. Standard statistical tests are available to assess common method bias, such as Harman’s single-factor test (Podsakoff et al., 2003)[7] and Lindell and Whitney’s (2001)[8] marker variable technique. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time, using a longitudinal survey design, or if these variables are measured using different methods, such as computerized recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
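Harman’s single-factor test is often approximated by loading all survey items into a single factor analysis and checking how much variance the first factor explains. The rough sketch below does this with a principal component analysis of simulated data; the data, item loadings, and the 50% rule of thumb are illustrative assumptions, not prescriptions from the chapter, and the sketch assumes NumPy is available.

```python
# A rough sketch of Harman's single-factor test via principal
# components. The six "survey items" are simulated so that one strong
# common factor (e.g., a shared response style from measuring all
# variables with one questionnaire at one time) contaminates them.
import numpy as np

rng = np.random.default_rng(0)
n = 500

common = rng.normal(size=(n, 1))                  # shared method factor
items = 0.9 * common + 0.3 * rng.normal(size=(n, 6))

# Standardize items, then take eigenvalues of their correlation matrix.
z = (items - items.mean(axis=0)) / items.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]  # descending
first_factor_share = eigvals[0] / eigvals.sum()

print(f"variance explained by first factor: {first_factor_share:.0%}")
# A first factor accounting for well over half of the total variance
# is commonly read as a warning sign of common method bias.
```

Because the simulated items share one dominant factor, the first component captures most of the variance here; with items measured at different times or by different methods, that share would typically be far lower.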

Social Science Research: Principles, Methods, and Practices. Authored by: Anol Bhattacherjee. Provided by: University of South Florida. Located at: http://scholarcommons.usf.edu/oa_textbooks/3/. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Key Takeaways

  • Survey research has several strengths, including being versatile, cost-effective, and familiar to participants.
  • Survey research may be used to examine a variety of variables as well as comparing the relationship(s) between variables.
  • Limitations of survey research include several types of bias (non-response bias, sampling bias, social desirability bias, recall bias, and common method bias).
  • There are strategies to help reduce bias.


Exercises

  • After what you learned in this section, what might be some potential sources of bias in survey results on your topic? How might you minimize those?

  1. Engel, R. J., & Schutt, R. K. (2013). The practice of research in social work (3rd ed.). Thousand Oaks, CA: SAGE.
  2. Merriam-Webster. (n.d.). Survey. In Merriam-Webster.com dictionary. Retrieved from https://www.merriam-webster.com/dictionary/survey
  3. Nesje, K. (2016). Personality and professional commitment of students in nursing, social work, and teaching: A comparative survey. International Journal of Nursing Studies, 53, 173-181.
  4. Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.
  5. Centers for Disease Control and Prevention (CDC). (n.d.). Behavioral risk factor surveillance system. Retrieved from https://www.cdc.gov/chronicdisease/resources/publications/factsheets/brfss.htm
  6. Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
  7. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879.
  8. Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114.



Graduate research methods in social work Copyright © 2020 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
