13. Surveys
13.1. The Strengths and Weaknesses of Survey Research
Victor Tan Chen; Gabriela León-Pérez; Julie Honnold; and Volkan Aytar
Learning Objectives
- Identify when it is appropriate to employ survey research for data collection.
- Discuss the drawbacks of surveys as a research method.
- Discuss the problems that might arise if your survey has a low response rate.
Although surveys are a quantitative research method, they have many of the same advantages as qualitative interviews. Like in-depth interviews, they are an excellent way to gather data on a phenomenon that is hard or impossible to observe directly. For instance, you could use surveys to understand people’s preferences (such as their political orientation), their attitudes (such as their views toward immigrants), and their personality traits (such as their self-esteem). You could also get a sense of the level of understanding they have about a particular topic (such as what they know about a newly enacted law) and what behaviors or activities they engage in (such as whether they smoke and how much they drink).
The chief difference between surveys and in-depth interviews is that you sacrifice depth for breadth. The goal of surveys is not to gather rich detail about a few cases; it is to compare the responses of many, many individuals. In fact, of all the data-collection methods described in this book, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a population that is too large to observe directly. Researchers can cover a large area—even an entire country—by using online, telephone, or mail-in surveys, and the results from their sample will be generalizable to the population of interest so long as they use rigorous probability sampling techniques (which we discussed in Chapter 6: Sampling). Surveys can employ probability sampling because they allow researchers to collect data from large samples at a relatively low cost. Indeed, if we consider the time, effort, and expense that go into tracking down a single case for a given sample, surveys are more cost-effective than other research methods. For instance, a survey researcher could get a single respondent’s overall take on an issue in mere minutes, whereas an in-depth interview would require much longer, given the open-ended questions that a qualitative interviewer would ask.
Survey research also tends to be a reliable method of inquiry (refer to Chapter 7: Measuring the Social World, for a discussion of reliability in research). This is because surveys are standardized: they pose the same questions to every respondent, phrased in exactly the same way. This makes it possible for survey researchers to compare responses across the individuals in their sample in an apples-to-apples fashion. Other methods, including in-depth interviewing, do not achieve the same degree of consistency and comparability that a quantitative survey does. This is not to say that surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, leading to wildly inconsistent answers. If you construct your questionnaire carefully, however, it is likely to produce reliable results.
The versatility of survey research is yet another of its advantages. Knowing how to construct and administer surveys is a useful skill to have in all kinds of professions, and this knowledge is specialized enough that being proficient in survey design could land you a job all by itself. For instance, businesses use surveys to learn about their customers and ensure the quality of the services they provide. Governments use them to understand the opinions and needs of communities. Political campaigns and news outlets use them to capture public attitudes about issues and candidates. And all sorts of other organizations—from social service agencies to churches and clubs to nonprofit and activist groups—regularly employ surveys to better understand their members or clients and evaluate the effectiveness of their efforts.
We should note, too, that surveys do not have to use individuals as their unit of analysis (the type of cases that a researcher is studying). Most surveys are conducted at the individual level, but researchers also use this method to sample organizations; in those cases, a representative of the organization may fill out the form, but the details gathered pertain to the organization as a whole. One organizational survey you may have encountered is U.S. News & World Report’s “Best Colleges” rankings, which rely on surveys of college and university administrators who provide information about their own and other institutions.
Surveys can be used for exploratory, descriptive, or explanatory research. For instance, surveys can efficiently gather general details about your population of interest. Often, such a description is all that nonacademic researchers really want: what their customers’ spending habits are, what their clients or voters believe is the most pressing issue, and so on. We discussed previously how in-depth interviews can serve as pilot studies in preparation for a larger project, and a similar sort of exploratory research can be done with surveys. At the start of a research project, for instance, you might administer a quick survey that gives you a broad sense of the characteristics of your target population. This preliminary data collection could give you the necessary context to pursue more focused and time-intensive methods, perhaps helping you identify specific individuals or locations to examine with in-depth interviewing or ethnographic observation. (In fact, some surveys will ask respondents explicitly if they want to be contacted for follow-up interviews.)
As for explanatory research, survey research is the method that many sociologists turn to when they want to engage in causal inference—that is, determining whether and to what extent a cause-effect relationship exists between two variables. As we discussed in Chapter 4: Research Questions, the explanatory research questions that survey research tends to tackle are focused on testing and measuring relationships between two variables, such as the following: What effect does the independent variable have on the dependent variable? Surveys are not as good as experiments at ensuring internal validity—that a change in the independent variable truly does cause a change in the dependent variable—but they are often the only method available to sociologists when we want to study causal relationships in social life. With proper statistical controls (which we discuss later in the textbook), an analysis of survey data can do an adequate job of inferring causality. Meanwhile, the fact that most surveys collect data from a sample of the actual population of interest means that the findings from this research method tend to have a higher degree of external validity: we are more confident that any patterns and relationships we detect in our sample actually hold up in our larger population of interest (refer to Chapter 12: Experiments for a discussion of internal and external validity).
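To give a flavor of what statistical controls look like in practice, here is a minimal sketch in Python (our own illustration with simulated data and made-up variable names, not an analysis from this chapter; it assumes the numpy, pandas, and statsmodels libraries). It fits one regression without a control variable and one with it:

```python
# A toy illustration of statistical controls via multiple regression.
# All variables (education, age, attitude) and coefficients are invented;
# this sketches the logic, not an analysis of real survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000  # a hypothetical survey sample

# Simulate respondents. In this toy world, older respondents have less
# schooling, so age confounds the education-attitude relationship.
age = rng.integers(18, 90, n)
education = np.clip(21 - age // 10 + rng.integers(0, 4, n), 8, 20)
attitude = 40 + 2.0 * education - 0.3 * age + rng.normal(0, 10, n)
df = pd.DataFrame({"age": age, "education": education, "attitude": attitude})

# Bivariate model: education's apparent effect on the attitude scale.
naive = smf.ols("attitude ~ education", data=df).fit()

# Controlled model: the same association, holding age constant.
controlled = smf.ols("attitude ~ education + age", data=df).fit()

print(f"No controls:    {naive.params['education']:.2f}")       # biased upward
print(f"Age controlled: {controlled.params['education']:.2f}")  # near the true 2.0
```

The point is not the particular numbers but the logic: by holding age constant, the second model rules out one alternative explanation for the education-attitude association, which is what survey researchers mean when they speak of controlling for a variable.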
As with all methods of data collection, survey research also comes with drawbacks. First, surveys are inflexible—their questions cannot change after you begin your data collection (i.e., after you field your survey). In fact, altering a single word of a single item could potentially ruin the results you obtain from that question. Let’s say you email a survey to 1,000 people. Within a few hours, the results start rolling in. Much to your horror, you realize that a number of your respondents are confused by the phrasing of a particular question. At this stage, however, it’s too late for a quick fix. You can’t just change the question for the respondents who haven’t yet returned their surveys; if you did so, it would not be appropriate to compare their responses to those that the earlier group provided. The different wording means the two groups were essentially asked different questions, so you’d be comparing apples to oranges. You would need to resend the revised questionnaire to everyone who already filled out their forms to generate comparable data.
Consider how different this question-asking style is from the one adopted in in-depth interviews. Because comparisons between responses are not so crucial to the latter method, a qualitative researcher can tweak their questions—both between interviews and within a single interview. They can also pose entirely new questions that weren’t on their interview guide, redirecting their line of inquiry as they learn more over the course of a conversation.
Validity can also be a problem with surveys. As we have noted, survey questions are standardized. They cannot change from respondent to respondent (with a few exceptions that we will get to later), and researchers cannot ask any follow-up questions that were not originally included as part of the questionnaire. As a result, it is difficult to ask anything other than general questions on a survey—questions that a broad range of people will understand. This means that the results you obtain from a survey may not be as valid as findings derived from methods like in-depth interviewing, which gives you the freedom to delve more deeply and comprehensively into whatever topic is being examined. Let’s say that you want to learn something about voters’ willingness to elect a nonwhite president, as in the chapter’s opening example. GSS respondents were asked, “If your party nominated an African American for President, would you vote for him if he were qualified for the job?” Respondents could answer only “yes” or “no.” But what if someone’s opinion was more complex than a “yes” or “no” would indicate? What if, for example, a person was willing to vote for an African American woman but not an African American man? In that case, they might answer “yes” to the similar question on voting for a woman president, but none of the response options for the racial question would match their view. We are not suggesting that such a view makes any logical sense, but it is conceivable that an individual might hold it.
On surveys, a person with a complex (or convoluted) view might be forced to commit to simple yes/no answers (or another set of response options) that do not accurately reflect their actual point of view. As we will discuss later, survey questionnaires can address this problem by having catchall “Other” response options that a respondent can choose when their opinion doesn’t align with the main response options. Nevertheless, the more fundamental problem of validity remains. The answers that respondents give on surveys must always be simplified for the survey to work its magic of reducing a complex set of opinions to a broad pattern. Through this simplification, we can take Americans’ complicated views on race and the presidency and boil them down to a single figure, such as 3 percent—the proportion of Americans who would not vote for a qualified African American presidential nominee in 2010, according to the GSS.
A final issue with surveys that we want to highlight is the potential for nonresponse bias (discussed in Chapter 7: Measuring the Social World), which can severely hamper this method’s key strength of generalizability. Given how inundated people are nowadays with marketing requests of various kinds, researchers are finding it increasingly hard to get people to respond to their surveys. (See the sidebar Response Rates and the Dirty Secret of Political Polling to learn more about this worrisome trend.) As participation has dwindled, researchers have become increasingly concerned that the people who opt into their surveys look different from those who opt out in ways that might affect their findings. This danger—nonresponse bias—would mean that the samples they wind up with are not truly representative of the target population. People with particular characteristics have chosen not to join the sample, and if those characteristics have any bearing on the research question at hand—say, if Republicans happen to be less likely to answer our election poll (a real concern nowadays)—that will skew the results of the survey.
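A quick simulation makes the danger concrete. This sketch (our own illustration; the 50/50 split and the response probabilities are invented numbers) draws a proper probability sample from an evenly divided electorate and then lets one party’s supporters answer less often:

```python
# A toy simulation of nonresponse bias, using only the standard library.
# The population split and response probabilities are hypothetical.
import random

random.seed(1)

# An electorate evenly split between two parties.
population = ["Republican"] * 500_000 + ["Democrat"] * 500_000

# Draw a proper probability sample of 2,000 people.
sample = random.sample(population, 2000)

# Suppose Republicans are less likely to answer the poll (40% vs. 60%).
response_prob = {"Republican": 0.40, "Democrat": 0.60}
respondents = [p for p in sample if random.random() < response_prob[p]]

rep_share = respondents.count("Republican") / len(respondents)
print("True Republican share in population: 50.0%")
print(f"Republican share among respondents:  {rep_share:.1%}")  # roughly 40%
```

Even though the initial sample was drawn correctly, the responding sample understates Republican support, and recruiting more people in the same way would not fix it.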
Response Rates and the Dirty Secret of Political Polling
A survey’s response rate is the percentage of people who actually completed the survey out of everyone asked to participate. If you were handing out survey forms, the response rate would be the number of completed questionnaires you receive divided by the number of questionnaires you distributed. If you were calling people for a phone survey, it would be the number of calls that led to a person actually answering your survey questions divided by the number of all the phone numbers you dialed.
Let’s say your sample included 100 people and you sent questionnaires to each of those people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening—assuming you didn’t put a gun to each potential respondent’s head—are about zero. If you’re lucky, perhaps 75 or so will return completed questionnaires. In this case, your response rate would be 75 percent (75 divided by 100). That’s pretty darn good. Although response rates vary, and researchers don’t always agree about what makes a good response rate, having three-quarters of your surveys returned would be considered good, even excellent, by most survey researchers.
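For a quick check on the arithmetic, here is a one-function Python sketch (our own illustration) of the calculation just described:

```python
# Response rate = completed questionnaires / questionnaires distributed
# (or completed interviews / numbers dialed), expressed as a percentage.
def response_rate(completed: int, attempted: int) -> float:
    return 100 * completed / attempted

print(response_rate(75, 100))  # the mail example above: 75.0
```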
Indeed, researchers who work on national opinion surveys and political polls often get much, much lower response rates than that. While not all pollsters publish their rates, those who do have produced alarming figures. For example, the Pew Research Center, a respected U.S. think tank that uses phone surveys, saw its response rate plummet from 36 percent in 1997 to just 6 percent in 2018 (Kennedy and Hartig 2019).[1] Pew researchers go to great lengths to eventually get the people they identify for their samples to agree to an interview, reaching out to each potential respondent seven times; other pollsters are far less rigorous.
Political polls are conducted regularly throughout the campaign season by a wide range of nonpartisan and partisan polling outfits—some more reputable than others. With the constant churn of new polls, researchers don’t have the time to follow up, resulting in meager response rates. For instance, the New York Times/Siena College poll, a telephone survey, draws its phone numbers from the contact details that voters provide when registering to vote (notably, something that not all voters do). “In the poll we have in the field right now, only 0.4 percent of dials have yielded a completed interview,” wrote New York Times chief political analyst Nate Cohn (2022) in October 2022, a month before the U.S. midterm election. Pollsters worry about low response rates because they fear it might be a sign of nonresponse bias, when respondents and nonrespondents differ in important ways. If only those who have strong opinions about your study topic return their questionnaires, for instance, the opinions in your sample would be more extreme than they would be if you had a truly representative sample of the target population.
Some studies have concluded that rigorous efforts to boost response rates—namely, by following up with potential respondents and convincing them to participate—do not alter the findings of surveys in any substantial way (Curtin, Presser, and Singer 2000; Keeter et al. 2006). For their part, Pew researchers compared the characteristics of individuals who responded to one of their phone surveys to what they could learn through other means about those who didn’t participate. (To learn about these nonresponders, they utilized two large national databases maintained by commercial vendors that have collected information on nearly every U.S. household.) Importantly, Pew found that there were no significant differences in voter registration or party affiliation between responders and nonresponders. However, responders were more likely to vote (54 percent versus 44 percent), more interested in community affairs and charities (43 percent versus 33 percent), and more interested in politics and current affairs (31 percent versus 25 percent). These results suggest that nonresponse bias will likely be a problem whenever we survey U.S. households about their civic and political engagement—given that people who are more engaged in these ways are also more likely to pick up the phone and answer survey questions—but it might not be a problem when we just want to know how the electorate will vote, which instead hinges on the mix of Democrats, Republicans, and third-party supporters in our sample.
Perhaps we don’t have to worry about the abysmally low response rates of today’s political polls, then. Although critics complain (“Polling seems to be irrevocably broken,” a Washington Post columnist said after miscalls in the 2020 U.S. presidential election), the industry continues to be as visible and influential as ever (Sullivan 2020). Even if they need to reach out to many more people nowadays to obtain adequate samples, pollsters continue to field surveys on a daily basis in the leadup to every major election.
Regardless of whether concerns about nonresponse bias in national surveys are overblown, we advise that you shoot for as high a response rate as possible for any surveys you conduct. Doing so will make your study’s findings more credible. So, what is a good survey response rate? The truth is that there is no “magic number”—the response rate will vary depending on the method of data collection and on the target population. If you are targeting a known population, or if you are contacting a smaller number of people, your survey’s response rate will likely be higher. Furthermore, some types of surveys are more effective than others at getting people to participate. One systematic review of the response rates in more than 2,000 healthcare surveys found that in-person surveys yielded the highest response rate (average of 76 percent), followed by mail surveys (65 percent), email surveys (51 percent), and web-based surveys (46 percent) (Meyer et al. 2022). (We discuss these different types of surveys in the next section.)
We suggest that you evaluate a survey’s response rate in light of all these factors. Let’s say you are sending mass emails to all of the students at a university. On the one hand, you might be able to get a list of all the students at that university (so in that sense, the population is well-defined—as opposed to, say, the U.S. population). On the other hand, the target population is quite large, and you are emailing your survey to students, so there’s a good chance many people will simply ignore your requests. If you happen to receive a 50 percent response rate on this particular survey, that’d be a respectable number—around the average for an email survey. The same percentage would be low, however, for an in-person survey.
How do you avoid a low response rate? Here are some practical strategies to improve participation:
- Personalize your questionnaires: If you are sending out your questionnaires by email or snail mail, address your recruitment message to specific potential respondents by name rather than to some generic recipient such as “Dear Resident” or “Dear Student.”
- Highlight your credibility in describing your study: In your recruitment pitch, provide ample details about the study and the credentials of any researchers involved, and include your full contact information so that potential participants can reach out with any questions. If at all possible, try to establish partnerships with respected institutions—such as university centers and departments, hospitals and clinics, and nonprofit organizations—so that respondents know you have institutional backing.
- Keep your questionnaire short and simple: When potential respondents are deciding whether it’s worth their time and effort to help you, the perceived length and complexity of your questionnaire will be a major factor. This is yet another reason to keep your survey instruments as short, simple, and focused as possible. Be sure to state upfront in your recruitment pitch that your survey only takes a few minutes to complete (if that is indeed the case).
- Be persistent in your recruitment efforts: If you are asking a potential respondent over the phone or in person to participate in your study, be polite but persistent in your recruitment pitch. It’s natural for people to hesitate about agreeing to a request from a stranger. One veteran political pollster told National Public Radio that the go-to line she uses in her recruitment pitch is, “How about we try some questions and see how it goes?” (Guo et al. 2022).
- Send out pre-questionnaire notices and post-questionnaire reminders: A commonly used strategy to raise a survey’s response rate is to send a pre-questionnaire notice. In this short message, you inform potential respondents that they will be asked to participate in a survey in the near future. This signals to the recipient that they should be on the lookout for the questionnaire and that the study being conducted is important (so important it needs to be announced beforehand!). After you send out the actual questionnaire, you should send simple follow-up reminders to nonresponders after an appropriate interval of time (days or weeks, depending on the survey); a minimal sketch of such a reminder workflow appears after this list. If they still don’t complete the questionnaire, don’t be afraid to keep reminding them. This can only go so far, of course—not everyone will share your obsession with rigorous social science, and you don’t want to infuriate or harass anyone—but you should make a good-faith effort to get people to respond. (Remember that the Pew Research Center contacts potential respondents seven times before giving up.) Note that the nudge you give can be very simple, too: a friendly reminder to those who have yet to complete the survey, paired with a thank you to those who have already returned it.
- Offer compensation or at least a token of appreciation: To incentivize participation in surveys, researchers will often offer cash, gifts, or gift cards to those who complete their surveys. Even if you can’t provide substantial compensation, a small gesture—such as including a $1 bill with every mailed questionnaire—can still increase responses. For an online survey, you can offer every respondent an opportunity to enter a raffle for one or more prizes (be sure to check applicable laws so that you do not run afoul of any regulations concerning raffles or rewards). Putting some money into purchasing, say, a tablet that you can give away just to one winner can boost your response rate substantially.
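As promised above, here is a minimal sketch of a follow-up reminder workflow (our own illustration; the contact fields and the seven-day interval are hypothetical choices, not recommendations from this chapter):

```python
# Track who has responded and list who is due for another nudge.
from datetime import date, timedelta

contacts = [
    {"email": "a@example.edu", "responded": True,  "last_contact": date(2024, 3, 1)},
    {"email": "b@example.edu", "responded": False, "last_contact": date(2024, 3, 1)},
    {"email": "c@example.edu", "responded": False, "last_contact": date(2024, 3, 12)},
]

REMINDER_INTERVAL = timedelta(days=7)  # "an appropriate interval of time"
today = date(2024, 3, 14)

due_for_reminder = [
    c["email"]
    for c in contacts
    if not c["responded"] and today - c["last_contact"] >= REMINDER_INTERVAL
]
print(due_for_reminder)  # ['b@example.edu']
```

A real survey project would also log each reminder sent, so that nobody receives more nudges than your protocol (and your institutional review board) allows.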
Key Takeaways
- Survey research is often used by researchers who wish to describe or explain the characteristics of large groups. This is because it can employ probability sampling techniques that ensure a representative sample whose characteristics more or less match those of the target population.
- In addition to its generalizability, the strengths of survey research include its cost-effectiveness, reliability, and versatility.
- Although experiments are better at inferring causality, survey research can provide a good understanding of potential cause-effect relationships between two or more variables, allowing researchers to test whether a relationship exists and—if it does—measure the strength of that relationship.
- Like qualitative interviewing, survey research can gather data about phenomena that the researcher cannot directly observe by allowing respondents to report things they have experienced or the emotions and beliefs they currently hold. Unlike questions in qualitative interviews, however, survey questions cannot be altered once data collection has begun, and this research method, by necessity, simplifies the opinions of respondents, which can pose problems of validity.
- While survey researchers should always aim to obtain the highest response rate possible, some recent research argues that high response rates may be less important than we once thought.
Exercises
- Recall some of the possible research questions you came up with while reading previous chapters of this textbook. How might you frame those questions so that they could be answered using survey research?
- What are some ways that sociologists who use survey research might overcome the weaknesses of this method?
- Find a journal article reporting results from survey research (refer to Chapter 5: Research Design for advice on searching for academic papers). How do the authors describe the strengths and weaknesses of their study? Are any of the strengths or weaknesses we’ve described here mentioned explicitly in that article?
[1] For its phone surveys, Pew also provides information about its contact rate (households in which an adult was reached divided by all households called) and its cooperation rate (households that yielded an interview divided by all households in which an adult was reached). Those rates have also declined precipitously: between 1997 and 2018, Pew’s contact rate fell from 90 percent to 62 percent, while its cooperation rate fell from 43 percent to 14 percent (Pew Research Center 2012).
Glossary
- Unit of analysis: The class of phenomena (e.g., individuals, groups, objects, societies) that researchers want to learn about through their research.
- Internal validity: A study’s ability to determine if changes in the independent variable truly cause changes in the dependent variable.
- External validity: A study’s ability to generalize any results obtained from its sample to its target population (or, more generally, to other people, organizations, contexts, or times).
- Nonresponse bias: Bias introduced into a study when respondents and nonrespondents differ in important ways, which means that the relevant characteristics observed in the sample differ from those in the target population.
- Response rate: A percentage determined by dividing the number of completed survey questionnaires by the number originally distributed, or the number of individuals successfully interviewed by the number contacted for an interview.