- Describe how probability sampling differs from nonprobability sampling
- Define generalizability, and describe how it is achieved in probability samples
- Identify the various types of probability samples, and describe why a researcher may use one type over another
Quantitative researchers are often interested in making generalizations about groups larger than their study samples; they seek nomothetic causal explanations. While there are certainly instances when quantitative researchers rely on nonprobability samples (e.g., when doing exploratory research), quantitative researchers tend to rely on probability sampling techniques. The goals and techniques associated with probability samples differ from those of nonprobability samples. We’ll explore those unique goals and techniques in this section.
Unlike nonprobability sampling, probability sampling refers to sampling techniques for which a person’s likelihood of being selected from the sampling frame is known. You might ask yourself why we should care about a potential participant’s likelihood of being selected for the researcher’s sample. The reason is that, in most cases, researchers who use probability sampling techniques are aiming to identify a representative sample from which to collect data. A representative sample is one that resembles the population from which it was drawn in all the ways that are important for the research being conducted. If, for example, you wish to be able to say something about differences between men and women at the end of your study, you had better make sure that your sample doesn’t contain only women. That’s a bit of an oversimplification, but the point with representativeness is that if your population varies in some way that is important to your study, your sample should contain the same sorts of variation.
Obtaining a representative sample is important in probability sampling because of generalizability. In fact, generalizability is perhaps the key feature that distinguishes probability samples from nonprobability samples. Generalizability refers to the idea that a study’s results will tell us something about a group larger than the sample from which the findings were generated. In order to achieve generalizability, a core principle of probability sampling is that all elements in the researcher’s sampling frame have an equal chance of being selected for inclusion in the study. In research, this is the principle of random selection. Researchers use a computer’s random number generator to determine who from the sampling frame gets recruited into the sample.
Using random selection does not mean that your sample will be perfect. No sample is perfect. The only way to come up with a perfect result would be to include everyone in the population in your sample, which defeats the whole point of sampling. Generalizing from a sample to a population always contains some degree of error. This is referred to as sampling error, a statistical calculation of the difference between results from a sample and the actual parameters of a population.
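One way to see sampling error concretely is with a quick simulation. The sketch below, in Python, uses an invented population of 10,000 scores (not real data): even a properly random sample's mean almost never matches the true population mean exactly.

```python
import random
import statistics

# Invented population of 10,000 scores; purely illustrative.
rng = random.Random(3)
population = [rng.gauss(50, 10) for _ in range(10_000)]
true_mean = statistics.mean(population)   # the population parameter

sample = rng.sample(population, 100)      # a random sample of 100
sample_mean = statistics.mean(sample)     # the sample statistic

# The gap between the two is sampling error: small, but almost never zero.
print(abs(sample_mean - true_mean))
```

Larger samples shrink this gap on average, but only a census eliminates it.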
Generalizability is a pretty easy concept to grasp. Imagine a professor were to take a sample of individuals in your class to see if the material is too hard or too easy. The professor, however, only sampled individuals whose grades were over 90% in the class. Would that be a representative sample of all students in the class? Probably not: the top-scoring students would likely find the material easier than the class as a whole does. That would be a case of sampling error—a mismatch between the results of the sample and the true feelings of the overall class. In other words, the results of the professor’s study don’t generalize to the overall population of the class.
Taking this one step further, imagine your professor is conducting a study on binge drinking among college students. The professor uses undergraduates at your school as her sampling frame. Even if that professor were to use probability sampling, perhaps your school differs from other schools in important ways. There are schools that are “party schools” where binge drinking may be more socially accepted, “commuter schools” at which there is little nightlife, and so on. If your professor plans to generalize her results to all college students, she will have to make an argument that her sampling frame (undergraduates at your school) is representative of the population (all undergraduate college students).
Types of probability samples
There are a variety of probability samples that researchers may use. These include simple random samples, systematic samples, stratified samples, and cluster samples. Let’s build on the previous example. Imagine we were concerned with binge drinking and chose the target population of fraternity members. How might you go about getting a probability sample of fraternity members that is representative of the overall population?
Simple random samples are the most basic type of probability sample. A simple random sample requires a real sampling frame—an actual list of each person in the sampling frame. Your school likely has a list of all of the fraternity members on campus, as Greek life is subject to university oversight. You could use this as your sampling frame. Using the university’s list, you would number each fraternity member, or element, sequentially and then randomly select the elements from which you will collect data.
True randomness is difficult to achieve, and it takes complex computational calculations to do so. Although you think you can select things at random, human-generated randomness is actually quite predictable, as it falls into patterns called heuristics. To truly randomly select elements, researchers must rely on computer-generated help. Many free websites have good pseudo-random number generators. A good example is the website Random.org, which contains a random number generator that can also randomize lists of participants. Sometimes, researchers use a table of numbers that have been generated randomly. There are several possible sources for obtaining a random number table. Some statistics and research methods textbooks offer such tables as appendices to the text.
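As a sketch of what computer-assisted random selection looks like in practice, here is a minimal Python example. The 100-member frame is hypothetical, and the seed is fixed only so the draw is repeatable:

```python
import random

# Hypothetical sampling frame: 100 numbered fraternity members.
sampling_frame = [f"Member {i}" for i in range(1, 101)]

rng = random.Random(42)                  # seeded pseudo-random generator
sample = rng.sample(sampling_frame, 25)  # simple random sample, no repeats

print(len(sample))   # 25 members drawn
```

Because `sample` draws without replacement, no member can be selected twice, and every member of the frame has the same chance of inclusion.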
As you might have guessed, drawing a simple random sample can be quite tedious. Systematic sampling techniques are somewhat less tedious but offer the benefits of a random sample. As with simple random samples, you must possess a list of everyone in your sampling frame. Once you have that list, to draw a systematic sample you simply select every kth element on it. But what is k, and where on the list of population elements does one begin the selection process? k is your selection interval, or the distance between the elements you select for inclusion in your study. To begin the selection process, you’ll need to figure out how many elements you wish to include in your sample. Let’s say you want to interview 25 fraternity members on your campus, and there are 100 men on campus who are members of fraternities. In this case, your selection interval, or k, is 4. To arrive at 4, simply divide the total number of population elements by your desired sample size. This process is represented in Figure 10.2.
To determine where on your list of population elements to begin selecting the names of the 25 men you will interview, select a number between 1 and k, and begin there. If we select 3 as our starting point, we’d begin by selecting the third fraternity member on the list and then select every fourth member from there. This might be easier to understand if you can see it visually. Table 10.2 lists the names of our hypothetical 100 fraternity members on campus. You’ll see that the third name on the list has been selected for inclusion in our hypothetical study, as has every fourth name after that. A total of 25 names have been selected.
Table 10.2. List of the 100 hypothetical fraternity members (columns: Number, Name, Include in study?); every fourth name, beginning with the third, is marked for inclusion.
In case you’re wondering how I came up with 100 unique names for this table, I’ll let you in on a little secret: lists of popular baby names can be great resources for researchers. I used the list of top 100 names for boys based on Social Security Administration statistics for this table. I often use baby name lists to come up with pseudonyms for field research subjects and interview participants. See Family Education. (n.d.). Name lab. Retrieved from http://baby-names.familyeducation.com/popular-names/boys.
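The systematic selection just described can be sketched in Python. Using the chapter's numbers (100 members, a desired sample of 25, so k = 4, starting at the third element), with member numbers standing in for the names:

```python
# Systematic sampling: every kth element, starting at a point between 1 and k.
population = list(range(1, 101))   # 100 numbered fraternity members
desired_n = 25
k = len(population) // desired_n   # selection interval: 100 / 25 = 4

start = 3                          # the starting point chosen in the example
selected = population[start - 1::k]

print(selected[:4], len(selected))   # [3, 7, 11, 15] 25
```

Only the starting point is random; once it is chosen, the rest of the sample is fixed by the interval.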
There is one clear instance in which systematic sampling should not be employed. If your sampling frame has any pattern to it, you could inadvertently introduce bias into your sample by using a systematic sampling strategy. (Bias will be discussed in more depth in the next section.) This is sometimes referred to as the problem of periodicity. Periodicity refers to the tendency for a pattern to occur at regular intervals. Let’s say, for example, that you wanted to observe binge drinking on campus each day of the week. Perhaps you need to have your observations completed within 28 days and you wish to conduct four observations on randomly chosen days. Table 10.3 shows a list of the population elements for this example. To determine which days we’ll conduct our observations, we’ll need to determine our selection interval. As you’ll recall from the preceding paragraphs, to do so we must divide our population size, in this case 28 days, by our desired sample size, in this case 4 days. This formula leads us to a selection interval of 7. If we randomly select 2 as our starting point and select every seventh day after that, we’ll wind up with a total of 4 days on which to conduct our observations. You’ll see how that works out in the following table.
Table 10.3. The 28 days of the observation period (columns: Day #, Day, Drinking, Observe?); every seventh day, beginning with day 2, is marked for observation.
Do you notice any problems with our selection of observation days in Table 10.3? Apparently, we’ll only be observing on Tuesdays. Moreover, Tuesdays may not be an ideal day to observe binge drinking behavior. Unless alcohol consumption patterns have changed significantly since I was in my undergraduate program, I would assume binge drinking is more likely to happen over the weekend.
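This pitfall is easy to reproduce in code. Assuming day 1 of the 28-day frame falls on a Monday, an interval of 7 starting at day 2 lands on the same weekday every single time:

```python
weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]
frame = [weekdays[i % 7] for i in range(28)]   # 28 consecutive days

k = 28 // 4   # selection interval of 7
start = 2     # the randomly chosen starting point from the example
observed = frame[start - 1::k]

print(observed)   # ['Tuesday', 'Tuesday', 'Tuesday', 'Tuesday']
```

Because the frame's weekly cycle has the same length as the interval, no starting point can produce a mix of weekdays and weekend days.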
In cases such as this, where the sampling frame is cyclical, it would be better to use a stratified sampling technique. In stratified sampling, a researcher will divide the study population into relevant subgroups and then draw a sample from each subgroup. In this example, we might wish to first divide our sampling frame into two lists: weekend days and weekdays. Once we have our two lists, we can then apply either simple random or systematic sampling techniques to each subgroup.
Stratified sampling is a good technique to use when, as in our example, a subgroup of interest makes up a relatively small proportion of the overall sample. In our example of a study of binge drinking, we want to include weekdays and weekends in our sample, but because weekends make up less than a third of an entire week, there’s a chance that a simple random or systematic strategy would not yield sufficient weekend observation days. As you might imagine, stratified sampling is even more useful in cases where a subgroup makes up an even smaller proportion of the sampling frame—for example, if we want to be sure to include in our study students who are in year five of their undergraduate program but this subgroup makes up only a small percentage of the population of undergraduates. There’s a chance simple random or systematic sampling strategy might not yield any fifth-year students, but by using stratified sampling, we could ensure that our sample contained the proportion of fifth-year students that is reflective of the larger population.
In this case, class year (freshman, sophomore, junior, senior, and fifth-year) defines our strata, the characteristic by which the sample is divided. In using stratified sampling, we are often concerned with how well our sample reflects the population. A sample with too many freshmen may skew our results in one direction because perhaps they binge drink more (or less) than students in other class years. Using stratified sampling allows us to make sure our sample has the same proportion of people from each class year as the overall population of the school.
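A proportionate stratified draw can be sketched as follows. The enrollment figures are invented; only the 5% fifth-year share echoes the chapter's example:

```python
import random

# Hypothetical enrollment by class year (1,000 students total; invented numbers).
enrollment = {"freshman": 300, "sophomore": 250, "junior": 225,
              "senior": 175, "fifth-year": 50}
population = [(year, i) for year, n in enrollment.items() for i in range(n)]

rng = random.Random(0)
sample_size = 100

# Draw from each stratum in proportion to its share of the population,
# so small strata such as fifth-years are guaranteed representation.
sample = []
for year, n in enrollment.items():
    stratum = [s for s in population if s[0] == year]
    n_draw = round(n / len(population) * sample_size)
    sample.extend(rng.sample(stratum, n_draw))

fifth_years = [s for s in sample if s[0] == "fifth-year"]
print(len(sample), len(fifth_years))   # 100 5
```

A simple random sample of 100 might happen to include zero fifth-years; the stratified draw includes exactly five, matching their 5% share of the population.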
Up to this point in our discussion of probability samples, we’ve assumed that researchers will be able to access a list of population elements in order to create a sampling frame. This, as you might imagine, is not always the case. Let’s say, for example, that you wish to conduct a study of binge drinking across fraternity members at each undergraduate program in your state. Just imagine trying to create a list of every single fraternity member in the state. Even if you could find a way to generate such a list, attempting to do so might not be the most practical use of your time or resources. When this is the case, researchers turn to cluster sampling. Cluster sampling occurs when a researcher begins by sampling groups (or clusters) of population elements and then selects elements from within those groups.
Let’s work through how we might use cluster sampling in our study of binge drinking. While creating a list of all fraternity members in your state would be next to impossible, you could easily create a list of all undergraduate colleges in your state. Thus, you could draw a random sample of undergraduate colleges (your cluster) and then draw another random sample of elements (in this case, fraternity members) from within the undergraduate college you initially selected. Cluster sampling works in stages. In this example, we sampled in two stages— (1) undergraduate colleges and (2) fraternity members at the undergraduate colleges we selected. However, we could add another stage if it made sense to do so. We could randomly select (1) undergraduate colleges (2) specific fraternities at each school and (3) individual fraternity members. As you might have guessed, sampling in multiple stages does introduce the possibility of greater error (each stage is subject to its own sampling error), but it is nevertheless a highly efficient method.
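A two-stage cluster draw might look like this in Python. The colleges and rosters are invented stand-ins for the statewide example:

```python
import random

rng = random.Random(1)

# Hypothetical clusters: 8 colleges, each with a roster of 50 fraternity members.
colleges = {f"College {c}": [f"College {c}, member {m}" for m in range(1, 51)]
            for c in "ABCDEFGH"}

# Stage 1: randomly select 3 colleges (the clusters).
chosen = rng.sample(sorted(colleges), 3)

# Stage 2: randomly select 10 members within each chosen college.
sample = []
for college in chosen:
    sample.extend(rng.sample(colleges[college], 10))

print(len(sample))   # 30 members, without ever listing every member statewide
```

Note that a full statewide list is never built; only the rosters of the three selected colleges are needed, which is the practical appeal of cluster sampling.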
Jessica Holt and Wayne Gillespie (2008)  used cluster sampling in their study of students’ experiences with violence in intimate relationships. Specifically, the researchers randomly selected 14 classes on their campus and then drew a random subsample of students from those classes. But you probably know from your experience with college classes that not all classes are the same size. So, if Holt and Gillespie had simply randomly selected 14 classes and then selected the same number of students from each class to complete their survey, then students in the smaller of those classes would have had a greater chance of being selected for the study than students in the larger classes. Keep in mind, with random sampling the goal is to make sure that each element has the same chance of being selected. When clusters are of different sizes, as in the example of sampling college classes, researchers often use a method called probability proportionate to size (PPS). This means that they take into account that their clusters are of different sizes. They do this by giving clusters different chances of being selected based on their size so that each element within those clusters winds up having an equal chance of being selected.
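The arithmetic behind PPS can be sketched directly. With invented class sizes, selecting a class with probability proportional to its size and then a fixed number of students within it gives every student the same overall chance of inclusion:

```python
import random

rng = random.Random(7)

# Invented class sizes; a large class is more likely to be chosen as a cluster.
class_sizes = {"class_1": 15, "class_2": 40, "class_3": 120, "class_4": 25}
total = sum(class_sizes.values())   # 200 students overall

# Stage 1: pick one class, weighted by its size.
names = list(class_sizes)
chosen = rng.choices(names, weights=[class_sizes[c] for c in names], k=1)[0]

# Stage 2: pick a fixed 10 students from the chosen class.
students = rng.sample(range(class_sizes[chosen]), 10)

# Any student's overall inclusion chance:
# (class size / total) * (10 / class size) = 10 / total, the same for everyone.
print(10 / total)   # 0.05
```

The class-size term cancels out: students in the big class are diluted within their cluster exactly as much as their cluster is favored in stage 1.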
To summarize, probability samples allow a researcher to make conclusions about larger groups. Probability samples require a sampling frame from which elements, usually human beings, can be selected at random from a list. The use of random selection reduces the error and bias present in nonprobability samples reviewed in the previous section, though some error will always remain. In relying on a random number table or generator, researchers can more accurately state that their sample represents the population from which it was drawn. This strength is common to all probability sampling approaches summarized in Table 10.4.
| Sample type | Description |
|---|---|
| Simple random | Researcher randomly selects elements from sampling frame. |
| Systematic | Researcher selects every kth element from sampling frame. |
| Stratified | Researcher creates subgroups, then randomly selects elements from each subgroup. |
| Cluster | Researcher randomly selects clusters, then randomly selects elements from selected clusters. |
In determining which probability sampling approach makes the most sense for your project, it helps to know more about your population. Simple random samples and systematic samples are similar to carry out. Both require a list of all elements in your sampling frame. Systematic sampling is slightly easier in that it does not require a random number generator; instead, it uses a sampling interval that is easy to calculate by hand.
The relative simplicity of both approaches is offset by their insensitivity to characteristics of your population. Stratified samples can better account for periodicity by creating strata that reduce or eliminate its effects. Stratified samples also ensure that smaller subgroups are included in your sample, thus making your sample more representative of the overall population. While these benefits are important, creating strata for this purpose requires knowing information about your population before beginning the sampling process. In our binge drinking example, we would need to know how many students are in each class year to make sure our sample contained the same proportions. We would need to know, for example, that fifth-year students make up 5% of the student population to make sure 5% of our sample is comprised of fifth-year students. If the true population parameters are unknown, stratified sampling becomes significantly more challenging.
Common to each of the previous probability sampling approaches is the necessity of using a real list of all elements in your sampling frame. Cluster sampling is different. It allows a researcher to perform probability sampling in cases for which a list of elements is not available or pragmatic to create. Cluster sampling is also useful for making claims about a larger population, in our example, all fraternity members within a state. However, because sampling occurs at multiple stages in the process, in our example at the university and student level, sampling error increases. For many researchers, this weakness is outweighed by the benefits of cluster sampling.
- In probability sampling, the aim is to identify a sample that resembles the population from which it was drawn.
- There are several types of probability samples including simple random samples, systematic samples, stratified samples, and cluster samples.
- Probability samples usually require a real list of elements in your sampling frame, though cluster sampling can be conducted without one.
- Cluster sampling – a sampling approach that begins by sampling groups (or clusters) of population elements and then selects elements from within those groups
- Generalizability – the idea that a study’s results will tell us something about a group larger than the sample from which the findings were generated
- Periodicity – the tendency for a pattern to occur at regular intervals
- Probability proportionate to size – in cluster sampling, giving clusters different chances of being selected based on their size so that each element within those clusters has an equal chance of being selected
- Probability sampling – sampling approaches for which a person’s likelihood of being selected from the sampling frame is known
- Random selection – using randomly generated numbers to determine who from the sampling frame gets recruited into the sample
- Representative sample – a sample that resembles the population from which it was drawn in all the ways that are important for the research being conducted
- Sampling error – a statistical calculation of the difference between results from a sample and the actual parameters of a population
- Simple random sampling – selecting elements from a list using randomly generated numbers
- Strata – the characteristic by which the sample is divided
- Stratified sampling – dividing the study population into relevant subgroups and then drawing a sample from each subgroup
- Systematic sampling – selecting every kth element from a list
- Figure 10.2 copied from Blackstone, A. (2012) Principles of sociological inquiry: Qualitative and quantitative methods. Saylor Foundation. Retrieved from: https://saylordotorg.github.io/text_principles-of-sociological-inquiry-qualitative-and-quantitative-methods/ Shared under CC-BY-NC-SA 3.0 License (https://creativecommons.org/licenses/by-nc-sa/3.0/) ↵
- Holt, J. L., & Gillespie, W. (2008). Intergenerational transmission of violence, threatened egoism, and reciprocity: A test of multiple psychosocial factors affecting intimate partner violence. American Journal of Criminal Justice, 33, 252–266. ↵