
4 | The Psychological Scientist

What do psychologists do?

Alison Heinhold Melley and Anna Sofia Caruso


KEY THEME: Psychological science relies on empirical evidence and adapts as new data develop.

Research story: A Problem with Play

Part of being a good researcher is being able to acknowledge where your research area needs strengthening. Sometimes that means taking a critical lens to your field and your own past work. One such example occurred in 2013 among researchers who study children’s pretend play. In a published paper, developmental psychologist Angeline Lillard called out previous research for the way it was conducted and the way its results were interpreted. She made suggestions about methods and statistical analysis to improve rigor in child development research generally, and in pretend-play research specifically. She also called for researchers to replicate (repeat) prior studies so that more definitive conclusions might be drawn.

It is important to note that the researchers mentioned in Lillard’s paper did not do anything wrong or unethical. Science is an ever-evolving practice. Methods and statistics that were acceptable and standard practice even just five years ago can become outdated or be replaced by better, more rigorous, or more valid practices. Psychological scientists and practitioners should always look to improve rather than remain stuck in certain practices simply because they have always been done that way. Sometimes it takes being called out by another researcher, as Lillard and her colleagues did, to reevaluate our methods and improve our field.

 

One critique in this paper questions conclusions that were drawn about how pretend play is related to social skills. The research design in these studies was correlational. With this type of design, the strongest claim that can be made is an association claim. For example, we can say that children who engaged in more pretend play had more developed social skills than children who played this way less often. However, we cannot say that pretend play caused an improvement in social skills. Some of the research reports made statements implying causal relationships, and this is what Lillard and her colleagues were concerned about. Correlation does not equal causation; a more rigorous research design is required to support causal claims. Additionally, several variables can influence the development of complex functions such as social skills, and some were not considered. Lillard’s paper reminds readers to be critical consumers of information.

 

Another critique that Lillard and her colleagues raised in their 2013 paper concerned how the researchers worked with children and approached different questions in their studies. For example, researchers weren’t always masked. Masking means that the researchers do not know which children are in which condition. When the researchers were masked, the significant findings disappeared or were weaker than before. This suggests that strengthening scientific rigor in studies of pretend play is likely necessary.

 

Although this paper sent shock waves through the pretend-play research community, its ripple effects were ultimately positive. Now, almost every paper about pretend play addresses this article, and most follow Lillard and colleagues’ recommendations for how to better conduct science in this area. Overall, it strengthened the field by pushing researchers to approach the study of play in a way that ensures their results are reliable and their conclusions have strong scientific backing.[1]

KEY INTEGRATIVE THEME

Psychological science relies on empirical evidence and adapts as new data develop.

Like the Ethics theme, this one applies to the work of all psychologists. Although it seems mostly related to psychologists who are actively conducting research, it also includes psychologists who are consuming research, including health service psychologists, teachers, and students of psychology.

First, let’s break this theme down into its two components:

  1. Relies on empirical evidence
  2. Adapts as new data develop

Psychological science relies on empirical evidence

Western psychology has relied heavily on quantitative research (using numbers and statistics) and the scientific approach to build its theoretical foundations. There are three basic features of the scientific approach.

 

  1. Research strives to be systematic – As Lillard suggested, researchers must be methodical and consistent in their methods of inquiry so they can trust their results.
  2. Research is focused on an empirical question – In the pretend-play example, the question was “Is pretend play related to social skill development?” Empirical questions can be answered through observation and experience.
  3. Research creates public knowledge – Psychologists, like other scientists, publish their work. This usually means writing an article in a peer-reviewed professional journal targeted for other scholars. The article explains their research question in the context of previous research, describes the methods used to answer their question, and clearly presents their results and conclusions.

 

Psychological science adapts as new data develop

Psychological knowledge is mostly developed through theories. A theory is a well-developed set of ideas that proposes an explanation for behaviors we observe in the world, and theories are often in a continuous cycle of development and testing. Theories are typically based on multiple studies conducted by different researchers over many years. No single study can lead to full understanding; we must consider different contexts and populations using a variety of methods. Additionally, despite researchers’ best efforts, sometimes methods are flawed and/or conclusions are incomplete. Publishing the research allows others in the scientific community to contribute to the conversation, detect and correct errors, and develop new ways of answering the question. Over time, theories become more refined and accurate.

As we can see in Figure 4.1, knowledge that is built through the scientific process often changes over time.

 

[Figure: the scientific method as a cycle – Theory → Hypothesis → Design the Study → Collect & Analyze Data → Summarize Data and Report Findings]
Fig 4.1 The Scientific Method

 

Psychological scientists ask empirical questions and make predictions (hypotheses) derived from theories and then test those hypotheses. If the results are consistent with what was expected, then the theory is supported. If the results are not consistent, then the theory might need modification and further studies are needed.

Theories are too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory. A hypothesis is a testable prediction. For example, in a 2014 study by Mueller and Oppenheimer,[2] researchers hypothesized that when learners take notes by hand, it leads to better learning than taking notes on a laptop. Then they set out to test their prediction by collecting and analyzing data. They found support for their hypothesis, which sent a ripple through the education world. Following this, other researchers set out to replicate these results – some in controlled research labs, and others in live classrooms.[3] Some results suggested that the way notes are taken is less important than how deeply the information is processed. This further contributes to the theories about how people learn. This is good news for folks who struggle with physically writing – they may be able to learn just as well as someone who is writing longhand, as long as they process the information in a deep way and avoid distraction.

[Photo: a hand holding a pen writes in a notebook beside a laptop keyboard; a sign in the background reads “Life is a Journey Enjoy It”]

Adding an additional layer to this is the context in which theories are developed and research is conducted. Consider technology such as predictive text and tools to edit and organize your typed notes. Since the time the Mueller & Oppenheimer studies were conducted, technology has rapidly advanced, and the ways that students interact with technology and note-taking have changed quite a bit. Although the results of the 2014 study are considered robust, they may not be fully generalizable to today’s learners. This is why psychological science must continually revisit theories and adapt as new data develop. A recently published meta-analysis (a method where the results of many studies are aggregated) suggests that handwritten notes are superior to typed notes for learning, but calls for more research.[4]
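The core idea behind aggregating studies can be illustrated with a toy calculation. This is a simplified, invented sketch with made-up effect sizes, not data from the meta-analysis cited above; real meta-analyses typically weight each study by the inverse of its variance rather than by raw sample size.

```python
def weighted_mean_effect(effects, sample_sizes):
    """Aggregate per-study effect sizes into one overall estimate,
    giving larger studies more influence (a simplified stand-in for
    the weighting used in a fixed-effect meta-analysis)."""
    total_n = sum(sample_sizes)
    return sum(d * n for d, n in zip(effects, sample_sizes)) / total_n

# Invented effect sizes (d) and sample sizes from three hypothetical studies
effects = [0.4, 0.1, 0.25]
sizes = [30, 120, 60]
print(round(weighted_mean_effect(effects, sizes), 3))  # → 0.186
```

Note how the 120-person study pulls the overall estimate toward its smaller effect; a single small study with a dramatic result cannot dominate the aggregate.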

The debate around “laptop versus longhand” is not quite settled. Perhaps, however, this is no longer the most relevant question. Maybe we should be asking how technology and handwritten notes interact for optimal learning. The effect of instructor-created supplements such as slides and lecture outlines has not been studied, nor have unique learner characteristics such as note-taking skill and motivation. Further, many college instructors no longer rely on lecture plus student note-taking as the primary activity in class. Student-driven discussion, small group activities, and physical movement are additional forms of learning. As with most evidence-based decisions, when instructors make classroom policies or students choose a note-taking modality, they must consider the research as well as teaching style, learning goals, accessibility, and other individual differences in how humans learn.

 

Link to Learning

Applying the scientific method to our research is called “empiricism” and is only one way of knowing about the world. When you hear the results of a study that don’t match your personal experience, rather than dismissing the research, you might remind yourself that the research is one way of knowing, and lived experience is another way of knowing. For more about ways of knowing in psychology, check out this video resource: Video activity on ways of knowing

 

 

Determining Cause and Effect

Did the results of the note-taking study surprise you at first, or did you feel skeptical about it? You might have thought: doesn’t learning depend on a lot of different factors, including a person’s attention and desire to learn, time spent studying, their prior knowledge, or life circumstances? What about the topic or level of the class and one’s personal interests? What if some people were bored by the lectures and didn’t pay attention? Wouldn’t people who are having a bad day perform worse on the quizzes?

You are absolutely right and you are thinking like a psychologist! All of these factors (and many more) can easily affect a person’s performance on quiz questions. So, how can we take all of these different variables into account and be sure that it is the different note-taking that is causing the change in quiz scores?

Although there are many valid ways of learning and knowing about the world, if we want to understand cause and effect, we need to use a specific kind of empiricism called an experimental design to control for those variables that can get in the way of the one we want to study. There are two major required features of an experimental research design: random assignment and manipulation or control of variables. The Mueller and Oppenheimer (2014) study is an example of this.

In random assignment, participants do not pick which group they are in; the experimenter assigns them to a particular condition or group randomly. This might be based on the flip of a coin, the roll of a die, or the generation of a random number by a computer. Random assignment makes the groups relatively similar on all characteristics except the one the experimenter wants to manipulate. In Mueller and Oppenheimer’s study, participants were randomly assigned to either computer or longhand note-taking; because of that random assignment, there should be roughly equal numbers of people with strong attention skills, and equal numbers of people having a bad day, in each group.
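The logic of random assignment can be sketched in a few lines of code. This is a toy illustration, not the procedure from the study; the participant labels and condition names are invented.

```python
import random

def randomly_assign(participants, conditions=("laptop", "longhand")):
    """Shuffle participants, then deal them into conditions round-robin,
    so group membership is decided by chance rather than by choice."""
    shuffled = participants[:]  # copy so the original list is untouched
    random.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Six hypothetical participants assigned to two note-taking conditions
groups = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"])
print({condition: len(members) for condition, members in groups.items()})
# → {'laptop': 3, 'longhand': 3}
```

Because the shuffle is random, any pre-existing difference among participants (attention skill, mood, prior knowledge) should be spread roughly evenly across the groups.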

The requirement of manipulation or control means researchers are able to give each group a different task, or change the conditions under which participants are doing the task, so they can compare participants’ performance on some measurement. In the note-taking study, the researchers manipulated whether students took notes on a computer or by hand (the independent variable) and measured the outcome (the dependent variable). Because these researchers manipulated the conditions by randomly assigning students to computer or longhand notes, they can make some causal claims about note-taking affecting test scores in those experimental conditions. In contrast, in other types of research design, claims about cause and effect are much weaker because we don’t have experimental control over the groupings.

How could the note-taking effect be tested in a more realistic setting? If you said “test it in a real classroom,” you’re thinking like a psychologist again! Students typically choose their own method of note-taking, so studying “reality” might mean that rather than assigning the groups, we study the groupings that already exist to compare laptop to longhand. The study design used here is known as a quasi-experiment. Even though a quasi-experimental design is similar to an experimental design (i.e., the researcher controls or manipulates important variables), because there is no random assignment you cannot reasonably draw the same conclusions you could with an experimental design. However, quasi-experiments are still very useful and often necessary when we want to gain information about groups of people that already exist and cannot be manipulated, such as culture, income, religion, or family history.

It is sometimes difficult to know whether experimental findings would apply in real-world settings. This is one of the reasons why it is important that psychology as a discipline includes multiple types of research methods to build knowledge about the mind and behavior. In another type of design, correlational studies, scientists do not intervene or manipulate conditions. Instead, they passively observe and measure variables and identify patterns of relationships between them. So, they can make predictions about relationships between two variables, but they cannot make conclusions about cause and effect.

We call this the golden rule of research: correlation does not equal causation. For example, if researchers collected data in a real classroom, recorded the note-taking methods, and compared information about grades, but did not intervene at all, that data would inform the theory, but we could not make a cause-and-effect claim based on it. Correlational studies show the strength and direction of the relationship between two constructs or variables of interest, but not whether one causes the other.
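What “strength and direction” means can be made concrete with a small computation. The data below are entirely invented for illustration; they echo the pretend-play example but are not from any real study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: the sign gives the direction of a
    linear association, and values near -1 or +1 indicate a strong one."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented data: weekly hours of pretend play vs. a social-skills score
play_hours = [1, 2, 3, 4, 5, 6]
skill_score = [52, 55, 61, 60, 68, 70]
r = pearson_r(play_hours, skill_score)
print(round(r, 2))  # → 0.97, a strong positive association
```

Even a correlation this strong only supports an association claim; a third variable (say, parental involvement) could drive both play and social skills, which is exactly why the golden rule holds.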

 

 

Analysis Questions

To assess the strength of claims made based on empirical research, ask yourself the following:

 

  • What was the question being addressed by the investigator(s)?
  • Did the study design allow the researchers to answer the question?
  • What were the results and conclusions regarding the question?

 

Fig. 4.2 Research Design Decision Tree. Text version linked here.

 

The Replication Crisis

Successful replications of published research studies make psychologists (and scientists in general) more confident in those findings, but repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists in different directions. For example, we might be eager to take a new drug if a study shows it improves people’s memories. But if other scientists cannot replicate the results, we would question the original study’s claims.

Replication is an important tool in “adapting as new data develop.” When results don’t replicate, we ask why, and we can design more research to answer that question. Replication is not the same as “further research.” In replication, scientists attempt to repeat the exact methods and study design of the original research. In further research, scientists build on what has already been done and on the questions that arise from prior research. In almost all cases, one study cannot make definitive conclusions about a research question. At best, results from one study apply to the participants in that study (the sample). If we want to say something about a population of people, we need evidence from many studies; we call this generalization of results. Replication studies are one piece of the puzzle.

In recent years, concerns about a “replication crisis” have grown. The crisis has affected several scientific fields, including psychology. Some well-known scientists have produced research that has failed to replicate in other research labs.[6] In some cases, this might be partially explained by a lack of transparency; there may be important details missing from the original research article that make the “recipe” hard to follow precisely. Also, sample sizes (numbers of participants) in the original studies were often too small and not representative of the general population. There is also a publication bias in psychology, where journals typically publish only significant results that support hypotheses, whereas studies that fail to show significant differences are not made public.

Finally, although psychologists are bound by a code of ethics, a very small number of researchers have falsified their results—this is the scientific equivalent of fake news. When false reports are discovered, the original published research articles are often retracted from the scientific journals. Unfortunately, fake scientific news can sometimes have very disturbing and widespread consequences even after it is retracted. A critical example of this in medicine comes from false data about vaccine side effects. In 1998, a team of doctors in the UK led by Andrew Wakefield published a study on 12 children in a highly influential journal called The Lancet[7] claiming that autism was linked to the measles, mumps, and rubella (MMR) vaccine. The article has since been retracted, and the main author, Wakefield, has been discredited by indisputable evidence that he falsified the data. He also failed to get proper consent from the participants and their parents. Since then, many, many other studies have overwhelmingly shown that vaccination is NOT linked to autism. However, Wakefield’s article got a lot of media attention, and parents became reluctant to vaccinate their children, despite the fact that measles can result in blindness, encephalitis (swelling of the brain), pneumonia, and even death. This legacy continues today; increasing numbers of children are unvaccinated, and the prevalence of measles is rising dramatically in Europe and the USA.[8]

 

Bringing it all together

From Lillard’s critical examination of pretend-play research to the ongoing debates about note-taking methods, we see how psychological science constantly reevaluates its methods and conclusions. Psychology is a living, evolving discipline. The scientific process—with its emphasis on empirical questions, systematic research, and public knowledge—combined with human creativity and critical thinking, ensures that theories evolve over time. The replication crisis, while challenging, demonstrates the field’s commitment to self-correction and improvement. By embracing new data, refining methodologies, and addressing biases, psychological science continues to strive for more accurate and reliable insights into human behavior and cognition. This adaptive nature strengthens the field and reminds us of the importance of critical thinking, tolerance for ambiguity, and the ongoing nature of scientific inquiry.

The Unit 2 Supplement on the following pages describes various approaches to scientific inquiry, including some you may not hear about in your typical classes.

 

CHAPTER VOCABULARY

 

  • Control of variables
  • Correlational study
  • Dependent variable
  • Direct replication study
  • Empirical question
  • Experimenter bias
  • Generalization
  • Hypothesis
  • Independent variable
  • Longitudinal study
  • Negative correlation
  • Participant
  • Phenomenological approach
  • Quantitative research
  • Quasi-experiment
  • Random assignment
  • Replication
  • Replication crisis
  • Systematic
  • Theory

 

License


4 | The Psychological Scientist Copyright © 2024 by Alison Heinhold Melley and Anna Sofia Caruso is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.