Decision-Making

Decision-Making Systems

The human brain processes information for decision-making using one of two routes: a reflective system and a reactive (or reflexive) system.[1],[2] The reflective system, also referred to as System 2 thinking (Video 1), is logical, analytical, deliberate, and methodical, while the reactive system, also referred to as System 1 thinking (Video 1), is quick, impulsive, and intuitive, relying on emotions or habits to provide cues for what to do next. Research in neuropsychology suggests that the brain can only use one system at a time for processing information[3] and that the two systems are directed by different parts of the brain. The prefrontal cortex is more involved in the reflective system, and the basal ganglia and amygdala (more primitive parts of the brain, from an evolutionary perspective) are more involved in the reactive system.[4]

Video 1: Daniel Kahneman: Thinking Fast vs. Thinking Slow. Closed captioning is available.

Reactive Decision-Making

We tend to assume that the logical, analytical route leads to superior decisions, but whether this is accurate depends on the situation. The quick, intuitive route can be lifesaving; when we suddenly feel intense fear, a fight-or-flight response kicks in that leads to immediate action without methodically weighing all possible options and their consequences. Additionally, experienced decision-makers can often make decisions very quickly because experience or expertise has taught them what to do in a given situation. These decision-makers might not be able to explain the logic behind their decision, and will instead say they just went with their “gut,” or did what “felt” right. Because the decision-maker has faced a similar situation in the past and has figured out how to deal with it, the brain shifts immediately to the quick, intuitive decision-making system.[5]

Reflective Decision-Making

The quick route is not always the best decision-making path to take, however. When faced with novel and complex situations, it is better to process available information logically, analytically, and methodically. As a decision-maker, you need to think about whether a situation requires not a fast, “gut” reaction, but some serious thought prior to making a decision. It is especially important to pay attention to your emotions, because strong emotions can make it difficult to process information rationally.

Successful decision-makers recognize the effects of emotions and know to wait, addressing a volatile situation only after their emotions have calmed down. Intense emotions—whether positive or negative—tend to pull us toward the quick, reactive route of decision-making. Have you ever made a large “impulse” purchase that you were excited about, only to regret it later? This speaks to the power our emotions exert on our decision-making. Big decisions should generally not be made impulsively, but reflectively.

Types of Decisions

In the context of work, we generally have limited time and must use that time wisely to be effective. Thus, it is important for us to distinguish between decisions that can have structure and routine applied to them (called programmed decisions) and decisions that are novel and require thought and attention (nonprogrammed decisions).

Programmed Decisions

Programmed decisions are those that are repeated over time and for which an existing set of rules can be developed to guide the process. These decisions might be simple, or they could be fairly complex, but the criteria that go into making the decision are known or can at least be estimated with a reasonable degree of accuracy. For example, deciding how many raw materials to order should be a programmed decision based on anticipated production, existing stock, and anticipated length of time for the delivery of the final product. As another example, consider a retail store manager developing the weekly work schedule for part-time employees. A manager must consider how busy the store is likely to be, taking into account seasonal fluctuations in business. Then, they must consider the availability of the workers by taking into account requests for vacation and for other obligations that employees might have (such as school). Establishing the schedule might be complex, but it is still a programmed decision: it is made on a regular basis based on well-understood criteria, so structure can be applied to the process.
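To see how structure can be applied, the raw-materials example can be written down as an explicit rule. The sketch below is a hypothetical Python illustration (the function name, parameters, and safety-stock buffer are our own assumptions, not a formula from the text): it orders enough material to cover anticipated production over the delivery lead time, less what is already in stock.

```python
def order_quantity(weekly_production: float,
                   existing_stock: float,
                   lead_time_weeks: float,
                   safety_stock: float = 0.0) -> float:
    """A programmed decision rule (hypothetical): cover production during
    the delivery lead time, subtract what is on hand, and add a buffer."""
    demand_during_lead_time = weekly_production * lead_time_weeks
    needed = demand_during_lead_time + safety_stock - existing_stock
    return max(needed, 0.0)  # never place a negative order

# Example: 500 units/week, 2-week delivery, 300 units on hand, 100-unit buffer
print(order_quantity(500, 300, 2, safety_stock=100))  # -> 800.0
```

Because every input is known or can be estimated with reasonable accuracy, the same rule can be handed to a new decision-maker and applied week after week.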

For programmed decisions, individuals often develop heuristics, or mental shortcuts, to help reach a decision. For example, a retail store manager may not know how busy the store will be the week of a big sale, but might routinely increase staff by 30% every time there is a big sale (because this has been fairly effective in the past). Heuristics are efficient—they save time for the decision-maker by generating an adequate solution quickly. Heuristics don’t necessarily yield the optimal solution—deeper cognitive processing may be required for that. However, they generally yield a good solution. Heuristics are often used for programmed decisions, because experience in making the decision over and over helps the decision-maker know what to expect and how to react. Programmed decision-making can also be taught fairly easily to another person. The rules and criteria, and how they relate to outcomes, can be clearly laid out so that a good decision can be reached by the new decision-maker. Programmed decisions are also sometimes referred to as routine or low-involvement decisions because they don’t require in-depth mental processing to reach a decision.
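The staffing heuristic just described can likewise be captured in a single line of logic. This is only a sketch with made-up numbers (the 30% figure comes from the example above; the function name is ours): it shows why heuristics are fast, and also why they are merely adequate rather than optimal.

```python
def staff_for_week(base_staff: int, big_sale: bool) -> int:
    """Heuristic (illustrative): add 30% staff during a big-sale week.
    Quick and usually good enough, but not guaranteed to be optimal."""
    return round(base_staff * 1.3) if big_sale else base_staff

print(staff_for_week(10, big_sale=True))   # -> 13
print(staff_for_week(10, big_sale=False))  # -> 10
```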

Nonprogrammed Decisions

In contrast, nonprogrammed decisions are novel, unstructured decisions that are generally based on criteria that are not well-defined. With nonprogrammed decisions, information is more likely to be ambiguous or incomplete, and the decision-maker may need to exercise some thoughtful judgment and creative thinking to reach a good solution. These are also sometimes referred to as nonroutine decisions or as high-involvement decisions because they require greater involvement and thought on the part of the decision-maker. For example, consider a manager trying to decide whether or not to adopt a new technology. There will always be unknowns in situations of this nature. Will the new technology really be better than the existing technology? Will it become widely accepted over time, or will some other technology become the standard? The best the manager can do in this situation is to gather as much relevant information as possible and make an educated guess as to whether the new technology will be worthwhile. Clearly, nonprogrammed decisions present the greater challenge.

The Decision-Making Process

While decision-makers can use mental shortcuts with programmed decisions, they should use a systematic process with nonprogrammed decisions. The decision-making process is illustrated in Figure 1 and can be broken down into a series of six steps:

  1. Recognize that a decision needs to be made.
  2. Generate multiple alternatives.
  3. Analyze the alternatives.
  4. Select an alternative.
  5. Implement the selected alternative.
  6. Evaluate its effectiveness.

While these steps may seem straightforward, decision-makers often skip steps or spend too little time on some steps. In fact, sometimes decision-makers refuse to acknowledge a problem exists (Step 1) because they aren’t sure how to address it. We’ll discuss the steps more at the end of this chapter, when we review ways to improve the quality of decision-making.

A flowchart shows the six steps in the decision-making process.
Figure 1: The decision-making process.

You may notice similarities between the two systems of decision-making in our brains (reactive and reflective) and the two types of decisions (programmed and nonprogrammed). Nonprogrammed decisions will generally need to be processed via the reflective system (System 2) in our brains in order for us to reach a good decision. But with programmed decisions, heuristics can allow decision-makers to switch to the quick, reactive system (System 1) and then move along quickly to other issues.

Bounded Rationality

In his Nobel Prize–winning work, psychologist Herbert Simon[6],[7] argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision-makers to accurately make the best choice based on the information that is available.

About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman[8],[9],[10] produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Video 2). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision-making.

Video 2: Cognitive Bias. Closed captioning is available. Click HERE to read a transcript.

Barriers to Effective Decision-Making

Confirmation Bias

The brain excels at organizing information into categories, and it doesn’t like to expend the effort to re-arrange once the categories are established. As a result, we tend to pay more attention to information that confirms our existing beliefs and less attention to information that is contrary to our beliefs, a shortcoming that is referred to as confirmation bias (Video 3).[11]

Video 3: Confirmation Bias. Closed captioning is available. Click HERE to read a transcript.

In fact, we don’t like our existing beliefs to be challenged. Such challenges feel like a threat, which tends to push our brains towards the reactive system and prevent us from being able to logically process the new information via the reflective system. It is hard to change people’s minds about something if they are already confident in their convictions. So, for example, when a manager hires a new employee who they like and are convinced is going to be excellent, they will tend to pay attention to examples of excellent performance and ignore examples of poor performance (or attribute those events to things outside the employee’s control). The manager will also tend to trust that employee and therefore accept their explanations for poor performance without verifying the truth or accuracy of those statements. The opposite is also true; if we dislike someone, we tend to pay attention to their negatives and ignore or discount their positives. We are less likely to trust them or believe what they say at face value. This is why politics tend to become very polarized and antagonistic within a two-party system. It can be very difficult to have accurate perceptions of both those we like and those we dislike. The effective decision-maker will try to evaluate situations from multiple perspectives and gather multiple opinions to offset this bias when making decisions.

Anchoring Bias

The anchoring bias refers to our tendency to rely on initial values, prices, or quantities when estimating the actual value, price, or quantity of something. If you are presented with a quantity, even if that number is arbitrary, you will have a hard time discounting it in your subsequent calculations; the initial value “anchors” subsequent estimates. For instance, Tversky and Kahneman[12] reported an experiment in which subjects were asked to estimate the percentage of African nations in the United Nations. First, the experimenters spun a wheel in front of the subjects that produced a random number between 0 and 100. Subjects were asked whether the percentage of nations was higher or lower than that number, and were then asked to estimate the actual percentage. Even though the initial anchoring value was random, people in the study found it difficult to deviate far from it. For subjects receiving an initial value of 10, the median estimate was 25 percent, while for subjects receiving an initial value of 65, the median estimate was 45 percent.

In the same paper, Tversky and Kahneman described the way that the anchoring bias interferes with statistical reasoning. In a number of scenarios, subjects made irrational judgments about statistics because of the way the question was phrased (i.e., they were tricked when an anchor was inserted into the question). Instead of expending the cognitive energy needed to solve the statistical problem, subjects were much more likely to “go with their gut,” or think intuitively. That type of reasoning generates anchoring bias. To counter the anchoring bias, the effective decision-maker will resist the urge to latch on to the first thought that jumps into their head, and will try to think the problem through with all the cognitive resources at their disposal.

Availability Heuristic

The availability heuristic refers to the tendency to evaluate new information based on the most recent or most easily recalled examples. The availability heuristic occurs when people take easily remembered instances as being more representative than they objectively are (i.e., based on statistical probabilities). In very simple situations, the availability of instances is a good guide to judgments. Suppose you are wondering whether you should plan for rain. It may make sense to anticipate rain if it has been raining a lot in the last few days since weather patterns tend to linger in most climates. More generally, scenarios that are well-known to us, dramatic, recent, or easy to imagine are more available for retrieval from memory. Therefore, if we easily remember an instance or scenario, we may incorrectly think that the chances are high that the scenario will be repeated. For instance, people may overestimate the probability of dying in a commercial plane crash. In fact, these are extremely rare occurrences compared to death by other modes of transportation. But stories of commercial plane crashes are featured prominently in the news when they occur. Because these instances are dramatic and easily recalled, we have a skewed view of how frequently they occur.

Sunk Cost Fallacy

Sunk costs refer to the time, energy, money, or other costs that have been expended in the past. These costs are “sunk” because they cannot be recovered. The sunk-cost fallacy is the tendency to attach greater value to things in which you have already invested resources than those things actually have today. Human beings have a natural tendency to become attached to whatever they invest in and are resistant to giving something up even after it has been proven to be a liability. For example, a person may have invested money into a business over time, and the business may clearly be failing. Nonetheless, the businessperson will be reluctant to close shop or sell the business because of the time, money, and emotional energy they have spent on the venture.
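Because the fallacy is ultimately an arithmetic mistake, it can help to write the comparison out. The numbers below are our own hypothetical illustration: the rational, forward-looking calculation compares only future costs and benefits, and the amount already invested appears nowhere in it.

```python
# Hypothetical numbers for the failing-business example.
already_invested = 200_000  # sunk: unrecoverable whether we continue or stop
future_costs = 80_000       # cost of operating for another year
future_revenue = 50_000     # expected return from operating another year

# Rational comparison: look forward only. The sunk cost changes nothing,
# because the 200,000 is lost in either case.
value_of_continuing = future_revenue - future_costs
print(value_of_continuing)  # -> -30000, so the rational choice is to stop
```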

Escalation of Commitment

Closely related to the sunk cost fallacy, escalation of commitment is the tendency of decision-makers to remain committed to poor decisions, even when doing so leads to negative outcomes. It can be thought of as the behavioral manifestation of the sunk cost fallacy. Once we commit to a decision, we may find it difficult to reevaluate that decision rationally. It can seem easier to “stay the course” than to admit (or to recognize) that a decision was poor. Escalation of commitment is the behavior of “throwing good money after bad” by continuing to invest in something that has lost its worth because of emotional attachment to a failed or failing cause. It’s important to acknowledge that not all decisions are going to be good ones, in spite of our best efforts.

Why does escalation of commitment occur? There may be many reasons, but two are particularly important. First, decision-makers may not want to admit that they were wrong. This may be because of personal pride or being afraid of the consequences of such an admission. Second, decision-makers may incorrectly believe that spending more time and energy might somehow help them recover their losses. Effective decision-makers recognize that progress down the wrong path isn’t really progress, and they are willing to reevaluate decisions and change direction when appropriate. Implementing strict turn-back points or assigning different decision-makers for different stages of a decision are additional techniques to counter escalation of commitment.

Gambler’s Fallacy

Another type of faulty reasoning that is closely related to the sunk-cost fallacy is the gambler’s fallacy, in which a person reasons that future chance events will be more likely if they have not happened recently. For instance, if I flip a coin many times in a row, I may get a string of heads. But even if I flip several heads in a row, that does not make it more likely I will flip tails on the next coin flip. Each coin flip is statistically independent, and there is an equal chance of turning up heads or tails. The gambler, like the reasoner from sunk costs, is tied to the past when they should be reasoning about the present and future.
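The independence claim is easy to check empirically. Below is a minimal simulation (our own illustration, not from the text): it estimates the probability of flipping heads immediately after a run of three heads, which stays near 0.5 no matter how long the preceding streak.

```python
import random

def prob_heads_after_streak(streak_len: int = 3, flips: int = 1_000_000) -> float:
    """Estimate P(heads | the previous streak_len flips were all heads).
    Independence predicts roughly 0.5 regardless of streak_len."""
    random.seed(42)  # fixed seed so the illustration is reproducible
    results = [random.random() < 0.5 for _ in range(flips)]  # True = heads
    after_streak = [results[i] for i in range(streak_len, flips)
                    if all(results[i - streak_len:i])]
    return sum(after_streak) / len(after_streak)

print(prob_heads_after_streak())  # -> approximately 0.5
```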

Framing Bias

Framing bias refers to the tendency to be influenced by the way that a situation or problem is presented (Video 4). For example, when making a purchase, customers find it easier to forgo a discount than to accept a surcharge, even though the two might cost them exactly the same amount of money (a $5 surcharge on a $100 purchase and a forgone $5 discount on a $105 purchase both result in a $105 price). Similarly, customers tend to prefer a statement such as “85 percent lean beef” as opposed to “15 percent fat.”[13] It is important to be aware of this tendency, because depending on how a problem is presented to us, we might choose an alternative that is disadvantageous simply because of the way it is framed.

Video 4: Framing. Closed captioning is available.

Attribution Theory

A major influence on how people behave is the way they interpret the events around them. People who feel they have control over what happens to them are more likely to accept responsibility for their actions than those who feel control of events is out of their hands. The cognitive process by which people interpret the reasons or causes for their behavior is described by attribution theory.[14] Specifically, “attribution theory concerns the process by which an individual interprets events as being caused by a particular part of a relatively stable environment.”[15]

Attribution theory is based largely on the work of Fritz Heider. Heider argues that behavior is determined by a combination of internal forces (e.g., abilities or effort) and external forces (e.g., task difficulty or luck), and that it is these perceived determinants, rather than the actual ones, that influence behavior. Hence, if employees perceive that their success is a function of their own abilities and efforts, they can be expected to behave differently than they would if they believed job success was due to chance.

The Attribution Process

The underlying assumption of attribution theory is that people are motivated to understand their environment and the causes of particular events. If individuals can understand these causes, they will then be in a better position to influence or control the sequence of future events. This process is diagrammed in Figure 2.

Figure 2: The general attribution process.

Specifically, attribution theory suggests that particular behavioral events (e.g., being promoted) are analyzed by individuals to determine their causes. This process may lead to the conclusion that the promotion resulted from the individual’s own effort or, alternatively, from some other cause, such as luck. Based on such cognitive interpretations of events, individuals revise their cognitive structures and rethink their assumptions about causal relationships. For instance, an individual may infer that performance does indeed lead to promotion. Based on this new structure, the individual makes choices about future behavior. In some cases, the individual may decide to continue exerting high levels of effort in the hope that it will lead to further promotions. On the other hand, if an individual concludes that the promotion resulted primarily from chance and was largely unrelated to performance, a different cognitive structure might be created, and there might be little reason to continue exerting high levels of effort. In other words, the way in which we perceive and interpret events around us significantly affects our future behaviors.

Attribution Biases

One final point should be made with respect to the attributional process. In making attributions concerning the causes of behavior, people tend to make certain errors of interpretation. Two such errors, or attribution biases, should be noted here. The first is called the fundamental attribution error (Video 5). This error is a tendency, when assessing another person’s actions or behavior, to underestimate the effects of external or situational causes and to overestimate the effects of internal or personal causes. For example, if we observe a major problem within another department, we are more likely to blame people rather than events or situations.

Video 5: Fundamental Attribution Error. Closed captioning is available. Click HERE to read a transcript.

The second error in attribution processes is called the self-serving bias (Video 6). There is a tendency, not surprisingly, for individuals to attribute the success of an event or project to their own actions while attributing failure to others. Hence, we often hear sales representatives saying, “I made the sale,” but “They stole the sale from me” rather than “I lost it.” Considered together, the fundamental attribution error and the self-serving bias help explain why employees looking at the same event often “see” substantially different things.

Video 6: Self-serving Bias. Closed captioning is available. Click HERE to read a transcript.

Interpersonal Biases

Similar-to-me Bias

One of the most common biases is the tendency to like other people who we think are similar to us (Video 7).[16] While these similarities can be observable (based on demographic characteristics such as race, gender, and age), they can also be a result of shared experiences (such as attending the same university) or shared interests (such as being in a club or on a sports team together). This similar-to-me bias, also known as the similarity bias or the affinity bias, can lead to a variety of problems within organizations, such as hiring less-qualified applicants because they are similar to the manager in some way, or paying more attention to some employees’ opinions and ignoring or discounting others. 

Video 7: Blind Spots: Broaden Perspectives. Closed captioning is available.

Self-Fulfilling Prophecy and Stereotypes

A self-fulfilling prophecy is an expectation held by a person that alters their behavior in a way that tends to make it true. For example, when we hold stereotypes about a person, we tend to treat the person according to our expectations. This treatment can influence the person to act according to our stereotypic expectations, thus confirming our stereotypic beliefs. Research by Rosenthal and Jacobson found that disadvantaged students whose teachers expected them to perform well had higher grades than disadvantaged students whose teachers expected them to do poorly.[17]

Consider this example of cause and effect in a self-fulfilling prophecy: If an employer expects a job applicant to be incompetent, the potential employer might treat the applicant negatively during the interview by engaging in less conversation, making little eye contact, and generally behaving coldly toward the applicant.[18] In turn, the job applicant will perceive that the potential employer dislikes him, and he will respond by giving shorter responses to interview questions, making less eye contact, and generally disengaging from the interview. After the interview, the employer will reflect on the applicant’s behavior, which seemed cold and distant, and the employer will conclude, based on the applicant’s poor performance during the interview, that the applicant was in fact incompetent. Do you think this job applicant is likely to be hired?

Another dynamic that can reinforce stereotypes is confirmation bias. When interacting with the target of our prejudice, we tend to pay attention to information that is consistent with our stereotypic expectations and ignore information that is inconsistent with our expectations. Furthermore, we tend to seek out information that supports our stereotypes or pre-existing beliefs and ignore information that is inconsistent with our stereotypes or pre-existing beliefs.[19] In the job interview example, the employer may not have noticed that the job applicant was friendly and engaging, and that he provided competent responses to the interview questions in the beginning of the interview. Instead, the employer focused on the job applicant’s performance in the later part of the interview, after the applicant changed his demeanor and behavior to match the interviewer’s negative treatment. 

In-Groups and Out-Groups

We all belong to gender, race, age, and socioeconomic groups. These groups provide a powerful source of identity and self-esteem and serve as our in-groups.[20] An in-group is a group that we identify with or see ourselves as belonging to. A group that we don’t belong to, or an out-group, is a group that we view as fundamentally different from us (Video 8). Because we often feel a strong sense of belonging and emotional connection to our in-groups, we develop in-group bias: a preference for our own group over other groups. This in-group bias can result in prejudice and discrimination because the out-group is perceived as different and is less preferred than our in-group.

Video 8: In-group/Out-group. Closed captioning is available. Click HERE to read a transcript.

One function of prejudice is to help us feel good about ourselves and maintain a positive self-concept. This need to feel good about ourselves extends to our in-groups: we want to feel good and protect our in-groups. We seek to resolve threats individually and at the in-group level. This often happens by blaming an out-group for the problem. Scapegoating is the act of blaming an out-group when the in-group experiences frustration or is blocked from obtaining a goal.[21]

Despite the group dynamics that seem only to push groups toward conflict, there are forces that promote reconciliation between groups: the expression of empathy, the acknowledgment of past suffering on both sides, and the halt of destructive behaviors.

Techniques for Making Better Decisions

For situations in which the quality of the decision is more critical than the time spent on the decision, decision-makers can use several tactics. As stated previously, nonprogrammed decisions should be addressed using a systematic process. We therefore discuss these tactics within the context of the decision-making steps. To review, the steps include the following:

  1. Recognize that a decision needs to be made.
  2. Generate multiple alternatives.
  3. Analyze the alternatives.
  4. Select an alternative.
  5. Implement the selected alternative.
  6. Evaluate its effectiveness.

Step 1: Recognize That a Decision Needs to Be Made

Ineffective decision-makers will sometimes ignore problems because they aren’t sure how to address them. However, this tends to lead to more and bigger problems over time. Effective decision-makers will be attentive to problems and to opportunities and will not shy away from making decisions that could make their team, department, or organization more effective and more successful.

Step 2: Generate Multiple Alternatives

Often a decision-maker spends only enough time on Step 2 to generate two alternatives and then rushes to Step 3 to make a quick decision. A better solution may exist, but it is never even considered. It’s important to remember that for nonprogrammed decisions, you don’t want to rush the process. Generating many possible options will increase the likelihood of reaching a good decision. Some tactics to help with generating more options include talking to other people (to get their ideas) and thinking creatively about the problem.

Talk to Other People

Decision-makers can often improve the quality of their decision-making by involving others in the process, especially when generating alternatives. Other people tend to view problems from different perspectives because they have had different life experiences. This can help generate alternatives that you might not otherwise have considered. Talking through big decisions with a mentor can also be beneficial, especially for new decision-makers who are still learning and developing their expertise; someone with more experience will often be able to suggest more options.

Be Creative

We don’t always associate decision-making with creativity, but creativity can be quite beneficial in some situations. In decision-making, creativity can be particularly helpful when generating alternatives. Creativity is the generation of new or original ideas; it requires the use of imagination and the ability to step back from traditional ways of doing things and seeing the world. While some people seem to be naturally creative, it is a skill that you can develop. Being creative requires letting your mind wander and combining existing knowledge from past experiences in novel ways. Creative inspiration may come when we least expect it because we aren’t intensely focused on the problem—we’ve allowed our minds to wander. Decision-makers who strive to be creative will take the time to view a problem from multiple perspectives, try to combine information in new ways, search for overarching patterns, and use their imaginations to generate new solutions to existing problems.

Step 3: Analyze Alternatives

When implementing Step 3, it is important to take many factors into consideration. Some alternatives might be more expensive than others, for example, and that information is often essential when analyzing options. Effective decision-makers will ensure that they have collected sufficient information to assess the quality of the various options. They will also utilize the tactics described below: engaging in evidence-based decision-making, talking to other people, and considering long-term and ethical implications.

Evidence-based Decision-Making

Evidence-based decision-making is an approach to decision-making that states that decision-makers should systematically collect the best evidence available to help them make effective decisions. The evidence that is collected might include the decision-maker’s own expertise, but it is also likely to include external evidence, such as a consideration of other stakeholders, contextual factors relevant to the organization, potential costs and benefits, and other relevant information. With evidence-based decision-making, decision-makers are encouraged to rely on data and information rather than their intuition. This can be particularly beneficial for new decision-makers or for experienced decision-makers who are starting something new.

Talk to Other People

As mentioned previously, it can be worthwhile to get help from others when generating options. Another good time to talk to other people is while analyzing those options; other individuals in the organization may help you assess the quality of your choices. Seeking out the opinions and preferences of others is also a great way to maintain perspective, so getting others involved can help you to be less biased in your decision-making (provided you talk to people whose biases are different from your own).

Consider Long-term Implications

A focus on immediate, short-term outcomes—with little consideration for the future—can cause problems. For example, imagine that a manager must decide whether to issue dividends to investors or put that money into research and development to maintain a pipeline of innovative products. It’s tempting to just focus on the short-term: providing dividends to investors tends to be good for stock prices. But failing to invest in research and development might mean that in five years the company is unable to compete effectively in the marketplace, and as a result the business closes. Paying attention to the possible long-term outcomes is a crucial part of analyzing alternatives.

Consider Ethical Implications

It’s also important to think about whether the various alternatives available to you are better or worse from an ethical perspective, as well. Sometimes decision-makers make unethical choices because they haven’t considered the ethical implications of their actions. In the 1970s, Ford manufactured the Pinto, which had an unfortunate flaw: the car would easily burst into flames when rear-ended. The company did not initially recall the vehicle because they viewed the problem from a financial perspective, without considering the ethical implications.[22] People died as a result of the company’s inaction. Unfortunately, these unethical decisions continue to occur—and cause harm—on a regular basis in our society. Effective decision-makers strive to avoid these situations by thinking through the possible ethical implications of their decisions.

Step 4: Select an Alternative

Once alternative options have been generated and analyzed, the decision-maker must select one of the options. Sometimes this is easy—one option is clearly superior to the others. Often, however, this is a challenge because there is not a clear “winner” in terms of the best alternative. As mentioned earlier in the chapter, there may be multiple good options, and which one will be best is unclear even after gathering all available evidence. There may not be a single option that doesn’t upset some stakeholder group, so you will make someone unhappy no matter what you choose. A weak decision-maker may become paralyzed in this situation, unable to select among the various alternatives for lack of a clearly “best” option. They may decide to keep gathering additional information in hopes of making their decision easier. As a decision-maker, it’s important to think about whether the benefit of gathering additional information will outweigh the cost of waiting. If there are time pressures, waiting may not be possible.

Recognize that Perfection is Unattainable

Effective decision-makers recognize that they will not always make optimal (best possible) decisions because they don’t have complete information and/or don’t have the time or resources to gather and process all the possible information. They accept that their decision-making will not be perfect and strive to make good decisions overall. Recognizing that perfection is impossible will also help decision-makers to adjust and change if they realize later on that the selected alternative was not the best option.

Talk to Other People

This is another point in the process at which talking to others can be helpful. Selecting one of the alternatives will ultimately be your responsibility, but when faced with a difficult decision, talking through your choice with someone else may help you clarify that you are indeed making the best possible decision from among the available options. Sharing information verbally also causes our brains to process that information differently, which can provide new insights and bring greater clarity to our decision-making.

Step 5: Implement the Selected Alternative

After selecting an alternative, you must implement it. This may seem too obvious to even mention, but implementation can sometimes be a challenge, particularly if the decision is going to create conflict or dissatisfaction among some stakeholders.

Sometimes we know what we need to do but still try to avoid actually doing it because we know others will be upset—even if it’s the best solution. Follow-through is a necessity, however, if you are to be effective as a decision-maker. If you are not willing to implement a decision, it’s a good idea to engage in some self-reflection to understand why. If you know that the decision is going to create conflict, try to think about how you’ll address that conflict in a productive way. It’s also possible that we feel there is no good alternative, or that we are feeling pressured to make a decision that we know deep down is not right from an ethical perspective. These can be among the most difficult of decisions. You should always strive to make decisions that you feel good about—which means doing the right thing, even in the face of pressures to do wrong.

Step 6: Evaluate the Effectiveness of Your Decision

Decision-makers sometimes skip the last step in the decision-making process because evaluating the effectiveness of a decision takes time, and decision-makers, who are generally busy, may have already moved on to other projects. Yet evaluating effectiveness is important. When we fail to evaluate our own performance and the outcomes of our decisions, we cannot learn from the experience in a way that enables us to improve the quality of our future decisions.

Optional Resources to Learn More

Articles
Behavioral Scientist: Mental Models to Help You Cut Your Losses by Annie Duke
The Atlantic: How to Predict the Future by David Epstein
Scientific American: Of 2 Minds: How Fast and Slow Thinking Shape Perception and Choice by Daniel Kahneman
McKinsey Insights: What is Decision Making?
Books
Decisive: How to Make Better Choices in Life and Work by Chip Heath and Dan Heath
How to Decide: Simple Tools for Making Better Choices by Annie Duke
Smart Choices: A Practical Guide to Making Better Decisions by John Hammond, Ralph Keeney, and Howard Raiffa
The Checklist Manifesto: How to Get Things Right by Atul Gawande
Think Again: The Power of Knowing What You Don’t Know by Adam Grant
Thinking Fast and Slow by Daniel Kahneman
Podcasts
Re:Thinking with Adam Grant: Daniel Kahneman Doesn’t Trust Your Intuition (YouTube; transcript available)
WorkLife with Adam Grant: How to Rethink a Bad Decision (YouTube; transcript available)
Videos
California Management Review: Decision-Making in Organizations
California Management Review: Overcoming Cognitive Bias in Business
CrashCourse: How to Make Tough Decisions
Harvard Business Review: Hidden Traps in Decision Making
LearnFree: Decision Making Strategies
Outsmarting Implicit Bias: Illusions at Work
SciShow Psych: Why Is It So Hard to Make a Decision?
TED-Ed: How to Make Smart Decisions More Easily
TED-Ed: The Psychology Behind Irrational Decisions
Websites
Alliance for Decision Education (free course on decision making, and other resources)
Cognitive Biases (a list of the most relevant biases in behavioral economics) https://thedecisionlab.com/biases
Mental Model Practices (techniques for better decision making)

Chapter Attribution

This chapter incorporates material from the following sources:

Bazerman, M. H. (2024). Judgment and decision making. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. http://noba.to/9xjyvc3a. Licensed with CC BY-NC-SA 4.0.

Chapters 3 and 6 of Black, J. S. & Bright, D. S. (2019). Organizational behavior. OpenStax. https://openstax.org/details/books/organizational-behavior. Licensed with CC BY 4.0.

Chapter 2 of Bright, D. S. & Cortes, A. H. (2019). Principles of management. OpenStax. https://openstax.org/books/principles-management/pages/2-introduction. Licensed with CC BY 4.0.

Chapter 2 of Smith, N. (2022). Introduction to philosophy. OpenStax. https://openstax.org/books/introduction-philosophy/pages/2-introduction. Licensed with CC BY 4.0.

Chapter 12 of Spielman, R. M., Jenkins, W. J., & Lovett, M. D. (2020). Psychology 2e. OpenStax. https://openstax.org/books/psychology-2e/pages/12-introduction. Licensed with CC BY 4.0.

Chapter 7 of Westmaas, L. (2022). Psychology, communication, and the Canadian workplace. https://ecampusontario.pressbooks.pub/communicationpsychology/part/chapter-7-decision-making/. Licensed with CC BY-NC-SA 4.0.

Media Attributions

Figure 1: Rice University. (2019, March 20). The decision-making process. OpenStax. https://openstax.org/books/principles-management/pages/2-3-programmed-and-nonprogrammed-decisions. Licensed with CC BY 4.0.

Figure 2: Rice University. (2019, June 5). The general attribution process. OpenStax. https://openstax.org/books/organizational-behavior/pages/3-3-attributions-interpreting-the-causes-of-behavior#ch03fig05. Licensed with CC BY-NC-SA 4.0.

Video 1: Inc. (2013, December 3). Daniel Kahneman: Thinking fast vs. thinking slow [Video]. YouTube. https://youtu.be/PirFrDVRBo4

Video 2: McCombs School of Business. (2021, January 28). Cognitive bias [Video]. YouTube. https://www.youtube.com/watch?v=TlOUnOWfw3M

Video 3: McCombs School of Business. (2021, January 28). Confirmation bias [Video]. YouTube. https://www.youtube.com/watch?v=7zoWTb3KP-k

Video 4: McCombs School of Business. (2018, December 18). Framing [Video]. YouTube. https://youtu.be/ZuMA92VGgOM

Video 5: McCombs School of Business. (2018, December 18). Fundamental attribution error [Video]. YouTube. https://www.youtube.com/watch?v=Y8IcYSrcaaA

Video 6: McCombs School of Business. (2018, December 18). Self-serving bias [Video]. YouTube. https://www.youtube.com/watch?v=NkpXMxt4f3s

Video 7: PwC. (2017, June 23). Blind spots: Broaden perspectives [Video]. YouTube. https://youtu.be/HbBTM8bJt8Q

Video 8: McCombs School of Business. (2018, December 18). In-group/out-group [Video]. YouTube. https://www.youtube.com/watch?v=AkYJOYrNiSw


  1. Facione, P. A., & Facione, N. C. (2007). Thinking and reasoning in human decision making: The method of argument and heuristic analysis. The California Academic Press.
  2. Lieberman, M. D. (2003). Reflexive and reflective judgment processes: A social cognitive neuroscience approach. In J. P. Forgas, K. D. Williams, & W. von Hippel (Eds.), Social judgments: Implicit and explicit processes (pp. 44–67). Cambridge University Press.
  3. Darlow, A. L., & Sloman, S. A. (2010). Two systems of reasoning: Architecture and relation to emotion. WIREs Cognitive Science, 1, 382–392.
  4. Darlow, A. L., & Sloman, S. A. (2010). Two systems of reasoning: Architecture and relation to emotion. WIREs Cognitive Science, 1, 382–392.
  5. Gladwell, M. (2005). Blink: The power of thinking without thinking. Back Bay Books.
  6. Simon, H. A. (1957). Models of man, social and rational: Mathematical essays on rational human behavior in a social setting. John Wiley & Sons.
  7. March, J. G., & Simon, H. A. (1958). Organizations. Wiley.
  8. Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
  9. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
  10. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
  11. Kolbert, E. (2017, February 27). Why facts don’t change our minds. The New Yorker.
  12. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
  13. Li, S., Sun, Y., & Wang, Y. (2007). 50% off or buy one get one free? Frame preference as a function of consumable nature in dairy products. Journal of Social Psychology, 147, 413–421.
  14. Kelley, H. H. (1973, February). The process of causal attributions. American Psychologist, 107–128; Forsterling, F. (1985, November). Attributional retraining: A review. Psychological Bulletin, 495–512; Weiner, B. (1980). Human motivation. Holt, Rinehart and Winston.
  15. Heider, F. (1958). The psychology of interpersonal relations. John Wiley & Sons, 297.
  16. Aberson, C. L., Healy, M., & Romero, V. (2000). Ingroup bias and self-esteem: A meta-analysis. Personality and Social Psychology Review, 4, 157–173.
  17. Rosenthal, R., & Jacobson, L. F. (1968). Teacher expectations for the disadvantaged. Scientific American, 218, 19–23.
  18. Hebl, M. R., Foster, J. B., Mannix, L. M., & Dovidio, J. F. (2002). Formal and interpersonal discrimination: A field study of bias toward homosexual applicants. Personality and Social Psychology Bulletin, 28(6), 815–825.
  19. Wason, P. C., & Johnson-Laird, P. N. (1972). The psychology of deduction: Structure and content. Harvard University Press.
  20. Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–48). Brooks-Cole.
  21. Allport, G. W., & Odbert, H. S. (1936). Trait-names: A psycho-lexical study. Psychological Review Company.
  22. Trevino, L. K., & Brown, M. E. (2004). Managing to be ethical: Debunking five business ethics myths. Academy of Management Executive, 18, 69–81.

License

Management and Organizational Behavior Copyright © by Charlotte Hoopes is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
