Moral dilemmas and emotion:
How can emotion aid or hinder moral decision-making?

Overview


This chapter explores the relationship between moral dilemmas and emotion. We all deal with moral dilemmas in our lives. Should I lie to protect someone's feelings or tell them the truth to help them? When I see someone struggling or being treated unfairly, should I intervene to help them or stay out of other people's affairs? We often think about this in rational philosophical terms: what is the right thing to do? But we rarely consider how our emotions influence the moral decisions we make. This chapter shows that understanding the role of emotions in moral decision-making can help us to make more informed moral decisions. It will do so by:

  • Introducing the concept of moral dilemmas
  • Defining emotion and introducing theories of emotion
  • Investigating the interactions between moral decision-making and emotion using psychological theory
  • Exploring the ways in which this knowledge can be used in practice to make more informed moral decisions
Focus questions:
  • What is a moral dilemma?
  • What are emotions, and how can they affect our decision-making?
  • Can emotion affect our decision-making capacity when faced with a moral dilemma?
  • Can this information benefit people and serve a meaningful purpose in day-to-day life?

What is a moral dilemma?


In moral philosophy, a moral dilemma is a situation in which a person must choose between two or more options, each of which leads them to violate one or more of their moral principles. Strictly defined, a moral dilemma has the following characteristics: there is an agent (such as an individual) whose responsibility it is to decide between two or more options, of which they may choose only one. In addition, for a genuine ethical dilemma, no available choice may be obviously weightier than the others; that is, the options must carry similar or equivalent moral weight. Ultimately, resolving a genuine ethical dilemma leads, one way or another, to moral failure (McConnell, 2018).

The following case study is an example of a genuine ethical dilemma:

Case study: Sarah is a new mother of twins, Jim and Natasha. The doctor informs Sarah that both of her children are sick and that there is only enough medicine to treat one of the twins. However, the doctor says that if an infant is given half the recommended dose, there is a 30% chance that the baby will survive. The illness is progressing rapidly, so there is no time to search for more medicine; a decision must be made now, or both twins will die. Sarah may choose to save one of the twins with certainty, or take a chance with both their lives.

How should the medicine be distributed?

This is clearly a moral dilemma. Sarah is an agent who must decide between two options. The choice is between the certainty of saving one child's life and the possibility of saving both, at the risk of losing one or both children. Moral philosophy offers several theoretical frameworks for approaching such a decision. One such theory is utilitarianism, which states that the moral decision is the one that produces the most happiness and pleasure (Kahane et al., 2018). Another is deontology, which states that individuals make moral decisions based on innate, universal principles of right and wrong, and which focuses on the intention of the individual rather than the consequences of an action (Barrow & Khandahar, 2023).

What is your intuitive response? What decision should Sarah make? Do your emotions push one way or the other?

Now, let's consider the problem further to see whether your answer changes when you engage in a more rational, philosophical analysis. Should Sarah choose the first option and administer a full dose of the medicine to either Jim or Natasha, the other child will perish. Should Sarah choose the second option and administer half a dose to each child, there are four possible outcomes:

  1. Jim and Natasha survive
  2. Jim and Natasha die
  3. Jim survives while Natasha dies
  4. Natasha survives while Jim dies

Therefore, Sarah faces certainty in the first choice, where she must decide between her two children, and uncertainty in the second, where one, both, or neither child may survive. But what exactly are the chances of survival if the half doses are administered? Using binomial probability, we can calculate the likelihood of each outcome when both children are treated (a worked calculation follows the list below):

  1. Jim and Natasha both survive - 9%
  2. Jim and Natasha both die - 49%
  3. Exactly one child survives - 42%
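
For readers who want to check these figures, here is a minimal worked calculation, assuming (as the case study states) that each child independently has a 30% chance of surviving a half dose, so p = 0.3:

\[
\begin{aligned}
P(\text{both survive}) &= p^{2} = 0.3^{2} = 0.09 \\
P(\text{both die}) &= (1-p)^{2} = 0.7^{2} = 0.49 \\
P(\text{exactly one survives}) &= \binom{2}{1}\,p\,(1-p) = 2 \times 0.3 \times 0.7 = 0.42
\end{aligned}
\]

The three outcomes are exhaustive, and their probabilities sum to 1 (0.09 + 0.49 + 0.42 = 1.00).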

Figure 1. Statue of a thinking Socrates.

After a more careful inspection of the facts, has your answer changed? If so, why? If not, why not? Each individual's response to an ethical dilemma is bound to differ on the basis of their beliefs.

One can benefit greatly from the wealth of knowledge in the moral philosophy literature. A utilitarian, for example, may argue that the morally right choice is the path by which the most pain is avoided: since the half doses carry a 49% chance that both children die, which would maximise pain and suffering, the full dose should be given to a single baby (a worked expected-value comparison follows this paragraph). Deontologists, on the other hand, may argue that the agent's innate sense of right and wrong, in this case Sarah's, will guide her to a moral decision. To help understand what choice Sarah might make, consider a study in which researchers in Norway interviewed the mothers of children who were given a life-threatening diagnosis at birth and offered a choice of whether or not to perform surgery. The families who chose surgery said they perceived it as the only correct choice to prevent their children's deaths, whereas the families who declined said they did so to prevent pain and suffering for the child (Vandvik & Forde, 2000).
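
To make the utilitarian comparison concrete, here is a minimal worked comparison, under the simplifying assumption that the utilitarian aim is to maximise the expected number of surviving children:

\[
\begin{aligned}
E[\text{survivors} \mid \text{full dose to one child}] &= 1 \\
E[\text{survivors} \mid \text{half dose to each child}] &= 2 \times 0.3 = 0.6
\end{aligned}
\]

On expected survivors alone, the full dose wins; the half-dose option, however, is the only one that offers any chance (9%) of both children surviving, which is part of what makes the dilemma genuinely difficult.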

But is there a piece of the puzzle that is missing? In the past, moral decision-making has primarily been explored through the lens of moral philosophy, that is, by asking what is the right thing to do through the application of abstract moral theories. This perspective does not engage with the fundamental impact of human emotion on all decision-making, including moral decision-making. It is well documented that moral behaviour does not always align with moral belief and that people's emotions can affect their moral behaviours and attitudes (Bentahila et al., 2021). This chapter explores the idea that psychological science regarding emotion and decision-making can be used to develop a more comprehensive understanding of moral decision-making than that offered by moral philosophy alone.

What are emotions, and how can they affect our decision-making?


While there are several definitions of emotion, this chapter uses the American Psychological Association's definition: "conscious mental reactions subjectively experienced as strong feelings, usually directed toward a specific object and usually accompanied by a physiological reaction and behavioural changes in the body" (American Psychological Association, 2021). These reactions are experienced as feelings, such as happiness or sadness, alongside a physiological response to stimuli. In the past few decades, researchers have examined how emotions affect our decision-making capabilities.

The dual process model of moral reasoning, as described by Greene et al. (2001), attempts to answer this question. The model differentiates between two processes. In the first, the individual reasons through the lens of a utilitarian: an unemotional, rational thinker capable of making a difficult but necessary decision to maximise pleasure and minimise pain. In the second, emotional interference prevents the individual from making calculated, rational decisions. The emotion observed in the majority of these studies is stress: using fMRI and cortisol measurements, researchers can objectively measure the level of stress experienced by participants, and many decisions, especially those made during moral dilemmas, induce stress.

In 2012, a study at the University of the West Indies examined stress as an emotion that affects decision-making. Student participants were recruited and separated into two groups. One group was stressed using the Trier Social Stress Test (TSST), a procedure shown to induce a stress response; the control group instead read a textbook. Participants' cortisol levels were then recorded. Next, both groups were presented with three categories of decision-making tasks: non-moral dilemmas (NM), impersonal moral dilemmas (IM), and personal moral dilemmas (PM). Each problem was presented over three slides; the first two provided context about the problem, while the third asked participants to provide a solution. At the end of the task, cortisol levels were recorded once again. While the study found no significant differences between the two groups on NM and IM problems, significant differences emerged on PM problems: stressed participants were less likely than controls to make rational, utilitarian decisions (Youssef et al., 2012).

Critics of the dual process model of moral reasoning argue that it is not clear which of the two viewpoints comes intuitively to an individual. A series of online studies of Hungarian students in Budapest attempted to examine this point (Bago & De Neys, 2019). The first study included conflict problems, in which participants were asked whether they were willing to sacrifice a smaller number of people to save a larger group, and non-conflict problems, in which participants were asked whether they were willing to sacrifice a larger group to save a smaller one. Between these conflict and non-conflict problems, a problem with an obvious solution was presented, to ensure that participants did not identify a pattern in the problems and simply answer in the same way each time. Problems were presented over two slides: the first provided background information, while the second asked participants for an answer. The researchers also used a dot-memorisation task to tie up cognitive resources and so elicit intuitive responses, and imposed a 12-second deadline to discourage participants from thinking too hard about the problems. Furthermore, to test whether participants would later change their viewpoints or be influenced by a desire to appear consistent, a one-response pretest was used, in which participants were given unlimited time to read through and answer a problem.

Finally, participants completed a two-response moral reasoning task. They were asked to read a problem and provide an answer on the following slide; the problem then appeared again, and this time they could take as much time as they needed to come up with an answer. Throughout the instructions, it was repeatedly made clear that the researchers were interested in the intuitive answer, and participants were asked not to think too hard about the problem. Participants were shown two practice questions to familiarise them with the interface, completed the dot-memorisation task, and then completed the two practice questions again. For each problem, participants rated their confidence that their answer was right on a scale of 0 to 100, for both the initial and final answers. The results of this study showed that the majority of participants' intuitive responses were utilitarian (Bago & De Neys, 2019).

Overall, the studies providing the foundation for a dual process model of moral reasoning lay out a compelling argument for its validity, using relatively similar methods to test the effects of emotion, in this case stress, on decision-making. The critique of the theory does not question whether emotion influences decision-making outcomes; instead, it targets the reliability of the theory itself. Critics ask whether the intuitive (deontological) and deliberated (utilitarian) responses can truly be assumed to be the same across all individuals. This is a fair criticism, and one that needs to be considered and addressed in future research. One avenue researchers could pursue is investigating the intuitive and deliberated responses of individuals across cultures.

What is the interaction between emotion and moral dilemmas?


The connection between emotion and moral dilemmas can be investigated through the lens of various psychological theories.

Evolutionary psychology and functionalism


Evolutionary psychology asks how the capacity for human behaviour developed, whereas functionalism asks why it developed. These two theories, while not identical, have much in common in their approach to understanding emotion and moral behaviour. One common theme in the literature is survival: the how and why of human emotion and behaviour can be understood as having developed to improve the chances of survival. For example, the fight-or-flight response, in which one responds to a threatening stimulus by either fleeing or fighting, is speculated to have developed as an adaptive survival mechanism in many animals, including humans (Šimić et al., 2021). The mechanism is associated with the emotions of fear and anger.

In this example, the benefits are clear from an evolutionary psychology viewpoint: in the face of the threat of predators, humans learned over time to appraise a threat and respond with a behaviour, or face fatal consequences. The benefits from the functionalist perspective, that is, why humans learned to respond to such threats, are very similar: out of necessity, to survive in the face of existential threats. It is therefore possible to view the emotional psychology of moral reasoning through this lens: emotions induce an individual to make the decision that increases the likelihood of survival.

In order to better illustrate this concept, consider the following case study:

John is a married man with one newborn child. His mother is staying with the family to help look after the newborn. One evening, John falls asleep on the couch in his living room. When he wakes, he finds that a fire is burning in his home. At this point, there is no way to extinguish the flames. John knows that the bedroom where his wife is sleeping, the cot room where his newborn is sleeping, and the spare bedroom where his mother is staying are all equally far from the living room, and that he has time to save only one of them.

Who should John save?

Evolutionary psychologists have argued that compassionate behaviour and love are adaptive, evolved emotions that increase the chances of survival in humans (Goetz et al., 2010). It can also be argued that, similar to stress, love may lead one to respond to this dilemma in a way that is intuitive rather than calculated. One can imagine that in that moment, abstract moral reasoning is very far from John's mind, and his reasoning is dominated by emotion.

Cognitive dissonance theory


Cognitive dissonance is the mental discomfort experienced when an individual performs an action that does not align with their beliefs. According to cognitive dissonance theory, this discomfort causes the individual to shift their attitudes and beliefs into alignment with the behaviour, in order to justify the behaviour and reduce the discomfort. Alternatively, the individual may adjust their behaviour to reduce the dissonance.

Consider this case study:


Maddie is a high school teacher. Her student Maria has been struggling to keep up her grades. Maria's father is sick, and she has had to pick up extra work to stay financially stable, often at the expense of the quality of her schoolwork. Just before the due date of her next assessment, Maria approaches Maddie, explains her situation, and begs for an extension, saying that if she cannot get one, she will fail. Due to the timing of the request, granting the extension would go against the school's policy, but Maddie knows that an extension could be the difference between Maria passing and failing.

What should Maddie do?


Maddie's dissonance stems from the conflict between her commitment to the school's extension policy and her feelings of empathy toward Maria. If she grants the extension, she may feel guilty for breaking the rules for Maria and not for other students. If she does not, she may feel that she is being fair to the other students, but guilty because she understands Maria's position and is withholding help. It is once again arguable that Maddie's emotions will compel her to act intuitively and grant the extension rather than consider the other students. Moral philosophy would have you think that Maddie should decide what to do by applying moral frameworks, but in reality her emotions strongly shape the choice she makes.

How can knowledge about how emotions influence moral decision-making be useful in everyday life?


This chapter is a resource that promotes ethical preparedness and challenges the paradigm of philosophy as the main source of moral wisdom. Part of this preparedness is becoming aware of the processes by which one's decisions are made (Lerner et al., 2014).

A 2015 study looked at the role of positive, neutral, and negative affective states in moral decision-making. Participants in each condition watched a 5-minute video intended to induce a positive, neutral, or negative affective state. After watching the video, participants were presented with an ethical dilemma and two possible solutions: one aligned with a utilitarian point of view, the other with a deontological point of view. The study found that individuals experiencing a positive or neutral affective state were more likely to choose the utilitarian option (Guzak, 2015).

Although moral philosophy can attempt to answer which choice is better in a moral dilemma, people's actions are often heavily influenced by their emotions. To improve one's ability to make a moral choice, one must first be aware of the effects of emotion on one's capacity to make a decision.

Conclusion


  • Psychological theory has a vast amount of valuable information to contribute to the discussion of morality and ethics
  • A moral dilemma is a decision with two or more options, wherein an agent must choose exactly one, and any choice results in a moral loss
  • Emotions are feelings experienced in response to stimuli, together with a physiological response
  • Emotions make us more likely to act intuitively than to deliberate on moral choices
  • By learning about emotions and morality through the lenses of psychological theories, we can become aware of why we make certain decisions and attempt to make better-informed moral choices


References

Bago, B., & De Neys, W. (2019). The Intuitive Greater Good: Testing the Corrective Dual Process Model of Moral Cognition. Journal of Experimental Psychology: General, 148(10), 1782-1801. https://doi.org/10.1037/xge0000533

Bandura, A. (2016). Moral Disengagement: How People Do Harm and Live With Themselves. Worth Publishers.

Bentahila, L., Fontaine, R., & Pennequin, V. (2021). Universality and Cultural Diversity in Moral Reasoning and Judgment. Frontiers in Psychology, 12, 764360. https://doi.org/10.3389/fpsyg.2021.764360

Cannon, W. B. (1987). The James-Lange Theory of Emotions: A Critical Examination and an Alternative Theory. The American Journal of Psychology, 100(3/4), 567-586. https://doi.org/10.2307/1422695

Greene, J., Sommerville, R., Darley, J., & Cohen, J. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108. https://doi.org/10.1126/science.1062872

Gu, S., Wang, F., Patel, N. P., Bourgeois, J. A., & Huang, J. H. (2019). A Model for Basic Emotions Using Observations of Behavior in Drosophila. Frontiers in Psychology, 10, 781. https://doi.org/10.3389/fpsyg.2019.00781

Guzak, J. R. (2015). Affect in Ethical Decision Making: Mood Matters. Ethics & Behavior, 25(5), 386-399. https://doi.org/10.1080/10508422.2014.941980

Lerner, J. S., Li, Y., & Kassam, K. S. (2014). Emotion and Decision Making. Annual Review of Psychology, 66, 799-823. https://doi.org/10.1146/annurev-psych-010213-115043

Mavroudis, C., Mavroudis, C. D., Farrell, R. M., Jacobs, M. L., Jacobs, J. P., & Kodish, E. D. (2011). Informed consent, bioethical equipoise, and hypoplastic left heart syndrome. Cardiology in the Young, 21(S2), 133-140. https://doi.org/10.1017/S1047951111001715

Schachter, S., & Singer, J. (1962). Cognitive, social and physiological determinants of emotional state. Psychological Review, 69(5), 379-399. https://doi.org/10.1037/h0046234

Youssef, F. F., Dookeeram, K., Basdeo, V., Francis, E., Doman, M., Mamed, D., Maloo, S., Degannes, J., Dobo, L., Ditshotlo, P., & Legall, G. (2012). Stress alters personal moral decision making. Psychoneuroendocrinology, 37(4), 491-498. https://doi.org/10.1016/j.psyneuen.2011.07.017