Artificial intelligence empathy:
How can algorithms relate to humans?


Overview

Figure 1. Artificial intelligence is getting more and more complex.

Have you ever felt lonely? If you have friends, you can help remedy this feeling by sending one a message - start a conversation! If you're feeling a little down, maybe you could vent to this friend. This can work well if your friend is free to talk, but if you happen to feel down during work hours or when everyone else is asleep, you can be left with no-one to talk to.

Now, an artificial intelligence "chatbot" can fill the role of that friend 24/7 (see Figure 1). There is someone you can talk to whenever, wherever. But is the experience the same as talking to a human? Or could it be ... better?

The introduction of artificial intelligence (AI) into the field of psychology could be the next step in the evolution of the field. While psychology in basic forms may have existed since ancient times, modern psychology dates back roughly 100 years (King et al., 2009). AI could extend the field by providing support at times when trained professionals aren't available. However, generative AI (GAI) has many limitations (Sai et al., 2024). It is unlikely that psychology will be completely overtaken by AI in the near future, but that should not discount AI's current capabilities and use cases. This chapter analyses AI's current position within the field of psychology, specifically looking at its capability for empathy.

The problems
Figure 2. Can AI be empathic?
  • Sometimes, people just need to talk to someone
  • Friends aren't always available to talk
  • Venting can overwhelm a friend
  • Can AI really replicate the experience of talking to a friend? (see Figure 2)

Focus questions:

  • How does AI recognise emotion?
  • How effectively can AI show emotion?
  • Can AI empathise with humans?
  • Is AI empathy harmful?

What is empathy?

Figure 3. Greeting a friend.

Have you ever been asked the question "how are you?" (see Figure 3)? Most people respond to this question with a simple "good", as is standard for greetings in many English-speaking cultures. But what happens if someone responds with something else? What if they're not doing so well?

Say this person responds with "not so good, my cat got sick and had to go to the vet". What do you say in response? Will you show empathy? Can AI?

In order to understand whether AI can empathise, we must first understand what empathy is. Empathy as a term was only introduced in the 1900s (Smith, 2017), but it existed as a psychological phenomenon long before the word was coined. There are two dimensions to empathy: feeling it and showing it. Feeling empathy can be described essentially as resonating with someone else's emotions, whether from observing their facial expressions and body language or from listening to what they are saying. Additionally, it is important to distinguish between empathy and sympathy [Add link to chapter about this topic]. Smith (2017) succinctly describes the difference as feeling with the other person (empathy) versus feeling for the other person (sympathy).

Is this empathy or sympathy?

1 "I know how you feel":

empathy
sympathy

2 "I'm sorry to hear that":

empathy
sympathy


Empathy provides many benefits for both the person empathising and the person receiving empathy. The first notable benefit is social connection. Ferguson et al. (2021) specifically mention that showing someone empathy can encourage them to show you empathy in the future, can improve their perception of you, and can even make you feel good simply from helping someone. In turn, the receiver benefits from forming a connection with the person showing them empathy, but is this still applicable when talking to AI? If it's incapable of truly empathising, can AI even give the illusion of empathising? To do so, it will first need to recognise the emotions at play.

Can AI recognise human emotion?

Figure 4. Facial expressions convey emotion, even for non-human faces.

How can you tell what someone is feeling? Asking them is easy enough, but are there other methods you can use to identify their emotions?

You could analyse what they're saying and try to use context clues to parse their feelings. Perhaps their tone of voice?

Facial expressions are perhaps the most universal identifier of emotions, transcending race, gender, culture and even species (see Figure 4). Can AI read facial expressions?

Facial expressions are a very accessible way of recognising emotions, transcending language and cultural barriers (Izard, 1994). Even other species, such as other primates, use and recognise facial expressions. Evolutionarily, they are core to development and communication, providing warnings, teaching and even simply communicating one's emotional state at a glance. If developers want AI to recognise emotion, facial expressions seem like an important part of the human experience to understand.

According to Kim et al. (2019), AI uses seven core emotions when analysing facial expressions: happiness, sadness, fear, disgust, surprise, anger and neutral. These were identified as the six core emotions, plus neutrality, based on cross-cultural validity and physiology. These emotions come into play during the AI algorithm's analysis of a given facial expression, which is currently typically performed on a still image. The steps for this analysis are as follows (a simplified code sketch follows the list):

  1. The algorithm locates the face within the image
  2. The face is broken down into core components
    • Eyes
    • Eyebrows
    • Nose
    • Mouth
  3. Feature extraction is performed on these components, resulting in an output of estimated values for the desired features
    • For example, the angle of the mouth
  4. The acquired data is classified using a pre-determined model, in this case a Hidden Markov Model, with a Support Vector Machine used to improve predictive reliability
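
To make these steps more concrete, here is a minimal, purely illustrative Python sketch. It is not the actual method of Kim et al. (2019): the face detection, component extraction and classification steps are stubbed out with made-up values and toy rules, and the feature names (such as mouth_corner_angle) are hypothetical stand-ins for the kinds of values a real pipeline would estimate.

from dataclasses import dataclass

# The seven labels used for facial expression recognition (Kim et al., 2019)
EMOTIONS = ["happiness", "sadness", "fear", "disgust", "surprise", "anger", "neutral"]

@dataclass
class FaceFeatures:
    # Hypothetical numeric outputs of the feature-estimation step
    mouth_corner_angle: float  # degrees; positive means the mouth corners are raised
    eyebrow_raise: float       # 0..1; how far the eyebrows are lifted
    eye_openness: float        # 0..1; how wide the eyes are open

def locate_face(image):
    """Step 1: find the face within the image (stubbed; a real system would use a detector)."""
    return image  # assume the whole image is the face

def extract_features(face) -> FaceFeatures:
    """Steps 2-3: break the face into components (eyes, eyebrows, nose, mouth)
    and estimate feature values (stubbed with fixed numbers)."""
    return FaceFeatures(mouth_corner_angle=15.0, eyebrow_raise=0.2, eye_openness=0.6)

def classify(features: FaceFeatures) -> str:
    """Step 4: map feature values to one of the seven emotions
    (toy rules here, not the Hidden Markov Model / Support Vector Machine described above)."""
    if features.mouth_corner_angle > 10:
        return "happiness"
    if features.mouth_corner_angle < -10:
        return "sadness"
    if features.eyebrow_raise > 0.7 and features.eye_openness > 0.8:
        return "surprise"
    return "neutral"

face = locate_face(image=None)  # no real image in this sketch
result = classify(extract_features(face))
print(f"Predicted: {result} (one of {len(EMOTIONS)} possible labels)")  # "happiness" for the stubbed values

A real system would replace each stub with a trained component, for example a face detector, a facial-landmark model, and the classification stage described in step 4.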

The model of Kim et al. (2019) identified the emotion behind a given facial expression with 96.5% accuracy. This is very impressive, showing that AI models can reliably identify emotions in humans from their facial expressions. Importantly, however, this test was performed in a somewhat controlled environment and on still images; more complex future models will likely be able to analyse facial expressions in real time.

Which of the following was not an identified core emotion for facial expression recognition?

Happiness/Joy
Sadness
Fear
Disgust
Contempt
Surprise
Anger


Figure 5. A conversation with a chatbot.

In the 21st century, text-based conversation is one of the most common forms of communication. With a large proportion of the population possessing a phone or computer capable of texting, many choose to communicate with friends and coworkers through apps, SMS, or email.

As with verbal and physical communication, emotion can be found within text-based communication. But how can one come to understand these emotions without tone of voice and body language to rely on?

Things are easy enough when the other person states outright how they're feeling; you could even prompt them for this kind of response. But this won't always be how the conversation flows.

Sometimes their emotions will shine through as subtext: more in the way they speak than in what they say. Can AI pick up on this kind of subtext?

The other primary area of interest for emotional recognition in AI is recognition of emotion through words. This is applicable through the now widely popular chatbot (see Figure 5). Poria et al. (2019) describe some of the processes behind AI's recognition of emotion through text. The specifics are highly technical, so for the purposes of this chapter a simplified explanation is provided. Once the text is acquired by the algorithm, it can be analysed at two levels:

  1. Lower-level, individual words that indicate specific emotions can be analysed
    • For example, the word "happy" appearing likely indicates the presence of joy
  2. Higher-level, multiple nearby words can be analysed to predict emotions
    • Memory of earlier parts of the conversation can also be used for context, though it is somewhat limited in this

Of note, AI models also have a limited capacity for memory across a conversation. This memory can be used to provide more context for the human's emotions when the data is analysed in a chatbot context rather than as a single body of text; memory is of less use when analysing standalone text.
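
As a rough illustration of the two levels described above, the following Python sketch uses a tiny, hypothetical emotion lexicon for the lower level and a short window over recent messages as a crude stand-in for the higher level and for limited conversational memory. The word lists and function names here are invented for illustration only; real systems, such as those surveyed by Poria et al. (2019), use far more sophisticated models.

# Lower level: individual words that hint at specific emotions (hypothetical mini-lexicon)
LEXICON = {
    "happy": "joy", "great": "joy",
    "sad": "sadness", "miss": "sadness",
    "scared": "fear", "worried": "fear",
    "angry": "anger", "furious": "anger",
}

def emotions_in_message(message: str) -> list[str]:
    """Lower-level pass: look up each word of a single message in the lexicon."""
    return [LEXICON[word] for word in message.lower().split() if word in LEXICON]

def emotion_with_context(conversation: list[str], window: int = 3) -> str | None:
    """Higher-level pass: pool cues from the last few messages, a crude stand-in
    for the limited conversational memory described above."""
    cues = []
    for message in conversation[-window:]:
        cues.extend(emotions_in_message(message))
    if not cues:
        return None
    return max(set(cues), key=cues.count)  # the most frequent cue wins

chat = [
    "My cat got sick and had to go to the vet",
    "I'm really worried about her",
    "I feel so sad and I miss having her home",
]
print(emotion_with_context(chat))  # "sadness" (two sadness cues outweigh one fear cue)

A more realistic system would also handle negation, punctuation and subtext, rather than simply counting keywords.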

In their study, Poria et al. (2019) found that AI was able to show significant proficiency in recognising emotions in conversational text, and there is promise for larger bodies of text. Ahmed et al. (2022) found that AI can even analyse an individual's social media posts to spot symptoms of depression or anxiety through their use of specific language or patterns of language. With further development of AI models such as ChatGPT, AI's emotional recognition is likely to improve further.

Can AI empathise with humans?

Figure 6. Friends can provide comfort.

Have you ever gone through a tough time? Unfortunately, pain is a part of life, so it's likely that you have. Since pain is a near-universal experience, there are others who will know how you feel (see Figure 6).

Bottling up your emotions can be dangerous. Venting your feelings to a friend can help provide a release for these emotions. If they've gone through something similar to what you're dealing with, you can find solidarity in the struggle. Even just vocalising your feelings can help you recontextualise your situation.

If an AI hasn't experienced what you're going through, how can it show you empathy?

AI models are proficient at recognising emotion via text and facial expressions. This, however, does not necessarily mean AI is able to show emotion itself, or even empathise. This is an area that has seen much investigation, such as in a study by Bartneck et al. (2017). They note that a core component of conversation is showing emotion, as it adds believability to whichever "character" you are interacting with. One would expect a human to react emotionally to a negative event, with a level of emotionality proportional to the intensity of the event. This delicate balance must be navigated by AI to provide a believable experience for the human interacting with it.

The specific aspects of believable empathy in AI can be broken down into the following components (Kim & Hur, 2024):

  • Primary appraisal stage
    • Personalisation
    • Anthropomorphism
  • Secondary appraisal stage (Perceived human-ness)
    • Competence
    • Warmth
  • Secondary appraisal stage (Emotion towards AI chatbots)
    • Empathy

With these aspects present, people can connect with AI, helping to bridge the empathy gap between humans and algorithms. Modern GAI models such as ChatGPT have a degree of empathy ingrained in their responses, as was analysed in a study by Markovitch et al. (2024). As part of their study, they analysed the differences between chatbots and humans in service contexts. In line with prior findings, chatbots that showed little empathy produced significantly lower customer satisfaction than humans did. Interestingly, when participants interacted with a version of ChatGPT that showed more empathy, the gap between human and chatbot nearly disappeared. Their findings indicate that perceived empathy was perhaps the biggest determinant of customer satisfaction, showing not only that empathy is very important for the future of AI-human interactions, but also that AI is currently able to achieve that level of empathy in some contexts.

What are the risks of empathetic AI?

Figure 7. Is AI a substitute for a professional?

Imagine you have hurt your leg really badly. What's the first thing you would do? Call an ambulance? Use a search engine? Or maybe you'll ask ChatGPT for advice (see Figure 7).

It might be a quick and easy solution, but is it reliable? Can you trust an AI trained to say what it expects to be the most optimal response? Is it a verified source of information? What do the professionals have to say?

Simplified versions of chatbots have been used in the customer service industry for years, helping to automate certain tasks and increase efficiency (Markovitch et al., 2024). It has also been shown that, with the integration of emotion into the AI model, chatbots can reach levels of believability that rival humans. Seitz (2024) found that perceiving warmth and/or empathy in a chatbot can enhance human trust in AI. Unfortunately, current AI models are fairly flawed and use unreliable sources, especially in a healthcare setting (Sai et al., 2024).

Since GAI is a new technology, many of the concerns around its use in healthcare scenarios centre on technical issues (Sai et al., 2024). Problems such as integration with existing systems or patient privacy are concerns that will likely be addressed with time, yet they nonetheless necessitate caution when considering adding AI to a workplace. Other issues centre on its reliability in giving proper advice. AI models are developing artificial empathy that lends them a degree of warmth, resulting in humans being more trusting of chatbots. Balcombe and De Leo (2022) recommend that, in its current state, AI interaction be monitored by humans and not be used as the sole form of communication. A coordinated effort between AI and professionals is needed because of concerns about AI's ability to properly ensure the safety of patients, as it is still a developing technology and not yet wholly reliable.

The lack of AI reliability can be shown through a study by Memon and West (2024), which analysed AI's reliability in providing accurate information in search engines. When a perfectly formatted prompt was entered into the search engine, the AI would respond with an accurate piece of information and provide a quality source. Conversely, with an incorrectly formatted question, results would either not appear or would be completely incorrect. This poses a great risk: someone unaware of this issue could receive proper information during one use of AI, leading them to trust it, only to receive incorrect and potentially harmful information that they then believe in a future use. A person who is not perfectly versed in English, or someone in a panicked state, might make an error when entering their prompt, and the AI could provide them with completely incorrect information. With chatbots, this misplaced trust could exacerbate the issue, since AI that comes across as "warm" is likely to earn people's trust, which could lead them to believe false information.

At present, there are also ethical concerns with the current state of generative AI. According to Ghotbi (2022), AI may categorise its user into demographic groups, which might cause it to discriminate or show bias in its responses. Ghotbi posits that this could be due to a lack of nuance in analysing correlational studies, drawing conclusions about populations without considering the broader socioeconomic context of those populations. The study raises concerns that AI's interpretation of such data could lead to humans using these conclusions to justify discriminatory practices, causing further harm to already disadvantaged communities.

Conclusion


Artificial intelligence empathy is an ever-evolving field, showing much promise to transform the field of psychology for the better. AI has been shown to be proficient in detecting human emotion through multiple mediums, such as text and facial expressions. Research also reveals that humans are more willing to engage with AI models when the models are able to show emotion, and are receptive to empathy from AI. AI models have shown the capacity to display a believable level of emotion that resonates with humans, allowing a degree of empathy to be achieved.

Unfortunately, many issues need to be addressed before AI systems can be properly implemented into current psychological practice. AI's lack of nuance when analysing data raises ethical concerns, particularly for people from certain demographic groups. Additionally, there are issues with AI's ability to provide reliable information at present, which could cause harm to people in vulnerable positions. Once AI models are properly refined, they are poised to improve the reliability of, and access to, psychological services, working with human professionals to provide the best possible care for their patients.

See also


References

Ahmed, A., Aziz, S., Toro, C. T., Alzubaidi, M., Irshaidat, S., Serhan, H. A., Abd-alrazaq, A. A., & Househ, M. (2022). Machine Learning Models to Detect Anxiety and Depression through Social Media: A Scoping Review. Computer Methods and Programs in Biomedicine Update, 100066. https://doi.org/10.1016/j.cmpbup.2022.100066

Balcombe, L., & De Leo, D. (2022). Human-Computer Interaction in Digital Mental Health. Informatics, 9(1), 14. https://doi.org/10.3390/informatics9010014

Bartneck, C., Lyons, M. J., & Saerbeck, M. (2017, June 28). The Relationship Between Emotion Models and Artificial Intelligence. ArXiv.org. https://doi.org/10.48550/arXiv.1706.09554

Carl, E., Stein, A. T., Levihn-Coon, A., Pogue, J. R., Rothbaum, B., Emmelkamp, P., ... & Powers, M. B. (2019). Virtual reality exposure therapy for anxiety and related disorders: A meta-analysis of randomized controlled trials. Journal of Anxiety Disorders, 61, 27-36.

Ferguson, A. M., Cameron, C. D., & Inzlicht, M. (2021). When does empathy feel good?. Current Opinion in Behavioral Sciences, 39, 125-129.

Ghotbi, N. (2022). The Ethics of Emotional Artificial Intelligence: A Mixed Method Analysis. Asian Bioethics Review. https://doi.org/10.1007/s41649-022-00237-y

Izard, C. E. (1994). Innate and universal facial expressions: Evidence from developmental and cross-cultural research. Psychological Bulletin, 115(2), 288-299.

Kim, J. H., Kim, B. G., Roy, P. P., & Jeong, D. M. (2019). Efficient facial expression recognition algorithm based on hierarchical deep neural network structure. IEEE Access, 7, 41273-41285.

Kim, W. B., & Hur, H. J. (2024). What makes people feel empathy for AI chatbots? Assessing the role of competence and warmth. International Journal of Human–Computer Interaction, 40(17), 4674-4687.

King, D. B., Viney, W., & Woody, W. D. (2009). A history of psychology: Ideas and context. Pearson Education.

Markovitch, D. G., Stough, R. A., & Huang, D. (2024). Consumer reactions to chatbot versus human service: An investigation in the role of outcome valence and perceived empathy. Journal of Retailing and Consumer Services, 79, 103847.

Memon, S. A., & West, J. D. (2024). Search engines post-ChatGPT: How generative artificial intelligence could make search less reliable. arXiv preprint arXiv:2402.11707.

Poria, S., Majumder, N., Mihalcea, R., & Hovy, E. (2019). Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access, 7, 100943-100953.

Sai, S., Gaur, A., Sai, R., Chamola, V., Guizani, M., & Rodrigues, J. J. (2024). Generative AI for transformative healthcare: A comprehensive study of emerging models, applications, case studies and limitations. IEEE Access.

Seitz, L. (2024). Artificial empathy in healthcare chatbots: Does it feel authentic?. Computers in Human Behavior: Artificial Humans, 2(1), 100067.

Smith, J. (2017). What is empathy for?. Synthese, 194(3), 709-722.
