Motivation and emotion/Book/2019/Affective computing

Affective computing:
What is affective computing and how can it be used?
[Replace this text with the URL Multimedia presentation (3 min)]

Overview

[Provide more detail]

What is Affective Computing?

Understanding emotions is an important aspect of personal development and growth. Humans now live in an era where technology is a primary source of assistance for our needs, and affective computing (AC), a branch of artificial intelligence, was introduced with the objective of bridging the gap between computing technology and the emotions exhibited by humans. Affective computing is an area of study within cognitive computing and artificial intelligence concerned with developing systems and devices that can detect, recognise, interpret, process, and simulate human emotions, and respond appropriately to users' emotions inferred from visual, textual, and auditory sources. It is a continuously growing multidisciplinary field that explores how technology can inform an understanding of human affect, how interactions between humans and technologies are impacted by affect, how systems can be designed to use affect to enhance their capabilities, and how sensing and affective strategies can transform human-computer interaction.

Figure 1. Rosalind Picard (Panel Discussion Close-up, Science, Faith and Technology, 2011)

Background

The term affective computing is generally credited to Rosalind Wright Picard, an American scholar and professor of media arts and sciences at MIT, who is regarded as the field's pioneer and coined the term in 1997. Although the concept is relatively young, affective computing is a blossoming multidisciplinary field encompassing computer science, engineering, psychology, neuroscience, education, and many other disciplines.

In Picard's 1997 book Affective Computing, she explains that the overall goal of affective computing is to enable natural interaction between humans and computers by allowing computers to understand the emotional states expressed by human users, so that personalised responses can be delivered accordingly. Picard states: "if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, even to have and express emotions".

Technologies

[Provide more detail]

Emotional Speech Processing

Spoken word → text → Speech Synthesis Markup Language (SSML)

Acoustic (vocal) parameters measured include:

  • Pitch changes
  • Hesitations
  • Pace changes
  • Signs of nervousness

The information is then analysed and stored.
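
The following is a minimal sketch of how such acoustic parameters might be extracted from a recording. It assumes the open-source Python library librosa and a hypothetical audio file "sample.wav"; a real affective computing system would pass these features to a trained emotion classifier rather than simply returning them.

```python
import numpy as np
import librosa

def extract_prosodic_features(path):
    # Load the recording at its native sample rate
    y, sr = librosa.load(path, sr=None)

    # Pitch changes: track the fundamental frequency (F0) over time
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch_mean = np.nanmean(f0)
    pitch_variability = np.nanstd(f0)   # larger values suggest livelier intonation

    # Hesitations: proportion of the recording that is (near-)silent
    voiced_intervals = librosa.effects.split(y, top_db=30)
    voiced_time = sum(end - start for start, end in voiced_intervals) / sr
    total_time = len(y) / sr
    pause_ratio = 1 - voiced_time / total_time

    # Pace changes: a rough speaking-rate proxy (voiced bursts per second)
    speaking_rate = len(voiced_intervals) / total_time

    return {
        "pitch_mean_hz": float(pitch_mean),
        "pitch_variability_hz": float(pitch_variability),
        "pause_ratio": float(pause_ratio),
        "speaking_rate": float(speaking_rate),
    }

# Hypothetical usage:
# print(extract_prosodic_features("sample.wav"))
```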

Facial Expression Recognition

Optical flow or active appearance models capture facial features such as:

  • Smiles
  • Frowns
  • Wrinkles
  • Lip movements
  • Mouth shapes

The computer then searches for patterns in this information.
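
As an illustration of the optical flow approach, the sketch below tracks how much the face region moves between successive webcam frames. It assumes the open-source OpenCV library (opencv-python) and a webcam at index 0; a complete system would feed the full flow patterns within the face region to a trained expression classifier rather than printing a single motion score.

```python
import cv2
import numpy as np

# Haar cascade face detector bundled with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)   # default webcam
previous_gray = None

for _ in range(300):            # process a short burst of frames
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Locate the face first, then measure motion only inside that region
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0 and previous_gray is not None:
        x, y, w, h = faces[0]
        flow = cv2.calcOpticalFlowFarneback(
            previous_gray[y:y + h, x:x + w], gray[y:y + h, x:x + w],
            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean flow magnitude: a crude proxy for how much the face is moving.
        # Smiles, frowns and lip movements each produce characteristic patterns
        # that a classifier could learn from the full flow field.
        motion = float(np.linalg.norm(flow, axis=2).mean())
        print(f"facial motion energy: {motion:.3f}")

    previous_gray = gray

capture.release()
```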

Body Gestures & Movement Recognition
  • Body gesture and movement recognition locates the user and tracks their body language over time.
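
The sketch below shows how tracked body landmarks might be turned into simple gesture descriptors. The landmark coordinates are assumed to come from a pose-estimation tool (for example OpenPose or MediaPipe); a hypothetical two-frame sequence is hard-coded here so the example runs on its own, and a full system would feed such descriptors into a gesture or emotion classifier.

```python
import numpy as np

def gesture_descriptors(landmark_frames):
    """landmark_frames: array of shape (frames, joints, 2) holding x, y positions."""
    frames = np.asarray(landmark_frames, dtype=float)

    # Movement energy: how far, on average, each joint travels between frames.
    displacement = np.diff(frames, axis=0)                 # (frames-1, joints, 2)
    movement_energy = np.linalg.norm(displacement, axis=2).mean()

    # Expansiveness: average spread of the joints around the body's centre,
    # a rough cue for open versus closed postures.
    centre = frames.mean(axis=1, keepdims=True)            # (frames, 1, 2)
    expansiveness = np.linalg.norm(frames - centre, axis=2).mean()

    return {"movement_energy": float(movement_energy),
            "expansiveness": float(expansiveness)}

# Hypothetical landmarks for two frames, five joints each (head, hands, feet).
example = [
    [[0.50, 0.10], [0.30, 0.45], [0.70, 0.45], [0.40, 0.90], [0.60, 0.90]],
    [[0.50, 0.10], [0.20, 0.35], [0.80, 0.35], [0.40, 0.90], [0.60, 0.90]],
]
print(gesture_descriptors(example))
```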

How can Affective Computing be used?

Research in affective computing, emotion recognition, and sentiment analysis aims to improve well-being by enabling computers and robots to make better decisions and to serve people through awareness of their emotions. Here are some of the many ways affective computing contributes to society:

Education

[Provide more detail]

Health Care

[Provide more detail]

Video Games

[Provide more detail]

Risks of Affective Computing

Despite the good intentions behind affective computing and its impressive capabilities, there are potential dangers, such as:

Psychological Harm

Emotion visualisation (the display of a person's detected emotional state to others) can potentially harm individuals psychologically by revealing information that people have not consented to reveal, and by contributing to a weakening of social skills and moral values. For example, the exposure of potential problems (e.g., depression, anxiety, or obsessive-compulsive disorder) for others to see might be undesirable for some people because of social stigma.

Physical Harm

Physical harm can also arise from emotion visualisation. The ability to quickly detect threats and opportunities via negative or positive emotions could increase the incidence of fighting and violence, as well as promiscuity in dating scenarios and thereby sexually transmitted diseases. For example, a person could feel threatened by seeing their partner and a potential rival express highly positive emotions toward one another, which could lead to anger and violence. Some individuals and groups might also seek to control others' emotions, which could be carried out psychologically.

Misunderstanding

A risk of misunderstandings, unseen biases, and falsification also exists. For example, current emotion visualization systems typically assume a person feels one emotion at a time, and do not take into account the full complexity of human emotions, like that emotions are usually directed toward some referent and can coexist; e.g., a person can feel angry when hearing that a loved family member was hurt without feeling anger toward that family member, and cry happy tears when hearing they are safe. Emotional signals can also be ambiguous; for example, smiling is not always a sign of joy, and nodding can have many meanings. Misunderstandings based on such ambiguity could lead to fighting or unhappiness (e.g. if a person appears to feel joy when another is in pain). Another potential problem is that, like humans, recognition algorithms can also have bias, which could negatively affect someone's life if emotions are used for some evaluation.

Disempowerment

Robots and computers can gain an advantage in persuading people, potentially for commercial gain, when it becomes difficult for some humans to conceal their thoughts and emotions. Today, interactions with computers are common, e.g. for social media or online shopping; this tendency could be further exacerbated if some people turn to robots as "safe" companions to avoid sharing private emotions with other human beings. Such systems could be highly convincing: robots without emotions would not need to be careful when choosing words, thus presenting a persuasive air of confidence, free of distractions. Moreover, emotions can also be used to deceive people and affect their decisions. For example, organisations could use such systems to adapt the prices presented, e.g. for online products, based on detecting a potential customer's emotional state. Some potential dangers associated with such a scenario relate to society, trust, and equality. Robots could leverage emotions to deceive people into liking them in order to persuade them, resulting in relationships which might not be genuine, and in potential reductions in a person's social contact with other humans.

Conclusion

- conclude your findings

Test your Knowledge!

- make a fun quiz

See also

- need to insert interesting animated videos about affective computing

References

Banfa, A. (2016). What is affective computing? OpenMind. https://www.bbvaopenmind.com/en/technology/digital-world/what-is-affective-computing/

Izard, C. E. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review, 100(1), 68–90.

Kratzwald, B., Ilić, S., Kraus, M., Feuerriegel, S., & Prendinger, H. (2018). Deep learning for affective computing: Text-based emotion recognition in decision support. Decision Support Systems, 115, 24–35. https://doi.org/10.1016/j.dss.2018.09.002

Lee, W., & Norman, M. (2016). Affective computing as complex systems science. Procedia Computer Science, 95, 18–23. https://doi.org/10.1016/j.procs.2016.09.288

Pfeifer, R. (1988). Artificial intelligence models of emotion. In V. Hamilton, G. H. Bower, & N. H. Frijda (Eds.), Cognitive perspectives on emotion and motivation (pp. 287–320). Kluwer Academic Publishers.

Picard, R. W. (1997). Affective computing. MIT Press.

Wikipedia. (2019). Affective computing. https://en.wikipedia.org/wiki/Affective_computing

External Links

Dr. Rosalind Picard discussing brains, emotions & behaviour (YouTube)

Affective Computing & Health Care (YouTube)

Affective computing (bbvaopenmind.com)