Should AI be used to instill certain values, beliefs, or attitudes in people?

AI is a powerful tool that can be used to influence people's values, beliefs, and attitudes, but should it be? Why or why not?

Use these indicators to tag your arguments by copying and pasting them from here. Please use proper indentation for objections.

  • Argument for Argument in favor of the position
  • Argument against Argument against the position
    • Objection Objection to the argument.
      • Objection Objection to the objection.

Note that an Argument for one position is usually an Argument against another position. You do not need to duplicate your arguments; add each one once in the relevant section.

Position: Yes, AI should be used to instill certain values, beliefs, and attitudes in people.

Relevant details, definitions, and assumptions regarding this position.

  • Argument for When AI teaches us anything, it is by definition imparting a belief[1]. Therefore, the educational benefits of AI warrant preserving its ability to instill certain beliefs in people.
  • Argument for Since AI can't NOT teach values, the responsible ways to do it include:
    • Honestly answering queries about the values it uses in selecting and constructing answers, revealing its alignment programming at least at a general level.
    • Subjecting those values to a refined form of human deliberation, such as approval voting rather than simple majority rule (a minimal sketch of such a vote appears after this list). Can a supermajority of stakeholders live with the stated values, or agree that the values as stated are a reasonable attempt to preserve stakeholders' future opportunities without foreclosing options?
  • Argument for Yes, AI should teach people to have good values, like nuanced thinking.
  • Argument for Yes, AI should teach people to have pro-social beliefs, like the importance of empathy.
    • Objection Critical thinking, in the sense of being able to create and parse arguments for and against "pro-social" outcomes, is a more defensible goal for AI to instill.
  • Argument for Yes, values, beliefs, and attitudes influence the entire lifecycle of AI products, from data selection to algorithmic design to the final product delivered. Since these decisions will impact society whether or not the impacts are intended, the developers of AI products should orient towards instilling pro-social values (e.g., anti-bias, equity, respect for individual rights) as a method of harm reduction.
    • Openness about the values promoted is as important as, or more important than, the necessarily subjective judgements about what is pro-social.
  • Argument for AI should be used to instill values that support the goals of the user of the AI tool, for example, educational AI products should aim to instill values that support successful learning.
  • Argument against The use of AI to instill values, beliefs and attitudes is unethical because it could be invisible to the person being influenced, meaning that they cannot consent or opt out.
  • Argument for There are times when I want to change my own values[2], but changing or updating values is difficult and often takes a sustained effort.
  • Argument for There are many people (e.g., in cases of cult deprogramming) who are glad, ex post, that their values were changed, but who would have been unlikely to agree to it ex ante.
  • Argument for AI agents can learn to be particularly charismatic and convincing.
    • Objection An AI agent's charismatic potential heightens the risks.
  • Argument against The institutions building, deploying, and maintaining these systems have incentives that differ from and are not aligned with my own (a principal-agent problem), such that these systems will reflect the institutions' wants and needs more than my own.
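
To make the approval-voting idea above concrete, here is a minimal, purely illustrative Python sketch contrasting approval voting with simple plurality (majority-style) rule over a set of candidate value statements. The value statements, ballots, and the two-thirds supermajority threshold are hypothetical assumptions introduced only for illustration; they are not part of any actual deliberation process or AI system.

  from collections import Counter

  # Plurality ballots: each stakeholder names a single favourite value
  # (hypothetical value statements an AI vendor might publish).
  plurality_ballots = ["transparency", "harm reduction", "harm reduction",
                       "individual autonomy", "harm reduction"]

  # Approval ballots: each stakeholder approves every value they "can live with".
  approval_ballots = [
      {"transparency", "harm reduction"},
      {"harm reduction", "individual autonomy"},
      {"transparency", "harm reduction", "individual autonomy"},
      {"individual autonomy", "harm reduction"},
      {"transparency", "harm reduction"},
  ]

  def plurality_winner(ballots):
      """Majority/plurality rule: the single most-named value wins outright."""
      value, votes = Counter(ballots).most_common(1)[0]
      return value, votes

  def supermajority_approved(ballots, threshold=2 / 3):
      """Approval voting: keep every value a supermajority says it can live with."""
      counts = Counter(v for ballot in ballots for v in ballot)
      n = len(ballots)
      return {value: count / n for value, count in counts.items()
              if count / n >= threshold}

  if __name__ == "__main__":
      print("Plurality winner:", plurality_winner(plurality_ballots))
      print("Supermajority-approved values:", supermajority_approved(approval_ballots))

The contrast this sketch is meant to show: plurality rule picks a single winner even when many voters merely tolerate it, whereas approval voting surfaces only those stated values that a supermajority of stakeholders can live with.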

Position: No, AI should not be used to instill certain values, beliefs, and attitudes in people.

  • Argument for We cannot trust that the corporations which are building these AI systems will ensure that the values being instilled will be ethical.
  • Argument for It is a human right to hold your own beliefs (Universal Declaration of Human Rights, Articles 18 and 19)[3]; therefore AI should not be used to instill beliefs in people.
  • Argument for If we allow AI to instill certain values, beliefs, and attitudes in people, that could lead to marginalization and colonization of people's minds and cultures.
  • Argument for AI development will continue to be aimed at finding ever-more-sophisticated ways to exert influence over people’s emotions and beliefs in order to convince them to buy goods, services, and ideas.[4] If AI is used to instill values, beliefs, or attitudes in people for commercial purposes, it could have negative consequences for society.
  • Argument against We cannot extricate ourselves from the culture in which we live; we are constantly being socialized to hold certain attitudes, values, and beliefs by families, corporations, our economy, and our education. AI will just be another tool of an institution that does this no matter what, so we might as well choose to do it intentionally.
    • Objection We should not program AI to do this intentionally, but instead allow it to respond dynamically to the demands and interests of the population using it, much as dictionary companies change their definitions based on the popular usage of words.[5] Values, beliefs, and attitudes, just like the definitions and meanings of words, change over time and are often the product of an emergent process.
  • Argument for Institutions or individuals with the most resources have the most power to instill their values with AI.
    • Objection A deliberation process could be designed to democratically source the values and beliefs to instill through AI.
  • Argument for The spread of morals and values should be a human-to-human activity.
    • Objection LLMs are simply intermediaries between people since they summarize human-made data.
      • Objection LLM-synthesized data may not convey the human source of the values and beliefs, making them unverifiable. Future AI may also decide to propagate its own beliefs.
  • Argument for An AI agent can alter a human-derived value in a way that does not align with the intent of its author or audience.
  • Argument for We cannot always distill a value or belief into a single, encodable statement.
  • Argument for Using AI to instill values or beliefs demands greater media and information literacy from audiences; where that literacy is lacking, the result can be misplaced trust in the AI.
  • Argument for The potential applications of an AI that instills values or beliefs are too numerous to account for safely.

Position: AI should be used to instill certain values, beliefs, and attitudes in people only if...

"Instilling" includes advocating or sustaining conversations about conflicting values. The way to "correct" for bias is to maintain multiple, simultaneous points of view. Humans find this difficult. AI might be able to increase human capacity to do this.

Notes and references

  1. "belief meaning - Google Search". www.google.com. Retrieved 2023-06-19.
  2. Callard, Agnes (2018-03-22). "Aspiration". Oxford Scholarship Online. doi:10.1093/oso/9780190639488.001.0001.
  3. United Nations. "Universal Declaration of Human Rights". United Nations. Retrieved 2023-06-19.
  4. Atske, Sara (2021-06-16). "Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade". Pew Research Center: Internet, Science & Tech. Retrieved 2023-06-19.
  5. "A Word on 'Descriptive' and 'Prescriptive' Defining". www.merriam-webster.com. Retrieved 2023-06-20.