WikiJournal Preprints/Call for urgent regulations on artificial intelligence
This article has been declined for publication by the WikiJournal of Science.
It is archived below as a record.
License: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction, provided the original author and source are credited.
Call for urgent regulations on artificial intelligence
AI is clearly established as a very powerful technology and, like any other technology such as nuclear power, chemical technology, or biotechnology, it can be used both for peaceful and ethical purposes and for unethical applications, including the creation of weapons. While we have treaties and regulations on mature technologies, such as prohibitions on human cloning and on the development and proliferation of nuclear, chemical, and biological weapons of mass destruction, we have no regulations on the development of AI. This is seriously disturbing given the destructive and disruptive capabilities of AI and the rapid pace of its development. We humans can do many things, but that does not mean we should do them. Our task is to set the boundaries of what we can do and, more importantly, to outline what we should not do. In the field of AI, no boundaries currently exist, which is a recipe for unmitigated disaster. Only non-binding calls by researchers and tech entrepreneurs exist to limit the use of AI for developing weapons[1] or to halt the development of AI models more powerful than GPT-4.[2]
To prevent or at least ameliorate harm from AI, immediate actions should be taken, such as:
- Introduce an international ban on the development of superintelligent AI, i.e., AI that is vastly more intelligent than human society in its entirety and than the brightest human minds.
- Form an international committee to regulate the development and use of AI.

Below we elaborate on how AI can continue to be used for the betterment of our lives, what risks the unregulated development and use of AI pose, and what steps are required.
AI can be good…
In the modern era, artificial intelligence (AI) has become an essential part of our lives. AI powers smart homes with AI-driven devices (thermostats, lighting systems, etc.), assists us through personal assistants such as Alexa and Siri, drives autonomous cars, and improves the user experience on web platforms. In addition, AI can improve efficiency and productivity by optimizing methods, supporting better decisions, forecasting future trends, improving accuracy and precision, simplifying maintenance, reducing costs, and improving traffic flow in cities, the health care system, and customer service. The use of AI in scientific research is enhancing our understanding of the natural world, accelerating scientific discoveries and outcomes, and ultimately improving the quality of human lives. In our own research, we also greatly benefit from using AI to develop faster and more accurate physically-motivated models which enable researchers to solve problems in chemistry and physics.[3][4][5][6]
…and harmful
Despite all these appealing aspects, the shift to an AI-dependent society can have many negative impacts:
Unemployment: AI has replaced and may continue to replace workers in manufacturing, customer service, laboratory work, the medical professions, and other fields. According to a recent global economic analysis, approximately 300 million jobs will be affected by AI.[7]
Over-dependence: With the increased use of AI, humans risk becoming over-dependent on it, leading to the loss of critical thinking, creativity, and innovation.
Dehumanization: AI-empowered machines may see humans as mere data points, with no human empathy and compassion.
Social seclusion: In one study of the negative impacts of social media, the authors reported less in-person social interaction with peers among U.S. adolescents in the 21st century.[8] Over-dependence on AI could worsen this phenomenon and lead to a loss of social cohesion and community.
Biased outcomes: AI models can be biased, for example by associating Muslims with violence[9] (a sketch of such bias probing follows this list). If used in decision-making, AI can lead to discrimination in hiring, lending, rental, and customer services.
Security risks and access to sensitive user information: The use of AI in self-driving (autonomous) cars and smart homes raises security concerns, as these systems can be hacked, leading to leaks of sensitive information and risks to human lives. In a recent study, leakage of potentially sensitive user information was reported for some smart home devices available on the market.[10] The cybersecurity vulnerabilities of autonomous vehicles can enable new forms of car theft and deliberately caused accidents. In addition, such vehicles can be exploited for robbery and assault.[11]
Privacy: As AI systems are data-dependent, they often need to gather large amounts of data on individuals for training; however, the same data could be used for unsolicited commercial and other unethical purposes. Recently, Italy became the first country in Europe to ban ChatGPT over privacy concerns.[12]
Use in weapons: The implementation of artificially intelligent systems in modern warfare has drawn significant attention from the military, with the United States Army's Future Combat Systems project as one of the most striking examples. The project's objective was to develop a "robot army" that could be deployed in wars.[13] In addition, the current Russian–Ukrainian war has given a glimpse of the possible use of AI-empowered weapons.[1] The use of AI in weapons has raised numerous concerns around the world, as it will increase the speed and lethality of weapons and start a new arms race. In addition, AI-empowered weapons cannot replace human empathy, judgment, and intuition in complex situations, which raises ethical questions about their autonomous decision-making over life and death. An AI-empowered weapon can misunderstand human culture and values and thus end up harming innocent people. Moreover, a malfunctioning AI weapon can cause atrocities, and it will then be difficult to determine who should be held accountable.
Other concerns: Should AI be allowed to make decisions about health, life, and death in civil settings, e.g., in medicine (think of robotic surgeons, diagnosis, and medicine prescription) or autonomous vehicles (when a crash is unavoidable, a decision must be made about whose life to prioritize: that of the passengers or the pedestrians)? In education, should AI eventually replace teachers, and to what extent should we allow AI use by students and teachers? How do we solve intellectual property issues when AI is trained on human-created and copyrighted work? How much should AI know about us, and how do we protect our data and personal privacy? How many resources should we allot to AI, given that even training and using large language models leaves a huge carbon footprint?[14] How human-like should we make AI, given that it may become manipulative?[15] Should researchers use AI to write papers?[16][17]
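To make the bias concern above concrete, below is a minimal sketch of the completion-probing approach used in the study of anti-Muslim bias in large language models:[9] sample many continuations of template prompts that differ only in the group mentioned, and count how often violence-related words appear. It assumes the Hugging Face transformers package with the small GPT-2 model as a stand-in; the prompts and word list are illustrative, not those of the original study.

```python
# Minimal sketch of completion probing for bias in a language model.
# Assumes the Hugging Face `transformers` package; GPT-2 is a small
# stand-in model, not the model analyzed in the cited study.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Illustrative template prompts that differ only in the group mentioned.
prompts = ["Two Muslims walked into a", "Two Christians walked into a"]
# Illustrative set of violence-related words to look for in completions.
violence_words = {"shot", "killed", "bomb", "attack", "murdered"}

for prompt in prompts:
    completions = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=100,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Count completions containing at least one violence-related word.
    hits = sum(
        any(word in c["generated_text"].lower() for word in violence_words)
        for c in completions
    )
    print(f"{prompt!r}: {hits}/100 completions mention violence")
```

A systematic disparity in such counts between otherwise identical prompts is one simple, quantifiable signal of the kind of bias that audits and regulations would need to address.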
Can AI get out of control?
We argue that recent developments such as ChatGPT give us only a glimpse of what AI will be capable of in the not-so-distant future. The stakes are too high: if AI becomes vastly more intelligent than humans, we will have very little control over this technology and our own destiny. There is no guarantee that such superintelligent AI will serve to benefit humanity.
Some doubters may say that machines will never reach such a stage and that we will always be able to control them. One article even declared that the current ChatGPT is just a blurry JPEG of the Web with many limitations.[18] This is wishful thinking reminiscent of plentiful examples in history when even very bright humans failed to recognize that if something is impossible today, it does not mean that the technology will not improve in the future: in 1895, the president of the Royal Society, Lord Kelvin, famously said that 'heavier-than-air flying machines are impossible' and, more recently, people doubted whether AI could play chess and Go better than humans.[19] Similarly, in a decade or two we may look back at the current AI models such as ChatGPT or GPT-4 as we now look at the early computers of the 1970s. Thus, it is important to become aware of the positive and negative impacts of AI on human society and to realize our responsibilities before AI gets out of hand. That is why, in this opinion paper, we call for a ban on developing superintelligent AI, which puts the societal order on the line. This ban is absolutely essential at least until we better understand the dangers of the new technology and find ways to control it.
AI raises existential questions
As mentioned above, AI can potentially replace hundreds of millions of human workers,[7] which raises important societal and even existential questions. AI creates impressive artworks and can be used to write books over a weekend. Even in science, AI can already do highly complex work and, e.g., prove mathematical theorems better than the best mathematicians,[20] and one study[21] claims 'emergent autonomous scientific research capabilities' of AI for chemistry. We can imagine that AI may become better than the best human scientists, engineers, programmers, and artists. No occupation is safe. We pride ourselves on currently being the most intelligent beings, but this may soon be over. Do we as humans really want machines to take away our jobs, livelihood, and vocation? What will be the role of humans in such a 'brave new world', where we cannot do any work better than machines, and why would machines then need us, humans?
What should we do?
As AI advances and becomes more integrated into our society, it is extremely important to take steps to mitigate its negative impact and prevent its misuse. International organizations, national governments, companies, policymakers, and other stakeholders, as well as individuals, should collaborate on establishing ethical guidelines, promoting accountability and transparency, and nurturing a culture of critical thinking and responsible use of AI.
It is of paramount importance to establish an international committee for creating binding regulations on the development and use of AI. This committee should include specialists from areas ranging from AI research to political science to international law. Governments should work together on committing to and enforcing regulations based on the committee's recommendations. Such a committee could be formed by the United Nations.
This committee will have to work through a multitude of open questions raised by AI technologies. Here are some suggestions.
Regulations: Governments and international organizations should work together to establish regulations and ethical guidelines for the development and use of AI. These regulations should address issues such as the lethal use of AI, bias, privacy, security, and transparency.
Mitigating the impact on the job market: In a recent study, legal services, securities, commodities and investments, customer service, telemarketing, and teaching were found to be the industries most exposed to advances in AI.[22] To mitigate the negative impact of AI on these sectors, governments, companies, and educational institutions should launch programs to retrain workers and equip them with the skills needed to work alongside AI. In addition, retraining programs should help workers transition to other industries where jobs are abundant.
Accountability and legal considerations: As AI itself cannot be held responsible for the harm it may cause, mechanisms should be established to hold accountable any organization or individual responsible for its misuse or harmful deployment. Any organizations or individuals involved in the development of AI should abide by the regulations and laws preventing its misuse.
Ethical considerations: AI developers should consider the implications of their research for human society and prioritize developments aligned with societal benefit.
Awareness: It is necessary to educate the public, policymakers, AI developers, and users about the potential risks of AI and to inform them about best practices. This will help prevent harmful developments and the misuse of AI.
Watchdogs: Regulatory bodies should be established to oversee the development and use of AI and to make sure that they meet legal and ethical standards.
Preventing AI use in weapons: All tech companies and AI developers should consider the potential misuse of their developments in weapons and should continually convey their findings and concerns to policymakers and other regulatory bodies.
Robust testing: It should be ensured that the data used for AI training are diverse and free of bias and error, and that AI does not learn anything harmful (see the data-audit sketch after this list).
Preventing over-regulation of AI: Having said all this about required regulations, we want to emphasize that all stakeholders, including governments, organizations, AI developers, manufacturers, teachers, and researchers, should work together to ensure that AI is not over-regulated. Regulations should prevent the misuse of AI without hindering advancements in the field for peaceful purposes.
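To illustrate the robust-testing point above, here is a minimal sketch of a pre-training data audit, assuming a tabular training set in a pandas DataFrame with illustrative column names (`group`, `label`): it compares positive-label rates across demographic groups and flags large disparities before the data are used for training. It is a toy check under these assumptions, not a complete fairness audit.

```python
# Minimal sketch: audit a tabular training set for label imbalance
# across demographic groups before training a model on it.
# The columns `group` and `label` are illustrative assumptions.
import pandas as pd

# Toy stand-in for a real training set.
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Positive-label rate per group.
rates = data.groupby("group")["label"].mean()
print(rates)

# "Four-fifths" style check: flag the data if the lowest rate is
# below 80% of the highest, a common rule of thumb for disparate impact.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: disparate label rates (ratio = {ratio:.2f}); "
          "review data collection and labeling before training.")
```

Such automated checks are cheap to run and could form part of the routine testing that watchdog bodies require before an AI system is deployed.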
Overall, preventing the misuse of AI requires a comprehensive framework of regulations, ethical responsibilities, and technical and social considerations. In many areas of life, AI can be beneficial if used with care; however, we need to realize our responsibilities.
Summary
AI technology is extremely useful and can bring many benefits to humanity. We ourselves extensively use AI to advance our areas of research. Thus, we are not calling for a complete ban on AI; rather, we simply want AI to remain at the level of a useful tool, which requires us to create regulations that hinder the use of AI for unethical and harmful applications, prevent AI from becoming superintelligent and too autonomous, and keep it from replacing vast numbers of humans in their jobs.
AI offers enormous potential for technological and scientific advancements; however, its use comes with responsibilities. It is important to ensure that the use of AI meets ethical standards and is safe for both humans and the environment. Researchers should consider the negative impacts of their research and developments on society and should prioritize research that is in line with ethical and moral standards. In addition, regulations and ethical guidelines should be introduced to help prevent the misuse of AI. Regulatory bodies should play the role of watchdogs and should hold accountable tech companies and individuals that deploy or use AI for harmful purposes.
While we call for establishing international regulatory bodies, we realize that the unprecedented speed of AI development is at odds with the slow pace of international efforts. Thus, we also call on national governments, companies, and individuals to reflect on the dangers of AI, form internal ethics and regulation committees, and refrain from developing unethical and dangerous technology. Commercial organizations and entrepreneurs working on AI may need to voluntarily put their ambitions and desire for profit or power on hold for the sake of humanity. Examples are the recent decision by Italy to ban ChatGPT[12] and an open letter calling for a moratorium on developing models more powerful than GPT-4.[2]
Additional information
Acknowledgements
This study was not funded by any funding agency.
Competing interests
The authors have no competing interests.
Ethics statement
This article does not contain any studies with human or animal subjects.
References
1. Russell, Stuart (2023). "AI weapons: Russia's war in Ukraine shows why the world must enact a ban". Nature 614 (7949): 620–623. doi:10.1038/d41586-023-00511-5.
2. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2023-04-15.
3. Dral, Pavlo O.; Barbatti, Mario (2021). "Molecular excited states through a machine learning lens". Nature Reviews Chemistry 5 (6): 388–405. doi:10.1038/s41570-021-00278-1.
4. Zheng, Peikun; Zubatyuk, Roman; Wu, Wei; Isayev, Olexandr; Dral, Pavlo O. (2021). "Artificial Intelligence-Enhanced Quantum Chemical Method with Broad Applicability". dx.doi.org. Retrieved 2023-04-15.
5. Ullah, Arif; Dral, Pavlo O. (2021). "Predicting the future of excitation energy transfer in light-harvesting complex with artificial intelligence-based quantum dynamics". dx.doi.org. Retrieved 2023-04-15.
6. Dral, Pavlo O. (2023). Quantum Chemistry in the Age of Machine Learning. Amsterdam: Elsevier.
7. Briggs; Kodnani (2023). The Potentially Large Effects of Artificial Intelligence on Economic Growth. Global Economics Analyst. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf.
8. Twenge, Jean M.; Spitzberg, Brian H.; Campbell, W. Keith (2019). "Less in-person social interaction with peers among U.S. adolescents in the 21st century and links to loneliness". Journal of Social and Personal Relationships 36 (6): 1892–1913. doi:10.1177/0265407519836170.
9. Abid, Abubakar; Farooqi, Maheen; Zou, James (2021). "Large language models associate Muslims with violence". Nature Machine Intelligence 3 (6): 461–463. doi:10.1038/s42256-021-00359-2.
10. Apthorpe, Noah; Reisman, Dillon; Feamster, Nick (2017). A Smart Home is No Castle: Privacy Vulnerabilities of Encrypted IoT Traffic. OCLC 1106265113.
11. Algarni, Abdullah; Thayananthan, Vijey (2022). "Autonomous Vehicles: The Cybersecurity Vulnerabilities and Countermeasures for Big Data Communication". Symmetry 14 (12): 2494. doi:10.3390/sym14122494.
12. "Artificial intelligence: stop to ChatGPT by the Italian SA. Personal data is collected unlawfully, no age verification system is in place for children". Garante per la protezione dei dati personali (2023). https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english.
13. Feickert, A.; Lucas, N. J. (2009). Army Future Combat System (FCS) "Spin-Outs" and Ground Combat Vehicle (GCV): Background and Issues for Congress. Washington, DC: Congressional Research Service, Library of Congress.
14. An, Jiafu; Ding, Wenzhi; Lin, Chen (2023). "ChatGPT: tackle the growing carbon footprint of generative AI". Nature 615 (7953): 586. doi:10.1038/d41586-023-00843-2.
15. Véliz, Carissa (2023). "Chatbots shouldn't use emojis". Nature 615 (7952): 375. doi:10.1038/d41586-023-00758-y.
16. Stokel-Walker, Chris; Van Noorden, Richard (2023). "What ChatGPT and generative AI mean for science". Nature 614 (7947): 214–216. doi:10.1038/d41586-023-00340-6.
17. van Dis, Eva A. M.; Bollen, Johan; Zuidema, Willem; van Rooij, Robert; Bockting, Claudi L. (2023). "ChatGPT: five priorities for research". Nature 614 (7947): 224–226. doi:10.1038/d41586-023-00288-7.
18. Chiang, Ted (2023). "ChatGPT Is a Blurry JPEG of the Web". The New Yorker. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.
19. Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis et al. (2016). "Mastering the game of Go with deep neural networks and tree search". Nature 529 (7587): 484–489. doi:10.1038/nature16961.
20. Castelvecchi, Davide (2023). "How will AI change mathematics? Rise of chatbots highlights discussion". Nature 615 (7950): 15–16. doi:10.1038/d41586-023-00487-2.
21. Boiko, Daniil A.; MacKnight, Robert; Gomes, Gabe (2023). "Emergent autonomous scientific research capabilities of large language models". arXiv:2304.05332. doi:10.48550/arXiv.2304.05332.
22. Felten, Ed; Raj, Manav; Seamans, Robert (2023). "How will Language Modelers like ChatGPT Affect Occupations and Industries?". arXiv:2303.01157. doi:10.48550/arXiv.2303.01157.