Should AI assistants be allowed to provide medical advice?

From Wikiversity

AI assistants have the potential to provide medical advice, but should they? Why or why not?

AI assistants should be allowed to provide medical advice

  • Pro AI assistants are already in use answering medical questions.
  • Pro AI assistants can draft high-quality, personalized medical advice for review by clinicians, which can help to solve real-world healthcare delivery problems. [1]
  • Pro AI will outperform doctors in diagnosing patients.
    • Objection AI may be better at identifying the diagnosis but cannot provide the human connection required to empathetically deliver the diagnosis and treatment plan
      • Objection Studies show that patients rate AI diagnoses as more empathetic than human-generated diagnoses. [2]
  • Pro One AI system could see thousands of patients simultaneously, doing the work of a thousand doctors at any given time.
    • Objection Barriers to physician access due to a shortage of physicians should not be remedied by technical workarounds; we should train more doctors
  • Pro AI systems can be sufficiently trained to screen patients and, when unable to provide a diagnosis, refer the patient to a human physician.
    • Objection Patients seeking human diagnosis may game the system, submitting false or contradictory information in order to prevent successful AI diagnosis
  • Pro AI can deliver medical advice instantly and is accessible 24/7, providing a solution for people who may not have immediate access to healthcare services. This is particularly beneficial for individuals in rural areas, developing countries, or during off-hours when medical professionals might not be readily available.
    • Objection Patients can receive timely medical advice from human physicians using telemedicine technology
  • Pro AI can help to prioritize cases based on the urgency of symptoms, ensuring serious conditions receive immediate attention. It can also help to reduce unnecessary hospital visits by providing advice for managing minor conditions.
  • Pro If by "AI" we mean generative AI/LLMs, then there is no law in the U.S. against patients using them for medical advice, and it is unclear what a law prohibiting such use would look like. Users of generative AI are treated as adults, responsible for understanding its limitations and avoiding its use for certain purposes when in doubt.
  • Con AI assistants may not be properly trained to provide accurate medical advice, which could lead to negative consequences for patients. [3]
    • Objection AI assistants have the potential to be no less accurate than human experts
  • Con AI hallucinates, raising the potential for confidently delivered misdiagnoses
  • Con AI cannot perform a proper clinical diagnosis the way doctors can. Two patients can have similar symptoms but different diseases; that is why doctors are needed, who complete a full clinical workup of a patient before recommending drugs or treatment.
  • Con While AI can analyze data rapidly, its advice is only as good as the data it's trained on. There's a risk that the AI could provide incorrect advice if it has been trained on flawed or biased data.
    • Objection AI systems are always improving and will get better when exposed to real-time information
      • Objection In the medical field, there is no acceptable room for error in diagnosis
  • Con AI would need access to sensitive personal health information to give advice, which may present significant data privacy and security issues.
  • Con Even if AI is able to accurately and securely provide medical advice, AI is unable to do so in a way that is culturally and situationally appropriate. Do you want a robot telling you that you have terminal cancer?
  • Con Human experts are more flexible, enabling them to better respond to unpredictable patient reactions to diagnoses
  • Con Diagnoses are more than facts; they're the start of a medical journey. Patients will feel more comfortable and empowered sharing that journey with a human doctor than they would feel with a machine
  • Con In the case of misdiagnosis by AI assistant, it is difficult to identify accountability for medical malpractice suits, inhibiting patient protection
  • Con Why a particular decision or piece of advice was given to the user may not be transparent or explainable. The following video explains in simple terms why transparent AI is of special importance in medicine, which e.g. requires such systems to be open source.

Notes and references

  1. "Study finds ChatGPT outperforms physicians in providing high-quality, empathetic advice to patient questions". ScienceDaily. Retrieved 2023-06-19.
  2. Ayers, John W.; Poliak, Adam; Dredze, Mark; Leas, Eric C.; Zhu, Zechariah; Kelley, Jessica B.; Faix, Dennis J.; Goodman, Aaron M.; et al. (2023-06-01). "Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum". JAMA Internal Medicine 183 (6): 589–596. doi:10.1001/jamainternmed.2023.1838. ISSN 2168-6106.
  3. "AI Assistants in Health Care: A Treatment For Patient Communication Problems". Conversational AI assistant for personal use. 2020-08-06. Retrieved 2023-06-19.