Ethics of AI in Physician-Patient Relationships

  • June 23, 2025
  • 12:00 PM - 1:00 PM
  • 2100 E 71st Street Indianapolis, IN 46220


Program: Ethics of AI in Physician-Patient Relationships
Speaker: Jane Hartsock JD, Director of Clinical Ethics, IU Health
Introduced By: John Langdon
Attendance: NESC: 91; Zoom: 23
Guest(s): Steve Johantgen, William Johnston, Manish Krishnamaneni
Scribe: Ruth Schmidt
Editor: Bill Elliott
View a Zoom recording of this talk at: https://www.scientechclubvideos.org/zoom/06232025.mp4

Dr. Hartsock, J.D., M.A., is Director of Clinical Ethics at IU Health. She was sponsored by John Langdon.

Dr. Hartsock described the use of artificial intelligence (AI) in medicine, sharing the history of medical ethics and the disruption AI brings to the medical profession. The first principle of any ethics discussion is to maximize the good, yet this concept is often missing from the AI ethics discussion, which focuses instead on bias, transparency, safety, responsibility, and security. We apply new tools under the old rules. John McCarthy coined the term "artificial intelligence" in 1955. Dr. Hartsock favors the Turing Institute definition of AI: the design and study of machines that can perform tasks that would previously have required human brainpower to accomplish. To show how fast AI is improving, she contrasted a poor AI-generated script from 2018 with the much-improved output of today. She wondered what it will be like to have a bot, rather than a doctor, discuss patient concerns. AI is increasing in potency according to this list (quoted from aisafetybook.com):

  1. Narrow AI can perform specific tasks, potentially at a level that matches or surpasses human performance.
  2. Artificial general intelligence (AGI) can perform many cognitive tasks across multiple domains. It is sometimes interpreted as referring to AI that can perform a wide range of tasks at a human or superhuman level.
  3. Human-level AI (HLAI) could perform all tasks that humans can do.
  4. Transformative AI (TAI) is a term for AI systems with a dramatic impact on society, at least at the level of the Industrial Revolution.
  5. Artificial superintelligence (ASI) is the most powerful, describing systems vastly outclassing human performance on virtually all intellectual tasks.

We are currently at the beginning of developing AGI (level 2 above), and TAI (level 4) is expected within two years. Dr. Hartsock focused on how this may impact ethics across the medical profession, including all allied professions: doctors, nurses, social workers, etc.

Dr. Hartsock believes that two millennia of thought, from Hippocrates to John Gregory, will continue to govern medical ethics.

Did Hippocrates say "first, do no harm"? No, he never actually said this. He is known for writing (circa 400 BC) about the natural causes of disease and the art of healing. His two important works are On the Sacred Disease (epilepsy as caused by disease of the brain) and On the Epidemics. He described the art of healing as a relationship in which the physician is the servant of the art, and both the patient and the physician fight the disease together.

John Gregory wrote about clinical ethics in 1772. He was influenced by the philosopher David Hume, who believed that people mirror one another: we see suffering, and we feel suffering. Inspired by Hume and by his three virtuous daughters, Gregory's ethics demand that medical professionals see suffering, recognize it, and soothe it. This includes palliative care for the dying.

Dr. Hartsock shared her experiences on the IU Health board. The board asks questions about medical products: What part of the product uses AI? What decisions are made by AI? What bias might the AI introduce? Some vendors are not prepared to answer these questions, or fail to account for both the algorithms and the data used to train the AI. This leads to agnotological concerns. Will AI be used as a tool to promote cultural ignorance or doubt via misinformation? Will venture capitalists and technologists do what they can rather than what they should? What will the impact on society be? Work displacement? False information? Copyright violations? Not knowing what is influenced by AI? Remember Mary Shelley's Frankenstein: we are the creators of AI; let us not become monsters.

Conclusion: Possible solutions

  1. Regulations to protect the patient-physician relationship.
  2. Burden the companies benefiting from AI with the costs, not society.
  3. Maintain the choice to use or not use AI.

Jane Hartsock, J.D.
