When Physicians Are Replaced with a Protocol

My experience in medicine has taught me to distinguish genuine innovation from subtle reclassification, the kind that fundamentally alters practice while appearing to leave it unchanged. Artificial intelligence has recently attracted considerable attention, including the widely circulated assertion that AI has been “legally authorized to practice medicine” in the United States. Interpreted literally, this claim is inaccurate. No medical board has licensed a machine. No algorithm has sworn an oath, accepted fiduciary duty, or assumed personal liability for patient harm. No robot physician is opening a clinic, billing insurers, or standing before a malpractice jury.

However, stopping at this observation overlooks the broader issue. Legal concepts of liability are currently being redefined, often without public awareness.

A significant transformation is underway, warranting more than either reflexive dismissal or uncritical technological enthusiasm. The current development is not the licensure of artificial intelligence as a physician, but rather the gradual erosion of medicine’s core boundary: the intrinsic link between clinical judgment and human accountability. Clinical judgment involves making informed decisions tailored to each patient’s unique needs and circumstances, requiring empathy, intuition, and a deep understanding of medical ethics.

Human accountability refers to the responsibility healthcare providers assume for these decisions and their outcomes. This erosion is not the result of dramatic legislation or public debate, but occurs quietly through pilot programs, regulatory reinterpretations, and language that intentionally obscures responsibility. Once this boundary dissolves, medicine is transformed in ways that are difficult to reverse.

The main concern isn’t whether AI can refill prescriptions or spot abnormal lab results. Medicine has long used tools, and healthcare providers generally welcome help that reduces administrative burden or improves pattern recognition. The real issue is whether medical judgment, the determination of which actions to take, for which patients, and at what risk, can be treated as a computational output severed from moral responsibility. Historically, efforts to disconnect judgment from accountability have produced harm for which no one answered.

Recent developments clarify the origins of the current confusion. In several states, limited pilot programs now allow AI-driven systems to assist with prescription renewals for stable chronic conditions under narrowly defined protocols. At the federal level, proposed legislation has raised the question of whether artificial intelligence might qualify as a “practitioner” for specific statutory purposes, provided it is appropriately regulated. These initiatives are typically presented as pragmatic responses to physician shortages, access delays, and administrative inefficiencies. None explicitly designates AI as a physician, but collectively they normalize a more troubling premise: that medical actions can occur without a clearly identifiable human decision-maker.

In practice, this distinction is fundamental. Medicine is defined not by the mechanical execution of tasks but by the assignment of responsibility when outcomes are unfavorable. Writing a prescription is straightforward; accepting responsibility for its consequences—particularly when considering comorbidities, social context, patient values, or incomplete information—is far more complex. Throughout my career, that responsibility has always resided with a human who could be questioned, challenged, corrected, and held accountable. When Dr. Smith makes an error, the family knows whom to contact; accountability has a human face. No algorithm, however sophisticated, can fulfill that role.

The primary risk is not technological but regulatory and philosophical: a shift from virtue ethics to proceduralism. When lawmakers and institutions redefine medical decision-making as a function of systems rather than personal acts, the moral framework of medicine changes. Accountability becomes diffuse, harm is more difficult to attribute, and responsibility migrates from clinicians to processes, from judgment to protocol adherence. When errors inevitably occur, the prevailing explanation becomes that “the system followed established guidelines,” and the move from individualized ethical decision-making to mechanized procedural compliance is complete.

Author: HP McLovincraft
