Viewpoint

Ethical Concerns Grow as AI Takes on Greater Decision-Making Role

Ameera AlHasan, MD, MRCSEd, FACS

February 8, 2023


Both ethics and artificial intelligence (AI) are complex disciplines that, when applied to healthcare, give rise to many practical dilemmas, conflicts, and contradictions. To understand the ethical challenges that may arise from the use of AI in healthcare, one must first simplify its underlying principles; only then can future obstacles and potential solutions be identified. Some may contend that the autonomous use of AI in hospitals, particularly in the operating room, is far from an everyday reality; autonomous or not, however, this is a technology that continues to advance at an exponential pace.

This article reviews biomedical ethics1 and AI, and proposes a novel “ABCD” approach to understanding the challenges that arise when these fundamental concepts intersect in the healthcare setting. Potential solutions also are proposed with the goal of overcoming the ethical challenges. In healthcare, ethics cannot afford to lag behind science.

Back to Basics

Four basic principles constitute the cornerstone of medical ethics: autonomy, beneficence, nonmaleficence, and justice.2 Simplified definitions of these concepts are provided in Table 1. For additional information, refer to a dedicated ethics text or guide.

In order to understand AI, it is important to understand how it is created. First, massive quantities of data are collected. Thousands or millions of healthcare records are used to retrieve the data, which may be organized within specific parameters such as patient vital signs, symptoms, medications, or surgeon hand movements. These data are then categorized and entered into software that uses complex programming and mathematical processes to generate algorithms.

The algorithms make up the backbone of AI, as they are able to make decisions or take actions when faced with new information based on the background data they have been fed (see Figure 1).3
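The pipeline described above, from record collection to a decision-making algorithm, can be reduced to a minimal sketch. The data, the sepsis label, and the threshold rule below are all invented for illustration; real clinical AI uses far richer features and models.

```python
# Illustrative sketch (hypothetical data and model, not a clinical system):
# the process in Figure 1 reduced to three steps -- collect records,
# organize them within specific parameters, and generate an "algorithm."

# Step 1: collect data (here, a handful of synthetic patient records).
records = [
    {"heart_rate": 72,  "sepsis": False},
    {"heart_rate": 118, "sepsis": True},
    {"heart_rate": 95,  "sepsis": False},
    {"heart_rate": 130, "sepsis": True},
]

# Step 2: organize the data into features (vital signs) and labels.
features = [r["heart_rate"] for r in records]
labels = [r["sepsis"] for r in records]

# Step 3: derive the simplest possible learned rule -- a threshold
# midway between the mean heart rate of each labeled group.
def learn_threshold(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y]
    neg = [x for x, y in zip(xs, ys) if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

threshold = learn_threshold(features, labels)

# The resulting "algorithm" now makes decisions on new information.
def predict(heart_rate):
    return heart_rate > threshold
```

However crude, the sketch makes the ethical point concrete: every decision `predict` makes is determined entirely by the background data fed into step 1.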

Ethical challenges may arise at any point in the AI creation process.3 For example, using biased data from a White male population to create an algorithm that makes treatment decisions may result in decisions that are not beneficial, or even harmful, to non-White or female patients (thereby violating beneficence and nonmaleficence).
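The mechanism by which skewed data becomes a harmful decision rule can be shown with a toy example. The groups, lab values, and cutoff below are entirely hypothetical; the point is only that a rule derived from one population can systematically fail another.

```python
# Hypothetical illustration of bias from skewed training data.
# A treatment cutoff is learned only from Group A, whose relevant
# lab values run higher than Group B's (all numbers invented).

group_a_healthy = [100, 105, 110]   # Group A, no treatment needed
group_a_sick = [140, 150, 160]      # Group A, treatment needed

# Rule derived solely from Group A: treat above the midpoint.
cutoff = (max(group_a_healthy) + min(group_a_sick)) / 2

# Group B patients who genuinely need treatment, but whose values
# run lower; the Group A-derived cutoff misses every one of them.
group_b_sick = [115, 118, 122]
missed = [v for v in group_b_sick if v <= cutoff]
print(f"{len(missed)} of {len(group_b_sick)} Group B patients missed")
```

Nothing in the rule is malicious; the harm to Group B (a violation of beneficence and nonmaleficence) follows mechanically from whose data were collected.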

Table 1. Four Principles of Medical Ethics
Autonomy
Patients have the right to make choices and take their fate into their own hands after being well-informed.
Beneficence
Healthcare professionals have the best interests of the patient at heart and will always attempt to choose what is most beneficial to the patient.
Nonmaleficence
“Do no harm.” The decisions made by healthcare professionals bear no ill intentions or harm toward the patient.
Justice
Equal opportunities are given to patients in similar situations, thereby establishing equity and fairness.
Figure 1. How AI Is Created: An Oversimplified Review

ABCD Approach to Ethics in AI

An ABCD model is proposed here to summarize the potential ethical challenges associated with AI:

  • A: Accountability
  • B: Bias
  • C: Confidentiality
  • D: Decision-making
Accountability

Whether at the level of a healthcare institution or in a court of law, doctors are held accountable for treatment decisions that cause complications or unnecessary suffering for patients; they may also face penalties.

In surgery, the primary surgeon is the first individual to be questioned when an error or complication occurs. Machines, like humans, are prone to committing errors. When AI-driven machines and robots start making critical treatment decisions or operating autonomously, who then is responsible for errors, complications, or patient death?

It is futile and illogical to hold the machine responsible, and it is unreasonable for the surgeon to bear full responsibility for AI-driven errors. Likewise, it does not seem plausible to penalize the software developers and programmers for every complication that arises in every hospital that has opted to use their AI platforms. Such dilemmas of accountability will become even more complicated when AI-driven machines start to cause harm on their own through deep learning.

It seems there is a need for a change in mindset from shifting responsibility to sharing responsibility among the involved parties. The impact of this shared responsibility on medical litigation cases remains unclear and requires further investigation.

An example from outside healthcare is the Amnesty International campaign “Stop Killer Robots,”4 which refers to “autonomous weapons systems” used in warfare. This campaign is a stark example of how ethical issues regarding the use of AI may surface. Discussions continue to take place on accountability and responsibility in the battlefield—why not in healthcare?

Bias

Bias always has existed in some form or another in both medical practice and healthcare research. Whereas blatant bias and discrimination exhibited by an individual against a specific racial group may become more noticeable over time, AI exhibits a more implicit bias.

As previously mentioned, biased or skewed data used to create algorithms will result in AI that is biased against any number of attributes such as race, gender, or socioeconomic status.3 This bias, in practice, violates ethical principles such as justice. Moreover, unequal opportunities are present for patients receiving care at under-resourced centers, whereas more privileged hospitals may have access to more advanced and, perhaps, more regulated AI platforms.

Confidentiality

Patient data are considered sensitive, and worldwide attempts to protect these data include the Data Protection Act 2018 in the UK and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the US. Protecting data and patient privacy is important. However, AI cannot exist without data. This creates a paradox: demands are made both to conceal data for the sake of confidentiality and to release data to AI developers in order to create better algorithms.

In certain applications, AI not only requires patient data but also may draw on data about healthcare workers and institutions, such as surgeons’ hand motions and operative performance or hospital team dynamics.

Data ownership is just as important an issue as data privacy. On robotic surgery platforms, for instance, the manufacturer automatically owns all the data.

Decision-making

AI is designed to make decisions that may include making a diagnosis, prescribing medication, or controlling instrument movement in the operating room.

The nature of the decision must be questioned if it is to adhere to ethical standards. Such questions may include asking whether a decision is beneficial, appropriate, or harmful to a specific patient.

An AI platform’s decision may override patient autonomy if the platform chooses for patients rather than allowing them to choose for themselves. Equally important are the autonomy and decision-making capacity of the healthcare professional, which may be restricted by AI if that possibility is not taken into consideration during the AI design process.

Potential Solutions

Solving ethical problems in AI starts with vigilance and the understanding that AI is not immune to human prejudice. It is of utmost importance that strict legislation be put forth to regulate the design as well as the implementation of AI platforms. Lawyers and judiciary officials with a specific focus on healthcare policy and litigation must be consulted early in order to plan accordingly for potential problems and create solutions that protect all stakeholders.

Moreover, healthcare professionals and AI developers must work hand in hand with ethicists and philosophers5 to develop an ethical code of conduct that guarantees the preservation of human rights, dignity, and justice through the use of AI. This code may one day be known as “The Robocratic Oath.” Whereas Isaac Asimov’s Three Laws of Robotics6 once seemed like science fiction, the need to regulate automaton behavior and ensure no harm is done to human beings is now a reality.

On the other hand, specific challenges that arise in the practical setting require more technical solutions. This means if the clinical application of AI creates an ethical challenge in a specific hospital or patient population, then this issue has to be addressed at the level of the AI algorithm, whether at the creation or implementation phases. The “4-D model,”3 for example, proposes a cyclical solution for managing bias in AI at the level of the design process, from data collection to postdelivery evaluation in the “dashboard” phase.

Conclusion

Understanding the basic principles that govern biomedical ethics and AI creates an awareness of the ethical challenges that may arise. Ethical issues concerning the application of this technology generally can be organized into the following categories: accountability, bias, confidentiality, and decision-making. It is only through awareness, legislation, codes of conduct, and conscientious design that potential solutions may be found.

Disclaimer

The thoughts and opinions expressed in this viewpoint article are solely those of Dr. AlHasan and do not necessarily reflect those of the ACS.


Dr. Ameera AlHasan is a specialist surgeon at the Department of Surgery at Jaber Al-Ahmad Al-Sabah Hospital, Kuwait, and Course Director and State Faculty for Advanced Trauma Life Support in Kuwait.


References
  1. Hashimoto DA, Rosman G, Meireles O. Artificial Intelligence in Surgery: Understanding the Role of AI in Surgical Practice. New York, NY: McGraw Hill; 2021.
  2. Hope T, Dunn M. Medical Ethics: A Very Short Introduction. Oxford, England: Oxford University Press; 2018.
  3. AlHasan AJMS. Bias in medical artificial intelligence. The Bulletin of the Royal College of Surgeons of England. 2021;103(6): 302-305.
  4. Piasecka N. Towards a ban on killer robots: Expectations high as states at UN to decide next steps on regulation of hi-tech weaponry. Amnesty International 2021. Available at: https://www.amnesty.org/en/latest/campaigns/2021/12/towards-a-ban-on-killer-robots. Accessed January 13, 2023.
  5. Harari YN. 21 Lessons for the 21st Century. London, England: Penguin Random House UK; 2018.
  6. Asimov I. I, Robot. New York, NY: HarperVoyager; 2018.