Blog: Ethical Considerations of Artificial Intelligence in Medicine



Image from Ijaz, 2018

The advancement of medicine is just one of the many ways in which artificial intelligence (AI) is expanding throughout society. Drawing on disciplines such as machine learning and robotics, AI is now being integrated into healthcare for processes such as clinical diagnosis, image analysis, data interpretation, and waveform analysis (Ramesh, Kambhampati, Monson, & Drew, 2004). It is even capable of helping with prognosis: artificial neural networks have demonstrated the ability to predict survival in patients with breast cancer and colon cancer (Ramesh, Kambhampati, Monson, & Drew, 2004). However, because the technology is so new and still largely under development, its ethical implications have received little attention, and oversight and regulation of these AI systems remain minimal, which could result in unethical or even dangerous situations for patients (Luxton, 2019).

Ethical Dilemmas:

There are many ethical concerns surrounding the use of AI in medicine. Three main dilemmas could arise from this new technology: the limits of machine learning, the increased potential for discrimination, and the lack of empathetic capability.

AI algorithms can help doctors make decisions, but are there limits to these technologies’ capabilities? IBM Watson is a clinical decision support system (CDSS) that can assist medical professionals in diagnosing and treating patients (Luxton, 2019). If there are flaws in the data it examines, however, it can produce errors (Luxton, 2019). Programmers constantly work to improve existing software and fix bugs in the code, but for an algorithm as impactful as Watson, these errors could be severe and even damaging to a patient’s health. Patients may also place excessive faith in AI technology such as Watson (Luxton, 2019). A CDSS is an example of augmented intelligence, in which “normal human intelligence is supplemented through use of technology in order to help people become faster and more accurate at the tasks that they’re performing” (Luxton, 2019). Augmented intelligence may help medical professionals make decisions, but it can also create a reliance on the technology. If the algorithm gives the wrong diagnosis or prognosis and the doctor is overly reliant on it, the patient will not receive the best care possible. It is important that augmented intelligence does not replace the doctor’s own judgment and is used simply as a supplement.

Image from SSI Staff, 2019

The next ethical concern is the potential for increased discrimination in medicine. Health disparities already exist due to biases held by medical professionals, and using AI algorithms in roles traditionally held by people could worsen these disparities in several ways. AI algorithms learn from training data, which may underrepresent or exclude certain groups in society (Khullar, 2019); as a result, those groups would not receive diagnoses as accurate as others. In a recent study by researchers at MIT and Stanford University, three facial-analysis programs had error rates no worse than 0.8% for light-skinned males, while for darker-skinned females the error rates at times reached 20% and 34% (Hardesty, 2018). A disparity like this could severely affect the lives of dark-skinned women if such technology were incorporated into routine medicine. Even nominally “unbiased” algorithms could produce unfair outcomes for patients of different backgrounds, creating disparities between groups (Khullar, 2019). The data itself is another route to discrimination: because AI algorithms learn from real-world data, they can perpetuate the existing biases encapsulated in that data (Khullar, 2019). Lastly, access to advanced AI systems may be limited for people of lower socioeconomic status, widening the existing gap in health care availability.
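
To make this kind of disparity concrete, a fairness audit typically measures a model’s error rate separately for each demographic group, much as the MIT/Stanford study did by skin type and gender. Below is a minimal sketch in Python; the group labels and evaluation results are hypothetical, purely for illustration.

```python
# Minimal sketch (hypothetical data): compute a model's error rate
# separately for each demographic subgroup to expose disparities.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results; a real audit would use a large held-out test set.
results = [
    ("lighter-skinned male", 1, 1), ("lighter-skinned male", 0, 0),
    ("darker-skinned female", 1, 0), ("darker-skinned female", 1, 1),
]
print(error_rates_by_group(results))
# A large gap in error rates between groups signals the kind of disparity described above.
```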

Finally, we need to consider that AI algorithms are not human and therefore have neither empathy nor emotions. Systems such as Watson do not feel, which may lead to misguided decisions, and they lack the context that doctors gain from knowing the patient. Consider, for example, a case in which a patient is unconscious and the doctor must decide whether it is time to “pull the plug.” An algorithm may determine that it should or should not be done, but a doctor may know of extenuating circumstances that point the other way. It is important for the doctor to weigh factors that an algorithm cannot.

Vulnerabilities:

To get a better idea of how these ethical principles may be challenged in practice, we analyzed two cases in which artificial intelligence is currently being tested for use in the medical field.

Case Study #1: Medical Implant Risk Analysis

Implantable heart-monitoring devices built by the medical tech startup Coraźon use a smartphone app to monitor and control the devices over a wireless connection. The implant can only be accessed when the phone is in close proximity, and all data is sent and stored under encryption (“Case: Medical Implant Risk Analysis”, 2018). However, vulnerability testing of the technology showed that under certain circumstances data exchanges could be modified or manipulated to send faulty commands and reports.
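
The case description does not give Coraźon’s actual protocol, but the kind of integrity check at issue can be sketched in a few lines: each command from the phone app carries a message authentication code, so a modified or forged message fails verification. This is a minimal illustration only; the shared key, the command strings, and the omission of key management and replay protection are all assumptions.

```python
# Minimal sketch, not Coraźon's actual protocol: authenticate each
# app-to-implant command with an HMAC so tampered messages are rejected.
import hmac
import hashlib

SHARED_KEY = b"device-specific secret provisioned at implant time"  # hypothetical

def sign_command(command: bytes) -> bytes:
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

msg = b"REQUEST_STATUS_REPORT"                          # hypothetical command
tag = sign_command(msg)
assert verify_command(msg, tag)                          # genuine command accepted
assert not verify_command(b"DISABLE_MONITORING", tag)    # forged command rejected
```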

In this scenario, multiple parties or “stakeholders” are involved, each with distinct interests in the outcome of the technology: the developers, the doctors, and the patients. While the developers may be more interested in producing an economically viable product, doctors and patients are generally more concerned with how the product improves the health and lifestyle of the patient. This difference in vision exposes the potential for vulnerabilities and increased risk. As previously mentioned, such an implant is at risk of hacking that could modify its data or commands. Such actions could cause a doctor to misdiagnose or mistreat a patient based on faulty data, or the implant itself could physically harm the patient. Although in this specific case harm was deemed negligible because of the device’s limited capabilities, similar technologies could easily lead to fatalities. When medicine and technology together hold a patient’s life at stake, a bug in the code or a hack could be lethal.

The ethical dilemma most relevant to the patient and doctor has to do with technological accessibility. A main draw of this heart implant monitor is that it connects to and is controlled by a smartphone application, yet many Americans cannot afford basic health care, let alone a smartphone. Without access to a technology like this one, which gives patients greater control over their own health, the gap in socioeconomic accessibility and affordability will only widen. The lifestyle of patients with access will improve, while many lower-income and minority patients will still be coming in for repeated check-ups and facing increased healthcare bills. A doctor is required to provide the same level of care to all patients, regardless of race, religion, or status. This technology and others like it are not bound by the same moral obligations, so they may worsen our system in the long run.

According to ACM Ethical Principle 1.4, computing professionals should “be fair and take action not to discriminate.” Creating a device that is available only to the upper echelon of society is not fair and just. Worse, this implant and its associated application could be used to send faulty commands that disrupt a patient’s heart, and patient data may be sold, stolen, or hacked and used by people who were never meant to see it. ACM Ethical Principle 1.2 states: “avoid harm.” Developers must consider whether their stake in the product is worth a potential loss of life. Does the benefit and profit outweigh the value of a life?

In a best-case scenario, this heart implant and application are accessible to everyone at the same rate. The system allows independent monitoring and control of the device via an application that can be downloaded onto a smartphone or run on an optional external tablet provided to patients at no extra cost. The app and the implant’s functions are highly secure, and any hacking that does occur can in no way alter the functionality of the implant and thus the heart itself.

In a worst-case scenario, this heart implant and application are accessible only to those who can afford them and who already own a smartphone capable of running the application. The data and functionality are poorly secured, and a successful hack could alter the heart’s actual function. If, say, the device were implanted in a world leader, terrorists could hack into it and hold his or her life for ransom with the click of a button.

Potential improvements that would not only mitigate the risk of the worst-case scenario but also carry out the project in an all-around more ethical manner include:

  • Include the option of an external device, such as a tablet, available to all patients at no extra cost. A smartphone application that interacts with the implant is a great way to increase patients’ control over their own health, but it does not take into account disparities in wealth. An external device that enables the same control would help reduce that socioeconomic burden.
  • Get rid of the wireless device-to-phone connection and instead implement a direct, wired data transfer. The wireless connection was already limited to a short range, but it greatly increased the risk of hacking. If data and commands were instead transferred through a small wire that plugs into the phone or external device and is held directly over the implant, the hacking risk would be significantly reduced.
  • Offer the implant at a low cost so it is accessible to those of lower socioeconomic status.

Case Study #2: How Should AI be Developed, Validated, and Implemented in Patient Care?

Pathologists are considering using a new artificial intelligence (AI) program from the Google Brain project. Not only can the program scan images much faster than a human, but it also shows greater sensitivity for detecting cancerous cells (Anderson & Anderson, 2019). Before incorporating these types of AI programs into diagnostics and care, we need to consider the impact they will have on all stakeholders. As in the heart implant case study, there are again three stakeholders: the developers, the doctors, and the patients. This time, while the developers are interested in producing an economically viable product and the patients are concerned with how the product diagnoses their condition effectively and efficiently, the doctor is interested both in the patient’s outcome and in their own liability for relying on a largely “mysterious” diagnostic tool. The diagnostic AI has the potential to diagnose patients faster and with greater confidence than a doctor, leading to faster treatment and a greater chance of survival. However, because the AI uses a black-box algorithm that offers no justification for its diagnoses, it also has the potential to be wrong. In taking that risk, the doctor and patient both face a possible misdiagnosis or missed diagnosis that could lead to treatment down the wrong path, delayed care, and thus a lower chance of survival.

Such an AI raises many ethical considerations. As mentioned above, there is the black-box issue: this diagnostic tool, like many, provides no justification or evidence for its decisions, and doctors may not understand how it works. This can leave doctors liable to malpractice claims of unsupported decision making, whether the diagnosis was correct or not. Another consideration is automation bias. If these machines are thought of as “perfect,” physicians may become overly reliant on them; inherent trust in and complacency toward technology can erode medical professionals’ skills and lower the overall standard of human medical care, putting patients at greater risk. Finally, there is the accountability issue: if and when the AI goes wrong, who should be held accountable?

As the computing professionals creating such AIs, developers must be able to provide some sort of supporting output along with the diagnosis the artificial neural network produces. A program cannot conjure “cancer” out of thin air and expect us to take it at its word. ACM Ethical Principles 2.6 and 3.7 state that a computing professional should “perform work only in areas of competence” and “recognize and take special care of systems that become integrated into the infrastructure of society,” respectively. A programmer has no business creating a machine with the power to change someone’s life through a potentially fatal diagnosis if they themselves do not understand the repercussions. The output the programmer develops affects not only the doctor but also the patient and the patient’s family. The ripples of a single diagnosis can touch many people, and the programmer must understand what each positive and negative diagnosis means to all of them.
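
As a toy illustration of what “output along with the diagnosis” could look like, the sketch below returns a probability together with each feature’s contribution to the score. The model, feature names, and weights are entirely hypothetical, and real diagnostic networks would need far richer explanation methods; the point is only that a prediction can be delivered with evidence attached.

```python
# Minimal sketch (hypothetical model and features): return evidence
# alongside a prediction instead of a bare verdict.
import math

WEIGHTS = {"cell_density": 1.8, "nucleus_irregularity": 2.3, "mitotic_count": 1.1}
BIAS = -4.0

def diagnose(features):
    # Per-feature contributions to the raw score, reported as "evidence".
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    return {"probability_malignant": probability, "evidence": contributions}

print(diagnose({"cell_density": 0.9, "nucleus_irregularity": 1.2, "mitotic_count": 0.4}))
# The "evidence" dict gives the doctor something concrete to check.
```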

In a best-case scenario, the medical diagnostic AI would be used in conjunction with a medical professional to diagnose patients. It would serve as a secondary source to back up a doctor’s diagnosis, or as a tool to help form the foundation of one, rather than as the sole and primary source of information. Along with its diagnosis, the AI would also provide some justification or further references that the doctor could consult, so that its claims are not unfounded.

In a worst-case scenario, medical professionals become wholly reliant on the diagnostic AI to diagnose patients and inform their course of treatment. In a specific case, the AI makes a mistake, but the doctor has grown too dependent on it to notice. The patient undergoes a treatment that is not only unnecessary but also harmful, because they do not actually have the disease. As a result, the patient falls truly ill and dies due to the original faulty AI diagnosis and the doctor’s failure to make a fully informed one.

Potential improvements that would not only mitigate the risk of the worst-case scenario but also carry out the project in an all-around more ethical manner include:

  • Requiring that diagnostic AIs produce some sort of output or evidence for their diagnoses. By offering support or secondary references, they supplement doctors’ decisions and keep them better informed; such output also reduces the mystery of black-box algorithms.
  • Requiring that doctors use diagnostic AIs as a secondary source alongside their own medical knowledge, rather than as the primary and first source of information. As a “second opinion,” AIs can aid and support physicians without the risk of automation bias.
  • Informing patients whenever the doctor is making decisions with the help of an AI algorithm, so they understand the risk they are accepting.

Conclusion:

AI can make medicine more efficient and offer an additional support system for doctors. However, it should not replace doctors; rather, it should be used as a supplemental tool. There is no doubt that AI will continue to be integrated into medical practice, and as the technology continues to grow, we suggest the following:

  • Doctors should be better educated on the limits of the technologies they work with and avoid placing full confidence in their output
  • Programs should provide justification for their output, and black-box algorithms should be restricted
  • Because this technology could widen the socioeconomic gap in access to medicine, we must consider the possibility that AI may actually decrease the standard of care for many people
  • Programmers should carefully collect and choose the data on which they train their algorithms
  • Humans offer a unique perspective and emotional input to medicine that machines will never be able to replicate
  • Regulations should be put in place for AI systems used in diagnosis and treatment, both to catch faulty output and to enforce the suggestions above

Ultimately, these risks and ethical concerns need to be fully considered by both computing and medical professionals in order to provide the safest and most successful care possible for all patients.


References:

Amazon’s Facial Recognition Tech is Racially Bias, Study Says. (2019, January 30). Retrieved April 3, 2019, from Security Sales & Integration website: https://www.securitysales.com/news/amazon-facial-recognition-racially-bias/

Anderson, M., & Anderson, S. L. (2019). How Should AI Be Developed, Validated, and Implemented in Patient Care? AMA Journal of Ethics, 21(2), 125–130. https://doi.org/10.1001/amajethics.2019.125.

Callahan, D., & Jennings, B. (2002). Ethics and Public Health: Forging a Strong Relationship. American Journal of Public Health, 92(2), 169–176. https://doi.org/10.2105/AJPH.92.2.169

Case: Medical Implant Risk Analysis. (2018, July 10). Retrieved April 3, 2019, from ACM Ethics website: https://ethics.acm.org/code-of-ethics/using-the-code/case-medical-implant-risk-analysis/

Code of Ethics. (n.d.). Retrieved April 3, 2019, from https://www.acm.org/code-of-ethics

Doctor Robot will see you now. (n.d.). Retrieved April 3, 2019, from TimesLIVE website: https://www.timeslive.co.za/news/sci-tech/2017-06-19-doctor-robot-will-see-you-now/

Hardesty, L. (2018, February 11). Study finds gender and skin-type bias in commercial artificial-intelligence systems. Retrieved April 3, 2019, from MIT News website: http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Ijaz, R. (2018, April 18). How AI Makes Precision Medicine More Accurate. Retrieved April 28, 2019, from Health Works Collective website: https://www.healthworkscollective.com/how-ai-makes-precision-medicine-more-accurate/

Khullar, D. (2019, February 2). Opinion | A.I. Could Worsen Health Disparities. The New York Times. Retrieved from https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html

Luxton, D. D. (2019). Should Watson Be Consulted for a Second Opinion? AMA Journal of Ethics, 21(2), 131–137. https://doi.org/10.1001/amajethics.2019.131.

Ramesh, A. N., Kambhampati, C., Monson, J. R. T., & Drew, P. J. (2004). Artificial intelligence in medicine. Annals of the Royal College of Surgeons of England, 86(5), 334–338. https://doi.org/10.1308/147870804290

Rigby, M. J. (2019). Ethical Dimensions of Using Artificial Intelligence in Health Care. AMA Journal of Ethics, 21(2), 121–124. https://doi.org/10.1001/amajethics.2019.121.
