The Dimensions of Ethical AI Changing the Face of Healthcare


By Alim Bhatia

— A Healthcare Dystopia with Current Forms of AI

Many may believe that… “The AI is going to take over! It is going to steal all of our jobs and replace all of us!”

Not going to sugarcoat this: that may be true to a certain extent when discussing AI’s implications within the medical space, but there is still a large variety of ethical drawbacks that need to be considered first.

“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” — Sam Altman

How about we make the most of our time, whilst we are still at the top of the food chain 😉

Medical Artificial Intelligence — WHY and WHY NOT

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless.

With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists.

In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Even with these advancements and this stronger integration in the healthcare space, there is still an ethical dimension to consider, tackling issues of privacy, security, and special regulatory provisions. And as this technology continues to progress, many get caught up in highlighting what the future of healthcare is going to look like, leaving the medical community ill-informed about the ethical complexities that budding AI technology can introduce.

Ignoring its detrimental impact.


Dystopian Future of AI — Challenges of Ethical Artificial Intelligence

Estimates of the impact of AI on the wider global economy vary wildly, with a recent report suggesting a 14% effect on global gross domestic product by 2030, half of which would come from productivity improvements [2]. These predictions create a political appetite for the rapid development of the AI industry.

Biased Artificial Intelligence starts with faulty data sets

Diversity matters not only for reasons of equality; it is also essential to counteract potential biases in data and in human judgment.

It’s well established that clinicians can be influenced by subconscious bias. Often these biases are so deep-set that we are blind to them. Biases in health data are also common and can be life-threatening when not addressed properly.

For example, heart attacks are more common in men but are also the leading cause of death in women — at least in Western society. Despite this, it is more common for heart disease in women to be overlooked by doctors, unrecognized, and therefore untreated. This isn’t just because it’s considered less likely, but also because symptomatically it often manifests differently in women than in men.

Similarly, AI in healthcare is only as good as the people and data it learns from. This means a lack of diversity in the development of AI models can drastically reduce their effectiveness: AI trained on biased data will simply amplify that bias. For example, IBM Watson’s recommendations for cancer treatments have been based on training by just a small number of physicians at one medical institution. This creates biased recommendations based not on official guidelines but on the experience and opinions of a few, probably quite similar, people.
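To see how quickly skewed training data turns into skewed care, here is a minimal sketch in Python (scikit-learn on synthetic data; the groups, features, and numbers are illustrative, not drawn from any real study). It mirrors the heart-attack example above: the disease signal shows up in a different feature for each group, but one group dominates the training set.

```python
# A minimal sketch: a classifier trained mostly on one group learns
# that group's symptom pattern and quietly fails on the other group.
# All cohorts, features, and numbers here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, signal_feature):
    """Synthetic cohort: the disease signal lives in a different
    feature column depending on the group."""
    y = rng.integers(0, 2, n)              # 1 = disease present
    X = rng.normal(size=(n, 2))
    X[:, signal_feature] += 2.0 * y        # group-specific presentation
    return X, y

# Group A makes up 95% of the training data; group B's disease
# manifests through a different feature, much as symptoms can
# present differently across populations.
X_a, y_a = make_patients(950, signal_feature=0)
X_b, y_b = make_patients(50, signal_feature=1)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, balanced test cohorts from each group.
for name, sig in [("group A", 0), ("group B", 1)]:
    X_test, y_test = make_patients(2000, signal_feature=sig)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On this toy data the model scores well on group A and close to a coin flip on group B, exactly the kind of silent failure that biased training data produces.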

Figure 1 summarises expected trends in ML research in medicine over short to longer time horizons, focusing on the further development of both reactive systems, trained to classify patients with a measurable degree of accuracy, and proactive systems. Because these ML algorithms will be tackling the diagnosis and treatment of large, prevalent areas of the medical field, their decision making must be unbiased to ensure safety.

Who’s at Fault?

Complications during treatment, diagnosis, or classification of patient diseases and illnesses highlight one of the most discussed topics in the ethics of Artificial Intelligence: when something goes wrong, who is responsible?

As a whole, AI is (at a high level) just a form of computational statistics. Its application within the healthcare space focuses on producing outputs from given data sets, making the best possible suggestion or decision. The way we measure accuracy within the medical industry puts Receiver Operating Characteristic (ROC) curves at the focal point. An ROC curve plots a system’s true positive rate against its false positive rate, making it easy to compare which systems perform best at specific tasks.

The better a system performs, the further up and toward the left the curve goes. A perfect, 100% accurate system would not even be a curve; it would be a right angle in the top left-hand corner of the graph.
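For readers who want to see this for themselves, here is a minimal sketch (Python with scikit-learn and matplotlib, using synthetic scores that are purely illustrative and not tied to any real diagnostic system):

```python
# A minimal sketch of an ROC curve: the better the classifier,
# the closer its curve hugs the top-left corner of the plot.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# 1 = disease present. A useful diagnostic system assigns higher
# scores to diseased patients than to healthy ones, with some overlap.
y_true = rng.integers(0, 2, 500)
scores = 1.5 * y_true + rng.normal(size=500)   # noisy but informative

fpr, tpr, _ = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)

plt.plot(fpr, tpr, label=f"toy classifier (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], "--", label="coin flip (AUC = 0.50)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```

A perfect system would jump straight to the top-left corner (an area under the curve of 1.0), which is the right angle described above; a useless one would sit on the diagonal.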

Because these AI systems can achieve higher accuracy rates in both diagnosis and treatment, we have come to expect higher levels of performance in patient care wherever Artificial Intelligence is utilized.

But how do we ensure we are making the best possible decision when we are staking our lives on the functionality of these algorithms?

Working to ensure and outline the best decision-making model for these ML algorithms relies heavily on two strong assumptions: AI must have access to population-wide electronic health records (EHRs), and these EHRs must be interpretable by AI.

Or we can stop getting sick.

But that’s a topic for another day 😉.
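Still, it is worth pausing on that second assumption. Real EHRs are full of free text, scanned documents, and inconsistent coding, so “interpretable by AI” usually means flattening a structured record into the numeric features a model can consume. Here is a minimal sketch of that step in Python, loosely inspired by FHIR-style JSON; the record layout, field names, and codes are hypothetical, not an actual FHIR schema:

```python
# A minimal sketch: flattening a (hypothetical, FHIR-inspired) patient
# record into the feature dictionary an ML model could consume.
import json

raw_record = json.loads("""
{
  "patient": {"age": 62, "sex": "female"},
  "observations": [
    {"code": "blood_pressure_systolic", "value": 148},
    {"code": "ldl_cholesterol", "value": 162}
  ],
  "conditions": ["type_2_diabetes"]
}
""")

def to_features(record):
    """Turn a nested record into flat, numeric model inputs."""
    features = {
        "age": record["patient"]["age"],
        "is_female": int(record["patient"]["sex"] == "female"),
        "has_type_2_diabetes": int("type_2_diabetes" in record["conditions"]),
    }
    for obs in record["observations"]:
        features[obs["code"]] = obs["value"]
    return features

print(to_features(raw_record))
# {'age': 62, 'is_female': 1, 'has_type_2_diabetes': 1,
#  'blood_pressure_systolic': 148, 'ldl_cholesterol': 162}
```

The hard part in practice is not this flattening but getting records this clean and consistent across a whole population in the first place, which is exactly what makes the assumption a strong one.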

So What You’re Saying Is… “We’re Screwed”

Well, not quite.

There are many who realize the importance of addressing this problem and are currently working on solutions focused on the dimensions of ethical AI.

Companies like Winterlight Labs, a Toronto-based startup, are working on ways to tackle biased data sets, building auditory tests for neurological diseases like Alzheimer’s, Parkinson’s, and multiple sclerosis.

They are working to remove bias from all forms of diagnosis, avoiding the characterization not only by gender or race but also by other traits, like language, skin type, genealogy, or lifestyle, that flawed data sets can introduce.

The company gathers large numbers of data sets on previous patients to feed into its AI algorithm, aiming for the best possible characterization of each patient and aiding the process of rehabilitation. What is most unique about the process is that Winterlight Labs places a heavy focus on collecting data from native English speakers to conduct directed treatment.

It’s not just what Winterlight Labs is doing to tackle the complexities of ethical AI in healthcare; the company is addressing a serious problem and setting a precedent for other companies to continue innovating and finding new ways to completely revolutionize the healthcare space.

