Blog: Why Does Artificial Intelligence Lie To Human Patients?
The British Broadcasting Corporation (BBC) has a long-running documentary television series called “Horizon”, which explores topics related to science and philosophy. In its latest episode, which debuted on 1 November, the 60-minute program tells the story of the emergence of artificial intelligence and its impact on healthcare. However, instead of showcasing the possibilities new technologies have created in a field as old as human civilization, it felt more like a quarrel over a mobile application.
Babylon Health, the mobile application featured in the program, functions like WebMD and claims to offer a “General Practitioner (GP) at hand”. By telling the artificial intelligence (AI) driven system the symptoms they are experiencing at the moment, users are provided with triage information. Should the need arise, the app has also partnered with an actual clinic located in west London, where users can visit an actual GP.
The TV program accused the mobile application of confusing users by providing medical advice that mimics a diagnosis, of exploiting young adults’ desire for fast service, and of taking patients away from registered GPs. Safety concerns aside, the program also questioned the ulterior motives of the company, as its founder is an advocate of privatizing the UK’s National Health Service (NHS).
Algorithms are designed by people and we are biased
The UK prides itself on the NHS, which provides free or affordable healthcare and treatments to those in need. Having just celebrated its 70th birthday earlier this year, the NHS has been strongly criticized for its declining efficiency and burnt-out medical professionals. Some, like Dr. Ali Parsa, the man behind Babylon Health, believe privatization will prevent its collapse, but this may mark an end to free services.
The program used Babylon Health as its only example with which to question the reliability of the entire field of AI medicine, citing the lack of systematic, peer-reviewed studies proving that AI is safe. Babylon Health, on the other hand, rebuked the criticism by feeding its AI outdated medical exam questions and answers to show that it outperformed human doctors on medical proficiency tests, a claim that became a laughing stock among medical professionals at the Royal College of General Practitioners.
Both Horizon and Babylon Health demonstrated a very poor way of portraying new technologies. Meredith Broussard, data journalist and assistant professor at New York University, writes in her new book “Artificial Unintelligence” that “algorithms are designed by people and we are biased”. New technologies like AI are neutral tools, and if they ever lie to us or overpromise, it is likely because the humans behind them want to lie or overpromise (i.e., Marvin is paranoid because his creator, Douglas Adams, made him that way).
Horizon should be more holistic and not mislead its viewers by misrepresenting AI through a single, egocentric example. Similarly, thinking that a mobile app can decentralize a 70-year-old system is laughable, if not more dystopian than the prospect that AI will one day replace humans and take over the world.
Being overly critical doesn’t improve humans — it doesn’t improve AI either!
Nobody thought of hit-and-runs or drunk driving when the automobile was first introduced, and no one questioned personal data and related breaches when we were first exposed to social media. Being overcritical of innovations is not the way to improve them, because we simply cannot foresee everything.
Even checkpoint inhibitors, the revolutionary immunotherapy that earned James Allison and Tasuku Honjo the Nobel Prize in Physiology or Medicine this year, have their side effects and uncertainties. Indeed, AI is nowhere near that kind of achievement, since it is still subject to many years of scientific study and clinical trials. Denouncing AI will not make it better; what we should focus on now is creating as many chances as possible for AI to assist us in performing tasks more diligently and efficiently.
Photo credit to: Rosy
*This article was originally published on AIMed Blog on 7 November 2018.