Blog: Will Computers Replace Doctors?


Is there a future where a machine scans you, diagnoses you, and prescribes you medicine, all without a doctor being present?

It is nearly a certainty that most of the work humans do now will at some point be done by computers. But some experts in artificial intelligence (AI) think that work involving the higher cognitive functions (law, pedagogy, and of course medicine) is forever beyond computers. That the human mind dances in fields of noonday brightness where dull computers can never tread: digital machines may excel in narrow tasks for which there are clear parameters, but they will never be fit to debate politics, theorize physics, or write a novel. Other experts, however, think that computers will have cognitive abilities far surpassing those of human beings. That the fields of noonday brightness humans occupy are but desiccated grasslands of scant resources and diversity, where any bold color is blanched by a scorching sun — except on rare flowers in secret oases. They declare, “Elysium forever eludes us, but not our artificial progeny!”

On one side of the AI debate, then, stand the optimists, who think computers will eventually be self-conscious ideal reasoners, and on the other side the pessimists, who think computers will never be more than unconsciously operating machines.[1] Without trying to settle the debate, we will explore both views and the (extensive) consequences they entail for medicine.

A little-known fact about the field of AI is that there’s no agreement on the definition of ‘artificial intelligence,’ let alone a single research program to achieve it.

Nevertheless, we all have a pretty good folk notion of intelligence and can detect it quite easily in others, even in the course of the briefest of conversations.

Alan Turing agreed. He famously devised a test for machine intelligence: the Imitation Game. In the game, a human “judge” carries on a text-only conversation with two “contestants,” one machine and one human. The judge must distinguish human from machine solely based on these conversations. If the machine fools the judge into thinking it’s the human, and this result is repeatable, then that’s all we need, Turing thought, to call the machine intelligent.

This sort of natural language test of machine intelligence has come to be known as a Turing test (TT). There’s much debate over whether Turing’s Imitation Game is a good test of machine intelligence, so I’ll bypass the issue by employing the notion of a rigorous Turing test (RTT): a test such that, if a machine passes it, there’s as much evidence for believing the machine is intelligent as there is for believing one’s spouse is intelligent.[2] Following the traditional nomenclature, a machine with actual intelligence is ‘Strong AI,’ and a machine with simulated intelligence is ‘Weak AI.’ If a machine passes RTT, then it is exceedingly likely that the machine is Strong AI.

The AI pessimist may object:

“Passing a Turing test, no matter how rigorous, does not necessarily indicate intelligence. In fact, there are thought experiments which forcefully demonstrate that a computer cannot be intelligent whether or not it passes a Turing test. The most celebrated and discussed of these thought experiments is John Searle’s Chinese Room:

Imagine a monolingual English speaker in a sealed room with a desk. On the desk is a set of instructions written in English. Surrounding him are bins of Chinese characters. Through a slot in the wall, Chinese linguists insert questions written in Chinese (the input). The man then consults his instructions (the program) to determine which characters to retrieve from the bins (the database) and the order in which he must affix them to a sheet of paper. He then slides that sheet (the output) back through the slot to the linguists outside, who don’t know the identity of the room’s occupant. After many and varied questions all receive appropriate answers, the linguists are sure the room’s occupant is a native Chinese speaker. But he isn’t. The man doesn’t understand a lick of Chinese.[3]

See — Searle shows formal symbol manipulation (syntax) isn’t sufficient for understanding (semantics) even though it’s sufficient to pass TT. The man in the room would pass (native Chinese speaker) TT even though all the questions and answers are entirely unintelligible to him. Indeed, he doesn’t even know they are questions and answers as opposed to a series of exclamations or meaningless strings of characters. But a computer is just like the Chinese Room: it operates syntactically, according to a program. A computer understands what it is saying and being asked, then, just as much as the man in the room understands Chinese — zero. But it’s clearly ridiculous to ascribe intelligence to something with zero understanding. Therefore, if Strong AI means an actually intelligent machine, then TT, and by extension RTT, is no good for revealing Strong AI.”
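To make the pessimist’s picture concrete, here is a toy sketch in Python of a purely syntactic responder; the rulebook entries are invented for illustration. Like the man in the room, the program matches input symbols to output symbols by rote lookup, and nothing in it represents what any string means.

    # A toy "Chinese Room": input symbols are matched to output symbols
    # by rote lookup. The rulebook entries are invented for illustration.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def room(question: str) -> str:
        # Consult the instructions and return the matching characters,
        # just as the room's occupant does.
        return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # -> 我很好，谢谢。

Whether the rulebook holds two entries or two billion, the objection runs, the operation is the same: syntax in, syntax out, understanding nowhere.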

But the AI pessimist’s conclusion is too strong.

Even if one grants the Chinese Room demonstrates machines cannot have true understanding and therefore actual intelligence, the further conclusion the objector draws — that RTT is not a good test of actual intelligence — is unwarranted. If an alien were to pass RTT, one wouldn’t hesitate to ascribe intelligence to it. Sure, it’s logically possible the alien is a blind automaton, just as it’s logically possible all your fellow human beings are blind automata (solipsism). But to believe so is contrary to all empirical evidence, solid inductive principles, and foundational assumptions about how the world works.[4] When confronted with equal evidence, it’s only rational to draw equal conclusions. This means the Chinese Room, if successful, does not invalidate RTT but rather gives us reason to believe Strong AI cannot be realized.

But is the Chinese Room successful?

Without concluding either way, I simply point out that the Chinese Room is not definitively a good analog of all AI. Currently, much of the research done in AI is based in connectionism, the view that intelligence is a function of connections between neurons, as opposed to symbolism, the view that intelligence is a function of symbolic processing. Hence the explosion in artificial neural networks, or computational systems inspired by the brain’s neuronal structure. Instead of giving an output according to a set of rules, neural nets weigh different inputs to get the correct output. The activation function determines the input-output relation. The function most analogous to a neuron firing is the step function, where if the weighted sum of the inputs reaches a certain threshold, the “neurons” (called ‘units’) “fire” to give the output.[5] If a net gives the wrong output, it adjusts its weights to correct for the error (which is how it “learns”). So, unlike the Chinese Room, at no point does a neural net represent an external state of affairs S symbolically; rather, it represents S as a function of all inputs from S multiplied by their respective weights.[6]
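As a minimal sketch of these ideas (the code below is my own illustration, not drawn from any particular AI system), here is a single unit with a step activation, trained with the classic perceptron rule to learn logical AND:

    # A single threshold "unit" with a step activation, trained with the
    # classic perceptron rule. All names and data here are illustrative.

    def step(x, threshold=0.0):
        # The unit "fires" (outputs 1) only if the weighted sum of its
        # inputs reaches the threshold, analogous to a neuron firing.
        return 1 if x >= threshold else 0

    def predict(weights, bias, inputs):
        # The output is a function of all inputs multiplied by their
        # respective weights (plus a bias term).
        return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

    def train(samples, epochs=20, lr=0.1):
        # On a wrong output, nudge the weights toward the correct answer.
        # This adjustment is all the "learning" there is.
        weights, bias = [0.0] * len(samples[0][0]), 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - predict(weights, bias, inputs)
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn the logical AND function from examples.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(data)
    print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]

Note what the trained net contains: no symbol anywhere stands for ‘AND’; the behavior lives entirely in the learned weights.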

Even if neurocomputation is disanalogous to the Chinese Room, there’s no clear path to Strong AI. For all their ingenuity, neural networks are still just formal systems, and nobody has any idea how to get consciousness from computational formalisms.

Strong AI remains merely an inchoate idea.

Results in the field confirm as much. There’s an annual Turing test competition, the Loebner Prize, and after 27 years, no one has claimed it. At least one of the present authors chatted a little with Mitsuku, the best-performing bot for three years running. It didn’t take long to reveal its inhuman nature. The most telling moment: when asked, “What is a moral compass?” Mitsuku replied, “Compass = a navigational instrument. It has a needle that always points north.”

The truth is that Mitsuku is not even close to passing TT, even though it is specifically designed for the task. In fact, Strong AI, and even general-intelligence Weak AI, is hardly being researched anymore due to the seemingly insurmountable difficulties involved. There’s no doubt that if Strong AI or a near functional equivalent were created, it could replace doctors. But for now, this scenario is far-fetched.

Instead, nearly all AI today is designed to master specific tasks.

For example, facial recognition, chess, or Jeopardy! This is called ‘Narrow AI.’ A species of Weak AI (not a very good name, as it’s logically possible for Weak AI to be functionally equivalent to Strong AI), Narrow AI is a machine that replicates a fixed set of human-level intelligent behaviors without receiving commands but cannot pass RTT. It comes with no pretensions to consciousness or general intelligence.

Unlike its stronger, but speculative, brethren, Narrow AI is here, now, and fast making inroads into all kinds of areas once the sole preserve of human intellect, including medicine. Current evidence suggests AI is quite adept at analyzing medical images, and AI is even assisting in surgeries. There are still challenges ahead in developing a full-fledged medical AI, especially understanding why the AI reaches any given medical conclusion and ensuring it properly interprets the medical literature. But the future is not too distant when AI will be able to diagnose patients and pinpoint treatments with greater accuracy than physicians.

Does this spell doom for doctors?

Not given the right understanding of ‘doctor.’ A doctor is more than someone who makes diagnoses and prescribes treatments. Minimally, a doctor is a medical expert who regularly diagnoses, treats, or guides patients relative to actual health disorders.[7] Eventually, AI will largely take over diagnosis and prescription; the doctor will simply verify the AI’s recommendation and then guide the patient accordingly. But as long as a medical expert is guiding patients, she is a doctor.

One shouldn’t think this will limit the role of doctors, even vis-à-vis their patients. Doctors will simply transfer their energies to guiding patients more thoroughly, positively promoting patients’ health, conducting medical research, and so on. Ultimately, this shift in focus will create a better medical experience for both doctors and patients. Patients will finally get what they want, an emphasis on bedside manner and fewer misdiagnoses, and doctors will be able to increase their efforts on the frontiers of medicine. Patients will be better served by doctors, and doctors will better practice evidence-based medicine.

When all is said and done, medicine will advance, and humanity will exult, but perhaps first breathe a collective sigh of relief when they realize — the computers are for good, not harm.

[1] The terms ‘optimist’ and ‘pessimist’ here do not mean views on whether the consequences of AI will be good or bad but on whether AI can have human-like intelligence. Raymond Kurzweil is an example of an AI optimist, and John Searle is an example of an AI pessimist.

[2] Here are a few of many potential ways to increase the difficulty of Turing’s test: (1) substantially increase the amount of time the judges have to converse with the contestants from the five minutes Turing (1950) originally proposed; (2) likewise, increase the rate at which a computer contestant must fool the judges from the 30% Turing proposed to 90%+; (3) specially train the judges to ask questions known to be difficult for computers but which still test intelligence; (4) have the contestants complete other verbal tasks for the judges, e.g. write an essay.

[3] Searle, 2004

[4] Indeed, it may even be that the laws of nature preclude anything passing RTT sans true understanding.

[5] Step functions are seldom used nowadays due to their incompatibility with gradient descent training.

[6] There can be multiple layers of units, such that each layer takes the previous layer’s output as its input. These multilayer networks are referred to as ‘deep neural networks’ for ‘deep learning’: error is computed at the final layer and then propagated backward through all layers, so each layer “learns” from the layers after it (a technique known as back-propagation).

[7] A doctor may or may not conduct research into health disorders or their treatments, teach medicine, or concern herself with positively promoting health, rather than just treating health disorders. Notice the definition does not include credentials or legal licensing. I want to say what a doctor is essentially, not what a doctor is given certain social/legal norms. If an MD were required for one to be a doctor de facto, not just de jure, then trivially an AI couldn’t be a doctor because MDs are not given to machines. The definition has some interesting results: purely cosmetic surgeons are not doctors because they are not diagnosing, treating, or guiding patients relative to actual health disorders; MDs who only conduct research or consult for companies are not doctors because they don’t see patients; nurse practitioners are doctors (given they work with patients). None of these results conflicts with a commonsense understanding of the concept doctor when its social accoutrements are stripped away.
