In recent years, artificial intelligence has rapidly become the chief topic of conversation among healthcare executives, vendors, and IT developers.

Experts at organizations across the care continuum, from government agencies to leading hospitals and health systems, have considered the many applications of the technology and recognize the promise it holds for the future of care delivery.

Countless studies have shown that AI can accelerate disease diagnosis, predict hospital readmissions, and accurately detect cancer in medical images. These capabilities could mean better, more precise treatments, improved patient outcomes, and ultimately, lower care costs.

However, with all the hype surrounding advanced analytics technologies, it can be easy to forget about the problems that could come with them. Issues like biased algorithms, patient safety concerns, and threats to data privacy have all clouded the otherwise clear vision of AI in healthcare, and could limit its role in the industry.

At Harvard Medical School’s Precision Medicine Annual Conference in Boston last week, panelists discussed how to effectively use AI to accelerate precision medicine. When talking about the potential risks of healthcare AI, one speaker made an unsettling comparison between the technology and a certain dangerous mineral. 

“I think of machine learning kind of as asbestos,” said Jonathan Zittrain, a professor at Harvard Law School, as reported by STAT News.

“It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”

Could AI truly be “asbestos” in healthcare? What are some of the major risks and barriers to using AI in care delivery, and how can the healthcare system ensure these factors don’t result in major problems for patients and providers?

Biased data, biased algorithms

For AI tools to become part of routine clinical care, clinicians must have complete faith that the algorithms behind them are accurate, reliable, and objective. However, the data used to train algorithms is sometimes biased, or the algorithms themselves may be designed in ways that skew results, which could exacerbate care disparities rather than close them.

In a perspective piece recently published in the New England Journal of Medicine (NEJM), researchers at the Stanford University School of Medicine noted that bias can creep into health data through human bias, bias introduced by design, and bias in the ways healthcare systems use the data.

“You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests,” said David Magnus, director of the Stanford Center for Biomedical Ethics. 

“What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”
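
To see how that kind of skew can arise, consider a minimal, purely illustrative sketch in Python. All of the data, the insurance-status feature, and the naive model are invented for this example; the point is simply that a model fit to historical decisions that partly reflected ability to pay will reproduce that pattern.

```python
# Hypothetical illustration: if historical treatment decisions depended partly on
# insurance status, a model trained on those decisions inherits the same skew.
from collections import defaultdict

# Invented "historical" records: (severity, insured, got_intervention)
history = [
    ("high", True,  1), ("high", True,  1), ("high", False, 1), ("high", False, 0),
    ("low",  True,  1), ("low",  True,  0), ("low",  False, 0), ("low",  False, 0),
]

# A naive model: estimate P(intervention) for each (severity, insured) group.
counts = defaultdict(lambda: [0, 0])  # group -> [interventions, total]
for severity, insured, got in history:
    counts[(severity, insured)][0] += got
    counts[(severity, insured)][1] += 1

def predict(severity, insured):
    got, total = counts[(severity, insured)]
    return got / total

# Two clinically identical patients, different insurance status:
print(predict("high", True))   # 1.0 -> strong recommendation for intervention
print(predict("high", False))  # 0.5 -> same severity, weaker recommendation
```

The two patients at the end are clinically identical, yet the model recommends intervention less strongly for the uninsured one, because that is exactly what the historical data taught it.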

If providers can’t see how an algorithm arrived at a result, they have no way of knowing whether that result is biased, which makes it difficult to fully trust the technology when making clinical decisions.

“There are currently no measures to indicate that a result is biased or how much it might be biased,” Keith Dreyer, DO, PhD, Chief Data Science Officer at Partners Healthcare and Vice Chairman of Radiology at Massachusetts General Hospital, said at the 2018 World Medical Innovation Forum on Artificial Intelligence. 

“We need to explain the dataset these answers came from, how accurate we can expect them to be, where they work and where they don’t work.  When a number comes back, what does it really mean?  What’s the difference between a seven and an eight or a two?”

To keep bias out of artificial intelligence tools, developers and vendors should be transparent about their methodologies, capabilities, limitations, and data sources. 

Guidelines from the Clinical Decision Support (CDS) Coalition state that CDS tools should make their data sources and methodologies clear through comprehensive metadata. This could include commercial, academic, or proprietary databases to supply information for analytics; published materials, such as journal articles or medical society guidelines, to support recommendations; and unpublished proprietary clinical guidelines or research.

“When the source is truly machine learning, the software needs to reveal that source, along with information that will help the user gauge the quality and reliability of the machine learning algorithm,” the Coalition said.

“Through a page in the user interface that can be periodically updated, the developer could explain to the user the extent to which the system has been validated and the historical batting average of the software. That context helps the user understand the reliability of the software in general.”
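
As a rough illustration of what that kind of disclosure might look like in practice, the sketch below shows a hypothetical, machine-readable metadata record for a CDS tool. Every field name and figure here is invented; the point is simply that data sources, validation status, and the software’s “historical batting average” can be captured in a form the user interface can surface and periodically update.

```python
# Hypothetical metadata record of the kind the CDS Coalition guidance describes.
# All names and numbers below are invented for illustration, not from the article.
cds_tool_metadata = {
    "algorithm": "readmission-risk model (machine learning)",
    "data_sources": [
        {"type": "commercial database", "name": "example claims dataset"},      # assumed
        {"type": "published guideline", "citation": "example society guideline"},
    ],
    "training_population": "adults discharged from inpatient care, 2015-2017",  # assumed
    "validation": {
        "last_validated": "2019-01",
        "external_sites": 3,
        "auroc": 0.78,                    # illustrative figure only
        "known_limitations": ["not validated for pediatric patients"],
    },
    "historical_performance": {"predictions_made": 12000, "confirmed_correct": 9400},
}

def batting_average(meta):
    """The software's historical hit rate, as the Coalition's guidance suggests surfacing."""
    perf = meta["historical_performance"]
    return perf["confirmed_correct"] / perf["predictions_made"]

print(f"Historical batting average: {batting_average(cds_tool_metadata):.2f}")
```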

Providers could also work closely with developers to ensure that algorithms don’t lead them to misinterpret data.

The Stanford researchers who authored the NEJM perspective piece described a pilot study in which physicians collaborated with technology designers to create an algorithm that predicts the need for a palliative care consultation. Working together helped ensure that the algorithm addressed questions physicians recognized and that its results were well understood.

Data (in)security

While the ability of AI to evaluate large amounts of data is exciting, patients aren’t convinced that these tools will keep their information private. 

A 2018 survey of 500 patients revealed that while most patients are more comfortable with AI being used in healthcare settings than in banking or retail, the technology still generates trust issues among healthcare consumers.

Just 35 percent said they were confident that the data used for AI is stored securely. Sixty-nine percent of consumers over 40 are concerned that their data is not securely stored, while 58 percent of consumers younger than 40 share the same worry.

These concerns will only become more significant as mHealth data plays a larger role in patient care. Information extracted from wearable devices and patient monitoring tools is expected to play a critical role in powering AI and analytics technologies in the next few years, a recent Frost & Sullivan analysis stated.

Key industry players have launched initiatives to overcome security challenges. Aetna, Ascension, Humana, and Optum recently joined forces to form the Synaptic Health Alliance, a collaborative pilot program that uses blockchain to create a secure, shared dataset among providers.

Although blockchain is still relatively new to the healthcare field, the technology holds important implications for data security and more seamless data exchange. 
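
At its core, the idea is a chain of records in which each new entry carries a cryptographic fingerprint of the one before it, so no single organization can quietly alter the shared history. The Python sketch below is a toy illustration of that hash-chaining idea only; it is not the Synaptic Health Alliance’s actual design, and the records shown are invented.

```python
# Minimal sketch of the hash-chaining idea behind a shared, tamper-evident ledger.
# Toy illustration only; the records and field names are invented.
import hashlib, json, time

def make_block(record, prev_hash):
    """Bundle a record with a pointer (hash) to the previous block."""
    block = {
        "timestamp": time.time(),
        "record": record,            # de-identified or non-PHI data shared across organizations
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"provider_id": "A123", "status": "active"}, prev_hash="0" * 64)]
chain.append(make_block({"provider_id": "B456", "status": "active"}, chain[-1]["hash"]))

# Any later change to an earlier block breaks every subsequent prev_hash link,
# which is what makes the shared dataset tamper-evident across participants.
assert chain[1]["prev_hash"] == chain[0]["hash"]
```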

“HIPAA says to protect the confidentiality, integrity, and availability of data, which leads a lot of organizations to err on the side of caution and say that they aren’t going to share what they have with the community, just in case,” David Houlding, MSc, CISSP, CIPP, Principal Healthcare Program Manager at Microsoft Corporation, told HealthITAnalytics.com in a previous interview.

“There is certainly some logic to that – no one wants to be the subject of the next data breach headline. But that’s where blockchain can come in. It has the potential to solve a large number of the issues that are stunting the deployment of artificial intelligence for healthcare purposes.”

Other organizations have also aimed to improve data security and access. In May 2018, HealthCore, a subsidiary of Anthem, announced that life sciences companies could access its databases to develop analytics capabilities. Researchers can conduct investigations on de-identified, real-world patient data rather than data from a clinical trial setting.

Disruption of the status quo

As AI and other analytics technologies continue to shake up the industry, health system leaders and providers will begin to see their roles changing more and more – a trend that could make stakeholders resistant to adoption. 

Chief information officers (CIOs) are projected to experience a great shift in their role. Ninety-five percent of executive leaders are anticipating or already seeing changes to their responsibilities due to digitalization, according to a 2017 survey from Gartner. Over 80 percent said that innovation and transformation are now a crucial part of their job descriptions.

“The CIO’s role must grow and develop as digital business spreads, and disruptive technologies, including intelligent machines and advanced analytics, reach the masses,” said Andy Rowsell-Jones, vice president and distinguished analyst at Gartner.

“While delivery is still a part of the job, much greater emphasis is being placed on attaining a far broader set of business objectives.”

A 2018 report from Black Book Research mirrored these findings. The report highlighted the need for CIOs to become experts in scaling existing infrastructure and embracing AI and other technologies, rather than in IT purchasing and management.

Changes to everyday responsibilities and skills will also apply to those outside the executive suite. Providers will need to adapt to this new environment, and this means adjusting medical education and practice. 

“The skills required of practicing physicians will increasingly involve facility in collaborating with and managing AI applications that aggregate vast amounts of data, generate diagnostic and treatment recommendations, and assign confidence ratings to those recommendations,” Steven A. Wartman, MD, PhD, and C. Donald Combs, PhD, wrote in a recent article for the AMA Journal of Ethics.

“The current learning environment, with its excessive information-retention demands, has proven to be toxic and in need of complete overhaul. The speed of technological innovation means that the skills of some faculty members are outdated compared to those of their students.”

To achieve this, the authors recommended that medical education focus on knowledge capture rather than knowledge retention; collaboration with and management of AI algorithms; and a better understanding of probabilities and how to apply them in clinical decision-making. 
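
To give a concrete sense of what applying probabilities to an AI recommendation can involve, here is a small, purely illustrative calculation. The sensitivity, specificity, and prevalence figures are invented; the point is that the same confidence rating from a tool implies very different post-test probabilities depending on how common the condition is in the population being screened.

```python
# Illustrative example of probability reasoning around an AI flag.
# All numbers are invented for illustration.
def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive result) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A "95% accurate" flag means very different things at different prevalences:
print(post_test_probability(prevalence=0.20, sensitivity=0.95, specificity=0.95))  # ~0.83
print(post_test_probability(prevalence=0.01, sensitivity=0.95, specificity=0.95))  # ~0.16
```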

These changes will require significant effort on the part of faculty, students, and others, the authors said, but they will be necessary to drive the use of AI forward. 

There are many potential benefits to leveraging AI in healthcare, but stakeholders can’t afford to overlook the possible risks and barriers that come with the technology.

To avoid AI becoming the asbestos of healthcare, providers, payers, executives, and other major industry players will need to address the potential issues of the technology and find innovative ways to overcome these challenges.
