AI in healthcare raises need for guidelines to protect patients

Over the past few years, several studies have raised potential issues with using AI in healthcare settings.

A 2019 analysis published in the journal Science found that a commercial algorithm from Optum, used by a health system to select patients for a care management program, assigned less healthy Black patients the same risk level as white patients, meaning Black patients were less frequently identified as needing extra care.

An Optum spokesperson said in a statement that the algorithm is not racially biased and that the researchers mischaracterized a cost prediction algorithm based on one health system’s incorrect, unrecommended use of the tool.

“The algorithm is designed to predict future costs that individual patients may incur based on past healthcare experiences and does not result in racial bias when used for that purpose—a fact with which the study authors agreed,” the spokesperson said.
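The mechanism behind the study's finding is worth unpacking: because the algorithm predicted future costs rather than future care needs, and historical spending on Black patients tends to be lower than on white patients with the same conditions, ranking patients by predicted cost can understate how sick some patients are. The following Python sketch illustrates that dynamic with entirely synthetic, hypothetical numbers; it is not Optum's model or the study's data.

```python
# A minimal, synthetic sketch of the proxy-label problem the Science study
# describes: ranking patients by predicted *cost* instead of predicted *need*.
# All numbers here are illustrative assumptions, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with an identical underlying illness burden.
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
illness = rng.poisson(3, n)        # true care needs, same distribution

# Assume historical spending is lower for group B at the same illness level
# (e.g., due to unequal access), so cost is a biased proxy for need.
cost = illness * np.where(group == 1, 80, 100) + rng.normal(0, 20, n)

# "Risk score" = predicted cost; select the top 10% for care management.
threshold = np.quantile(cost, 0.90)
selected = cost >= threshold

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: mean illness among selected = "
          f"{illness[mask & selected].mean():.2f}, "
          f"selection rate = {selected[mask].mean():.1%}")
```

In this toy setup, patients from the group with lower historical spending must be measurably sicker to cross the same cost threshold, so equally ill patients in that group are selected less often, which is the pattern the Science authors reported.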

In 2021, researchers at the University of Michigan Medical School published a peer-reviewed study that found a widely used sepsis prediction model from electronic health record giant Epic Systems failed to identify 67% of people who had sepsis, a sensitivity of just 33%. The model also increased sepsis alerts by 43%, even though the hospital’s overall patient population decreased by 35% in the early days of the pandemic. Epic did not make the team that worked on the AI sepsis model available for an interview.
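For context on what a figure like "failed to identify 67%" measures: it is the false-negative share of a standard confusion matrix, the complement of sensitivity. The short sketch below shows that arithmetic with invented counts; it does not use Epic's model or the Michigan study's data.

```python
# Illustrative confusion-matrix arithmetic behind figures like "missed 67% of
# sepsis cases": sensitivity = TP / (TP + FN). The counts below are invented
# for demonstration and are not the study's data.
def alert_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)    # share of true sepsis cases caught
    ppv = tp / (tp + fp)            # share of alerts that are correct
    alert_rate = (tp + fp) / (tp + fn + fp + tn)
    return {"sensitivity": sensitivity, "ppv": ppv, "alert_rate": alert_rate}

# Hypothetical hospital cohort: 300 sepsis cases, of which 99 trigger alerts.
m = alert_metrics(tp=99, fn=201, fp=1400, tn=18300)
print(f"sensitivity = {m['sensitivity']:.0%}")   # 33%, i.e. 67% missed
print(f"PPV = {m['ppv']:.0%}, alert rate = {m['alert_rate']:.1%}")
```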

The White House Office of Science and Technology Policy cited both instances, without naming the companies, in a report accompanying its “AI Bill of Rights” blueprint, which is meant as guidance for multiple industries.

While the framework does not have an enforcement mechanism, it includes five rights to which the public should be entitled: Algorithms should be safe and effective, be nondiscriminatory, be fully transparent, protect the privacy of those they affect and allow for alternatives, opt-outs and feedback.

Jeff Cutler, chief commercial officer at Ada Health, a healthcare AI company offering symptom checking for patients, said his organization follows the five principles when developing and deploying algorithms.

“It’s really important that the industry takes the ‘Bill of Rights’ very seriously,” Cutler said. “It’s important that users and enterprises embracing these platforms are asking the right questions around clinical efficacy, accuracy, quality and safety. And it’s important that we’re being transparent with users.”

But experts say real regulation is needed to make a difference. Although the Food and Drug Administration is tasked with overseeing software as a medical device, including AI, they say the agency has struggled to keep pace with the growing number of algorithms developed for clinical use. Congress could step in to define AI in healthcare and outline mandatory standards for health systems, developers and users.

“There’s going to have to be enforcement and oversight in order to ensure that algorithms are being developed with discrimination, bias and privacy in mind,” said Linda Malek, chair of the healthcare practice at law firm Moses & Singer.

Dr. John Halamka, president of Mayo Clinic Platform, a portfolio of businesses from the Rochester, Minnesota-based health system focused on integrating new technologies, including AI, into healthcare, said more policies may be on the way.

The Office of the National Coordinator for Health Information Technology is expected to coordinate much of the regulatory guidance from various government agencies, including the FDA, the Centers for Disease Control and Prevention, the National Institutes of Health and federal agencies outside of HHS, said Halamka, who has advised ONC and the federal government on numerous healthcare technology initiatives, but is not directly involved with oversight.

Halamka expects significant regulatory and subregulatory guidance within the next two years.
