AnsibleHealth’s Dr. Tiffany Kung on ChatGPT’s healthcare potential

Modern Healthcare reporters take a deep dive with leaders in the industry who are standing out and making a difference in their organization or their field. We hear from Dr. Tiffany Kung, researcher at virtual pulmonary rehab treatment center AnsibleHealth, about how the ChatGPT model—which uses natural language processing to generate text responses based on available data—could eventually be used in the healthcare industry.

What are some ways that health systems could use artificial intelligence technology such as ChatGPT to augment care and provider operations?

At AnsibleHealth, we’re already using ChatGPT every day. We’ve incorporated it into our electronic health record so our providers can use ChatGPT to communicate better with patients, and we’re using it to correspond with insurers, doing things like rewriting an appeal letter if [payers have] denied a claim. All our providers have undergone training to make sure that everything’s deidentified, so it’s HIPAA-compliant.

ChatGPT is most commonly being used right now to communicate with insurance [companies] and to do a lot of administrative work, since physicians now spend so much of their time dealing with things that are not direct patient care: paperwork and billing.
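The interview doesn’t go into the tooling behind that workflow, but as a rough illustration, a deidentify-then-prompt step of the kind Kung describes might look like the Python sketch below. The redaction patterns, the `deidentify` helper and the stubbed `send_to_llm` call are all hypothetical stand-ins, not anything AnsibleHealth has described.

```python
import re

# Hypothetical redaction patterns covering a few obvious identifiers.
# A real HIPAA de-identification pipeline would need to address all 18
# Safe Harbor identifier categories, typically with dedicated tooling
# and human review rather than a handful of regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:#\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def deidentify(text: str) -> str:
    """Strip obvious identifiers before the text leaves internal systems."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def send_to_llm(prompt: str) -> str:
    """Stub standing in for a call to a chat-model API endpoint."""
    return f"[model response to a {len(prompt)}-character prompt]"

def draft_appeal_letter(denial_note: str) -> str:
    """De-identify a claim-denial note, then ask the model for an appeal."""
    prompt = (
        "Rewrite the following claim-denial summary as a formal appeal "
        "letter to the payer:\n\n" + deidentify(denial_note)
    )
    return send_to_llm(prompt)

note = "Claim denied for J. Doe, MRN 448213, DOB 04/12/1957, ph 555-201-3344."
print(deidentify(note))
print(draft_appeal_letter(note))
```

The point of the ordering is that only the placeholder-scrubbed text ever reaches the model, which is what makes the training Kung mentions (making sure everything is deidentified first) the load-bearing part of the workflow.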

In terms of the chatbot’s potential shortcomings, where might providers run into issues with ChatGPT? In what ways is this technology not fully equipped for use in the healthcare sector?

ChatGPT and most other existing AI tools are not HIPAA-compliant at the moment. That means they can’t handle any patient data that’s sensitive or confidential. That’s really one of the technology’s big shortcomings. For us to incorporate ChatGPT and other AI more into our everyday use, we have to do a lot of rigorous testing. Just like any novel drug or new technology, we need to test its safety, usability and efficacy.

You recently led a study in which researchers had ChatGPT take the U.S. Medical Licensing Exam. How is the chatbot’s performance on that exam an indicator of its possible effectiveness in medical education?

We were really excited to see that ChatGPT was capable of passing the U.S. Medical Licensing Exam. It [scored] about 60%, which was the passing threshold. That’s only around the 1st or 2nd percentile of performance on this exam.

So by no means is ChatGPT capable of being your physician or being a good doctor right now. There’s a lot of work to be done. Everything is still very early, but we’re really excited about the potential.

What do you think some of that potential might amount to?

There are a lot of different applications. It’s still very early. At AnsibleHealth, we take care of patients who are extraordinarily sick: They have respiratory diseases like chronic obstructive pulmonary disease, and they also have other comorbidities like cardiac conditions and kidney conditions. A lot of the work we do is coordinating care among the many doctors and specialists these patients need. We help improve communication among the patients, cardiologists, nephrologists and pulmonologists. That’s something that AI can do: improve care coordination.

Healthcare leaders have several concerns about the chatbot’s inaccuracies, which could have detrimental effects on patient care. What are your impressions of the healthcare industry’s perception of this tool?

As a whole, healthcare sets a really high bar for anything used in patient care. That bar is so high because we’re dealing with patients’ lives, so anything we use has to be as safe as possible.

Additionally, a lot of physicians are cautious when dealing with new technology or new drugs. Every day in the hospital, we communicate with each other via pagers: pretty antiquated technology, but it shows how wary healthcare can be of new technologies. We like the things we’re comfortable with.
