Microsoft’s Nuance adds ChatGPT successor GPT-4 to EHR

Nuance Communications, a clinical documentation software company owned by Microsoft, is adding GPT-4, OpenAI’s successor to the model behind ChatGPT, to its latest application.

On Monday morning, Nuance introduced its new application, Dragon Ambient eXperience Express. The company said this version of Dragon uses OpenAI’s GPT-4 generative AI capabilities to summarize conversations between clinicians and patients and enter them directly into electronic health record systems.

GPT-4, which was released by OpenAI on March 14, is more advanced than its popular predecessor ChatGPT, said Peter Durlach, Nuance’s chief strategy officer. ChatGPT is available for free online and had been used by more than 100 million people within two months of its launch in late November 2022.

“The amount of data they put into this thing dwarfs ChatGPT,” Durlach said. “Internally, ChatGPT is thought of as a toy compared to this.” 

Microsoft has invested billions of dollars in OpenAI to use ChatGPT and OpenAI’s other technologies in its business applications, such as Microsoft Word and Excel. Nuance, which Microsoft bought in 2022 for $19.7 billion, is the latest to have its products powered by OpenAI’s technology. Dragon Express will be available in private preview this summer to organizations that sign up.

In addition to entering relevant information into the health record, Nuance’s application can also adjust the language and remove parts of the conversation that are not relevant to the care plan.

“It takes out extraneous stuff [and] thinks about what’s relevant,” Durlach said. “It actually changes the language to be what a physician would have dictated versus necessarily what was discussed with the patient in more colloquial language.”

Users must describe what they’re seeing in order for the program to work properly, Durlach said. For example, if a patient presents with a sore throat, the clinician must verbally share specific observations for the program to enter the information.

Durlach said caution and redundancy are needed to ensure AI does not make harmful medical mistakes. 

“We’re taking a very sober look at this thing,” Durlach said. “It’s not as great as the fanboys would say and it’s not as scary as the naysayers. It’s in the middle.”

According to Durlach, Nuance and Microsoft deliberately targeted clinical note-taking as an area to test conversational AI because it requires physician review and does not directly affect real-time patient care.

“If it makes a mistake, it’s not going to kill someone or hurt them, because the physician still reviews the entire note,” Durlach said.

This story first appeared in Digital Health Business & Technology.
