Healthcare systems are turning to artificial intelligence to help with a major challenge for doctors: monitoring a steady flow of patients while responding promptly to messages from people with questions about their treatment.

Doctors in three different health care systems in the US are testing a “generative” AI tool based on ChatGPT that automatically drafts answers to patients’ queries about their symptoms, medications and other medical problems. The goal is to reduce the time doctors spend on written communication, freeing them up to see more patients in person and focus on more medically challenging tasks.

UC San Diego Health and UW Health have been testing the tool since April. Stanford Health Care, considered one of the nation’s leading hospitals, plans to make its artificial intelligence tool available to some doctors starting next week. At least a dozen or so doctors are already using it regularly as part of the trials.

“Patient messaging is not a burden in itself — it’s more a mismatch between demand and capacity,” Dr. Patricia Garcia, a Stanford gastroenterologist who is leading the pilot, told CBS MoneyWatch. “Care teams are not able to deal with the number of patient messages they receive in a timely manner.”

The tool, a HIPAA-compliant version of OpenAI’s GPT language model, is integrated into doctors’ inboxes through medical software company Epic’s “MyChart” patient portal, which allows patients to send messages to their healthcare providers.

“This could be a great opportunity to support patient care and free up clinicians for more complex interactions,” said Dr. Garcia. “Maybe large language models can be the tool that changes the ‘In Basket’ from a burden to an opportunity.”

Garcia said she hopes the tool will reduce doctors’ administrative work while increasing patient engagement and satisfaction. “If it works as predicted, it’s a win on all fronts,” she added.

Can artificial intelligence show empathy?

While texting with a new generation of AI won’t replace interacting with a doctor, research shows that the technology is now sophisticated enough to communicate with patients – a vital aspect of care that can be overlooked in America’s fragmented and bureaucratic health care system.

Indeed, recent research published in the journal JAMA Internal Medicine found that patients preferred ChatGPT’s responses over doctors’ responses to nearly 200 queries posted on an online social media forum. The authors found that the chatbot’s responses were rated higher for both quality and empathy.

Dr. Christopher Longhurst, an author of the study, said it shows that tools such as ChatGPT hold huge potential for use in healthcare.

“I think we’re going to see this move the needle more than anything in the past,” said Longhurst, chief medical officer and chief digital officer at UC San Diego Health and associate dean of the UC San Diego School of Medicine. “Physicians are getting a lot of messages. That’s typical for a primary care physician, and it’s a problem we’re trying to help solve.”

It should be noted that using technology to help doctors work more efficiently and intelligently is not revolutionary.

“There are many things we use in healthcare that help our doctors. In electronic health records, we have alerts that say, ‘Hey, this prescription could cause a patient to overdose.’ We have alarms and all kinds of decision-support tools, but the medicine is still practiced by the doctor,” Longhurst said.


In UC San Diego Health’s pilot, a preview of the dashboard of patient messages, shared with CBS MoneyWatch, illustrates how doctors interact with the AI. When they open a message from a patient asking about blood test results, for example, a suggested response drafted by the artificial intelligence appears. The responding physician can use it, edit it, or reject it.

GPT is able to provide a “useful response” to queries like, “I have a sore throat.” But no messages are sent to patients without first being reviewed by a live member of their medical team.

Meanwhile, all AI-assisted answers also come with a disclaimer.

“We say something like, ‘Part of this message was automatically generated in a secure environment and reviewed and edited by your care team,'” Longhurst said. “Our intention is to be completely transparent with our patients.”

So far, patients seem to think it works.

“We feel that patients appreciate that we have tried to help our doctors with answers,” he said. “They also understand that they’re not just getting an automated message from a chatbot, and that this is an edited response.”

“You have to be careful”

Despite the potential of artificial intelligence to improve clinician-patient communication, there are a number of challenges and limitations regarding the use of chatbots in healthcare settings.

First, at this point, even the most advanced forms of the technology can malfunction or “hallucinate,” giving random and even wrong answers to people’s questions, a potentially serious risk when the answers are meant to guide medical care.

“I think it has the potential to have that impact, but at the same time we have to be careful,” said Stanford’s Dr. Garcia. “We are dealing with real patients with real medical problems, and there are risks of [large language model] confabulation or hallucination. Therefore, it is very important that the first users in the country proceed very carefully and conservatively.”

Second, it remains unclear whether chatbots are appropriate for answering the many different questions a patient may have, including those related to prognosis and treatment, test results, insurance and payment, and many other issues that often arise when seeking medical help.

A third challenge is how current and future AI products will ensure patient privacy. With the number of cyberattacks on healthcare facilities on the rise, the increasing use of the technology in healthcare could lead to a significant surge in digital data containing sensitive medical information. This raises pressing questions about how such data will be stored and protected, and what rights patients have when interacting with chatbots about their care.

“[U]sing AI healthcare assistants creates a number of ethical issues that must be resolved before these technologies are implemented, including the need for human verification of AI-generated content for accuracy and the potential for false or fabricated information,” the JAMA study notes.

https://www.cbsnews.com/news/chatgpt-artificial-intelligence-ai-health-care-patient-messages/
