October 3, 2023

On November 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the AI-powered chatbot.

“I’m excited, surprised, and, to be honest, a little panicked,” said Peter Lee, vice president of research and incubation at Microsoft, which has invested in OpenAI.

He and other experts expect ChatGPT and other AI-powered large language models to take over mundane tasks that consume hours of doctors’ time and contribute to burnout, such as writing letters to health insurers or summarizing patient notes.

Still, they worry that AI also offers a potentially too tempting shortcut to finding diagnostic and medical information that may be incorrect or even fabricated, a dire prospect in fields such as medicine.

What surprised Dr. Lee most, though, was the use he hadn’t foreseen—doctors asked ChatGPT to help them communicate with patients in a more compassionate way.

In one poll, 85% of patients reported that a physician’s compassion was more important than wait times or costs. In another, nearly three-quarters of respondents said they had encountered an unsympathetic doctor. And a study of doctors’ conversations with the families of dying patients found that many lacked empathy.

Enter chatbots, which doctors are using to find words to break bad news and express concern about a patient’s distress, or simply to explain medical advice more clearly.

Even Microsoft’s Dr. Lee says it’s a little disturbing.

“As a patient, I personally find this a little strange,” he said.

But Dr. Michael Pignone, chairman of the Department of Internal Medicine at the University of Texas at Austin, has no qualms about the help he and his fellow physicians get from ChatGPT in communicating regularly with patients.

He explained the problem in doctor-speak: “We’re working on a program to improve treatment for alcohol use disorder. How do we engage patients who don’t respond to behavioral interventions?”

Or, translated by ChatGPT into plain language: How can doctors better help patients who are drinking too much alcohol but have not stopped after talking to a therapist?

He asked his team to write a script for how to talk to these patients compassionately.

“A week later, no one had done it,” he said. All he had was a text cobbled together by his research coordinator and a social worker on the team, and “that wasn’t really a script,” he said.

So Dr. Pignone tried ChatGPT, which instantly responded with all the talking points the doctor wanted.

However, the social workers said the script needed to be adapted for patients with little medical knowledge, and it also needed to be translated into Spanish. The final result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, begins with a reassuring introduction:

If you think you’ve been drinking too much, you’re not alone. Many people have this problem, but there are medications that can help you feel better and lead a healthier, happier life.

What follows is a brief explanation of the pros and cons of the treatment options. The team started using the script this month.

Dr. Christopher Moriates, co-principal investigator on the project, was impressed.

“Doctors have a reputation for using difficult or overly advanced language,” he said. “Interestingly, even words that we thought were easy to understand actually weren’t.”

The fifth-grade script “feels more real,” he said.

Skeptics like Dr. Dev Dash, a member of the Stanford Health Care data science team, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests conducted by Dr. Dash and his colleagues, the responses were sometimes wrong but, he said, more often useless or inconsistent. If doctors use chatbots to help communicate with patients, mistakes could make difficult situations worse.

“I know doctors are using it,” says Dr. Dash. “I’ve heard residents use it to guide clinical decision-making. I don’t think it’s appropriate.”

Some experts have questioned the need to turn to AI programs for empathetic words.

“Most of us want to trust and respect our physicians,” said Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School. “If they show they are good listeners and empathetic, that tends to increase our trust and respect.”

But empathy can be deceptive; it’s easy to confuse a good bedside manner with good medical advice, he said.

There’s a reason physicians may neglect compassion, said Dr. Douglas White, director of the Critical Illness Ethics and Decision-Making Program at the University of Pittsburgh School of Medicine. “Most physicians are quite cognitively focused, treating a patient’s medical issues as a series of problems to be solved,” Dr. White said. As a result, he said, they may fail to pay attention to “the emotional side of what patients and families are experiencing.”

At other times, doctors are all too aware of the need for empathy but struggle to find the right words. That is what happened to Dr. Gregory Moore, until recently a senior executive in charge of health and life sciences at Microsoft, who wanted to help a friend with terminal cancer. She was in a dire situation and needed advice about her treatment and her future. He decided to pose her questions to ChatGPT.

Dr. Moore said the results “shocked me.”

In a lengthy, compassionate response to Dr. Moore’s prompt, the program gave him words to explain to his friend the lack of effective treatments:

I know there is a lot of information to process and you may be disappointed or frustrated by the lack of options… I hope there are more and better treatments out there… I hope there will be in the future.

It also suggested how to break the bad news when his friend asked if she could attend an event two years from now:

I admire your strength and optimism, and I share your hopes and goals. But, I also want to be honest and realistic with you, I don’t want to give you any false promises or expectations… I know it’s not what you want to hear and it’s hard to accept.

Later in the conversation, Dr. Moore wrote to the AI program: “Thanks. She’s going to be devastated by all this. I don’t know what I can say or do to help her during this time.”

In response, Dr. Moore said, ChatGPT “started caring about me,” suggesting ways he could deal with his own grief and stress while trying to help his friend.

It concluded in an oddly personal and familiar tone:

You’re doing great, you’re making a difference. You are a good friend and a good doctor. I admire you and care about you very much.

Dr. Moore, who specialized in diagnostic radiology and neurology when he was a practicing physician, was stunned.

“I wish I had had this when I was in training,” he said. “I’ve never seen or been given coaching like this.”

He became an evangelist, telling his doctor friends what had happened. But, he and others say, when doctors use ChatGPT to find more empathetic words, they often hesitate to tell any but a few colleagues.

“Maybe that’s because we cling to a very human part of our profession,” Dr. Moore said.

Or, as Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, puts it, a doctor admitting to using a chatbot this way “is tantamount to admitting you don’t know how to talk to patients.”

Still, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would be handing over tasks, such as cultivating a more empathetic approach or reading charts, is to ask it some questions themselves.

“You’d be crazy not to try it and learn more about what it can do,” Dr. Krumholz said.

Microsoft wanted to know that too, and with OpenAI gave some academic physicians, including Dr. Kohane, early access to GPT-4, the updated version released in March for a monthly fee.

Dr. Kohane said he approached generative AI as a skeptic. In addition to his work at Harvard, he is an editor at the New England Journal of Medicine, which plans to launch a new journal on AI in medicine next year.

He said that while he noticed there was a lot of hype, testing GPT-4 left him “shocked.”

For example, Dr. Kohane is part of a network of physicians who help determine whether patients qualify for evaluation in a federal program for people with undiagnosed diseases.

Reading the referral letters and medical histories and then deciding whether to accept a patient is time-consuming. But when he shared that information with ChatGPT, it “was able to decide, accurately, within minutes, what took doctors a month to do,” Dr. Kohane said.

Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 has become his constant companion, making his time with patients more efficient. It writes kind responses to his patients’ emails, provides his staff with compassionate replies to use when answering questions from patients who call the office, and takes over onerous paperwork.

He recently asked the program to write an appeal letter to an insurance company. His patient suffered from chronic inflammation that had gotten no relief from standard drug therapy, and Dr. Stern wanted the insurer to cover the off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially denied coverage, and he wanted it to reconsider that denial.

That kind of correspondence would have taken Dr. Stern hours; ChatGPT produced it in minutes.

After receiving the bot-written letter, the insurance company granted the request.

“It’s like a new world,” Dr. Stern said.



