Artificial intelligence

Doctor Google? AI could be a doctor in the pocket, but the company’s health chief urges caution about its limits

The arrival of artificial intelligence in healthcare means everyone could one day have a doctor in their pocket, but Google’s chief health officer has urged caution about what AI can do and what its limits should be.

“There is going to be an opportunity for people to have even better access to services [and] to high quality services,” Dr Karen DeSalvo told Guardian Australia in an interview last week.

“But we have a way to go to get there. We have a lot to iron out to make sure the models are appropriately constrained, factual, consistent, and follow the ethical and fairness approaches we want to take – but I’m super excited about the potential, even as a doc.”

DeSalvo, a former Obama administration health chief, has led Google’s health division since 2021 and visited Australia for the first time in her role last week. She said AI would be a “tool in the toolbox” for doctors and could help address labor shortage issues and improve the quality of care provided to people. This would fill in the gaps rather than replace doctors, she added.

“As a doc, sometimes I have to say, ‘Oh my god, there’s this new stethoscope in my toolbox called a large language model, and it’s going to do a lot of amazing things.’ But it won’t replace doctors – I think it’s a tool in the toolbox.”

Last week, a Google study published in Nature analyzed how large language models (LLMs) could answer medical questions, with its own LLM Med-PaLM included in the study.

LLMs received 3,173 of the most frequently searched medical questions online, and results showed the Med-PaLM system generated responses on par with clinician responses 92.9% of the time. Responses deemed to be potentially harmful occurred at a rate of 5.8%. The authors said further evaluation was needed.

DeSalvo said she was still in a “testing and learning phase”, but LLMs could be the best intern for a doctor by putting all the textbooks in the world at their fingertips.

“I’m on the side of, there’s potential here and we should be bold as we consider what the potential uses might be to help people around the world.”

But AI should never replace humans in diagnosing and treating patients, she said, pointing to concerns about the risk of misdiagnosis, with early LLMs prone to what have been called “AI hallucinations” – inventing source material to fit the required response.


“One of the things we really focus on at Google is tuning the model and constraining the model in such a way that it leans factual,” she said. “Whether it’s for a clinician or for the patient, you don’t want a sonnet about your chemotherapy, you want to know what the literature says [and] Is it correct?”

DeSalvo said the ultimate goal was to address the information imbalance between the medical industry and the public, and to empower patients as much as possible.

“Information is a determinant of health. And that starts with understanding and knowing about the potential condition… We want to make sure people have that knowledge and that agency,” she said.

“When I practiced, I loved when [patients] arrived with the printed sheets or a spiral notebook with all their glucose readings written in the lines, and we were able to have a real conversation.”
