Google wants its A.I. to transform healthcare next as it partners with the Mayo Clinic, report reveals
Doctors have been using A.I. like ChatGPT for various tasks, including tedious paperwork, predicting health problems, and even improving their bedside manner. But what about a large language model (LLM), trained on medical exams, that can help them with diagnoses? Google hopes to take A.I. in healthcare mainstream with a new, medicine-specific chatbot called Med-PaLM 2, which it has been testing since April, the Wall Street Journal reported, citing people familiar with the matter.

According to Google’s website, Med-PaLM 2 is an LLM that answers medical questions, organizes information, and can synthesize various modes of data, including images and health records. Google, also the maker of the chatbot Bard, trained Med-PaLM 2 on medical licensing exams, and unsurprisingly it was the first A.I. to produce passing answers to questions in the style of the U.S. Medical Licensing Examination (USMLE). USMLE-style questions present a patient scenario listing symptoms, medical history, age, and other descriptors, then ask questions such as what the most likely complication is. Med-PaLM 2 both provided long-form answers to these questions and selected answers from multiple choices.

OpenAI’s GPT-4, the successor to the model behind ChatGPT, scored similarly to Med-PaLM 2 on medical exam questions despite not being specifically trained on them. However, both technologies remain too unreliable for high-stakes healthcare use.

“I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey,” Greg Corrado, a senior research director who worked on Med-PaLM 2, told the Wall Street Journal.

Google is currently piloting Med-PaLM 2 at the research hospital Mayo Clinic and has not announced when the chatbot could be released to the general public. Hospitals have been using ChatGPT almost since its release, and not just for quick medical questions: doctors use A.I. less like an encyclopedia and more like an assistant, even asking the chatbot how to conduct complex interactions, such as interventions for patients struggling with addiction.

Using A.I. templates to communicate with patients may seem like a poor substitute for human connection. Still, Med-PaLM 2’s responses to medical questions were preferred to real doctors’ responses, according to research Google published in May: physicians compared A.I.-generated responses with human-written ones along nine criteria and chose the A.I.’s answers on eight of the nine.

Despite the possibly higher quality of some A.I. answers, a 2018 survey found that most patients prioritize compassion in medical care and would even pay a higher fee for a more human experience. A.I. fundamentally cannot provide understanding, but using it to script better bedside manner can make doctor-patient conversations smoother and gentler.

Still, many are wary that integrating A.I. into medicine too quickly and without regulation could have disastrous consequences. A.I. often produces “hallucinations,” stating false information as fact, which could lead to incorrect diagnoses or treatments if not carefully checked by a person. Moreover, if not trained correctly, A.I. can replicate and amplify bias already ingrained in the healthcare system. The World Health Organization released a statement in May calling for a very cautious introduction of A.I. into medicine.

“Precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in A.I. and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” the WHO wrote.

There’s also the question of how patient data will be used once it is fed into hospital A.I. systems. Google and Microsoft did not train their algorithms on patient data, but individual hospitals could train their A.I. on patient data in the future. Google has already started using patient data from Mayo Clinic’s Minnesota headquarters for specific projects.

Google said patient data would generally be encrypted and inaccessible to the company, but the tech giant has caused controversy with its use of healthcare data in the past. In 2019, Google launched an initiative called “Project Nightingale,” which collected medical data from millions of Americans across 21 states without their consent. The data, which included patient names and other identifying information, diagnoses, lab results, and records, was used internally by Google without doctors’ or patients’ knowledge.

“Careful consideration will need to be given to the ethical deployment of this technology including rigorous quality assessment when used in different clinical settings and guardrails to mitigate against over-reliance on the output of a medical assistant,” Google wrote in its report on Med-PaLM.

Google did not respond to Fortune’s request for comment.