Using AI Chatbots To Get Reliable Information About Cancer

Millions of Americans use chatbots to learn about medical conditions like cancer.

If you’ve ever turned to a chatbot to ask questions about your health, you’re in good company. Millions of Americans now go online every day to seek medical advice from chatbots powered by artificial intelligence (AI).

People have been Googling their symptoms for decades, but AI chatbots take online medical advice to a new level. One recent survey found that people who search for health information online generally find AI-generated answers useful and trustworthy. 

It might surprise you, but many doctors agree: Chatbots can be a good way for you to learn more about what’s going on with your health. They can even give helpful, accurate information about complex diseases like cancer. 

But there are real downsides, too. In this article, doctors from Memorial Sloan Kettering Cancer Center (MSK) share tips on how to get the most out of these AI tools — and explain why you should still rely on your care team to give information that is most relevant to you. 

How accurate is chatbot information about cancer? 

“It’s very common for patients to use chatbots to educate themselves about their medical conditions,” says MSK gastrointestinal medical oncologist Michael Foote, MD, who published a study on the topic in JAMA Oncology in 2023. In that study, researchers asked cancer doctors from different specialties to review the accuracy of cancer treatment information provided by chatbots. At that time, the researchers found that about one-third of the responses were wrong. 


A similar study in the American Journal of Clinical Oncology in 2024 found that while many responses were accurate, others were misleading. “One of the challenges is that patients may latch on to the misleading parts, especially if they confirm what they already believe,” says MSK radiation oncologist James Janopaul-Naylor, MD, who conducted the study. 

That said, the accuracy of chatbots is rapidly improving, and both Dr. Foote and Dr. Janopaul-Naylor say it’s OK for patients to use these tools — with the understanding that these tools are not perfect. “Think of a chatbot as a research assistant rather than a decision-maker,” Dr. Janopaul-Naylor says. “Any medical decisions you make should always involve your care team.” 

How to use chatbots for information about cancer 

Brainstorming topics to discuss with your care team. Chatbots can help you come up with a list of things to talk about at your appointments. “A lot of times the chatbot may suggest things that you didn’t think of,” Dr. Foote says. “Patients who consult AI before they come to see me often have a better understanding of their disease and ask more in-depth questions.” 

Explaining complex terms. If you search online for information about your cancer — especially early in your cancer journey when everything is new and unfamiliar — you are likely to come across a lot of jargon and words you don’t know. A chatbot can help explain medical terminology in language that is clearer and may be easier for you to understand. 

Clarifying test results. Test results can be confusing and complicated, and after they show up in your patient portal you may need to wait a few days for a chance to talk to your doctor or nurse about them. “Chatbots can help define terms and phrases from your reports that can potentially reduce anxiety and lead to a more productive conversation when you next see your provider,” Dr. Janopaul-Naylor says. 

The risks of using chatbots for medical information 

Chatbots may pull data from unreliable sources. These tools scan the whole internet to create responses. That information may come from medical journals and other sources that have been carefully reviewed for accuracy, but it may also come from discussion forums, personal blogs, or shady websites trying to sell products. “When a chatbot tells you something, you can ask it for references,” Dr. Foote explains. “You should then go check those references to make sure they are reliable, or if they are even real.” 


Chatbots can hallucinate. It’s been widely documented that chatbots sometimes make things up, especially if they don’t have the answer to a question. “If something doesn’t seem right, there’s a good chance it’s not,” Dr. Janopaul-Naylor says. “One good way to check this is to ask the same question in different ways or to ask more than one chatbot. If you get different responses, that information is less likely to be accurate.” 

Chatbots tell you what you want to hear. Be careful not to let your own biases shape your questions. If you show a preference for certain treatments — for example, if you say you hope to avoid surgery or that you want to make herbs and supplements part of your treatment plan — the chatbot will be more likely to make suggestions that agree with those wishes. 

Chatbots may not respect your privacy. Companies that make chatbots often use the data you enter to further train their models. They may also sell data to outside entities, including insurance companies. If you are sharing your medical information with a chatbot, especially if you are uploading any of your documents, make sure that you remove anything that could link the information back to you, including your full name, birthdate, and medical record number (MRN). 

Key considerations when asking a chatbot for medical advice 

Don’t rely on chatbots in an emergency situation. If you think something is seriously wrong, call your doctor or go to an emergency room. Don’t enter your symptoms into a chatbot. 

Make sure to input the details of your case correctly. “User error can affect the results you get from chatbots,” Dr. Janopaul-Naylor says. “I had one patient who was concerned about his prognosis for prostate cancer, but it turned out that when he entered his test results into a chatbot, he had typed his PSA number wrong.” 

Chatbots use the information you give them. Treatment recommendations will likely be very different for a 35-year-old and an 80-year-old. If a chatbot doesn’t have all the information about you and your situation, its responses will be less relevant. 

Chatbots may not have access to the most current data. Chatbots have a “knowledge cutoff date,” after which no new information is added to their systems. “If a new treatment has recently been approved, it could be missing from the results you get,” Dr. Foote explains. 

Remember that the information you get from a chatbot is based on averages, but you are a unique individual. One problem with chatbots is that they usually report findings with a high degree of confidence, when the truth is often much more nuanced. This can be especially concerning when it comes to serious topics like survival rates. 

Always share your findings with your medical team, especially if there’s something that concerns you or doesn’t seem right. “I appreciate it when my patients or their family members come to an appointment with printouts or screenshots they’ve gotten from a chatbot,” Dr. Janopaul-Naylor says. “It helps them to ask more thoughtful questions and allows me to provide more detailed answers.”