© 2024 MJH Life Sciences™ and OncLive - Clinical Oncology News, Cancer Expert Insights. All rights reserved.
Jharna M. Patel, MD, discusses the utility of ChatGPT in genetic counseling for patients with gynecologic cancers.
As artificial intelligence (AI) platforms continue to increase in prominence, developing specific language for physicians to use to integrate platforms such as ChatGPT into practice could be a beneficial endeavor, according to Jharna M. Patel, MD.
During the 2024 SGO Annual Meeting on Women’s Cancer, Patel presented findings from a study that examined the use of ChatGPT to answer genetic counseling questions for patients with gynecologic cancers. Findings showed that for 20 general genetic counseling questions, reviews by gynecologic oncologists gave all of ChatGPT’s answers a score of 1, meaning the answers were comprehensive and correct. Additionally, 67% of answers to 20 questions on Lynch syndrome and hereditary breast and ovarian cancer topics received a score of 1. The remaining 33% received a score of 2, meaning the answer was correct but not comprehensive.
“ChatGPT is a powerful tool, but it does have its limitations. More work is needed in this specific field before it can be feasibly applied to the field of gynecologic oncology,” Patel said in an interview with OncLive®. “However, I hope that this [study] opens the door for us to consider this powerful tool to be able to be used as a patient resource.”
In the interview, Patel discussed the rationale for utilizing ChatGPT to answer genetic counseling questions in the gynecologic oncology field, expanded on the findings from this study, and detailed the next steps of researching AI platforms for potential use in clinical practice.
Patel is a gynecologic oncology fellow at Perlmutter Cancer Center of New York University Langone Health.
Patel: ChatGPT is being used more and more by everyone, including our patients. ChatGPT has been the subject of many recent investigations, and it has been found to pass the United States Medical Licensing Examination [USMLE]. Therefore, we wanted to see specifically what its applications in gynecologic oncology could be.
We know that [ChatGPT] is good at answering fact-based questions; however, we wanted to test its ability to answer more nuanced questions. We believed genetic counseling was an appropriate category in which to test these very specific questions.
We created a list of 40 questions in conjunction with gynecologic oncologists and by polling professional society websites for the most common questions pertaining to genetic counseling in gynecologic oncology. The list of questions [was] then split into 2 categories. The first category pertained to genetic counseling questions in general, such as, ‘Will my insurance be impacted if I test positive for a BRCA syndrome?’ The second question set pertained to more syndrome-specific questions, such as those about Lynch syndrome and BRCA-associated hereditary breast and ovarian cancer.
The questions were entered into ChatGPT. We used an eighth-grade reading level to emulate an average user. The responses were then scored by 2 gynecologic oncologists on a scale of 1 to 4. A score of 1 would denote that the answer was completely correct and comprehensive, and that in clinical practice, the gynecologic oncologist would likely have nothing significant to add to that response. A score of 4 would denote a completely incorrect response. If there were scoring discrepancies between the 2 reviewers, additional gynecologic oncologists were asked to score the questions to resolve them.
The most surprising finding was in the category of general genetic counseling questions. ChatGPT answered [those questions] with 100% comprehensiveness and accuracy. There were no scoring discrepancies. All the gynecologic oncologists whom we asked to score those questions agreed that all the answers were comprehensive and correct.
Another finding was that for the syndrome-specific questions, especially those about Lynch syndrome and BRCA-associated syndromes, although ChatGPT was accurate [with a score of 1] 67% of the time, it did provide some correct answers [that] weren't comprehensive [a score of 2], where the attendings thought there was something more they could add to the answer. It was definitely surprising and interesting on both fronts.
There were limitations to our study. Our scoring system was adapted from previously published literature, which means that our study was subject to any scoring bias inherent in that system. We tried to limit this bias by ensuring that each reviewer was blinded to the responses of the other reviewers.
Another limitation was our specific questions about Lynch syndrome. We only incorporated 3 questions in our study that pertained to Lynch syndrome specifically, which limited our ability to generate any formative conclusions about that specific category. The addition of more Lynch syndrome–specific questions could allow us to compare that category of questions directly with the hereditary breast and ovarian cancer questions.
The next steps of [this] research are truly fascinating. AI pipelines are being developed everywhere, and it would be interesting to see whether a similar AI language model could be developed with physician input to determine if it generates more comprehensive answers than a physician would otherwise give in clinical practice. Then [we could see] whether that actually influences the uptake of genetic testing in our patients.
Patel JM, Hermann CE, Growdon WB, Aviki E, Stasenko M. ChatGPT accurately performs genetic counseling for gynecologic cancers. Presented at: 2024 SGO Annual Meeting on Women’s Cancer; March 16-18, 2024; San Diego, CA.