**Submission Statement:** This article presents compelling case studies where patients, after being misdiagnosed by multiple doctors, found correct diagnoses for conditions like Tularemia and tethered cord syndrome using ChatGPT. While the medical community, represented by a senior cardiologist, acknowledges the immense potential of AI in both patient and clinical settings, there is a clear warning about the pitfalls—namely, fueling patient anxiety and encountering physician dismissiveness, as ‘patients are not widgets.’ The central discussion for the future must revolve around how we safely and effectively integrate these powerful diagnostic tools into the standard clinical workflow. What regulatory frameworks must be established to govern AI-assisted diagnosis? Should AI be formally adopted as a mandatory second opinion for complex or persistent symptom cases, or will this integration erode the human element of medicine and professional expertise? Furthermore, how do we train the next generation of physicians to collaborate with, rather than compete against, large language models to maximize patient outcomes?
VincoClavis:
AI diagnosed my sleep apnea after years of doctors putting me on various antidepressants – many of which actually make sleep worse.
AI diagnosed my mum with pancreatic cancer after months of doctors ignoring her symptoms. It wasn’t until she sat in A&E and told them she wasn’t leaving until she got a scan that they finally relented, scanned her and found the tumour.
AI diagnosed my daughter with cerebral palsy after the GP said “don’t worry about it, it will improve on its own.” This was during the critical formative stages of development. If we’d listened to him she wouldn’t have gotten any of the treatment she needed at this crucial stage.
It’s hard to even justify going to the GP these days, except that they are the official gatekeepers of NHS services.
moonbunnychan:
ChatGPT correctly diagnosed a problem with my vagus nerve as the cause of a cough, after YEARS of doctors thinking it was everything from asthma to acid reflux, or telling me nothing was wrong with me at all and it was in my head.
Smartimess:
AI in the medical field is one of its most promising applications. It is really good at narrowing down a diagnosis and at analyzing data via Ockham’s Razor, because it has learned from every medical field and will check symptoms independently of the doctor’s specialization.
tzmx:
I think the key here is how at least “first line” doctors work and “diagnose” you: basically, in essence, the same way as AI. You list your symptoms, maybe the doctor checks something he can see straight away without doing a specific examination, and that’s it. He puts one and one together and thinks of a diagnosis that fits those symptoms – more or less the same way AI does.
How efficiently and well doctors can do this compared to AI, I don’t know, but clearly some are worse than others.
It’s probably just a matter of time before this first line of doctors is replaced with AI LLMs.
PeterJoAl:
As with all LLMs, use it for advice but then double check the advice. Getting a diagnosis and then tests to see if the diagnosis is right sounds useful. Blindly trusting it, not so much.
bremidon:
Remember: doctors are only consultants. No matter how they want to believe otherwise, that is all they are.
Don’t misunderstand me. They are absolutely the experts in the field. But then again, any good consultant is an expert in their chosen field. In any case, listen to your doctor and make sure you understand.
The point is that *you* are the one that has to live with the consequences of any decisions, not the doctor. Additionally, as much as they are the experts in the field of medicine, you are the expert about your own body.
A good doctor will know all this and will form a partnership with the patient to tie together their medical expertise with your lifetime of experience with your body. A bad doctor will just brush you off or use some form of “because I say so.”
If your doctor is not able to communicate the medical information you need to make the right decision, they are a bad doctor. Get a new one.
The responsibility for your health is with you. You are the one that ultimately benefits or suffers from any health decisions made. And that means: you have to take full responsibility. You cannot pass it on to someone else, no matter how much more they know than you about medicine. You are the boss, and your doctor is your consultant.
bmrtt:
I once had an accident working aboard a tanker ship and couldn’t move my feet afterwards. I was taken to a hospital, they did an x-ray, and the resident doctor claimed that a certain bone at the top of my foot was broken and needed to be put in a cast. I was of course not taken back to the ship and was sent home instead, but this diagnosis and its consequences cost my company a lot of money, and cost me a job that was basically a jackpot at that point in my life.
Other doctors said a lot of different things, from “it was never broken, that was a misdiagnosis” to “it was broken but it healed miraculously fast”. I never got a concrete answer for years. I just lost the best job I’d ever had and fell into a year-long depression over nothing.
Recently I had ChatGPT take a look at the scans. It confirmed that there was indeed no bone broken, gave likely reasons for the misdiagnosis, and even explained why I couldn’t move my foot.
I’m sorry if this upsets snarky doomer redditors but I 100% trust AI for medical analysis over doctors. Only one of them fucked my life up because they couldn’t be bothered to pay attention.
CthonicFlames:
“It will all depend on the personality of the patients.”
This sentence alone should be bolded, underlined, italicized, and set in a 700% bigger font. In psychiatry, AI is definitely the enemy, with patients getting AI assurances that they don’t need their medications or that their psychoses are justified or not irrational.
jgutierrez81:
Well, what can you do when basic healthcare is too expensive? Just don’t trust it blindly.
lach888:
This is possibly the best use case for LLMs. Medicine is the most rigorously and accurately documented occupation in all of human history. It also points to the problem with LLMs: it took a century of evidence gathering to separate the signal from the noise, and then that’s it, the tool peaks almost immediately. You have to wait for millions and millions of humans to gather more data to substantially improve it.
lettercrank:
The problem is not where you get your diagnosis, but the blind faith you put in it. Trusting a doctor blindly is just as dumb as blindly trusting an LLM. You are your own health advocate.