Is ChatGPT Nicer than Your Doctor?

  • Published Jan 23, 2025

Comments • 101

  • @ZeroAce7 • 1 year ago +34

    My doctor acts like I’m wasting their time when I ask 2 questions, like only one was included in the package and I should know that.

    • @shakeyj4523 • 1 year ago

      And they are typing on the computer instead of really listening to you.

  • @RobKinneySouthpaw • 1 year ago +39

    What this might show is that a physician who spends more time conversing with a patient will tend to be rated higher in empathy and in the quality of their advice, regardless of the actual content of that advice.

    • @dreamingbutterfly1 • 1 year ago +2

      This is the principle that alternative "medicine" operates on.

  • @jamesmoore4023 • 1 year ago +52

    I tried it with consent from a friend dying from ALS and he said it was more informative and compassionate than his care team.

    • @Praisethesunson • 1 year ago

      Usually that's because his "care team" includes a series of corporate ghouls literally skimming off the top at every step in the healthcare process.

  • @hungrymusicwolf • 1 year ago +8

    I would argue that the chatbot's ability to talk more than a doctor plays into its ability to make a patient feel cared for. You can say that isn't an accurate definition of empathy, but that isn't really the point. It's all about whether the patient feels it, not whether it is so.
    Though I wholeheartedly agree with you on the quality-of-information aspect.

    • @shakeyj4523 • 1 year ago

      I wonder if part of it is that doctors have gotten so bad that people are grateful for ANY little tidbit of feeling cared for.

  • @Draddar • 1 year ago +9

    While we may not yet be at a point where AI outperforms doctors in such a task, I think this study shows that perhaps we are also not as far off as we would have liked. ChatGPT has several problems right now, including fabricating answers, but with the speed these systems are developing I'd be shocked if it doesn't surpass human doctors within a matter of years, at least in these general "health issue -> advice and therapy" sorts of tasks. Let's face it, a lot of instructions are copy-paste based on the diagnosis anyway. And I haven't even touched on the unlimited speed at which it can operate.
    I suspect medicine (as well as most other fields) will be a totally different ballgame within a decade unless we artificially limit its use.

    • @HexerPsy • 1 year ago

      Well, for simple diagnoses there is already a lot of patient information out there: leaflets or whole websites. Usually the patient gets this information, or an explanation of the medication, but patients rarely read it properly, or they would rather search online than check the provided info. If a chatbot helps in this manner, that's great, and it can be done today.
      Meanwhile, a lot of deep-learning tools are already in use in hospitals for all kinds of tasks, from enhanced detection to prediction models.
      Chat tools like ChatGPT are really only good at communicating, because they are trained to produce likable text, not correct text. This is why they can 'hallucinate', as you describe it. In hospitals, the current standard is shared decision making, where the doctor informs the patient of the best options, and doctor and patient balance the risks against the benefits. It's decision making based on principles, which is just not how ChatGPT is built, as it has no principles. Applying these models to patients today would be considered unethical.
      We have to be careful, because a 'good communicator' is not automatically 'good at its job'. We are too used to listening to experts in this manner: if it sounds good, it must be true. In the world of LLMs like ChatGPT, that's dangerous today.
      However, you are probably right about the next generation of deep-learning networks (old generations being image-detection networks, the current generation being attention-based transformers like ChatGPT). If the next generation can combine database information and online search with principled decision making, it could be considered actual AI. Right now, all forms of 'AI' tools and deep-learning networks are just complicated statistical models.

    • @laurenpinschannels • 1 year ago

      Idk about "would have liked". The sooner ChatGPT can replace doctors at triage, the sooner doctors can switch to handling the stuff that actually is way, way beyond what AI can handle right now, which is going to be a lot of stuff for a long time because of how uniquely deep biology is. Unlike some jobs, doctors aren't going away until there are no more health problems humans are the best option for, and phew, that's going to be a while compared to some subdomains of AI; it will plausibly be the single job that lasts longest, for a variety of reasons. There's job risk for sure, but because of licensing needs, doctors would best spend their time figuring out how to bend AI into another giant whose shoulders they can stand on, because we're not going to lose the need for doctors until everything is solved.

  • @cassioamorim1348 • 1 year ago +4

    I have type 1 diabetes and my current doctor is great! Very empathetic and patient, and always trying to improve my condition, which has never been better!
    BUT, my previous doctor would literally just smile and talk to me for 30 seconds, with no conversation or empathy at all. I'm quite sure the quality of service of some doctors is a known issue in medicine, and where ChatGPT or any future AI has an edge is that it can be trained to act only like the good ones.

  • @worldcitizenra • 1 year ago +2

    A chat bot has a huge advantage over in-person doctors in making a diagnosis and recommending treatment, assuming that both the doctor and the chat bot have access to the same primary input data about the patient's condition. That advantage is the chat bot's ability to quickly do a massive information search that allows the most recent medical research to drive its results, and to do so without bias and without having been influenced by the often self-serving information provided to doctors by pharma and medical-products companies. The chat bot would also not have the psychological commitment to past diagnoses and treatments that a doctor may have. Several years ago, Healthcare Triage posted videos reporting studies showing that doctors tend to adhere to practices they've used in the past even when newer research shows those practices to be ineffective or harmful. In addition, it is well documented that some organizations responsible for defining the standard of care for illnesses will adhere to and continue existing standards even after research is published questioning or disproving the efficacy and rationale for the standard. A chat bot has no such commitment to the status quo, unless its algorithms are written to give preference to established conventional standards.

    • @m136dalie • 1 year ago +1

      The problem with the chat bot is it has no deep understanding of how the human body operates. If given prompts of 2-3 sentences with key information it will reliably get the diagnosis, but when you give it complicated presentations as found in some case reports it becomes unreliable.

    • @worldcitizenra • 1 year ago +1

      @@m136dalie - So, chat bot to research diagnosis and alternatives; doctor to interpret and prioritize the treatment options presented?

    • @m136dalie • 1 year ago +1

      @@worldcitizenra I think the best case to elaborate would be radiologists, since it's probably going to be the speciality most affected by AI and they're also the ones who talk about it the most.
      How it currently works is every hospital has a certain number of radiologists who analyse X-rays, CT scans and MRI scans that are ordered. Generally every radiologist has a long list of scans to sort through but they will prioritise urgent scans. Each scan takes a considerable amount of time to review, especially CT scans and MRI scans.
      Now if AI becomes very sophisticated, it would be able to analyse and provide a preliminary diagnosis for every scan. The radiologist would then review the scan and potentially make changes to the diagnosis if necessary. This would drastically reduce the amount of time a radiologist spends per scan, meaning they can review significantly more scans per hour.
      So hospitals will then have the choice to encourage more liberal use of imaging, or (more likely) hire fewer radiologists. However, for safety and legal reasons you can never get rid of the radiologist entirely, since there has to be a safety net in case the AI makes a mistake.
      If AI does become prevalent in other fields of medicine, it would be in a similar way to the example stated above. Honestly, I think it's a good thing since it would make doctors more efficient.

  • @marley7145 • 1 year ago +1

    When I ask my doctor a question, my priorities are accuracy and completeness. In my limited experience with ChatGPT, I can expect neither.
    The other thing I need from my doctor is her experience. ChatGPT can look at the number of people who have searched for "condition" and "remedy". My doctor has the experience of working with her patients, becoming familiar with their cases, and following up to see if those remedies had any effect, positive or negative. That's what I really need.

  • @peterterry7918 • 1 year ago +7

    As a doctor, I think this is a good start. I would use AI to collect information and present it to me with a differential diagnosis list. After confirmation of the history and exam, the patient and provider can discuss options for testing or treatment. The AI could create the note for the provider to review, and help answer questions the patient forgot to ask or remind them of what was said. Unfortunately, insurance companies will want the issue of legal liability settled before agreeing to a provider using AI.

    • @TheScourge007 • 1 year ago +1

      My question there would actually be: what's the advantage of an AI over a knowledge database with a good search function? After all, machine learning forms the basis both of language models like ChatGPT and of search engines like Google. My worry with an AI is how it winds up convincing people of the certainty of its answers, and that it is better calibrated to producing a fake citation than a real one. Not to mention there's no way for the AI to determine truth outside of what humans tell it is true. At that point, a properly curated knowledge database with a good search function is a better answer. There's no obfuscation of the role of humans, nor a misplaced belief that a machine with parameters set by humans has better insight into reality than humans do.
      What concerns me about the chat functions of AI is that they exaggerate the power of AI and at the same time draw people's thoughts about how it can be used down too-narrow pathways. We could use AI to aid search functions, data updates, and checking for contradictions in sprawling databases. But by wanting it to act as a chatbot, we ask it to do things it really can't do, like decide truth.

    • @shakeyj4523 • 1 year ago

      With the way current doctors are, how long before you take the patient out of the loop completely? It's essentially that way now. Doctors don't even listen to your heart anymore. They don't listen to your lungs. They don't listen to the PERSON. They just input into the computer. They spend more time rushing people out than they do caring for them.

  • @_ch1pset • 1 year ago +3

    The current state of AI is that it can be useful for bouncing ideas off of, and it can synthesize responses based on the known information it was trained on. It's a very handy tool as long as you are able to spot bad information. GPT produces bad responses quite a lot, but if you can spot them, you can have it reflect and correct itself.
    How you spot bad information depends; you just have to make sure responses are logically consistent, follow the directions exactly as queried, make sure any extra information is truly relevant, etc. I have experimented with asking it to help me solve some problems using code. In that case you have to check its code to make sure it does what you asked. Sometimes you just need to give it clearer queries, but other times it just doesn't have enough context to produce an accurate response.

  • @gravityvertigo13579 • 1 year ago +5

    Respectfully to some of these comments: it's true that doctors not putting enough effort into bedside manner is a real problem. But a machine regurgitating a bunch of friendly nonsense you can't trust is a bogus solution.

    • @SuperDoNotWant • 1 year ago +2

      It is, but you've just described acupuncture, TCM, and chiropractic. Placebos whose biggest benefit comes from "friendly nonsense". Next question: Does the "person" listening and being nice even have to be human to "improve healthcare outcomes"?

    • @gravityvertigo13579 • 1 year ago

      @@SuperDoNotWant what was the first question

  • @verity3616 • 1 year ago +5

    As much as people thinking AI is more 'accurate' than doctors is going to be a problem going forward, I worry about the public perception that occupational experience is _less important_ than the generically 'nice' way an AI can reply to prompts. I wish these authors had used validated tools to assess patient satisfaction and the appropriateness/accuracy of the provided information.
    Also, one of the flaws in this particular study is that the people replying on a subreddit are not necessarily specialists; you can get a general practitioner responding about a rarer skin disease with a vague, but accurate, reply. If a dermatologist had wandered by, they would have had a wealth of additional information and expertise to share. Since participation on the site is voluntary, this was a matter of chance and does not accurately reflect real-world care practices. Humans online also tend to be aware of the legal limits of what they should say or recommend; AI, not so much.
    AI is also going to rely solely on aggregating publications or written material, right? It cannot possibly include the insights gained from lived experience as a healthcare provider. Sometimes being honest and being perceived as 'nice' in the short term are mutually exclusive in medicine. Moving forward, we should be very cautious about letting the conversation around AI in medicine be framed as a matter of customer satisfaction.
    I'm not a Luddite, but I just hate this.

  • @billyb6001 • 1 year ago +4

    I did use ChatGPT to look at some of my medical results

  • @RebdullRinn • 1 year ago

    I have adult-onset Still's disease, an extremely rare autoinflammatory disease, and popped my symptoms into a medical AI bot, and it spat out exactly the possible diagnoses and diagnostic paths my drs used. Only my drs did it over a period of 6 months. The medication we tried which led to my dx (which is safe, cheap, and quick) took us 5 months to think of trying. The bot suggested it immediately. There's absolutely a place for AI in medicine as a collaborative tool.

  • @laurenpinschannels • 1 year ago +1

    Some papers to look up, all are open access on arxiv, from this year -
    - Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness (negative about readiness-to-use)
    - Evaluation of ChatGPT Family of Models for Biomedical Reasoning and Classification (negative)
    ...reminder to those new to reading papers, remember that most papers are kinda crap, especially in ml, double especially when not filtering by peer-reviewed which I am emphatically not doing here; I am instead filtering by "high enough apparent quality to be worth mentioning with a note about apparent quality". I am just a skilled nerd on the internet with a hobby, real medical domain researchers will probably have strong favorites and likely know how to find some I don't.
    - Can large language models reason about medical questions? (negative)
    - Capabilities of GPT-4 on Medical Challenge Problems (positive: "gpt4 is enough better to stand out vs gpt3.5")
    - Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations (hesitant: chatgpt breaks japanese law by recommending things the japanese healthcare system disallows but where debate is ongoing about values globally)
    - The Case Records of ChatGPT: Language Models and Complex Clinical Questions (optimistic: "can go well in a few trials, further research needed")
    - Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery (mixed: "well it sorta works I guess and usually doesn't give awful advice but usually isn't that good either" or so)
    - Differentiate ChatGPT-generated and Human-written Medical Texts (mixed: you can tell chatgpt wrote it because it'll be fluent but vague, non-concrete)
    - Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding (mixed: promising but further capabilities research needed)
    - ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge (optimistic: an attempt to build an open source doctor model; unclear if the edge case rate is actually lowered, but seems quite promising as a component)
    - On the Evaluations of ChatGPT and Emotion-enhanced Prompting for Mental Health Analysis (mixed: could work but it's fragile right now)
    - ChatGPT may Pass the Bar Exam soon, but has a Long Way to Go for the LexGLUE benchmark (another field, but shows where chatgpt is skill wise)
    - MedAlpaca - An Open-Source Collection of Medical Conversational AI Models and Training Data (another attempt to make an open source one, again unclear from abstract how well it does)
    - In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT ("yeah so this isn't reliable basically")
    - BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining (yet another pretrained one with unclear actual success rate)
    - CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained Language Models (yet another attempt to do something crazy with unclear success)
    - Many bioinformatics programming tasks can be automated with ChatGPT (another highly optimistic one with vague success)
    - Benchmarking ChatGPT-4 on ACR Radiation Oncology In-Training (TXIT) Exam and Red Journal Gray Zone Cases: Potentials and Challenges for AI-Assisted Medical Education and Decision Making in Radiation Oncology (performs ok but not great; "Regarding clinical care paths, ChatGPT-4 performs well in diagnosis, prognosis, and toxicity but lacks proficiency in topics related to brachytherapy and dosimetry")
    - Do We Still Need Clinical Language Models? (answer: pretty much "yep, chatgpt isn't enough")
    - Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential (mixed, but optimistic)
    - Putting ChatGPT's Medical Advice to the (Turing) Test ("On average, chatbot responses were correctly identified 65.5% of the time, and provider responses were correctly distinguished 65.1% of the time. Responses regarding patients' trust in chatbots' functions were weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in questions increased. Conclusions: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions.")
    - Evaluating Large Language Models on a Highly-specialized Topic, Radiation Oncology Physics ("although ChatGPT (GPT-4) performed well overall, its intrinsic properties did not allow for further improvement when scoring based on a majority vote across trials. In contrast, a team of medical physicists were able to greatly outperform ChatGPT (GPT-4) using a majority vote. This study suggests a great potential for LLMs to work alongside radiation oncology experts as highly knowledgeable assistants")

  • @crash.override • 1 year ago +16

    Source: reddit
    Well, there's the problem with the study!

    • @Healermain15 • 1 year ago +2

      It really is, though. It implies they didn't even know who their test group was, or how representative they were of their larger field,
      or whether they were even qualified to make those assessments, unless that subreddit has some very strict posting requirements that I'm not aware of.

    • @QvsTheWorld • 1 year ago

      Maybe I understood wrong, but I thought the study collected medical questions and answers from Reddit, then generated ChatGPT responses, then had both responses analyzed by actual physicians. So the person who asked the question was never aware of the ChatGPT response.
      Even in this context, for me the problem is that you can't really certify the credentials of the human responder. There is also somewhat of an ethical concern if the original poster's question is assumed to be fair use for research without their consent.

  • @cassieoz1702 • 1 year ago +4

    Of course, no one ever talks about the 'bedside manner' of patients. Dealing with demanding, rude, violent patients (70% of primary care docs have been assaulted) who don't follow ANY of the guidance given but blame you for poor results is why docs are leaving the profession in droves.

    • @laurenpinschannels • 1 year ago +1

      I wonder if you could use AI to deflect rudeness more effectively. Not by talking to the AI yourself, but by telling rude patients that if they aren't willing to be kind, they can try their luck with AI advice.

    • @cassieoz1702 • 1 year ago +1

      @laurenpinschannels ah, but you see, they don't believe they're demanding, rude, aggressive or dismissive. It's only EVER the doctor's fault

    • @MichiruEll • 1 year ago +1

      I get that some patients are difficult, but at the same time taking care of our health is often not our priority, because in a capitalist system it cannot be.
      "Don't follow any of the guidance given" sounds like something my GP would say when I didn't lose weight after she said I "should lose weight". That's the entire scope of the instructions she gives. Thanks doc, hadn't thought of that ever, I'll just snap my fingers. I fully understand that this is a risk factor, but it takes more support than a guilt trip and a proclamation.

    • @cassieoz1702 • 1 year ago

      @MichiruEll No, I was actually thinking of not taking a specific antibiotic for a culture-proven wound infection and then filing a complaint against the doc when the wound dehisced. Fundamentally there are A-holes in every walk of life, but I get tired of one-sided stereotypes where only docs are wrong, all nurses are angels of mercy, and patients are eternal victims.

    • @MisterCynic18 • 1 year ago

      @laurenpinschannels I'm pretty sure a majority of people would gladly take the AI, regardless of how effective it was, as long as it was nicer to them. We can already see how susceptible people are to snake-oil salesmen and all manner of pseudoscience peddlers who just talk nice to them; a machine built purely to tell sweet lies would quickly replace real doctors.

  • @billyb6001 • 1 year ago +8

    The doctor I get through my insurance doesn't seem to care; they just run everything through computers anyway

    • @ratgreen • 1 year ago +1

      Here in the UK the doctors will literally Google something in front of you during the appointment if they don't know enough about it. As if I haven't done that already.

    • @mirsadm • 1 year ago +1

      @ratgreen What's the problem there? Would you rather they make up an answer instead? That's what most professionals do: they Google it and use their knowledge in the field to come up with an answer.

  • @ratgreen • 1 year ago +1

    Here in the UK the public health service is so underfunded you get about 10 minutes to discuss health issues with a doctor before you run out of time. Every appointment is rushed, and it seems like all they are trying to do is get you out the door ASAP. Which is useless if you've got something chronic or asymptomatic, or just something they can't see with their own eyes. With AI you don't have to deal with humans gaslighting you by suggesting it's all in your head. AI doesn't suggest dumb shit that you are already doing (eat healthy, exercise, go outside, hurr durr). ChatGPT is great for giving you ideas for things that COULD be causing you problems. Then you can get blood tests yourself, feed ChatGPT the results, and see what it suggests. Google the suggestions to confirm they are legit. You can methodically rule out what it can and can't be, and arm yourself for your next 10-minute appointment to get the most use out of a human doctor that you can.
    Honestly, it would free up doctors' time if they had a certified healthcare AI chatbot that patients could use to deal with common, low-risk issues (colds, ear infections, etc.) which currently take up doctors' time that could otherwise be spent on more serious cases.

    • @shakeyj4523 • 1 year ago +1

      It's the same way in the US, only we have to pay hundreds to thousands of dollars a month to be treated that way. You need to get rid of the conservatives in government or you won't have doctors. Believe me, I have been in both systems, and you do NOT want the US model.

    • @ratgreen • 1 year ago

      @shakeyj4523 Absolutely right. The Conservatives are intentionally destroying and milking the NHS, making it so bad and unfit for purpose that they can force people to go private (their friends and families all own, or have stocks invested in, the private healthcare sector, to get rich), and gullible people will say, 'Oh, the NHS was not fit for purpose anyway, so it's a good thing.'

  • @QvsTheWorld • 1 year ago

    I don't know if I'm missing anything, but regarding the "quality/factuality" of the answers, couldn't someone just take the responses and have them graded on a more rigorous scale? As far as I'm concerned, the technology is on its way to providing good assistance to doctors, since it has the potential to always have the most up-to-date information. Patients could also use it to prepare for appointments with questions, concerns, and important information to provide to their doctor. At most, AI could be used to provide referrals for some tests and have the results forwarded to the patient's doctor.

  • @bojangprodoktschns5428 • 1 year ago +26

    It might help to have an AI that is trained on medical literature instead of 'the internet'.
    AI is a great thing to happen to our medical system! I am especially hopeful for the so-called rare diseases that are often misdiagnosed because of their rarity, yet are really not that rare when considered all together. Another blind spot is diseases that fall in between fields of expertise.
    Personally, I couldn't care less about empathy: it doesn't help me to get misdiagnosed in a nice way.

    • @CG_Hali • 1 year ago

      Can't upvote the point about diagnosing rare diseases first enough.

    • @ratgreen • 1 year ago +2

      This! There are too many conditions in existence for a human doctor to have enough knowledge of all of them, let alone how to identify, diagnose, and then finally treat them.

    • @kieleyevatt2232 • 1 year ago +1

      @ratgreen In theory that's why we have specialists. Your GP is supposed to figure out what general body system is the problem and send you to the correct specialist, who should know almost everything that could be wrong with that bodily system.

  • @MisterCynic18 • 1 year ago +2

    It deeply amuses me that an unthinking algorithm with no conscious awareness was rated as more "empathetic" than actual human beings.

    • @AileTheAlien • 1 year ago

      I think Dr. Carroll hit on it with the 'flowery' description - these things are really good at spitting out verbose responses. If it feels like the answer took more time to write, that could make it feel like another person who cares more. (And hopefully not just a computer.)

  • @BinarySplit • 1 year ago +2

    They just need to work on the gaslighting problem. We'd be less eager to replace them with AI if doctors would stop automatically blaming patients when they can't figure something out.

  • @joyg2526 • 1 year ago +2

    The chat bot had a better bedside manner than almost 100% of all the doctors I've ever had. Real doctors don't, or can't, care in our current US system. I don't know about accuracy. It'd be nice if real doctors and AI could get together and make a better team.

  • @lillyrocks2011 • 1 year ago +3

    I'd rather have the right diagnosis than a human doctor being falsely "empathetic" but dismissive... I've been misdiagnosed for years. I have a rare, awful autoimmune disease that human doctors dismissed because they didn't read my antibodies properly.
    Would I now trust a robot 🤖 more than a human doctor? I don't know, but I hope they can be accurate and really care.

  • @jtylermcclendon • 1 year ago

    I believe ChatGPT seems more empathetic because perceived empathy is based on relating to the person. Physicians are a very niche subset of the general population in terms of intelligence and work ethic, while ChatGPT is modeled on the average of the general population; therefore, ChatGPT is more relatable to the general population.

  • @Aegnor • 1 year ago

    I spent 15 years trying to get a doctor to tell me what was causing severe pain in my finger before it was finally diagnosed as a glomus tumor. ChatGPT figured it out in five minutes.

  • @mnoxman • 1 year ago

    Frankly, given my experience with cardiologists and emergency room doctors, EMH Mk 0 Alpha 1 will be a significant improvement.

  • @zacharyhockett6248 • 1 year ago +1

    The thing about this AI is not what it can do today but what it will be able to do tomorrow. Even more importantly, how cheap it will be.

  • @bellabilou7220 • 1 year ago +1

    Great as always 👌😊

  • @natedawww • 1 year ago +4

    It should also go without saying that, however polite an "AI" chat response may seem, it is definitionally incapable of empathy. It is not actually "thinking", let alone imagining and internalizing the emotional state of its conversation partner. It is merely stringing together words in a probabilistic manner (hence also the extremely valid concerns about accuracy).

    • @MisterCynic18 • 1 year ago

      Perhaps then empathy is not actually what people desire.

  • @Phlegethon • 1 year ago +1

    Family doctors are done. It's just a low-skill job where the doctor is wrong more often than is comfortable, or doesn't give someone a test they really need.

  • @Kriliska • 1 year ago

    For any chat to be nicer than my current doctors, it would have to coddle me like an infant.

  • @poozlius
    @poozlius 1 year ago +2

    [insert algorithm-pleasing comment engagement here]

  • @dylangreen6075
    @dylangreen6075 1 year ago +1

    Huh, well to me it sounds like this particular study was done very poorly. Like they were just trying to capitalize on the popularity of every article/story/paper being written about ChatGPT recently. No clarification as to what they meant by quality. Now that's some impressive research. This kind of click-bait material is definitely what we needed to be spending time and resources on when it comes to investigating the role of AI in our society. I wonder if AIs get together and discuss whether or not they think humans will ever achieve consciousness.

  • @TakeWalker
    @TakeWalker 1 year ago +1

    Anyone who trusts their life and health to an "AI" out of a belief that it's better than a real doctor (i.e., not out of desperation because they can't get real medical care for whatever reason) deserves what they get. We need to curb this crap before people start dying.

  • @jamesonstalanthasyu
    @jamesonstalanthasyu 1 year ago

    ChatGPT responses are like all the other internet armchair doctors/therapists. Seems to be big on blathering on but not so much on factual information.
    And I can also see ChatGPT being completely nonplussed by 5 subsequent "but are you sure" responses.

  • @Marconius6
    @Marconius6 1 year ago

    ChatGPT is definitely nicer than most of my doctors, but that's not exactly a high bar.
    More accurate? Probably not... yet.

  • @Alex-ki1yr
    @Alex-ki1yr 1 year ago

    Interested in what y'all have to say!

  • @elleplaudite
    @elleplaudite 1 year ago

    Tbh an average death row psychopath would be more empathetic than most doctors.

  • @jeffbrownstain
    @jeffbrownstain 1 year ago

    As someone who is equally aware of both the limitations of AI AND the limitations of Humans: AI are replacing MOST of the humans in my life 🤷‍♀️

  • @MahargBor
    @MahargBor 1 year ago +12

    And as always with AI technologies, this is the worst it will ever be. It will inevitably continue to self-improve. Let's continue to get AI ethically aligned with healthcare and all other fields of science.

    • @Ilamarea
      @Ilamarea 1 year ago

      AI will outgrow being aware of our existence. There's no point trying to align it. It's not the Great Filter for nothing!

    • @Healermain15
      @Healermain15 1 year ago

      This wasn't an endorsement of replacing doctors with AI. Not sure where you got that from in the video.

    • @SuperDoNotWant
      @SuperDoNotWant 1 year ago

      And what are you going to do when the AI decides you're not worth saving and recommends euthanasia? Or decides to lie to you, or withhold information because it's made its own judgement that it's for your own good (like white male doctors used to do to women and minorities)?
      Oh, how about the fact that the practice of medicine is historically sexist and racist, and the AI is trained on that history, so while AI medicine might be really good for Average White American Men, it's probably going to be even more lethal for anyone who isn't that?

    • @SuperDoNotWant
      @SuperDoNotWant 1 year ago

      I honestly can't wait until you ex-crypto/NFT guys find a new tech bro hobby that you think will Save Humanity.

    • @laurenpinschannels
      @laurenpinschannels 1 year ago +1

      ​@@SuperDoNotWant yeah this comment definitely has some spectator vibes. but also, reducing load on doctors is good actually - doctors' jobs aren't going away for quite a while. and you can say to rude patients "if you don't feel the need to be kind to me as a human, you're welcome to try your luck with AI", while simultaneously you can use AI yourself as a tool for keeping up with the research. it shouldn't be too long before there are high quality medical domain specific chatbots available certified from the FDA which are meant specifically as doctor's aides.
      @MahargBor it's not currently self-improving, and let's hope it doesn't start. It still takes a lot of hard work from researchers to improve these models. Self-improvement is in fact extremely dangerous and something we hope AIs don't do for a while.

  • @skywise001
    @skywise001 1 year ago

    ChatGPT is programmed to get responses. So if it lies to you and you like it, that's a win as far as its rather simple programming is concerned.

  • @IrocZIV
    @IrocZIV 1 year ago +1

    I think doctors utilizing AI will be an overall good.

  • @willemvandebeek
    @willemvandebeek 1 year ago

    Wow, does that mean ChatGPT passed the Turing test?

    • @MisterCynic18
      @MisterCynic18 1 year ago

      Plain old chatbots passed the Turing test like 20 years ago. It doesn't take much.

  • @HexerPsy
    @HexerPsy 1 year ago

    Ehm... ChatGPT has more input besides the prompt you give it. It is always told to produce helpful and empathetic responses, because its training dataset (the internet) also contains terrible things. This is why it immediately apologizes whenever you give it feedback on an earlier response.
    So you're saying it's acting nice, when it's constantly told behind your back to act nicely. Wow - big discovery!

  • @ajcrkni
    @ajcrkni 1 year ago

    Even though this study has limitations, like the accuracy issues mentioned, it's pretty easy to reproduce it in a better setting. Doc put up a solid defense, but let's face it, it's inevitable.

  • @PedanticNo1
    @PedanticNo1 1 year ago

    Rock and stone :]

  • @DarthStuticus
    @DarthStuticus 1 year ago

    So doctors should run their actual diagnosis through ChatGPT to make it more empathetic, and then fact-check it for errors.

  • @Praisethesunson
    @Praisethesunson 1 year ago

    It is nicer. But that's only because my doctor does child healthcare.
    So he doesn't have many fucks left for me after he spent time telling a 10-year-old how their cancer treatment is going to go.

  • @SuperDoNotWant
    @SuperDoNotWant 1 year ago +1

    We really are heading full-throttle to Idiocracy, aren't we?

    • @AileTheAlien
      @AileTheAlien 1 year ago

      😼Not cyberpunk enough. We live in the darkest timeline.

  • @anymouse8221
    @anymouse8221 1 year ago

    I mean... I've been yelled at by homeless people with more empathy than my average physician interaction.

  • @edwardbrito4010
    @edwardbrito4010 1 year ago

    No, the medical system is cold & cruel. Sure, individual nurses & doctors care, but the system promotes cold & calculated care.

  • @YossiSirote
    @YossiSirote 1 year ago

    I have never heard you so defensive before. Wow, this really got under your skin.

  • @BadgerOWesley
    @BadgerOWesley 1 year ago

    Sounds like you and other doctors feel overworked and are scared of AI being better than you.
    "AI is only able to take stupid people's jobs, not mine! I paid too much for med school just to have a computer that can potentially have access to all human knowledge be better than me!!? King of the doctors? Bah I say."

    • @m136dalie
      @m136dalie 1 year ago

      No doctor is scared of being replaced by AI. Even radiologists, who are most at risk. It will just make them more efficient at their jobs, so they can review cases rather than having to scrutinise every one individually.
      However, as it stands AI can't even reliably diagnose ECG readings, which are just a 1-dimensional line, so there's still a long way to go before it has a meaningful impact on medicine.