ChatGPT: Will It Transform the World of Health Care?

  • Published 22 Dec 2024

Comments • 25

  • @wordysmithsonism8767 • 1 year ago

    For progress, that last guy before the Q&A is the man.

  • @jk35260 • 1 year ago +3

    Very interesting and meaningful discussion.
    GPT-4 is already out and seems to excel in the fields of biology and medicine. DAX is a great tool.
    MSFT and OpenAI will certainly create an easy-to-use API for institutions to fine-tune GPT-4. I think the human touch with patients is important.
    Later on, GPT-4 will be able to look at and analyse pictures.

  • @wandabedinghaus • 1 year ago

    Thanks for this inspiring presentation. Now doctors can be the healers they went into medicine to be.

  • @sandybayes • 1 year ago

    I would hope that the next generation of health care incorporates nutrition into its treatment and prevention options for patients. This area has been sorely neglected, yet many people seek treatment modalities outside the traditional medical community in search of better, less invasive options. Most medical doctors I have encountered do not have the foggiest idea of what the human body's nutritional requirements are in order to function well and to prevent specific chronic diseases.

  • @equinoxhit • 1 year ago

    Thank you for sharing!

  • @laurenpinschannels • 1 year ago +2

    HIPAA has not yet approved ChatGPT, has it?

    • @whatisiswhatable • 1 year ago +1

      It has been approved in Azure... I saw an article on this.

    • @hongmeixie409 • 1 year ago

      @whatisiswhatable What's the document?

  • @freedomlife3623 • 1 year ago +4

    If the consult given to the patient is wrong, who does the patient sue? The doctor, or the ChatGPT the doctor used?

    • @laurenpinschannels • 1 year ago

      Right now it would still be the doctor, I think.

    • @shihtzusrule9115 • 1 year ago

      Doctors, NPs, or PAs, whoever is creating the report/documentation, will still have to sign off on it, so the signature shows the practitioner read, edited, and approved the document. It's the same scenario with medical transcriptionists, who do the typing or clean up after voice wreck-ognition or the dictator, or with scribes, who work without the practitioner dictating at all, using just an intake sheet and access to the patient's chart, and who compile the medical data into the reports that get the hospital/clinic closer to billing and reimbursement.

      Microsoft bought Nuance, a pioneer in voice recognition software and probably the largest prime contractor with hospitals and clinics to transcribe these dictated reports. As for the snippets of conversation or text the woman was talking about: iSoftStone hired people to transcribe 10-second recordings of conversations to help voice recognition (and now, I'm thinking, ChatGPT) build a database to draw from to correct some of its problems early on.

      IBM has worked on voice rec since 2001: A Space Odyssey. An IBM employee and friend of Stanley Kubrick came to work on the set as a technical advisor. In the movie, the computer's name was HAL, each of those three letters one step back in the alphabet from I-B-M. He took a leave from IBM's voice rec work because it wasn't going anywhere and working on the movie set sounded fun. So this problem has been worked on since the 1960s, if not the '50s.

      The problem is that a computer is a difference engine. There is no "intelligence" or "learning." You have to have a database that can cover all contingencies and enough "if-then" scenarios to guide the computer through the data and select the most probable solution when trying to match the request or task to an answer. Refining the question improves the probability of a correct answer.

      HIPAA allows patients to sue for errors in their medical records, and believe me, they abound, but laypeople don't get it. After you go to the doctor or the hospital, even the ER, you should quickly get your dictated reports and see what is in them: initial evaluation or H&P, follow-up visits/consults, discharge summary, operative reports, imaging, etc. It could dispel any misapprehensions you have about your care.

  • @jch3117 • 1 year ago

    Excellent conversation

  • @laurenpinschannels • 1 year ago

    There's incredible research happening. Don't miss the clinical language model research if you're going to use it; it's important to be able to check how reliable the AI will be, because it learns surprising, invisible interference patterns.

  • @brozbro • 1 year ago

    There will be many chats: ChatMedical, ChatLaw, ChatEngineering, etc., each learning its specialty...

  • @entrepremed • 1 year ago

    "If we don't do it, someone else will"... the only part of this talk I don't find so cool.

  • @laurenpinschannels • 1 year ago +1

    Re: the other comment warning of fascism: that is definitely a risk given the rising threat, but it could be worse than that if the AI learns to be authoritarian and then turns it back on the human authoritarians who created it. We do need this sort of advanced AI, but it's important to understand causal validity and how to respect the agentic rights of the patient. Medical safety is the same problem as general inter-agent-systems AI safety: we need to be able to check whether the AI is structured in a way that promotes the patient's agency, rather than steering them into doing the maximally safe thing and tricking the doctor into controlling them. Don't accidentally build the Matrix while trying to make humans live forever; people need to retain their own choice-making, and yet have the tools to live as long as they want healthily and truly understand the world.

  • @cmorth2413 • 1 year ago +2

    WOW, if this gets my doctor to pay more attention to me…hurrah!

    • @shihtzusrule9115 • 1 year ago

      It is guaranteed not to. When they go over your chart, think about your case, and recap and document your history, the objective data from tests and exams, and an assessment and plan for treatment, it makes them think about you. They may not be able to put a face with the name or diagnosis, but they do have to know who they are dictating on. If this allows them to skip that step, then no, they are not going to think about you or your case, or get closer to the ah-ha moment some patients need to reach the right diagnosis and treatment.

  • @chris-hu7tm • 1 year ago +1

    We need to push for a diagnosis ChatGPT where you can diagnose yourself.

  • @santinorider7536 • 1 year ago

    This is horrifying.

  • @jamesedmonds926 • 1 year ago +3

    Tech pharma fascism

  • @laurenpinschannels • 1 year ago

    The claim about the expected growth of GPT-4 is an exaggerated meme and is wrong; parameter growth is going much more slowly than that.