What's Next For ChatGPT in Healthcare? - The Medical Futurist

  • Published May 16, 2024
  • Everyone has been talking about ChatGPT for months now, and it reached the milestone of 100 million users within two months of its launch.
    ChatGPT has passed law, business, and medical exams, and outperformed students in microbiology.
    It could become the best assistant we could ever have dreamed of, especially in medicine and healthcare.
    So what's next for ChatGPT in healthcare?
    Here are 6 things to expect in the next year!

Comments • 51

  • @Joseph1NJ · 1 year ago +4

    I had always hoped basic technology, not necessarily AI, would enhance patient outcomes. For example, our intake forms are rarely looked at, and if so, only at a glance. I would welcome a thorough online questionnaire we patients could complete well in advance of our appointment with a new MD. Those answers could be run through an algorithm identifying red flags or areas needing attention. For example, I guarantee my GP doesn't know what I do for a living, anything about my family's health history, what my diet is like, my sleep pattern, my emotional health, exercise, social life, exposure to toxins, and nothing of my health screenings, e.g., colonoscopy, skin screenings, eye checkups. Nothing.
    In the healthcare industry, maybe he isn't allowed more than 5 minutes, 4.5 of which are spent behind a computer screen, and has to keep the flow moving in order to maximise revenue. But since I'm paying the exorbitant cost of being self-insured, certainly I should get more, and this is where tech can assist.
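    The intake-screening idea above can be sketched as a simple rule-based pass over questionnaire answers. All field names and thresholds below are hypothetical, purely illustrative, and not clinical guidance:

```python
# Toy sketch of a rule-based intake screener. Every field name and
# threshold here is invented for illustration, not clinical guidance.

def flag_intake(answers: dict) -> list:
    """Return a list of red flags derived from a pre-visit questionnaire."""
    flags = []
    if answers.get("sleep_hours", 8) < 5:
        flags.append("chronic sleep deprivation")
    if (answers.get("family_history_cancer")
            and answers.get("age", 0) >= 45
            and not answers.get("had_colonoscopy")):
        flags.append("overdue colorectal screening")
    if answers.get("occupational_toxin_exposure"):
        flags.append("occupational toxin exposure")
    return flags

print(flag_intake({"sleep_hours": 4, "age": 50,
                   "family_history_cancer": True,
                   "had_colonoscopy": False}))
# → ['chronic sleep deprivation', 'overdue colorectal screening']
```

    A real system would pull from many more answers, but even a simple pass like this could surface items for the physician before the visit starts.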

  • @staroceans8677 · 1 year ago +1

    I completely agree with you and I look forward to it being a part of the future and think it has much to contribute.

  • @mpen7873 · 11 months ago +1

    Very helpful, great work!

  • @chenwilliam5176 · 1 year ago

    It can play the role of a healthcare consultant.

  • @DrMel1911 · 1 year ago +14

    I asked GPT-4 to assist with a literature review. The sources it provided were not very accurate. Sometimes it would mix up author names and article titles. Once I noticed that, I began asking it for the DOI so that I could go straight to the source without searching via name or author, but even the DOIs were leading to the wrong articles. Hopefully we will see improvement soon. Has anyone else experienced a similar situation?

    • @surajbamankar3603 · 1 year ago

      Indeed. I experienced a similar issue when I was working on my article.

    • @smryt9728 · 1 year ago

      Interestingly, I have found Bing AI (Precise Mode) to be the most reliable when it comes to fact-based questions and citing its sources. It's powered by GPT-4, but it has access to the internet, which helps it a lot. GPT-4 on OpenAI's website is much more creative and its answers are more interesting, but it gets many things just completely wrong.
      In any case, I think there is still LOTS of room for improvement when it comes to reliability with AI. I'm just impressed how far we've come. Had anyone asked me about generative AI last year, I would have said we're 10-20 years away from it having a big impact... man, was I wrong.

    • @DrMel1911 · 1 year ago

      @@smryt9728 I've noticed Google Bard will sometimes provide a link to its source as well. I agree that OpenAI's ChatGPT is more creative, but that's a challenge when you need facts. I guess for now we'll use all of the mainstream GPT tools depending on the need of the moment.

    • @smryt9728 · 1 year ago

      @@DrMel1911 Try to get access to Bing AI; for me it's been much better than using it on OpenAI's website, and it's been way better than Bard for providing reliable sources. It isn't perfect by any means, but it's been extremely useful.

  • @georgecorea7314 · 1 year ago

    Thanks, but I would love to know how you are practically using it daily and what issues you have found. If you already have a video on this, what's the link?

  • @skyracekid8985 · 1 year ago +2

    I don't mind ChatGPT doing studies on things like mental health, because I truly believe that is what is needed; if you don't have your mind, you don't have anything.

  • @EmaManfred · 1 year ago

    I think if the accuracy of ChatGPT can be ensured, then this can be feasible. I can see that generative AIs like Bluewillow still have some margin of error, which matters a great deal in the medical field.

  • @alecharrington9806 · 1 year ago

    I like the fact that you made it apparent that it is important to teach other providers or providers in training on how to use this technology as well as continue to develop it. When I was reading through the comments section, I did notice concerns regarding using private health information to potentially give the program a command. While we do have the ability to remove patient identifiers, do you think there still needs to be further public training on how to use these technologies for fear of giving a program a piece of private information that becomes accessible to other people who use those technologies?
    One of the concerns my colleagues and I discuss is that as these technologies become more medical-centric, they may not have the correct algorithms in place, which could allow patients, physicians, and the EMR (assuming cross-talking is possible) to accidentally give information to the bot, making that information accessible not only to those who made the bot but also to other people who can access it. From a legal standpoint, by using these technologies and giving personal information to bot technologies, does that personal information become the private property of whoever owns the bot?

    • @drbentran · 1 year ago +1

      Very important point. Companies across multiple industries (management consulting, finance, healthcare, tech) have explicitly told staff not to supply sensitive corporate or patient-identifying information to ChatGPT, out of concern that ChatGPT could somehow divulge this information in its subsequent answers to others. There are efforts underway by Microsoft's Azure to create an internal version of ChatGPT that reduces the likelihood of "spilling" this information, so that confidential information can be used internally within a corporation or health care system.

    • @mbongenindlovu2795 · 1 year ago

      @@drbentran The weights of ChatGPT are frozen while you are having conversations with it, so it doesn't learn anything new during your conversations. Your conversation data is stored with Microsoft/OpenAI, and when they want to update the model, they pull information from those conversations and separate the high-quality ones from the low-quality ones. The high-quality conversations are then put into the pipeline for updating the weights (the knowledge base) of the AI. So, to give your question a little more clarity: these companies generally need to sort out the high-quality conversations users are having with ChatGPT, pick those, and then update ChatGPT's world knowledge with them. This is the only way it may seem to us from the outside (because we don't work at these companies) that information is divulging and permeating into other people's conversations with ChatGPT. Just wanted to give you perspective on how such systems are updated when they are in front of millions of eyeballs.
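      The filtering step described above can be sketched roughly like this. The `quality_score` heuristic and the threshold are invented stand-ins for whatever human or automated review these companies actually use:

```python
# Illustrative sketch of the described update pipeline: stored conversations
# are scored, and only high-quality ones are kept for the next fine-tune.
# quality_score is a made-up heuristic standing in for real review.

def quality_score(conversation: list) -> float:
    # Stand-in heuristic: longer multi-turn exchanges score higher.
    return min(1.0, len(conversation) / 10)

def select_for_training(conversations, threshold=0.5):
    """Keep only conversations whose score clears the threshold."""
    return [c for c in conversations if quality_score(c) >= threshold]

logs = [["hi"], ["q1", "a1", "q2", "a2", "q3", "a3"]]
print(select_for_training(logs))  # only the longer exchange survives
```

      The key point the comment makes survives the simplification: nothing enters the model's weights at chat time; data only influences the model after an explicit, batched selection and retraining step.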

  • @hadiquafazal5653 · 1 year ago

    I really like your video, especially the points you highlighted about how information sources are NOT mentioned by ChatGPT.

  • @radokanalev4521 · 1 year ago +1

    Interesting points in the video :) I am currently writing my Master's thesis on the attitudes of patients and practitioners towards the concept of an AI health chatbot.

  • @yoonchoi4065 · 1 year ago +3

    Can't use ChatGPT for transmission of patient information. It's a violation of HIPAA compliance. Do you have any info on whether this will change?

    • @Medicalfuturist · 1 year ago +2

      You shouldn't share private information about patients anywhere, on- or offline. But you can de-identify a case and then ask for ChatGPT's help.
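      A minimal sketch of what scrubbing obvious identifiers from a case summary might look like before pasting it into any external tool. The patterns and labels are illustrative only; a toy regex pass like this is nowhere near sufficient for real HIPAA de-identification, which needs dedicated tooling and human review:

```python
import re

# Toy identifier scrubber. The patterns below are illustrative, NOT a
# complete list of HIPAA identifiers (names, addresses, etc. are missing).
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub("[" + label.upper() + "]", text)
    return text

note = "Seen 03/14/2023, MRN: 555123, callback 212-555-0199."
print(scrub(note))  # → Seen [DATE], [MRN], callback [PHONE].
```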

  • @heapsgood1161 · 1 year ago

    When ChatGPT gives you references, it often gives you an incorrect reference - either it doesn't exist, or the link goes to a totally different article than what it says, or the article doesn't actually talk about the issue that it is purported to reference...

  • @drmalik212 · 1 year ago

    How can AI/ChatGPT influence diagnostic radiologists? It seems they are going to see more impact compared to other healthcare professionals.

  • @joesiah841 · 1 year ago

    We really do live in an exciting time with the development of AI and the potential that it holds. With that being said, there are some things that are concerning when it comes to AI and its integration into medicine. One challenge is the fact that ChatGPT does not understand what it is saying; it generates text based on the association of words in the training data, meaning it may not present factual information even though it reads as flowing, coherent sentences. Because there is no thought behind the text being generated, there is a fundamental lack of human understanding and ethics, including the core ethical principles of medicine: beneficence, nonmaleficence, autonomy, and justice.
    Beneficence is the ethical obligation of the doctor to do what is best for their patients.
    Nonmaleficence is the ethical obligation to do no harm. Autonomy is the ethical principle that people have the right to self-determination, meaning they get to choose whether or not they want specific treatments. And justice is the ethical principle guiding equity of access and quality of care. I am not saying that AI should not be used in medicine; in fact, I feel it should definitely be incorporated, lest we as a society violate the ethical principle of beneficence. Some examples: image recognition software can be used to interpret radiographic images at a level the human eye can't see, and AI could be immensely helpful in medical documentation, possibly even offering insights the physician may have overlooked.
    The problem is if we only focus on the good it can do and trust what it is saying is accurate, people could be harmed when the AI misses a diagnosis or recommends the wrong treatment. Another concern is that humans are biased, and the information we feed into AI may have been affected by these biases and could possibly play into how it associates these biases with diagnostic criteria and treatment recommendations, which would violate the ethical concept of justice.
    We also have to take into consideration the training data that is being given to the AI. Is this sensitive information that could cause harm to the individual if it were to get into the wrong hands? If sensitive patient information is going to be used in the training data, then the individual patients should have the autonomy to choose if it is used or not. This doesn't just mean that you have them sign a release form; they must actually understand the implications of their choice to give informed consent.

  • @randhirsingh5896 · 1 year ago +1

    The fast-changing AI scenario has its pros and cons, but the flow of knowledge and technology can't be stopped. Many fiefdoms will come crashing down. Whatever measures the anti-AI forces take in the short term, the next improved version will crush them. The cat is now out of the bag. It would be wiser to learn and master newer technologies and educate the users.

  • @user-sd7vr3pz2y · 1 year ago

    Have you seen any cases of ChatGPT or other OpenAI models in healthcare, even experimental ones? I think it might be powerful in differential diagnosis if used in conjunction with a doctor.

    • @Medicalfuturist · 1 year ago

      Not yet.

    • @Joseph1NJ · 1 year ago

      Ha, it won't replace Dr. House anytime soon. But the hope is, someday it will, because why not? Why not have the medical knowledge and experience of the world at your fingertips? Even better, used to assist in medical research, e.g., pharma.

  • @BrandonsHealthJourney · 1 year ago

    I struggle to utilize ChatGPT because the references/citations are painfully wrong. I'm constantly having to fact-check it. I'm conflicted about whether it's easier to just read publications like normal. "For now".

  • @allykalorenify · 1 year ago +1

    There will come a time when a patient will just input their symptoms and AI will come up with the diagnosis and prescribe the treatment.

  • @JamshedMoidu · 1 year ago +1

    The GPT-4 live demo is happening today.

    • @Medicalfuturist · 1 year ago +2

      I've been analyzing it in long threads on my Twitter channel. twitter.com/Berci

  • @AparnaModou · 1 year ago

    The fact that AIs source their data from the internet already makes them flawed, as data sources can be inaccurate or unreliable. They will need to sort out the problem of fact-checking sources too, similar to how generative AIs like Bluewillow pull images from the web without any sort of acknowledgement or respect for copyright. For now, I'll be relying on my physician for anything related to my health.

    • @stringlarson1247 · 1 year ago

      "AI" in general terms does not require 'the internet' as its data source.

  • @ikpenyielijah3953 · 1 year ago

    How can I get the course on AI in medicine? Thank you.

    • @Medicalfuturist · 1 year ago

      Here: medicalfuturist.thinkific.com/courses/introduction-to-artificial-intelligence-in-medicine-and-healthcare

  • @kdesilva8120 · 1 year ago

    Quite frankly, I do not think or hope that AI tools like ChatGPT can ever replace the human brain, as AI lacks 3 key features of it 🙂
    They are analytical thinking, creative thinking, and credibility/accountability (some humans also lack one or more of them, the last one in particular 🙂).
    In healthcare, the lack of these 3 key features in AI poses a serious threat to the safety and well-being of patients if one were to rely on it completely 🙁

  • @fasteddiepool2717 · 1 year ago

    👍

  • @drmalik212 · 1 year ago

    How can AI tools be used by me as a budding radiologist? I am done with my MBBS and a 1-year internship and now have free time.

    • @jhamendrasinha241 · 1 year ago +2

      AI will finish the radiologist's job, assisted by technicians; yes, neurointervention and other interventional super-specialities will survive.

    • @ummmjustsayin · 1 year ago +1

      You need to ask ChatGPT.

  • @AhmedHassan-ry6qk · 1 year ago

    Could anyone kindly help me understand whether it's free to use or you need a subscription? When I tried to access it, it said I had used up the free session, so go premium. Thanks in advance.

    • @Medicalfuturist · 1 year ago

      It's free, but you have to register on OpenAI's website to be able to use it. If the system is temporarily down, only premium users can use it because they have priority.

  • @KVMD · 1 year ago +6

    I'd much rather my patients use ChatGPT than Google.

  • @zebonautsmith1541 · 1 year ago

    Let's put doctors and their high fees out of business. Love it.

    • @mikeheffernan · 1 year ago

      Let’s bash doctors today!

  • @billkemp9315 · 1 year ago

    The most valuable thing in medicine is information, and the current medical information system is a shit show. Having witnessed how doctors use, or fail to use, the available information has demonstrated to me that there must be a better way. Let me explain.
    Hospitals use a software system called "Epic", which was designed primarily to record the procedures, treatments, and medicines given to patients, as required for accurate billing to the patient or insurance companies. But when it comes to providing critical information to doctors in a timely and easy-to-use fashion, this system fails. Doctors, nurses, and technicians have many patients with a wide variety of medical issues; these medical experts work long hours, so keeping track of the most salient information on each patient is impossible. Information within Epic gets buried in the database, and the most critical information does not automatically float to the top for the medical experts to see; surfacing it would require a lot of time and effort, and these experts don't have the luxury of time.
    So, what is the solution?
    ChatGPT will dramatically alter medical information forever.

  • @haroldasraz · 1 year ago

    Google doctor is dead, long live Chat GP!