Foundation Models for Medicine

  • Published Jan 24, 2024
  • Dr. Laila Bekhet provides an introduction to clinical foundation models for healthcare applications, focusing on Med-BERT. Topics include training the model and testing it on heart failure and pancreatic cancer risk prediction, the introduction of Med-BERT v2, ongoing project expansion, current collaborations, and opportunities to collaborate. Dr. Bekhet emphasizes the model's impact, including paper citations and GitHub engagement. The presentation concludes by welcoming collaboration and sharing access to the Med-BERT model.
    1. Introduction to Foundation Models
    a. Description of foundation models as AI models trained on large datasets using self-supervised learning for diverse applications.
    Time Stamp: 0:10-1:41
    2. Clinical Foundation Models and Med-BERT
    a. Overview of clinical foundation models focused on healthcare and biomedicine.
    b. Introduction to Med-BERT, trained on structured clinical data using a bidirectional encoder (see the first sketch after this outline).
    Time Stamp: 1:42-2:37
    3. Testing on Heart Failure and Pancreatic Cancer Risk
    a. Training Med-BERT on structured clinical data from more than 20 million patients.
    b. Testing on heart failure and pancreatic cancer risk prediction tasks.
    c. Performance comparison with and without Med-BERT as the foundation model (see the fine-tuning sketch after this outline).
    Time Stamp: 4:29-7:19
    4. Med-BERT v2 and Data Comparison
    a. Introduction of Med-BERT v2, trained on claims data with additional medication and procedure information.
    b. Performance comparison with Med-BERT v1 and evaluation on heart failure and pancreatic cancer risk prediction.
    c. Additional downstream tasks include mental retardation and depression predictions, as well as medication recommendation.
    Time Stamp: 7:20-10:21
    5. Med-BERT Summary
    a. In summary, the Med-BERT model is publicly available, with over 350 paper citations and more than 200 GitHub stars.
    b. Numerous papers have utilized the codebase and applied enhancements, fostering communication with users; the model is now shared, with access instructions provided.
    Time Stamp: 10:22-11:25
    6. Project Expansion and Improvement
    a. Recently funded for project expansion, utilizing large language models and newer algorithms.
    b. Enriching the data with knowledge bases derived from ontologies, testing prompts, and exploring patient similarity (see the similarity sketch after this outline). Collaborator ideas are welcome, with a GitHub repository for model development.
    Time Stamp: 11:26-13:43
    7. Additional Resources Shared (e.g., Pytorch_EHR and Dr. Masayuki Nigo’s work, as one success story)
    a. Acknowledged Dr. Masayuki Nigo’s important work.
    b. Presented at IDWeek 2022; article under revision in Nature Communications.
    Time Stamp: 13:44-14:37
    8. Acknowledgments and Gratitude
    a. Acknowledged the group, collaborators, co-investigators, and the McWilliams School Data Service.
    b. Expressed gratitude for data sources and thanked team members.
    c. Open for questions.
    Time Stamp: 14:38-15:04
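
The following is a minimal, hypothetical sketch of the idea behind item 2: feeding a patient's structured clinical codes into a bidirectional Transformer encoder. The toy vocabulary, dimensions, and use of the Hugging Face BertModel are illustrative assumptions, not the actual Med-BERT preprocessing or architecture.

```python
# Hypothetical sketch only: a toy code vocabulary fed through a small
# bidirectional encoder (BERT-style). Not the real Med-BERT pipeline.
import torch
from transformers import BertConfig, BertModel

code_vocab = {"[PAD]": 0, "[CLS]": 1, "E11.9": 2, "I10": 3, "I50.9": 4, "N18.3": 5}

config = BertConfig(
    vocab_size=len(code_vocab),
    hidden_size=64,            # small, illustrative dimensions
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)
encoder = BertModel(config)    # randomly initialized here; Med-BERT is pretrained

# One patient's visit history flattened into a single code sequence.
patient_codes = ["[CLS]", "E11.9", "I10", "N18.3"]
input_ids = torch.tensor([[code_vocab[c] for c in patient_codes]])
attention_mask = torch.ones_like(input_ids)

outputs = encoder(input_ids=input_ids, attention_mask=attention_mask)
# Contextualized embedding per code; the first ([CLS]) vector can summarize the patient.
print(outputs.last_hidden_state.shape)  # torch.Size([1, 4, 64])
```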
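Below is an illustrative sketch of the with/without comparison in item 3: the same risk-prediction head is fine-tuned either on a randomly initialized encoder or on one initialized from pretrained weights. The class name, checkpoint path, and training details are placeholders, not the talk's actual code.

```python
# Illustrative sketch: fine-tuning the same classifier head with and without
# pretrained encoder weights. Names and the checkpoint path are placeholders.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class RiskClassifier(nn.Module):
    """Binary risk prediction (e.g., heart failure) from a patient's code sequence."""
    def __init__(self, encoder: BertModel):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        summary = hidden.last_hidden_state[:, 0]    # first-token summary vector
        return self.head(summary)                   # one logit for the outcome

config = BertConfig(vocab_size=6, hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128)

# Baseline: no foundation model, encoder starts from random weights.
baseline = RiskClassifier(BertModel(config))

# Foundation-model variant: load pretrained weights before fine-tuning.
pretrained = BertModel(config)
# pretrained.load_state_dict(torch.load("pretrained_encoder.pt"))  # placeholder checkpoint
with_foundation = RiskClassifier(pretrained)

# Both variants would then be trained with the same objective and data,
# and compared on a held-out metric such as AUC.
loss_fn = nn.BCEWithLogitsLoss()
```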
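Finally, a short sketch of the patient-similarity direction mentioned in item 6: nearest-neighbor search over patient embedding vectors. The embedding matrix here is random data for illustration; in practice the vectors would come from the encoder.

```python
# Illustrative sketch: cosine-similarity search over patient embeddings.
# The embeddings are synthetic here, standing in for encoder outputs.
import torch
import torch.nn.functional as F

patient_embeddings = torch.randn(1000, 64)   # e.g., one summary vector per patient
query = patient_embeddings[0].unsqueeze(0)   # the patient we want neighbors for

similarities = F.cosine_similarity(query, patient_embeddings, dim=1)
nearest = torch.topk(similarities, k=6)      # the query itself plus its 5 nearest neighbors
print(nearest.indices.tolist())
```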
