Stanford MedAI
Joined Mar 24, 2021
The MedAI Group Exchange Sessions are a platform where we can critically examine key topics in AI & medicine, generate fresh ideas and discussion around their intersection and most importantly, learn from each other.
We will be having weekly sessions where invited speakers will give a talk presenting their work followed by an interactive discussion and Q&A. Our sessions are held every week from 1pm-2pm PST on Thursdays.
Please join our mailing list, mailman.stanford.edu/mailman/listinfo/medai_announce, to get notifications about upcoming sessions. All information (schedule, presenter details, abstracts, links to recordings, etc.) will be available on our website, medai.stanford.edu.
For more details, check out our website: medai.stanford.edu (Twitter handle: @MedaiStanford)
Organized by members of the Rubin Lab (rubinlab.stanford.edu/):
Contact:
Nandita Bhaskhar (www.stanford.edu/~nanbhas)
Amara Tariq (Tariq.amara@mayo.edu)
MedAI #131: Analyzing and Exposing Vulnerabilities in Language Models | Yibo Wang
Title: Analyzing and Exposing Vulnerabilities in Language Models
Speaker: Yibo Wang
Abstract:
Large Language Models (LLMs) have demonstrated impressive capabilities across various applications, yet they remain vulnerable to biases and adversarial attacks, compromising their trustworthiness. This presentation introduces two papers exploring these critical issues: robustness and fairness in LLMs. The first paper introduces a new adversarial attack method with lower detectability and better transferability to LLMs. While recent attacks achieve high success rates, the adversarial examples often deviate from the original data distribution, making them detectable. This paper proposes a Distribution-Aware Adversarial Attack method that considers distribution shifts to enhance attack effectiveness. Experiments validate the method’s efficacy and transferability to LLMs across multiple datasets and models. The second paper explores gender affiliations in text generation, where LLMs often infer gender from inputs without explicit gender information, reinforcing stereotypes. The paper systematically investigates, quantifies, and mitigates gender affiliations in LLMs.
Speaker Bio:
Yibo Wang is a Ph.D. student in the Computer Science Department at the University of Illinois Chicago, under the supervision of Professor Philip S. Yu. Her primary research areas include natural language processing and large language models, with a focus on trustworthy large language models and code generation using large language models.
------
The MedAI Group Exchange Sessions are a platform where we can critically examine key topics in AI and medicine, generate fresh ideas and discussion around their intersection and most importantly, learn from each other.
We will be having weekly sessions where invited speakers will give a talk presenting their work followed by an interactive discussion and Q&A.
Our sessions are held every Monday from 1pm-2pm PST.
To get notifications about upcoming sessions, please join our mailing list: mailman.stanford.edu/mailman/listinfo/medai_announce
For more details about MedAI, check out our website: medai.stanford.edu. You can follow us on Twitter @MedaiStanford
Organized by members of the Rubin Lab (rubinlab.stanford.edu) and Machine Intelligence in Medicine and Imaging (MI-2) Lab:
- Nandita Bhaskhar (www.stanford.edu/~nanbhas)
- Amara Tariq (www.linkedin.com/in/amara-tariq-475815158/)
- Avisha Das (dasavisha.github.io/)
Views: 356
Videos
MedAI #130: Me-LLaMA: Medical Foundation LLMs for Text Analysis and Beyond | Qianqian Xie
5K views · 2 months ago
Title: Me-LLaMA: Medical Foundation Large Language Models for Comprehensive Text Analysis and Beyond Speaker: Qianqian Xie Abstract: Recent advancements in large language models (LLMs) like ChatGPT and LLaMA have shown promise in medical applications, though their performance in medical language understanding still requires enhancement. In this talk, I will present our work Me-LLaMA, a suite of...
MedAI #129: Large Scale Multi-Microscope Datasets and their Challenges | Waqas Sultani
217 views · 2 months ago
Title: Large Scale Multi-Microscope Datasets and their Challenges Speaker: Waqas Sultani Abstract: Each year, approximately 226 million malaria cases are reported across 87 countries, with 425,600 resulting in fatalities. In 2019, 67% of these deaths were children under five. Similarly, according to GLOBOCAN 2020, leukemia is a leading cause of cancer-related deaths among individuals under 39, ...
MedAI #127: Improving LLMs for Clinical Named Entity Recognition via Prompt Engineering | Yan Hu
653 views · 3 months ago
Title: Improving Large Language Models for Clinical Named Entity Recognition via Prompt Engineering Speaker: Yan Hu Abstract: Objective: This study quantifies the capabilities of GPT-3.5 and GPT-4 for clinical named entity recognition (NER) tasks and proposes task-specific prompts to improve their performance. Materials and Methods: We evaluated these models on two clinical NER tasks: (1) to ex...
MedAI #126: Divide & Conquer - Concept-based Models for Efficient Transfer Learning | Shantanu Ghosh
513 views · 3 months ago
Title: Divide and Conquer: Carving Out Concept-based Models out of BlackBox for More Efficient Transfer Learning Speaker: Shantanu Ghosh Abstract: Building generalizable AI models is one of the primary challenges in the healthcare domain. While radiologists rely on generalizable descriptive rules of abnormality, Neural Networks (NN), often treated as blackboxes, suffer even with a slight shift ...
MedAI #125: Role of Instruction-Tuning and Prompt Engineering in Clinical Domain | Mihir Parmar
477 views · 4 months ago
Title: Role of Instruction-Tuning and Prompt Engineering in Clinical Domain Speaker: Mihir Parmar Abstract: In this talk, I will discuss the pivotal role of instruction-tuning and prompt engineering in advancing Clinical NLP. I will cover how our In-BoXBART leverages instruction-tuning to improve performance across multiple biomedical tasks, and how a collaborative LLM framework enhances the ef...
MedAI #124: SleepFM: Multi-modal Representation Learning for Sleep | Rahul Thapa
473 views · 4 months ago
Title: SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals Speaker: Rahul Thapa Abstract: Sleep is a complex physiological process evaluated through various modalities recording electrical brain, cardiac, and respiratory activities. We curate a large polysomnography dataset from over 14,000 participants comprising over 100,000 hours of multi...
MedAI #123: Towards Robust Radiogenomic Models for Brain Tumor Characterization | Hassan Mohy-ud-Din
309 views · 4 months ago
Title: Towards Robust Radiomics and Radiogenomics Predictive Models for Brain Tumor Characterization Speaker: Hassan Mohy-ud-Din Abstract: In the context of brain tumor characterization, we focused on two key questions which, to the best of our knowledge, have not been explored so far: (a) stability of radiomics features to variability in multiregional segmentation masks obtained with fully-aut...
MedAI #122: Integrating Pathology Images and Genomics Data for Cancer Grading | Xiaohan Xing
666 views · 5 months ago
Title: Integrating Pathology Images and Genomics Data for Cancer Grading Speaker: Xiaohan Xing Abstract: In recent years, Artificial Intelligence (AI) technology has been widely applied to the analysis of multi-modal biomedical data, revolutionizing healthcare. Integrating morphological information from pathology slides with molecular information from genomics data enhances cancer grading accur...
MedAI #121: HECTOR - Multimodal DL model for recurrence risk in endometrial cancer | Sarah Volinsky
480 views · 5 months ago
Title: HECTOR, a multimodal deep learning model predicting recurrence risk in endometrial cancer Speaker: Sarah Volinsky Abstract: Predicting distant recurrence of endometrial cancer (EC) is crucial for personalized adjuvant treatment. The current gold standard of combined pathological and molecular profiling is costly, hampering implementation. Here we developed HECTOR (histopathology-based en...
MedAI #120: Holistic OR Domain Modeling with Large Vision Language Models | Ege Özsoy
366 views · 5 months ago
Title: Holistic OR Domain Modeling with Large Vision Language Models Speaker: Ege Özsoy Abstract: The operating room (OR) is an intricate environment involving diverse medical staff, patients, devices, and their interactions. Traditionally, only skilled medical professionals can navigate and comprehend these complex dynamics. This talk introduces an innovative approach towards automated, compre...
MedAI #119: AI-Driven Advancements in Mammogram Analysis | Aisha Urooj
327 views · 5 months ago
Title: AI-Driven Advancements in Mammogram Analysis Speaker: Aisha Urooj Abstract: Adapting AI models for healthcare applications presents significant challenges, such as domain misalignment, limited access to extensive datasets, and highly imbalanced classes. Hence, there is a pressing need to develop a corresponding proficiency in adapting the advancements in AI to the medical domain. Such ad...
MedAI #118: Framework for Exposing Vulnerabilities of Clinical LLMs: Breast Cancer | Avisha Das
368 views · 6 months ago
Title: Framework for Exposing Vulnerabilities of Clinical Large Language Model: A Case Study in Breast Cancer Speaker: Avisha Das Abstract: Large language models (LLMs) with billions of parameters and trained on massive amounts of crowdsourced public data have made a dramatic impact on natural language processing (NLP) tasks. Domain specific `finetuning' of LLMs has further improved model behav...
MedAI #117: Dynamic Graph Enhanced CL for Chest X-ray Report Generation | Mingjie Li
593 views · 6 months ago
Title: Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation Speaker: Mingjie Li Abstract: In the realm of medical imaging, automatic radiology reporting has emerged as a crucial tool to alleviate the heavy workloads faced by radiologists and enhance the interpretation of diagnoses. Traditional approaches have augmented data-driven neural networks with static medical kno...
MedAI #116: Deep Symmetry-sensitive networks for detecting brain diseases | Arko Barman
332 views · 7 months ago
Title: Deep Symmetry-sensitive networks for detecting brain diseases Speaker: Arko Barman Abstract: Spatial symmetry is commonly used by clinicians in the diagnosis and prognosis of diseases involving multiple organs such as the brain, prostate, breasts, and lungs. Anomalies in symmetry can be indicative of patient-specific disease-related features that are less sensitive to inter-patient varia...
MedAI #115: Behave like a Doctor: Clinical Process-Aware Medical Dialogue System | Kaishuai Xu
694 views · 7 months ago
MedAI #114: Ambiguous medical image segmentation using diffusion models | Aimon Rahman
1.6K views · 8 months ago
MedAI #113: Generative AI for easily synthesizable and structurally novel antibiotics | Kyle Swanson
727 views · 8 months ago
MedAI #112: Radiologist-centered AI with Eye Tracking Techniques | Bin Wang
607 views · 8 months ago
MedAI #111: Self-supervised learning for chest x-ray analysis | Syed Muhammad Anwar
950 views · 8 months ago
MedAI #110: AI-driven fast and accurate cell phenotyping in multiplex images | Muhammad Shaban
397 views · 9 months ago
MedAI #109: Towards Fundamental Biomedical AI | Che Liu and Zhongwei Wan
951 views · 9 months ago
MedAI #108: Data Efficient Learning in medical image segmentation | Yi Lin
892 views · 10 months ago
MedAI #107: Large Language Models as Universal Medical Forecasters | Zeljko Kraljevic
999 views · 10 months ago
MedAI #106: Multimodal Clinical Benchmark for Emergency Care (MC-BEC) | Emma Chen
447 views · 10 months ago
MedAI #105: EHRXQA: A Multi-Modal Question Answering Dataset for EHRs | Seongsu Bae
513 views · 10 months ago
MedAI #104: Reveal to Revise - How to Uncover and Correct Biases of Deep Models | Maximilian Dreyer
231 views · 10 months ago
MedAI #103: Multimodal Brain Age Estimation Using Population-Graph Learning | Margarita Bintsi
341 views · 10 months ago
MedAI #102: Visual-language foundation model for pathology research and education | Zhi Huang
756 views · 10 months ago
MedAI #101: Dual meta-learning framework for longitudinal brain tissue segmentation | Chunfeng Lian
212 views · 10 months ago
For creating the binary mask, which tool or method was used?
Hi, do you have the dataset published somewhere we could access it?
The fluency is still quite poor; it feels like it's being read out one word at a time.
Good for you, but we can't test it; it is not open.
The model is not openly available. My institute does not subscribe to CITI training, and thus I am not able to access the model on PhysioNet. Why does this model need to be behind such a lockdown?
You didn't read the guidelines carefully enough. You can simply put MIT in the CITI training. Go to the MIMIC-IV dataset page on PhysioNet and thoroughly check their guidelines on how to complete the CITI training. I did it last month; it takes about 10 minutes to complete, and you can click through everything quickly.
Yeah, nothing much in the GitHub repo either. More wasted time.
Is it available & accessible for everyone?
Why not make the PowerPoint slides, programs, and dataset from the video publicly available so we can study them?
This is why you need strong moderators. Let her finish her presentation, then ask your questions. She handled it so well and professionally, but the interruptions are beyond unprofessional.
🎉🎉🎉
Questions should be at the end. The questioners had no idea what they were talking about and derailed the presentation. Great work, Qianqian Xie; you did a fantastic job.
Good video, but the lady who kept interrupting was too loud and spoke very fast. I couldn't understand what she was trying to achieve while constantly interrupting. Please change the format to allow questions to be asked at set intervals or at the end of the presentation.
@paulcreaser9130 We are indeed lucky to have access to this type of presentation. The presentations are of exceptionally high quality. We can also offer the panel ways to improve their process if they are willing to be reflective. A discussion at the end of the presentation is standard for a reason. The current format is not a discussion: it is a presentation with interjections.
Additionally, the suggestion by @AccC-c6d to have set intervals for questions could also work well.
Thank you for this resource presentation. I would like to ask if you observed any performance gaps depending on the datasets used, particularly with the MTsample dataset, which is synthetic.
Prompt engineering differs from prompt tuning
Excellent presentation, very easy to understand.
Thank you for the session; please provide the code.
Thanks Stanford and Mihir! Amazing!
Please, can you share the paper, GitHub, or something? This is very interesting.
Can we get a certificate on completing the course or no?
Very ingenious to use the image to make more data about the sample
Three months of CUDA development! Wow, that's rough. I hope we get it working well in compilers.
This is really good
Good job, Stanford MedAI and Dr. Zifeng Wang. Keep contributing!
impressive
I hope there will be Chinese subtitles; my English listening isn't good.
Isn't there auto-translation?
Great video! IIRC Paul co-created the current CMU Multimodal course, which is available on YouTube. This has about 80% overlap with lecture 2.
Tri Dao yyds!
Interesting idea
What an excellent guy!
At 5:18, at the Bayesian inference point, it should be \nabla_x \log p(y|x) on the right-hand side of the equation. In other words, \nabla_x \log p(x|y) = \nabla_x \log p(x) + \nabla_x \log p(y|x).
This is correct! Thanks for pointing it out.
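For readers following along, here is a short derivation of that identity, using the same gradient (score) notation as the comment above. By Bayes' rule, p(x|y) = p(y|x) p(x) / p(y), and p(y) does not depend on x, so taking \nabla_x of the logarithm gives

\nabla_x \log p(x|y) = \nabla_x \log p(y|x) + \nabla_x \log p(x) - \nabla_x \log p(y)
                     = \nabla_x \log p(x) + \nabla_x \log p(y|x),

since \nabla_x \log p(y) = 0. This is the posterior score decomposition commonly used for guidance in diffusion models.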
so good
Very interesting!
Wow, impressed by Max...I hope I can call him that!
Insightful contribution. Keep it up Dr. Anwar.
impressive presentations. thank you
This topic is really interesting! I can understand it easily even though I am not good with datasets!
great presentation!
Thanks.
Excellent presentation and impressive research. I only wonder why SSMs are efficient in their recurrent form (video timestamp: 32:27). Suppose k is the token length of the input history. A general sequence model (e.g., a Transformer) has roughly k-squared time complexity. On the other hand, SSMs still need to encode the entire stateful history "recurrently". The S4 paper also aims to deal with this issue (multiplying by A k-1 times to build the K-bar kernel also ends up nearly k-squared) by diagonalizing the matrix. So it seems the SSM recurrence isn't "naturally" efficient but requires some linear algebra techniques. Any suggestions will be appreciated!
He is one of the creators of this new Mamba architecture.
And S4, and the SSM paper before that, lol.
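To make the complexity point in the question above concrete, here is a minimal toy sketch in Python (not the S4 or Mamba implementation; all sizes and matrices are made up) of the recurrent view of a discretized SSM. With a fixed diagonal state matrix, each step costs O(d), so a length-k sequence costs O(k*d), linear in k; the near-quadratic cost mentioned in the question arises only when materializing the convolution kernel (C B-bar, C A-bar B-bar, ..., C A-bar^(k-1) B-bar) for parallel training, which is the part S4 makes efficient through its structured parameterization.

import numpy as np

# Toy recurrent view of a discretized state-space model (SSM):
#     h_t = A_bar * h_{t-1} + B_bar * x_t,    y_t = C . h_t
# Sizes and matrices below are illustrative assumptions, not values from the talk.
d, k = 16, 1000                                    # state size and sequence length (assumed)
A_bar = np.exp(-np.linspace(0.1, 1.0, d))          # diagonal of a stable state matrix, stored as a vector
B_bar = np.random.randn(d)
C = np.random.randn(d)

x = np.random.randn(k)                             # toy scalar input sequence
h = np.zeros(d)
y = np.empty(k)
for t in range(k):                                 # one O(d) update per token; no k x k attention matrix
    h = A_bar * h + B_bar * x[t]                   # elementwise update, thanks to the diagonal A_bar
    y[t] = C @ h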
quality, well structured presentation. Thanks Jason.
Awesome. Which GAN is used to convert a CT sinogram to an image? Thank you.
Super interesting! Thanks for the presentation. I work in game development for now, but cool to see how things are going in the ML world 😊
excellent presentation. Thank you
Excellent presentation
I couldn't have imagined how interesting and informative this video would be.
Where did you get the dataset, sir?
Thanks for such an informative session.
If I am not mistaken, this covers Foundation Models for images and language. Have there been any attempts to build foundation models for time-series health data? Thanks in advance.
It is great work! Thank you!