Not What but Why: Machine Learning for Understanding Genomics | Barbara Engelhardt | TEDxBoston

  • Published Oct 23, 2017
  • Machine learning and artificial intelligence are changing the nature of biological research, especially genomics. Artificial intelligence applications are opening up our understanding of ourselves and of disease, and we must strive to create tools that can work as partners in research, not simply as black boxes. Barbara Engelhardt has been an assistant professor in the Computer Science Department at Princeton University since 2014. She graduated from Stanford University and received her Ph.D. from the University of California, Berkeley, advised by Professor Michael Jordan. She did postdoctoral research at the University of Chicago, working with Professor Matthew Stephens, and spent three years at Duke University as an assistant professor. Interspersed among her academic experiences, she spent two years working at the Jet Propulsion Laboratory, a summer at Google Research, and a year at 23andMe, a DNA ancestry service. Professor Engelhardt received an NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, the Walter M. Fitch Prize from the Society for Molecular Biology and Evolution, an NIH NHGRI K99/R00 Pathway to Independence Award, and the Sloan Faculty Fellowship. Professor Engelhardt is currently a PI on the Genotype-Tissue Expression (GTEx) Consortium. Her research interests involve statistical models and methods for the analysis of high-dimensional data, with the goal of understanding the underlying biological mechanisms of complex phenotypes and human diseases. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx

Comments • 9

  • @JoeVirella 3 years ago +3

    Underrated talk! Thank you so much.

  • @asifrizwan 2 years ago

    Excellent talk! Thank you very much.

  • @fernandoaleman607 6 years ago +8

    Patterns, patterns, patterns!!

  • @ricardoislasruiz3186 3 years ago

    Awesome!

  • @hanskraut2018 1 year ago

    Very true — those are underused tools, and the systems for using them are way too un-user-friendly. Make a tool that is extremely easy to use, meaning almost no buttons (with all the buttons and functionality hidden in the settings), very Google-like and very ChatGPT-like in the sense of minimalist design.

  • @TheLeoPoint 1 year ago

    :) As a data scientist, I was unaware — it's about "why," not "how." A really powerful idea :) Agreed: genuinely, if we want to learn in depth, and want to be more specific or expert in research and analysis, then studying with a "why"-questioning approach can make a unique and huge impact on learning. Nice presentation 👌 :) But in real life, in some matters, I always prefer to avoid it.

  • @hanskraut2018 1 year ago

    That 10,000 is enough to distinguish. You just have more data, and less-filtered data, than if you had 1,000,000,000 people — but the number of data points will be the same in the end. Example: with only 10,000 people you might have 200,000 data points per person, while with 2,000,000,000 people you might add only 2 kinds of data points; both amount to the same quantity of material for the neural network. The large dataset can just be less precise and probably more generalizable, but it should be possible for sure. I am not convinced that as many data points as are used today are actually needed! If I get rich, I'll advance that field / prove it (if it isn't already solved, etc.).

  • @hayabusa4061 5 years ago +1

    Is "the mean" white?