MIT 6.S191: AI Bias and Fairness

  • Published Dec 19, 2024

Comments •

  • @anshusingh3403
    @anshusingh3403 3 years ago +12

    At a time when AI systems are grappling with biases that can impact real lives, this topic is so important. It was very well delivered. Thanks :)

  • @nintishia
    @nintishia 3 years ago +2

    This is not just a balanced, state-of-the-art overview of the area; the depth the speaker has gained from researching it clearly shows. Thanks particularly for the algorithmic solutions part. I am curious whether the learnt latent structure part has been developed further, and also whether training the variational layer in the autoencoder conflicts with the resampling approach in some way.
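
    (A note for readers with the same question about a conflict: as I understand the DB-VAE formulation the lecture draws on (Amini et al., AIES 2019), the variational layer and the resampling step act on different levers and do not conflict. The classifier and VAE terms are optimized jointly on each minibatch, while resampling only changes which examples populate the minibatch. Up to weighting constants, the per-example loss is roughly:

    $$
    \mathcal{L}_{\text{total}} = \mathcal{L}_{y}(y, \hat{y}) \;+\; c\left(\lVert x - \hat{x}\rVert \;+\; \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\Vert\, \mathcal{N}(0, I)\big)\right)
    $$

    Resampling reshapes the empirical data distribution fed to this loss; the variational terms shape the latent space. One does not undo the other.)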

  • @bitsbard
    @bitsbard 1 year ago

    For those keen on this subject, you won't regret diving into "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell. It was a delight to read.

  • @AshokTak
    @AshokTak 3 years ago +6

    I love how the AI community is learning about this problem and about solutions for debiasing models, especially popular models in computer vision and NLP!

  • @lukeSkywalkwer
    @lukeSkywalkwer 3 years ago +3

    Thanks so much for putting this online!
    I was wondering how the underlying distribution (the frequency of values z can take) can be estimated from the latent variables z (around 35:51). I mean, it's not as trivial as the distribution of z being identical to the distribution z takes on the training data, right?
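
    (For readers wondering the same thing: if I understand it correctly, in the DB-VAE approach this lecture draws on (Amini et al., AIES 2019) and its accompanying lab, the distribution over z is estimated empirically, by encoding the training set and fitting a histogram to each latent dimension independently; resampling weights are then set inversely proportional to the estimated density. A minimal NumPy sketch of that idea, with illustrative helper names that are not taken from the course code:

    ```python
    import numpy as np

    def estimate_latent_density(z, n_bins=10, smoothing=1e-3):
        # Histogram-based density estimate over each latent dimension,
        # treating the dimensions as independent.
        n, d = z.shape
        density = np.ones(n)
        for j in range(d):
            hist, edges = np.histogram(z[:, j], bins=n_bins, density=True)
            # Assign each sample to its histogram bin (indices 0..n_bins-1).
            bin_idx = np.clip(np.digitize(z[:, j], edges[1:-1]), 0, n_bins - 1)
            density *= hist[bin_idx] + smoothing
        return density

    def resampling_probs(z, alpha=0.01):
        # Samples in low-density latent regions get up-weighted, so rare
        # attribute combinations appear more often during training.
        w = 1.0 / (estimate_latent_density(z) + alpha)
        return w / w.sum()

    # Toy usage: in practice z comes from the VAE encoder over the training set.
    z = np.random.randn(1000, 8)
    batch_idx = np.random.choice(len(z), size=32, p=resampling_probs(z))
    ```

    So yes: the estimate is built from the z values of the training data, but through a smoothed histogram rather than by reusing the raw empirical sample, and it is refreshed as the encoder changes during training.)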

  • @harshkumaragarwal8326
    @harshkumaragarwal8326 3 years ago +1

    I loved the cancer detection example. Thanks for the lecture :))

  • @kruan2661
    @kruan2661 3 years ago +2

    Great video! 8:06 I don't think the COCO graph is accurate; there is a lot of AI training and application happening in China, with their own databases. Most of the time, Chinese researchers just do this kind of research secretly.

  • @busello
    @busello 2 years ago

    Great contribution. Clear. Useful. Thank you!

  • @chanochbaranes6002
    @chanochbaranes6002 3 years ago +3

    Another amazing video. If I wish to continue with deep learning, what should I learn, and where?

  • @macknightxu2199
    @macknightxu2199 3 years ago +1

    Are there any courses on privacy preservation when using deep learning?

  • @AbhishekSinghSambyal
    @AbhishekSinghSambyal 5 months ago

    Awesome lecture. How do you create such presentations? Which app?

  • @lotfullahandishmand753
    @lotfullahandishmand753 3 years ago +2

    Thanks for your contribution and for doing great work to give people the latest information and knowledge about deep learning.
    Could we have some format with more practical and challenging problems that the AI community can work through, apart from these labs? It was just a proposal.
    Thanks again. KEEP GOING, Ava and Amini!

  • @luckychitundu1070
    @luckychitundu1070 2 years ago +2

    Great Video

  • @macknightxu2199
    @macknightxu2199 3 years ago

    Awesome courses.
    And where can I find something like these lab projects, to try out AI and deep learning in a way that matches this series of MIT deep learning courses?

  • @BitBard302
    @BitBard302 1 year ago

    This book is turning heads: "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell.

  • @TheWayofFairness
    @TheWayofFairness 3 years ago +1

    All of our problems begin with unfairness

  • @terraflops
    @terraflops 3 years ago +1

    @Alexander Amini
    1. The watermelon example was excellent.
    2. As a transgender person, I find CNNs adversarial to my gender, since the models are based *only* on *cisgender* people (there is a need for more disaggregated evaluation).
    3. I don't like CNNs, and don't practice making them, as all the examples and datasets are boring to me and simply binary. Talking about gender bias is also biased, because transgender humans exist and gender-neutral terms exist, but you would never know it from any tech/coding lecture. I am sure MIT has transgender people in its school.

  • @christianngnie3188
    @christianngnie3188 3 years ago +2

    Awesome

  • @mehdidolati
    @mehdidolati 3 years ago +5

    Who disliked the video before it even began, and why?!

    • @Amilakasun1
      @Amilakasun1 3 years ago +2

      These ethics are far-left liberal nonsense filled with hypocrisy. They are totally fine with AI vehicles killing men and boys to save women, but throw a fit if an AI hires men over women in an already male-dominated field.

    • @jonaskoelker
      @jonaskoelker 3 years ago

      I noticed something curious: from 25:02 to about 25:30, you see a real-world distribution of hair color next to a "gold standard" sample distribution. The lecturer mentions that black hair is underrepresented in the sample. She does not mention that red hair is underrepresented, even though that is also (and evidently) true, if the diagram is anything to go by.
      I'm not sure what to make of this, but it stood out to me like a sore thumb.

    • @chenzhuo8804
      @chenzhuo8804 3 years ago +1

      @@jonaskoelker My understanding is that the lecturer's message is to communicate that the dataset has bias, rather than to enumerate all of its problems. But yes, under-represented red hair is also a problem.

  • @bhavyakaneriya8916
    @bhavyakaneriya8916 3 years ago +1

    👍👍👍