At a time when AI systems are grappling with biases that can impact real lives, this topic is so important. It was very well delivered. Thanks :)
This is not just a state-of-the-art, balanced overview of the area; the depth the speaker has gained from researching the area herself clearly shows. Thanks particularly for the algorithmic solutions part. I am curious whether the learned latent structure part has been developed further, and also whether training the variational layer in the autoencoder conflicts with the resampling approach in some way.
For those keen on this subject, you won't regret diving into "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell. It was a delight to read.
I love how the AI community is learning about this problem and about solutions for debiasing models, especially the popular models in computer vision and NLP!
Thanks so much for putting this online!
I was wondering how the underlying distribution (the frequency of values z can take) can be estimated from the latent variables z (around 35:51). I mean, it's not as trivial as the latent distribution being identical to the distribution z takes over the training data, right?
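If I recall the companion lab correctly, the latent distribution isn't assumed to equal anything; it's estimated empirically, with a simple histogram per latent dimension over the encoder outputs for the whole training set, and samples are then drawn with probability inversely proportional to their estimated density. Here is a minimal sketch of that idea in plain numpy; the `encoder` call, bin count, and smoothing constant `alpha` are my assumptions, not the exact lab values:

```python
import numpy as np

def estimate_sampling_probs(z, n_bins=10, alpha=0.001):
    """Histogram-estimate the training-set frequency of each latent code z,
    then weight each sample inversely to its estimated density so that
    rare latent values are resampled more often."""
    n_samples, latent_dim = z.shape
    probs = np.zeros(n_samples)
    for d in range(latent_dim):
        # Empirical density of this latent dimension over the training set.
        hist, bin_edges = np.histogram(z[:, d], bins=n_bins, density=True)
        # Look up each sample's bin and its estimated density.
        bin_idx = np.clip(np.digitize(z[:, d], bin_edges) - 1, 0, n_bins - 1)
        density = hist[bin_idx]
        # Inverse-density weight, smoothed by alpha so near-empty bins don't blow up.
        p = 1.0 / (density + alpha)
        # A sample that is rare in ANY latent dimension gets upweighted.
        probs = np.maximum(probs, p)
    return probs / probs.sum()

# Usage (hypothetical encoder call): resample training indices each epoch.
# z = encoder.predict(x_train)   # latent means from the VAE encoder
# idx = np.random.choice(len(z), size=batch_size,
#                        p=estimate_sampling_probs(z))
```

As I understand it, these probabilities would be recomputed from the current encoder every epoch, so the resampling tracks the latent structure as it is learned rather than fighting the variational training.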
I loved the cancer detection example. Thanks for the lecture :))
Great video! 8:06 I don't think the COCO graph is accurate; there is a lot of training and application of AI in China, with their own datasets. Most of the time, Chinese researchers just do these kinds of research secretly.
Great contribution. Clear. Useful. Thank you!
Another amazing video! If I wish to continue with deep learning, what should I learn next, and where?
Any courses on privacy preservation when using deep learning?
Awesome lecture. How do you create such presentations? Which app?
Thanks for your contribution and for doing great work letting people know and have the latest information and knowledge about deep learning.
Could we have some format with more practical and challenging problems that the AI community can work through, apart from these labs? It was just a proposal.
Thanks again, KEEP GOING Ava and Amini
Great Video
Awesome courses.
And where can I find something like these lab projects, to try out AI and deep learning in a way that matches this series of MIT Deep Learning courses?
This book is turning heads: "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell
All of our problems begin with unfairness
@Alexander Amini
1. The watermelon example was excellent.
2. As a transgender person, CNNs are adversarial to my gender, as the models are based *only* on *cisgender* people (hence the need for more disaggregated evaluation).
3. I don't like CNNs, and I don't practice making them, as all the examples and datasets are boring to me and simply binary. Talking about gender bias is also biased, because transgender humans exist and gender-neutral terms exist, but you would never know it from any tech/coding lecture. I am sure MIT has transgender people in its school.
Awesome
Who disliked the video before it even began, and why?!
These ethics are far-left liberal nonsense filled with hypocrisy. They are totally fine with AI vehicles killing men and boys to save women, but throw a fit if an AI hires men over women in an already male-dominated field.
I noticed something curious: from 25:02 to about 25:30, you see a real-world distribution of hair color next to a "gold standard" sample distribution. The lecturer mentions that black hair is underrepresented in the sample. She does not mention that red hair is underrepresented, even though that is also (and evidently) true, if the diagram is anything to go by.
I'm not sure what to make of this, but it stood out to me like a sore thumb.
@jonaskoelker my understanding is that the lecturer's message is to communicate that the dataset has bias, rather than to enumerate every problem. But yes, under-represented red hair is also a problem
👍👍👍