Are We Automating Racism?

  • Published Mar 30, 2021
  • Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
    0:00 Intro
    1:24 Is AI Racist?
    4:15 The Myth Of The Impartial Machine
    11:09 Saliency Testing
    13:52 How Machines Become Biased
    18:33 Auditing The Algorithms
    20:24 Wrap Up

Comments • 474

  • @Vox
    @Vox  3 years ago +1298

    [UPDATE May 20, 2021] CNN reports: "Twitter has largely abandoned an image-cropping algorithm after determining the automated system was biased." www.cnn.com/2021/05/19/tech/twitter-image-cropping-algorithm-bias/index.html

    • @Neyobe
      @Neyobe 2 years ago

      That’s amazing!

  • @Fahme001
    @Fahme001 3 years ago +10611

    Let's not forget how light works in a camera. I am a dark-skinned person and I can confirm that light skin physically reflects more photons, which gives the camera a higher probability of capturing that picture well than a darker counterpart. The same goes for computational photography and the basic algorithms that are based on the photos we upload. It only makes sense that they would be biased towards white skin. Why does everything have to be taken as an offensive scenario? We are going too far with this political correctness bullshit. Again, I am a person of dark skin and even I think this is bullshit. Now, if you frame it as an issue in identifying a person's face for security reasons or such, then yes, I am all for making it better at recognizing all faces. But please, please make this political correctness bullshit stop.

    • @jezuconz7299
      @jezuconz7299 2 years ago +15

      This is all indeed getting to a point where everything has to be taken to the correctness debate instead of factual and objective responses or solutions

    • @daniae7568
      @daniae7568 2 years ago +5

      this is how barcodes work

    • @faithm9284
      @faithm9284 2 years ago +4

      AMEN! There is no such thing as racism, there is only one race, the human race! Let's stop speaking the negatives! Words are very powerful. When you speak it, you give even fantasy power to 'become'!

    • @airdrummond241
      @airdrummond241 2 years ago +19

      Light is racist.

    • @theoaglaganian1448
      @theoaglaganian1448 2 years ago +2

      Amen
      This video is the definition of greed

  • @AtiqurRahman-uk6vj
    @AtiqurRahman-uk6vj 3 years ago +15237

    Machines aren't racist. The outcome feels racist because of bias in the training data. The model needs to be retrained (see the sketch after this thread).

    • @Ana-im4dz
      @Ana-im4dz 2 years ago +7

      Lol 15K likes, no comments

    • @onee1594
      @onee1594 2 years ago +6

      Well. Now I would like to see stats on the distribution between black and white software engineers and ML specialists.
      And no, I don't say there should be quotas. I just wonder whether it was tested at all

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago +1

      @@onee1594 Feel free to look for that in your nation's government DB or draw a conclusion from a credible sample size. I am not obligated to provide that for you.

    • @onee1594
      @onee1594 2 years ago +4

      @@AtiqurRahman-uk6vj You are not obligated, and I didn't ask you to provide it.
      There's no need to be so uncivilized unless you think the world turns around your comment and around you personally.

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago

      @@onee1594 Since you replied under my comment instead of opening a separate comment, it is a logical assumption that you were placing the request with me, and I declined.
      You and your colonial mindset of xenophobia are a few centuries too late to call others "uncivilized" simply for declining to do your bidding. Good day
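
A minimal sketch of the retraining point in the comment above: measure how each group is represented in a training set and oversample the under-represented one before retraining. The group labels and file names here are hypothetical.

```python
import random
from collections import Counter

def rebalance(samples):
    """samples: list of (image_path, group) pairs. Returns a shuffled list
    in which every group appears as often as the largest group."""
    by_group = {}
    for image, group in samples:
        by_group.setdefault(group, []).append((image, group))
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        # duplicate random items until this group reaches the target count
        if len(items) < target:
            balanced.extend(random.choices(items, k=target - len(items)))
    random.shuffle(balanced)
    return balanced

training = [(f"light_{i}.jpg", "light") for i in range(900)] + \
           [(f"dark_{i}.jpg", "dark") for i in range(100)]
print(Counter(group for _, group in training))             # light: 900, dark: 100
print(Counter(group for _, group in rebalance(training)))  # light: 900, dark: 900
```

Oversampling is only one option; collecting more data from the under-represented group usually works better, but the measurement step is the same either way.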

  • @kumareth
    @kumareth 3 years ago +4463

    As a machine learning enthusiast, I can confirm there aren't many diverse datasets available out there. It's sad, but it's alarmingly true.

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +1

      @Sigmaxon in the case of training datasets, because they can be so expensive to produce, the demand is actually constrained by supply rather than the other way around. Changing the demographics in the dataset is a slow process.

    • @randybobandy9828
      @randybobandy9828 2 years ago

      Why is it sad?

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +1

      @@randybobandy9828 because it leads to less optimal outcomes for everyone, duh

    • @randybobandy9828
      @randybobandy9828 2 years ago

      @@kaitlyn__L it's an issue not worth addressing.

  • @HopeRock425
    @HopeRock425 3 years ago +928

    While I do think that machines are biased, I think that saying they're racist is an overstatement.

    • @faychel8383
      @faychel8383 2 years ago +5

      IQs under 83

    • @randybobandy9828
      @randybobandy9828 2 years ago

      You're a simpleton

    • @Unaveragetrainguy
      @Unaveragetrainguy 1 year ago +11

      The piece was careful not to say the technology itself was 'racist', only that it seemingly had 'racist outcomes'.

    • @jordanreeseyre
      @jordanreeseyre 1 year ago +5

      Depends if you define "racism" as requiring malicious intent.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 3 months ago

      They were using racist as a description, which the AI outcomes were. They even said in this video that it doesn't mean there has to be malicious intent. Though there probably is some, because the AI just learns from the prejudiced stereotypes and beliefs in what people post online.
      I saw a video where someone asked an AI for "hardworking" pics and got caucasian men in suits in an office. So there were prejudiced stereotypes excluding other kinds of activities and jobs from counting as hardworking too.

  • @aribantala
    @aribantala 3 years ago +5413

    Yes. As a Computer Engineering bachelor and someone who's been working with cameras for almost 4 years now, it's good to address how a camera's apparent weakness at capturing darker objects can mess up AI detection.
    My own bachelor's thesis was about implementing pedestrian detection, and it's really hard to make sure the camera takes a favourable image... And since I am from Indonesia... which, you guessed it, has a smaller white-skinned population... it's really hard to find a good experiment location, especially when I use an already-developed algorithm as a backbone. There were a lot of false positives... ranging from missed counts because the person is darker, to double counts when a fairer-skinned person passes by human-shaped shadows.
    We need to improve AI with better diversity in its training datasets. It's better to address that weakness and create a better technology than to point fingers... Learn from our mistakes and improve from there... If a hideous person like Edison could do that with his electric lightbulb, why aren't we doing the same while developing even more advanced tech than his?
    The title is very nuanced... but hey, it got me to click... and hopefully others can see past the headline.

  • @rodneykelly8768
    @rodneykelly8768 3 years ago +2313

    At work, I have an IR camera that automatically measures your temperature as you walk into my facility. How it is supposed to do this is by locking on to the face, then measuring the person’s temperature. Needless to say, I want to take a sledgehammer to it. When it actually works, it’s with a dark face. The type of face it has the most problem with is a light face. If you also have a bald head, it will never see you.

  • @veryblocky
    @veryblocky 3 years ago +425

    I feel like a lot of these things aren't the result of anything racist, but of other external factors that end up contributing to it. The example of the hospital algorithm looking at expensive patients, for instance, isn't inherently racist. The issue there should be with the factors that cause minority groups to cost less (i.e. worse access to insurance), not with the software (see the toy example after this thread).

    • @Zaptosis
      @Zaptosis 3 months ago

      Could also be due to non-racist factors such as cultural preferences, like opting for at-home care, or even subsidies for low-income areas/households which reduce the recorded expenditure of a patient.
      But of course, as a media organization they need to jump to the most rage- and offence-inducing headline which gets them the most clicks. This is why I never trust Vox & other companies like this.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 3 months ago +1

      @@Zaptosis This Vox video did say there could be factors that didn't have to do with active racism. So what are you talking about? You were the one who jumped to rage while accusing this Vox video of it.
      Also, you jumped to the conclusion that racism doesn't exist, when it does and there's evidence.
      You also shouldn't just assume "most African Americans want home care" when the African woman in this video said otherwise. Same with some other African YouTubers I've watched. You should see different perspectives.
      It just seems like you don't want to care that there are people negatively impacted by this or by racism.
      It's a double standard, because if there were prejudice against you or your group, you would want the injustice amended.
      So far I think Vox is pretty educational.
      There are also conservatives who falsely cry about prejudice hoaxes towards them or caucasians.
      There are people who received AI art results that were racist or sexist stereotypes.
      I saw a video where someone asked an AI for "hardworking" pics and got caucasian men in suits in an office. So there were prejudiced stereotypes saying other kinds of activities and jobs were less hardworking too.
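
A toy version of the cost-as-proxy problem this thread is arguing about: if the model ranks patients by predicted cost, and one group generates less cost at the same level of sickness (for example through worse access to insurance), its members get ranked as lower risk. All numbers below are invented.

```python
patients = [
    # (patient, true_sickness, access_to_care)
    ("A", 0.9, 1.0),  # high need, full access to care
    ("B", 0.9, 0.6),  # equally high need, less access, so lower spending
    ("C", 0.6, 1.0),  # lower need, full access
]

def predicted_cost(sickness, access):
    # cost tracks sickness *and* access, making it a skewed proxy for need
    return sickness * access

ranked = sorted(patients, key=lambda p: predicted_cost(p[1], p[2]), reverse=True)
print([name for name, _, _ in ranked])  # ['A', 'C', 'B']: B's equal need ranks below C's lower need
```

Whether you blame the software or the upstream access gap, the ranking error is the same; one commonly proposed fix is to predict health need directly instead of cost.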

  • @andysalcedo1067
    @andysalcedo1067 3 years ago +309

    I'm sorry Joss, but how did the only two people in this video who actively work in the tech industry, who are building these automated systems, get only a combined 5 minutes on screen? You don't talk to the computer scientists about solutions or even the future of this tech, yet you talk to Dr. Benjamin and Dr. Noble (who don't code) about "implications" and examples in which tech was biased. Very frustrating, as a rising minority data scientist myself, to see this video focus on opinion instead of actually finding out how to fix these algorithms (like the description says).
    It missed an excellent opportunity to highlight minority data scientists and how they feel building these algorithms.

    • @jezuconz7299
      @jezuconz7299 2 years ago +6

      These people don't seek facts or objectivity, only points to blame others for not being politically correct...

    • @kaitlyn__L
      @kaitlyn__L 2 years ago

      I would’ve certainly liked to have seen input from Jordan B Harrod, as she’s done a number of great videos on this subject, but with Vox’s traditional print journalism background I can understand gravitating toward book authors.

    • @anonymousperson1771
      @anonymousperson1771 1 year ago

      That's because the intent of the video is to impart the outcome of racism regardless of how it actually works. Emotional perception is what they're after.

  • @Lightningflamingice
    @Lightningflamingice 3 years ago +1844

    Just curious, but was it randomized which of the faces (darker/lighter) was on top and which was on the bottom? It wasn't immediately apparent with the tests that were run after, but in both the Obama/McConnell and the two-cohosts tests, the darker face was on top, which may be why there was an implicit bias towards the lighter face.
    If not that, the "racist" face detection can largely be boiled down to the algorithm being fed more training data of white people than of black people, a consequence of darker skin tones comprising a minority of the population. As such, the ML cropper will choose the face it has higher confidence is a face (see the sketch after this thread). That could be the source of a racial skew.

    • @kjh23gk
      @kjh23gk 5 months ago

      The Obama/McConnell test was done with two versions of the image, one with Obama at the top and one with Obama at the bottom. The face detection chose McConnell both times.
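
A rough sketch of the mechanism described in the comment above: a cropper that anchors on whichever detected face has the highest confidence score. The detector output below is invented for illustration.

```python
def choose_crop_anchor(detections):
    """detections: list of {'box': (x, y, w, h), 'score': float}.
    Returns the box of the most confident detection, or None."""
    if not detections:
        return None
    return max(detections, key=lambda d: d["score"])["box"]

faces = [
    {"box": (40, 10, 120, 120), "score": 0.83},   # face at the top of the image
    {"box": (40, 600, 120, 120), "score": 0.97},  # face at the bottom
]
print(choose_crop_anchor(faces))  # (40, 600, 120, 120): score decides, not position
```

If the detector is systematically less confident on darker faces, a position-blind rule like this still produces the skew the thread describes.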

  • @NANI-ys5pc
    @NANI-ys5pc 3 years ago +1342

    This video also seems a bit biased; I don't believe "racism" is the most appropriate label to associate this phenomenon with.

  • @theguardian3431
    @theguardian3431 3 years ago +561

    I just started the episode, but I would think that this has something to do with the basic principles of photography. When you take a photo, the subject is usually in the light and the background is darker, for obvious reasons. So the algorithm simply sees darker faces as part of the background (see the sketch after this thread).

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +6

      Indeed, but why didn't the researchers train it not to do that? Because insufficient testing was done, which comes back to a blind spot around race in humans. These algorithms are amazingly good at fitting the curves we ask them to fit, so these problems aren't inherent to the technology. The issue is with the scope of the problem researchers ask it to solve.

    • @randybobandy9828
      @randybobandy9828 2 years ago +1

      @Kaitlyn L ya because light just happens to reflect off of lighter skin... oh no.. how dare light behave this way!!

    • @rye419
      @rye419 1 year ago +3

      Do you think this issue would occur in a society of majority dark faces? Think about that

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 3 months ago

      3:37 What about how they used pics of people with all-white backgrounds? Why isn't the AI reading the light faces as part of the light background then? And why is the AI able to pick up caucasians' dark hair as not being part of the background?
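
A back-of-the-envelope version of the contrast argument in this thread: compare the mean luminance of a face patch against its background. The grayscale intensities (0-255) below are made up.

```python
def mean(values):
    return sum(values) / len(values)

def face_background_contrast(face_pixels, background_pixels):
    return abs(mean(face_pixels) - mean(background_pixels))

dark_background = [30, 42, 35, 28]
light_face = [190, 205, 198, 210]
dark_face = [60, 72, 66, 58]

print(face_background_contrast(light_face, dark_background))  # 167.0: face stands out
print(face_background_contrast(dark_face, dark_background))   # 30.25: face blends in
```

Contrast explains part of the behaviour, but as the reply above notes, it doesn't explain why the limitation survives testing; a detector can be trained and evaluated on low-contrast cases too.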

  • @gleep23
    @gleep23 1 year ago +25

    This is the first video I've seen with "Audio Description" to assist the vision impaired. I'd like to commend Vox for putting in the effort to help differently abled people, especially considering this video's subject matter. Well done for being proactive with assistive technology.

  • @bersl2
    @bersl2 3 years ago +875

    There's also the possible issue of the "white balance" of the cameras themselves. My understanding is that it's difficult to set this parameter in such a way that it gives acceptable/optimal contrast to both light and dark skin at the same time (see the sketch after this thread).

    • @unh0lys0da16
      @unh0lys0da16 1 year ago +1

      That's why you use multiple models: one to detect whether there is a black or white person in view, and then one model for each.
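
A sketch of why settings like this are global: the classic gray-world white balance method computes a single gain per color channel for the whole frame, so it cannot favour light and dark subjects independently. (Exposure works the same way; the frame values below are invented.)

```python
def gray_world_gains(pixels):
    """pixels: list of (r, g, b) tuples for one frame. Returns one gain per
    channel that makes the frame's average pixel a neutral gray."""
    n = len(pixels)
    channel_avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(channel_avg) / 3
    return [gray / avg for avg in channel_avg]

frame = [(200, 180, 160), (50, 40, 35), (120, 110, 100)]
print(gray_world_gains(frame))  # roughly [0.90, 1.01, 1.12]: one correction for the whole scene
```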

  • @NDAGR-
    @NDAGR- 3 years ago +2913

    The hand soap dispenser is a real thing. Straight up

    • @TheSands83
      @TheSands83 2 years ago

      The black guy clearly didn't have his hand underneath it correctly... And I've had soap dispensers not work... You are so oppressed

  • @the_void_screams_back7514
    @the_void_screams_back7514 3 years ago +1176

    the level of production on this show is just
    * chef's kiss *

  • @syazwan2762
    @syazwan2762 3 years ago +1600

    16:15 that got me trippin' for a second until I realized they probably just mirrored the video so that the writing comes out right, and she's not actually writing backwards.

  • @SpaceWithSam
    @SpaceWithSam 3 years ago +906

    Fact: Almost everyone goes straight to the comments section!

  • @robina.9402
    @robina.9402 3 years ago +809

    Can you please put the names of the people you are interviewing in the description and links to their work/social media? Especially if they have a book we could support!

  • @fabrizio483
    @fabrizio483 3 years ago +502

    It's all about contrast and how cameras perceive darker subjects. The same thing happens when you try to photograph a black cat; it's very difficult.

  • @bellatam_
    @bellatam_ 3 years ago +604

    Google photos thinks that all my Asian family and friends are the same person

  • @divinebitter1638
    @divinebitter1638 3 years ago +930

    I would like to see the contrast pic of the two men that Joss took at the beginning and uploaded to Twitter repeated, but with the black man on the bottom. The background at the top of the pic they took was quite dark, and the lack of contrast might have contributed, along with Twitter's weighting bias, to the white face being featured. I don't think Twitter would switch to picking the black face, but it would have helped control for an extra variable.

  • @booksandocha
    @booksandocha 3 years ago +847

    Funnily enough, this reminds me of an episode of Better Off Ted (S1:E4), where the central parody was automated recognition systems being "racist" and how the corporation tried to deal with it. Well, that was in 2009...

    • @MsTifalicious
      @MsTifalicious 2 years ago

      That sounds like a funny show. Sadly it wouldn't fly today, but I'll be looking for it online now.

  • @I_am_Theresa
    @I_am_Theresa 3 years ago +1659

    I swear I know some of those AI people! Imagine seeing your face pop out of that face randomiser!

  • @Dan007UT
    @Dan007UT 3 years ago +97

    I wish they had run the same picture test from the beginning but put a bright white background behind both guys.

  • @jasonpce
    @jasonpce 3 years ago +161

    Ya know what? Good for black people. We don't need facial recognition in today's society, and I genuinely perceive it as a slippery slope when it comes to surveillance. If computers are having trouble recognizing black people, all that means to me is that corporations and the government will have a harder time collecting data on them. I swear to God, we should be having conversations about whether or not facial recognition software should exist, not whether or not it's racist, because IMO the former conversation is of much more importance.

    • @CleverGirlAAH
      @CleverGirlAAH 2 years ago

      Yeah, we can certainly agree on this without even bringing the possible racial incongruities into the conversation. The militarized police state is evil. Period.

    • @CarlTelama
      @CarlTelama 2 years ago

      They literally discuss whether it should exist at all, if you watch the video the whole way through

  • @elokarl04
    @elokarl04 3 years ago +668

    It's been forever since I last saw Joss in a video. I'd almost forgotten how good and well-constructed her videos are.

  • @aguBert90
    @aguBert90 3 years ago +178

    "The human decisions in the design of something (technology or knowledge)" is what academics actually mean when they say "facts are a social construction". It doesn't mean the fact is fake (which is the most common, and wrong, reading); it means there are human externalities and unintended outcomes in the process of making a technology or a piece of knowledge. Tech and knowledge are presented to the public as a finished, factual black box; not many people know how they were designed, investigated, etc.

  • @TheAstronomyDude
    @TheAstronomyDude 3 years ago +64

    Not enough black people in China. Most of the datasets these algorithms are trained on were built from CCTV footage of Chinese streets and from Chinese ID cards.

  • @vijayabhaskar-j
    @vijayabhaskar-j 3 years ago +281

    This is exactly why AI-powered software isn't for 100% automation; it should always be used as a support tool for the human who is responsible for the job. For example, in your health-risk prediction task, the threshold for predicting a high-risk patient should be lowered from 90%+ to 70%+, and a human should verify whether they are indeed high-risk patients. This will both save time (as humans are looking only at mid-to-high-risk patients) and resources, and reduce the bias.
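
A sketch of the human-in-the-loop triage proposed above: lower the automatic cutoff and route the uncertain band to a human reviewer. The thresholds and scores are illustrative, not taken from the video.

```python
def triage(risk_score, auto_threshold=0.90, review_threshold=0.70):
    if risk_score >= auto_threshold:
        return "enroll in care program"   # model is confident enough to act on
    if risk_score >= review_threshold:
        return "flag for human review"    # model is unsure; a person decides
    return "routine care"

for score in (0.95, 0.78, 0.40):
    print(score, "->", triage(score))
# 0.95 -> enroll in care program
# 0.78 -> flag for human review
# 0.4 -> routine care
```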

  • @WMDistraction
    @WMDistraction 3 years ago +665

    I didn’t realize I missed having Joss videos this much. She’s so good!

  • @civ20
    @civ20 3 years ago +437

    The most important thing when it comes to training AI is the raw data you feed it. Give the AI 51% images of white people and 49% images of black people, and the AI will have a ~1% bias towards white people.
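
A tiny worked version of the arithmetic above, counting each group's share of a made-up training set. How a skew in the data maps onto a skew in the model depends on the model and the task; the part that is straightforward to measure is the data skew itself.

```python
from collections import Counter

labels = ["white"] * 510 + ["black"] * 490  # invented 51/49 split
counts = Counter(labels)
total = sum(counts.values())
for group, count in counts.most_common():
    print(f"{group}: {count / total:.1%}")
# white: 51.0%
# black: 49.0%
```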

  • @caioarcanjo2806
    @caioarcanjo2806 2 years ago +4

    Why not just post the same picture with the two positions swapped, so we already get a good estimate? :)

  • @arturchagas7253
    @arturchagas7253 2 years ago +2

    love the fact that this video has audio description! this is so important

  • @Oxideacid
    @Oxideacid 3 years ago +268

    11:50
    We're just gonna gloss over how she writes backwards so perfectly?

  • @terrab1ter4
    @terrab1ter4 3 years ago +359

    This reminds me of that book, "Weapons of Math Destruction"
    A great read for anyone interested; it's about these large algorithms which take on a life of their own

  • @atticusbrown8210
    @atticusbrown8210 2 years ago +2

    In the hand video, the black person was tilting his hand so that it would go around the area the sensor could easily detect. The white hand was directly under it. That would most likely cause a difference.

  • @emiliojurado5069
    @emiliojurado5069 3 years ago +260

    It will be funny when machines start to prefer machines and AI over humans themselves.

  • @marqkistoomqk5985
    @marqkistoomqk5985 2 months ago

    I just took a C1 English exam and the third listening was literally a clip from this video. It was nice to see a YouTube video as part of such an exam.

    • @ivanromerogomez4049
      @ivanromerogomez4049 1 month ago +1

      I'm in the same situation as you. The problem was the questions. I really don't know how I did on the exam. I hope we have luck...

  • @greyowul
    @greyowul 3 years ago +152

    People seem to be noticing how nicely the professor can write backwards...
    Fun fact: That's a camera trick!
    th-cam.com/video/eVOPDQ5KYso/w-d-xo.html
    She is actually writing normally (so the original footage shows the text backwards), but in editing the video was flipped, making the text appear normal. Notice that she is writing with her left hand, which should only be about a 10% chance.
    Great video btw! I thought the visualization of the machine learning process was extremely clever.

  • @danielvonbose557
    @danielvonbose557 2 years ago +3

    There should be an analog to the precautionary principle used in environmental politics; a similar principle could be applied to social issues. That is, if there is a significant or reasonable risk in doing something, then that thing should not be done.

  • @danzmachinz2269
    @danzmachinz2269 3 years ago +248

    Joss!!! Why did you print all those photos!!!!!?

  • @kuldeepaurya9178
    @kuldeepaurya9178 3 years ago +8

    Wait... how did the soap dispenser differentiate between the two hands???

  • @Bethan.C
    @Bethan.C 2 years ago +1

    Haven't seen any new videos from Joss; miss her so much~

  • @d_as_fel
    @d_as_fel 3 years ago +138

    15:50 How can she write in mirror image so effortlessly??

  • @mequellekeeling5029
    @mequellekeeling5029 3 years ago +358

    At the beginning of the video I thought this was dumb, but by midway through I'm like: this is what we need.

  • @user-vn7ce5ig1z
    @user-vn7ce5ig1z 3 years ago +55

    2:58 - Lee was on the right track; it's about machine vision and facial detection. One test is to try light and dark faces on light and dark backgrounds (a sketch of that 2x2 audit follows this thread). It's a matter of contrast and of edge and feature detection. Machines are limited in what they can do for now. Some things might never be improved, like the soap dispenser; if they increase the sensitivity, then it will be leaking soap.
    8:13 - And what did the search results for "white girls" return? What about "chinese girls"? 🤨 A partial test is useless. ¬_¬
    9:00 - This is just regular confirmation bias; there aren't many articles about Muslims who… sculpted a statue or made a film.
    12:34 - Yikes! Hard to deny raw numbers. 🤦
    12:41 - A.I.s are black boxes; you _can't_ know why they make the "decisions" they make.
    13:33 - Most of the people who worked on developing these technologies were white (and mostly American). They may or may not have had an inherent bias, but at the very least they used their own data to test stuff at the beginning, while they were still just tinkering around on their own, before moving up to labs with teams and bigger datasets. And cats built the Internet. 🤷
    14:44 - I can't believe you guys built this thing just for this video. What did you do with it afterwards? 🤔

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +2

      Re 9:00: and what is the underlying societal reason that the majority of English-language newspaper reports about Muslims have that negative tilt…? The implications extracted from training data merely reflect society.
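
A sketch of the 2x2 test suggested at the 2:58 point above: run the same detector on light and dark faces over light and dark backgrounds and tabulate the results. detect() is a hypothetical stand-in for whatever detector is being audited; the toy one here only "finds" a face whose tone differs from the background.

```python
from itertools import product

def audit(detect, images):
    """images: dict mapping (face_tone, background_tone) -> test image."""
    conditions = list(product(("light", "dark"), repeat=2))
    return {cond: detect(images[cond]) for cond in conditions}

toy_images = {cond: cond for cond in product(("light", "dark"), repeat=2)}
toy_detect = lambda image: image[0] != image[1]  # contrast-only toy detector
for condition, found in audit(toy_detect, toy_images).items():
    print(condition, "->", "detected" if found else "missed")
# ('light', 'light') -> missed
# ('light', 'dark') -> detected
# ('dark', 'light') -> detected
# ('dark', 'dark') -> missed
```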

  • @chafacorpTV
    @chafacorpTV 3 years ago +365

    Me at first: "Who even asks these questions, seriously?"
    Me after finishing the video: "Aight, fair point."

  • @michaelfadzai8221
    @michaelfadzai8221 3 years ago +199

    So Twitter said they didn't find evidence of racial bias when testing the tool. My opinion is that they weren't looking for it in the first place.

  • @wj35651
    @wj35651 3 years ago +69

    18:36 Why are they pretending to talk on a video chat when they had a crystal-clear picture from another camera? Reality and perception, subtle differences.

    • @Vox
      @Vox  3 years ago +79

      We had a camera crew on each end of our zoom call, since we couldn't travel due to Covid. - Joss

  • @testos2701
    @testos2701 1 year ago +1

    This is happening everywhere right now: when you go shopping, to restaurants, to buy a house, to buy a boat, to get a job. It is designed that way, from start to finish, and there are always excuses for why this is happening and promises that it will change, but I have yet to see any changes! As a matter of fact, the more you dig, the more you will find! 😅🤣😂

  • @TubOfSun
    @TubOfSun 3 years ago +223

    Waited for Joss for what feels like years

  • @Vox
    @Vox  3 years ago +164

    On this season of Glad You Asked, we explore the impact of systemic racism on our communities and in our daily lives. Watch the full season here: bit.ly/3fCd6lt
    Want updates on our new projects and series? Sign up for the Vox video newsletter: www.vox.com/video-newsletter
    For more reading about bias in AI, which we covered in this episode, visit our post on Vox.com: bit.ly/3mcZD4J

  • @user-dv1dd3cf8v
    @user-dv1dd3cf8v 1 year ago

    We found in class that this video cuts off about 5 minutes from the end.

  • @virlives1
    @virlives1 2 years ago +7

    A clear example starts with a YouTube channel that publishes content internationally and receives comments in several languages. The thing is that Yankees, or Americans, don't tolerate people speaking another language, so they dismiss any comment in another language. We have been experiencing it.

  • @ShionChosa
    @ShionChosa 3 years ago +30

    There was a Better Off Ted episode about this. Corporate head office decided to discontinue use of the energy-saving technology to save money.

  • @AnthonyVasquezEndZz
    @AnthonyVasquezEndZz 2 years ago +2

    Could it be contrast? What if you photoshopped the skin tones to green, yellow, or red, and the hair to an inverted color? Then use people with dark skin and light-colored hair, and with light skin and light hair, to see if the contrast difference is what's causing this.

  • @bellesasmr
    @bellesasmr 2 years ago

    why didn't we try the two guys switching places in that pic

  • @pendlelancashire
    @pendlelancashire 2 years ago +1

    *I am surprised the scientists working on these algorithms only facilitate europhilic imaging.*

  • @dr_jj
    @dr_jj 3 years ago +64

    Weapons of Math Destruction by Cathy O'Neil really bears down hard on this subject: how the biases of an algorithm's designers or customers have big negative impacts on society. There seriously needs to be some kind of ethical standard for designing algorithms, but it's so damn hard... :/

  • @venusathena3560
    @venusathena3560 3 years ago +204

    Thank you so much for making this free to watch

  • @retrocat604
    @retrocat604 3 years ago +77

    It's the same with face scan security.

  • @bananaluvs111
    @bananaluvs111 3 years ago +42

    I am amazed by the look of the studio. I would love to work there; the atmosphere is just different, unique, and everyone has a place there 😍

  • @eduardomirafuentes1420
    @eduardomirafuentes1420 2 years ago +1

    I understand the video's purpose, but my question is: how much do machines have to know about us and how we act?

  • @Curious_mind3408
    @Curious_mind3408 3 years ago +42

    Wait, the cameraman is in both rooms, but they are video-calling each other???

  • @jordanjj6996
    @jordanjj6996 3 years ago +172

    What a thought-provoking episode! That young woman Inioluwa not only knew the underlying problem, she even formed a solution... when she said that it should be the devs' responsibility to proactively be conscious of those who could be targeted or singled out in a social situation, and to do their best to prevent it in advance. She's intelligent, and she understands just what needs to be stated in a conflict: a solution. Hats off to her.

  • @alvakellstrom9109
    @alvakellstrom9109 10 months ago

    Great video! Really informative and very important. Thanks for a great watch

  • @Nicole__Natalia
    @Nicole__Natalia 1 year ago

    Off topic but why is the progress bar blue instead of red?

  • @cheesecakelasagna
    @cheesecakelasagna 3 years ago +30

    Love the production especially on the set!

  • @meredithwhite5790
    @meredithwhite5790 3 years ago +63

    Algorithms of Oppression is a really good book if you want to learn about racial and gender bias in big-tech algorithms, like Google search. It shows that machines and algorithms are only as objective as we are. Machine learning and algorithms seem more like groupthink than objectivity.

  • @Bthepig
    @Bthepig 3 years ago +143

    Wow, another amazing video. I love the high-minds-meet-middle-school-science-fair feel of these videos. They're so accessible while also tackling really massive questions. Each one is so well put together and so thought-provoking.

  • @ChristianTheodorus909
    @ChristianTheodorus909 3 years ago +102

    long time no see Joss!

    • @torressr3
      @torressr3 3 years ago +5

      Right? I missed her too. She's a great journalist and freaking cute as all hell!

  • @hotroxy240
    @hotroxy240 3 years ago +42

    Is this why the automatic sinks in public restrooms barely work for me? Because they're designed to read lighter skin 🧐🥲🧐🥲

  • @kingjulian420
    @kingjulian420 3 years ago +10

    4:30. Why are you filming and driving!! No don’t read a quote!! JOSS NOOO. *boomp*
    *beep beep beep*

  • @MikosMiko
    @MikosMiko 1 year ago +1

    I am black and I build models. The theory is: bad data in, bad data out. Whatever data and rules these algorithms were built on are what should be in question. Machines are not racist; the people (in tech companies, vendors, agencies) who build them are.

  • @ronxaviersantos3184
    @ronxaviersantos3184 3 years ago +51

    Joss talking about Twitter at 10:09, then it went straight to an ad, and you guessed it: Twitter

  • @aronjohnreginaldo1913
    @aronjohnreginaldo1913 3 years ago +45

    When you see Joss's face in the thumbnail, you know for sure the topic is interesting 😅

  • @anonymousbub3410
      @anonymousbub3410 3 years ago +10

    7:45 me seeing a little hand wave on the edge of the screen

  • @ZubinSiddharth
    @ZubinSiddharth 3 years ago +22

    Wait, how was the professor from Princeton able to write in reverse on that glass so that we could read it straight?

  • @killianbecker1164
    @killianbecker1164 3 years ago +19

    This feels like a PBS Kids show, with the set and all!

  • @charlespaine987
    @charlespaine987 2 years ago

    Have you considered infrared (radiated heat) differences? Light and dark surfaces radiate at different rates.

  • @ShivamSharma-kv1yd
    @ShivamSharma-kv1yd 2 years ago

    For the algorithm, images are mostly converted to black-and-white or grayscale... that may be causing it. Possible?

  • @lorentianelite63
    @lorentianelite63 3 years ago +48

    I'm a simple man. I see Joss, I click.

  • @santif90
    @santif90 3 years ago +59

    I'll take this video as having the good intention of starting an important conversation. But your data is kind of funky

  • @pranavkakkar7637
    @pranavkakkar7637 3 years ago +51

    I missed seeing Joss in videos. Glad she's back.

  • @pavanyaragudi
    @pavanyaragudi 3 years ago +42

    Joss Fong!❤️🔥

  • @dEcmircEd
    @dEcmircEd 3 years ago +7

    Maybe it was more tech-focused, but it was way more interesting to me than the one about assessing one's own racism, which seemed a bit more frivolous in its sourcing and its overall process.
    Joss does really great stuff

  • @rizkypratama807
    @rizkypratama807 3 years ago +40

    Stop with the Joss Fong comments; I can't stop liking them

  • @augustlions
    @augustlions 3 years ago +84

    I see Joss Fong I click

  • @luizmpx833
    @luizmpx833 3 years ago +216

    Very good information; it reminded me of your video from 5 years ago,
    "Color film was built for white people. Here's what it did to dark skin"

  • @koroshitchy
    @koroshitchy 4 months ago +1

    That is not racism at all. Of course the machine has good technical reasons to distinguish the features of lighter items better (they reflect more light and, if they are faces, will typically contain darker reference features such as the brows, which provide good contrast) than those of darker items, where nearly everything is dark and therefore less distinguishable. It is all a matter of brightness and contrast. When it comes to crime and other issues such as stereotypes, it is a matter of statistics. Machine learning algorithms are good at recognising patterns, and if the patterns are there, the algorithm will likely find them. So, for it to be "fair", we have to censor it so that it will ignore certain patterns that make us uncomfortable.

  • @mnengwa
    @mnengwa 3 years ago +70

    Aren't we the creators of the machines, passing our own image (strengths & shortcomings) to what we create? If they were to become sentient and seek to learn from the human race, wouldn't the machines pick up bias and hatred??

  • @MissSarcasticBunny
    @MissSarcasticBunny 3 years ago +57

    This is a really interesting look into machine learning. Great job, Glad You Asked team! It stands to reason that there would be bias no matter what, because even if the machine doesn't have any inherent bias or self-interest in focusing on one face over another, people are still feeding information into the machine, and the machine bases its results on that information. And humans are still flawed beings who bring with them their own personalities, thought patterns, biases, childhood backgrounds, class backgrounds, et cetera. The only solution is to focus on what information we're feeding machines.

  • @MR.CLEAN777
    @MR.CLEAN777 3 years ago +155

    Next thing you know, toasters are gonna be racist

  • @mariofrancisco6717
    @mariofrancisco6717 2 years ago +1

    The machines are not racist; they are poorly programmed or misconfigured by people who did not take proper care during the project.

    • @bluebutterfly4594
      @bluebutterfly4594 2 years ago +1

      And why do the creators still not bother to take care? This is not the first time these issues have been raised, and it's not getting better.
      So why do you think they choose to disregard part of the population?

  • @elbaecc
    @elbaecc 3 years ago +29

    As more and more governments (say China, India, Middle Eastern countries) employ face recognition tools that use such AI for law enforcement and surveillance, and they're buying said software from Western countries, I am wondering how accurate these systems are, seeing as the AI was trained primarily on white faces. Do these AIs then learn "locally", and if so, can this data be fed back into the original AI to make it learn how to recognise those ethnicities in Western countries with an ethnically diverse population, like the USA, UK, etc.?

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +1

      They don't learn while being run. Training is very compute-intensive and is done once, centrally, on large servers. After training is complete, the neural networks can run on very little compute power, on a phone or laptop or camera, but they're totally static.

  • @gadmas2670
    @gadmas2670 3 years ago +57

    Goddamn interesting as a CS student, thanks!

  • @Dallas_AWG
    @Dallas_AWG 3 years ago +55

    Joss is so good. She has the perfect voice

  • @mrush8057
    @mrush8057 3 years ago +35

    the camera thing is not racist; it's just that black colors blend into the background while whites are stronger and brighter, so white is hard for a computer not to see

  • @HerrZenki
    @HerrZenki 3 years ago +215

    Software's only as good as the person who programmed it, I say.

    • @Suavocado602
      @Suavocado602 3 years ago +96

      Confirmed. I’m a programmer and both me and my software suck.