Digital Discrimination: Cognitive Bias in Machine Learning - Maureen Mc Elaney, Brendan Dwyer

  • Published on Jan 25, 2025

Comments • 34

  • @Ordenkar · 2 years ago · +14

    Computers do not have bias; they are brute logic. The only way a system exhibits bias is if the ground logic it was built with was hardcoded with bias from a human, in which case the machine learning is not generating the bias but rather the developers are. The logic of this argument can be considered self-evident.
    In the case of a neural network that in some way simulates the thought process of a human, there are two points to consider.
    1. The AI will not yet have the sheer number of variables that a human brain does, so isolating the emergence of bias would be dubious.
    2. Even if the results appear to represent bias, the data was paired and allocated based on either logical operations (unbiased) or perhaps biased human intervention.

  • @evanlane1690 · 2 years ago · +3

    Excellent talk on the clear issues with using haphazardly trained algorithms for anything that can be aimed at people and affect their lives.
    Thank you for taking the time to conduct this research and present it clearly.

  • @phillnick21 · 2 years ago · +12

    Lol. Those damn racist 1s and 0s.

    • @miscellaneoushistory9924 · 2 years ago

      It's almost like AI is a tool, not infallible, and can be prone to bias if it is fed biased data. You racist dudes are so loud behind your thinly veiled "anonymity"

    • @phillnick21 · 2 years ago · +2

      @@miscellaneoushistory9924 Oh no, he called me racist! Whatever will I do? I don't.... think.... I can go on. Snooze 🥱. Ironic how the people who try to structure society around the color of people's skin constantly call everyone else racist without any evidence to support their claims 🤔. Bye bye 👋 bigot.

  • @dougsholly9323 · 2 years ago

    In the segment between about 6:00 and 10:00, did they actually identify the attribute of race as the sole factor in the 'mistake'? Surely race and offense weren't the only two attributes fed into the AI. It sounds to me like they used human bias to conclude that the only explanation is the disparity in race, which is just as egregious as their accusation against the AI. Show me tests where they took the same data, eliminated the identification of race, and tell me what it spits out then. THEN I will be on board with your conclusions. Until then, these people are as likely to suck as the AI.
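
    A minimal sketch of the ablation test described above, assuming a scikit-learn workflow; the column names and data are entirely made up for illustration:

    ```python
    # Train the same model twice, once with the race column and once
    # without it, then compare the risk scores. Synthetic data throughout.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.poisson(2, n),        # prior_offenses (made-up feature)
        rng.integers(18, 70, n),  # age
        rng.integers(0, 2, n),    # race, encoded 0/1 (attribute under test)
    ])
    y = rng.integers(0, 2, n)     # re-offended within two years (0/1)

    with_race = LogisticRegression(max_iter=1000).fit(X, y)
    without_race = LogisticRegression(max_iter=1000).fit(X[:, :2], y)

    p_with = with_race.predict_proba(X)[:, 1]
    p_without = without_race.predict_proba(X[:, :2])[:, 1]

    # If scores barely move when race is dropped, race itself wasn't
    # driving them; if they shift a lot, it was.
    print("mean absolute change in risk score:",
          np.abs(p_with - p_without).mean())
    ```

    One caveat: dropping the column only removes the direct use of race; other features that correlate with it can still carry the same signal, so a test like this is necessary but not sufficient.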

  • @firstname4337 · 2 years ago · +5

    If you train your AI with facts, then this is what you get.
    If you don't want your AI to make accurate predictions, you should instead adjust the training data to fit YOUR biases.

    • @nirmaljacob3968 · 2 years ago · +2

      So you're saying that the bias inherent in the collected data and in the programmers doesn't have anything to do with it.

    • @miscellaneoushistory9924 · 2 years ago

      AI isn't trained with facts; that's what you want to believe because you're racist. AI can be biased if the programmer is, and depending on what data it's fed. AI is a tool, not an all-knowing, infallible technology.

    • @Ordenkar · 2 years ago

      @@miscellaneoushistory9924 Wow, that's two people you've called racists so far. Well done. They will now kill themselves for sure. Actually, it could be that they just misunderstood the concept rather than that they were being racist.

    • @catbjorndestroyerofworlds8108 · 2 years ago · +3

      @@nirmaljacob3968 Not to get heated in a YouTube comment section, but he's saying that the AI says they're more likely to commit crimes because statistically they are, and if you don't want your AI to show that, then just give it skewed information to make you feel better about it and fit what you wish it was 🗿

    • @thatsux0rz · 2 years ago

      @@catbjorndestroyerofworlds8108 Yes, but looking at the data, WHAT is the reason they're considered more likely to re-offend? They showed at least 4 cases where a POC was deemed more of a risk but didn't end up re-offending, versus the Caucasian offender who had lots of previous criminal activity related to the charge they had received.

  • @diggoran · 2 years ago

    What is going on in this comment section, and why are there so few programmers in it? I feel like the thumbnail might have been a mistake.

    • @tradition1433 · 2 years ago · +1

      Because the video is only tangentially related to programming. A better question would be what went so wrong in the field that it has been hijacked by grifters trying to divert focus from actual computing just to suit their own petty ends.

    • @diggoran · 2 years ago · +2

      @@tradition1433 I didn’t watch the whole video but I’ve heard talks like this before. The gist is to not assume an algorithm would be more fair than the people or data that trained it.
      Machine learning is designed specifically for detecting hidden patterns. You want the computer to tell you what pattern a particular inmate matches, and it will. It usually even comes with a confidence level.
      This talk could have been framed more positively: Put unbiased data in and your algorithm will produce unbiased results. You will never make another biased decision again because the computer will protect you from your own biases.
      The challenge is in detecting whether the input data is biased or unbiased. If you go in thinking the data doesn’t lie then you will likely consider it unbiased. If you go in thinking the data is biased because it’s compiled from the history of an already biased system, then you would think it’s impossible for it to be unbiased.
      In the end, the machine will only be as good as the people or data that trains it. It’s up to us to decide if we are satisfied with those inputs before we start accepting the outputs.
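
      A minimal sketch of this garbage-in, garbage-out point, assuming scikit-learn; the labels are deliberately generated with a biased rule, so the "hidden pattern" the model confidently finds is the bias itself:

      ```python
      # Toy example: historical labels encode a biased rule, and the
      # model reproduces that rule with high confidence. Synthetic data.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)
      n = 2000
      group = rng.integers(0, 2, n)  # a protected attribute, encoded 0/1
      priors = rng.poisson(2, n)     # prior offenses

      # Biased "historical" labels: group 1 is flagged high-risk far
      # more often at the same number of priors.
      label = ((priors + 2 * group) >= 3).astype(int)

      model = DecisionTreeClassifier(max_depth=3).fit(
          np.column_stack([group, priors]), label)

      # Same priors, different group -> very different "confident" scores.
      for g in (0, 1):
          p = model.predict_proba([[g, 2]])[0, 1]
          print(f"group={g}, priors=2 -> P(high risk) = {p:.2f}")
      ```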

  • @blankx15 · 2 years ago

    I think it's the last name.

  • @Jamestilfords · 2 years ago

    Brb going crazy

  • @Locreai · 2 years ago

    Fits the crime statistics, just sayin'

  • @myothersoul1953 · 2 years ago

    I thought AI was supposed to be smarter than humans; turns out it's just as dumb as humans.
    I am confident AI can make unbiased discriminations when the discrimination has some objective outcome and the AI is trained on real-world data. Unbiased meaning it won't use inputs that are uncorrelated with the outcome, not unbiased as in it doesn't include things we think shouldn't be included.
    But for value judgments like sentiment, all AI can do is model human bias. There is no sensor that can be developed to measure positive or negative sentiment; the only measure will always be the subjective opinion of humans. So of course corrections will need to be made so the AI makes the correct discriminations, but who gets to decide what the correct opinion is? Should "I'm a dog" really be neutral? How about "I am a bug"?
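
    A quick way to poke at that last question concretely, using NLTK's off-the-shelf VADER scorer as a stand-in for whatever sentiment model the talk used (any pretrained scorer makes the same point, since its scores are just its human raters' judgments):

    ```python
    # Score the two sentences from the comment above, plus a control,
    # with a stock sentiment analyzer; assumes nltk is installed.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
    sia = SentimentIntensityAnalyzer()

    for sentence in ["I'm a dog.", "I am a bug.", "I am a person."]:
        # 'compound' runs from -1 (negative) to +1 (positive); whether
        # these *should* be neutral is exactly the value judgment at issue.
        print(sentence, sia.polarity_scores(sentence)["compound"])
    ```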

  • @catbjorndestroyerofworlds8108 · 2 years ago · +2

    lol based machine

  • @CoreyCauble · 2 years ago

    Just like people, it's all in what they are fed. Ridiculousness.

    • @tylerx2ez699 · 2 years ago

      I wonder if it revolves more around statistics. That would make sense for machine learning.

  • @monkeyrobotsinc.9875 · 3 years ago · +12

    lmao bias. sure

    • @SizeMatterz · 3 years ago · +5

      Right..... How does a machine have bias? It's purely analytical. They use the word "bias" to pander to political activists because controversy makes money. They are all drama junkies; it's a drug for them.

    • @miscellaneoushistory9924 · 2 years ago · +1

      These are literal experts in the field; they're a lot more credible than some edgy racist child on the internet like you.

  • @dandre3K · 2 years ago

    Does machine learning discriminate against Koreans? 😂

  • @miekwavesoundlab · 2 years ago

    So basically it’s trained like humans are

  • @ahoif · 2 years ago

    Hahaha