Bias in AI and How to Fix It | Runway

  • Published Sep 21, 2024
  • Our new research on mitigating stereotypical biases in text-to-image generative systems significantly improves fairness across different groups of people.
    Learn more about our diversity fine-tuning (DFT) approach: research.runwa...
    runwayml.com
    research.runwayml.com
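
The description does not say how DFT works under the hood, so here is only a toy sketch of one bias-mitigation idea in the same spirit: balanced resampling of fine-tuning data by group. It is an illustration, not Runway's published DFT method; the `make_sampler` helper and the group labels are hypothetical.

```python
# Toy sketch of balanced group resampling -- an illustration of one
# bias-mitigation idea, NOT Runway's published DFT method.
import random
from collections import defaultdict

def make_sampler(examples):
    """examples: (image_id, group) pairs. Returns draw(), which picks a
    group uniformly at random, then an example uniformly within it."""
    by_group = defaultdict(list)
    for image_id, group in examples:
        by_group[group].append(image_id)
    groups = list(by_group)

    def draw():
        g = random.choice(groups)          # every group equally likely...
        return random.choice(by_group[g])  # ...regardless of its raw count
    return draw

# Even with a 3:1 skew in the raw data, draw() yields a balanced stream.
draw = make_sampler([("img1", "A"), ("img2", "A"), ("img3", "A"), ("img4", "B")])
print([draw() for _ in range(6)])
```

A fine-tuning loop fed by such a sampler sees each group at equal rates no matter how skewed the underlying dataset is.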

Comments • 40

  • @T1544767
    @T1544767 7 months ago +24

    I hear what you're saying, Runway, but consider this:
    My psychology professor (this was in the early 2000s) offended the class when he said "stereotypes are true" and he followed it up with "that's how they become stereotypes." The truth may be offensive to people, but that doesn't remove the fact that it's true.
    As recently as 1980, the United States population was 80% white. I think it's closer to 70% now.
    As of 2019--and I'm pulling this straight from Google--the race of doctors in the United States was "56.2% identified as White, 17.1% identified as Asian, 5.8% identified as Hispanic, and 5.0% identified as Black or African American."
    Let's say I had a bowl of Skittles and 56.2% were red, 17.1% were green, 5.8% were yellow, and 5.0% were orange. If I were to pick 1 Skittle from the bowl while blindfolded, which flavor would I be most likely to pick? Let's say I did that 100 times: the flavor counts should roughly track those percentages.
    Therefore:
    If my prompt was "1980's man" I should expect 4 out of 5 results to be white.
    If my prompt was "man" I should expect 7 out of every 10 results to be white.
    If my prompt was "doctor" I should expect roughly half of the results to be white.
    If I were to generate footage talking about the Kony 2012 campaign and my prompt was "child soldier in the congo" or something like that, what race would the output be?
    This isn't bias; it's just statistics.
    My main takeaway from this is that the next time I use Runway, I should SPECIFY the race of the person that I want to generate.
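
As an aside, the Skittles argument above is easy to check in code. A minimal Python sketch, using the 2019 percentages quoted in the comment and lumping the remaining 15.9% together as "other":

```python
# Draw 100 "Skittles" blindfolded and count the flavors; the counts should
# roughly track the base rates. Percentages are the 2019 figures quoted above.
import random
from collections import Counter

rates = {"white": 0.562, "asian": 0.171, "hispanic": 0.058,
         "black": 0.050, "other": 0.159}

draws = random.choices(list(rates), weights=list(rates.values()), k=100)
print(Counter(draws))  # e.g. Counter({'white': 55, 'asian': 18, 'other': 17, ...})
```

Each run fluctuates around the base rates, which is the point being made: an unconditioned sample mirrors whatever distribution it is drawn from. The replies below dispute which distribution that should be.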

    • @VortexProjectStudios
      @VortexProjectStudios 7 months ago +2

      This is a fair assessment, but human stereotyping can't keep up with societal evolution. Especially when we are fed bias by industries that profit from maintaining these stereotypes. Stereotypes may be true, *for a time*.

    • @AgusVallejoV
      @AgusVallejoV 7 months ago +12

      The problem with your underlying assumption is that, while 80% of the US population was white, that's not true for the rest of the world. If my prompt was "man", images shouldn't be 7 out of 10 white, because in reality the Caucasian ethnicity accounts for roughly 10% to 30% of the world's population (the actual statistic is very blurry, since countries count race very differently in their censuses). So while "man from the United States" should (in a statistically ideal world) be 7 out of 10 white, that's not true for "man". The fact that most training datasets are based on the US significantly shifts the statistical perception of the model away from what you would call "Truth". There is no truth, there is only bias, and it's important to recognize where you're getting your Skittles from.

    • @knocknak
      @knocknak 7 months ago

      This is amazing wow

    • @think32
      @think32 7 months ago

      @T1544767 I understand what you're saying, but I see a few potential problems. I'd like to mention two in particular, if you don't mind.
      1. Stereotypes are not equivalent to probabilities. Stereotypes are oversimplifications based on assumptions. Those assumptions come from personal experience, which is relevant but largely subjective. By definition, stereotypes are not true, nor do they account for realistic probabilities. Because of this, they can lead to harmful outcomes and discriminatory policies and behaviors. A few well-known instances are the stop-and-frisk policies in New York, the report that African Americans were 7 times more likely to be wrongly convicted than their White counterparts, and the fact that African Americans receive longer prison sentences than their White counterparts for the same crimes. Stereotypes are not fact, and they are not based on reality. Your professor was mistaken.
      2. You said you should expect 4 out of 5 results to be white if your prompt was "1980's man," and you gave 2 other expectations. But have you actually tested these? Do your generated results match your expectations? Also, were you required to specify "American" or "U.S."? Or did it assume you meant a white American male? The majority of the world is not actually white, after all. Or male.
      Just for kicks, I generated "1980s male" in the first free AI generator I found. The first 27 results were all white males. The 28th was ambiguous; he looked like a tanned Italian with a dark beard.
      The truth is, generative AI sees whiteness as normal, as the default. Those who seek anything else have to do extra work. Runway is aware of this and is taking steps to try to remove these biases. I commend them.

    • @senoritarita
      @senoritarita 7 months ago

      If you really studied psychology, you should know that stereotypes were created by the human brain to simplify our perception of the world. Because of the structure of our brains, we try to put so-called tags on everything we see rather than thinking too much, since there is so much information in the world. Stereotypes are not truthful, and a thinking person always analyzes and goes deeper, knowing that this brain shortcut can reach wrong conclusions about anything in order to simplify your life. Unfortunately, not many of us are ready to spend our «valuable» time understanding that.

  • @localminimum
    @localminimum 7 months ago +25

    This has nothing to do with reducing bias; it's all about increasing bias to support an agenda.

  • @shanedk
    @shanedk 6 months ago +4

    This has NOTHING to do with bias in AI. It's just screeching about political correctness.
    True example of bias in AI: use a prompt involving "wizard." You'll mostly get the Merlin archetype: old white guy with a beard and a hat. But change "wizard" to "mage" and you get all sorts of magic-users: female, young, white, Asian, even elves. Plus, no hat. So the AI is NOT biased toward thinking only white guys can be magic users (any more than it thinks you have to wear a hat), because it gives you something different when you change the prompt to what should be a synonym. "Doctor," "MD," "physician," "medic," "healer," "medical professional," and other synonyms will get you different varieties of people based on their representation in the training data.
    Your examples are cherry-picked, because with low-income workers, like "plumber" or "factory worker," you'll also get a greater number of white people, same as with other high-income professionals.
    Bias in training data is a big issue, but it's NOT how you're representing it to be.

  • @Danuxsy
    @Danuxsy 7 months ago +17

    What I like most is how diversity often turns into reverse racism.

    • @BobbyMasteria
      @BobbyMasteria 7 months ago +6

      reverse racism is racism

    • @Danuxsy
      @Danuxsy 7 months ago +5

      Yes, exactly. @BobbyMasteria

    • @FlamespeedyAMV
      @FlamespeedyAMV 7 months ago +1

      Because they hire a diverse bunch of anti-white racists.

    • @think32
      @think32 7 months ago +2

      @Danuxsy Racism is a system that benefits or disadvantages certain people based on their "race" (which I put in quotes because race is a human construct, not an objective reality). Fighting against such a system is not racism. It's anti-racism. "Reverse racism" doesn't actually exist and makes no logical sense. It's a phrase invented by those who either don't understand these concepts or feel threatened by the prospect of losing their privilege. Or both.
      For example, if a bunch of "black" people started creating laws that unfairly disadvantaged "white" people, and this somehow took hold in the U.S., giving an upper hand to "black" people, then this would just be racism. Not "reverse racism."

  • @OneSwitch
    @OneSwitch 7 months ago +15

    I think your idea of diversity in this video is myopically narrow. If A.I. were to look at what Google image search prioritises, or modern advertising, it would be picking up enormously skewed data that appears filtered through the lens of heavy DEI/EDI weighting. In short, you will be replacing one set of biases with another.
    I don't trust that you have a good moral solution at all.

  • @diegomadero3792
    @diegomadero3792 7 months ago +5

    This is pure ideology. Social engineering.

  • @csok758
    @csok758 7 months ago +3

    The day I start getting black knights around Arthur's Round Table is the day I quit Runway.

  • @brianwalls3369
    @brianwalls3369 7 months ago +7

    As a paying Runway customer, I just want to say that I think this is a really bad idea. Stereotypes exist for a reason, and images of young, attractive people are a perfectly suitable default, not only because they are the most pleasing to look at, but because they are the people most likely to be photographed, and therefore make up the largest percentage of images in the training data. If you ask an AI to generate a photo of an NBA player, you expect to get a photo of an extremely tall, athletic black man. This is NOT A PROBLEM. Forcing the AI to warp reality to fit some idealized Marxist ideology that demands equality of outcomes is extremely dishonest, and people do not like it. Disney is the proof.

    • @peace2011nabs
      @peace2011nabs 7 months ago

      The problem with not rectifying biases is that it leads to further bias. A kid from China who doesn't find enough basketball influences who look like him would find it harder to see himself in that role, compared to someone Black, for example, as you've stated. Not everything is about an immediate gain. Disney is just one case study.

    • @senoritarita
      @senoritarita 7 months ago +1

      It is a problem, because I try to get different results and types in certain areas but they all come out the same. That is as boring as any limitation; it doesn't let me express and show the idea I have. The standards were made by PR agencies in order to earn money, that's all.

  • @goncalocartaxana
    @goncalocartaxana 6 months ago

    1:32 I don't think it's only that the models have bias because the data comes from us humans... It is definitely that too, but it's also because it's really hard to build a dataset that encompasses all possible cases. So the model trains on a biased dataset and becomes biased.

  • @deeplearningpartnership
    @deeplearningpartnership 6 months ago +1

    Maybe that's how Google Gemini came up with black WWII Nazis. Lol.

  • @alterverse_ai
    @alterverse_ai 7 months ago

    Did you clone the voice from Vox or hire the person who does their VO? 🤔🤔

  • @wamaricle
    @wamaricle 15 days ago

    This video is hilarious. My experience with Runway so far is that 9 times out of 10 this "randomly" generated person will NOT be white. 🤣

  • @miguelsureda9762
    @miguelsureda9762 7 months ago +2

    Very important. I continuously have issues with this in Kaiber, for instance.

  • @fabianmosele2321
    @fabianmosele2321 7 months ago +6

    Very well done video. Bias is extremely important to talk about, because these models have a very white, male, Western-centric tendency, since the datasets were mostly created and curated by people in that group.
    I suppose this can be a temporary fix, but bias cannot be eradicated from such models. Bias is baked in at the base, since the model learns from the data it's fed and makes statistical averages, scraping off all the hard corners. While this effort is certainly a step in the right direction, I feel we need to discuss this as a problem specific to generative models, their architecture, and their datasets, as there will always be a tendency toward specific ideologies ingrained inside the model.
    But anyway, I do appreciate seeing Runway investing in these important topics.

    • @deepsurface
      @deepsurface 7 months ago +4

      you sound broken

  • @FlyingLotus
    @FlyingLotus 7 months ago +4

    I’m glad Runway is looking into this. It’s important and well-timed.

  • @think32
    @think32 7 months ago +1

    Not that I expected anything less, but it's interesting to see the degree of white fragility in this comment section.

    • @brianwalls3369
      @brianwalls3369 7 months ago +3

      I was a music major in college, and did you know that when you take a "music history" class you are really only learning about the history of white people's music, starting with Gregorian chants in the Dark Ages and continuing forward to the modern day? There is a really good reason for this: Gregorian chants were sung by monks in their monasteries, and these monks invented musical notation so they could all sing the same "melody". Later on, in France, some priests came up with the idea of singing 2 melodies at the same time, and "harmony" was born. No other people in the entire world ever invented a way to preserve their music. It was just "oral tradition" passed on from generation to generation. Almost everything you take for granted exists due to white ingenuity and creativity. Almost EVERYTHING.

    • @think32
      @think32 7 months ago

      @brianwalls3369 I hope you understand that you just perfectly demonstrated my point. Your strong defense of white supremacy and white exceptionalism is based on a limited, subjective, and strongly biased perspective. It's also based on a significant gap in knowledge which, if filled, would render your perspective obsolete.
      For instance, the oral tradition is a highly effective way of spreading one's culture, and rote learning anchors knowledge into the brain more deeply than notation. In fact, one cannot utilize notation without some degree of rote learning as a precursor (even you had to learn the alphabet before you could read English). Notation has perks as well, especially when preserving information or organizing multiple people to accomplish a task in unison or in harmony. Notation is simply instructions.
      Many cultures throughout history have invented ways to notate music. Korea has a system of notation dating back to the 1400s. China has been notating music for over a thousand years. And the oldest surviving notated melody comes from India. None of this invalidates or devalues European developments, of course. It's all great stuff and can be used to learn, understand different cultures, make more interesting art, and communicate with more people.
      I too majored in music and subsequently became an educator. I've studied music from many parts of the world, including South America, Japan, the Middle East, and I had the pleasure of studying in Ghana last summer. At this point in my life, I feel slightly betrayed that I too was presented a very Eurocentric education by my undergraduate professors, but I don't fault them. I try to remain grateful that I've expanded my view, and I appreciate that learning Western European music theory gave me tools that helped me to understand other cultures and musical languages. It's all connected. There's no reason to have a supremacist world-view. Such thinking is extremely primitive and counterproductive.
      The same can be said for your final sentence, which is not only false but completely unnecessary. I can't imagine what you hope to gain by placing such severe restrictions on your worldview.

    • @OneSwitch
      @OneSwitch 5 months ago

      And that's racist too. Although you'll likely be using the revised, racist definition of racism. We're all riddled with bias. It's unavoidable. There's no panacea. It's fantasy land to think there's some way to shift society into one correct way of thinking.

    • @think32
      @think32 4 months ago

      @@OneSwitch "And that's racist too." To what are you referring?