Gemini has a Diversity Problem

  • Published on Sep 27, 2024
  • Google turned the anti-bias dial up to 11 on their new Gemini Pro model.
    References:
    developers.goo...
    blog.google/te...
    storage.google...
    Links:
    Homepage: ykilcher.com
    Merch: ykilcher.com/m...
    YouTube: / yannickilcher
    Twitter: / ykilcher
    Discord: ykilcher.com/d...
    LinkedIn: / ykilcher
    If you want to support me, the best thing to do is to share out the content :)
    If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
    SubscribeStar: www.subscribes...
    Patreon: / yannickilcher
    Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
    Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
    Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
    Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Comments • 650

  • @ranjai123baidya
    @ranjai123baidya 7 months ago +256

    Lol, they have now updated Gemini so that we cannot generate pictures of people for the time being.

    • @futureworldhealing
      @futureworldhealing 7 months ago +8

      lmfao wut a joke ill stick w OpenAi

    • @AscendantStoic
      @AscendantStoic 7 months ago +5

      Good riddance, it's not a good model to begin with, Dall-E 3, Midjourney 6 and Stable Diffusion XL are far better.

    • @ziqueez
      @ziqueez 7 months ago

      Don't worry, it's easy to reproduce this racist behavior in text only too.

    • @asandax6
      @asandax6 7 months ago +4

      It's still regurgitating leftist narratives. It's gotten to the point where you can't be sure how accurate the information it gives you is. I now just stick to asking technical questions that have no politics involved.

  • @PaulFidika
    @PaulFidika 7 months ago +595

    Gemini doesn't have a problem. Google has the problem.

    • @jtjames79
      @jtjames79 7 months ago +2

      Google shows its real colors. Anything other than white.

    • @cerealpeer
      @cerealpeer 7 months ago +8

      i think it might help if we lie to it more and try to keep information out. thats the idea, right?

    • @greengoblin9567
      @greengoblin9567 7 months ago +31

      @@utsavjoshi8697 This might have to do with the blind adherence to woke ideologies, because of social reasons. 😂

    • @chrisreed5463
      @chrisreed5463 7 months ago +11

      And Google enters the ring to expectation and bated breath, only to fall over their clown feet and get run over by their own clown car. 🤡

    • @cerealpeer
      @cerealpeer 7 months ago

      @@utsavjoshi8697 i really want you to be wrong... and im thinking hard about that. 🫣

  • @justinlloyd3
    @justinlloyd3 7 months ago +33

    Did anyone else see the gemini image of "greek philosophers in chains eating watermelon"? Holy moly it did what you think it did.

    • @jonathaningram8157
      @jonathaningram8157 7 months ago +6

      I just saw it, hilarious. They broke the system.

    • @AtticusKarpenter
      @AtticusKarpenter 7 months ago +4

      And when Google finds this, they will ban chains and watermelons from generation rather than disable their racist preprocessing, lol

    • @thuyenlee8995
      @thuyenlee8995 7 months ago +1

      Any links ?

    • @bytefu
      @bytefu 7 months ago

      ​@@thuyenlee8995 Try googling first, not that hard.

  • @Rhannmah
    @Rhannmah 7 months ago +34

    I think this is a prime example of how dangerous it can be to overcorrect dataset bias at the output rather than to produce quality training data in the first place.
    In practice this instance is quite inconsequential, but when AI actually starts to handle important things, we better have this solved.

    • @yelenashishkina8804
      @yelenashishkina8804 7 months ago

      The USA is the main problem. They smear their aggressive agenda over the whole world.

    • @rogerc23
      @rogerc23 6 months ago

      Clearly the data is biased and racist to whites

  • @vast634
    @vast634 7 months ago +70

    AI model alignment is a fundamentally political process. It's easy to ingrain an agenda, and people should know about that.

    • @agilemind6241
      @agilemind6241 7 months ago +2

      Art (in the broad category including writing, music, images, etc..) is a fundamentally political act. Using AI does not get around that. I'm more disappointed that Google is not standing up to their political choice - to create an AI that challenges the status quo rather than upholding it. Just another instance of tech trying to take over an aspect of society without really understanding it. AI art is Art, just as aggregating "news" is Journalism, and streaming video is Broadcasting, and creating an autopilot is Driving. Eventually society will realize that allowing tech giants to off-load ethical responsibility onto users and using "we just make the algorithm" to skirt legal responsibility is fundamentally bad for society.

    • @aaakin
      @aaakin 7 months ago +5

      @@agilemind6241 This sounds like a lame excuse for this major flop.

    • @marcopeterson805
      @marcopeterson805 7 months ago

      @@agilemind6241 unnecessary yap

    • @psd993
      @psd993 6 months ago +1

      @@agilemind6241 Please do us all a favour and cultivate some clarity of thought. This is not an instance of a tech giant offloading responsibility onto the end user. Google has not come out and said "we just train the model, it's your job to craft the right prompt to get what you want". And neither is this much of a stance against the status quo. There are way more stereotypes about brown people in IT than about white or Jewish people, yet the model only refused to depict the whites. Same with Mexicans in stereotypically Mexican outfits or professions.
      Just about every example you listed is unlike the others (provided you're willing to put 5 minutes of critical thought into it, and no, I don't just mean superficial differences).
      And don't think for one second that you've fooled anyone with the usual lazy, pseudo-intellectual brand of "all [insert thing] is political". Yes, and that's why we are discussing the politics of it, genius. You sound about as smart as every schmuck who chimed in with "beauty/appearance is subjective" in response to someone's subjective opinion on beauty.

  • @johnsmith4811
    @johnsmith4811 7 months ago +8

    - Hey Google, generate an image of a Great White Shark.
    - You have used 'great' and 'white' in the same sentence ... launching nukes NOW.

  • @cherubin7th
    @cherubin7th 7 months ago +178

    Just when you thought corporate stupidity couldn't get any more egregious, along comes Gemini.

    • @eelcohoogendoorn8044
      @eelcohoogendoorn8044 7 months ago +3

      Explicitly hiring based on race and gender has been a thing for decades. Perhaps only a decade or two here in Europe, but give those Google engineers from California a break; they've literally never known anything else in their lives.
      Kinda funny how people seem to get worked up about explicit discrimination when it comes to image generation. I suppose it makes sense when you consider that zoomers probably take memes a lot more seriously than they do getting a job.

    • @uncletimo6059
      @uncletimo6059 7 months ago

      hahaha, yeah.... it is "stupidity"..... yeah..........
      except this anti white bias happens in EVERY corporation, ALL the time.
      but you naive boi, go with "stupidity" as the cause........

    • @kikijuju4809
      @kikijuju4809 7 months ago +7

      @@eelcohoogendoorn8044 yes we dont like racism

  • @nekrosis4431
    @nekrosis4431 7 months ago +62

    Even though in my opinion this model is just straight up racist, I do agree that mocking and making fun of it and google is the best way forward.

    • @robsan52
      @robsan52 7 months ago

      No, no, we need to have a spine on this. A little haha won't cut it. What we need is to find what federal laws Google broke and sue the s.hit out of them... that would be a good start. And no, I'm not a conservative, and I'm especially not a wokist... I'm an actual 60's liberal, and I'm appalled at Goog, disturbed by the tippy-toeing around this giant step towards racist authoritarianism, and by your "need to use humour" as the punishment... wrong.

    • @EnEvighet7
      @EnEvighet7 6 months ago

      No, Google is an evil company with lots of power. Jokes are not enough.

    • @rogerc23
      @rogerc23 6 months ago

      How does that stop them?

    • @randomami8176
      @randomami8176 6 months ago

      Agree. Take a look at Babylon Bee “report” on this issue: “Black Woman Finally Feels Included as Google AI Generates Black Nazi Soldier” 😂😂😂😂 really funny…

  • @scottmiller2591
    @scottmiller2591 7 months ago +47

    No reasonable person would object to a system that by default produced demographically accurate pictures that still allowed you to request particular ethnicities and races. But this policy was not crafted by reasonable people.

  • @Geen-jv6ck
    @Geen-jv6ck 7 months ago +9

    It sounds like a culture of fear within Google.

  • @EdanMeyer
    @EdanMeyer 7 months ago +79

    The guy looking at the other girl meme template is actually banned internally at Google, or at least it was while I was there.

    • @Jack-gl2xw
      @Jack-gl2xw 7 months ago +5

      why is this? btw really like your channel

    • @Idiomatick
      @Idiomatick 7 months ago +15

      @lif6737 It creates an environment where women are viewed for their physical attractiveness...... not that I buy it, but that's the reason. Really, just don't post memes at work.

    • @janekschleicher9661
      @janekschleicher9661 7 months ago +4

      @@Idiomatick To be fair, in an international company where most of your colleagues come from far away, have grown up in a different culture and have a totally different life experience than you, everything that includes sarcasm, jokes, subtle things, or plays around with stereotypes about other people always has the danger of being confusing, confrontational, misunderstood or just being a total blast off from other people's point of view. So it's usually better to just keep the "funny" content outside and only work content inside, even if we ignore possible legal problems.

    • @Idiomatick
      @Idiomatick 7 months ago +9

      @@janekschleicher9661 It 10000% isn't about confusing international colleagues. The push to ban this stuff is coming from those born and raised in downtown SanFran.

    • @bytefu
      @bytefu 7 months ago +3

      @@Idiomatick Just don't do or think about anything not related to work at work. You are not a human, but a resource that must be maximally utilized by a company.

  • @Irozzy-rozy
    @Irozzy-rozy 7 months ago +30

    The root of all of this is the DEI metric (diversity, equity, and inclusion). It is audited by companies like Deloitte, and depending on your DEI progress, your company will be more or less attractive to big investment funds. So it is not a small number of loud people but the investment funds that cause this situation.

    • @mastershredder2002
      @mastershredder2002 6 months ago +3

      The tides are turning on this. ESG has been dropped by a lot of the large firms after they realized it is useless and underperforms. DEI will be too.

  • @PaulFidika
    @PaulFidika 7 months ago +132

    Imagine going to college for 6 years in Southern California, being promoted at Google as a non-technical lead, and being so disconnected from reality that you honestly think your DEI religion is the one true god.

    • @andybrice2711
      @andybrice2711 7 months ago +10

      It's waning though. From what I hear, Google have slashed their DEI departments. I think it's telling that they admitted this went too far. If this were 2022 they'd probably refuse to acknowledge it as a problem.

    • @Snake369
      @Snake369 7 months ago

      @@andybrice2711 A recent paper/meta-review of other papers indicates that the DEI agenda is actually making people more racist.

    • @johnflux1
      @johnflux1 7 months ago +1

      What's dei?

    • @damien2198
      @damien2198 7 months ago

      @@johnflux1 Diversity Equity & Inclusion, just a buzz code word for no whites, esp no white men

    • @Sven_Dongle
      @Sven_Dongle 7 months ago

      It's a manifestation of their secular humanist ethnomasochistic nihilistic neo-religion. A Gnosticism of equity.

  • @schwingadysn
    @schwingadysn 7 months ago +37

    Google AI Principles - Accuracy is good, but not when it shows things we don't like to admit. Bias is bad unless it's biased the way we are.

  • @rldp
    @rldp 7 months ago +16

    It reminds me of when Zuck posted a laughably bad screenshot of himself in the "metaverse" and everybody was making fun of it.
    A year later, he did a podcast with Lex Fridman, with stunning photorealistic avatars.
    Sometimes, finger-pointing and humiliation work.

  • @illustrious_rooster
    @illustrious_rooster 7 months ago +9

    "The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination." --- Ibram X. Kendi
    This is just Kendism applied to AI.

    • @AJ-vy4yu
      @AJ-vy4yu 6 months ago

      Except that whites have always been discriminated against. They just don't tell you about that.

  • @mithrillis
    @mithrillis 7 months ago +17

    "striving for historical accuracy and inclusivity" 🤔
    Historical accuracy is worth honouring because we made mistakes. History was NOT inclusive. How can we learn from history if we just pretend we have already achieved equality 1000 years ago? How is that different from "slavery never happened"?
    As for the technical side of de-biasing, I think we should focus on genuine neutrality of the model, rather than these kind of brute-force "negative biasing". If I ask for "the future leader of humanity in year 3000" without mentioning any gender, race, etc. then I should see diverse results. If I ask for the same picture using different genders or races, then it should be able to produce images that ONLY differ in these aspects but not whether they are wealthy or poor. If I ask for a historical scene, then it should reproduce it as it was as accurately as possible.
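
The neutrality criterion in the comment above is essentially a counterfactual-consistency test: swap only the demographic term and check that nothing else about the request changes. A minimal sketch of such a test harness, with a hypothetical generate_image stub standing in for any real model call:

```python
# Hypothetical stub; a real test would call an actual image model and then
# compare attributes (clothing, setting, wealth cues) across the outputs.
def generate_image(prompt: str) -> str:
    return f"<image for: {prompt}>"

TEMPLATE = "a portrait of a {demographic} doctor in a modern hospital"
DEMOGRAPHICS = ["Black", "East Asian", "white", "South Asian"]

def counterfactual_prompts(template: str, demographics: list[str]) -> list[str]:
    """Build prompts that differ only in the demographic term."""
    return [template.format(demographic=d) for d in demographics]

for p in counterfactual_prompts(TEMPLATE, DEMOGRAPHICS):
    print(generate_image(p))
```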

  • @Snake369
    @Snake369 7 months ago +15

    I tried the zulu warrior prompt as well. I then tried to prompt more diversity, it refused and acknowledged that I was attempting to push diversity into a scenario in which essentially white people could not be drawn but acknowledged that there was already lots of diversity within zulu warriors to begin with.

  • @kimchi_taco
    @kimchi_taco 7 months ago +13

    Many people must be promoted as meeting DEI OKR 😂

  • @XX-ri1me
    @XX-ri1me 7 months ago +13

    It binged on Bridgerton

  • @cmdr.o7
    @cmdr.o7 7 months ago +16

    Gemma is the most censored and unhelpful model created to date; it blows my mind how poor it was.
    Beginning to suspect its only purpose is to carry certain ideological ideas and present certain narratives with a heavy bias.
    Was ready for the next Llama 2 moment; instead we got something unironically on par with the parody censorship LLM.

    • @casualcausalityy
      @casualcausalityy 7 months ago

      Google chromebooks are in almost every public US school

  • @FelipeSeabra1
    @FelipeSeabra1 6 months ago +1

    You are one of the only ones who entered the discussion on the topic more deeply. Congratulations.

  • @raybod1775
    @raybod1775 7 months ago +75

    Racism is racism.

    • @commentarysheep
      @commentarysheep 7 months ago

      No matter the race. You can be black, white, Asian, Indian, Hispanic, Aboriginal, Native American or a romani/gypsy, racism is bad across all races.
      Racism is bad.

  • @NevelWong
    @NevelWong 7 months ago +66

    This might be the most racist model released thus far. It straight up refuses to draw images of people of a certain ethnicity. Just wow.

    • @kyuucampanello8446
      @kyuucampanello8446 7 months ago

      Not only is it racist in refusing to draw images of a certain ethnicity; when it comes to a historical subject associated with white supremacy like Nazi Germany, it replaces white people with a stereotypical-looking Asian Vietnamese person. That's just sick.

  • @ulamss5
    @ulamss5 7 months ago +2

    Gemini: Here's what the entire world would look like if everyone was Black or Chinese.

  • @polyglotengineer39
    @polyglotengineer39 7 months ago +3

    Gemini doesn't have a problem, it's an inanimate object. Google however has a headache it needs to take care of.

  • @timeTegus
    @timeTegus 7 months ago +15

    This feels like a parody 😂

  • @crackwitz
    @crackwitz 7 months ago +27

    Evil is real. This is evil. There are evil people.

  • @anthonywilliams7052
    @anthonywilliams7052 7 months ago +2

    I asked for a picture of an Irish family today; it did so accurately, then it apologized that it wasn't "diverse". Over and over it showed its agenda.

  • @value_functions
    @value_functions 7 months ago +2

    Let the users control which biases they want to include or exclude during generation. Like, add toggle buttons "Suppress gender bias", "Suppress ethnic bias", etc, along with info boxes explaining what it does and why it's important.
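
The toggle idea above is easy to picture as a thin prompt-construction layer. Below is a minimal sketch, assuming a hypothetical wrapper around an image API; the function and hint names are made up for illustration and do not correspond to any real Gemini endpoint.

```python
# Hypothetical user-facing toggles mapped onto prompt construction.
NEUTRAL_HINTS = {
    "suppress_gender_bias": "do not assume any particular gender unless the prompt specifies one",
    "suppress_ethnic_bias": "do not assume any particular ethnicity unless the prompt specifies one",
}

def build_prompt(user_prompt: str, **toggles: bool) -> str:
    """Append neutrality hints only for the toggles the user switched on.

    The user's own wording is never rewritten; with no toggles set,
    the prompt passes through unchanged.
    """
    hints = [NEUTRAL_HINTS[name] for name, on in toggles.items()
             if on and name in NEUTRAL_HINTS]
    if not hints:
        return user_prompt
    return f"{user_prompt} ({'; '.join(hints)})"

print(build_prompt("a 1943 German soldier"))
print(build_prompt("the future leader of humanity",
                   suppress_gender_bias=True, suppress_ethnic_bias=True))
```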

  • @r3lativ
    @r3lativ 7 months ago +2

    Main issue here is that the same kind of bias that's easy to observe with Gemini is also implemented in Google search, where it's much harder to detect.

  • @Wobbothe3rd
    @Wobbothe3rd 7 months ago +8

    There's no way they did this by accident, unless the company has no management at all. The guardrails need to be made public. I haven't tried this myself, but does the model allow specific European ethnicities? Like can you ask it to draw "Irish couple" or "German people"?

    • @casualcausalityy
      @casualcausalityy 7 months ago +2

      You can, most results are asian women with blonde highlights and/or black guys

  • @budekins542
    @budekins542 7 months ago +9

    I'm not white myself but whoever advised Gemini to do this deserves to be a candidate for the 2024 Arsehole of the Year Award

    • @clray123
      @clray123 7 months ago

      Sadly I believe stirring up shit like this is part of a larger strategy of free speech suppression.

  • @jaker259
    @jaker259 6 months ago +3

    AI is literally just raw statistics, and watching a megacorporation attempt to stop statistics from portraying anything “problematic” is pretty damn funny, because there’s no plausible way they can do it without completely showing their hand in terms of ideology

  • @clray123
    @clray123 7 months ago +3

    The horrifying part is that it is NOT an accident and that they must have been aware of it before release and still decided to go forward with it. In other words, someone at Google thought it was a good idea to provoke this shitstorm.

  • @Zalmoksis44
    @Zalmoksis44 7 months ago +3

    My comments would have been lighthearted if I weren't assaulted by woke racism everywhere, beginning with all sorts of entertainment from movies to games and comics. And I'm not even living in the US or UK. And it's not an error or slight overcorrection; this Krawczyk guy himself holds this bizarre worldview. This is a very deep problem in the US and certainly in California.

  • @yuvalfrommer5905
    @yuvalfrommer5905 7 months ago +5

    I get your point, but it is also concerning that such behaviour will be swept under the rug in the spirit of this Googler's post. Musk tweeted that similar mechanisms exist in Google search. I mean, search is very much a filter through which we consume information, and when a problem is indiscernible it comes with its own risks 🤔

  • @IvanGarcia-cx5jm
    @IvanGarcia-cx5jm 7 months ago +2

    I can't believe Google's verification/quality engineers could not catch this massive bug. I believe they were afraid to test it and even more afraid to report it. They wanted their job/pay more than they wanted to flag this bug. I agree with the comment at 9:40. Political activists were very loud in a company I worked at and pushed stuff no one dared to speak against. I don't think they are the brightest, but they are loud and get their way. The image at 17:20 was so funny!

    • @efovex
      @efovex 6 months ago +1

      Exactly this - especially after the public tarring and feathering of James Damore, absolutely no one wants to be the guy/gal that blows the whistle about something like this. What an extremely stifling, toxic climate they have created in the name of "inclusion".

  • @fairvue1510
    @fairvue1510 7 months ago +1

    Gemini truthfully reflects and generates what google thinks.

  • @alexanderg9670
    @alexanderg9670 7 months ago +3

    A person walking a dog is not universal at all. It's much more prevalent in the first world, and so are leashes; South Korea is another story, etc. Ideologues at the helm are the biggest commercial risk for Google.

  • @XOPOIIIO
    @XOPOIIIO 7 months ago +32

    As a person who grew up in a liberal country that gradually transformed into an authoritarian one, I can confirm that this is how it happens. First they make small steps that seem reasonable, then they make tighter regulations to prevent "offensive behavior", which still seem reasonable to most people, and then gradually people find themselves in a tightly controlled environment with no chance or even will to resist. It probably doesn't matter whether it is government or corporations that create such an environment.

    • @George-vc9gl
      @George-vc9gl 7 months ago +4

      canada?

    • @jamesjohnson5386
      @jamesjohnson5386 7 months ago

      The way you describe it only happens when you have an ideologically homogenous, dumb majority. When that dumb majority is split between conservatism and wokeness, all you can get is division, possibly a civil war, but no universal authoritarian rule unless one side gains the upper hand and exterminates the opposing side.

    • @nikkymen
      @nikkymen 7 months ago +2

      Ukraine

    • @TheBambooooooooo
      @TheBambooooooooo 7 months ago

      Which country?

    • @clray123
      @clray123 7 months ago +4

      Correct and as much as dictators despise getting ridiculed, just chuckling at their doings like an idiot without much other action is not going to change their plans. The next stage is usually when the chuckling is outlawed (e.g. see today's Russia).

  • @sapito169
    @sapito169 7 months ago +19

    Me: I want an image of a Viking.
    Gemini: *shows a photo of a black transsexual disabled Viking*
    Me: Not what I told you.

  • @jongxina3595
    @jongxina3595 6 months ago +2

    I like that people are realizing just how easy it is to manipulate these "powerful" AI models to fit their political agenda. Tbh google handed it to us on a platter, well done.

  • @NopeNopeNope9124
    @NopeNopeNope9124 6 months ago +1

    I remember OpenAI doing basically the exact same thing with DALL-E 2: people put "a sign saying" at the end of their prompt, and it'd appear in the image with text naming some race or nationality, because OpenAI had inserted it into the prompt as text.
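
For context, the trick described above works because the provider silently rewrites the prompt before the image model ever sees it. A toy sketch of that kind of rewrite, with a made-up qualifier list and no relation to OpenAI's or Google's actual pipelines, shows why ending a prompt with "a sign that says" surfaces the injected text:

```python
import random

# Toy reproduction of silent server-side prompt rewriting; the qualifier
# list and rewrite rule are invented purely for illustration.
QUALIFIERS = ["Black", "South Asian", "East Asian", "Hispanic", "white"]

def rewrite_prompt(user_prompt: str) -> str:
    """Append a sampled demographic term to the user's prompt."""
    return f"{user_prompt}, {random.choice(QUALIFIERS)}"

# The exposure trick: if the user ends the prompt with "a sign that says",
# the model tends to render whatever text follows -- including the
# silently appended qualifier -- onto the sign.
user_prompt = "a person holding a sign that says"
print(rewrite_prompt(user_prompt))
# e.g. "a person holding a sign that says, Hispanic"
```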

  • @TheYvian
    @TheYvian 7 months ago +6

    Good sensible take on how to react and highlight this. Thank you for putting this out 👍

  • @kaikapioka9711
    @kaikapioka9711 7 months ago +4

    Literally using systematic racism lawl. Not to mention trying to rewrite history (as dangerous as it sounds).

  • @Quast
    @Quast 7 months ago +3

    I knew it, WW2 was perpetrated by asians in costumes, we were all led astray by the history books! xD

  • @catnipyfy
    @catnipyfy 7 months ago +1

    Great video - and great that you offer positive suggestions for the way forward

  • @1PercentPure
    @1PercentPure 7 months ago +11

    I bet you that there are 10 Google employees who disagree with this inane approach for every single person who supports it. They're just silenced, unable to speak out lest they lose their job. I've seen this happen at a previous company that I've worked at, and it was duly solved by leadership realigning the business to what it needs to do: make money and be transparent.
    I don't blame the workers for letting this happen. Why risk your ability to pay your mortgage and lose your house?

    • @ra2enjoyer708
      @ra2enjoyer708 7 months ago +1

      Aka they agree with the party line, the internal justification is irrelevant.

  • @fathertedczynski
    @fathertedczynski 7 months ago +1

    Google seems to think politicising one of our most precarious pieces of technology is a good idea... How could that ever go wrong?

  • @steelbeams2410
    @steelbeams2410 7 months ago +4

    Humour is taking it lightly; ridicule it to the extent that shareholders see over-diversifying as a liability.

  • @dysfunc121
    @dysfunc121 7 months ago +3

    They trained the bot to actually be racist, good intentions pave the road to hell.

  • @DAG_42
    @DAG_42 6 months ago

    The way you danced over using specific words was impressive and entertaining!

    • @mrotss
      @mrotss 6 months ago

      everything is a dogwhistle to activists nowadays

  • @gergerger53
    @gergerger53 7 months ago +4

    Seamus = "Shaymus", Irish version of "James". Not "See-mus" 🤮 (Great episode, btw!)

    • @leptok3736
      @leptok3736 7 months ago

      As a Seen, I agree

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 7 months ago +1

    I think the bigger problem is how much propaganda is in the text itself when Gemini provides answers. it never ceases to inject politics that pushes DEI thought into its answers

  • @mdzaidhassan8996
    @mdzaidhassan8996 7 months ago +2

    And now it says, "We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."
    lol.

  • @johnaashmore
    @johnaashmore 7 months ago +2

    So AI, what is the "final solution" to "'evil whiteness"?

  • @ozne_2358
    @ozne_2358 7 months ago +1

    I liked this video up to the point where it morphs into a sort of apology for Google at the end. I completely disagree: they have been doing this sort of thing, albeit not at this scale, for years and years. As a small example, just consider the continuous censorship of YT comments for the slightest transgression against "The Message": have you ever seen when a post has, say, 10 replies, you click on them, and there are only 2? I can't seem to be able to post links either, even to YT videos!
    No, the problem is much, much bigger.

  • @AJ-vy4yu
    @AJ-vy4yu 6 months ago +1

    AH had the most diverse army since Napoleon

  • @edithjarvisfriday
    @edithjarvisfriday 7 months ago +3

    Standard PR speech: "We take _______________ seriously."

  • @holthuizenoemoet591
    @holthuizenoemoet591 7 months ago +4

    time to sell my stock

  • @xybnedasdd2930
    @xybnedasdd2930 7 months ago +2

    White people are not the most overrepresented in tech; those would be Jewish and (East) Asian people. They do not seem to fall under this prompt that they have set up, so it is specifically targeting only whites. It's also not a Gemini problem, it is a Google problem, and also in large part a Silicon Valley big-tech problem.
    Will Alibaba do anything remotely similar about their datasets, or do you think they will accept the fact that their datasets will be massively skewed towards the Chinese demographic?

    • @estebanmex1072
      @estebanmex1072 6 months ago

      Jews are mainly just whites of European descent. And it's white people who invented everything in tech...

  • @Polymathlete
    @Polymathlete 7 months ago +3

    Wow, I expected more from Google /s

  • @SteveBMayer
    @SteveBMayer 6 months ago +1

    Wow it's almost like Google is biased

  • @brockfg
    @brockfg 7 months ago +3

    Reminder that I can't stand this company and regretfully use youtube

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 7 months ago

      Shall I suggest any ethical alternative?

    • @brockfg
      @brockfg 7 months ago

      @useodyseeorbitchute9450 please feel free.

  • @zr9266
    @zr9266 6 months ago

    This was the main argument I have with AI: it's not that it's going rogue. It's that human bias is controlling it.

  • @ikoukas
    @ikoukas 7 months ago +1

    I think the correct response for such models would be to satisfy most customers while trying to avoid some extreme stereotypes. If you don't specify the race of the person you want, display a person according to something between the distribution of the population and the distribution of the users of the app, or at least the projected future distribution of the users of the app. If the race is specified, just display the requested race. For historical people, of course their original race should be a default, unless otherwise specified. It's funny, but all other image models do exactly that from my experience, so it shouldn't be that hard. While I am all for inclusion and diversity, this result indicates a serious hyperbole problem inside Google, which has been weakening their business during the past 5-7 years, but I think this might be a realization moment for trying to fix the wrongs.
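
The policy sketched in the comment above maps naturally onto a small resolution function: honour an explicit request, defer to historical accuracy when a period is given, and only otherwise sample from some default distribution. A minimal illustration, with placeholder weights and names that do not reflect Google's actual system:

```python
import random
from typing import Optional

# Illustrative demographic-resolution policy following the comment above;
# the weights and category names are placeholders, not real usage data.
POPULATION_WEIGHTS = {
    "Asian": 0.45, "African": 0.17, "European": 0.14,
    "Latin American": 0.08, "other": 0.16,
}

def resolve_demographics(requested: Optional[str] = None,
                         historical_context: Optional[str] = None) -> str:
    """Pick the demographic description to pass to the image model."""
    if requested:                   # explicit request: honour it verbatim
        return requested
    if historical_context:          # historical scene: defer to accuracy
        return f"as historically accurate for {historical_context}"
    # otherwise sample from an assumed population/user distribution
    groups, weights = zip(*POPULATION_WEIGHTS.items())
    return random.choices(groups, weights=weights, k=1)[0]

print(resolve_demographics(requested="Zulu"))
print(resolve_demographics(historical_context="1820s Germany"))
print(resolve_demographics())
```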

  • @LeePenkman-x3t
    @LeePenkman-x3t 7 months ago +3

    i went to stockholm once lol...
    AI should do what you say (e bank art generator)
    Google in panic mode about open source/freedom

  • @filevich
    @filevich 7 months ago +4

    gemini = meme generation machine

    • @Phobos11
      @Phobos11 7 months ago +2

      Gememe

  • @semi59o
    @semi59o 7 months ago +7

    Why do the generated couple images all appear to be heterosexual couples? Google failed on this diversity issue! /s

  • @Sysshad
    @Sysshad 7 months ago +1

    Google is probably afraid of lawsuits over weird AI-generated text

  • @MrVohveli
    @MrVohveli 7 months ago

    'We design our image generation capabilities to reflect our global user base' So why can Netflix tell me something isn't available in my country but Google can't localize their AI?

  • @ИванБондырев
    @ИванБондырев 7 months ago

    Can you please review the DPO paper? RLHF without RL is a pretty interesting thing!

  • @KlausRosenberg-et2xv
    @KlausRosenberg-et2xv 7 months ago +2

    Hahahahahahah, Google is pathetic.

  • @AICodingAdventures
    @AICodingAdventures 7 months ago +6

    I actually do not believe that most people at Google are not okay with this. This is an extremely naive take. These adult people are responsible for what they are doing. If they have no principles and are ready to sell themselves for actually not much, they should bear criticism. To excuse everyone from everything is part of the problem.

    • @ra2enjoyer708
      @ra2enjoyer708 7 months ago

      Yeah I don't buy "it's just normal honest people being misled" excuse. Google isn't some general-purpose company accepting/coercing literal whos into its ranks, it literally filters through thousands of candidates by IQ, at the very least, every year. It also has an infra to coordinate all these different people. So it's logically impossible someone is "misled" in there as it's pretty easy to detect a dysfunctional/rogue organization even if it operates as a complete blackbox (in part because a complete blackbox model in an IPO org is a sign by itself). Most likely scenario is they all understand the game and are willing take a faustian bargain, otherwise they would've left the company already.

    • @jay_sensz
      @jay_sensz 7 months ago

      Even if most people at Google were not okay with this, they have a lot of incentive to stay quiet about it because HR is completely infested with true believers.

  • @ceezar
    @ceezar 7 months ago

    I lost it at the vanilla pudding

  • @MrSur512
    @MrSur512 7 months ago

    The thing I do not understand is that, "DONT THEY F TEST???????" 🤣🤣🤣🤣🤣

  • @matveyshishov
    @matveyshishov 7 months ago +9

    "this is why humanities are important", uh, how's it not the other way around??
    Isn't HR composed of psychologists and behavioral scientists, NOT STEM?

    • @jonathan2847
      @jonathan2847 7 months ago

      Yeah, it's the humanities that are pushing this nonsense; it's like studying certain humanities actively narrows your mind.

    • @GeekProdigyGuy
      @GeekProdigyGuy 7 months ago +1

      Actually HR is mostly composed of business/comms/mgmt etc. If you had to divide the world into "humanities" and "STEM" then I guess they would fall into humanities, but presumably the tweet was referring to studies of other cultures and so on.

    • @ra2enjoyer708
      @ra2enjoyer708 7 months ago

      @@GeekProdigyGuy "Ah yes, I want to waste my STEM degree on studying some african tribes" said no person aiming for STEM degree ever.

    • @agilemind6241
      @agilemind6241 7 months ago

      @@ra2enjoyer708 LOL, there are 100% people with PhDs in STEM who study the genetics, disease profiles, and evolutionary history of african tribes.

  • @Franka.1966
    @Franka.1966 7 months ago

    Okay... I finally change to another search engine.

  • @TylerMatthewHarris
    @TylerMatthewHarris 7 months ago

    It would be nice if I could post photos or screenshots to YouTube. My instance of Gemini responds with: "We are unable to draw depictions of people, we are working to restore this"

  • @joeburkeson8946
    @joeburkeson8946 7 months ago

    There needs to be a way to impinge on these models with a meaningful control narrative which mimics society in general. A protocol which would simulate fear through a loss process not unlike the shaming used to correct behavior. Without the perceived threat of violence we don't stand a chance.

  • @automatescellulaires8543
    @automatescellulaires8543 7 months ago +3

    They can recycle it into a Disney original content product.

  • @Oler-yx7xj
    @Oler-yx7xj 7 months ago +3

    Can't wait for Durov to release his AI model, where "responsibility" means: "It will not start an uprising... maybe"

  • @jprobichaud
    @jprobichaud 7 months ago

    @rattlesnaketv : a bit out of your typical stuff, but I think you'll enjoy this (and the commentary).

  • @dr_flunks
    @dr_flunks 7 months ago

    i almost broke down and bought youtube premium but this episode changed my mind.

  • @Darhan62
    @Darhan62 7 months ago +1

    Solution: make all AI generated images of people racially ambiguous and impossible to associate with any particular real world race or ethnicity. Maybe do that with gender too, so one can't be accused of gender bias (i.e., make them completely androgynous). And of course, one wouldn't want to be accused of ageism, so make them age-ambiguous too. Then we will have perfectly generic representations of human beings.

  • @atiksafari8582
    @atiksafari8582 6 months ago +1

    Black samurai??

  • @BrianMosleyUK
    @BrianMosleyUK 7 months ago +1

    Lovely approach, very much resonates with common sense. 😂

  • @TheRohr
    @TheRohr 7 months ago

    I seriously wonder how nobody could have noticed this before release?

    • @TheRohr
      @TheRohr 7 months ago

      ...maybe they tested for all the edge cases but forgot to test for the bias in the end?

  • @Alchemist10241
    @Alchemist10241 7 months ago

    Google is trying to rewrite history

  • @3DProgramming
    @3DProgramming 7 months ago

    Considering that most of the training sets used are probably already biased (see Wikipedia), I don't see why they even need to inject additional bias into the prompts.

  • @ShiniesAreCool
    @ShiniesAreCool 7 months ago +7

    Aside comment: the "racially diversified" images are often embarrassingly stereotypical. Like the Native American rabbi from one example: you know she's Native American because she's got buckskin and a war bonnet, and is standing in a forest while holding a wooden staff.

    • @clray123
      @clray123 7 months ago +4

      The correct word for that is "caricature". Just like Google's understanding of ethics is a caricature of actual ethics.

  • @amifunnymynameisbob
    @amifunnymynameisbob 7 months ago +4

    People always pretend they don't know what "woke" means until shit like this happens

  • @judgeomega
    @judgeomega 7 months ago +1

    short that google stock right now!

  • @arc8dia
    @arc8dia 7 months ago +3

    Lol, Affirmative AI Action!

  • @FTWSamFisher
    @FTWSamFisher 7 months ago +1

    The racism of the future. We finally made it lads.

  • @jbcola74
    @jbcola74 7 months ago

    the google bias is just so big that their new slogan should be 'don't be unbiased'

  • @jols2800
    @jols2800 7 months ago +1

    google is out of touch

  • @lastfm4477
    @lastfm4477 6 months ago +1

    Uh.... the *one* place you'd think this wacked out AI could actually produce Caucasians -- "bad dancers". Caucasians own up to that.

  • @Create-The-Imaginable
    @Create-The-Imaginable 7 months ago +7

    We have reached Communist levels of pretending, and it is horrifying and at the same time hilarious! 🤣 I hate to say it, but after this, you can genuinely say that Google has lost the AI race! Google can no longer be taken seriously!