The Miseducation of Google’s A.I.

  • Published Jun 10, 2024
  • When Google released Gemini, a new chatbot powered by artificial intelligence, it quickly faced a backlash - and unleashed a fierce debate about whether A.I. should be guided by social values, and if so, whose values those should be.
    Kevin Roose, a technology columnist for The Times and co-host of the podcast “Hard Fork,” explains.
    Guest: Kevin Roose (www.nytimes.com/by/kevin-roos...) , a technology columnist for The New York Times and co-host of the podcast “Hard Fork.”
    Background reading:
    • Hard Fork: Gemini’s culture wars (www.nytimes.com/2024/03/01/po...) , and more.
    • From Opinion: Should we fear the woke A.I.? (www.nytimes.com/2024/02/24/op...)
    For more information on today’s episode, visit nytimes.com/thedaily (nytimes.com/thedaily?smid=pc-t...) . Transcripts of each episode will be made available by the next workday.

Comments • 30

  • @thomasdequincey5811 3 months ago +3

    Diverse Nazis was the best response by Gemini.

  • @stephenboyington630 3 months ago +7

    Artificial intelligence is NOT intelligent. It is maddening that people value these AI companies at the crazy levels they do.

  • @randomami8176 3 months ago +2

    “Prompt transformation”…now I’ve heard it all. Sounds better to call it “smug transformation” by the usual smug people.

  • @lomotil3370 3 months ago +1

    🎯 Key Takeaways for quick navigation:
    00:20 *Google AI Backlash.*
    02:54 *Gemini Model Flaws.*
    05:17 *Gemini's Diversity Issues.*
    08:02 *Google Stock Drop.*
    14:13 *Safeguarding Against Bias.*
    19:49 *Google's Overcorrection.*
    22:55 *Product Failure Debate.*
    23:50 *AI Biases Dilemma.*
    25:14 *Personalized AI.*
    26:24 *Values in AI.*
    Made with HARPA AI

  • @maxheadrom3088 3 months ago +2

    Google's reputation for accuracy? Are you sure? Remember that YouTube is part of Google ...

  • @Ryanandboys 3 months ago +10

    The problem with trying to avoid stereotypes is that stereotypes are most often true, so if you try to eliminate them you're going to end up making a lot of factual errors. Of course, stereotypes are oftentimes not true, but they would not be stereotypes if they were not usually true.

  • @UvstudioCaToronto 3 months ago +4

    Over-correction is a problem.

    • @gus473 3 months ago

      As is undercorrection. I don't begrudge Canadian Tire for not having an Armani suit, nor Target for not carrying the tractor part I need. But I don't do business with Amazon, because they try to play both ends against the middle while overpromising and under-delivering on a consistent basis. There's a l-o-n-g way to go with all of this..... ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

    • @kickingnscreaming 2 months ago

      When did AI safety alignment (which is about AI not being used for harmful and criminal purposes) become about political correctness?

    • @UvstudioCaToronto 2 months ago

      @@kickingnscreaming Avoiding stereotypes is not the same as political correctness.

  • @__________5737 3 months ago +2

    Horrible idea, personalizing output. Tying output to historically accurate diversity data would be the best option; nearly any issue could be resolved this way. Also, it is criminal not to mention that open-source LLMs and associated models are best situated to do this.

    • @thomasdequincey5811 3 months ago

      What is "historically accurate diversity data"?

    • @__________5737 3 months ago

      @@thomasdequincey5811 Thought that was self-explanatory: if x percent of the general population was y, then that percentage should show in the output.

  • @katherineelizabethco 3 months ago

    When I opened Google search Gemini AI popped up with a description of what it could do. It actually warned about being careful with your responses. I couldn’t get it off. It froze up regular Google. Now it’s gone. I thought it must have been a prank. That’s how ludicrous it seemed.

  • @brownhorsesoftware3605 3 months ago

    Bias cannot be eliminated. It can only be replaced.

  • @existentialvoid 3 months ago +2

    A bunch of analytic philosophy majors thinking ethics is a major field of study... time to go back to pragmatic epistemology and build your ethical model from scratch.
    DEI in AI creates a nightmare...

  • @Poppy-yx8js several months ago

    Whoever screwed with my ability to search on the internet for several years I will sue.

  • @meh4062 3 months ago +1

    The funny thing is, I bet there were Googlers who had the same concern. But workplace politics prevented them from saying anything.

    • @krl970 3 months ago

      It is an overcorrection of a prior problem in their products. The original AI team left and spawned their own unicorn AI company.

  • @jon9625 3 months ago +6

    Gemini is merely reflecting the worldview of its creators.

  • @goldnutter412 2 months ago

    Classic human assumptions about what information even is. What the universe is, the brain..
    These are not bugs. Programs are working as intended..

  • @felipearbustopotd 3 months ago +4

    Garbage in, garbage out.

  • @orionpk 3 months ago

    That's funny, because the narrators of this podcast sound like AI voices; they sound very robotic.

    • @joythought 3 months ago

      Restart your phone occasionally

  • @zfvr 3 months ago

    Doesn't the internet reflect the real world? This is the first time I have heard someone complain that it is biased. What is the evidence for that?