Artificial intelligence is NOT intelligent. It is maddening that people value these AI companies at the crazy levels they do.
The problem with trying to avoid stereotypes is that stereotypes are most often true, so if you try to eliminate them you're going to end up making a lot of factual errors. Of course, stereotypes are oftentimes not true, but they would not be stereotypes if they were not usually true.
Google's reputation for accuracy? Are you sure? Remember that YouTube is part of Google...
🎯 Key Takeaways for quick navigation:
00:20 *Google AI Backlash.*
02:54 *Gemini Model Flaws.*
05:17 *Gemini's Diversity Issues.*
08:02 *Google Stock Drop.*
14:13 *Safeguarding Against Bias.*
19:49 *Google's Overcorrection.*
22:55 *Product Failure Debate.*
23:50 *AI Biases Dilemma.*
25:14 *Personalized AI.*
26:24 *Values in AI.*
Made with HARPA AI
Diverse Nazis was the best response by Gemini.
“Prompt transformation”... now I’ve heard it all. It would be more honest to call it “smug transformation” by the usual smug people.
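For context, “prompt transformation” just means a layer that silently rewrites the user's prompt before the model ever sees it. A minimal sketch of the idea in Python; the instruction text and function name here are made up for illustration, not Google's actual implementation:

```python
# Hypothetical sketch of a "prompt transformation" layer: the user's
# prompt is rewritten before it reaches the image model. The hidden
# instruction text below is an assumption for illustration only.

HIDDEN_PREFIX = (
    "When depicting people, show a diverse range of genders and "
    "ethnicities. "
)

def transform_prompt(user_prompt: str) -> str:
    """Prepend hidden guidance; the model never sees the raw prompt."""
    return HIDDEN_PREFIX + user_prompt

print(transform_prompt("a portrait of a medieval English king"))
```

The point of contention in the video is exactly this invisibility: the rewrite happens server-side, so the user has no way to see or opt out of the injected instructions.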
Personalizing output is a horrible idea. Tying output to historically accurate diversity data would be the best option; nearly any issue could be resolved this way. Also, it is criminal not to mention that this is a problem open-source LLMs and their associated models are best situated to solve.
What is "historically accurate diversity data"?
@thomasdequincey5811 I thought that was self-explanatory: if x percent of the general population was y, then that percentage should show in the output.
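Concretely, that proposal amounts to weighted sampling against historical base rates. A minimal Python sketch, with placeholder group names and percentages rather than real demographic data:

```python
import random

# Rough sketch of the idea above: sample depicted demographics in
# proportion to historical base rates rather than a flat quota.
# The group names and percentages are placeholders, not real data.

historical_distribution = {
    "group_a": 0.80,  # x percent of the relevant population
    "group_b": 0.15,
    "group_c": 0.05,
}

def sample_demographic(dist):
    """Draw one label with probability equal to its historical share."""
    labels = list(dist)
    weights = [dist[label] for label in labels]
    return random.choices(labels, weights=weights, k=1)[0]

# Over many generations, outputs approximate the historical shares.
counts = {label: 0 for label in historical_distribution}
for _ in range(10_000):
    counts[sample_demographic(historical_distribution)] += 1
print(counts)
```

The hard part the sketch glosses over is choosing the reference population for any given prompt, which is where the disagreement in this thread actually lies.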
A bunch of analytic philosophy majors thinking Ethics is a major field of study... time to go back to pragmatic epistemology and build your ethical model from scratch.
DEI in AI creates a nightmare...
Over-correction is a problem.
As is under-correction. I don't begrudge Canadian Tire for not having an Armani suit, nor Target for not carrying the tractor part I need. But I don't do business with Amazon, because they try to play both ends against the middle while overpromising and under-delivering on a consistent basis. There's a l-o-n-g way to go with all of this... ¯\_(ツ)_/¯
When did AI safety alignment (which is about AI not being used for harmful and criminal purposes) become about political correctness?
@kickingnscreaming Avoiding stereotypes is not the same as political correctness.
When I opened Google search, Gemini AI popped up with a description of what it could do. It actually warned about being careful with its responses. I couldn’t get it off, and it froze up regular Google. Now it’s gone. I thought it must have been a prank. That’s how ludicrous it seemed.
Whoever screwed with my ability to search the internet for several years, I will sue.
Bias cannot be eliminated. It can only be replaced.
Gemini is merely reflecting the worldview of its creators.
Garbage in, garbage out.
The funny thing is, I bet there were Googlers who had the same concern, but workplace politics prevented them from saying anything.
It is an over-correction of a prior problem in their products. The original AI team left and spawned their own unicorn AI company.
Classic human assumptions about what information even is, what the universe is, the brain...
These are not bugs. The programs are working as intended.
Doesn't the internet reflect the real world? This is the first time I have heard someone complain that it is biased. What is the evidence for that?
That's funny, because the narrators of this podcast are AI voices... they sound very robotic.
Restart your phone occasionally