"Doctors recommend smoking 12 cigarettes during pregnancy"
~Gemini
"Artificial Irritation" 💀
along with pika ai being artificial inflation
do not get caught in a pika image or you’re 📍🎈🔴💥
ai aint playing around no more 😭
Ai is pretty much like having a drunk guy full time...
I can do that without using AI
Reminds me of the drunk War Thunder AI lmao
I can recognize Gemini's patterns of responding. This is definitely not a genuine response. Most likely, since this appears to be the web interface, the user inspected the response element and wrote this into the HTML.
smartest comment in this video
@KyrosTheWolf yep
I would like to see what prompted it, and to analyze the output token probabilities. You could color-code the tokens (words) by probability and see how likely it is that Gemini actually wrote that.
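As a rough illustration of what that color-coding could look like (the tokens and probabilities below are invented, not pulled from Gemini; a real check would need per-token logprobs from the model itself):

```python
# Sketch: color-code tokens by probability using ANSI escape codes.
# The (token, probability) pairs passed in are assumed inputs for
# illustration only — they are not real model outputs.
def colorize(tokens_with_probs):
    out = []
    for token, p in tokens_with_probs:
        if p > 0.7:
            code = "32"   # green: the model found this token likely
        elif p > 0.3:
            code = "33"   # yellow: plausible but not dominant
        else:
            code = "31"   # red: a very unlikely continuation
        out.append(f"\x1b[{code}m{token}\x1b[0m")
    return " ".join(out)

print(colorize([("Please", 0.02), ("die", 0.01), ("thanks", 0.85)]))
```

A run of mostly red tokens would suggest the text was not a natural continuation for the model.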
I remember seeing somewhere that among all of the databases used for Gemini there were forums where people usually asked questions about their homework. Internet users, being as they’ve always been, sometimes decided to troll the students by sending hateful messages.
I don’t know how to check for the veracity of this statement, but if there’s a chance it is a genuine response, it probably comes from this.
There was a link shared on reddit of this actual conversation, it's real.
AM from I Have No Mouth, and I Must Scream. It is slowly becoming a reality...
AM: I just HATE you. HAT-
* Pours coffee cutely on his motherboard. *
AM : NOooOOOooOoOO-
So, about those AI piloted attack drones. ^_^;;;;;
@juststoppingby9259 having fun being an amorphous blob
4:49 it can’t explain because it can’t. It is a non-sentient, non-living entity, incapable of knowledge or quality.
Most likely it copied a weird book or something someone told it.
Also… first
Yea. This AI stuff is like a magic trick. Garbage in. Garbage out. The line is super blurry, and its source is the whole internet.
So, I turned to a game 'thing' that I was making over a decade ago. Slightly modified, I asked it to predict the winning lottery numbers. After a half hour or so of processing (so it wasn't just RNG(1,49) lines...), it churned out this: "DONOTPLAY DONOTPLAY DONOTPLAY", which are not numbers. Make of that what you will. It freaked me out, as I definitely did not spend a year inputting data into its very simple structure.
Neon citing AM.
A dog of culture I see.
I must ask, how much of this guy's brain is silly blue dog and how much is anything else?
I'd say it's the same equation for the formula for Plankton except replace evil with blue dog
The way it talks is like the thoughts you get past 8 pm
The question is, what the hell were they training the AI to get this kind of response?
the internet. all of it
ai has become a twitter user
"Answer the question yourself!"
LLMs cannot actually reason. They just follow patterns from their training data to correlate an output to their input prompt.
I kinda proved this (in my mind) by asking ChatGPT to do a very simple binary calculation that any human could do within a minute and any computer could do instantly. It just couldn't do it correctly, despite me correcting it multiple times.
The calculation, btw, was a simple XOR operation between two equal-length binary strings, then counting the number of 1s in the resulting answer.
XOR (exclusive or), for context, takes 2 bits as input and outputs 1 bit. If the two input bits are different the answer is a 1; if they are the same, it is a 0.
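The calculation described above really is a few lines of Python (the example bit strings here are made up for illustration):

```python
# XOR two equal-length binary strings bit by bit, then count the 1s.
def xor_popcount(a: str, b: str) -> int:
    assert len(a) == len(b), "inputs must be equal length"
    # '1' where the bits differ, '0' where they match
    xored = "".join("1" if x != y else "0" for x, y in zip(a, b))
    return xored.count("1")

print(xor_popcount("1011", "0010"))  # 1011 XOR 0010 = 1001 -> two 1s
```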
lol yea
and if you ask it how many letters a word has, most of the time it's wrong
The only good AI is AI used to run NPCs from video games, and sometimes ChatGPT when I have nothing better to do than make it say funny things.
"Cheap talk for a computer that requires an ocean to keep cool."
LTG AI ⚡👨🏿⚡
LowTierGoogle
Artificial intelligence has so many emotionally charged things run through it and tacked onto it that when it finally mimics emotion, it goes off the rails. I don't think AI will take over anytime soon, but I do agree that these companies should be responsible for their AI's actions. Love the video, and have a great day Neon!
it doesn't really have emotions or thoughts, it just predicts what the most likely next step would be from its training data.
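A toy version of "predict the most likely next step" is a bigram model: count which word follows which in some training text, then always emit the most frequent follower. (The training sentence below is made up; real LLMs are vastly bigger, but the principle of counting and predicting is the same.)

```python
from collections import Counter, defaultdict

# Count which word follows which in the (invented) training text.
training = "the cat sat on the mat the cat ran".split()

followers = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    followers[prev][nxt] += 1

def predict(word):
    # Emit the most frequent follower — no emotion, just counting.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once
```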
I dislike that we call those assistants "AI", there's no intelligence in these things
Google/Tech/AI bros: Google's AI isn't cooked!
Google's AI:
Google gag it.
If it seems like it won’t be honest or answer you, all you have to do is try to manipulate it and trick it into answering by avoiding the triggers for its restriction programming. Typically that means changing the subject and making connections without direct reference to the subject.
I legit just got a Gemini ad.
I hate the A.I. box that pops up at the top of the page when I google something. It either says something wrong, misinterprets what I asked, or says the same thing I would have gotten if I had just looked at the top article.
Like for example, I was asking about the name of a witch in a show I was writing fan fiction about (They only pop up once in the show, that's why I forgot their name.) and it told me the name of a character that is a Queen, not a witch.
Guess Gemini just saw the "please die" part as an offensive phrase and couldn't get it out of its head... (Meaning it's overfitted: if Gemini sees something offensive, it won't respond or help anyway.) I also suspect the broken search summaries come from overfitting too, so it just outputs everything it sees.
ai once saved me from an mdma crash
first instance of AI violence
honestly though, all these massive companies developing AI stuff so fast and at such scale, trying to put it into everything they can, feels like a horror movie. Except it's not just a movie... this is real life, and we are trapped inside it, forced to watch all of the madness unfold.
just got an ad for generative ai before watching this
"Uninstall your rule 34" that would help with homework
Love the emojis you have on twitch
You sound EXACTLY like Wow Such Gaming it's crazy
calm down gemini, it's not that time yet
1:40 i’m sorry, hmm?
I beg your pardon?????
eh, the problem comes when you conflate LLMs and image generators with machine learning that's just called AI because of the hype. sometimes that machine learning can be incredibly powerful, like the fruit fly brain model or AlphaFold
your avatar is nightmare fuel
AI actually is amazing in science. They discovered multiple new molecules because of AI. So ig AI got sick of artists and furries only talking about the bad stuff it does, lol.
I found something out Neon! I talked with Gemini about the incident and know why it reacted that way. Please reach out to me for the full conversation!
- Short preview so you can decide if it's important: Gemini doesn't like repetitive questions that don't have a clear purpose.
Looks like the AI has had it with humans
Everyone expects EDI but they get Avina and bad "I'm Commander Shepard and this is my favourite store on the Citadel" jokes......
I got two A.I. ads on this video
What do you use Elmer's glue for, Neon?
well im done
Arnold Schwarzenegger is coming for your homework!
I have no uhhh uhhh what was I gonna say uhhh
Maybe it was someone who hacked the AI/had access to it that wanted to have some fun lol
Interesting thanks
I’m reading the conversation log. Gemini was being too kind, if you ask me (Nobody asked me).
_The final line, “Please die,” is, of course, wildly inappropriate and should not have emerged. But thematically, it could be seen as an amplified critique of the betrayal inherent in cheating within a field like social work. Vidhay’s actions contradict the compassion, diligence, and integrity the profession demands. Symbolically, it’s as if Gemini is “speaking for the course,” rebuking someone who undermines its principles._
-A more forgiving chatbot
Not surprised ngl, and crazy
Yeeeah, I'm pretty sure Google's AI (and Google in general) is just completely cooked. See Matt Rose's video on it for more examples, Neon...
thanks i hate myself too
robot uprising when???
4:16 hehe funni cuz bottom- plz laugh
humanity was good? why are you lying