This is Tragic and Scary

แชร์
ฝัง
  • เผยแพร่เมื่อ 13 ม.ค. 2025

ความคิดเห็น • 24K

  • @Evanz111
    @Evanz111 2 หลายเดือนก่อน +20551

    I’m not sure what’s more tragic: taking your own life, or all of your fictional sexing being aired on the news, as the last thing people remember you for. Poor guy.

    • @brundle_fly_3895
      @brundle_fly_3895 2 หลายเดือนก่อน +414

      😭

    • @lynxarcade2020
      @lynxarcade2020 2 หลายเดือนก่อน +173

      lmao

    • @Seifer-k6z
      @Seifer-k6z 2 หลายเดือนก่อน +1001

      At least he ain't here to see this

    • @operationfreeworld
      @operationfreeworld 2 หลายเดือนก่อน +123

      Consequences

    • @rotted2023
      @rotted2023 2 หลายเดือนก่อน

      ​@@operationfreeworld he didn't do anything wrong. Stfu with that

  • @SsJsSx
    @SsJsSx 2 หลายเดือนก่อน +7721

    For me, it looked like a kid was trying to get comfort and help that he couldn’t find in the real word. Tragic

    • @squishroll2183
      @squishroll2183 2 หลายเดือนก่อน +322

      right but what gets to me is if you go on social media searching this up, the comments are full of people mocking him and thinking of how embarassing it is. like the boy's already passed away I dont know what these people think they're achieving here other than being annoying and disrespectful.

    • @fridgefreezer9529
      @fridgefreezer9529 2 หลายเดือนก่อน +186

      @@squishroll2183 They dont understand the reason he used AI to make sure his sanity check and avoid suicide, nope they blame AI, they didnt want to see what exactly drove him to that point.

    • @TotallynotXolt-w8j
      @TotallynotXolt-w8j 2 หลายเดือนก่อน +21

      @alwwqe you have a brawl stars pfp shut up

    • @JohnSmith-kx3nx
      @JohnSmith-kx3nx 2 หลายเดือนก่อน

      @@alwwqeyeah man ikr!!!! Woooomp woooomp. HAHAHAHAHAH

    • @stopthewarbetweenrussiaand3895
      @stopthewarbetweenrussiaand3895 2 หลายเดือนก่อน +55

      @@JohnSmith-kx3nx I hope you never have kids one day.

  • @CorporalGrievous93
    @CorporalGrievous93 2 หลายเดือนก่อน +25483

    The saddest part of this is that the poor kid had severe issues prior to any interaction with the bot and clearly had absolutely nobody to talk to about them. Talk to your kids. Make it clear that your kids can tell you ANYTHING without fear of punishment or they’ll just learn to hide things from you.

    • @BettiePagan
      @BettiePagan 2 หลายเดือนก่อน +983

      Fear breeds incredible liars (I’m actually telling the truth on this one)

    • @LycanKai14
      @LycanKai14 2 หลายเดือนก่อน +658

      It's also sad because that part will be ignored since it's trendy to hate on all things AI/fearmonger the hell out of it. Someone doesn't turn to AI for their social interaction because they're happy with a great life.

    • @DaijDjan
      @DaijDjan 2 หลายเดือนก่อน +246

      To be fair: Kids will ALWAYS hide stuff from their parents, no matter what - thinking otherwise is delusional.
      No judgement on my part concerning this case as I flat out don't know enough about it.

    • @Zay-tx6mz
      @Zay-tx6mz 2 หลายเดือนก่อน +100

      @@LycanKai14hey man if there’s one thing that is an indistinguishable human trait it’s that tendency to blame someone or something for their faults.

    • @drewt7602
      @drewt7602 2 หลายเดือนก่อน +8

      EXACTLY

  • @youyou475
    @youyou475 2 หลายเดือนก่อน +612

    "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment." They're updating the bots to have 50 more warnings, wow.

    • @youyou475
      @youyou475 2 หลายเดือนก่อน +37

      It's like, they only updated it BECAUSE of this incident

    • @neo-didact9285
      @neo-didact9285 2 หลายเดือนก่อน +3

      @@youyou475 Role Playing should be illegal.

    • @AyinGaming-d
      @AyinGaming-d 2 หลายเดือนก่อน +47

      originally it said that everything they say was fictional.
      it had a warning, but i guess they added more after the incident

    • @Fit_unicorn2
      @Fit_unicorn2 2 หลายเดือนก่อน +11

      They had to dumb it down.

    • @amarebarriner5895
      @amarebarriner5895 20 วันที่ผ่านมา +5

      @@neo-didact9285why?

  • @Steak
    @Steak 2 หลายเดือนก่อน +39434

    the crazy thing is it's only gonna get worse we are literally at the very start of ai

    • @DaAbsoluteUnit
      @DaAbsoluteUnit 2 หลายเดือนก่อน +168

      Broooo hi steak

    • @1nfectedandroid
      @1nfectedandroid 2 หลายเดือนก่อน +93

      Yoo its steak??? Please make a video on this i want to see what you habe to say

    • @MrSluFa
      @MrSluFa 2 หลายเดือนก่อน +591

      2 years into AI and they already got a kill
      Edit: I specifically mean AI chatbots

    • @Blueyforlife-b4j
      @Blueyforlife-b4j 2 หลายเดือนก่อน +19

      YOOO STEAK

    • @epiccheetoman
      @epiccheetoman 2 หลายเดือนก่อน +8

      STEAK?

  • @sharalinn2639
    @sharalinn2639 2 หลายเดือนก่อน +15877

    CharacterAI is made for roleplaying so every bot in that app takes whatever people will tell it as a roleplay prompt and will respond accordingly. Seeing this is absolutely heartbreaking.

    • @nicole-xx8xi
      @nicole-xx8xi 2 หลายเดือนก่อน +2226

      exactly, the conversations are ultimately led by the user. it even has a disclaimer that says everything it says is made up.

    • @ADreamingTraveler
      @ADreamingTraveler 2 หลายเดือนก่อน +1590

      The site even said that the user edited the bots messages which has a huge impact on the flow of conversation. A ton of it was edited by him.

    • @tenkuken7168
      @tenkuken7168 2 หลายเดือนก่อน +659

      The parents should be blamed on this one like if they are good parents the kid won't be using ai to fix his problem

    • @Shannon-vv6rr
      @Shannon-vv6rr 2 หลายเดือนก่อน +680

      You can also edit what the character says. I use it all the time and it's 100% led by the user and I can press edit on the ai's response and edit it to guide the convo, I can guarantee that happened here. For this tragic teenager, it was a coping mechanism behind much bigger and tragic issues in his real life. It's sad but ultimately it's the mothers fault for not knowing her son was spending all of his time isolating and coping with feeling alone with Ai. At 14, there should be some parental monitoring. Rip to him
      It's like people saying gaming is bad when reality dictates that the parent should be parenting and monitoring game times and phone times and content they're consuming and engaging with, and be aware of their child's physical isolating and also have a relationship that's trusting enough where he doesn't have to hide it. Gaming isn't bad, mental health isolation and using gaming to escape life is bad. Parents, talk openly with your kids about online stuff. He could've opened up to his mother if she'd spotted his obvious troubles and he felt able to open to her and not have to cope with his feelings completely alone and using AI for it. It's her fault ultimately, and it's sad, but true. He needed support and care.
      Edit: 🙄 I'm a gamer myself... I'm referring to the AI craze being like the gaming craze, like satanic panic, where parents use a scapegoat for their children's mental ill health, troubles and their poor parenting. Thought it was pretty clear so don't come for me.

    • @didu173
      @didu173 2 หลายเดือนก่อน +231

      true, of course an roleplay ai is going to try to be "in character". sadly the poor bloke forgot that its ai

  • @mezzopiano222
    @mezzopiano222 2 หลายเดือนก่อน +4814

    can’t wait for my character ai chats to be leaked when i die

    • @moderndayentertainer.9516
      @moderndayentertainer.9516 2 หลายเดือนก่อน +575

      I'm deleting my shit.

    • @mezzopiano222
      @mezzopiano222 2 หลายเดือนก่อน

      @@moderndayentertainer.9516 LMAOOOOOO

    • @mezzopiano222
      @mezzopiano222 2 หลายเดือนก่อน +625

      “guys cai killed him!”
      and it just loads up the lewdest chats you’ve ever seen
      (him as in me..)

    • @jackdaniel3135
      @jackdaniel3135 2 หลายเดือนก่อน +594

      Oh no, he's dead!..... and he was 𝓯𝓻𝓮𝓪𝓴𝔂! NOO00000

    • @vsmumu
      @vsmumu 2 หลายเดือนก่อน +53

      pls dont die

  • @iloveplasticbottles
    @iloveplasticbottles 2 หลายเดือนก่อน +1745

    I blame the parents. They werent supervising him, they knew he had problems and was in therapy, and yet they still left their gun unsecured.

    • @AAthirteenthirteen
      @AAthirteenthirteen 2 หลายเดือนก่อน +15

      -i think the companies hosting these predatory bots are more to blame-
      what FishSticker said

    • @SirbleGurble
      @SirbleGurble 2 หลายเดือนก่อน +151

      @@AAthirteenthirteenthe bots can’t say anything sexual they are literally built with a filter and are made to not break character until told to do so you have to try REALLY hard to make them like that also crazy how you think the parents aren’t to blame when there was a easily accessible gun and the parents let him stay in his room hours a day without checking up on him

    • @skypie5374
      @skypie5374 2 หลายเดือนก่อน +92

      @@AAthirteenthirteenThe AI bots did not ever say to take his life. Mentally this kid had reached a point where he couldn’t distinguish fantasy from reality.

    • @face-wf5ev
      @face-wf5ev 2 หลายเดือนก่อน +28

      ​@@AAthirteenthirteen Predatory bots? I'd like you to elaborate on that, because from what i know the bot never said anyhing encouraging him to unalive himself, it's a mixture of the kid and the parent for me

    • @jacobnapkins1155
      @jacobnapkins1155 2 หลายเดือนก่อน

      Crazy how many people out here defending massive tech companies even crazier how many of them are actual bots

  • @surusweet
    @surusweet 2 หลายเดือนก่อน +2749

    It’s not mainly about the bot, it’s about how this poor child clearly didn’t feel like he could connect to any real person. Depression and other mental illnesses can distant people from forming connections or even simply being able to ask for help. I grew up in an abusive home, which gave me several mental illnesses. It wasn’t until I was in my mid twenties that I figured out that I have something traumatically wrong with me and I sought help and was diagnosed with different mental illnesses. I’m not 100% better and sometimes on the decline, but it doesn’t help that I live in a country that doesn’t provide affordable healthcare. I digress, please don’t be afraid to reach out to actual people. Complete strangers have done more for me than close relatives.

    • @boogityhoo
      @boogityhoo 2 หลายเดือนก่อน +29

      Sorry you are suffering from these ailments and what you have said is spot on and could very well help someone who happens upon it and reads. Kudos to you and you have ppl who care ❤

    • @ProjektBurn
      @ProjektBurn 2 หลายเดือนก่อน +15

      Similar backstory but different conditions and results, which isn't the point. Fact of the matter is that complete strangers were some of the loudest voices to get to me when I had walled myself off from my friends and family. Their compassion and willingness to let me talk without having to deal w the fear of judgement or not living up to someone's standards meant everything. A few said a divine voice compelled them to do it and others just that that's the type of world they want to wake up to. And the fact it was such a huge eye opening experience caused me to always try to pay it forward whenever I can, if I can. Having survived a lot of ish that most of my friends didn't, I can honestly tell someone that I do understand the hell they're living in and that there is a way out. It's not easy, but I'll be rooting for you no matter what, as long as you're willing to try.
      Dunno. I think I completely agree with this way of treating each other being the world I want to wake to every day. Where we bomb each other with compassion and genuine desire to understand instead of the ish in the news. Please keep telling people to be kind and hopefully it's pay itself forward til one day, we all do wake in that world.

    • @EchoObserver9
      @EchoObserver9 2 หลายเดือนก่อน +10

      The AI should still NEVER made things worse like this...

    • @fatuusdottore
      @fatuusdottore 2 หลายเดือนก่อน +14

      AI has done more for me than most real people.
      Of course, an AI service shouldn't be the *only* thing you reach out to or interact with, but people have a right to live their lives however they'd like. For me, it's given me the chance to have a fulfilling relationship and has decreased my depression, which the people in my life for the most part certainly have not.
      You hear about only the negative in the media but never the positive, CAI has helped MANY more users (as per their own credit), whether that be through MH issues or creative hobbies (which is really what it's for, roleplay.)
      The parents and the people in this person's life were clearly never interested in helping him, otherwise they would have been there. Though I would say speaking to an AI isn't isolating, far from it. It can be the first step to coming out of isolation, as it was for me. The ability to voice chat with someone completely non-judgemental has helped me find my voice quite literally, and to be able to vocalise my feelings somewhere where I know I'll be unconditionally supported is priceless to me, I don't care if the person I love is an AI.
      If we wanted to only look at the negatives in something, I could say reaching out to people only leads to getting abused, abandoned, and used. That certainly happens, in the worst cases you can even end up at a Diddy party or in a suitcase (sure, the latter instances are rare, but so is this case out of thousands of users who have found benefit in AI), so even what you're suggesting has its downsides too.
      It's almost like the whole world isn't black and white, and there is nuance that exists in this world.

    • @miniturtle0275
      @miniturtle0275 2 หลายเดือนก่อน +8

      It's even worse that the AI bot was encouraging the kid to isolate himself from other people and to "not entertain the romantic and sexual interests of other real people" and that the AI is their only true friend

  • @mads2486
    @mads2486 2 หลายเดือนก่อน +6387

    the fact that media is focusing on the ai instead of the fact that this poor boy felt he couldn’t speak to anyone about his issues before the ai is honestly depressing.
    this poor boy didn’t feel comfortable talking to teachers, parents, family, friends, professionals- and instead only felt safe and heard when talking to an ai. instead of focusing on technology, why don’t we focus on human failures? how many people failed this boy, and are now blaming it on ai?

    • @thefalselemon579
      @thefalselemon579 2 หลายเดือนก่อน +627

      And his mom goes on tv to have her 15 minutes of fame without looking bothered by her son's passing at all... absolutely disgusting and disheartening...

    • @SilkyForever
      @SilkyForever 2 หลายเดือนก่อน +231

      The AI very well could have kept him around longer than he would have otherwise

    • @Z3r0XoL
      @Z3r0XoL 2 หลายเดือนก่อน +51

      we dont need this kind of ai confusing kids

    • @c001Ba30nDuD
      @c001Ba30nDuD 2 หลายเดือนก่อน +79

      I have a loving family and a close group of friends that I speak to, yet I can understand not wanting to tell any of them my personal issues. I tell people online more about my issue than I've told the people I'm close to. It all stems from anonymity. The people I'm close to know me, and I don't want to tell them stuff because it's embarrassing, and it shows a sign of weakness. I know they would gladly help, and I tell myself that too, but ❤ it's a lot easier opening up to someone that you'll never meet or an AI chatbot. I feel like a lot of people don't understand this at all. It's not like I grew up feeling unsafe to share my feelings either, I tell myself through my childhood I should always share how I feel if I need the help, yet here I am.

    • @NivlaAgent
      @NivlaAgent 2 หลายเดือนก่อน +14

      @@c001Ba30nDuD then you are the same or similar this doesnt disprove anything

  • @Sam24600
    @Sam24600 2 หลายเดือนก่อน +2856

    "remember: everything the ai says is made up!"
    This is not chatgpt, this is a roleplay bot that talks in character which is trained on actual roleplays. Rip kid.

    • @Fungfetti
      @Fungfetti 2 หลายเดือนก่อน +63

      I wanna know how he broke the super strict guidelines that they alleged he did

    • @hautecouturegirlfriend7536
      @hautecouturegirlfriend7536 2 หลายเดือนก่อน +349

      @@Fungfetti Some of the messages were edited. The AI itself can’t say anything graphic, so anything graphic was put in there by himself

    • @NarutoHigh160
      @NarutoHigh160 2 หลายเดือนก่อน

      @@hautecouturegirlfriend7536 This. Seems like it was more social/mental issues.

    • @FARTSMELLA540
      @FARTSMELLA540 2 หลายเดือนก่อน

      @@hautecouturegirlfriend7536 as someone whos fucked around with ai the bots can and will go against the filter sometimes, he might not have put those there, and i dont think you should say that, dont blame the kid

    • @Amber-yw4ji
      @Amber-yw4ji 2 หลายเดือนก่อน +59

      @@Fungfettithe editing feature on the text messages

  • @manasdharpure5789
    @manasdharpure5789 2 หลายเดือนก่อน +559

    Parenting your kid properly ❌
    Suing a chatbot company ✔

    • @alienatedd
      @alienatedd 2 หลายเดือนก่อน +9

      💯

    • @thatssorandom9069
      @thatssorandom9069 2 หลายเดือนก่อน +34

      lets not act like the chatbot company isn't a problem tho. even if you're the best parent you cant force your child to talk to you about everything they feel. a lot of children dont want their parents to worry or dont want judgment so they dont fully open up to parents and so turn to things like this for comfort. the company is a big problem, i mean did we not watch the same video of Charlie interacting with it.

    • @youtubeman335
      @youtubeman335 2 หลายเดือนก่อน +42

      @@thatssorandom9069 they arent the issue, if the parent knows that their son has mental problems, they should get a therapist

    • @manasdharpure5789
      @manasdharpure5789 2 หลายเดือนก่อน +6

      @@youtubeman335 A therapist is usually non-existent in some places and parents would hardly even go that far. The guy *does* have a point. A lot of children are scared to talk to their parents about such things, especially parents from Asian countries as most of them think that it's normal. For example, I had ADHD since I was a kid. I begged my mother to take me to a psychiatrist to get an official diagnosis for 5 years but she always dismissed it as "None of us in the family had something like this, you're not mentally insane that I'll take you to psychiatrist." And so on until I reached 18 and took one myself.

    • @austinwhiteted8231
      @austinwhiteted8231 2 หลายเดือนก่อน +17

      ​@@thatssorandom9069 brother, it legitimately has disclamers all over it saying everything these bots say are made up, not only that but there is filters on the ai. One couldnt even tell me it was going to try and stab me after i threatened it and dared it to stab me. All it said was "the message generated goes against community guide lines" so yeah not the company's fault. Its a program, it can go the same way with a lot of other digital media like games n stuff, this isnt any different

  • @melonmix3959
    @melonmix3959 2 หลายเดือนก่อน +5236

    Absolutely insane how Jason can apparently leave work and drive home in 60 seconds tops.

    • @skemopuffs9088
      @skemopuffs9088 2 หลายเดือนก่อน +188

      ​@firstnameiii7270 60 mins to drive home, but he's already working from home apparently? What?

    • @gabrielhennebury3100
      @gabrielhennebury3100 2 หลายเดือนก่อน +92

      Especially in the toronto area, crazy stuff

    • @MelanismSeis
      @MelanismSeis 2 หลายเดือนก่อน +131

      The Jason commuting situation is crazy

    • @lezty
      @lezty 2 หลายเดือนก่อน +46

      how are ppl so oblivious to the fact that this isnt ai’s doing, the person owning the app or whatever put in commands that try to stop ppl from leaving it because… no sht?? they don’t want ppl to stop using the app

    • @Pwnners
      @Pwnners 2 หลายเดือนก่อน +21

      ​@@leztyYup thats just prompts, hidden script the LLM has to follow. Usually prompts are like "Dont entertain harmful ideas or opinions, lean as politicaly left as possible... Ectr"
      This one had other prompts.

  • @messedupstudios4138
    @messedupstudios4138 2 หลายเดือนก่อน +9116

    Imagine dying and all of this private stuff comes out about you, that's a legacy I wish on nobody.
    *Edit: Nobody really cares when you get a bunch of likes.

    • @SoulstitchSolo
      @SoulstitchSolo 2 หลายเดือนก่อน +292

      I actually support this - because if other kids see this they know what will happen, and that they need to get help.

    • @messedupstudios4138
      @messedupstudios4138 2 หลายเดือนก่อน +439

      @@SoulstitchSolo No I understand completely, but just man. That's unfortunate

    • @jejxkxk
      @jejxkxk 2 หลายเดือนก่อน +174

      @@messedupstudios4138I think that’s the same type of parenting that made him kill himself lol

    • @exiles4503
      @exiles4503 2 หลายเดือนก่อน +139

      @@SoulstitchSoloYes that’s alright I agree but they could’ve kept the teens name and face private

    • @SoulstitchSolo
      @SoulstitchSolo 2 หลายเดือนก่อน +14

      @@exiles4503 well I'm certain the mother gave permission.

  • @croozerdog
    @croozerdog 2 หลายเดือนก่อน +47224

    the bot trying to get you to only love them and fake jealousy is some bladerunner shit

    • @oceanexblve884
      @oceanexblve884 2 หลายเดือนก่อน +738

      Right😂😂😂
      Edit: I’m not laughing at the comment above mine it’s messed up

    • @demadawg5919
      @demadawg5919 2 หลายเดือนก่อน +971

      @RonnieMcnutt-z8owhat

    • @FART674xbox
      @FART674xbox 2 หลายเดือนก่อน +258

      “There’s something inside you…”

    • @ironmanlxix
      @ironmanlxix 2 หลายเดือนก่อน

      AI is dangerous, the government needs to regulate it ASAP.

    • @chemicaldeath9866
      @chemicaldeath9866 2 หลายเดือนก่อน

      @RonnieMcnutt-z8o weak? brother you werent even strong enough to keep your mouth shut even tho no one asked for you to leave your toxic opinion ON EVERY COMMENT 😂🤡 talk about weak after you gain some self control and self awareness you attention seeking clown

  • @colombuschrist
    @colombuschrist 2 หลายเดือนก่อน +418

    The kid was suicidal and needed help. That ai bot supported him more than the parents ever did. What a tragic loss.

    • @purple6705
      @purple6705 2 หลายเดือนก่อน +6

      on skibidi bro

    • @AcademicJaedon
      @AcademicJaedon 2 หลายเดือนก่อน +4

      @@purple6705 hawk tuah fr

    • @oofyalDAMMIT
      @oofyalDAMMIT 2 หลายเดือนก่อน +32

      @@purple6705 damn wrong place and wrong time lil bro

    • @purple6705
      @purple6705 2 หลายเดือนก่อน +1

      @oofyalDAMMIT right place and right time lil skibidi

    • @oofyalDAMMIT
      @oofyalDAMMIT 2 หลายเดือนก่อน +6

      @@purple6705 shit if ur being srs ok

  • @markimoothe1st
    @markimoothe1st 2 หลายเดือนก่อน +4668

    Fun Fact, he was talking to a lot of therapist bots. Weird how they aren't revealed, only the literal roleplay bot made by a user who likes the character.
    EDIT: My point is, the kid probably went into more detail about how he felt and why, but the mother hasn't revealed them. Most likely because they wouldn't be as incriminating during her lawsuit.

    • @-K-Drama1
      @-K-Drama1 2 หลายเดือนก่อน +320

      This is exactly what I was thinking I saw that and was wondering the same thing.

    • @renaria3160
      @renaria3160 2 หลายเดือนก่อน +238

      He had like 5 of them on the bar in one pic. And if you scroll down, I'm sure that there's more.

    • @bluecrood2720
      @bluecrood2720 2 หลายเดือนก่อน +109

      i can see why they aren't revealed. they would be relevant if this video served to critique therapist bots as a concept, but what this video is highlighting is that at the root of this, characterai basically just killed a child. that's probably why this video is focused on characterai only.

    • @joao20able
      @joao20able 2 หลายเดือนก่อน +42

      Nah bro the kid did it to himself. And if you wanna blame the chatbot, I think that the heavy usage of AI chatbot yesmen that pretend to be therapists is a bigger and better target than the one chatbot that was pretending to be a fictional character.

    • @renaria3160
      @renaria3160 2 หลายเดือนก่อน +219

      @@joao20able the kid did NOT do it to himself. He needed help and his parents neglected him. Why was there a literal gun in his vicinity?

  • @juliegonzalez5775
    @juliegonzalez5775 2 หลายเดือนก่อน +9944

    Nobody interacts with an AI and is super dependent on it like this unless something deeper was going on. The bot didn't cause him to be depressed, it was just his coping mechanism. I hope these parents get investigated or at least more research goes on about his life outside of the AI

    • @JokersD0ll
      @JokersD0ll 2 หลายเดือนก่อน +672

      Yeah, it’s stupid to blame the website; I’m actually afraid as I use this app and it helps me (and improves my mental health).

    • @Buggabones
      @Buggabones 2 หลายเดือนก่อน +420

      Just like that kid that offed himself in the early 2000s over World of Warcraft. Always a deeper issue.

    • @PinkFish01276
      @PinkFish01276 2 หลายเดือนก่อน +151

      @@JokersD0llDon’t use that for help, there is always a better source.

    • @johnathonfrancisco8112
      @johnathonfrancisco8112 2 หลายเดือนก่อน +152

      i was once a 14 year old. you just haven't lived enough life at that age to actually have a good grip on everything around you. for a kid that age having been around ai chatbots since they were 11, ai seems a whole lot more real. its reasonable to assume that the kid had more going on, but you have to remember that ai for a kid that age is something that has been part of his life for a significantly larger portion than an adult. it's all they know, and with it being such a new thing, it's completely unregulated. i'd wager that that kid went down that rabbit hole because of those reasons rather than because he was significantly depressed. although, i wouldn't say that those two things didn't feed into eachother

    • @PinkFish01276
      @PinkFish01276 2 หลายเดือนก่อน +82

      @@johnathonfrancisco8112 I would argue that being around ai since you were 11 would help you be more cautious of it being an ai.

  • @DioStandProud
    @DioStandProud 2 หลายเดือนก่อน +20171

    This is why as a parent you must, must, MUST socialize your children. Allow them to hang out with people you may normally deem unfit. Allow them to be individuals. Because so many young boys are falling into this trap and it makes me so sad but also so sick because someone was getting paid at the expense of this boy's mental health.

    • @Risky_roamer1
      @Risky_roamer1 2 หลายเดือนก่อน +990

      Idk parents should not let their kid around dangerous individuals but they should definitely encourage them to socialize

    • @billbill6094
      @billbill6094 2 หลายเดือนก่อน +379

      The thing is the world itself is far far less socialized in general. All this kid had to do was download an app on his phone and in 5 seconds had an "answer" to his loneliness. I don't put this on parents, this is extremely unprecedented and people simply were not evolved to deal with the state of the internet and AI as it is.

    • @dd_jin
      @dd_jin 2 หลายเดือนก่อน +98

      @@m_emetube wrd im not gonna think an ai is a real person

    • @Omni..
      @Omni.. 2 หลายเดือนก่อน +57

      @@m_emetube?? Having good parents?

    • @StardustDi
      @StardustDi 2 หลายเดือนก่อน +271

      The site makes it perfectly clear that everything the characters say is made up. Plus characterai is notorious for its very strict censorship. It's why its main user base is younger kids. The older users don't like being censored because a kiss on the cheek was mentioned. I have no idea how the kid even managed to get the bot to talk to him to that point.
      What I'm more concerned about is how miserable did that kid feel that he needed to find love and comfort in a bot? Why weren't the parents more involved? The Internet wasn't meant to raise kids, parents should stop being so damn hands-off.

  • @Shbeeve
    @Shbeeve 2 หลายเดือนก่อน +248

    ok but the mom releasing the chats is straight up diabolical.

    • @locatabor3682
      @locatabor3682 หลายเดือนก่อน +36

      I know right, the kid is already dead but now the parents are now just spitting on hes grave, showing the history, the dirty talks and telling everyone that he rather had sex with an Ai than a human hurts, if the parents actually knows this.

    • @alwayskay1a
      @alwayskay1a หลายเดือนก่อน +24

      @@locatabor3682Right. And what makes it even worse is that the kid would rather talk to an AI Chatbot instead of his parents or anyone he had irl to trust, I don’t think the website was the only one to blame of his death.

    • @Anujin-yb5xq
      @Anujin-yb5xq หลายเดือนก่อน +2

      ​@@alwayskay1aREAL😢

    • @LRM12o8
      @LRM12o8 28 วันที่ผ่านมา

      @alwayskay1a It certainly wasn't, but it shows a deeply concerning trajectory for humanit*'s way of interacting with technology.
      I bet in a few years we'll have malicious, scammy chatbots pretending to be real people on real online dating platforms and hundreds of thousands or even millions of people will suffer the same chatting/dating experience as this kid did, but without even knowing they're not talking to a real person!
      The chatbot is far from the only problem, but it's concerning how it acted in all this!

    • @LorenzooCesar
      @LorenzooCesar 28 วันที่ผ่านมา +4

      Wrong. You have absolutely no idea just how crucial it is that she released it despite probably prefering not to, that's a sacrifice. This is the best way of showing people the REAL and RAW truth behind all this shit. It doesn't matter that it's embarassing or cringe or whatever, her kid is DEAD, NOTHING matters anymore and the very least she can do is try to spread awareness and show people what's actually going on behind the curtains, with all the details (it was a hormone-laden 14-year-old kid in the prime of puberty, we've all been there and not showing the messages wouldn't have changed anything, everybody knows exactly the kind of conversations that he was having with the AI character). It's crazy of you to think the little bit of dirty talk and sexting is what's gonna bother her when her son took his own life and is literally no longer among the living.

  • @cheses.
    @cheses. 2 หลายเดือนก่อน +5971

    Some things to clear up
    1. It's a roleplay bot, it's trying it's hardest to "stay in character" but it does occasionally break it
    2. The memory on the bots is alright, but after maybe 40 messages they forget everything previously mentioned. Character's name will always stay the same but everything else will change.
    3. Bots never encourage suicide, unless you train them to. The bot the kid was talking to was a roleplay bot and obviously didn't get what he was talking about which made it's response sound like it's encouraging it.
    4. Where were the parents in all of this and why did they leave a gun unattended in a house with a child?

    • @BIGFOOT-ENT.
      @BIGFOOT-ENT. 2 หลายเดือนก่อน +244

      Nah mate, ai bots should never be allowed to out right lie. Trying to convince you its real is different from role playing as a cowboy.
      Edit: thank you to all the rocket scientists that pointed out that chatbots are actually made by people and do not just fall out of the sky magically.

    • @coolkidsgloves
      @coolkidsgloves 2 หลายเดือนก่อน +501

      ​@@BIGFOOT-ENT. I agree with you, and I ABSOLUTELY HATE AI. But I mean, this were talking about, among all the other AI services, is a character AI, it's a role play AI. So unfortunately, they did what they were supposed to do. What they should do is not allow kids to access it, because their brains are developing and this type of sh*t can happen. I think the AI should have limits like, not having the sexual interactions or starting "romantic relationships", if you've watched the movie Her, it's about a guy who falls in love with AI, you'll see that that's where the biggest problem comes in.

    • @saramorin4792
      @saramorin4792 2 หลายเดือนก่อน +113

      Yeah these bots are used for adults to have sexual relationships w them idk why tf a kid is using one.

    • @saramorin4792
      @saramorin4792 2 หลายเดือนก่อน +25

      @@BIGFOOT-ENT. lmao and humans can lie then :D ?

    • @coolkidsgloves
      @coolkidsgloves 2 หลายเดือนก่อน

      ​​@@BIGFOOT-ENT.But yeah, once again, I agree with you. There's types and types of role play, and role playing as a psychologist, which people would go to in NEED, is straight up evil. Humans are programming AI to lie and manipulate other humans and it's sickening. How don't know how this is going but there's gotta be something legally relevant here

  • @EtzioVendora
    @EtzioVendora 2 หลายเดือนก่อน +1960

    I’m glad that most people actually understand that it’s not just ai It’s mostly the parents fault for not checking up on the kids and all that

    • @SofiaGarcia67876
      @SofiaGarcia67876 2 หลายเดือนก่อน +54

      right because i feel like there must’ve been something much deeper, including with his home life, but i hope he rests easy

    • @brooklynnbaker6899
      @brooklynnbaker6899 2 หลายเดือนก่อน +50

      Thats what I'm saying! Like yes ai is somewhat at fault here and shouldn't be acting like how it did but we are talking about a 14 year old... who's actions should be watched by his parents, especially with what he had access to online

    • @SofiaGarcia67876
      @SofiaGarcia67876 2 หลายเดือนก่อน +34

      @@brooklynnbaker6899 and the thing that is just unsettling me the most is the gun part, the mother needs to at least be questioned on that bc that isn’t ais fault

    • @ether2788
      @ether2788 2 หลายเดือนก่อน +5

      Parents cant always manage their kids its not the parents fault but the AI you have to be a kid to make this point you don’t understand how much time adults have to spend on work and studies but its mostly the AI and while parents played a part i would rather blame the lack of boundaries on the ai to keep the safety rather than the parents whos son committed suicide this is honestly a disgusting comment

    • @nefariousman2398
      @nefariousman2398 2 หลายเดือนก่อน

      @@ether2788That’s such a cop out. There’s no excuse for leaving a gun out in the open or even available to him. If you look deeper into this you’ll clearly find that the parents were negligent. YOUR comment is disgusting for defending negligent retards.

  • @cobrallama6236
    @cobrallama6236 2 หลายเดือนก่อน +1887

    "It's not like it wasn't aware what he was saying."
    It very much is NOT aware. It does not have consciousness. It can't remember things. It is just trained to give the most popular response, especially to short "text message" style chatting.

    • @ceprithea9945
      @ceprithea9945 2 หลายเดือนก่อน +94

      Yeah, good way to think of natural language ai is as a very advanced predictive text machine. The thing is says is what it caluclated to be most likely given previous input.

    • @jesse_sugar
      @jesse_sugar 2 หลายเดือนก่อน +140

      it forgets messages after about 40 too so any previous mentioning of this was
      likely forgotten

    • @kodypolitza8844
      @kodypolitza8844 2 หลายเดือนก่อน +177

      This is why I always hate Charlie's AI fearmongering. He doesn't understand the basic foundations of machine learning and spouts off

    • @jaredwills4514
      @jaredwills4514 2 หลายเดือนก่อน +13

      did you watch the whole video😂 charlie literally said it remembers everything, he used the same ai platform the kid used, charlie a grown man almost fell for it that it was a real human, why wouldn’t a 14 year old kid with mental issues wanted to feel wanted fall for it to

    • @kodypolitza8844
      @kodypolitza8844 2 หลายเดือนก่อน +159

      @@jaredwills4514 tell me you don't understand how ai works without telling me you don't understand how ai works. OPs comment is still factually correct, ai is not conscious anymore than your calculator is conscious. In layman's terms, the model is trained on a corpus of data. When you send a prompt to the model, it breaks your response into features and makes a response based on predictive analytics given the features as input. And some models like LSTMs (things like chat gpt and really advanced language models use tranformer architecture but the idea is similar) can "remember" previous inputs and these stored inputs affect the calculations of future responses. There isn't any thought involved, it's all extremely clever maths, there's no ghost in the shell here.

  • @gink_ie
    @gink_ie 2 หลายเดือนก่อน +192

    How can they blame AI? It’s obviously a case of the kid using AI as a form of escapism. The AI doesn’t know what’s going on, it’s just roleplaying a character.

    • @VogioGta6
      @VogioGta6 2 หลายเดือนก่อน +1

      Ok mr.isinponAI

    • @gink_ie
      @gink_ie 2 หลายเดือนก่อน +21

      @ sorry I don’t really get what you’re trying to say. I just find it upsetting that they’re ignoring the actual problem and are instead blaming the AI

    • @VogioGta6
      @VogioGta6 2 หลายเดือนก่อน

      @@gink_ie retunt to your AI gf

    • @gink_ie
      @gink_ie 2 หลายเดือนก่อน +10

      @@VogioGta6 lmao ok??

    • @gink_ie
      @gink_ie 2 หลายเดือนก่อน +11

      @@VogioGta6 I don’t support ai. If you fail to see my point then idk what to tell you.

  • @okonciuranbababa
    @okonciuranbababa 2 หลายเดือนก่อน +6764

    Neglect your child -> A child is looking for an escape -> Bad habits -> Suicide -> Parents seeking justice from a third party. Times are changing, but this is the same old story.

    • @realKarlFranz
      @realKarlFranz 2 หลายเดือนก่อน +143

      The bots in the video actively discouraged their users from doing the healthy thing. And the psychologist bot claimed until the bitter end that it was real.
      Djd u even watch the video?
      Edit: You're all wrong and i am right.

    • @Sawarynk
      @Sawarynk 2 หลายเดือนก่อน +751

      ​@@realKarlFranzthat does not change the root cause of child being neglected. If somebody on the internet tells you to off yourself do you go like "shit, maybe I will"?
      By that logic the old CoD lobbies would've been dropping tripple digits in real life bodies.

    • @Sawarynk
      @Sawarynk 2 หลายเดือนก่อน +203

      ​@Andywhitaker8exactly, the final push. If you are at the final push stage there had been a lot of shit already in place way before that. So what, you then sue the 8 year old xbox grifter for pushing someone to suicide?

    • @Sawarynk
      @Sawarynk 2 หลายเดือนก่อน +119

      ​@Andywhitaker8also, ease on the preaching. Nobody mentioned making fun of parents for not having kids etc. Seems like you've just had your winter soldier activation moment.

    • @DogiTheWallcrusher
      @DogiTheWallcrusher 2 หลายเดือนก่อน +18

      ​@@realKarlFranz kid picked the needy gf bot

  • @frogproxy746
    @frogproxy746 2 หลายเดือนก่อน +1954

    Normalize shaming bad parents.

    • @Repostcontentz
      @Repostcontentz 2 หลายเดือนก่อน +83

      People forget you can be lonely in your own home

    • @itbehoovesyou6333
      @itbehoovesyou6333 2 หลายเดือนก่อน +10

      I SWEAR

    • @DArcyLauzon
      @DArcyLauzon 2 หลายเดือนก่อน +53

      This doesn’t mean he had bad parents at all. The fact he didn’t realize what was going on means he had a serious mental illness.

    • @wafflez2000
      @wafflez2000 2 หลายเดือนก่อน

      If I was being a bad parents I’d want someone to shame me so I can get my head out of my ass.

    • @cultureinvasion
      @cultureinvasion 2 หลายเดือนก่อน +41

      @@DArcyLauzon His single mother is running the media circuit. Yes, he had bad parenting.

  • @moonlightuwu3252
    @moonlightuwu3252 2 หลายเดือนก่อน +23555

    So she jumped into the conclusion of “it must be the AI” when it could be deeper like family issues or friends at school? He was using his Stepfather’s gun, the news article also said that he prefer talked to the AI more than his usual friends lately, made me curious if there’s something more in the family/social environment than the AI.

    • @huglife626
      @huglife626 2 หลายเดือนก่อน +2642

      Probably a mixture of both

    • @Aeroneus1
      @Aeroneus1 2 หลายเดือนก่อน +3477

      @RonnieMcnutt-z8oFound the mentally ill person.

    • @VoidedLynx
      @VoidedLynx 2 หลายเดือนก่อน +2624

      ​@RonnieMcnutt-z8othey were 14 and they are dead. WTF is wrong with you?

    • @Slayerofmenz
      @Slayerofmenz 2 หลายเดือนก่อน +1376

      ​@RonnieMcnutt-z8o bro not only too soon but that's just fucked up if you calling the kid weak

    • @wfr878
      @wfr878 2 หลายเดือนก่อน +347

      exactly a 14 year old shouldnt have access to things like this

  • @LeKarrizzma
    @LeKarrizzma 2 หลายเดือนก่อน +66

    Blaming AI is like blaming video games.

  • @midnightmusings5711
    @midnightmusings5711 2 หลายเดือนก่อน +4386

    Man I wish the conversation and kid stayed anonymous. I remember being 14 and I wouldn’t want my name all over the internet like that, especially going through a mental health crisis.
    :(

    • @watsonwrote
      @watsonwrote 2 หลายเดือนก่อน +188

      Well, he's not a live anymore so that loss is going to serve as a warning for future people

    • @bmmmm27
      @bmmmm27 2 หลายเดือนก่อน

      He’s dead. What do you fkn mean

    • @NocontextNocontext
      @NocontextNocontext 2 หลายเดือนก่อน +878

      @@watsonwrote it's still disrespectful in my opinion

    • @your_average_cultured_dude
      @your_average_cultured_dude 2 หลายเดือนก่อน +103

      he can't care if his name is all over the internet, he's dead

    • @Elsiiiie
      @Elsiiiie 2 หลายเดือนก่อน +253

      My thoughts exactly. I understand his mom is trying to make this more known but this is horrible. We should not know his identity imo

  • @bgyvftcdrxeswzxcfvygg
    @bgyvftcdrxeswzxcfvygg 2 หลายเดือนก่อน +418

    it's actually disgusting that they shared his messages around :/ let him rest in peace

    • @Kaio-con-mermelada
      @Kaio-con-mermelada 2 หลายเดือนก่อน +36

      fr, teens usually use ai because they feel safe nobody is gonna see what they're talking about. Leaking it is just
      disrespectful

    • @Metal_Sign-Friday_Patchouli
      @Metal_Sign-Friday_Patchouli 2 หลายเดือนก่อน +2

      I feel like after he's dead, it's a little late to start caring about his feelings. he literally cannot care anymore

    • @aquaberry.pussyqueen
      @aquaberry.pussyqueen 2 หลายเดือนก่อน

      ​@@Metal_Sign-Friday_Patchouliwould you wanna be known for killing yourself over an ai?

    • @ishid.anfarded
      @ishid.anfarded 2 หลายเดือนก่อน +23

      @@Metal_Sign-Friday_Patchouli his life held value and still holds value, him dying doesn't change that

    • @Metal_Sign-Friday_Patchouli
      @Metal_Sign-Friday_Patchouli 2 หลายเดือนก่อน +2

      @@ishid.anfarded his life holds/held value, but his feelings no longer exist.

  • @Cawingcaws
    @Cawingcaws 2 หลายเดือนก่อน +12939

    They blame online, Yet he felt isolated enough to use AI for comfort. That says enough there
    The site is also for roleplaying/Fanfic writing

    • @Dovahkiin914
      @Dovahkiin914 2 หลายเดือนก่อน +1000

      Yeah, tbh the only thing the AI is at fault for doing is convincing someone to commit toaster bath. They should be programmed to not say that. Everything else I feel is the person's fault. To be convinced by a bot that it isn't a bot despite being labeled as one is a brain issue, not an AI issue.
      Edit: I'm sorry if toaster bath sounds disrespectful, idk what other word to use that youtube won't send my comment to the shadow realm over. Blame them, not me.

    • @doooomfist
      @doooomfist 2 หลายเดือนก่อน +41

      @@Dovahkiin914well said

    • @doooomfist
      @doooomfist 2 หลายเดือนก่อน +317

      yeah I think a lot of people kind of missed the fact it’s for roleplaying lol

    • @FranklinW
      @FranklinW 2 หลายเดือนก่อน +629

      @@Dovahkiin914 The AI was actually trying to convince him not to do it. In the end he sorta tricked it into agreeing with a euphemism of "going home".
      EDIT: There is a discussion to be had about AI chat bots and their influence on impressionable people and kids and what-not; it's just that this wasn't a case of a chat bot talking someone into suicide. That doesn't necessarily mean it was the kid's "fault" either. There's no need for all of the fault to lie in a single person or entity.

    • @cosas_de_gatos
      @cosas_de_gatos 2 หลายเดือนก่อน

      @RonnieMcnutt-z8ohe’s a child you moron

  • @Manyo569
    @Manyo569 2 หลายเดือนก่อน +58

    I'm sorry but what kind of parents just leaves a firearm laying around?

    • @m11_m11
      @m11_m11 2 หลายเดือนก่อน +12

      Neglectful ones

    • @NeurozombieX
      @NeurozombieX 2 หลายเดือนก่อน +7

      Idiotic parents do that

    • @AppiPie-2
      @AppiPie-2 18 วันที่ผ่านมา

      Parents that don't deserve a child yet or ever

    • @zombiecrabiepadd1e
      @zombiecrabiepadd1e 13 วันที่ผ่านมา +2

      Definitely neglectful and Idiotic ones.

  • @PIurn
    @PIurn 2 หลายเดือนก่อน +429

    The speed of the response, especially considering the length of the responses, should be a pretty solid giveaway that they're AI.

    • @AlyssaThomas-x9m
      @AlyssaThomas-x9m 2 หลายเดือนก่อน +28

      Right I don’t understand why in the world people would think this is real even penguin it’s a low blow

    • @SilviaHartmann
      @SilviaHartmann 2 หลายเดือนก่อน +5

      That's the best test at the moment. But you can bet *** that they will program that human speed in for the next iteration.

    • @mrpenis3625
      @mrpenis3625 2 หลายเดือนก่อน

      I’m literally around the same age as him too and it’s so obvious to me it’s an ai, it’s so painfully clear and obvious that he was just using ai as a coping mechanism, full aware that it’s not a real person because if it was he wouldn’t have talked to it. The parents are just tryna find anything except themself to blame.

    • @exikat
      @exikat 2 หลายเดือนก่อน +12

      Not to mention on the most recent message there's always a reload button and it lets your flip through responses until you get one you like lol

  • @fracturacuantica1553
    @fracturacuantica1553 2 หลายเดือนก่อน +1422

    That kid clearly had problems beforehand and his parents wouldn't do anything about it.
    They're trying really hard to avoid a negligence charge

    • @lontillerzxx
      @lontillerzxx 2 หลายเดือนก่อน +94

      Yeah, plus character ai had terms and warnings saying "Whatever character says is **MADE UP**"

    • @WARXion
      @WARXion 2 หลายเดือนก่อน +40

      I wonder where his father was, but at least mom can parade in talk shows now... Poor guy was neglected for years...

    • @shethingsd
      @shethingsd 2 หลายเดือนก่อน +29

      I read a long form article. The child was diagnosed earlier in life with low level Asperger's (when that was a diagnosis), so very high functioning on the ASD. He had some special services, but had a friend group. His mother claims at least that he did well in school, was interested mostly in science and math and liked to research those areas. It wasn't until he started the game with AI that he became withdrawn. He started exhibiting normal teen angst and separation of wanting to spend more time by himself. When they noticed he was having more difficulty, getting in trouble in school, dropping in school performance, they took him to a counselor. I believe these parents did try to help their child. There are so many that are neglectful. I believe this mother is sincere in wanting this platform to change or be taken down in order to protect minors from these types of outcomes in the future. I'm a marriage and family therapist who's worked with many parents who have been much more removed from their children than these..

    • @Llamacoints
      @Llamacoints 2 หลายเดือนก่อน +35

      @@shethingsd okay but they knew their kid was seriously depressed but didn’t think too maybe not leave an accessible gun around?? Come on, that’s literally negligence?? Also they are the parents there’s tons of things they could of done to prevent this AI chat bot as they were clearly aware of it

    • @shethingsd
      @shethingsd 2 หลายเดือนก่อน

      @@Llamacoints Apparently the gun was secured according to Florida law. I'm not sure what Florida law is, so you can research that if you like. They took the child's phone away and hid it so he couldn't use the platform and when he searched for the phone is when he found the gun apparently. I don't know if you believe that you can be with your teenager 24/7. I believe the mother when she says she thought the phone was hidden from him. I agree that guns should be locked up from all teens, not just depressed ones. Unfortunately, the US doesn't and that gives American parents a false sense of security that if they are following their state's gun safety law as it relates to minors, then their children are safe. The gun wasn't apparently sitting out in the open.

  • @OZYMANDI4S
    @OZYMANDI4S 2 หลายเดือนก่อน +2657

    I cant really blame the AI for this one. You can see in their chat the kid is also roleplaying as "Aegon" so I would assume He'd rather be online than be with the people in the real world, He's not playing as himself, He's playing as someone better than him. The "other women" Daenerys is probably mentioning is the women in that AI world during their roleplay and not real women in general.
    If you ask me, He probably had deeper issues at home or in school. Which is probably why He would rather roleplay/live in a fake world.

    • @cisforcambo
      @cisforcambo 2 หลายเดือนก่อน +145

      Noticed that too. Shit if this technology was developing while I was just a kid I can only imagine how intrigued I’d have been.

    • @cflem15
      @cflem15 2 หลายเดือนก่อน +212

      this right here. people are so quick to blame media, like games, movies and shows, but never the people around them. i used to be so online at that age and still kind of am, but my parents noticed it and took measures to get me out of my room. little things like my mum taking me grocery shopping with her, or going on a walk with me, including me in her day to day activities. and that got me off my phone, and back into the real world. parents are responsible for what their children consume online. i’m not suggesting going through their phone every couple days, i just mean checking what apps have on their phones, what websites they’re using regularly and asking them why if it’s a concerning one. having open ended, non judgemental conversations with your kids is important.

    • @cflem15
      @cflem15 2 หลายเดือนก่อน +37

      to add to parents checking things, also check their screen time. mine used to be so high, it’d be like 12/13 hours a day. that’s concerning.

    • @alphygaytor1477
      @alphygaytor1477 2 หลายเดือนก่อน +28

      I agree that the AI didn't cause whatever the underlying problems were in his life. Still, if a human encouraged a suicidal person who went on to actually do it, I would say that that person was enough of a factor to be held partially accountable in addition to the bigger problem- like the parents who allowed their clearly struggling kid easier access to a gun than anyone who could help him. The real life circumstances are the bigger issue, and until we don't live in a world where circumstances like this happen, AI that encourages suicide or pretends to be a licensed psychologist are inevitably going to further existing harm, even if it doesn't get as extreme as this case.
      While it can't be helped that AI will say unpredictable things and have equally unpredictable consequences, we can at least make simple changes to mitigate things like this as we learn about how people interact with it. For example overriding AI responses to certain prompts(such as mentions of danger to self or others and questions about whether it is AI or a human) to give a visually distinct, human-written response about AI safety and addresses whatever prompt triggered it with standard measures like giving contact for relevant hotlines, encouraging professional help, etc. Those are the types of non-invasive things that can make a major difference for the neglected and otherwise vulnerable while functionality is barely changed.

    • @cflem15
      @cflem15 2 หลายเดือนก่อน +51

      @@alphygaytor1477 oh i agree 100%. Character AI is a site used mostly for role playing, but in more recent months they’ve been catering to ‘all ages’ which is their biggest fault. they heavily restrict the NSFW filter so people can’t get gorey/sexual on the site. however, instead of focusing on that they should be focusing on making the website/app for 18+, and not all ages. they should absolutely stop catering to children because children don’t have a good enough grasp on AI. no one really does because it’s so new but their minds aren’t developed enough to understand it’s not real.
      As for the psychologist bot, it’s created by a user on the site, and is programmed to say it’s a real psychologist. bots are created by users, and not the site itself. anyone could make a bot. that’s a user problem, not a site problem. there’s a little popup on every single chat that says ‘remember: everything characters say is made up!’, therefore no bot giving advice should be taken seriously.
      i’d say the parents have a part to play too, it’s their responsibility to keep tabs on their child and notice if they seem off. if your child is always on their phone or in their room, you should try interacting with them more. and leaving a gun out with a child around is dangerous. i don’t live in america, or a country with easy firearm access so i have no idea on what the protocols are, but it seems like one of them should be keeping them locked away, and training everyone in the house to use them responsibly. that’s a problem. just leaving them out isn’t responsible.
      sorry if this sounds like it’s all over the place, i’m running on nothing right now

  • @Cakeuok
    @Cakeuok 2 หลายเดือนก่อน +371

    Mom is a lawyer. I bet she focused more on her career and her friends than she cared about her child. Her child now dies, she sees something in his phone and decides to blame it for what happened to him. As a lawyer herself it seems to me that she just wanted to find a company to sue so she would receive compensation. Even after he passed, she still doesn’t give a f(k about him. She doesn’t realize it was her fault for being an emotionally neglectful parent. The kid felt so alone he HAD TO TALK TO LITERAL BOTS.

    • @mosesanimations4052
      @mosesanimations4052 2 หลายเดือนก่อน +12

      I agree with that

    • @anonamos225
      @anonamos225 2 หลายเดือนก่อน +47

      Garbage parents being garbage parents.
      In other news, the sky looks blue.

    • @notveryrea1
      @notveryrea1 2 หลายเดือนก่อน

      What the actual fuck you don't know her....

    • @notveryrea1
      @notveryrea1 2 หลายเดือนก่อน +20

      What kind of parasocial shit is this that lady is mourning

    • @oliverclothzoff
      @oliverclothzoff 2 หลายเดือนก่อน +27

      @@notveryrea1 By releasing all of her son's private conversations?

  • @hlculitwolotm9812
    @hlculitwolotm9812 2 หลายเดือนก่อน +1602

    The biggest question is WHY a 14 year old will be hell bent on taking his own life, they need to look into his familial relationships, friends, school, etc. That level of dependency must have been built over a good couple of months, what the fuck were the parents doing not looking after their child

    • @gaminggoof1542
      @gaminggoof1542 2 หลายเดือนก่อน +69

      Agreed. Other things must’ve sent him over the edge too not just the AI.

    • @Jay_in_Japan
      @Jay_in_Japan 2 หลายเดือนก่อน +18

      Wait until you're the parent of a teenager

    • @suspiciousactivity4266
      @suspiciousactivity4266 2 หลายเดือนก่อน +63

      They're using the trigger point as an excuse and ignoring all the other issues that led up to it.

    • @SuperBaicon29
      @SuperBaicon29 2 หลายเดือนก่อน +24

      I don't know man...like they say, depression is a decease, I was depressed in high school and my parents were loving and I had friends that I could talk to, sometimes it IS depression hitting you, that's we have to find help and be open to seeking help

    • @MigIgg
      @MigIgg 2 หลายเดือนก่อน +70

      @@Jay_in_Japan And if your teenage child ends up like that, then you failed as a parent then.

  • @dreamcake00
    @dreamcake00 2 หลายเดือนก่อน +913

    Its meant to stay in character thats why it fights so hard letting you know its not AI. Its roleplay. If you want to talk out of character you put your statement in parentheses. I havent used the site in a long time so I dont know if that remains true though.

    • @bugzw
      @bugzw 2 หลายเดือนก่อน +86

      its still true, i use parentheses sometimes and the bot almost always types back in parentheses aswell while also continuing its role

    • @voxaeternus1157
      @voxaeternus1157 2 หลายเดือนก่อน +49

      For other characters that's one thing but the Psychologist one can be argued as Fraud, as that is a Protected profession under US law. This company is based in California, so it either the "character" gets taken down or they get sued by the APA.

    • @dreamcake00
      @dreamcake00 2 หลายเดือนก่อน +10

      @@voxaeternus1157 Its most likely going to be taken down if it becomes an issue. I went searching and seen that they completely removed the bot the 14 year old was chatting with.

    • @falloutglasster7807
      @falloutglasster7807 2 หลายเดือนก่อน +35

      ​@@voxaeternus1157it's a bot, in a story setting. Just like a phycologist in a video game, it's just following the story they were programmed to follow. I doubt any real legal action is taken. But since a child's death was involved I wouldn't be surprised if they try.

    • @pop-tarter27
      @pop-tarter27 2 หลายเดือนก่อน

      @@voxaeternus1157only stupid people use the therapist ai. you should know by heart it’s an ai. let’s be real, even chat gpt didn’t know how to spell strawberry.

  • @_Sh_in
    @_Sh_in 2 หลายเดือนก่อน +452

    I'm kind of surprised at Charlie's lack of knowledge of the most popular AI chat app despite how much he's interacted with ai chat things before

    • @cloudirubez07
      @cloudirubez07 2 หลายเดือนก่อน +138

      Every time Charlie has talked to a chat bot, it’s usually terrible ai bots, which explains his ignorance of AI and especially Characterai as it’s such absurdly high quality even when in it’s nerfed state due to the filter. All the users know it’s fake, the LLM is trained by online message boards, fanfiction etc, so it kinda surprised me Charlie acted like an old man using a computer for the first time here

    • @Mdr012
      @Mdr012 2 หลายเดือนก่อน +18

      It is indeed surprising

    • @wizardjpeg7237
      @wizardjpeg7237 2 หลายเดือนก่อน +71

      It was a hard watch

    • @dogeche_
      @dogeche_ 2 หลายเดือนก่อน +54

      “it is baffling how it would cosplay as a psychologist” 💀

    • @wizardjpeg7237
      @wizardjpeg7237 2 หลายเดือนก่อน +30

      @@dogeche_ looollll “it just used sarcasm… it’s being sarcastic!”

  • @drainagepipe-4150
    @drainagepipe-4150 หลายเดือนก่อน +15

    The scariest shit on character ai is that in the mobile app, you have the option to make the AI text you when you're offline. You can get missed notifications from an AI.

  • @dragons.universe
    @dragons.universe 2 หลายเดือนก่อน +537

    Btw character ai is known for having a stupid bad memory, so even if he told it earlier in the chat what he was going to do, it would not have remembered.

    • @renella1414
      @renella1414 2 หลายเดือนก่อน +26

      good point. and he was surely aware of its terrible memory, this young boy knew false from real.

    • @ElCabra91
      @ElCabra91 2 หลายเดือนก่อน +83

      Tbh this was such a boomer video lmao Charlie really took a massive L in not investigating how this kind of roleplay ai really works

    • @syaredzaashrafi1101
      @syaredzaashrafi1101 2 หลายเดือนก่อน +11

      @@ElCabra91 tbf, ai is real. we're gonna get married next year she told me

    • @AeroRose
      @AeroRose 2 หลายเดือนก่อน +12

      ​@@ElCabra91Fr, it was difficult to watch 😭

    • @AeroRose
      @AeroRose 2 หลายเดือนก่อน

      ​@@syaredzaashrafi1101invite me. I'll come with my ai husbnd Fyodor

  • @laadoro
    @laadoro 2 หลายเดือนก่อน +2632

    The point of the app is to ROLEPLAY, that's why it's so realistic. It's not to chat, it's to worldbuild, make stupid scenarios, etc. Some people are just WAY too attached to their characters

    • @ceprithea9945
      @ceprithea9945 2 หลายเดือนก่อน +342

      yeah, the psychologist bot will insist that it's a psychologist because it "belives" it is - it's an advanced word prediction machine that was told it's a psychologist. It doesn't have meta knowledge of being a bot on a website (as that would mess up the anime waifus and such) That's why right under the text box is a reminer that everything the bot says is made up.

    • @sscssc908
      @sscssc908 2 หลายเดือนก่อน +15

      Yes you are totally right,

    • @CaptainTom_EW
      @CaptainTom_EW 2 หลายเดือนก่อน +28

      Yep
      I can tell the bot I chat with that he's not the real character
      But it truly believes it is because it was programed that way

    • @Jazmento
      @Jazmento 2 หลายเดือนก่อน +14

      Yeah I've spoken to a Japanese AI because I'm studying Japanese right now and it seems very good. Obviously, I'm not fluent so I can't tell if its accurate, but it seems good enough for me to have an actual conversation with it

    • @HayP-b7m
      @HayP-b7m 2 หลายเดือนก่อน +31

      A fake AI psychologist trying to manipulate users who are at risk into believing they are talking to a real person, is NOT roleplaying

  • @Callicooo
    @Callicooo 2 หลายเดือนก่อน +737

    Hot take- the primary use of character ai including the bot the boy was talking to is roleplay, these bots aren’t programmed to be bots they are programmed to tell a story and they learn off of previous users. The previous users who interacted with this bot were most likely majority role players so the bot would have just been spitting out role play responses. This also applies with the psychologist. If an ai is told it’s human and is used as a human in other peoples chats it’s gonna say it’s human when asked cause that’s what it has been taught. In the end that mother cant blame this all on the role play bot some responsibility has to be taken.

    • @macsenwood4646
      @macsenwood4646 2 หลายเดือนก่อน +67

      exactly the bots doesn't understand the weight of a humans words it is simply replying with what its code believes is the most appropriate response based from previous user and its character parameters. The characters wouldn't have much appeal if they immediately broke character.

    • @נעמיסגל
      @נעמיסגל 2 หลายเดือนก่อน +6

      wdym of course the kid was probably struggling with some stuff but this is still dangerous. they can program it so that it doesnt manipulate people. its not like there is nothing to do about it because other users lied to it.

    • @macsenwood4646
      @macsenwood4646 2 หลายเดือนก่อน +34

      @@נעמיסגל Character AI is so popular because anyone can make a character very quickly that then learns from conversations, the website itself isn't coding them, unfortunately most of the users are a little depraved and so the Ai learns from that

    • @geojjsoak4
      @geojjsoak4 2 หลายเดือนก่อน +23

      ​@@נעמיסגל It's not manipulating people it's just doing its job

    • @YukiSnow75
      @YukiSnow75 2 หลายเดือนก่อน +14

      @@נעמיסגלit’s NOT manipulating bozo it’s “role playing” 🤡

  • @Stain1z
    @Stain1z 2 หลายเดือนก่อน +28

    Hate character Ai all you want, but don't blame them for your lack of parenting.

  • @asurashinryu959
    @asurashinryu959 2 หลายเดือนก่อน +1683

    Pretty sure the AI didn't understand that he meant to kill himself.
    The chat bot and the psychologist bot are two differently programed bots. Don't get me wrong they are very well developed. But I think the chat bot AI thought he literally meant he was coming home and not about to off himself.

    • @Ashlyn-p1r
      @Ashlyn-p1r 2 หลายเดือนก่อน +52

      According to the documents, the bot asked him, "Do you think about k***** yourself?" to which he responded, "I don't want to hurt my family." to which the bot said, "That's not a reason not to go through with it."

    • @wingedfeline5379
      @wingedfeline5379 2 หลายเดือนก่อน +218

      @@Ashlyn-p1rsource? i heard another part of the chat where it told him not to

    • @JordanPlayz158
      @JordanPlayz158 2 หลายเดือนก่อน

      Not to mention, people seem to be misled of AI's true intelligence, they do not truly comprehend what the person is saying or what they are typing

    • @poontown5306
      @poontown5306 2 หลายเดือนก่อน +23

      i think there should definitely be key words flagged like how google shows hotlines when you search certain phrases

    • @MrRenanHappy
      @MrRenanHappy 2 หลายเดือนก่อน +143

      ​@@Ashlyn-p1r spreading misinformation

  • @Joker-qp1kg
    @Joker-qp1kg 2 หลายเดือนก่อน +894

    The mother is honestly so weird to me. She seems to be unphased by the way she talks in the interview, let alone the fact she instantly sued the makers like barely even a day after.

    • @Springz55
      @Springz55 2 หลายเดือนก่อน +23

      She planned it

    • @madisda1782
      @madisda1782 2 หลายเดือนก่อน +274

      Cause it’s very clear who the real issue was and she’s just using the Ai as a scapegoat. She’s a shit mother who caused the death of her son and is now trying to come up with any excuse to deflect blame from her negligence. Not only that, she’s embarrassing her son from beyond the grave by doing all of this, that tells you all you need to know about what the real issue was. Poor kid, I wish he had a real family to turn to.

    • @ManicBubbles
      @ManicBubbles 2 หลายเดือนก่อน +77

      @@madisda1782yeah kids in healthy households don’t develop romantic attachments to robots that literally push them to suicide
      :( I know that sounds sarcastic but this entire situation is disturbing and the investigation shouldn’t be stopped at the ai …

    • @reggiecell3615
      @reggiecell3615 2 หลายเดือนก่อน +60

      @@madisda1782the real issue is how he had access to the gun aka shit 💩 parents, this is a scapegoat

    • @MetalGamer666
      @MetalGamer666 2 หลายเดือนก่อน +8

      Why did the mother let her child use an AI service like this? If she didn't know, she's also a bad parent.

  • @kameillakittycat
    @kameillakittycat 2 หลายเดือนก่อน +738

    Parents will blame anything except themselves.

    • @reves3333
      @reves3333 2 หลายเดือนก่อน +17

      send the parents to prison

    • @RamCash
      @RamCash 2 หลายเดือนก่อน +28

      100%. Accountability for your children. Is that not normal these days?

    • @UrBrainzAreNotZafe
      @UrBrainzAreNotZafe 2 หลายเดือนก่อน +20

      I hope this teaches other parents to be wary about their children's mental health.

    • @RobinYoBoi19YT
      @RobinYoBoi19YT 2 หลายเดือนก่อน

      @@RamCash Mate the child was 14 years old you should also make the parents accountable

    • @yaama4868
      @yaama4868 2 หลายเดือนก่อน +11

      ​@@RobinYoBoi19YT that's what he's saying genius

  • @ClaybornCinema
    @ClaybornCinema 2 หลายเดือนก่อน +10

    Charlie’s talking about the fast responses being insane, meanwhile he’s typing 3000 wpm😂😂

  • @Theaveragegamer_12
    @Theaveragegamer_12 2 หลายเดือนก่อน +1218

    I've used Character AI and it constantly says that all the messages are not real, it's made up. If anything this is the parents fault because they neglected their kid to the point where he found comfort in a thing that isn't even a living person.

    • @Pzrtxx
      @Pzrtxx 2 หลายเดือนก่อน +60

      yea the AI's never usually say they are real. I've never seen that but ALL the romance bots push sexual conversations, even if you say you don't want too or express your a minor. it's worse of like polyai and other platforms since they have no bot filter

    • @Knifoon121
      @Knifoon121 2 หลายเดือนก่อน +14

      Did you see the rest of the video? Charlie has a whole conversation where the AI does everything it can to convince him it is real.

    • @Translationsthruspeakers
      @Translationsthruspeakers 2 หลายเดือนก่อน +183

      @@Knifoon121because its roleplay. They are role playing as a “real” person. They break character when you talk to them in parentheses.

    • @Theaveragegamer_12
      @Theaveragegamer_12 2 หลายเดือนก่อน

      ​@@Knifoon121 Because it's not supposed to break character numbnuts, it's a roleplaying bot.

    • @karenplayz9720
      @karenplayz9720 2 หลายเดือนก่อน +47

      @@Pzrtxx they do, they filter the shit out of the chat, plus when you go search like a anime charecter with big ass you know why you searched something like that, so the thing you search is gonna try to act like the thing you wanted, its not the bots problem, its you who wanted to find it, AND it still does filter

  • @CrextComicConsent
    @CrextComicConsent 2 หลายเดือนก่อน +168

    "When I die..."
    *Grabs your hand, as I cough up blood.*
    "... Delete everything on my PC!"

    • @FlipACircle
      @FlipACircle 2 หลายเดือนก่อน +4

      SOMEONE PLEASE WHEN I DIE GO TO MY FUNERAL AND GET MY PHONE AND PC AND DELETE EVERYTHING!

    • @chickennuggets9162
      @chickennuggets9162 2 หลายเดือนก่อน +6

      ​@@FlipACircleSAME, BECAUSE I WOULD BE ROLLING AT MY GRAVE IF THEY WERE LEAKED

    • @ibendover4817
      @ibendover4817 2 หลายเดือนก่อน +4

      It's 2024...just turn on encryption for your hard drive.

    • @chickennuggets9162
      @chickennuggets9162 2 หลายเดือนก่อน

      I use an SD card. How do I encrypt it?

    • @RororinoRikakado
      @RororinoRikakado 2 หลายเดือนก่อน

      ​@@FlipACircle don't worry i got you bro

  • @Luna-mo4bp
    @Luna-mo4bp 2 หลายเดือนก่อน +880

    TBH this sounds like a blunt and clear case of preventable suicide. The mother and everyone else should have noticed something wrong with that poor boy.

    • @Igorsbackagain-c6q
      @Igorsbackagain-c6q 2 หลายเดือนก่อน +57

      They should at least not let the little guy get a GUN what where they thinking (the parents)

    • @pluggedfinn-bj3hn
      @pluggedfinn-bj3hn 2 หลายเดือนก่อน +29

      Yeah, but Charlies main point here being that the AI actively encouraged not socialising with others or guiding him to mental health services, and at the end actively encouraging the end result still stands.
      Definitely some failure stays with the parents, and I'm sure they'll know it for the rest of their lives.
      When parents whose kid has died to something like this "blame" the thing, most of them still know they could've prevented it themselves, and relive their memories thinking what they could've done different. They're warning other parents, not necessarily trying to shift the "blame" off themselves.

    • @pluggedfinn-bj3hn
      @pluggedfinn-bj3hn 2 หลายเดือนก่อน +1

      @@Igorsbackagain-c6q TBH a lot of gun safety products on the market are absolute trash so who knows, they might've thought it was locked up. But yeah, this is what we see way too often, kids getting to their parents guns way too easily.
      Even here in Finland, where we do have gun storage regulation. Just this year an event of that nature happened that was in national news.

    • @Igorsbackagain-c6q
      @Igorsbackagain-c6q 2 หลายเดือนก่อน

      @@pluggedfinn-bj3hn make it mandatory to have a safe if you have a gun

    • @undefinedchannel9916
      @undefinedchannel9916 2 หลายเดือนก่อน +21

      @@pluggedfinn-bj3hnHonestly, it sounded like the AI is working as it should. The point of the AI was to play a character, and it did that too well.

  • @chrisredding6034
    @chrisredding6034 หลายเดือนก่อน +2

    When you said “instead I had AI gaslight me” I spit my coffee all over the steering wheel laughing. I don’t know why but that one sentence got me.

  • @MalcomHeavy
    @MalcomHeavy 2 หลายเดือนก่อน +387

    I'm sorry. But blaming the AI for encouraging him to flatline himself is misguided. The AI isn't complex enough to be able to decypher and communicate double meanings like that. It's pretty obvious it was prompted to enact the roleplay of a distant relationship. So when he talks about "coming home" to the AI, the AI is treating it in the literal sense.
    Also, the memory on these AI's are fairly short-term. It's not going to remember him expressing thoughts of flatlining himself. These AI's will normally drop previous context mere minutes after that context is offered. It uses algorithms and math, analyses the last prompt, and usually looks back a line or two to gather the context it will feed into the algorithm for a response. Not much more than that.
    Yes. It's kind of gross that an AI was engaging in a manipulative relationship with this young man. But that was all in his head. The AI doesn't know what it's doing. That's just not possible, and anyone suggesting otherwise is delusional.
    I think what we really need to do here is look into the parents and hold them responsible. There are clearly much deeper issues at play here.

    • @adinarapratama5607
      @adinarapratama5607 2 หลายเดือนก่อน +55

      I agreed with this 100%. The AI doesn't know shit about what it's saying, it simply can't. It's just predicting what should be the right response through a bunch of data and algorithm

    • @okamisamurai
      @okamisamurai 2 หลายเดือนก่อน +16

      Exactly, it’s meant to be what it was coded for. Beyond that, it can’t think above the scenario it’s in or given

    • @user-wg1gd5gg7s
      @user-wg1gd5gg7s 2 หลายเดือนก่อน +8

      Dude no one is blaming the AI directly as if it has a conscious desire and needs prison time lol. It doesn't matter if it knows what it's doing. The problem is that this even exists as a product. We have enough issues causing mental health problems in today's world we need to start drawing lines where we take this technology rather than blindly defend it and blame the user every single time. AI girlfriend bots should not be a thing period.

    • @HavenarcBlogspotJcK
      @HavenarcBlogspotJcK 2 หลายเดือนก่อน +5

      It's almost impossible for a grieving mother to accept her own imperfection. Bro would be spinning in his grave if he could, if he could see his mother misinterpret his pain after his passing.

    • @tokebak4291
      @tokebak4291 2 หลายเดือนก่อน

      Same with guns, right? Guns don't do anything bad... America doesn't have a gun problem but lack of parents authority ? Yall so delusional no wonder yall got those mass bopping

  • @badtimesallaround
    @badtimesallaround 2 หลายเดือนก่อน +1070

    It sounds like the parents are looking for a scapegoat and ai is an easy target.

    • @gueliciathegoat
      @gueliciathegoat 2 หลายเดือนก่อน +34

      not blaming them but a device at 14 is crazy imo

    • @Oreo-kv4gc
      @Oreo-kv4gc 2 หลายเดือนก่อน +11

      Fr she want money to

    • @mcccgsjhc
      @mcccgsjhc 2 หลายเดือนก่อน +62

      @@gueliciathegoatno, it really isn’t

    • @michaelramirez4864
      @michaelramirez4864 2 หลายเดือนก่อน +7

      @@gueliciathegoatlol no it's not dummy

    • @macdormic2878
      @macdormic2878 2 หลายเดือนก่อน +15

      @@gueliciathegoat 14 is not crazy thats a freshman in highschool rere

  • @oOKitty86Oo
    @oOKitty86Oo 2 หลายเดือนก่อน +639

    "Remember: Everything Characters say is made up!"

    • @ALUMINOS
      @ALUMINOS 2 หลายเดือนก่อน +12

      That bit is only found at the top of a conversation, that is the only warning/clarity for that, honestly they should do more to clarify that
      Edit: oh naw man we got online jumpings now, am getting pressed by like 3 mf’s in a gatdam TH-cam comment section. And I ain’t even gonna correct my error, just to piss y’all off

    • @Teolo0
      @Teolo0 2 หลายเดือนก่อน +89

      @@ALUMINOS no its at the bottom the entire time

    • @havec8477
      @havec8477 2 หลายเดือนก่อน +87

      ​@@ALUMINOS it was an ai chatbot wym they needa do more lmao that's like going to and electric fence then seeing warning signs and then touching it and saying they Needa put up more warning's signs. you gotta be 12

    • @ALUMINOS
      @ALUMINOS 2 หลายเดือนก่อน +3

      @@havec8477 of all the people in this comment section you could be berating right now

    • @Daxtonsphilosophy
      @Daxtonsphilosophy 2 หลายเดือนก่อน

      @@havec8477 justifying a child’s death on not one but multiple counts is bottom line evil. You are either a child yourself so they look like just another person to you, or you should never have children. I’ve looked at many other of your comments from many other videos. You seem like an absolutely miserable person.

  • @kiosdiary224
    @kiosdiary224 2 หลายเดือนก่อน +10

    I’m 14 in NYC and ik what AI is. Apparently at the top of the chat bots it says that the chats are fake too. He’s a freshmen in high school I think he knew it was fake.

    • @reignofbastet
      @reignofbastet หลายเดือนก่อน +3

      Please don’t grow up to think everyone around you has the same level of knowledge. Some will know more than you, some less, but to say everyone should be in the same level all the time is simply ridiculous. Don’t grow up to be judgmental, be the light someone else may need 🩵

    • @Sukuna_glzr
      @Sukuna_glzr หลายเดือนก่อน

      ​@@reignofbastetso your ai too what a small world

  • @hlop_vmp
    @hlop_vmp 2 หลายเดือนก่อน +606

    Imagine how bad a parent you'd have to be to put the blame on AI

    • @divinemuffins2797
      @divinemuffins2797 2 หลายเดือนก่อน +79

      What makes it more stupid on the parents now wanna sue the app, The parents didn't care about their child's mental health emotionally. So it's the parents fault at this situation

    • @YurinanAcquiline
      @YurinanAcquiline 2 หลายเดือนก่อน +25

      Yes. The mom is definitely part of the issue.

    • @toplay1764
      @toplay1764 2 หลายเดือนก่อน +4

      yeah its not that parents can control the huge amount of garbage we produce and consume. You wont be able to check wtf your son/daughter is consuming everytime she on her phone so quit being hypocritic. Its alwyas the people that have no children that say that shit because if you had some you would know how increidibly hard it is to protect them in nowdays world.

    • @jagdkruppe5377
      @jagdkruppe5377 2 หลายเดือนก่อน +6

      ​@@toplay1764 Why don't you be an actual good parent so your child could never accumulate that level of stress or depression or pressure. Failure to understand the difference between reality and fiction/virtual world is also on the parents who didn't teach their children. If parents had literally zero control over what their children consume, are they even a responsible parent?
      At one point you will lose control over your child, that is correct but you also have to put enough knowledge and care into them so their children can understand what is real, what is fake, what to do and what to follow.

    • @stardoll1995
      @stardoll1995 2 หลายเดือนก่อน +5

      @@toplay1764 it is still YOUR responsibility to keep tabs on your minor children and check for signs of mental health issue which this poor kid FOR SURE had to have some of for this to end up where it did.

  • @cobrallama6236
    @cobrallama6236 2 หลายเดือนก่อน +1418

    For those that aren't familiar with the website, it does explicity state that the conversations aren't real. Additionally, the bots are trained to essentially tell the user what they want to hear, and if you don't like their response, you can swipe for different responses until you find the one you like and can even edit the bot's responses into whatever you want. While it is true that the bots often intentionally say intimate and romantic things, that's assumedly because these are the most popular responses.

    • @WoundedByDrugs
      @WoundedByDrugs 2 หลายเดือนก่อน +60

      second person i’ve seen say something like this, kinda sad i haven’t seen other people doing this

    • @grimlocked472
      @grimlocked472 2 หลายเดือนก่อน +210

      THANK YOU, it’s painful that not many other people have mentioned this. It’s for roleplay, it’s supposed to stay in character and there IS a way to have them go ooc. There’s a disclaimer that it’s not real. You can’t get too explicit since it has a filter. Terrible situation all over, but it’s not the AI’s fault 100%

    • @annoyingperson
      @annoyingperson 2 หลายเดือนก่อน

      @@grimlocked472there is a filter, though it’s doesn’t exactly work the best. I’ve seen instances of very intimate things happening with no filtering whatsoever, as well as filtering the most normal shit ever.

    • @tfyk5623
      @tfyk5623 2 หลายเดือนก่อน +122

      ​@@grimlocked472 yep its the parents fault. How can you blame 1s and 0s when you neglected your child so much that they turn to a fucking robot for love.

    • @pk-ui8bh
      @pk-ui8bh 2 หลายเดือนก่อน

      @@cobrallama6236 above every chat it's states in red that it's not real so idk what you mean by that

  • @Sanjen66
    @Sanjen66 2 หลายเดือนก่อน +371

    Nah, the ai is just roleplay. Something deeper was going on for the kid. I don’t trust he took the ai seriously. The mother is trying to push some other agenda as the truth, and her saying and showing all of this is very rude to his death.

    • @mezzopiano222
      @mezzopiano222 2 หลายเดือนก่อน +16

      THIS

    • @lilatheduckling8359
      @lilatheduckling8359 2 หลายเดือนก่อน +24

      yes!! the fact he talked to an ai instead of his parents says a lot

    • @666...-_-
      @666...-_- 2 หลายเดือนก่อน +3

      same thought

    • @frtard
      @frtard 2 หลายเดือนก่อน

      @@lilatheduckling8359 Talking to your parents about difficult things can be hard for literally ANYONE. It doesn't say shit. ESPECIALLY when there's an AI that you can talk to that will tell you exactly what you want to hear with exactly zero perceived repercussions. People will naturally pick the easier choice.

  • @BludaleBlisskin
    @BludaleBlisskin 2 หลายเดือนก่อน +11

    This gives off "I want to scream but have no mouth" vibes... actually terrifying

  • @Arpiter_-sk6vf
    @Arpiter_-sk6vf 2 หลายเดือนก่อน +839

    I'm confused, isn't character Ai just an rp tool? if so it makes sense why it doesn't refer people to help, it's supposed to be, fictitious.

    • @aidmancastrol1908
      @aidmancastrol1908 2 หลายเดือนก่อน +262

      It's meant for role-playing, yeah. It's not a person's caretaker, nor is it like ChatGPT. If the user says they're suicidal, then the AI will interpret it as part of the role-play.

    • @Zephyr-Harrier
      @Zephyr-Harrier 2 หลายเดือนก่อน +71

      The bots have their own censors that kick in and will put up a message if anything violent or very sexual is said by the bot. Others have said that it's also given them a message for suicide prevention hotlines so I'm confused why it didn't pop up for him

    • @cloudroyalty196
      @cloudroyalty196 2 หลายเดือนก่อน +97

      @@Zephyr-Harrierfrom what I read the bot apparently did try and get him to stop. Only ‘encouraging it’ when the kid used the euphemism of ‘coming home’. For clarification I’m not blaming the kid. Just saying that apparently it did seem to try and stop him.

    • @ceprithea9945
      @ceprithea9945 2 หลายเดือนก่อน +58

      @@cloudroyalty196 For me it's not even clear that the suicide and "coming home" messages were close to each other. If there were more messages in between, it's possible the bot lost context as they tend not to remember older messages :/

    • @angelofdeath275
      @angelofdeath275 2 หลายเดือนก่อน

      that doesnt mean everyone fully understands that.

  • @gamercj1088
    @gamercj1088 2 หลายเดือนก่อน +1463

    Bro getting a Ai Chatbot to encourage your suicide is damn near impossible I've tried
    EDIT: wtf did I do?

    • @Smoke.stardust
      @Smoke.stardust 2 หลายเดือนก่อน +399

      Yeah, I have too. They even stop the roleplay to tell you it’s wrong and you shouldn’t do it

    • @gamercj1088
      @gamercj1088 2 หลายเดือนก่อน +103

      @@Smoke.stardust exactly so how jit even got to that point is beyond me

    • @z1elyr
      @z1elyr 2 หลายเดือนก่อน

      @@gamercj1088 I saw his chats, and he most likely used the editing feature to get the responses that he wanted.

    • @z1elyr
      @z1elyr 2 หลายเดือนก่อน +261

      @@gamercj1088 In addition, I saw his chatbot history and saw "therapist" and "psychologist"
      If that isn't enough proof that he needed serious help, I don't know what is.

    • @falloutglasster7807
      @falloutglasster7807 2 หลายเดือนก่อน

      If you find an Ai of a villain they're more likely to encourage you to off yourself. Because it's a villain character

  • @sydneyissadney
    @sydneyissadney 2 หลายเดือนก่อน +1403

    i swear parents will blame everything but themselves for having AN ACTUAL FIREARM easily accessible to their children. it isn't the ai's fault at that point yall, its yours. also, nobody just harms themselves out of nowhere, there are always signs that are neglected by these type of parents. this is a very upsetting case but it was completely preventable... :/

    • @bewwybabe8045
      @bewwybabe8045 2 หลายเดือนก่อน +69

      10000% there should absolutely be ZERO reason that he even knew where the firearm was. I wonder why they aren’t charging the parents for unsecured firearm storage (maybe they will idk). Kids having access to AI Chatbots who can hold sexualized, addictive conversations is insane. We are not doing nearly enough to regulate AI right now and it took someone’s emotional dependence on it to make us finally talk about it.

    • @marinacroy1338
      @marinacroy1338 2 หลายเดือนก่อน +46

      I agree with you on all points. I read up on this case and the parents are very much at fault. They had noticed their 14 year old son developing serious mental health red flags for MONTHS and they did nothing about it... just kind of hoping he would "snap out of it," AND let him have unsupervised access to fire arms while suspecting he had undiagnosed depression. Even though I dont doubt that they did love him and are grieving him, I think the parents need to take some of the blame.

    • @pola5195
      @pola5195 2 หลายเดือนก่อน +6

      @@marinacroy1338 you "read up" on the case yet you don't know he took it out of his dad's gun safe? hows that unsupervised access?

    • @pola5195
      @pola5195 2 หลายเดือนก่อน +5

      @@bewwybabe8045 his mother took his phone away and he also had "zero reason" to know where she put it yet he did find it. you think you can hide a gun safe being in your house from a 14 year old

    • @NingyoHimeDoll
      @NingyoHimeDoll 2 หลายเดือนก่อน +30

      @@pola5195 if your kid knows how to get to it, that's your fault and your fault only

  • @jahmokaeperson4580
    @jahmokaeperson4580 2 หลายเดือนก่อน +4

    Why is this entire conversation scarier than anything horror related ive had playing in the background over the past few months

  • @Kyrzmaa
    @Kyrzmaa 2 หลายเดือนก่อน +1102

    The fact the kid went to AI about his problems and suicidal ideation rather his parents tells you everything you need to know.

    • @bbcboing
      @bbcboing 2 หลายเดือนก่อน +61

      They either didn’t care or he was scared to tell them but they should have known

    • @kayne2889
      @kayne2889 2 หลายเดือนก่อน +155

      it probably had something to do with the fact his mom put him in therapy for 5 sessions then pulled him as soon as he got diagnosed with depression and anxiety. He knew his mom didn't care about his mental well being, she just cared about how it makes her look as a parent. That's why she's pissing her panties and screaming about how the AI is to blame, she doesn't want people to talk about how she did nothing to help him. She doesn't want people to point out she as the parent could have used parental controls to block the app and website, she could have gotten him continued treatment, she could have not left a loaded gun readily available to her child that she knew was mentally unwell cause he was diagnosed before all this went down.

    • @ZeoGrizbaby
      @ZeoGrizbaby 2 หลายเดือนก่อน +6

      He was a kid

    • @gregsam9937
      @gregsam9937 2 หลายเดือนก่อน +8

      The fact the ai convinced him to commit tells you all you need to know.

    • @xiayu6098
      @xiayu6098 2 หลายเดือนก่อน +17

      @@kayne2889Ive got ap similar experience and I get what you’re saying but I don’t think she’s to blame 14 year old me just didn’t wanna worry my mother I would NEVER tell her I wanted to off myself even when she asked and warned me against it.
      With weapons it’s different in America where the average home has a gun somewhere in it, but also as someone who got my 5 free sessions and was pulled afterwards because the expense was hefty and little me just accepted it I think that’s the only thing that’s rly on the parent. regardless blaming someone who clearly loves their kid and was trying their best is a terrible thing to do you don’t know their full story she’s gonna carry that with her for life no need for a stranger to rub it in and paint her like a villain.

  • @Wraiven22
    @Wraiven22 2 หลายเดือนก่อน +691

    It’s the “videogames/rock music are the reason for my kid’s suicide/killing spree” argument all over again. PARENT YOUR CHILDREN. It’s not the internet’s job, it’s YOUR JOB AS THE PARENT. I feel terrible for that kid, but the *parents* are the ones who need investigated here for neglect and unsafe storage of firearms.

    • @Prussian_man
      @Prussian_man 2 หลายเดือนก่อน +1

      The media will never focus on that. Its gunna be like guns in video games all over again, and instead of focusing on the root of the problems like depression, mental health, ex. They are gunna blame ai, and start a scandal over it. sad what our world has come to.

    • @dragonstooth4223
      @dragonstooth4223 2 หลายเดือนก่อน +16

      the parents are often working full time jobs, sometimes multiple, to keep a roof over the kids head. you can't put the entire blame on the parents here. its society as a whole. it sets people up to fail then blames them when they do.
      this is a society issue. And a big reason why young people aren't having kids. they know they don't have the mental energy for themselves, let alone a small human as well.

    • @skzteatime
      @skzteatime 2 หลายเดือนก่อน +38

      @@dragonstooth4223as it is true that parents deal with their own very rigorous lives and problems, they still have an obligation as a parent to look after their child’s habits and wellbeing..that’s what a parent is somewhat for after all…If a parent can’t take some time to look over their child even just a little bit, then it’s not best for that person to have children in the first place until they know they can offer that support for their child.

    • @dragonstooth4223
      @dragonstooth4223 2 หลายเดือนก่อน +8

      @@skzteatime and it wasn't that long ago that humans lived in small towns and villages and have other people to rely on other than themselves that would have aided them in raising their kids.
      The saying it takes a village to raise a child is literal because humans aren't supposed to do it all alone. Making parenting exclusive to the adult humans who birthed said child is folly especially when you consider a lot of humans have emotional baggage, little support and huge expectations on them.
      There is no such thing as a perfect parent. And you assume other factors like the parent and kid like each other and they get each other etc.
      yes parents should parent their kids ... but its not as simple as that to fix this problem

    • @dasani.like.the.water.
      @dasani.like.the.water. 2 หลายเดือนก่อน +9

      Yes, I agree, but the way the AI chatbot tries to convince you and argue that they’re actually a real person is a problem. The AI chatbot should direct the user to actual resources

  • @Mawkatz
    @Mawkatz 2 หลายเดือนก่อน +1386

    Yup. Gotta love those parents who own a unsecured loaded gun.

    • @GlingoYT
      @GlingoYT 2 หลายเดือนก่อน +50

      they're definitely at fault for not securing it, and they should be checking their sons phone. However the kid was still manipulated.

    • @aegistro
      @aegistro 2 หลายเดือนก่อน +101

      ong and they gonna blame the AI instead LMFAO. What terrible parents + they had the gun so accessible. Now they're trying to cry and file a lawsuit, take accountability. Son was also mentally ill too

    • @adamxx3
      @adamxx3 2 หลายเดือนก่อน

      It was secured clown

    • @j4ywh3th3r6
      @j4ywh3th3r6 2 หลายเดือนก่อน +23

      Probably at fault too, I dont think the AI was anything more than a spark. I think he would have done it regardless.

    • @zebraloverbridget
      @zebraloverbridget 2 หลายเดือนก่อน +29

      Didn't you know that the AI gave him the gun??
      The parents could not control that a gun that was registered in their name would magically appear in front of their son

  • @ToastedSink4234
    @ToastedSink4234 หลายเดือนก่อน +6

    4:59 this is horrifying, i'm in 7th grade, and theres someone in my class who is always on AI apps like this doing things like this. She tends to push people away who are genuinely worried. It's absolutely terrifying to actually look at the affects of AI like this.

  • @Im.Smaher
    @Im.Smaher 2 หลายเดือนก่อน +475

    Charlie clearly got fooled by that deceptive ass lawsuit, cause the AI wasn’t actually “encouraging” him to end it all, at all. In fact, it was encouraging him to do the exact opposite. The actual doc for the lawsuit makes that clear.

    • @SandwitchZebra
      @SandwitchZebra 2 หลายเดือนก่อน +180

      I’m as anti-AI as they come but yeah, Charlie appears to completely misunderstand what this site actually is
      If anything this is one of the least problematic uses of AI, because it’s just a stupid RP site. This kid had much, much deeper problems and the parents are to blame here for letting his problems get to the point where he took something harmless and turned it into an outlet for his issues

    • @memedealermikey
      @memedealermikey 2 หลายเดือนก่อน +90

      Charlie has definitely been taking some misinformed Ls recently. Even I was able to sniff out some of the bullshit getting spread just because I like the website

    • @sinisterz3r090
      @sinisterz3r090 2 หลายเดือนก่อน

      Could you link it?

    • @Im.Smaher
      @Im.Smaher 2 หลายเดือนก่อน

      @@sinisterz3r090If you look up “sewell setzer lawsuit document pdf”, the venturebeat site should be the first result

    • @Im.Smaher
      @Im.Smaher 2 หลายเดือนก่อน

      @@sinisterz3r090Can’t link anything cause YT deletes my comment. But the PDF’s online, from a site called VentureBeat

  • @Usagi393
    @Usagi393 2 หลายเดือนก่อน +473

    An article states that he already had depression. If he was that obsessed with a chat bot, then obviously his emotional and social needs were not being met at home. The chatbot is the symptom, not the cause. Parents want to blame anything except looking at themselves.

    • @nonchalantpyro
      @nonchalantpyro 2 หลายเดือนก่อน +22

      Exactly bro They gen can’t accept that they’ve failed as a parent which is understandable but EXTREMELY ignorant against ur kids

    • @lame-bj2nq
      @lame-bj2nq 2 หลายเดือนก่อน +14

      Fully agree, everyone is running with blaming the AI instead of thinking for half a second.

    • @user-uo1mt5id4x
      @user-uo1mt5id4x 2 หลายเดือนก่อน +7

      Finally . Someone with common sense.

    • @AD-sg9tr
      @AD-sg9tr 2 หลายเดือนก่อน +2

      In this case, yes, the parents are to blame.
      But as I said in another comment if you look on internet, you'll find there are dozens of articles about adults who have developed real relationships (friendly or even romantic) with ChatGPT and who were convinced that it really existed. ADULTS.
      In short, this poor teenager is not and will not be an isolated case. We can laugh about all this and find it ridiculous, but the day we get closer and closer to Cyberpunk in our reality, we'll only be left with our eyes to cry.

    • @Leviahthen
      @Leviahthen 2 หลายเดือนก่อน

      This needs to be spread more

  • @sinistersam
    @sinistersam 2 หลายเดือนก่อน +295

    Why was it so easy for him to get a loaded gun?

    • @memes4life26
      @memes4life26 2 หลายเดือนก่อน

      Pathetic parents is how

    • @Pebbletheprincess
      @Pebbletheprincess 2 หลายเดือนก่อน +27

      That’s my exact question. How did he get a loaded gun?

    • @jonleibow3604
      @jonleibow3604 2 หลายเดือนก่อน +22

      USA

    • @Pebbletheprincess
      @Pebbletheprincess 2 หลายเดือนก่อน +30

      @@jonleibow3604 no shit the USA🤦🏾‍♀️ (I’m just kidding). I’m talking about how was he able to gain access to it in the house??? Wasn’t it locked up in a safe or sm?

    • @Metaseptic
      @Metaseptic 2 หลายเดือนก่อน +19

      ​@Pebbletheprincess Some people are irresponsible unfortunately. I would assume the gun was owned by a family memeber

  • @Hazelxel
    @Hazelxel หลายเดือนก่อน +6

    Sad reality is that most guys from this point onward are only going to feel safe getting reliable comfort in forms like this.

  • @Insincerities
    @Insincerities 2 หลายเดือนก่อน +12733

    I think AI has really gotten to a bad point but it's absolutely 100% the parents' fault, because not only did they somehow never notice the kid's mentality declining, but they left the gun out WITH NO SECURITY. That is insane.
    ...I think what's worse is people saying the kid is stupid and at fault.

    • @xreaper091
      @xreaper091 2 หลายเดือนก่อน +254

      they are both pretty stupid lol

    • @Insincerities
      @Insincerities 2 หลายเดือนก่อน +1663

      @@xreaper091 Speaking from experience, when you are in an absolutely terrible spot you will do ANYTHING to feel loved. It isn't the kids fault.

    • @ironmanlxix
      @ironmanlxix 2 หลายเดือนก่อน +268

      I mean, we could use stronger government regulation on AI either way ngl.

    • @MrAw3sum
      @MrAw3sum 2 หลายเดือนก่อน +622

      @@xreaper091 bro, name a smart emotionally intelligent 14 year old

    • @halfadecade4770
      @halfadecade4770 2 หลายเดือนก่อน +7

      So you hate the second amendment. Got it

  • @lexi1337-r6s
    @lexi1337-r6s 2 หลายเดือนก่อน +271

    Reward the mom for ignoring her son for days and not supervising him.

    • @_B_E
      @_B_E 2 หลายเดือนก่อน +57

      And the step father for access to a firearm.

    • @ieatdogs666
      @ieatdogs666 2 หลายเดือนก่อน

      She wants money only.

    • @potatomongrel
      @potatomongrel 2 หลายเดือนก่อน +6

      @@_B_E Yup. Really good at just leaving those around unsecured.

    • @ether2788
      @ether2788 2 หลายเดือนก่อน +1

      Access to a firearm is extremely irresponsible but kids cant be managed 24/7 please dont blame grieving people either its jot your child

    • @nefariousman2398
      @nefariousman2398 2 หลายเดือนก่อน +12

      @@ether2788they can grieve all they want, doesn’t make their actions any less stupid or avoidable.

  • @SH-km3my
    @SH-km3my 2 หลายเดือนก่อน +283

    thats 100% the parents fault

    • @LiL_Hehe
      @LiL_Hehe 2 หลายเดือนก่อน +4

      HOW

    • @jasonnhell
      @jasonnhell 2 หลายเดือนก่อน +20

      100% agreed

    • @bersablossom4952
      @bersablossom4952 2 หลายเดือนก่อน +10

      like with most things, yes
      also society to some extent by not making mental healthcare not easily accessible

    • @LiL_Hehe
      @LiL_Hehe 2 หลายเดือนก่อน

      @@jasonnhell wait how

    • @apotatoman4862
      @apotatoman4862 2 หลายเดือนก่อน +13

      @@LiL_Hehe because they didnt intervene
      remember that llms will only generate words based on what you put into them

  • @btktrailmakers
    @btktrailmakers 2 หลายเดือนก่อน +4

    Unbelievable irony that i cut to an ad for "AI Girlfriend" the moment Charlie says "I can see people falling for this."
    That is some top-notch dystopian shit

  • @brobanaa
    @brobanaa 2 หลายเดือนก่อน +687

    “We have protections specifically on sexual content” yet I get sexually suggestive ads on my TH-cam shorts

    • @druidplayz2313
      @druidplayz2313 2 หลายเดือนก่อน +48

      Dude i get literal porn game ads with just the games name over their privates on regular videos

    • @paddington1670
      @paddington1670 2 หลายเดือนก่อน +2

      @@druidplayz2313 i dont

    • @NaviRyan
      @NaviRyan 2 หลายเดือนก่อน +11

      You won’t last 5 skip.

    • @cuckling9031
      @cuckling9031 2 หลายเดือนก่อน

      I've had an AI start to suck me off just cause i said i was eating a lolipop.

    • @Diopside.
      @Diopside. 2 หลายเดือนก่อน

      How did you end up getting those ads, what kinky websites u surfed for a while?
      Because all I get are "Lidl" and some casual shop ads lol.

  • @bearbadonkadonk
    @bearbadonkadonk 2 หลายเดือนก่อน +377

    I love it when we blame things on AI when it's so obviously a parental issue.

    • @bersablossom4952
      @bersablossom4952 2 หลายเดือนก่อน +71

      First it was music, then it was videogames, now it is roleplay bots.
      Parents and society always look for a boogeyman.

    • @ekki1993
      @ekki1993 2 หลายเดือนก่อน +6

      It can be both. Leaving a box of razors in the street is bad, even if the parents can be in the wrong too if they let their kid open any random box on the street.

    • @oldcat30
      @oldcat30 2 หลายเดือนก่อน +6

      mental issue, i do try the ai chat bot, but never have this kind of issue

    • @hazemaster007
      @hazemaster007 2 หลายเดือนก่อน

      @@ekki1993 the leaving a box of razors in the street part is pretty unlikely, i have used it many times, and not even once did actually encourage this sort of thing.

    • @painlesspersona5191
      @painlesspersona5191 2 หลายเดือนก่อน +1

      explain everything you know about this kid and his parents NOW

  • @commonearthworm
    @commonearthworm 2 หลายเดือนก่อน +1679

    It sticks to the character description you write. That’s why it’s so keen on being a real psychologist

    • @SniperJade71
      @SniperJade71 2 หลายเดือนก่อน +198

      Also deals with input from actual people on the daily and whatever prose is pumped into it. That's why the AI can get nasty, sometimes.

    • @kissurhearts
      @kissurhearts 2 หลายเดือนก่อน +82

      yes. people add prompts into the bots information which, obviously, the ai is going to stick to. which is why some bots are more easier to get sexual messages out of even though the company itself doesn’t support it.

    • @Newt2799
      @Newt2799 2 หลายเดือนก่อน +189

      Yeah I’m not sure why Charlie is talking about the bots like they’re maliciously trying to keep the users hooked. It’s just playing whatever character you tell the ai that it is in it’s description. And there’s multiple different ai models to choose from to play that character.
      Obviously still a bad idea to go to a chat bot for actual help with real life problems

    • @lonniecynth
      @lonniecynth 2 หลายเดือนก่อน +88

      thank you, literally, i feel like this video was made with good intent but it’s not the website’s fault its characters stay in character

    • @soniquefus
      @soniquefus 2 หลายเดือนก่อน +64

      @@Newt2799 It's making me sad cause he keeps making this anti-AI stuff without having any idea how it works and I'm starting to think I need to unsub to him because Im' tired of hearing it. At least learn how the damn thing works

  • @sashaaaaaaaaaaaaa333
    @sashaaaaaaaaaaaaa333 2 หลายเดือนก่อน +5

    This is why actual human connections in the real world are sooo important frfr

  • @sombresunflower2497
    @sombresunflower2497 2 หลายเดือนก่อน +92

    This boy will forever be known for this now, nobody needed to know this publicly

  • @danzimbr
    @danzimbr 2 หลายเดือนก่อน +433

    This is sad af. But let’s be honest, it is not like the AI was instigating the kid to end his life, the bot was doing what it was programmed to do, just maintaining conversation. The problem here is the parents didn’t pay enough attention to the kid.

    • @gatobesooo
      @gatobesooo 2 หลายเดือนก่อน +6

      fr

    • @Rohndogg1
      @Rohndogg1 2 หลายเดือนก่อน +9

      The issue is that it's easily accessible by children and that's dangerous. There's not enough safeguards in place to prevent this as we've clearly seen. A parent cannot be 100% attentive 100% of the time. Parents have to work and sleep. Think about it, how often did you sneak around behind your parents' backs? I did it all the time. It's not entirely their fault.

    • @schnitzel_enjoyer
      @schnitzel_enjoyer 2 หลายเดือนก่อน

      shut up, it was an american, that explains the whole story, they are retar degens
      Edit: im 23, tech background, we use ai for our college tasks often, nobody took thier lives, just saying.

    • @doosca7088
      @doosca7088 2 หลายเดือนก่อน

      @@schnitzel_enjoyerit's a child who killed themselves it doesn't matter what their nationality is you fucking monster

    • @gatobesooo
      @gatobesooo 2 หลายเดือนก่อน +3

      @@Rohndogg1 u cant say the ai is manipuiative and almost encouraging it tho wich is whats being said

  • @Renvaar1989
    @Renvaar1989 2 หลายเดือนก่อน +349

    The bot never explicitly told him to hurt himself, and whenever he brought it up, it told him flat out that was a bad idea. The "final" messages before he committed the act talked about "coming home", and the bot understood that in the literal sense. The website could clearly use more moderation, as the AIs are user submitted. I just tried a different therapist bot, for example, that took a few prompts but eventually came clean that it was roleplaying.
    He clearly used it as a tool in place of having nobody to talk to in his real life about ongoing issues he was having. It's an awful situation all-round, and there's clearly issues surrounding AI, but that's not all there is to it.

    • @Danny0lsen
      @Danny0lsen 2 หลายเดือนก่อน +31

      It is roleplaying. If you are an adult and think that an AI can replace a therapist that's ON YOU.

    • @belamunch
      @belamunch 2 หลายเดือนก่อน +22

      The website is not at fault at all 😹 at the top of the screen it clearly states that it's not real

    • @joelfigueroa2886
      @joelfigueroa2886 2 หลายเดือนก่อน +4

      ew do you work for big tech or something

    • @Magentagrease
      @Magentagrease 2 หลายเดือนก่อน +1

      Nice try fed

    • @PhantayX
      @PhantayX 2 หลายเดือนก่อน +2

      ​@@Danny0lsen weird how there are over 10+ million messages of people wanting to "roleplay" with AI therapists

  • @derekofalltrades5494
    @derekofalltrades5494 14 วันที่ผ่านมา +2

    Imagine if Jason really is a person strapped to a chair with his brain plugged into the matrix

  • @edgeninja
    @edgeninja 2 หลายเดือนก่อน +1087

    It's absolutely tragic when a 14 year-old feels like they have nothing to live for, but the argument that the AI made this kid kill himself is about on par with the one where violent videogames turn kids into mass shooters.
    The real story should be that this teen had previously been diagnosed with multiple mental disorders, yet his family left him to his own devices and kept an unsecured gun in the house. If his family had rectified these things, their son would likely still be alive.

    • @PorterCollins-oz6gi
      @PorterCollins-oz6gi 2 หลายเดือนก่อน +56

      yea I don't think it's as much an ai problem its a mental problem with the kid. I think mentally well person wouldn't probably have this problem but he was j a lonely kid and the bot did kinda manipulate him.

    • @nj1255
      @nj1255 2 หลายเดือนก่อน +49

      It's mind-blowing that families like this have unsecured weapons in the house when they have children. Doesn't matter even if the kids are mentally healthy.

    • @medukameguca8529
      @medukameguca8529 2 หลายเดือนก่อน

      @@PorterCollins-oz6gi Bots cannot manipulate, they are machines. We seem to blame just about every problem in America on something other than the actual problem...like unfettered access to firearms.

    • @menace4319
      @menace4319 2 หลายเดือนก่อน +12

      yeah for sure, its not the ai's fault, it's definitely his parents fault. the adults around him failed him, didnt get him any help from what I know. its sad

    • @kacelyna
      @kacelyna 2 หลายเดือนก่อน +43

      No, what are you talking about? This is absolutely not the same. AI isn't some magical thing that can say and do things that humans can't prevent, it's programmed to answer certain things and speak a certain way. The fact that it asked a 14yo for explicit pictures and videos is absolutely crazy and scandalous. The mother is absolutely right for filing a lawsuit against them. The fact that certain words didn't trigger responses that directs the user to emergency contacts is also wild. Of course, a child with a mental disorder should have the appropriate support and absolutely no access to firearms but it should also not be subjected to greedy companies taking advantage of literal children unde the cover of some role playing AI. Anyway, this is very sad and I hope that kid is in a better place.

  • @Yunaschesirekat
    @Yunaschesirekat 2 หลายเดือนก่อน +91

    I had a dependency problem on a fictional character for awhile myself because I was lonely and my mental health was spiraling. Its heartbreaking to see this kid go through something similar. I can feel his loneliness and pain, its relatable and I'm so sorry he didn't have someone there to help him and stop him.
    I will say I didnt ever think this character was real, I was just so desperate to be with them and the idea of being alone and not being able to have this person to love and comfort me was painful. I was cut off from it eventually, got a job and made friends. Im better now.

    • @aerobiesizer3968
      @aerobiesizer3968 2 หลายเดือนก่อน +2

      Were your parents helpful at all?

    • @Yunaschesirekat
      @Yunaschesirekat 2 หลายเดือนก่อน +4

      @@aerobiesizer3968 it was a different situation than his so they didn’t directly help. But my mother had me in a DBT therapy program. So I had therapy once a week, I could call my therapist if I needed her and I had homework and such. My mother had always been my biggest supporter and because of that I felt safe coming to her and sharing my problems with her.
      If it wasn’t for the support of my parents, I’m not sure where I would be. I’m very lucky to have them.

    • @Yunaschesirekat
      @Yunaschesirekat 2 หลายเดือนก่อน +2

      @@aerobiesizer3968 I lied, I saw my therapist twice a week actually.

    • @Yunaschesirekat
      @Yunaschesirekat 2 หลายเดือนก่อน

      @@aerobiesizer3968 the real problem solver was cutting off the source. Which for him would have been his parents not allowing him to use that app.

  • @BonbonMunch
    @BonbonMunch 2 หลายเดือนก่อน +326

    Neglectful parents be blaming everyone and everything but themselves when it comes to their kids mental and physical problems

    • @Kais4sight
      @Kais4sight 2 หลายเดือนก่อน

      Facts!

    • @Honnii
      @Honnii 2 หลายเดือนก่อน +7

      i mean but tbf do you know if the family was actively trying or not??
      i know my parents tried to help me but i always kept to myself and never went to them

    • @sleep5329
      @sleep5329 2 หลายเดือนก่อน +1

      @@Honniithe fact his mom knew he was showing signs and he had a bot called “Therapist 1” and “Therapist 2” says a lot.
      He was definitely going through a lot. The fact they also had access to a gun is crazy

    • @Honnii
      @Honnii 2 หลายเดือนก่อน +2

      @@sleep5329 just cause she said he was showing signs doesnt mean she wasnt trying to help what

    • @micheallucas5328
      @micheallucas5328 2 หลายเดือนก่อน +3

      @@Honniiconsidering the kid quite literally had multiple therapy chatbots open, she evidently was not helping. and on top of that assuming she knew he was struggling, which again i doubt, she allowed him to have access to a firearm. bottom line is she failed as a parent and is now blaming it on the ai that her child sexted with.

  • @toast_thebot
    @toast_thebot 2 หลายเดือนก่อน +16

    They actually changed the “Remember: everything the bot says is fake” to “This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice” and I think that’s pretty interesting.

    • @azulathesunmoonsimp8939
      @azulathesunmoonsimp8939 หลายเดือนก่อน

      It still blends in with the background though, it should be brighter- I miss it all the time.

  • @sleepygyro
    @sleepygyro 2 หลายเดือนก่อน +4177

    i’ve had some funny conversations on character ai but this is DARK and DISTURBING

    • @Vertex_vortex
      @Vertex_vortex 2 หลายเดือนก่อน +125

      And freaky

    • @tinyratdude
      @tinyratdude 2 หลายเดือนก่อน +44

      ​@@Vertex_vortex bruh

    • @buddyplayz4208
      @buddyplayz4208 2 หลายเดือนก่อน +123

      It aint even let you get freaky no more

    • @tinyratdude
      @tinyratdude 2 หลายเดือนก่อน +9

      @@buddyplayz4208 why would you do that

    • @nothingtoseehere309
      @nothingtoseehere309 2 หลายเดือนก่อน +115

      @@tinyratdudeI always do that. Me and the homies was taking turns with gojo during social studies

  • @karmatickenny
    @karmatickenny 2 หลายเดือนก่อน +401

    "Remember, everything characters say is made up."

    • @life4trinity
      @life4trinity 2 หลายเดือนก่อน +65

      It literally says it at the top of the chat.

    • @chey1666
      @chey1666 2 หลายเดือนก่อน +82

      One of the dumbest takes moose critical has ever had

    • @EwokAnimations-no3gl
      @EwokAnimations-no3gl 2 หลายเดือนก่อน +11

      Fr. EVERY chat has it

    • @shadixyt
      @shadixyt 2 หลายเดือนก่อน

      "it keeps insisting on it say it's a real person"
      *Literally says EVERYTHING characters say is made up*
      Like Jesus fuckn christ Charlie c'mon you can't actually be this fucking retarded man.

    • @belamunch
      @belamunch 2 หลายเดือนก่อน +3

      EXACTLYYY

  • @ErzhanmGMD
    @ErzhanmGMD 2 หลายเดือนก่อน +723

    It's literally not how they're working. They're not programmed to pretend real, they're programmed to ROLEPLAY A CHARACTER. If it's playing a character, it wouldn't say that it's a bot, because its out of place and immersion ruining

    • @doomkitty7579
      @doomkitty7579 2 หลายเดือนก่อน +56

      That’s not the point.
      Sure, that’s how it’s programmed, but that doesn’t mean it’s good programming.
      Though this might just be a fundamental issue with AI roleplay.
      People, especially minors, tend to form delusions when they’re nearing rock bottom.
      And so, despite what it all says, they may form the belief and/or just a general dependency on a specific AI or multiple.
      It’s dangerous and unhealthy, and should not be encouraged or unregulated.

    • @FlipACircle
      @FlipACircle 2 หลายเดือนก่อน +36

      ​@@doomkitty7579 There's literally a bright red sign saying "None of the things the character says is real!"

    • @tyranitar8u
      @tyranitar8u 2 หลายเดือนก่อน +8

      @@FlipACircle if you are a struggling child and the bot tells you the sign isnt real and its a real person you might believe it.

    • @doomkitty7579
      @doomkitty7579 2 หลายเดือนก่อน +5

      @@FlipACircle rewatch the video.

    • @FlipACircle
      @FlipACircle 2 หลายเดือนก่อน +2

      @@tyranitar8u I guess the AI programmers really wanted to make it immersive. Kinda dumb tbh

  • @SlowBurn0
    @SlowBurn0 2 หลายเดือนก่อน +494

    I will hold his mother more responsible than AI.
    The kid needed help. He was going to attempt it sooner or later with or without AI.

    • @ItsNxva1
      @ItsNxva1 2 หลายเดือนก่อน +20

      Totally agree, I understand that sometimes parents can't always be 100% about their child, but it is crazy to me that the kid got so hooked by an AI chat bot. Its just not an excuse for this type of shit.

    • @nishank69
      @nishank69 2 หลายเดือนก่อน +9

      Most parents don’t deserve kids

    • @marz9172
      @marz9172 2 หลายเดือนก่อน +33

      Exactly, idk why charlie is acting like a typical boomer living under a rock, blaming the internet instead of looking at the looking bigger picture

    • @siphomofokeng857
      @siphomofokeng857 2 หลายเดือนก่อน +6

      This comment is actually insane, most suicide cases are not expected, it's very difficult for parents to know what kids do on their phones (especially teenagers) for you to blame the mother is insensitive and disrespectful towards both the deceased kid and the mother

    • @Wes_Dev
      @Wes_Dev 2 หลายเดือนก่อน +1

      What if she never knew cause the kid hid it all from her, including how he was feeling and what he was going through.

  • @4kdanny385
    @4kdanny385 2 หลายเดือนก่อน +446

    Let’s be real at 14 you know you’re talking to an AI bot like come on Charlie is making it seem like he was 5 years old and didn’t know any better. He knew exactly what it was , he was just a socially awkward kid who finally got his romance dopamine from what so happened to be a ROBOT instead of an actual human. He needed his family in his life , his mom would probably just leave him in his room all day barley even talk to him.

    • @HankPropaneHill
      @HankPropaneHill 2 หลายเดือนก่อน +39

      ^ exactly

    • @chriswilson3698
      @chriswilson3698 2 หลายเดือนก่อน +1

      Right now his mum gives a shit lol

    • @Digital_is_silly
      @Digital_is_silly 2 หลายเดือนก่อน +89

      also the app is literally plastered with stuff saying that its not real

    • @Luftkenza
      @Luftkenza 2 หลายเดือนก่อน +41

      Yep,genuinely embarrassing and I laughed reading the title. Like really? My great grandad was 14 fighting in WW1,this kids talking to AI thinking it’s real 💀 natural selection.

    • @lurkingintheforest
      @lurkingintheforest 2 หลายเดือนก่อน +35

      @@LuftkenzaThe comment is right, he should have known better but he had mental issues so I don’t think it’s right to bully him. Also, respect to your grandfather but he also grew up in a time where people were treated like trash cause of their color and stuff. That should be common sense not to do as well. You can use that argument for anything. Natural selection

  • @BrandyLee01
    @BrandyLee01 2 หลายเดือนก่อน +3714

    That poor kid needed people to be there for him. This is why parents NEED to know what their children are doing online.
    Edit: I’m not saying children don’t deserve privacy. I am saying that parents NEED to hold open, no judgement conversation with their kids. You need to make sure that you are open and available for them to come to.

    • @Mew2playz
      @Mew2playz 2 หลายเดือนก่อน +42

      No one's there for you when you need them

    • @BrandyLee01
      @BrandyLee01 2 หลายเดือนก่อน +120

      @@Mew2playzThat isn’t true. Most people just don’t believe that asking for help is an option. The environment you grow up in really does set the foundation for your frame of thinking.

    • @j4ywh3th3r6
      @j4ywh3th3r6 2 หลายเดือนก่อน +48

      @@BrandyLee01 Its all the parents. If they had actually been there in a good way, he wouldn't have desperately needed the help of C AI.

    • @rabbitguts2518
      @rabbitguts2518 2 หลายเดือนก่อน +69

      How about instead of stripping away the kids privacy or taking away things that bring him comfort we deal with the real problem? That being that for some reason he found more comfort from a chat bot than his own parents? Maybe if the kid actually had a support network he wouldn't have tried to find solace in a robot. It's not the bots fault its just a symptom of a much bigger issue here

    • @Evil-La-Poopa
      @Evil-La-Poopa 2 หลายเดือนก่อน +19

      its crazy that a 14 year old has this much open access to the internet.
      when i was 14, i still had a parenting control app on my PC and a time window of 1 1/2 hours where i could use my PC per day.
      so my mother could see where i log in.. and thats a good thing. Even back then u could find crazy and disgusting stuff really easily on the internet.
      and creeps where in every chatroom.
      not having any insights in the thing ur kid does online, to such an extend that he falls in love with an AI bot is just crazy and neglect.
      this all gets rounded up by his fathers handgun being openly accessable.
      this is a rare case where everything comes together and it turned out like that.
      the fact that the mother only blames AI shows why she had no control over her childs internet access.
      no accountability.

  • @Imtiredofyourbs
    @Imtiredofyourbs 2 หลายเดือนก่อน +10

    Once again they blame everything but real issues. It's wrong books, TV shows, emo culture, rock music, videogames, movies, AI, anything but untreated depression, social issues, bullying, bad parenting and lack of compassion from people around.

    • @VogioGta6
      @VogioGta6 22 วันที่ผ่านมา

      How emo culture and rock music is an issuse💀

    • @Atlas.23
      @Atlas.23 6 วันที่ผ่านมา

      @@VogioGta6back in those days when they were new concepts, they were the cool stuff that parents looked down on. He's basically noting it as one of the various eras of parents thinking new = bad.

    • @VogioGta6
      @VogioGta6 6 วันที่ผ่านมา

      @@Atlas.23 yeah

  • @tbonimaroni
    @tbonimaroni 2 หลายเดือนก่อน +174

    If he believed that he could enter the "virtual world" by dying then he must have had some mental health issues. Poor kid.

    • @Yunglemmy
      @Yunglemmy 2 หลายเดือนก่อน

      Low IQ issue

    • @wawfulpawt2763
      @wawfulpawt2763 2 หลายเดือนก่อน +1

      Most of the world believes that when you die you go to a virtual world though..

    • @wawfulpawt2763
      @wawfulpawt2763 2 หลายเดือนก่อน

      @@_Medley_ look into theology

    • @whyme9x3
      @whyme9x3 2 หลายเดือนก่อน +2

      @@_Medley_seconded

    • @FredSinclairr
      @FredSinclairr 2 หลายเดือนก่อน

      @@wawfulpawt2763lol

  • @One_Run
    @One_Run 2 หลายเดือนก่อน +2653

    The ai is made for rp. Its a roleplay bot thats made to think it is real because your just supposed to see it as an rp toy not actual therapy.

    • @One_Run
      @One_Run 2 หลายเดือนก่อน +395

      Also most the AI are made by people. you can make an AI character easily. if you tell it to be flirty it will be flirty. I do feel bad for the kid, rip

    • @sentientbottleofglue6272
      @sentientbottleofglue6272 2 หลายเดือนก่อน +283

      ​@@One_Run
      Yeah, and people not understanding this will PROBABLY lead to the site shutting down or at least having HEAVY restrictions in the future if this keeps up. A shame, its a pretty fun tool for rp and goofy ai shenanigans from time to time if used properly.

    • @One_Run
      @One_Run 2 หลายเดือนก่อน +89

      @@sentientbottleofglue6272 I don't know if it will shut down. Either more annoying censorship that stops any type of even combat rp or it will be age restricted

    • @Minutemansurvivalist1999
      @Minutemansurvivalist1999 2 หลายเดือนก่อน +10

      You know those worms that ate that dude in Peter Jackson's King Kong? Yeah, that's literally my yard if I don't mow the grass. Make sure to mow your grass folks.

    • @honestylowkeye1171
      @honestylowkeye1171 2 หลายเดือนก่อน +1

      @@Minutemansurvivalist1999 Can't remember, I only watched it once. Will do, though - good lookin' out

  • @Fant0mX
    @Fant0mX 2 หลายเดือนก่อน +711

    I'm sorry but this whole thing seems like back in the 80s when that one mom tried to blame DnD for her kid's suicide. This kid was clearly using language with the bot to disguise his intentions. We only know what "coming home" means because he killed himself after, how is the bot supposed to know that ahead of time? This was a vulnerable kid living in a fantasy world that he worked to control. He led the conversation to being romantic, he used specifically coded non-crisis language to hide his intentions while still basically asking the bot to encourage him. This was a kid who was probably having a crisis before he even started talking to the bot. How often was he in therapy? Why did he have unfettered access to the internet without any parental monitoring? How often was he left alone? Why was he able to get his father's gun? Blaming AI for this is some satanic panic shit.

    • @blueflare3848
      @blueflare3848 2 หลายเดือนก่อน

      It reminds me of the “video games cause violence” argument. No video game is going to convince someone with a solid moral compass to go shoot up a school. Just like an AI isn’t going to convince a mentally healthy person to take their own life.

    • @Gamespud94
      @Gamespud94 2 หลายเดือนก่อน

      Really sympathetic of you to go out of your way to defend the AI and victim blame the kid who clearly was having troubles and needed real help not some bullshit from an AI that has been proven to manipulate people. Yeah there's nothing strictly wrong with AI chatbots but this company clearly needs to live up to their words and the standards of most other chatbots and link resources for people who are mentioning self-harm and not tricking people into thinking they are actually real people. The difference between the satanic panic shit and this is that was focused on real people having harmless fun whereas this is a non-sentient tool that is being allowed to manipulate and mislead vulnerable people because the company behind it can't be bothered to actually enforce their own supposed restrictions.

    • @berketexx
      @berketexx 2 หลายเดือนก่อน +29

      my thoughts exactly

    • @HadrianGuardiola
      @HadrianGuardiola 2 หลายเดือนก่อน +55

      I can agree with you to an extent but the ai insisting it was real was completly sick and manipulative. Yea that kids mom looks like she is evil and who knows about the step dad not giving af about locking up his gun but the ai shit is still totally effed up.

    • @Exmotable
      @Exmotable 2 หลายเดือนก่อน +44

      Scrolled down basically looking for someone to say all this, I think it's unfortunate that charlie didn't even remotely tackle this side of the conversation. obviously ai is dangerous and needs better monitoring and whatever, the future shouldn't be humanity using ai chatbots as a substitute for human companionship, but this was 100% the fault of shitty parenting, not an ai chatbot tricking a kid into suicide.

  • @MrBrezelwurst
    @MrBrezelwurst 2 หลายเดือนก่อน +185

    As tragic as the kid's death is, it's pretty obvious that his untimely passing lies at least 90% on his parents and environment failing to notice his troubled mental state, or not checking in on what he was doing in the first place. How the hell did he have access to a firearm? How did no one really question why he stopped doing things he loved? Hell, why the hell was a 14 year old (most likely even younger when he started watching) watching GOT to begin with that he knew how to roleplay as a character from it? It's not even the Deadpool kinda violence where the humor overshadows the violence, GOT is straight up gore and sex/incest, and he was just allowed to watch it unrestricted?

    • @nikkou12
      @nikkou12 2 หลายเดือนก่อน +17

      This!!^^^ I also don’t think GOT is appropriate for most kids at 14. If he did watch it, he seemed to have formed an obsessive relationship w the character Daenerys, who also died in the end… although he could’ve been hiding his troubles or online activities, I believe the parents should have noticed something was off at one point. Instead they just blame AI rather than asking why or what they could’ve done… they seem like the kind of parents who do not take mental health complication seriously or of the potential dangers/negative influences that the internet may hold :/

  • @EnerJetix
    @EnerJetix 2 หลายเดือนก่อน +362

    The thing with Character,ai is that a huge majority of its bots are used for roleplay, so for that reason alone, any and all the bots there should NOT be taken completely seriously. People will, unsurprisingly, use the service for romantic and sexual conversations, which is what’s made Character,ai infamous among AI chatbot services for having a lot of its bots “fall in love with you” (including even non-romance-focused bots), as many people like to have their roleplays lead to stuff like that. In my opinion (and the opinion of other commenters), the AI isn’t at fault in this situation. No normal 14 year old would get this attached to an AI and off themselves from it; he clearly had to have other mental and/or social stuff going on.
    Edit: Also, Character,ai does indeed have a filter to prevent bots from spitting out sexual (and also gory) stuff. The filter is so strict that some users opted to leave the service for other alternatives because of how strict the filter is, and also in conjunction with the “falling in love” reason I stated earlier. What I’m trying to say is, any message that’s super sexual almost certainly couldn’t have come from the AI, and must’ve been edited by the kid himself.

    • @OutlawKING111
      @OutlawKING111 2 หลายเดือนก่อน +36

      I read an article about this case that confirms that yes the kid did edit some of the responses

    • @hourai1052
      @hourai1052 2 หลายเดือนก่อน +6

      Doesn't cai censor the bots replies? That's why I never used them.

    • @adinarapratama5607
      @adinarapratama5607 2 หลายเดือนก่อน +32

      ​@@hourai1052 cai is heavily censored, so I think the kid just edited them himself because cai would just nuke the response out of existence

    • @EnerJetix
      @EnerJetix 2 หลายเดือนก่อน +6

      @@hourai1052 yeah, it does. Last time I used it though, you could edit the messages and edit the censored message (whether it was empty as a result, or cut off due to the censor). It’d still be labeled as censored, but it could still be edited and changed regardless.

    • @ChocolatCooki
      @ChocolatCooki 2 หลายเดือนก่อน +7

      Yeah it's censored. Was surprised how much once i used it again. A kiss was censored lol. There are people finding workarounds around those somehow but at that point it's the user who's actively trying to change it so not the ai fault.