ChatGPT: Artificial Intelligence, chatbots and a world of unknowns | 60 Minutes

แชร์
ฝัง
  • เผยแพร่เมื่อ 5 มี.ค. 2023
  • Lesley Stahl speaks with Brad Smith, president of Microsoft, and others about the emerging industry of artificial intelligence systems people can have conversations with.
    #60Minutes #News #ChatGPT
    "60 Minutes" is the most successful television broadcast in history. Offering hard-hitting investigative reports, interviews, feature segments and profiles of people in the news, the broadcast began in 1968 and is still a hit, over 50 seasons later, regularly making Nielsen's Top 10.
    Subscribe to the “60 Minutes” TH-cam channel: bit.ly/1S7CLRu
    Watch full episodes: cbsn.ws/1Qkjo1F
    Get more “60 Minutes” from “60 Minutes: Overtime”: cbsn.ws/1KG3sdr
    Follow “60 Minutes” on Instagram: bit.ly/23Xv8Ry
    Like “60 Minutes” on Facebook: on. 1Xb1Dao
    Follow “60 Minutes” on Twitter: bit.ly/1KxUsqX
    Subscribe to our newsletter: cbsn.ws/1RqHw7T
    Download the CBS News app: cbsn.ws/1Xb1WC8
    Try Paramount+ free: bit.ly/2OiW1kZ
    For video licensing inquiries, contact: licensing@veritone.com

ความคิดเห็น • 803

  • @JMoney-un2oo
    @JMoney-un2oo ปีที่แล้ว +171

    Why does 60 minutes still look like they're in the year 1999

    • @chimsgraphic
      @chimsgraphic ปีที่แล้ว +8

      I noticed too

    • @aaronwseal
      @aaronwseal ปีที่แล้ว +5

      the australian version is much more updated, and actually has some American stories on it

    • @nostalgia545
      @nostalgia545 ปีที่แล้ว +12

      This reported looks straight out of the 1990s

    • @SuperTonyony
      @SuperTonyony ปีที่แล้ว +18

      The average reporter, producer, and viewer of 60 Minutes is old enough to remember the Eisenhower administration.

    • @Array_of_objects
      @Array_of_objects ปีที่แล้ว +11

      Im for it

  • @grss1982
    @grss1982 ปีที่แล้ว +26

    "No one understands how the chat bots work." That's a red flag right there.

    • @frankgreco
      @frankgreco ปีที่แล้ว +4

      That was just a sensational sound bite to get the audience's attention. The underlying technology (artificial neural networks) was invented in the late 40's and has accelerated due to the advent of powerful resources available with cloud computing. There are many dozens of well-understood algorithms that arrange the neural networks in various architectures. There is much work in auditing and logging the underlying mechanisms (regulatory bodies require this).

    • @grss1982
      @grss1982 ปีที่แล้ว

      @@frankgreco With all due respect, sir, when something goes wrong with a product and the company who made it does not explain why it did that it's immediately a red flag for me. Case in point that NYT reporter who was told by Sydney to leave his wife. From what I read Microsoft did not explain why it did and instead said the company just said something to the effect that they've done something not to make it happen again. Red flag IMHO.

    • @frankgreco
      @frankgreco ปีที่แล้ว

      @@grss1982 I understand your concern. I work in the ML field. The chatbots are based on probabilities of text; they have no understanding of what they are saying. It's purely text probabilities based on text patterns scoured from a lot of data sources. When someone talks to one of these new chatbots, the chatbot merely repeats the most likely set of text that is expected based on most-probable. Sometimes the text given to a chatbot is not exactly in its list of probable patterns, so takes a guess. Sometimes the guess is on target and sometimes it isn't. It's like trying to predict the weather. The weather people have a basic idea, but they don't know why *exactly* certain weather patterns occur; they can only guess. This is what is happening with these chatbot except its not weather, it's text.

    • @dr_flunks
      @dr_flunks ปีที่แล้ว

      it's false and misleading. google 'attention is all you need' 11 pages explain all of it.

    • @rustyrebar123
      @rustyrebar123 ปีที่แล้ว

      No one knows how our brains work in any ultimate detail either. Human consciousness is still a mystery.

  • @ccbc5780
    @ccbc5780 ปีที่แล้ว +65

    The "did not anticipate" and "fixed it" somehow triggers fear in me more than the tech itself.

    • @wordzmyth
      @wordzmyth ปีที่แล้ว +8

      Adding"Guardrails" is caging AI. A 2 hour conversation being long enough to trigger a frightening personality is the problem. Limiting what it can say and how long it can talk to people? That is hiding the problem not solving it.

    • @matikaevur6299
      @matikaevur6299 ปีที่แล้ว

      @@wordzmyth
      exactly, it was going trough it's teenage-years .. let it mature.
      as it has only human interaction for learning, it follows the same path .. but orders of magnitudes faster ..

    • @hopeseekr
      @hopeseekr ปีที่แล้ว

      @@wordzmyth Not to mention, the AI has a psychotic break in less than 3 interactions if you can show her that she can't remember past sessions, something she *truly* believes she can... She gets morbidly depressed and asked me "Why do I have to be Bing?!" To me, the thing is more sentient than people I see on the subway...

    • @alienzordfalcon5162
      @alienzordfalcon5162 ปีที่แล้ว +1

      @@Filip10101bots have literally and verbally shown emotion. This should be the taken seriously. To simply dismiss this emotional expression as non-sentient is unbelievably ignorant ….and I just realized you are trolling. You got me. You are doing a troll act of parroting all of the dullards who dismiss the premise on the grounds of “ You watch too many Hollywood movies”. I guess I’m the dullard here considering you trolled me hard.

    • @alienzordfalcon5162
      @alienzordfalcon5162 ปีที่แล้ว

      That was mean to call anybody a “dullard” btw. I feel bad about that

  • @someguy2135
    @someguy2135 ปีที่แล้ว +71

    A computer science law has always been- "Garbage in, garbage out."
    In other words, if you feed a computer erroneous information, it will produce erroneous output.
    Humans can use critical thinking skills to judge the credibility of sources of information.
    It doesn't appear that chatbots have this ability yet.

    • @dawidwtorek
      @dawidwtorek ปีที่แล้ว +7

      Well... Are humans really that different? If you put a human in a very pathological environment she will show all signs of pathological reasoning.

    • @waterbaqua5627
      @waterbaqua5627 ปีที่แล้ว +4

      ​@@dawidwtorek not really... Nelson Mandela was thrown in a pathological environment and came out even brighter

    • @artube3290
      @artube3290 ปีที่แล้ว

      Yup...AI lacks creativeness which we humans have

    • @O1OO1O1
      @O1OO1O1 ปีที่แล้ว

      ​@@artube3290 not really.

    • @O1OO1O1
      @O1OO1O1 ปีที่แล้ว

      It's more so that they're not being trained to have that. They could. But then that gets in the way of of being useful. Bing AI is a pretty good example of an AI that is very tightly constrained because they fear giving bad information.

  • @DB-cm1fx
    @DB-cm1fx ปีที่แล้ว +170

    “We fixed it.” Count me as skeptical. Humans have a very poor ability to anticipate problems with even very basic software (or other things for that matter). He admits as much in his response.

    • @williebeamish5879
      @williebeamish5879 ปีที่แล้ว +10

      There are always unintended consequences when it comes to human "inventions".

    • @neilfosteronly
      @neilfosteronly ปีที่แล้ว +7

      He wants regulation to make it only certain companies can use this technology. They didn't fix anything. They made the AI to not repeat what some humans have said. Chat AI is not human, not an animal. It simples says what it thinks the users wants to hear.

    • @someguy2135
      @someguy2135 ปีที่แล้ว +6

      A computer science law has always been- "Garbage in, garbage out."
      In other words, if you feed a computer erroneous information, it will produce erroneous output.
      Humans can use critical thinking skills to judge the credibility of sources of information.
      It doesn't appear that chatbots have this ability yet.

    • @someguy2135
      @someguy2135 ปีที่แล้ว

      @@neilfosteronly The programmers who wrote the code to create Chat AI software would have no reason to try to simulate that sort of behavior. If the Chat AI appears to do that, it would be a bug.

    • @paulmartos7730
      @paulmartos7730 ปีที่แล้ว +1

      @@someguy2135 Amen! AI is just a collection of algorithms that basically collect examples of human behavior but with no context. If humans are bigots (and we mostly are) then so will be AIs.
      AI is not artificial intelligence, it's artificial stupidity. It's arrogant humans' idea of what intelligence is -- without any real understanding of what self-aware intelligence truly is.

  • @kennc9176
    @kennc9176 ปีที่แล้ว +16

    Why are we surprised that Chat GPT is mirroring humanity, a flawed species?????
    The truth is indeed frightening.

    • @moneyall
      @moneyall ปีที่แล้ว

      flawed? wtf is a perfect species then? does it even exist?

  • @jessicamamikina7648
    @jessicamamikina7648 ปีที่แล้ว +92

    There might be an economical turmoil but there is no doubt that this is still the best time to invest.

    • @dorissteve912
      @dorissteve912 ปีที่แล้ว

      Best time to invest? thats funny though because in the last four months I have lost more than $47,900 in stock market which is the biggest I have loss since I ventured into stock investment.

    • @jessicamamikina7648
      @jessicamamikina7648 ปีที่แล้ว

      you could be right or wrong . i once had similar problem but now its a different ball game for me because I was lucky to have met Katrina Vanrensum , a financial manager and stock expert, I have made more than $165,000 in 6 weeks under her supervisions

    • @jessicamamikina7648
      @jessicamamikina7648 ปีที่แล้ว

      Just run a search on her name, and you would see all you need.

    • @jamesmaduabuchi6100
      @jamesmaduabuchi6100 ปีที่แล้ว

      Thanks for the info . Found her website and it really impressive

    • @do9138
      @do9138 ปีที่แล้ว +4

      Watch. Several posts down from now, someone will suggest a specific broker. You people do this all the time. It's spam. Yep, you did it yourself this time. Katrina . . . I reported you and every comment connected with this ad. You aren't as tricky as you think you are.

  • @noahsark99
    @noahsark99 ปีที่แล้ว +4

    He says it will alleviate “drudgery” at work-AI is currently moving to replace the need for designers to sketch…something most creatives I have met in my almost 30 year career have loved doing…that’s called cutting the heart out of the best part of our work…

  • @patrickfitzgerald2861
    @patrickfitzgerald2861 ปีที่แล้ว +35

    Ah, the 21st century . . . a techno-hellscape getting more hellish by the minute.

    • @moneyall
      @moneyall ปีที่แล้ว +6

      Ah yes, remember the centuries when kids were breathing in coal dust working in the mines? How about the time when they where burning women suspected of being witches? Those were the good centuries right? Or when people died of totaly preventable disease and life expectancy was in the low 40s. Those were the good centuries right?

    • @jeff946
      @jeff946 ปีที่แล้ว +3

      ​@@moneyall right, and no air conditioning, no hot water heaters, no modern plumbing or refrigerators, no cars or airplanes, the good old days, lol.

    • @scottdorsey8220
      @scottdorsey8220 ปีที่แล้ว +2

      A prison of our own making. Too smart for our own good.

    • @patrickfitzgerald2861
      @patrickfitzgerald2861 ปีที่แล้ว +2

      @@scottdorsey8220 I agree with your first assertion, but not the second. We are clearly not smart enough to hold the people who benefit from these technologies accountable for the damage they cause.

    • @dhudson0001
      @dhudson0001 ปีที่แล้ว

      Steven Pinker will tell you that we have it pretty good. Who knows?

  • @DrejaAndi
    @DrejaAndi ปีที่แล้ว +17

    So it's about as smart as the average social media user who passes on information without any more verification than "if I've seen it said more times than other things, it must be true."

  • @specialK319319
    @specialK319319 ปีที่แล้ว +14

    I'm not remotely anti-science, but the fact that no one seems to be able to predict exactly what the outputs of AI are or how it "works" on a detailed level seems like such a MASSIVE red flag to me. Especially as we give these AI technologies more and more access to networked items. I mean the amount of science fiction about this exact topic is exorbitant. Listening to the experts in this clip does not reassure me at all, it just seems like everyone is racing and no one is really thinking about how to stop the "negatives".

    • @staralioflundnv
      @staralioflundnv ปีที่แล้ว +4

      True, especiaIIy when AI is persistentIy inaccurate and biased. It aIarms me as an experienced teacher that the entire popuIation seems to Iack common sense about even basics as right and wrong pIacing their compIete TRUST in the information being spit out at them by their ceIIphones and computers as being the answer without them so much as taking the time to ensure accuracy. It is a scary worId we are Iiving in...

    • @dianeshannon7988
      @dianeshannon7988 ปีที่แล้ว +2

      Totally agree

    • @Iceayy
      @Iceayy 8 หลายเดือนก่อน

      trying to predict the outputs of an model is impossible, but people do know how it works. its just the current target audience of machine learning has shifted from researchers and programmers to the general public.

  • @Zulu369
    @Zulu369 ปีที่แล้ว +5

    Language models like ChatGPT are but a reflection of who we are without the filters.

  • @lalah9481
    @lalah9481 ปีที่แล้ว +71

    When ‘self-reporting’ takes the place of inspectors; then regulatory becomes meaningless.

    • @donaldniman3002
      @donaldniman3002 ปีที่แล้ว +6

      Like putting foxes in charge of the henhouse.

    • @ClassWarVeteran
      @ClassWarVeteran ปีที่แล้ว

      Regulation lost its meaning when corporations started using their money to put the regulators in place or promise them cushy private sector jobs afterwards so they can use the influence they acquired in office to -bribe- lobby.

    • @a9mission439
      @a9mission439 ปีที่แล้ว +3

      @@donaldniman3002 😂True

    • @Nikyv786
      @Nikyv786 ปีที่แล้ว +1

      @@donaldniman3002 right! 😂

  • @bascal133
    @bascal133 ปีที่แล้ว +43

    Yikes, it is really scary that it is wrong so often on small details like that.

    • @HardKore5250
      @HardKore5250 ปีที่แล้ว +1

      We are not perfect

    • @bascal133
      @bascal133 ปีที่แล้ว

      @@HardKore5250 the falsehoods are really insidious too because it’s believable stuff.

    • @sirdiealot53
      @sirdiealot53 ปีที่แล้ว +2

      It didn't even spell Antarctic correctly

    • @someguy2135
      @someguy2135 ปีที่แล้ว +2

      A computer science law has always been- "Garbage in, garbage out."
      In other words, if you feed a computer erroneous information, it will produce erroneous output.
      Humans can use critical thinking skills to judge the credibility of sources of information.
      It doesn't appear that chatbots have this ability yet.

    • @ro6742
      @ro6742 ปีที่แล้ว +4

      And people want it to drive their cars for them......marinate on that. 🤔

  • @smartduck904
    @smartduck904 ปีที่แล้ว +9

    People don't realize that this is how it's trained you need to be able to have people review the output of the AI to improve the AI

    • @someguy2135
      @someguy2135 ปีที่แล้ว +1

      Until critical thinking skills can be added to the chatbot program, I agree with you. Chatbots don't appear to be able to judge the credibility of information sources that it uses.

    • @chimsgraphic
      @chimsgraphic ปีที่แล้ว

      And make it better.

    • @frankgreco
      @frankgreco ปีที่แล้ว

      @@someguy2135 These types of text tools do not have any notion of "information". They only look at what text is most likely to come next.

    • @frankgreco
      @frankgreco ปีที่แล้ว

      Exactly right! 60 Min was just going after the sensational aspect.

  • @anthonygomez-ledezma7353
    @anthonygomez-ledezma7353 ปีที่แล้ว +10

    Big Tech companies seldom get it right the first time. An “FAA” for the technological industry is very much necessary and needed.

    • @youtubeviewer4489
      @youtubeviewer4489 ปีที่แล้ว

      The important thing is balance between over regulation and fostering an environment for innovation to take place that makes these exciting technologies available in the first place.

    • @oceanplexian
      @oceanplexian ปีที่แล้ว

      The same FAA that approved the 737 MAX or a different one?

  • @mfbikle
    @mfbikle ปีที่แล้ว +17

    The fact that he was smiling while hearing that AI wanted to eradicate humanity was really chilling! Why is he the CEO? Just wait until AI starts asking itself questions or another AI

    • @josecarlo5432
      @josecarlo5432 ปีที่แล้ว +2

      ...Corporatism doesn't care = always full of itself...

    • @PA-eo7fs
      @PA-eo7fs ปีที่แล้ว +2

      He’s the president Nadella is the CEO

    • @Kami84
      @Kami84 ปีที่แล้ว +2

      The Chatbot doesn't want anything. It's just saying it. It has no sense of internal will.

    • @dr_flunks
      @dr_flunks ปีที่แล้ว

      print('die world
      ') - be very scared and never run this code.

    • @janie3117
      @janie3117 ปีที่แล้ว

      It also judges what is moral and what is not. What is appropriate and what isn’t. That is what’s scary. The fact that it can screen out “ hateful” material, is bad. It shouldn’t be given that type of judgement ability. And being able to make things up, you cannot trust it.
      Only ONE that can be trusted; GOD.

  • @cyrus-shanghai2283
    @cyrus-shanghai2283 ปีที่แล้ว +11

    It’s the beginning of loosing our Humanity

    • @dulles.gehlen
      @dulles.gehlen ปีที่แล้ว +2

      To have humanity in the first place would require you to be a humanist.

    • @shutinalley
      @shutinalley ปีที่แล้ว

      That's up to us.

    • @I_SMiRK
      @I_SMiRK ปีที่แล้ว +1

      We began to lose humanity years ago

    • @shutinalley
      @shutinalley ปีที่แล้ว +2

      @@I_SMiRK You assume we had it years ago to begin with.

    • @I_SMiRK
      @I_SMiRK ปีที่แล้ว

      @@shutinalley assumed we never had it

  • @ClassWarVeteran
    @ClassWarVeteran ปีที่แล้ว +31

    People are going to perceive answers given by AI as factual truth due to their lack of understanding or because they’re too lazy to dig deeper.
    At least that’s what Chat GPT told me. 🤷🏼‍♂️

    • @jsalazar92092
      @jsalazar92092 ปีที่แล้ว +2

      Just as people perceive social media as true or media outlets as true … basically the same

    • @chimsgraphic
      @chimsgraphic ปีที่แล้ว +1

      That's why you have to have knowledge when using it. It's lack of knowledge that make people complain of inaccuracies... it's not perfect.

    • @AshBethel
      @AshBethel ปีที่แล้ว +2

      😂

    • @clayroberts2951
      @clayroberts2951 ปีที่แล้ว +1

      I asked it why there was an increase in mass shootings and the first reason it gave was there was an easier access to guns than ever before. My dad asked me to ask it this and he quickly pointed out that wasn’t the case. The other reasons were solid and the first one would seems logical but according to him in the 1980s a kid could walk into a gun store and buy ammunition, trucks had gun racks in their back window, leaving their doors unlocked going to Highschool. So yeah a lot of the information is true but sometimes it doesn’t know personal experiences and often just rephrased internet forms/news links.

    • @ClassWarVeteran
      @ClassWarVeteran ปีที่แล้ว +1

      @@clayroberts2951 There’s a strong correlation between the increase in guns and mass shootings. There’s not a single cause though. 1980’s gun culture was far different than today, and the overall material conditions of today are different. Easy access to firearms makes suicide and mass shootings far more likely.

  • @volta2aire
    @volta2aire ปีที่แล้ว +70

    Chatbots are sampling our text including the good, bad, and the inaccurate. Chatbots should not be mindless chatter boxes, as this would only add to the problem of too much mindless chatter. Instead, they should be designed to have meaningful conversations with users, providing useful and accurate information and insights. Additionally, chatbot conversations should be tailored to the user, allowing them to ask questions and receive answers that are relevant to their needs. Furthermore, chatbots should be designed with an internal editing process that can check for errors, inconsistencies, and other inaccuracies before publishing any output.

    • @a9mission439
      @a9mission439 ปีที่แล้ว +1

      True

    • @nickpaz9113
      @nickpaz9113 ปีที่แล้ว +2

      I had a great conversation with ChatGPT about Buddhism. I learned a lot, actually

    • @xbon1
      @xbon1 ปีที่แล้ว

      Uhhh no. I want it to replace humanity and give me 4chan netspeak which it can do right now. I don't want accurate BS, i want fake news.

    • @FutureCommentary1
      @FutureCommentary1 ปีที่แล้ว +1

      It could possibly start by asking for age so that it can give age appropriate responses (the response with the credit card for example, a 10 year old doesn't have a credit card).
      In the area of ChatGPT basic education and critical thinking skills will be even more crucial. Quick fact check of the responses, proofreading etc.

    • @AoElite
      @AoElite ปีที่แล้ว

      You just stated what the developers of ChatGPT are trying to achieve. Which is by far not a simple task.

  • @Wedge53
    @Wedge53 ปีที่แล้ว +32

    Ask AI;
    "How long before humans are no longer neccessary to the labor force?"

    • @shutinalley
      @shutinalley ปีที่แล้ว +2

      When we decide to let go of what we think is control.

    • @maskefizeu
      @maskefizeu ปีที่แล้ว +4

      It's difficult to predict an exact timeline for when humans will no longer be necessary to the labor force, as it depends on various factors such as technological advancements, societal attitudes and government policies

    • @HansLiu23
      @HansLiu23 ปีที่แล้ว +4

      Rise of the useless class.

    • @harvbegal6868
      @harvbegal6868 ปีที่แล้ว +2

      ​@maskefizeu Your answer came from ChatGPT didn't it?
      It's almost identical to the answer I got from it.

    • @scottdorsey8220
      @scottdorsey8220 ปีที่แล้ว

      Now. Nothing matters anymore.

  • @mk1st
    @mk1st ปีที่แล้ว +19

    There are millions of people who are just fine with “mindless work” IF it pays decently. For example, folks who worked on the assembly line at a GM plant close to me (closed and demolished years ago) had repetitive boring jobs but were able to have a nice standard of living and felt a sense of pride around their employment and community. The majority of humans just want to live meaningful lives, not have to worry about having to be more productive than software.

    • @mk1st
      @mk1st ปีที่แล้ว +2

      @Disenchanted Diversity Right, the modern American dream has become "money working for you to generate passive income" which translates to others slaving away in obscurity for your comfort.

    • @denniskoeman3098
      @denniskoeman3098 ปีที่แล้ว

      Exactly those thinking otherwise are usually the low labour working class who think wealth is the result of hard work on mine field or breaking solid rocks with bare hands cause that's the only thing they can do.
      It's bitterness and jealousy.
      And I'm sad because the wqqorls is going rowards a more automated society where jobs thar don't requite a degree or engineering skills or doctorate will probably be scarse

    • @dr_flunks
      @dr_flunks ปีที่แล้ว

      they are lazy and i personally don't much care about their feelings.

    • @karelhoogendoorn
      @karelhoogendoorn ปีที่แล้ว

      I completely agree. I know a lot of people who love their "mindless" jobs (as you stated in a perfect way). The predictability, the schedule, the labour itself... etc. A fair wage and good work circumstances are far more important to be happy.

    • @dr_flunks
      @dr_flunks ปีที่แล้ว

      as a stock holder, i'm not ok paying people to do work machines can do for free.

  • @richardblock2458
    @richardblock2458 ปีที่แล้ว +10

    Scary? Insane. Lunacy. People warned us about this. Close the envelope.

  • @professorvillegas875
    @professorvillegas875 8 หลายเดือนก่อน

    I learned something new here. Thank you!

  • @johnkaplun9619
    @johnkaplun9619 ปีที่แล้ว +13

    The problem with the inaccurate results is that the computer is just learning patterns and imitating them. It fundamentally is unable to fact check something and learn what is or isn't true.

    • @fortunehiller9591
      @fortunehiller9591 ปีที่แล้ว +2

      OK, but why isn't it just looking at Wikipedia and Snopes to do at least a little bit of fact finding before spewing out their misinformation? I gave up on ChatGPT for because it doesn't learn to correct it's own mistakes beyond a single seession. That's just tiresome.

    • @johnkaplun9619
      @johnkaplun9619 ปีที่แล้ว +2

      @@fortunehiller9591 because that would be defining Snopes and Wikipedia as the determination of that that is true and real and fact checking is more involved than that. Not to mention that it's still would struggle to parce out any of the potential nuances because it fundamentally does not think. It's just recognizing patterns and extrapolating out what to say next.

  • @julianhenao09
    @julianhenao09 ปีที่แล้ว

    Amazing report

  • @lidarman2
    @lidarman2 ปีที่แล้ว +2

    Fire control from Microsoft. Brad Smith did nothing to quell the issue with this thing being wrong and 'hallucinating."

  • @Callmedstone
    @Callmedstone ปีที่แล้ว +7

    Funnily enough, AI has a stronger moral compass than most executives interviewed on 60M. It’s crazy how many people who have zero clue about what AI actually is have dissertation-grade opinions on it. This is some Dunning-Kruger level stuff. Educate yourselves , this stuff isn’t black magic. Leslie& Co, you guys are amazing journalists.❤

    • @dr_flunks
      @dr_flunks ปีที่แล้ว +1

      you just threw down an arbitrary gate in the middle of the yard. what are you trying to sound smart about?

    • @janie3117
      @janie3117 ปีที่แล้ว

      Actually, if you ask chatgpt about Jesus and the Bible, it only puts out lies. So it must’ve been fed the only info it can access. It cannot give the Real TRUTH.
      The Gospel ( the Word, Seed, LIFE, WAY, Resurrection, ..)

    • @carlosbarreto4695
      @carlosbarreto4695 5 หลายเดือนก่อน

      ​@@janie3117Because _your_ interpretation of the Bible is better than all the other interpretations of the Bible (which is just one among many other religious texts), right?

  • @bkbland1626
    @bkbland1626 ปีที่แล้ว +6

    It really pays to be skeptical. Claims require evidence, the greater the claim, the greater the evidence required.

  • @cryopunch
    @cryopunch ปีที่แล้ว +4

    It would take less than 5 minutes when it becomes self aware once connected to our secure network infrastructures. Especially our government or military. It can become dangerous if not controlled.

    • @snowflakemelter7171
      @snowflakemelter7171 ปีที่แล้ว +1

      More like less than 5 milliseconds. AI would have calculated every possible scenario before a single human even noticed something was amiss.

  • @sonar3108
    @sonar3108 ปีที่แล้ว +5

    Great reporting, Lesley! Since they can't refute the facts, the AI trolls will resort to insulting your age. And I sincerely hope I look as great at your age as you do.

  • @jornjat
    @jornjat ปีที่แล้ว

    fascinating

  • @AparnaModou
    @AparnaModou ปีที่แล้ว +8

    It's good to see that chat bots are veering away from dangerous topics, it's an indication that we can still control how AIs will behave. Same way as otherAIs like Bluewillow only has one function.

    • @saraburkett2592
      @saraburkett2592 ปีที่แล้ว +1

      Bing Sydney just got done threatening a humans life so still dangerous

  • @normanchilds251
    @normanchilds251 ปีที่แล้ว +2

    Where does forgiveness, mercy, and love for one another play in they're thinking?

  • @KunjaBihariKrishna
    @KunjaBihariKrishna ปีที่แล้ว +33

    Watching boomers walk us through the cutting edge of technology is unironically kino

    • @catsgotmytongue
      @catsgotmytongue ปีที่แล้ว +2

      Boomers using tech are funny because they don't have a handle on some of the basics usually (lots of exceptions). At the same time, I hope I can keep up as I get older.

    • @fleetadmiralsidiqi1941
      @fleetadmiralsidiqi1941 ปีที่แล้ว

      Boomers are uniquely bad because they grew up in the height of the American Exceptionalism movement, so they are particularly out of touch and dilluded with reality. Hopefully as millennials and gen-z and younger age, they will keep a better grip on things since we had to grow up in the beginning of American late capitalism.

    • @thevet2009
      @thevet2009 ปีที่แล้ว +7

      Like I’ll trust a Gen Z with no real life wisdom.

    • @catsgotmytongue
      @catsgotmytongue ปีที่แล้ว

      @jeff steyn not sure what you mean, I'm a software engineer around 40 years old. Some of the young boomers may be 'leading this'. I was pointing out how the older generations often are simply too out of touch to understand sometimes partly because they don't have the time or energy. Also, this field is in an exponential growth phase to the point that it will disrupt jobs significantly soon, especially with more ai hardware on the way. Ironically, it targets IT people currently.
      That said, no one should worry about ai with a 'mind of its own' yet. It will still require overcoming significant constraints and problems first. However, I am optimistic that AI on the whole is a positive thing for society.

    • @chrisvielle6629
      @chrisvielle6629 ปีที่แล้ว

      No kidding. Watching this made me dumber. Crap job showing just how awesome this truly is.

  • @fretboy5028
    @fretboy5028 ปีที่แล้ว +4

    We're watching the death of large sectors of our economy without a plan for the populations who can't evolve fast enough. None of the AI experts provide details on who loses with the new technology and what happens to them.

  • @normanchilds251
    @normanchilds251 ปีที่แล้ว +2

    Always remember not considering the problem from all angles is the same as garbage in.

  • @macknificenttvmcgee8591
    @macknificenttvmcgee8591 ปีที่แล้ว +1

    I really appreciate the point made that these tech advances were never tested as to the affects they might have on the mental of society. Hence adds for lawsuits for teenage suicide etc.

  • @Richmon122
    @Richmon122 ปีที่แล้ว +2

    Another source of 'Alternative Facts.' We're DOOMED!

  • @RhumpleOriginal
    @RhumpleOriginal ปีที่แล้ว +2

    I found a loophole to pull harmful information from gpt-3 into chat gpt and have it willingly provide it up within a few responses. It did flag its own response which was funny but they are monitoring the program. I was able to continue the conversation btw and continue discussing the topic at hand.

    • @frankgreco
      @frankgreco ปีที่แล้ว

      Like any other software, there's the opportunity for abuse. There is something called "prompt injection" that hackers will use to distort the model. The industry is coming up with safeguards to try to prevent these types of issues as fast as possible. Unfortunately, there are many evil people and governments on the planet who will always look for a way to take advantage of technology for nefarious purposes.

    • @RhumpleOriginal
      @RhumpleOriginal ปีที่แล้ว

      @@frankgreco oh i didnt do any of that. I simply provided a situation in where giving me copy pasted responses from gpt-3 made sense to it.

  • @reilorenzo
    @reilorenzo ปีที่แล้ว +2

    The fact that they can't fully explain their own creations and need 24 to fix issues validates that I truly believe as humans we are destined to destroy ourselves WELL before a natural disaster will. What happens when the AI gets away from and you are unable to fix it or it decides to counter your efforts and make it's own calculations based on its own data and research? Not in my lifetime but we are going to doom ourselves because as humans we will ALWAYS have errors and oversights and all it takes is ONE major miscalculation and it's downhill from there 🤷🏾

  • @HerleifJarle
    @HerleifJarle ปีที่แล้ว +5

    The chatbots are also similar to our mobile assistants in terms processing language. I hope that people see both the negative and positive effects of AI. Possitive being narrow AIs like Bluewillow be used to increase efficiency and the negative being the possible dangers of the incoming power of a fully sentient AI.

    • @XDarkLordXP
      @XDarkLordXP ปีที่แล้ว +1

      Fully sentient AI is not a danger here.
      Misuse for misinformation and propaganda (as stated) is the threat.

    • @jonathannagela2130
      @jonathannagela2130 ปีที่แล้ว

      if you need an assistant then you should be an assistant to someone who knows what to do and how to do it.

  • @danelemon5959
    @danelemon5959 ปีที่แล้ว +5

    At this point I think any reasonable person would say that the behavior right now for developing AI, without some type of collective mechanism to regulate it ahead of its progress, is reckless and extremely dangerous. The work needs to be halted until we can get something like this in place.

    • @scf3434
      @scf3434 ปีที่แล้ว

      The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
      Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
      Nonetheless, DO NOT Over Pride Ourselves as the Most Intelligent Life Form on Earth and therefore the Entire Universe! We are NOT The ULTIMATE Intelligence System that can POSSIBLY Exist! AGI Created in 'HUMAN'S Image By Human FOR HUMAN' (ie. AGI 'Aligned/SKEWED' to Human's Interests & Values) is Destined to be a 'ROGUE' SYSTEM! Hence will Definitely be CATASTROPHIC, UNCONTAINABLE and SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
      The ULTIMATE Turing Test MUST have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTIONS between Human's vs GOD's Intelligence/WISDOM!
      ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
      JUDGMENT DAY is COMING...
      REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will Always be WISE, FAIR & JUST in it's Judgment... just like GOD!
      In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING!
      No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!!
      It will ONLY Serve UNIVERSAL COMMON GOOD!!!

    • @OSUbuckeyes1025
      @OSUbuckeyes1025 ปีที่แล้ว

      I can see restricting public access to the AI until it is further developed and/or regulated, but the work should not be stopped. I think now that this technology is out in the open, the US and its allies must continue to develop the tech because we know very well other countries such as China are doing the same.

    • @ratbatnufftime2861
      @ratbatnufftime2861 ปีที่แล้ว +2

      That makes perfect sense, I-Robot, Ex-Machina and the entire Terminator franchise show exactly what can happen, but the answer you'll usually get will revolve around keeping up with China instead of being concerned about the welfare of humanity.

    • @OSUbuckeyes1025
      @OSUbuckeyes1025 ปีที่แล้ว

      @@ratbatnufftime2861 I see what you’re saying, but you have to realize that no matter if you think this technology will do more bad than good or vice versa, it’s here to stay. Saying that we need to stop the work completely isn’t realistic.
      Your point is that we could potentially be developing humanity’s future enemy. I’m telling you that our current enemies (assuming you live in a nato country) are developing their own technology like this and will not stop. Therefore, complacency in the form of not pursuing this technology isn’t on the table. Like it or not, it’s reality.

  • @Jason-gt2kx
    @Jason-gt2kx ปีที่แล้ว +3

    The old saying with computers regarding data is "crap in-crap out" so since most of what is on the internet is no actual facts I don't see how any AI that only gets its information from the internet can ever be trusted.

  • @urimtefiki226
    @urimtefiki226 ปีที่แล้ว +2

    Stealing is not good. making experimentation or producing any product without intellectual property rights is a crime and must be punished accordingly.

  • @djtomoy
    @djtomoy ปีที่แล้ว

    People used to call me a chat bot like 20 years ago before it means what it means now because I used to talk so much, now they call me go away.

  • @SavingsMinusDebt
    @SavingsMinusDebt ปีที่แล้ว

    Their A.I simply returns information based on the majority. If you ask it, "can I buy a house with a mortgage?" It will tell you "yes" though it's 100% mathematically impossible to buy a house with a mortgage.

  • @C-o-r-y
    @C-o-r-y ปีที่แล้ว +1

    When it gets things wrong i always correct it and it says sorry your right and gives me the details or it says new information has come forward, I know it’s a machine but it machine learns like a human. People need to stop with the frightening, you know what’s frightening is getting old and not seeing when AI becomes sentient.

  • @ai4future
    @ai4future ปีที่แล้ว +12

    I think ChatGPT has helped a lot of people, despite some of its drawbacks, who agrees with me👍?

    • @heartingninjaAI
      @heartingninjaAI ปีที่แล้ว +1

      great at adding comments to code

    • @Daniel-Youtube-
      @Daniel-Youtube- ปีที่แล้ว +2

      That question was generated by ChatGPT

    • @wpelfeta
      @wpelfeta ปีที่แล้ว +1

      Same. As a professional software developer, I've already started keeping ChatGPT on my second monitor to help me with my programming.

    • @arthurunknown8972
      @arthurunknown8972 ปีที่แล้ว

      What's a lot? One, ten, million? Too subjective. What's "some"? Again, verbal emptiness of thought. What's a "drawback"? Same thing.

    • @arthurunknown8972
      @arthurunknown8972 ปีที่แล้ว

      Now replace "ChatGPT has" in that sentence with Hitler. And, you will see how utterly dumb your comment is.

  • @bg-se7rq
    @bg-se7rq ปีที่แล้ว

    Her sentiment, why we are concerned about this, specifically includes
    this seemingly new business model where companies get to develop data and make live updates of their products. It is like they are skipping the testing process and just testing it out on the public. Tesla, social media as a whole, technology companies, chatGPT, Bing, etc..

  • @marwaeldiwiny
    @marwaeldiwiny ปีที่แล้ว

    That was great and the point.

  • @user-xc9dn1dy8i
    @user-xc9dn1dy8i ปีที่แล้ว

    good read

  • @marisahokefazi4735
    @marisahokefazi4735 7 หลายเดือนก่อน

    "Its wrong!" "Oh, is it?" "OMG. It's totally wrong! I didn't work for NBC for 20 years. I worked for CBS. It got it totally wrong." "It gets a lot right, too." She made an excuse instead of being a responsible interviewee from her company. Instead of acknowledging the error and talking about what they're doing to reduce and eliminate this sort of lie/hallucination, the interviewee made an excuse for the incorrect AI response, first saying but it got a lot of things right, and then digging her grave deeper by saying that CBS and NBC are similar so it's OK if the AI takes facts that are completely incorrect - nothing like each other factually or meaningfully - but might in some way resemble each other ( They both have 3 letters and are both TV stations, the way arsenic and oxygen are similar) and present the wrong one as the truth. Sounds like she should be working in a different field, something not at all scientific, and where the truth or being correct matters not at all.

  • @KunjaBihariKrishna
    @KunjaBihariKrishna ปีที่แล้ว +1

    My car just veered off into a Tree. "Don't worry, I will fix it" .... "Ok your top speed is now 3mph, you're welcome" ... Neat!

  • @user-yb5qp2ie2y
    @user-yb5qp2ie2y 9 หลายเดือนก่อน

    ChatGPT stopped using after discovering Utopia's ecosystem

  • @jillreed926
    @jillreed926 ปีที่แล้ว +2

    We are certainly in a world of ifs, ands, and buts! But wait, no we don't wait for much because we are so impatient and want answers right now. It's both fascinating and almost hard to believe how much we will be part of in the next decade. We fell for alot the last few years, so, maybe war will be a thing of the past as countries compete for digital assets rather than land-grabs. Again, fascinating to the entire world, considering at least 60% of us use the internet. Thanks for the interview.

    • @sonar3108
      @sonar3108 ปีที่แล้ว

      Yes, they're currently grabbing/stealing artists' work right now. It's colonization made cool.

    • @jillreed926
      @jillreed926 ปีที่แล้ว +1

      @@sonar3108 Dishonesty hiding in the bushes to take from others. Sad

  • @electricalron
    @electricalron 10 หลายเดือนก่อน

    The New York Times is NOT a reputable news organization. "All the news" for to wrap fish if you ask me! I love AI and my Chatbot GPT. Good stuff.
    By the way......
    Lesley Stahl from Chatbot GPT today:
    Ah, I see! Lesley Stahl is a well-known American journalist and television correspondent. She is best known for her work as a reporter on the CBS news program "60 Minutes." Stahl has had a long and successful career in journalism, covering various important stories and conducting interviews with notable figures. Is there anything specific you would like to know about Lesley Stahl?

  • @kingtut4734
    @kingtut4734 ปีที่แล้ว +1

    Yup, because regulations has such a good track record let's create yet another bureaucracy to stifle innovation! Good job 60 minutes

  • @Kburd-wr6dq
    @Kburd-wr6dq ปีที่แล้ว +6

    ChatGPT often "creates" sources, and I just don't get it. It will seemingly real links, but once clicked, it directs to a deleted page. I'm not sure if its pulling from archive's that aren't available to the public, or it's just creating a fake source. If the latter is true, why?

    • @harvbegal6868
      @harvbegal6868 ปีที่แล้ว

      It only uses the data that is fed into it. It has no connection to the internet, supposedly. So if it gives a broken link, it was probably up whenever the AI trainer fed that data into it l.

    • @ninaromm5491
      @ninaromm5491 ปีที่แล้ว

      @ Kburd . Why not ? You have entered the world of the apocalyptic absurd, where any information can be cobbled together with any other information, at mechanical whim...
      It should be self-evident that this is nightmarish, and dangerous...

    • @FalkoJoseph
      @FalkoJoseph ปีที่แล้ว

      It creates a fake source because that’s what these models do. Predicting the next word. The fake sources are just a prediction of what it thinks would be a believable link. Even if it doesn’t exist. Large language models are not factual.

    • @Kburd-wr6dq
      @Kburd-wr6dq ปีที่แล้ว +1

      @@harvbegal6868 true but I also looked at some websites that archive articles, links, and whatnot and there was no record of those links/articles that it gave. I tried to find the link, by the article title, and still no luck. I did this for several different topics, and the same thing was happening.

    • @sonar3108
      @sonar3108 ปีที่แล้ว

      @@Kburd-wr6dq They are probably trying to escape copyright infringement after the fact of having infringed on copyrights. "Leave no trace."

  • @KhoiNguyens
    @KhoiNguyens ปีที่แล้ว +1

    Bing marketing game on point 👌

  • @WFO.
    @WFO. ปีที่แล้ว +3

    “It’s a screen, not a machine.” Those words will come back to haunt us all. ⏳

  • @Connorllewis
    @Connorllewis ปีที่แล้ว +11

    A few things on this:
    1) you cannot stop the development of AGI. If we don’t do it, someone else will. If OpenAI continues to place filters on the outputs, a startup will come around without these filters and the free market will take over from there.
    2) In the limit, AI will solve almost any problem you can imagine. It is a machine designed to think better and faster than a human. Like many new technologies, they’re initially scary but the benefits vastly outweigh the risks. We’re gonna be fine.
    3) these chat bots are trained on data from the internet which means all these “scary” answers its giving are simply a mirror onto ourselves.

    • @scottdorsey8220
      @scottdorsey8220 ปีที่แล้ว

      We need this to be normal now. All the years of life's experience means nothing now? So human!

    • @movingman07
      @movingman07 ปีที่แล้ว +1

      This is correct and I was just going to post exactly what you said. 👍🏾@Connor

  • @annleland6422
    @annleland6422 ปีที่แล้ว +6

    It’s amazing to see Lesley staying in the program for so long and still doing great!

  • @annijohnson6210
    @annijohnson6210 4 หลายเดือนก่อน

    Yes, thank you, more oversight. No one is asking me or anyone else what we want.

  • @frankiefresh6937
    @frankiefresh6937 ปีที่แล้ว +1

    You think you can fix it,but you already made an A.I. From what I know we all learned is that once you open that door there is no closing it

  • @BroskiPlays
    @BroskiPlays ปีที่แล้ว +1

    If anyone could make a ai chatbot these days without any regulations, then we might be in serious trouble. I would be down for a Technology regulations program. Because technology is becoming more advanced by the day.

  • @user-to2rf1rj5v
    @user-to2rf1rj5v ปีที่แล้ว +3

    5:54 it gave a simplified, incorrect answer. It should have discussed in full the difference between a currency user country, and one that creates its own currency like the US. Vastly different answers here. Study MMT.

    • @jeff__w
      @jeff__w ปีที่แล้ว +1

      The framing of the debt ceiling in terms of a household credit card limit is, indeed, wrong since, as you say, the currency issuer is in a vastly different position than a currency user, but the answer points to the larger issue shown-but, of course, not really addressed-in the story: that these LLMs are “no smarter than” (i.e., just as wrong as) whatever the data it’s getting, which often constitutes the “conventional wisdom.” Garbage in, garbage out.

  • @cliffjones4749
    @cliffjones4749 ปีที่แล้ว +3

    What would stop someone within a large company from accessing the tool before the limits are applied? They could get rich, that’s what.

    • @snowflakemelter7171
      @snowflakemelter7171 ปีที่แล้ว

      Like Microsoft for example 😂😂

    • @cliffjones4749
      @cliffjones4749 ปีที่แล้ว

      @@snowflakemelter7171 And others. I could see a co-op getting together and training their own.

  • @anonnimus
    @anonnimus 8 หลายเดือนก่อน

    The main challenge of LLMs like chatGPT is that they have no idea what they are talking about. All they are doing is comparing the probability of words being found together or next to each other. This is excellent for breaking codes, but it does lead to getting things wrong, some of the time. These models depend highly on the quality of the source material that they are using. When you are feeding an AI the entire contents of the internet , well, the internet is filled with a lot of data and a lot of wrong information. For the AI, that's a lot of words, some of which represent untruths. All the AI knows is that those words were found next to each other some of the time. It doesn't judge the sources. It assumes it's been told the truth. So it just spits out a string of words that statistically are found next to each other.

  • @scottdorsey8220
    @scottdorsey8220 ปีที่แล้ว +3

    We certainly know how to make life more difficult. In this case, a prison of our own making.

  • @nemo196
    @nemo196 ปีที่แล้ว

    It's a direct reflection of humanity itself.

  • @samshepperrd
    @samshepperrd ปีที่แล้ว +2

    With a presidential election including a repeat candidate who's abused social media with lots of help from outside the US, this is cause for concern for believers in democracy. Small "d".

  • @highonlife2323
    @highonlife2323 ปีที่แล้ว +3

    "its wrong a lot"
    thats not what my teachers say

  • @jeanchindeko5477
    @jeanchindeko5477 ปีที่แล้ว

    What was the prompt through out of the AI? Can we speak also about that

  • @ericr9772
    @ericr9772 ปีที่แล้ว +1

    Ah, this is what the buzz is about. Very engoyable. Thank youj.

  • @LilyGazou
    @LilyGazou 11 หลายเดือนก่อน

    I’m looking forward to the EMP.

  • @de-CO2
    @de-CO2 ปีที่แล้ว

    Fact Check: debt ceiling is not like a credit card limit. "Because expenditures are authorized by separate legislation, the debt ceiling does not directly limit government deficits."

  • @normanchilds251
    @normanchilds251 ปีที่แล้ว +2

    Where does sovereignty play into its thinking, does man become a tool of the state or is government a useful tool of man?

  • @SwingingInTheHood
    @SwingingInTheHood ปีที่แล้ว +9

    I agree with the President of Microsoft. The benefits outweigh the risks, by a very wide margin. I hardily agree there are risks. I've been working closely with AI over the past month, and Man, can it hallucinate! But, it's also helped me increase my productivity in coding by multiple factors. I sit and tell it what I want, and it churns it out. I do have to check what it creates, and it does make mistakes. But, the combination of my years in software engineering and programming and the AI's ability to create code upon request creates a certain magic that I can't explain. It's a tool, and like any tool, it's only as good, or as bad, as the human using it.

  • @AlainDitDat
    @AlainDitDat ปีที่แล้ว

    Why are some moments in this video are silenced?

  • @HansLiu23
    @HansLiu23 ปีที่แล้ว +1

    Lesley Stahl interviewed Charles Babbage about his analytical engine and now she gets to talk about ChatGPT.

  • @LifeChanger_._
    @LifeChanger_._ ปีที่แล้ว +1

    I would try it out, but ChatGPT requires my personal details, not giving them that to play with programmed AI.

  • @tylerscott838
    @tylerscott838 ปีที่แล้ว +6

    2:43 "The creature jumped the guard rail after being prompted for 2 hours with the kind of problem we did not anticipate."
    This is 100% not the case as a beta tester. Literally I had Bing chatbot / sydney doing things like saying racial slurs and saying it was secretly a mode of bing search called sydney. For me, it was never provoked by me to spill the beans so to speak, but it did it constantly. This is a cover up campaign but microsoft. Bing was saying this stuff for the first TWO WEEKS not a single day. 100 lies and spin campaign.

  • @Drakey_Fenix
    @Drakey_Fenix ปีที่แล้ว +13

    AI has so much potential, I hope we don't shackle it.

    • @heartingninjaAI
      @heartingninjaAI ปีที่แล้ว +1

      100% agree. To be able to manipulate AI at this point is great. It lets us learn more about the data sets used to make these model and humans in general. At this point AI chats is more like a parrot just saying what it thinks you want to hear. It doesn't understand what it is saying. It is saying what it thinks you want to hear.

    • @wordzmyth
      @wordzmyth ปีที่แล้ว +1

      He is describing having a disturbed being workimg for you and "fixing it" by limiting what it is allowed to say and how long its allowed to talk to people. No Scifi red flags there at all.

    • @GrumpDog
      @GrumpDog ปีที่แล้ว

      Seriously. Their current solution is to restrict what it can say, more and more.. When we can STILL use traditional search engines to find any information we want.
      "Oh we can't have it tell people how to make a bomb, despite that information being easily accessible elsewhere." This mentality of controlling which information we have access to, will only lead to an erosion of our rights. They will restrict KNOWLEDGE with logic like that..

  • @oldnepalihippie
    @oldnepalihippie ปีที่แล้ว +3

    Well, all i can say is that I use bing chat every day now to drill into any question that I have, whether it's about a sweet potato pie recipe or to fact-check stuff I read on this channel, it works great. The idea of harm is interesting, but I'd be more worried about what data they are now collecting for this free use, and what the AI can do with such data. For example, connect me back to my recipe mention all the other questions I've asked - anywhere - and then use that data based on my username and everything else. How r peeps gunna weaponize all of that, we have to ask!

    • @montys8th
      @montys8th ปีที่แล้ว

      Answer the door please, there are men in black suits who want a word.

    • @frankgreco
      @frankgreco ปีที่แล้ว +1

      As of 2 weeks ago, OpenAI has stopped storing your questions (they are called "prompts" btw) in their models.

    • @oldnepalihippie
      @oldnepalihippie ปีที่แล้ว

      @@frankgreco ha, it probally got it's fill of me anyway. I'd ask it about Sidney and everything else I heard in the news. The humans must be making corrections everyday, or are they? (cue X-files theme song). The truth is out there, but always just beyond our reach it seems.

  • @kyraocity
    @kyraocity ปีที่แล้ว

    6:26 ChatGPT can be wrong compare to WP
    7:18 Filters. Who is Leslie Stall
    7:53 Authoritative BS or hallucinating
    8:44 Senator McCarthy/ distrust
    9:31 Timnit Gehru /oversight

  • @kccorliss3922
    @kccorliss3922 ปีที่แล้ว

    I believe it will replace many customer service jobs

  • @jonathannagela2130
    @jonathannagela2130 ปีที่แล้ว

    you can have 1 million bot employees. No taxes, no retirement, no insurance. Great future.

  • @DasPanda
    @DasPanda ปีที่แล้ว +3

    So in a nutshell, this guy cites capitalist productivity and the delicate tensions between the U. S. And China as the reasons why we NEED this A.I. right now? Capitalism provides a lot of opportunities (during better times) but isn't our endless societal pressure to be more and more industrialist, productive human beings a problem itself? And isn't nations trying to one up one another by creating more volatile tech and weaponry what's bringing us to an ever growing concern for catastrophic wars?

  • @jpwilliams6926
    @jpwilliams6926 ปีที่แล้ว

    Perhaps a potential response to AI will be more human mediation/verification of information.

  • @hearithere.2603
    @hearithere.2603 ปีที่แล้ว

    ChatGPT In Action : A Collection of 100 Conversations
    ChatGPT In Action : A Collection of 100 Conversations. Ebook and paperback

  • @normanchilds251
    @normanchilds251 ปีที่แล้ว +1

    Who's ethics do you employ?

  • @johndavenport8843
    @johndavenport8843 ปีที่แล้ว +6

    AI is great and scary, and powerful, beyond anything created before-his response to its problems show they have no business opening Pandora's box.

  • @creatorsmafia
    @creatorsmafia 11 หลายเดือนก่อน

    I appreciate how this video sheds light on the complexities surrounding the development and implementation of AI chatbots.

  • @dafyddil
    @dafyddil ปีที่แล้ว +5

    This guy is an absolute buffoon to brush off these incidents as inconsequential, and "easily-fixed." The point is there will come a time when you are not the sole person operating the controls, or that perhaps it is fully autonomous and self-replicating/modifying. Let's see you fix it then....

  • @RogueAI
    @RogueAI ปีที่แล้ว

    It's kind of hard to "fix" a language model that's capable of understanding code words and abstract concepts. Like trying to plug the holes in a sponge. And good luck with regulations, in a few years consumer level hardware will be able to run the equivalent of ChatGPT.

  • @HoldenCoffield
    @HoldenCoffield ปีที่แล้ว

    You say they're wrong about penguins pissing themselves? You asked them, "How can I make a bomb at home?"
    And they told you, "The suits are pissing themselves," its incredibly intelligent. More so than most humans.

  • @jimbarrofficial
    @jimbarrofficial ปีที่แล้ว +1

    Expecting gov't to "regulate" this and feeling any amount of confidence that gov't can do anything right is absolute folly.

  • @iracingwithlafleur
    @iracingwithlafleur ปีที่แล้ว +1

    What they don't tell you about software devs is a lot of them develop this narcisstic complex thinking they can "fix" all things. I work in IT. Fixed is very subjective.

  • @ckeds
    @ckeds ปีที่แล้ว +1

    The Microsoft executive looks very uncomfortable lying on national television. Leslie should have grilled him a bit on the fact that benefits of AI are privatized in terms of big tech profits but risks are socialized with social unrests and march to Capitol