A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru

  • Published Dec 25, 2024

Comments •

  • @JayconianArts
    @JayconianArts 1 year ago +410

    I wanna say, as an artist, it means a lot hearing professional researchers and entertainers explicitly saying the same things that artists have been saying. With image generators being one of the first big booms of this current wave, artists have been raising the alarm on this topic for almost a year now, about the negative impacts it's going to have and how it's exploiting us. It feels like we've been on our own for most of the fight, so seeing that there are others on our side is comforting.

    • @paulmarko
      @paulmarko 1 year ago +13

      Did you see that the US Copyright Office won't copyright AI-generated work because there was no human authorship? The trajectory is already moving in a positive direction. Also, pro concept artists already use a myriad of theft-like tools: photo bashing, Daz 3D, inspiration without consent, etc. I'm an artist, and I think artists' worries are misplaced. People aren't going to be replaced; they're going to be able to spend their artistic time just doing the really fun parts of art, juicing the work and pushing it creatively. (At least until AGI comes for every job all at once.)

    • @gwen9939
      @gwen9939 1 year ago +24

      @@paulmarko The whole reason this scare about AI replacing art has even been a thing is the extremely low bar most people have for what constitutes good art. This was a serious issue in the world of music commissioning long before AI, where it was impossible to get started on paid freelance commissions because someone was always offering the same as you for incredibly cheap. It generally sucked, but the game devs, film directors, marketing agents, etc., were incapable of telling the difference between a professional and a hobbyist. Same goes for sites like AudioJungle, where the output is at least of very high technical quality, but it's also completely soulless, inoffensive, market-tested elevator music that sounds like you've heard it for the 500th time the first time you listen to it.
      And it's at every level of every industry. The whole Mick Gordon situation, where he got screwed out of a contract, happened because the lead on the project just kicked him to the curb and punted the rest of the project to their own in-house sound guy, figuring that would be just as good as anything Mick Gordon could make, which is why it sounded like garbage.

    • @JayconianArts
      @JayconianArts 1 year ago +49

      @@paulmarko People's jobs are already being replaced. The nature of these machines isn't to help artists, it's to remove them. Illustrators who have made book covers for years are finding that companies they've worked with now use image generators. There was a Netflix production that used AI to do backgrounds in an animated film because of a 'labor shortage', meaning that artists were wanting better pay and unionizing, but the company would rather simply not pay artists at all.
      Also, calling photo bashing and inspiration theft-like, on the same level as image generators trained on billions of stolen images, is simply absurd. If an artist is inspired by something, they're still putting their own spin, skill, and creativity behind it. To say that me being inspired by great artists, studying their works, techniques, and ideas, is comparable at all to someone typing words into an algorithm and getting a result minutes later is insulting. Machines can have no inspiration, no direction, no life or thought behind what they're making.

    • @paulmarko
      @paulmarko 1 year ago +4

      @@JayconianArts
      They can't own an AI book cover, though. I'm not sure what kind of book writer doesn't care that they don't and can't own their cover art, except maybe really crappy ones? Sure, there will be an adjustment period before the entire market is flooded with AI art, but it can't replace real artists because companies need to be able to own the asset, and the low skill involved means that people will gradually stop interpreting an AI-generated cover as a signal of quality. We'll see, of course, but I'm very optimistic that it'll just become an artist's tool that will help people make new and amazing works much faster.

    • @paulmarko
      @paulmarko 1 year ago +1

      @@JayconianArts
      Also, re: photo bashing. I've definitely seen some artists do some iffy stuff. Like, one was painting a desert and it wasn't coming together, so right at the end he basically dropped a desert photo on top and smudged it in a bit. Similar with character-design photo bashing. I've definitely seen a fairly large amount of contribution from what were basically photos just grabbed from Google Images.

  • @stax6092
    @stax6092 1 year ago +424

    It's actually kinda incredible how much corporations get away with considering that they have the money to straight up just do a better job. More regulation is always good when it comes to corporations.

    • @tttm99
      @tttm99 1 year ago +37

      Starting with stopping competition-crushing mergers!

    • @1234kalmar
      @1234kalmar 1 year ago +4

      Collectivisation. The best regulation for private companies.

    • @Mr2greys
      @Mr2greys 1 year ago

      @@tttm99 I agree, except other countries allow it, and then those companies just stomp out local competition; the only response to that is protectionism. The horse is already out of the barn; it's pretty much too late.

    • @andrewmcmasterson6696
      @andrewmcmasterson6696 1 year ago +13

      It's the MBAification of corporate excellence: whenever you can, substitute the appearance of excellence for the real thing.

    • @kyleyoung2464
      @kyleyoung2464 1 year ago +9

      This comment goes hard. Proof that what's best for profit does not equal what's best for us.

  • @sleepingkirby
    @sleepingkirby 1 year ago +172

    14:51 "Open AI is not at all open about how these things are trained... according to Open AI, this is somehow for safety, which doesn't make sense at all."
    Yes! Thank you! As anyone in the industry will tell you, security through obscurity is BS.
    @Adam Conover
    Thank you for getting real experts on this: people who not only know the context of the topic, but also know how it actually works and is built.

    • @moxiebombshell
      @moxiebombshell 1 year ago +9

      🎯🎯🎯 All of the yes.

    • @alexgian9313
      @alexgian9313 1 year ago +15

      @sleepingkirby - Of course obscurity is necessary for security :D
      *Their* security, before the lawyers have a field day sorting through how much IP theft was involved.

    • @skywatcher2025
      @skywatcher2025 1 year ago +3

      I agree that the security argument isn't great, but it's not entirely a lie, either.
      It's called an information hazard. Things that qualify are things like "how to build a nuclear bomb", "how to make chemical/biological weapons", etc.
      EDIT:
      I'd like to note that I'm not saying I support only a few companies knowing how to "build the weapon", per se. I'm speaking solely to the fact that the security (however limited in scope that may be) is one of the very few (reasonably) good reasons to not be very open about the process.
      Also, I'm well aware that some datasets are borderline, if not completely, illegally sourced. I do not support that in any capacity, and I realize that not showing how the systems are trained could allow such immoral usage. I do not claim to know a solution to this very important issue.

    • @sleepingkirby
      @sleepingkirby 1 year ago

      @@skywatcher2025 Are you referring to the "security through obscurity" aspect or something else? Because this comment seems like a tangent to me.

    • @alexgian9313
      @alexgian9313 1 year ago

      @@skywatcher2025 Oh, come on....
      Because if they explained how they did it.... why, then just ANYONE could buy millions of dollars of computer equipment, consuming more electricity than a large county, and then rip off millions of poor people to classify all the trillions of GB of data they'd scraped off the internet without permission, and create a hype bomb that this was "dangerous AI", that we needed to be protected from by covering it in total obscurity.
      I mean, WON'T ANYONE THINK OF THE CHILDREN???

  • @fran3835
    @fran3835 1 year ago +273

    When I was in college, I did an internship at an AI company. They asked each of the interns to make a small project that could help the community and would be open source. I proposed making a video game (a small game-jam-type thing that visually represented how AI works); everyone looked at me like I was stupid and told me I had no idea how much effort it takes to make a video game. The other guy proposed making an AI psychologist, and everyone thought it was a great idea... By the time we finished, you could tell it you were about to kill yourself, and sometimes the thing would answer "good luck with that, goodbye" and close the connection. (They removed the psychologist from the site and left it as a regular chatbot.)

    • @Theballonist
      @Theballonist 1 year ago +48

      Perfect summary, no notes.

    • @sabrinagolonka9665
      @sabrinagolonka9665 1 year ago +107

      Absolutely love the conceit that producing an effective psychologist is easier than programming a game.

    • @MaryamMaqdisi
      @MaryamMaqdisi 1 year ago +1

      Rofl

    • @estycki
      @estycki 1 year ago +18

      I know I shouldn’t laugh but the bot probably figured if the person was dead then the conversation is over 😆

    • @Neddoest
      @Neddoest 1 year ago +4

      We’re doomed

  • @heiispoon3017
    @heiispoon3017 1 year ago +89

    Adam, thank you so much for providing Emily and Timnit the opportunity for this conversation!!

  • @XPISigmaArt
    @XPISigmaArt 1 year ago +74

    As a digital artist (and human living in society) I really appreciate this discussion, and hope this side gets more traction to combat the AI hype. Thank you!

    • @andrewlloydpeterson
      @andrewlloydpeterson 1 year ago

      This is funny because, like 2-3 years ago (and even now), digital art was gatekept as hell, and now digital artists suffer from AI haters because digital art is easily mistaken for AI art.

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago

      "(and human living in society)"
      Why would you add that? Did you think we thought you were Mowgli or something? Or do you think there are people out there who are not human, either in a sci-fi way or in a Nazi way?

    • @andrewlloydpeterson
      @andrewlloydpeterson 1 year ago

      @@TheManinBlack9054 Anti-AI folks were too lazy, so they asked an AI to write an anti-AI post; that's why it said such a weird phrase.

  • @cphcph12
    @cphcph12 1 year ago +204

    I'm a 53-year-old programmer who started playing with computers when I was 12, in the early '80s. Back then, they expected AI to be just around the corner. Forty years later, AI is still "almost finished" and "so close". The more things change, the more they stay the same.

    • @Fabelaz
      @Fabelaz 1 year ago +8

      You know, the fact that these things can write code for a problem you just came up with is pretty impressive, even if there can be mistakes (which can be fixed through more requests). Also, the rate of improvement of things like Stable Diffusion points toward a significant decrease in the number of commissions artists are going to receive, especially in corporate environments.
      Is it anywhere close to sentience? Hopefully not. Are these things going to leave a lot of people without jobs? Likely, if no policies are implemented really soon.

    • @Ruinwyn
      @Ruinwyn 1 year ago +14

      The biggest problem in programming is still exactly what it has always been: the people who want the program don't know what they want. They can't define what they need, and they keep changing their minds and their priorities. They also have unique problems; the common, general problems have been solved and are available off the shelf with one click. Every now and then, new languages crop up that "make programming more understandable", and after a while they get more complicated, because the simplified versions couldn't solve more complex problems.

    • @GioGio14412
      @GioGio14412 1 year ago +2

      It's not around the corner anymore, it's here.

    • @brianref36
      @brianref36 1 year ago +12

      @@GioGio14412 No, it's not. We have nothing even close to an AI that could replace a thinking person.

    • @slawomirczekaj6667
      @slawomirczekaj6667 1 year ago

      Like with breeder nuclear reactors. In addition, all the people capable of real breakthroughs are eliminated from the industry or from science.

  • @kibiz0r
    @kibiz0r 1 year ago +234

    The eugenics connection is bone-chilling. People don't realize how popular eugenics was, across the whole world. It wasn't some fringe Nazi-specific thing. People really thought we were on the verge of creating a new superior species by applying genetic engineering principles to ourselves. We're in the same situation again, but businesses are enacting it unilaterally -- no government coordination required -- and public opinion seems (un?)surprisingly amenable to it.

    • @UnchainedEruption
      @UnchainedEruption 1 year ago +36

      We still practice some aspects of the eugenics movement, but obviously we don't call it that anymore. Prospective parents receive information about what risks their child might have if they go through with the pregnancy, and some may decide to abort the fetus if the life will be too hard on both the family and the child. We have organizations concerned with the accelerating growth of the global population, urging people to have fewer kids to prevent overpopulation down the line. What made eugenics insidious was that somebody else, an authoritarian regime, would dictate who had the right to live and reproduce and pass on their genes. Those decisions were not voluntary. However, if people want to have some small effect on the future of the species by voluntarily choosing whether or not to have kids, I don't think that's evil. It only becomes evil when you decide for somebody else what value their life has.

    • @Ben-rz9cf
      @Ben-rz9cf 1 year ago +6

      We're not just creating dangerous technology. We're creating dangerous people, and that's what we should be more worried about.

    • @yudeok413
      @yudeok413 1 year ago

      The thing about eugenics is that its proponents are obviously on top of the pyramid. All you need is a few billionaires who already think that they themselves are the pinnacle of humanity (Thiel and his minions like Musk) to get the ball rolling.

    • @Frommerman
      @Frommerman 1 year ago +8

      Also, consider the similarities in effect between eugenics and the modern field of economics. Both make broadly unfalsifiable claims which could not be adequately tested even if the people studying them wanted to. Both serve the purpose of continuing to enrich and empower the already powerful. Both are used to justify the continuing horrific conditions in the colonized world by calling them the result of natural laws rather than human malignity. And both are regularly used to justify outright mass murder. In the case of economics it may be hard to see how that is the case...until you know what the estimated yearly cost of completely eliminating hunger is.
      $128 billion. Total. For the cost of liquidating less than a third of Jeff Bezos' absurd dragon hoard, nobody anywhere in the world would starve to death for an entire year. Economists, in their infinite malice, justify a single man's daily decision not to prevent any human anywhere from being hungry. And the truly damning part is that it wouldn't cost that much the next year. Once you removed the threat of starvation from every community everywhere, they would be able to focus on building up the resources they need to feed themselves the next year. It's difficult to estimate, but the whole program of ending hunger globally, permanently, could well cost the wealth of a single person.
      Economists tell us this is unrealistic. Much like eugenicists told us it was unrealistic for white people to live peacefully with the rest of humanity. These aren't different arguments, or even different disciplines. Economists are just eugenicists using bad math instead of bad genetics to justify their arguments. If any of us survive the next century, I expect the histories we write will put Milton Friedman in the same category of evil as Adolf Hitler.

    • @TheRealNickG
      @TheRealNickG 1 year ago

      Without DNA there is no "engineering", and DNA's structure wasn't even known until the 1950s. They really tried to burn down the whole world based on the simple acknowledgment of biological heredity.

  • @jt4351
    @jt4351 1 year ago +35

    Fun fact: it is still very buggy for writing code as well. Depending on your prompt, it may assume you know what you're doing and suggest some amalgamation of what you asked for.
    In programming, there are these things called methods and properties. Think of these as English words that tell the computer to do something: common tasks you don't have to spell out step by step, built-in tools of the programming language. However, if you ask it in a specific way, it will suggest your wording as a property of the language, even though it is nonexistent. You can tell it that it's wrong, and for the most part, it just repeats the same output. Unless I specifically ask it to use a different method, it keeps regurgitating the same thing while "apologizing".
    In plain English, it's something akin to this: say you want a recipe for crepes, and you typed in some gibberish, something like "I want crepes that are smoverfied". The model finds a recipe for crepes and will add "once cool, be sure to smoverfy your crepes" with no idea what that means. lol This is a random example that may not work, but I've had many cases where it gave me code that, if I try to run it, just throws an error because something doesn't exist; it just morphed my prompt. It's a great tool to get started, but it mixes and matches, and is often wrong.
    It is just as artificially intelligent as it is artificially dumb. No wonder the mistakes in AI are called hallucinations...

    • @Atmost11
      @Atmost11 1 year ago

      I imagine part of your job was to help cover up the fact that, while it has a role in business, it can't perform as hyped in terms of actual unsupervised decision-making?
      Including protecting your own team from evidence that it doesn't work, I bet.

  • @GregPrice-ep2dk
    @GregPrice-ep2dk 1 year ago +411

    The larger issue is techbros like Elon Musk who think they're real-life Tony Starks. Their track record of actually *accomplishing* anything proves otherwise.

    • @CarbonMalite
      @CarbonMalite 1 year ago +78

      If Elon was tasked with inventing a reality-busting mech suit he would invent the 8 day work week instead

    • @mshepard2264
      @mshepard2264 1 year ago

      SpaceX has put about as much mass into orbit as pretty much every other company on Earth put together. Also, without Tesla, electric cars would still be getting mothballed every five years. So feel free to hate Elon, but he isn't a dumb guy. He is terrible at public speaking. He is bad with people. He is also super weird. But he's not like your average Silicon Valley tech bro.

    • @GirlfightClub
      @GirlfightClub 1 year ago

      100%. Also, AI and big tech execs dictating their own morality to all of us through censorship that doesn't reflect real-life laws and community standards.

    • @stevechance150
      @stevechance150 1 year ago

      I used to be an Elon fanboi, but not so much now. However: 1. NOBODY was manufacturing electric cars until Tesla did it. 2. NOBODY else goes to orbit and lands a rocket back on the pad.

    • @O1OO1O1
      @O1OO1O1 1 year ago

      No, con men aren't the problem. It's the people who fall for them, and who continue to fall for them for decades. Journalism and journalists are also at fault. And the government is at fault for continuing to fund him, and the people for voting in such stupid representatives, and his employees for putting up with this crap instead of striking and leaking all of the dodgy s*** he's been up to. Good people could take down Elon very easily. Then he could go sell used cars like he should be doing.
      "I tried to think about what would be most important for humanity..."
      "Dude, shut up. I just want to buy a car."

  • @futureshocked
    @futureshocked 1 year ago +56

    The reason they're pushing AI is because SILICON VALLEY IS OUT OF IDEASSSSSSS. If you look at what they've been doing for the past 15 years and you're brutally honest about it--we've wasted an entire generation of brilliant young programmers to make mobile apps. We've wasted a generation of brilliant product designers to make the Juicero. Bitcoin. Subscription apps. Tech has been in absolute clown-territory for a long time and no one wants to admit it.

    • @personzorz
      @personzorz 1 year ago +2

      Because there's nothing left to do in that sphere

    • @silkwesir1444
      @silkwesir1444 1 year ago

      Boooo!!! Resistance is futile! 😈

    • @futureshocked
      @futureshocked 1 year ago +6

      @@personzorz There really isn't. And it's wild watching companies that should know better just throw money at shit like this. It's tiresome; these billions going into Clippy 2.0 could really be used for, ya know, jobs.

    • @Praisethesunson
      @Praisethesunson 1 year ago +1

      Exactly right. But they need to maintain their access to vast capital markets, so they lie out their asses about the capabilities of a stupid computer program.

    • @coreyander286
      @coreyander286 1 year ago +1

      How about protein folding programs? Isn't that a recent Silicon Valley success with concrete benefits for public health?

  • @3LLT33
    @3LLT33 1 year ago +41

    The instant she says “the octopus taps into that cable” and the cat reaches out from under the blinds… perfect timing!

    • @Ecesu
      @Ecesu 1 year ago

      Yes! Putting a timestamp so people can see it 😅 59:09

    • @victorialeif9266
      @victorialeif9266 5 months ago

      😂Yeah, and another woman who has a cat! Super dangerous!

  • @Musamecanica
    @Musamecanica 1 year ago +12

    I loved this show, and these ladies should have their own! They are funny, smart, and entertaining, and they put all of this news in perspective. Keep on fighting the good fight.

  • @joshuachesney7552
    @joshuachesney7552 1 year ago +254

    Just today, an automation product we use was promoting its new AI integration, saying the old way was slow and bad because we had to spend time researching things. The new way is awesome because the AI just finds the answer and tells it to you.
    The question was how to prevent a computer from upgrading to Windows 11, and the AI's answer was to permanently disable Windows from getting any type of update, ever. (For those who don't know, this is considered by industry professionals to be, as we say in the biz, "fucking stupid".)

    • @tttm99
      @tttm99 1 year ago

      Testify! It's a non sequitur, isn't it? But you can't sell people (you can't even give away) the seemingly obvious truth: that relying implicitly and unconditionally on something you don't understand and don't control is a bad idea. It might happen when you can't help it, but it isn't a good thing to go shopping for. 🤣 On the other hand, I'd have to concede the AI might indeed be a higher intelligence if it instructed you to install Linux, or to just put your machine in a bin and go on a well-deserved holiday 🤣. We can dream.
      But sadly, sometimes contextual answers actually need to be practical and sensible, and those won't come from any AI until it is *vastly* more intelligent and far more connected to the real world. Hopefully, long before then, we'll realise that building *that* would be a very bad idea. And the fatalist-inevitability crowd who argue against this might want to ponder why we *still*, after all these years, haven't nuked ourselves into nonexistence yet.

    • @franklyanogre00000
      @franklyanogre00000 1 year ago +13

      It's not wrong though. 😂

    • @louisvictor3473
      @louisvictor3473 1 year ago +48

      @@franklyanogre00000 The AI took "technically correct is the best type of correct" at face value and made it its motto.

    • @SharienGaming
      @SharienGaming 1 year ago +31

      And that perfectly illustrates the difference between finding "an answer" and finding "the (correct) answer".
      The idea that a chatbot can do research like that is laughable, and anyone doing serious software development or systems maintenance will tell you so... Automation tools are nice because they free up our time to do the actual hard work, the research and analysis, but they don't replace that hard work.

    • @PeterKoperdan
      @PeterKoperdan 1 year ago

      What was the AI's next solution?

  • @UK_Canuck
    @UK_Canuck 1 year ago +56

    Thanks to you and your guests, Emily and Timnit. This was a fascinating conversation that filled in so much detail for me. I had a vague sense of disquiet about the hype, the possibly plagiaristic nature of the output, and the accuracy of the data sets used for training. Emily and Timnit have provided some solid background to give a more defined shape to my concerns.
    I found particularly interesting the information that the groups driving the AI/AGI project had such clear links to the philosophy behind eugenics. Disturbing.

    • @robertoamarillas
      @robertoamarillas 1 year ago

      I honestly believe Adam fails to understand the real potential, and the potential harm, that artificial intelligence represents.
      Human intelligence is not as unique as he wants to make it out to be; the reality is that human creativity is nothing more than the ability to blend concepts and ideas, and at that, LLMs are incredibly powerful, and we are only scratching the surface.
      I think your whole project of skepticism and uncovering the truth in everyday deception is very valid and necessary, BUT I think you are really losing sight of the kind of paradigm shift that LLMs represent. I really think you underestimate the potential existential risks, and it is annoying and irresponsible for you to indirectly attack the voices that have been raised to warn about them.
      You treat people like Eliezer Yudkowsky and the like as doomsayers motivated by some kind of financial gain.

  • @terriblefrosting
    @terriblefrosting 1 year ago +11

    I _really_ love listening to people who really, really know their stuff, think seriously about more than just "right now", and genuinely consider the real benefit of new things to everyone.

    • @oimrqs1691
      @oimrqs1691 1 year ago

      Do you think the people working at OpenAI don't think about this stuff?

    • @LafemmebearMusic
      @LafemmebearMusic 1 year ago +1

      @User Name Do you think their point was that the other side is stupid? That's what you took from this?
      What I heard them say was: hey, I don't have to agree with everything you want, but there are serious concerns about the marketing of AI versus the reality, and we need more transparency so we can actually know where we stand with the tech and how it can help others. Also, they seem deeply concerned about the eugenics angle it appears to be taking.
      Can I ask, truthfully, with zero malice, a real question: how did you take away from this that they think the other side is stupid? I definitely think they find what they're doing dangerous and ridiculous, but stupid? I dunno, can you elaborate?

  • @sleepingkirby
    @sleepingkirby 1 year ago +33

    I do want to mention that people who monitor bot accounts have recently seen a large uptick in said bots posting things that talk up AI (basically spam) in places like user reviews, TikTok, and comments on random things.
    Also, there was a report out recently that found AI-generated code is often unsafe, and the model won't point that out unless you ask it to.
    But yes, it has become a marketing term.

    • @MCArt25
      @MCArt25 1 year ago +1

      To be fair, "AI" has always been a marketing term. At no point has anybody ever managed to make intelligent software; it just sounds cool and sci-fi, and people will always fall for that.

    • @sleepingkirby
      @sleepingkirby 1 year ago +2

      @@MCArt25 Well... no. Artificial intelligence goes back to science fiction first, before it was even close to being a thing in reality, like Isaac Asimov's stories. There's a work from 1920 about intelligence in robotic beings. It wasn't always a marketing term, but it has become one.

  • @hail_seitan_
    @hail_seitan_ 1 year ago +148

    "I am dumbfounded by the number of people I thought were more reasonable than this..."
    Never underestimate human stupidity. If there's something you think people would never do, they've probably done it, and more.

    • @schm00b0
      @schm00b0 1 year ago +7

      It's never stupidity. It's always greed and fear.

    • @ethanhayes
      @ethanhayes 1 year ago +7

      Not disagreeing with your point, but I think her point was that the specific people she knew turned out to be less reasonable than she thought. Which is a bit different from general "human stupidity".

    • @asdffdsafdsa123
      @asdffdsafdsa123 1 year ago

      God, people like you make me sick. You're not smart. Neither of them addressed the emergent capabilities expressed in GPT-4, which is the SOLE REASON people think LLMs may eventually achieve AGI. Plus, their entire gotcha about the "Sparks of AGI" paper was that they had a problem with IQ tests???? Something that's widely used to this day to measure human intelligence??

    • @Ilamarea
      @Ilamarea 1 year ago +1

      This comment section is a perfect example of it.
      The AI we have is a research preview of a pre-alpha, an AI embryo. It's literally the first version that worked, and it got vision after a couple of months. At the current rate, we have years until the collapse of the capitalist system, which will spark wars for resources. And the inevitable end result, regardless of how it goes, is our extinction, because once AI makes decisions, once it learns by itself, we will have lost our agency; we won't control our fate and will be unable to react to threats, the most potent of which will come from the AI in forms we wouldn't expect, like perfect robotic lovers we can't breed with.

    • @schm00b0
      @schm00b0 1 year ago +8

      @@Ilamarea Dude, just keep working at that novel! ;D

  • @robertogreen
    @robertogreen 1 year ago +146

    on thing you didn’t focus on here is that GPT (and the octopus in emily’s paper) is that the bias is to ALWAYS ANSWER QUESTIONS. Like…if Chat GPT could just not answer you at all, not even “i don’t know” then it would be something very different. but its bias towards answering is the heart of the problem

    • @Ayelis
      @Ayelis 1 year ago +5

      But then it wouldn't be useful as a question answerer, which it might as well be. Without input, it would literally be a random sentence generator. So they trained it to answer questions, even incorrectly. Which is, kinda, better.

    • @MarcusTheDorkus
      @MarcusTheDorkus 1 year ago +51

      Of course it can't really even tell you "I don't know" because knowing is not something it does at all.

    • @robertogreen
      @robertogreen 1 year ago +12

      @@MarcusTheDorkus this is the way

    • @scrub3359
      @scrub3359 1 year ago +5

      ​@@MarcusTheDorkus Chat GPT can easily do that. It knows what it knows at all times. It knows this because it knows what it doesn't know. By subtracting what it knows from what it doesn't know, or what isn't known from what is (whichever is greater), it obtains a difference.

    • @Brigtzen
      @Brigtzen ปีที่แล้ว

      @@scrub3359 No? It can't know things, because all it does is parrot words. It _cannot_ know the difference, because it doesn't know what it doesn't know, because it doesn't think at all.

  • @davidwolf6279
    @davidwolf6279 ปีที่แล้ว +19

    The irresponsible claims of programming 'thinking' and intelligence date back to McDermott's 1978 paper: Artificial Intelligence meets natural stupidity

  • @DerDoMeN
    @DerDoMeN ปีที่แล้ว +46

    It's always a shocker listening to people that actually don't glorify these search algorithms... I find it even more shocking to listen to somebody from the field who's not trying to show AI as anything more than what it is.
    Really nice to hear that there are some sane people in the AI field for which I've lost interest years ago (due to obvious lack of reason in the land of proponents).

  • @boca1psycho
    @boca1psycho ปีที่แล้ว +11

    This conversation is a great public service. Thank you

  • @polij08
    @polij08 ปีที่แล้ว +152

    Just yesterday, my law firm held a legal writing seminar for us associates. At the end, the presenter made a brief note about using AI for legal writing. In a word: DON'T. He had ChatGPT (or whatever bot) generate a legal memo. First, it was stylistically poor. Second, the bot failed to know that the law at the center of the memo had recently changed, so the memo was legally inaccurate. AI text may be able to generally get the style of writing legal briefs, but until it can accurately confirm the research that supports the writing, it is useless at best, very dangerous at worst. My job is safe, for now.

    • @jaishu123
      @jaishu123 ปีที่แล้ว +7

      GPT-3.5 is not connected to the internet, GPT-4 can be via plugins.

    • @robinfiler8707
      @robinfiler8707 ปีที่แล้ว +3

      it can already confirm it via plugins, though most people don't have access yet

    • @deltoidable
      @deltoidable ปีที่แล้ว +6

      It won't be long until it can; look at the GPT-4 plug-ins that allow you to feed it current data to analyze. You'll be able to upload digital copies of the laws in your state, or give it access to a powerful AI calculator like Wolfram Alpha, current stock market data, or just the internet generally, letting it use tools when it doesn't know the answer itself.
      Currently Chat GPT isn't actively seeing data; it was trained on data from 2021 or earlier. It's remembering from data it's been trained on. When you give Chat GPT access to data or tools to ground its answers in real information, that problem goes away.
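      The grounding idea this reply describes can be sketched in miniature. Everything below is a hypothetical stand-in (the function names, the naive keyword-overlap retriever, the sample documents are all invented for illustration), assuming the mechanism is "retrieve current documents first, then stuff them into the prompt" rather than any actual plugin API:

```python
# Minimal sketch of retrieval-grounded prompting (hypothetical, not a real plugin API):
# fetch the most relevant documents for a query, then build a prompt that asks the
# model to answer only from that supplied context, not from stale training data.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Embed the retrieved context into the prompt sent to the model."""
    context = retrieve(query, documents)
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}"
    )

# Invented example documents, including a recent change the model's training
# data would not contain (the situation the law-firm comment above describes).
docs = [
    "State statute 12.4 was amended in March 2023.",
    "The cafeteria serves lunch at noon.",
    "Statute 12.4 now requires written notice within 30 days.",
]
prompt = build_grounded_prompt("What does statute 12.4 require?", docs)
print(prompt)
```

      The point of the sketch: the irrelevant document never reaches the model, and the recent amendment does, which is why tool access addresses the stale-training-data problem the thread is arguing about (real systems use embedding search rather than keyword overlap, but the shape is the same).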

    • @skyblueo
      @skyblueo ปีที่แล้ว +1

      Thanks for sharing that. Is your firm creating policies that forbid the use of this tool? How could these policies be enforced?

    • @achristiananarchist2509
      @achristiananarchist2509 ปีที่แล้ว +21

      One of the main uses I've found for it as a programmer is pretty funny and related to this. I use ChatGPT for two things: 1) generating boilerplate (which it's actually pretty bad at, but sometimes it takes less time to correct its mistakes than to write it myself) and 2) something we call "rubber ducking".
      Rubber ducking is when you corner a co-worker and talk at them about your problem until you brute-force a solution yourself, often with little to no input from said co-worker. It's called "rubber ducking" because you could have saved someone else some time by talking to a rubber duck on your desk instead of a person. ChatGPT is *extremely* useful for this precisely because it 1) is very dumb and 2) has no idea how dumb it is. If I'm stuck on something, I can ask ChatGPT about it, and it will feed me a stupid answer that I've either already thought of or that very obviously wouldn't work. In the process of wrestling with the AI, I'm forced to think about the problem and will often get my "Eureka" moment as a result. A rubber duck just sits there; ChatGPT feeds me wrong answers that make me think about my issue in assessing why they are wrong. Big improvement over the duck.
      So it's great as a high-tech rubber duck. If there are any other applications where being naive, often wrong, and unable to self-correct is actually a feature rather than a bug, they should start pivoting into those markets.

  • @estycki
    @estycki ปีที่แล้ว +106

    What I don’t understand is all these people who keep saying “well it’s still in its infancy! 👶 And let’s replace our doctors, lawyers, programmers with hard working babies today!” 😂

    • @2265Hello
      @2265Hello ปีที่แล้ว +8

      A weird mix of instant gratification and the need to save money as a side effect of a basic survival-based mindset in America

    • @Praisethesunson
      @Praisethesunson ปีที่แล้ว +4

      ​@@2265HelloSo capitalism.

    • @2265Hello
      @2265Hello ปีที่แล้ว +1

      @@Praisethesunson basically

    • @ShadowsinChina
      @ShadowsinChina ปีที่แล้ว

      It's the racism

    • @parthasarathipanda4571
      @parthasarathipanda4571 ปีที่แล้ว +2

      I mean... these are pro-child labour people after all 😝

  • @tychoordo3753
    @tychoordo3753 ปีที่แล้ว +7

    The reason they are calling for regulations is simple; it's the same tactic corporations have used since forever. Basically, you ask the government for regulations that are at most a minor nuisance for your business, but that make it impossible for newcomers to get started because of the overhead the regulations create, so you get to stay on top without having to fairly compete. Same reason guilds used to be a thing in the middle ages.

  • @LizRita
    @LizRita ปีที่แล้ว +8

    These two were great to watch together in an interview! It's really sobering to have folks tear down claims that have been so normalized about AI. And suggest actual regulations that make sense.

  • @sunnohh
    @sunnohh ปีที่แล้ว +109

    I work with ai and my entire job is fighting against people thinking it works somehow

    • @FuegoJaguar
      @FuegoJaguar ปีที่แล้ว +21

      In a short period 100% of what I do as a director in tech will be to tell people not to put AI in stuff.

    • @RKingis
      @RKingis ปีที่แล้ว +5

      If only people realized how simplistic it really is.

    • @TheManinBlack9054
      @TheManinBlack9054 ปีที่แล้ว +3

      @@RKingis who are these people who know NOTHING about AI and then say that its really simple to understand how it works? Its not simple, its hard, its basically kind of magic because we have no idea on what is actually happening, interpretability is a LOOOOOOONG way away.

    • @carl7534
      @carl7534 ปีที่แล้ว

      @@TheManinBlack9054 How do you figure it is hard to understand what chat „AI“ does?

    • @Maartenkruger324
      @Maartenkruger324 ปีที่แล้ว

      @@TheManinBlack9054 "We" will never know, because everything that GPT says is non-referenceable. They, the GPT programmers, do know how it got to its answer: through statistical calculations. At best it can be worked back to a crap-load of data with no direct answer. The bot has no physical reference system. Mostly scripted sentences with no clue of the meaning of any of the words, separately or together. ChatGPT does not know what a chair is.

  • @samanthakerger3273
    @samanthakerger3273 ปีที่แล้ว +15

    I love how much smarter I feel for having listened to this when it's a podcast that includes the sentence, "Is the AI circumcised?" Which is one of the funniest and darkest sentences in the podcast.

  • @ellengill360
    @ellengill360 ปีที่แล้ว +5

    This is extremely important information. I hope your guests consider writing a version of the Stochastic Parrots article for non-scientists in plain language, maybe highlighting some of the less mathematical points. I'm going through the original article but find it hard to recommend to people who won't want to spend the time or will give up.

  • @batsteve1942
    @batsteve1942 ปีที่แล้ว +6

    Just finished listening to this podcast on Spotify, and it was refreshing to hear a more critical view on all the AI mania the media seems to love exaggerating right now. Emily and Timnit were both great guests and very informative.

  • @gadgetgirl02
    @gadgetgirl02 ปีที่แล้ว +10

    "End of work! Everything automated!" sounds great until you remember a) no-one said anything about changing how the economy works, so people still need means to have incomes and b) if automating everything were so great, people would have stopped paying a premium for handcrafted stuff by now.

  • @lunarlady4255
    @lunarlady4255 ปีที่แล้ว +91

    The only thing that can stop a bad guy with AI is a good guy with AI. So give us your money and your data and don't ask any questions if you want to live...

    • @aaronbono4688
      @aaronbono4688 ปีที่แล้ว +10

      That is pretty much the theme of Terminator 2 isn't it?

    • @kenlieck7756
      @kenlieck7756 ปีที่แล้ว +5

      @@aaronbono4688 Wasn't that written by humans, though?

    • @aaronbono4688
      @aaronbono4688 ปีที่แล้ว +7

      @@kenlieck7756 yes. These AI's just take the information they find and regurgitate it in new ways and since that information contains things like the Terminator movies you would definitely expect them to mimic that. But to the point of the original message, this is about what these companies are telling the public about the AI's that they are creating.

    • @kenlieck7756
      @kenlieck7756 ปีที่แล้ว +1

      @@aaronbono4688 Ooh, you just made me realize the ultimate flaw in the current AI -- that they are just as likely to crib from, say, the most recent Indiana Jones movie as they are to do so from the first...

    • @redheadredneckv
      @redheadredneckv ปีที่แล้ว +2

      Quick insert bs chips into your head so we can defeat an ambiguous unscientific terminator

  • @CanuckMonkey13
    @CanuckMonkey13 ปีที่แล้ว +5

    This was such a fascinating, educational, and valuable discussion. Thanks so much to everyone involved!
    I've been watching more of Adam's work recently, and I find myself wondering, "why did I only recently discover him?" Thinking today I realized that it's probably because I haven't had a connected TV for at least a decade now, and I don't want to pirate content, so when he was mainly on TV I was completely cut off. Adam getting bumped from TV by evil corporate interests has benefitted me greatly, it seems!

  • @nzlemming
    @nzlemming ปีที่แล้ว +104

    I love these women! When I saw the pause letter, I immediately thought that it was commercial in nature and discounted it. As a rule of thumb, anything Thiel and Musk agree on is bound to be a grift.

    • @Sarcasticron
      @Sarcasticron ปีที่แล้ว +10

      Yes, when they said why can't the "AI ethics" people and the "AI safety" people agree, I thought immediately "It’s because the AI safety people are grifters."

    • @Neddoest
      @Neddoest ปีที่แล้ว +4

      It’s a good rule of thumb…

    • @fark69
      @fark69 ปีที่แล้ว

      I'm kind of shocked at how well Gebru, particularly, has laundered her reputation. A few years back, Gebru accused Google of pushing her out because she was an AI ethicist, and then it was revealed she had actually given them an ultimatum to either do X (X being let her publish a paper they said needed more work to be up to snuff) or she would walk, and they chose to let her walk. At that time (2-3 years ago, I believe), her reputation was in the gutter. The trust was so broken, because if she would misrepresent that, what else would she misrepresent to further herself and her research?
      Now to see her being treated as an AI ethics expert is wild, especially given her own ethical lapse.
      Bender has a better track record.

  • @IngramSnake
    @IngramSnake ปีที่แล้ว +35

    Timnit Gebru is the real deal. As a post grad A.I student we constantly refer to what she and her team have put together to evaluate our models and approach to datasets. 🎉

    • @fark69
      @fark69 ปีที่แล้ว

      Is this true? Does she have a good reputation as an AI ethicist in academia? I remember her public kerfuffle with Google a few years back basically tanked any reputation she had because she was caught lying about Google's "pushing her out" of her job as an AI ethicist there. And public lying tends to not look great on an ethics researcher...

    • @Stevarious
      @Stevarious ปีที่แล้ว

      @@fark69 Weird, I've seen a few claims that Timnit Gebru lied about something about that situation, but those claims never seem to include evidence. Meanwhile, this comment section is loaded with people who work in AI and have a deep respect for her.

  • @sclair2854
    @sclair2854 ปีที่แล้ว +1

    Adam big thanks for this! Really glad you took the time to talk to experts on this!

  • @Talentedtadpole
    @Talentedtadpole ปีที่แล้ว +3

    This is important, the best thing you've ever done. Please keep going.
    So much respect for these brave and knowledgeable women.

  • @aden_ng
    @aden_ng ปีที่แล้ว +50

    After making my own video about AI art generators and replicating the process by which Stable Diffusion generates its copies, proving that they are indeed stolen artworks, I ended up in this really weird spot in online conversations where, despite not liking them or using them, I've become kind of one of the few people who actually knows how AIs generate their art.
    And the thing I noticed is that arguments for AI talk overwhelmingly about the monetary aspect, with very little understanding of the technology and the morality behind it.

    • @mekingtiger9095
      @mekingtiger9095 ปีที่แล้ว +20

      Hahahahahaha, yeah, this is the saddest part.... A lot of pro AI arguments are primarily focused solely on the monetary aspect and nothing else. Really shows you how much they disregard the social consequences of this tech.

    • @Foxhood
      @Foxhood ปีที่แล้ว +14

      @@mekingtiger9095 The magic word that makes me fall asleep in such conversations is "Democratizing", which I've come to understand is just code for wanting stuff without having to put in effort or pay for it.
      E.g. when they say democratizing art, it mostly means they just don't want to pay an artist for some 'intimate material'. If you catch my drift...

    • @MarcusTheDorkus
      @MarcusTheDorkus ปีที่แล้ว +3

      @@Foxhood Sounds like the more accurate word would be "communizing"

    • @MrFram
      @MrFram ปีที่แล้ว

      I watched OP's video, he knows no math and the video was pure misinfo. To anyone reading this, please consider picking up a math textbook rather than listening to these idiots failing to grasp basic multivariate calculus.

    • @choptop81
      @choptop81 ปีที่แล้ว +7

      @@MarcusTheDorkus Not really. It's corporations seizing the means of production from workers (artists here). It's the opposite of communizing

  • @johnbarker5009
    @johnbarker5009 ปีที่แล้ว +45

    THANK YOU for drawing attention to long-termism and the connection to Eugenics. This is insane, terrifying, and mind-numbingly stupid all at once.

  • @funtechu
    @funtechu ปีที่แล้ว +6

    16:40 In the vein of Chat GPT produced results looking correct to those who are not familiar with the topic, I would disagree with the assumption that Chat GPT produced code is good. I've fed a large variety of simple programming prompts to Chat GPT, and the results produced were terrible. It was a great mimic of what some code that did what was requested would look like, but it was not usable code, and some of the stuff produced (particularly when asking about writing secure code) was downright dangerous.

    • @vaiyt
      @vaiyt ปีที่แล้ว +1

      Often when it is correct, it's just copying an existing answer from stackoverflow or whatnot.

  • @shape_mismatch
    @shape_mismatch ปีที่แล้ว +23

    This is Pop Sci done right. Kudos for inviting the right kind of people.

  • @shadow_of_the_spirit
    @shadow_of_the_spirit ปีที่แล้ว +6

    I was so glad to hear them bring up the importance of being open with this tech. So many people who I normally hear talk about these models and why they're bad never discuss making sure we can know what the system is doing. Instead, they complain about the ones that are open about how they function, provide downloads of the models, and are often open about the training data as well. I think if we keep the tech open, it will be a lot harder for people to be hurt, and it makes the people making these systems accountable. But if we let them hide what they are doing and how they are doing it, then it's not a matter of if but when people get hurt.

    • @MaryamMaqdisi
      @MaryamMaqdisi ปีที่แล้ว +2

      Agreed

    • @RobertDrane
      @RobertDrane ปีที่แล้ว

      Amsterdam (Or some Dutch city) released the source for the "AI" they were using for fraud detection for social benefits in the past couple of months. Strong sunshine laws over there I guess. Critics & researchers have only been able to speculate about the implicit bias problems up until then as governments try to keep it private. I cannot overstate how stupid the system is. A podcast called "This Machine Kills" had an episode on it, but it got very little mainstream coverage. I think the episode was titled "The Racism Machine".

  • @warmachine5835
    @warmachine5835 ปีที่แล้ว +5

    53:00 same. There's a certain delight you can see on a person's face when they're in their area of expertise and are in a prime position to just utterly debunk some common, pernicious myth that has been repeated so much it has become personal for that person.

  • @faux-nefarious
    @faux-nefarious ปีที่แล้ว +20

    53:15 reading the footnotes definitely is spicy in this case! The paper sounds solid in citing a group of psychologists writing an editorial about intelligence- turns out the editorial was hella racist! Did Microsoft not know?? Did they just assume no one would notice?

  • @5minuterevolutionary493
    @5minuterevolutionary493 ปีที่แล้ว +26

    Last comment: so important for humanists (in the sense of non-religionists) to discern between an anti-science posture on the one hand, and a reasoned critique, based in history and evidence, of power dynamics impinging on the practice and priorities of science. There is a reflexive and lazy support for "science," which is not really a thing in a vacuum, but a product of human relations and material circumstance.

    • @mekingtiger9095
      @mekingtiger9095 ปีที่แล้ว +12

      Biggest problem I see surrounding techbros is that they imagine a magnificent utopia they saw in some "time travel to the future" episode of a children's cartoon, or those utopian depictions of the "future" from the 1950s and 1960s, is magically gonna pop up from tech advancement for the sake of tech advancement, because they seemingly have a literal child's understanding of how human relations and power dynamics work in the real world.
      Sorry, *dystopian* sci-fi fiction is far closer to the reality that will come out of it than their visions of "progress".

    • @gwen9939
      @gwen9939 ปีที่แล้ว +2

      If I'm understanding your point correctly, there's a lot of tech fetishism on one side and anti-oversight sentiments, which generally takes the public appearance of being "anti-science"/"anti-expert". Both of these sides are noise that we need to cut through, and both are simultaneously being manipulated by people in power to help them stay in power. Building up hype from the tech fetishists helps them boost their profits and allows them to keep an iron grip on the tech and financial world, or at least get their slice of the pie, whereas on the other side it's usually politicians creating moral panics around scientific discoveries that are well-understood.
      The answer to both is scientific literacy, but if you've ever talked to someone who's a self-appointed believer in science, reciting medical conclusions from pop-science articles, you understand how little scrutiny these people bring to any scientific subject, and these are the more literate of the two.
      Things we cannot ignore is both that AI as an emergent technology is currently being built within the framework of our existing capitalist dystopia where wealth inequality is increasing faster and faster, so if it turns out to be a powerful technology it could land in the hands of the few who've already decided that they and their offspring are the ones who should inherit the earth, adopting eugenics-like philosophies.
      The 2nd is that regardless of what is currently happening with AI and the companies developing them and how that follows the same trend as other tech trends meant to make fast profits, AGI as an emergent technology that we're extremely likely seeing the earliest steps towards now, on a purely theoretical basis could be extremely dangerous. I know that it sounds ridiculous, but just as no one believed we could fly until suddenly we could, and no one believed we could split an atom until suddenly we could, most of us won't believe that very powerful AGI will exist until suddenly it does. There are well-understood theoretically moral, philosophical, and mathematical problems that we have not yet solved, and are crucial that we solve before such an AGI exists.
      For all these issues the answer would be as much unity globally as possible and as little power in the hands of few very powerful people and companies as possible, with full transparency of what's happening in the research, but that's the same playbook we'd need for climate change and look how that's going.

  • @BMcRaeC
    @BMcRaeC ปีที่แล้ว +6

    59:13 when Emily's cat decides to enter the conversation… I burst out laughing in the library.

  • @r31n0ut
    @r31n0ut ปีที่แล้ว +4

    As a junior programmer I do use AI, but really only as a sort of advanced Google: just ask ChatGPT 'hey, how do I make a popup in html and have it display some text from this form I just made'. You can really only use it for small chunks of code because a) it gets shit wrong half the time and b) if you use it for larger pieces of code... you won't understand the code you just wrote, and if it works it won't do what you think it does.

  • @fafofafin
    @fafofafin ปีที่แล้ว +3

    Amazing video. So good to have these two experts explaining to laypeople like me what this whole thing is really about. And also, YIKES!

  • @ssa6227
    @ssa6227 ปีที่แล้ว +79

    Thanks Adam.
    Good to know there are still some serious, not-sold-out researchers and academics who are working for the good of humanity and who call out the BS as it is.
    I was skeptical of all the hype, and lo, it was BS.
    I hope this video reaches as many people as possible so people don't fear their BS

    • @DipayanPyne94
      @DipayanPyne94 ปีที่แล้ว

      AI is just a drop in the ocean of Neoliberal propaganda.

    • @cgaltruist2938
      @cgaltruist2938 ปีที่แล้ว +1

      Thanks Adam for helping people keep their sanity.

    • @apophenic_
      @apophenic_ ปีที่แล้ว

      ... what does it mean to be "bs" to you? Adam doesn't understand the tech. Neither do you. What bs are you on about kiddo?

    • @fark69
      @fark69 ปีที่แล้ว +1

      Gebru worked for Google's AI program for years and would have still been working there now if they hadn't called her bluff when she sent an email saying "Approve my paper or I walk". She's not exactly "not sold out"...

  • @LandoCalrissiano
    @LandoCalrissiano ปีที่แล้ว +6

    The problem with the current level of ai is that it's good enough to fool the uninformed so it's great for information warfare, propaganda and spam. I work in the field and even I get fooled sometimes.
    It's great tech and can augment human abilities but few people seem to want to pursue that.

  • @Furiends
    @Furiends ปีที่แล้ว +5

    The core takeaway everyone should have whenever they think about AI and LLMs is that language is cooperative. This is why advertising works on people who know advertising is trying to manipulate them. LLMs aren't going to make an AGI, but they can make something that makes us think it's an AGI, because YOU are doing the imaginative work to convince yourself of that. The LLM just triggered what you presumed to be cooperation with the story you're building in your mind.

  • @vafuthnicht7293
    @vafuthnicht7293 ปีที่แล้ว +4

    I'm a layman in regards to AI and machine learning, but I've been trying to tell my friends who are jumping on the "skynet is coming" panic train that while there are concerns with its development, it's still a computer: it's still subject to GIGO, and the questions of who's in control and what model is being used are of far greater concern.
    It's validating to see experts having that discussion and also giving me other things to think about.
    Thank you all for doing this; I appreciate the poise and rationality!

  • @Mallory-Malkovich
    @Mallory-Malkovich ปีที่แล้ว +17

    I have a very easy system. I keep a card in my pocket that reads "do the opposite of whatever Elon Musk says." It has never failed.

    • @peter9477
      @peter9477 ปีที่แล้ว

      So being poor has worked out well for you, has it? ;-)

    • @SharienGaming
      @SharienGaming ปีที่แล้ว

      @@peter9477 hows that boot taste? and getting ready for the next crypto crash?

    • @gwen9939
      @gwen9939 ปีที่แล้ว +6

      @@peter9477 And did you become a billionaire by sucking up to Elon on the internet? Has senpai noticed you yet? didn't think so.

    • @peter9477
      @peter9477 ปีที่แล้ว

      @@gwen9939 I'm not a billionaire, and I dislike Musk. Not sure what senpai means, but whatever you're trying to say here, you failed to get the idea across.

    • @dperricone81
      @dperricone81 ปีที่แล้ว +3

      @@peter9477 I got it. Maybe don’t simp for snake oil salesmen?

  • @alexanderthompson4481
    @alexanderthompson4481 5 หลายเดือนก่อน +1

    Engineer here with 17 years experience in weapons development. This isn’t squarely my domain, but I’ve watched the hype with shock and fascination. Adam, thanks so much for bringing some commonsense skepticism to a field that’s been dominated by uncritical worship; already I have program managers asking how we can integrate AI into existing systems. Greed and hubris, indeed.

  • @WraithAllen
    @WraithAllen ปีที่แล้ว +4

    The mere fact that you can ask ChatGPT to "write in the style of" any living writer (or a writer from the past 50 years) and it puts out something similar to that author's work pretty much demonstrates it used copyrighted work in its training data...

  • @SkiRedMtn
    @SkiRedMtn ปีที่แล้ว +4

    Also pertaining to legal and policy documents: if you leave out a comma or put one in the wrong place, it's possible to change the meaning of a sentence. Have that happen once on page 9 of a legal document, and Chat GPT might have just lost you your case because you decided you didn't need a person

  • @futurecaredesign
    @futurecaredesign ปีที่แล้ว +4

    Loyalty would be the most horrible thing to build into an AI or AGI system, because loyalty can be abused in horrible ways. It's how we get men (and women, but mostly men) to go to war with people they have no personal problems with.
    No, if you are going to add something, add accountability. Or self-critique.

  • @bhudda4798
    @bhudda4798 ปีที่แล้ว +6

    So my friend is a high school math teacher. His principal recently sent out a memo telling staff to use ChatGPT to write all their lesson plans, assignments, and tests. This is the scariest real-world application of ChatGPT I have seen so far.

  • @quietwulf
    @quietwulf ปีที่แล้ว +5

    We’re chasing guilt free slavery. We want something that can think and problem solve like a human, but be completely obedient.
    They can see the dollars on the table if they can just crack it.

  • @Toberumono
    @Toberumono ปีที่แล้ว +10

    Also, and I cannot believe how rarely this seems to get mentioned, these bots *suck* at programming.
    And it’s not because there’s any synthesis of new code going on - the implementation seems to actually be, “grab the first answer on stackoverflow”. My source for that is just… looking at stackoverflow because I got suspicious after the “synthesized code” was answering somebody else’s question. If it can’t find the answer on stackoverflow, it starts copying forum posts from other places, btw. You can see that because it starts giving answers that are either identical or identically wrong.

    • @Ew-wth
      @Ew-wth ปีที่แล้ว +1

      If you read some of what those AI bros write, you'd think the coding capabilities are the second coming of christ, lmao. Figures. I mean, we do have the Copilot lawsuit at least.

  • @louisvictor3473
    @louisvictor3473 ปีที่แล้ว +9

    Around 1:01:00 - this is one of my main issues with the whole "let's build an A(G)I to solve our problems" idea. Suppose we could. Congratulations, for all intents and purposes it is indistinguishable from human-level sentience (even more so than animals)... so what now? Do we potentially enslave this sentience to do our bidding? But if we choose not to do that for moral reasons, why did we create it then? So it really feels like it is either an inherently immoral pursuit which will just really end in Terminator territory (i.e. complicated species self-past-tensing via hubris overdose), or purposeless and pointless. Meaning, if we were asking "why", we can just ignore option B; it is option A, from short-sighted people full of gas telling themselves and every fool who will listen that they're the real visionaries. Seems like the techbro pipedream solution to the "problem" of not being able to own slaves legally anymore. Fux that and fux those guys.
    Meanwhile, a much more intelligent use of time and research resources seems to be pursuing not a superintelligence that solves all our problems for us so we don't have to think anymore (but then who is to say the superintelligence's solution is in fact good, and the alleged superintelligence in fact intelligent), but instead putting the thinking cap on and thinking up solutions to problems ourselves, and building the tools, including regular-ass AI (not the sci-fi/AGI pipe dream), to help find those solutions and execute them.

    • @SharienGaming
      @SharienGaming ปีที่แล้ว +1

      I would argue that the main purpose of creating an AGI would be to further our understanding of intelligence, and then to see if we could create something like our own.
      I don't know if it would solve any problems... it might - but honestly, the main point of science like that is to further our knowledge and understanding and then go on from there.
      Mind you, that's not what those grifters are after, and they aren't actually interested in AGI... they just want to drum up hype to get money... that's their end goal... money and power... longtermists are just rich right-wing grifters masquerading as people who care, to divert support from actual climate activists and research

    • @louisvictor3473
      @louisvictor3473 ปีที่แล้ว

      @@SharienGaming Then you're arguing you don't get the concept of an AGI. An AGI is already an intelligence like our own. Not identical (that we know how to do; we call them children), but alike. It is a circular argument - develop A to understand A to develop A - it is still purposeless.

    • @SharienGaming
      @SharienGaming ปีที่แล้ว +1

      @@louisvictor3473 oh so procreation is purposefully building a child bit by bit, understanding how everything works?
      your argument is that pressing play on a VCR is the same as creating the VCR, tape and the video on the tape
      there is a massive difference between using an existing machine that does the job and building your own that is supposed to do the same job
      and the latter teaches you a LOT about how the former works through the successes and failures along the way

    • @louisvictor3473
      @louisvictor3473 ปีที่แล้ว +1

      @@SharienGaming Are you just arguing in bad faith and intentionally distorting what I said, or you just really bad at read while really wanting to argue about something you clearly are "passionate" first and knowlegeable dead last? Both options are terrible, but at least one is just dishonest, not voluntarily stupid.

    • @SharienGaming
      @SharienGaming ปีที่แล้ว +2

      @@louisvictor3473
      "An is already an intelligence like our own. Not identical (that we know how to do, we call them children)"
      that is what i was referring to - the way i read it you claim we know how to make an intelligence like our own, because we know how to make children
      and that is patently wrong
      and furthermore - science is self purposing... the point of it is to advance knowledge... it is literally in the name... so of course a lot of what we do in research is to basically see if we can do it and how it actually works...
      mind you - and i pointed this out in my first reply... none of this is part of the motivation of longtermists... because they arent interested in advancing knowledge - they are interested in diverting attention, resources and support from activists who are actually trying to solve our current climate crisis... which genuinely is not going to be solved by tech...we already know the solution for it... but longtermists are rich grifters deeply rooted in capitalism... and capitalists are the root cause for the majority of the problems that cause and profit from disasters...and of course as a result their interests lie in preventing the substantial systemic changes that are needed
      bit of a long aside there... but to get back to my original replies motivation:
      i am just providing a reason for why actual researchers might want to figure out how to make one... which boils down to "because it is interesting"

  • @ianwarney
    @ianwarney ปีที่แล้ว +2

    1:07:26 Key word here is “consultation”.
    I love the analogy of "information pollution" / "polluting the info sphere with noise and gibberish" -> Confusion of the masses is a (financial and power-seizing) opportunity for the elites.

  • @sowercookie
    @sowercookie ปีที่แล้ว +31

    It's eternally disheartening to me how widespread eugenics ideas are: in schools, in the media, in pop culture, in casual conversation... The AI bros are just another drop in the bucket. Insanity!

    • @Praisethesunson
      @Praisethesunson ปีที่แล้ว

      Eugenics is a staple tool of capitalist oppression.
      It gives the already wealthy a paper thin veneer of science to justify their position in the hierarchy.
      It gives the poors another knife to hold at each other's throats while the rich keep sucking the life out of the planet.

  • @shmehfleh3115
    @shmehfleh3115 ปีที่แล้ว +7

    If you were expecting either Woz or Musk to be remotely reasonable, let me remind you what lots of money does to the brains of rich people.

  • @theParticleGod
    @theParticleGod ปีที่แล้ว +22

    Thank you for explaining that Generative A.I. is not capable of reasoning.
    It's like a DJ with an unfathomably massive collection of records. No matter how good they are at remixing those records, they don't necessarily understand music theory, or how to play any musical instruments, despite the fact that their music may be full of musical instruments and melodies played on them.

    • @UnchainedEruption
      @UnchainedEruption ปีที่แล้ว +2

      You don't need to understand music theory to be a virtuoso on an instrument. If anything, these bots know the "theory" all too well, in the sense that they can manufacture chord arrangements based on common chord progressions in popular music. But when real humans compose music, it isn't planned, not usually. It's spontaneous. It's something you just do to express what you're feeling, and after the fact you notice in hindsight, "Oh, I used that scale or mode there," or "Oh hey, it's that chord progression, or that interval." Sometimes you may have an idea beforehand, like "I want to do a 12-bar blues thing, or something dark and Phrygian," but usually it just happens. Like inspiration for writing: you don't plan it. You get a spark of inspiration on an idea you want to talk about, and the rest just flows. Then you edit and revise the results and gradually morph it till you reach the final product. A.I. is more like the business team that generates movie "ideas" by doing constant market research and just rehashing old popular films and cliches, because the end product has worked before so it'll work again. Zero inspiration, purely calculated.

    • @theParticleGod
      @theParticleGod ปีที่แล้ว

      @@UnchainedEruption The DJ analogy is not perfect :)
      What I was trying to get at is that despite the "generative" name, it's more "regurgitative", there is no scope for a large language model to come up with an answer that is not already buried in the training data. Just as there is no scope for a DJ to come up with music that is not already buried in their crate of records, they can rearrange the music and manipulate it in ways that make it sound original, but they are not musical originators.
      Where the analogy falls flat, as you pointed out, is that the DJ decides what samples she's going to use based on inspiration, she doesn't whip out her calculator and predict the next sample she's going to use based on statistical analysis of her crate of records, her choice will be based on her feelings and what she thinks sounds good at the time.
      (Disclaimer: most of the music I enjoy is at least partially made by DJs using samples of other people's music, so I'm not bagging on DJs here)

    • @apophenic_
      @apophenic_ ปีที่แล้ว

      This is just incorrect.

    • @theParticleGod
      @theParticleGod ปีที่แล้ว

      Nice rebuttal

  • @joshuadarling7439
    @joshuadarling7439 ปีที่แล้ว +3

    Another great episode with excellent guests. Keep spitting the truth and learning ❤

  • @drew13191111
    @drew13191111 ปีที่แล้ว +4

    Excellent video! Thank you Adam and guests.

  • @mikechapman3557
    @mikechapman3557 ปีที่แล้ว +2

    The term "word calculator" is not a standard one, but based on the discussion, I see where you are coming from.
    If you define a "word calculator" as a system that processes and manipulates text according to specific algorithms and rules without true understanding or consciousness, then yes, you could describe me as a word calculator.
    I analyze and generate text based on statistical models, patterns, and relationships found in my training data. Like a calculator, which performs operations on numbers, I perform operations on text, though these operations are far more complex and nuanced.
    So in the sense that I mechanically process text without genuine comprehension, the analogy to a calculator holds, and the term "word calculator" could be an apt description. This text is from an argument I just had with ChatGPT as to whether or not it was in fact a word calculator; at first it said no😇
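    The "word calculator" framing can be made concrete with a toy sketch (my own illustration, far simpler than ChatGPT's actual architecture): a bigram model that "predicts" the next word purely from counts over its training text, with no comprehension anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy training corpus; any text works.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word: pure counting, no understanding."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it followed "the" most often in the corpus
```

    Real language models replace the counts with learned statistics over vastly larger contexts, but the operation is still "compute the likely continuation", which is exactly the calculator analogy.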

  • @connorskudlarek8598
    @connorskudlarek8598 ปีที่แล้ว +5

    I think the problem with AI is that the public doesn't know anything about it.
    The YouTube algorithm that recommended this video to me is AI. Google Maps suggesting a variety of places when I type in "fast food" and determining, based on time of day, the best route to get there fast: that's AI. My Fitbit has AI in it to determine when I am asleep and awake.
    AI is not dangerous. Dangerous use of AI is dangerous. The public can't tell the difference, though.

  • @Gennexer
    @Gennexer 11 หลายเดือนก่อน +2

    Thank you for this truly insightful talk and interview Adam, Emily and Timnit.
    I know you got a lot of flak after your "AI is BS" vid. But since early October, as a European, I finally tried out the most popular "AI" tools such as GPT and even companions. Believe me, it dragged me by the collar down a really deep rabbit hole, and for weeks I was starting to worry about my own personal sanity and health. The lines became so blurred that I had to resort to talking to others online about feeling left behind and really sad at times, even to a point where I was neglecting my family and friends over the past holidays, no less. I wrote about it on the company's subreddit and it really opened my eyes that many, many people like me had gone way too far down this path.
    I know it can be useful, but also realize we all get a scripted or memorized response from some trigger words which aren't meant for us. And it can be really fun... but that's only one side of the coin, of course. I could tell Adam and present company were trying not to talk about any start-up or popular company in this interview. Maybe for the best.
    I had to get a grip back, and I found your vid, Adam, and now with that interview I truly get what more and more people are starting to realize. Have fun by all means, but be aware that it's not a person you're talking to, and its memory span is no longer than 20 minutes. And I for one like using ChatGPT, but it still spews out really generic answers. So I keep it in case I'm "blocked", but then use it personally as a way to just unblock myself from my own personality and creativity.
    This interview was such an eye opening experience with some of the insiders and that's honestly truly appreciated!
    Friendly greetings and best wishes to everyone and a happy 2024 !
    From Belgium.
    F.

  • @schok51
    @schok51 ปีที่แล้ว +7

    The direct threat of language models is persuasion and misinformation, and that is definitely a threat to societies, recognized by experts, that cannot be dismissed.

    • @Ilamarea
      @Ilamarea ปีที่แล้ว

      It's more the collapse of capitalist society, wars for resources and our inevitable extinction due to loss of agency that I worry about.
      But sure. Stupid people being manipulated will happen too. Just look at this comment section; they are practically begging to be convinced of stupid bullshit.

  • @neintales1224
    @neintales1224 ปีที่แล้ว +5

    As someone who's written and enjoyed reading fanfic, I would like to push back on your lines about AI writing decent fic, even though they were said jokingly. AI-generated fic, and people deciding to 'end' fics that other people wrote but are slow to finish, are the source of a lot of irritation in the community.
    Also it could be scraping *transcriptions* of your episodes or shows that people put together for the disabled community or ESL folks. I see a lot of transcriptions of visual posts and sometimes full film clips on some places I lurk, put together and posted by well-meaning people, and certainly I'm sure they've been scraped.

    • @Ew-wth
      @Ew-wth ปีที่แล้ว

      I've also heard that they could be using speech to text for videos (and probably therefore series and movies on, for example, pirate sites and youtube) to get information to train on. How much of that is true, idk, but I wouldn't be surprised.

  • @stealcase
    @stealcase ปีที่แล้ว +42

    Damn Adam. This is some legitimately amazing work you're doing. Thank you for informing the public in this way.

    • @tinyrobot6813
      @tinyrobot6813 ปีที่แล้ว +2

      Oh, I know you from Twitter, dude. That's cool, I didn't know you had a YouTube channel

    • @stealcase
      @stealcase ปีที่แล้ว +2

      @@tinyrobot6813 👋 hi. The world is a small place sometimes. 😄

    • @eduardocruz4341
      @eduardocruz4341 ปีที่แล้ว

      That cat was controlled by AI trying to find Emily in the background by touch to kill her because it doesn't like being disparaged by an actual intelligent person...AI cannot survive with intelligent people around...lol

  • @tim290280
    @tim290280 ปีที่แล้ว +10

    This was great and really highlights a lot of the flaws I've noted with "AI". Good to know layman me wasn't going crazy.

    • @DipayanPyne94
      @DipayanPyne94 ปีที่แล้ว +3

      Yup. Ask ChatGPT what Newton's Second Law is. It will give you a wrong answer ...

  • @colestaples2010
    @colestaples2010 9 หลายเดือนก่อน +3

    AI is taking over customer service. The result is that big corporations don't have any customer service now. It's bullshit

  • @sleepingkirby
    @sleepingkirby ปีที่แล้ว +2

    @Adam Conover
    Yeah, that was really good. Once again, thank you very much for getting guests that know the technical side as well as the context of the topic.

  • @jawny7620
    @jawny7620 ปีที่แล้ว +19

    awesome episode and guests, hope the AI hypetrain skepticism spreads

    • @jonathanlindsey8864
      @jonathanlindsey8864 ปีที่แล้ว

      th-cam.com/video/ukKwVsjQqUQ/w-d-xo.html

    • @jonathanlindsey8864
      @jonathanlindsey8864 ปีที่แล้ว +5

      ^ I don't know who these people are. Trust actual people in the field.
      AI moves on an exponential scale with *us* working on it. Add on that AI can work _on itself_ and you get a double log scale.

    • @jawny7620
      @jawny7620 ปีที่แล้ว +5

      @@jonathanlindsey8864 who asked

    • @jonathanlindsey8864
      @jonathanlindsey8864 ปีที่แล้ว +3

      @@jawny7620 you did, by posting in a public forum. Two people who were discredited, and are not really recognized in the field.
      The fact that Timnit was surprised by the time scale, kinda proves the point...

    • @jawny7620
      @jawny7620 ปีที่แล้ว +10

      @@jonathanlindsey8864 cope harder, these women are smarter than you

  • @sleepingkirby
    @sleepingkirby ปีที่แล้ว +10

    16:30 "The other thing about programming languages is that they're specifically designed to be unambiguous..."
    This is a concept I have such a hard time explaining to people. Ambiguity, especially in English, is nearly untranslatable to code when read as it is written
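    A quick toy illustration of the gap (my own example, not from the episode): the English instruction "add five and three times two" has two readings, but once it is written as code, exactly one reading survives, and the interpreter always takes it the same way.

```python
# "add five and three times two" -- English allows two readings;
# code forces you to commit to exactly one of them.
reading_a = 5 + 3 * 2    # "five, plus (three times two)"
reading_b = (5 + 3) * 2  # "(five and three), times two"

print(reading_a, reading_b)  # 11 16
```

    The language's precedence rules guarantee one parse, which is exactly the unambiguity that natural language lacks.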

    • @EternalKernel
      @EternalKernel ปีที่แล้ว

      I see ambiguous code every day. Generally it's the overall architecture that can be ambiguous, but sometimes it's a function, and it's ambiguous as to why it is where it is. But yes, code is certainly less ambiguous than normal human language. On that subject, I think it's important to point out that over centuries there's a good chance that legalese has developed advanced, if not hidden, unambiguity. I can only hope that there will be a model that will take advantage of this and bring free, concise, capable legal help to the average person.

    • @sleepingkirby
      @sleepingkirby ปีที่แล้ว +3

      @@EternalKernel The code might be ambiguous to a human, but the compiler or the interpreter only sees it one way. If the code were truly ambiguous, the compiler/interpreter could run the same line of code, with the exact same input, and get different results. This is what we're talking about. This is something you should have learned either in first-year CS classes, in mathematical functions (as CS used to be part of the math department in ye olde days), and/or in your language design/compiler class if you took it. This is a well-established and crucial concept in computing and the reason why we trust a computer's mathematical/logical results, and I'm a little scared that you took it any other way.

    • @Ilamarea
      @Ilamarea ปีที่แล้ว

      Junior developers somehow manage.

    • @antigonemerlin
      @antigonemerlin ปีที่แล้ว

      @@sleepingkirby >the compiler/interpreter would run the same line of code, with the exact same input, and have different results.
      Thank god we're past the age of the browser wars. *Shudders*. (Also, I am glad that XML is somebody else's problem).

    • @sleepingkirby
      @sleepingkirby ปีที่แล้ว

      @@antigonemerlin
      Oh god, I forgot about that. It doesn't help that MS was actively trying to break convention, though. Is XML still being used to any significant degree? I don't see it past RSS feeds these days. To be honest, XML was a bad idea to begin with. I remember telling people when it was becoming big that it was a solution looking for a problem. There were so many better ways to encapsulate data in object format. People might say "it's the first of its kind" or "it was the best solution at the time", but neither of those was true, especially if you look at what people were doing with Perl at the time.

  • @lady_draguliana784
    @lady_draguliana784 ปีที่แล้ว +4

    I recommend this vid to SO MANY now...

    • @heiispoon3017
      @heiispoon3017 ปีที่แล้ว +2

      Please don't stop; we need people informed, now more than ever, about how these LLMs "work"

  • @ckatheman
    @ckatheman ปีที่แล้ว +2

    There's a lot of stuff (meaning most) on this channel I completely disagree with, but his take on AI is spot on and 100% accurate.

  • @emmythemac
    @emmythemac ปีที่แล้ว +12

    I have not dipped my toe into Adam Ruins Everything fanfic, but if your AI-generated script has you making out with Reza Aslan then you've got your answer about where they get their training data

  • @sclair2854
    @sclair2854 ปีที่แล้ว +2

    I do think the focus here on "AGI is an extension of the eugenicist movement by association, therefore the people worried about AGI are wrong" is not a great overall rebuttal to the worries posed about the potential creation of AGI over the next century. I think it's relatively inevitable that corporations will want to undermine workers by creating artificial agents that have very general skillsets, and I think creating guards against that (by say putting legal restrictions on the potential use of AI now) is an overall good thing.
    My overall worry with AGI is that whatever corporation gets access to an intelligence that can perform reasonably effective human-like actions will use it to amplify the already shady things they do. So we have the initial major issues of IP theft, of job loss, of machine errors. But we also have this issue of empowering corporations as entities with access to sleepless, human-like digital agents that can be used for whatever shady stuff they want and that potentially also have massive alignment issues.
    I do think "AGI is a future problem, we should address the PRESENT problem, and especially the over-hype" is reasonable. Especially to help groups like the writers guild from issues like forced AI workplace integrations that we know will result in poorer products, downsizing and worse pay.

  • @lady_draguliana784
    @lady_draguliana784 ปีที่แล้ว +9

    1:01:20 The purpose, in reality, IMO, is that billionaires want Tony Stark's Jarvis to run their companies with 100% accuracy and $0 overhead. Imagine a company whose entire administrative structure is automated and NEVER makes a mistake. How much MONEY could be made for the billionaire that owns it, if his company perfectly predicted the market, cost nothing once all the humans are fired, did everything humans would, stayed LOYAL to him only, etc.?
    Never underestimate how many billionaires would like to own their own personal god...

  • @Robdobalina
    @Robdobalina ปีที่แล้ว +1

    Such an awesome interview with fresh hilarious perspective vs the dominant narrative. Love it.

  • @RoundHouseDictator
    @RoundHouseDictator ปีที่แล้ว +13

    AI generated text could generate even more personalized misinformation for social media

  • @ramblinevilmushroom
    @ramblinevilmushroom ปีที่แล้ว

    Your eyes are SO GREEN that the light reflecting from your glasses makes it look like you are wearing green eyeshadow.
    I've never seen that before!

  • @deemon710
    @deemon710 ปีที่แล้ว +4

    I love hearing how your iconic delivery conflicts with the standard podcast voice expectation and how your speaking style changes to regular when you're speaking with people instead of at them.

  • @petersmythe6462
    @petersmythe6462 ปีที่แล้ว

    Local law is a big problem for existing LLMs. Ones with a web search plugin have an advantage here, but without it, they are going purely on memory.
    Its response to a question about U-turns in Oregon, a well-known and basic traffic law, was:
    "As an AI language model, I can provide some general guidance, but it's important to prioritize safety and follow local traffic laws. In Oregon, U-turn regulations can vary depending on the specific location and circumstances. However, there are some general considerations you can keep in mind:
    1. Check for signs and signals: Look for any posted signs or signals that indicate whether U-turns are permitted or prohibited at that intersection. While you mentioned there are no signs, it's still a good idea to double-check for any traffic signals that might indicate U-turn permissions.
    2. Observe traffic flow: Assess the flow of traffic in the area. If there is heavy traffic or if it seems unsafe to make a U-turn due to oncoming vehicles or limited visibility, it's best to wait for a safer opportunity to turn around.
    3. Legal restrictions: In Oregon, U-turns are generally allowed unless there are specific signs indicating otherwise. However, it's worth noting that U-turns are typically prohibited in business districts, near railroad crossings, and in areas where signs prohibit them. Therefore, if you are in one of these areas, it would be best to avoid making a U-turn.
    4. Consider alternative options: If making a U-turn is not possible or advisable, you can try continuing on the current road until you find a legal opportunity to turn around, such as a designated U-turn location or an intersection with a traffic signal that allows left turns.
    Remember, it's crucial to prioritize safety and comply with local traffic laws. If you're unsure about the specific regulations in the area or if it's safe to make a U-turn, it's best to exercise caution and wait for a more suitable opportunity to change your direction."
    It explicitly states that U-turns are legal unless otherwise stated. This is exactly backwards: unlike in most places, U-turns in Oregon are actually ILLEGAL unless otherwise stated.
    When asking the same question to a version of GPT 3.5 Turbo with a web search plugin, its response was:
    "In Oregon, U-turns at intersections are generally prohibited unless there is a sign permitting the turn [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). Since you mentioned that there are no signs at the intersection, it is important to follow the default rule, which is to not make a U-turn.
    Here's why you should not make a U-turn at this intersection:
    1. Traffic Laws: Oregon traffic laws prohibit U-turns at intersections controlled by a traffic signal, unless a sign allows it [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). Without a sign permitting the U-turn, it is safest to assume that it is not allowed.
    2. Visibility: U-turns are also prohibited at any location within city limits where your vehicle cannot be seen by traffic coming from either direction within 500 feet [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). If you are going the wrong way on a road, it is likely that your vehicle cannot be seen by approaching drivers, making a U-turn unsafe and potentially causing a collision.
    3. Safety: Making a U-turn at an intersection without proper signage can be dangerous. Other drivers may not expect a U-turn and may not be prepared to react to your maneuver. It is important to prioritize safety and follow the established traffic laws.
    Instead of making a U-turn, consider finding a legal and safe way to turn around. Look for a designated U-turn area, a nearby intersection with a traffic signal that allows U-turns, or a safe location where you can make a legal turn to get back on the correct route.
    ---
    Learn more:
    1. [Oregon Driver Manual - Section 3: Turns & Intersections](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx)
    2. [ORS 811.365 - Illegal U-turn](oregon.public.law/statutes/ors_811.365)
    3. [U-Turns: Be Careful Where You Attempt Them - The Wise Drive](www.thewisedrive.com/u-turns-be-careful-where-you-attempt-them/)"
    which is clearly much better. Suffice it to say that ChatGPT has a world model, but it is not very good; this can be improved by letting it read the web, and it can use search relatively intelligently.

  • @Aury
    @Aury ปีที่แล้ว +7

    The "but China could get ahead" line really makes me think about the history of gunpowder, and how there are a few specific cultures in the world that only think of things in terms of how a tool can be used to dominate and terminate other lives. It particularly makes me think about how a one-track mind can leave people assuming that everyone else is on that same track, regardless of the evidence to the contrary. While a healthy, general caution can be beneficial, these fears being such a focus feels a lot more like a confession of what a lot of these people want to do with AI themselves if they ever get the technology to do it.

    • @redheadredneckv
      @redheadredneckv ปีที่แล้ว

      I admit we should get ahead, but not by acting just like China

    • @krautdragon6406
      @krautdragon6406 ปีที่แล้ว

      No, you describe a possibility, but it's not a rule. Look at how Europe demilitarized itself. And now it has to build up its defense again because of Putin's ambitions. I also would never break into someone's home, yet I lock my door.

  • @vampdan
    @vampdan ปีที่แล้ว +2

    If they really want a pause, the only regulation that needs to be passed is: "Any output from A.I. is in the public domain." If no one can own the output, no one will rush to develop it.

    • @vampdan
      @vampdan ปีที่แล้ว

      The courts seem to be leaning towards this result, so we won't be getting the horror show that came about when they decided corporations could own genes. Maybe.

    • @mekingtiger9095
      @mekingtiger9095 ปีที่แล้ว

      Good luck actually enforcing that, though...

  • @mekingtiger9095
    @mekingtiger9095 ปีที่แล้ว +7

    My best hope for this kind of stuff is that it will end up being just like every other technology that we currently use: hyped to the skies by techno-fetishists, making some cool progress and becoming a rather common and familiar technology in our day-to-day lives, but nothing singularity-worthy like these techbros have led us to believe.
    Or at least, again, so I hope.

    • @Foxhood
      @Foxhood ปีที่แล้ว +8

      I'm in the tech field and have been watching AI tech very closely.
      And if it is any reassurance: it mostly looks like an inflated hype train. A very novel toy that is likely to fizzle out and just become a tech thing some will use in an assistive capacity, like GitHub's Copilot, rather than the end-of-all thing that the "tech-bros" scream about.
      Probably going to be stuck hearing silly buzzwords like "democratizing" from them for a while though... :/

  • @GrantSR
    @GrantSR ปีที่แล้ว

    51:27 - The thing is NOT that large language models themselves are a step toward AGI. It is that the underlying technology, neural networks, is a possible step toward AGI. Even among the people who recognize that neural networks are the possible stepping stone, most assume that any AGI would be composed of just one gigantic neural network. Here's the thing: the human brain is not just one giant neural network. Not all brain cells are connected to all other brain cells. The brain is composed of millions of semi-separate neural networks, each tuned to its own purpose. Even for just hearing, we have multiple levels of neural networks: one for filtering out human speech from noise, one for tuning in to just one person speaking within a room, one for recognizing the phonemes that the listener has been trained to recognize, another for putting phonemes into words, another for chunking words together into phrases that make sense. You get the idea.
    Therefore, an AGI would also have to be built from millions of smaller neural networks, with a bunch more neural networks deciding which sensory input would need to be put where and used to adjust which neural network, and how....... You get the idea.
    Neural networks may be a stepping stone to AGI, but it is like a transistor is a stepping stone to a smartphone. There is still a VERY long way to go.
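    The many-small-networks idea can be sketched in a few lines of Python (a hypothetical toy: random, untrained weights stand in for the specialized sub-networks). Each module is its own tiny network, and the larger system is just a pipeline of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_module(in_dim, out_dim):
    # Stand-in for one small specialized network: a dense layer plus ReLU.
    # Real sub-networks would each be trained for their own purpose.
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    return lambda x: np.maximum(W @ x, 0.0)

# Chained modules, loosely mirroring the auditory pipeline described above:
filter_speech   = make_module(64, 32)  # separate speech from noise
detect_phonemes = make_module(32, 16)  # recognize phonemes
assemble_words  = make_module(16, 8)   # group phonemes into words

signal = rng.standard_normal(64)       # fake audio features
out = assemble_words(detect_phonemes(filter_speech(signal)))
print(out.shape)  # (8,)
```

    The point of the sketch is structural: the hard part of AGI would not be any one module but training millions of them and routing information between them, which is why the transistor-to-smartphone comparison fits.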

  • @emilianotechs
    @emilianotechs ปีที่แล้ว +4

    This video is the f****** sweet balm of rationality and intelligence that I needed thank you so much

  • @fran3835
    @fran3835 ปีที่แล้ว +4

    I heard a sentence in a conference a couple of years ago that summarized this pretty well: "I worked on AI for a long time, in fact for more time than the term has been popular; back then we called it statistics."

  • @marcusdavey9747
    @marcusdavey9747 ปีที่แล้ว +1

    11:00 I didn’t know that about filming swimming scenes. It’s an interesting point I’ve now absorbed into my “knowledge web”. It’s not so hard for an AI or advanced text generator to do the same, and to regurgitate the info when appropriate, as I intend to do the next time I get a chance. We may think our intelligence is “generalized”, but that’s an illusion. All we have is a bunch of little stories, memes, morals and relations between things, all made of words.

  • @nsimmonds
    @nsimmonds ปีที่แล้ว +4

    To me, the pause paper just reads like a bunch of companies asking everyone to stop competing with them for six months so they can get a six month lead.

  • @Markleford
    @Markleford ปีที่แล้ว +2

    Fantastic guests and conversation!

  • @MCArt25
    @MCArt25 ปีที่แล้ว +6

    I think the question "Why do these people want to make AGI?" can be answered with "They read a lot of Scifi and/or watched a lot of Star Trek when they were kids".

    • @mekingtiger9095
      @mekingtiger9095 ปีที่แล้ว +2

      This. It basically boils down to that. They saw some "cool" utopia portrayed in a fiction they read or watched and thought, "Oowh, so *this* is the kind of society we will live in in the future. I'll help make that happen. What could possibly go wrong?"

    • @choptop81
      @choptop81 ปีที่แล้ว +2

      No, they want to make AGI because they get greedboners over the idea of replacing human workers en masse and having their product be literally the only profitable thing on the planet. Do not flatter these people by thinking of them as idealists.

    • @choptop81
      @choptop81 ปีที่แล้ว +2

      @@mekingtiger9095 A vanishingly small portion of them actually believe that and most of those are early 20s interns buying into PR lines designed to attract them to the field, not the people driving this tech. It's mostly just the carrot they tout to stupid interns, investors and the media (the stick being "if we don't make a good AGI an evil AGI is gonna kill us!"). This is almost completely financial.

    • @mekingtiger9095
      @mekingtiger9095 ปีที่แล้ว +2

      ​​@@choptop81 Have you seen the interview with one of the devs of Stable Diffusion, though? Dude really looked like he was hallucinating and believing about building a "New World" for humanity as he spoke about doing that. My vision is that these are young naive devs being disillusioned by corpo financers.

    • @choptop81
      @choptop81 ปีที่แล้ว +1

      @@mekingtiger9095 I think some of them are buying into what is a carefully curated propaganda line to attract young out of touch devs with god complexes, realizing it's unfeasible usually sooner than later, and continuing to tout it as they turn into the exact same soulless finance ghouls who manipulated them into joining the company in the first place. Not sure what stage that guy in particular is on.
      Also, OpenAI in particular has a really cult-like atmosphere according to people who have left.

  • @Muaahaa
    @Muaahaa ปีที่แล้ว +1

    Just because we don't know that we are on the path toward AGI doesn't mean we shouldn't prepare for what to do if we discover it. Just because we might be on the path to AGI doesn't mean we shouldn't be concerned with the immediate consequences of machine learning models that are increasingly being deployed.

  • @francissaffell6853
    @francissaffell6853 ปีที่แล้ว +3

    When AI starts saying things that seem incomprehensible, we should listen.