Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3


Comments • 388

  • @harrison325 • 3 years ago +179

    We need Steven Pinker on the podcast again!!

    • @joshuasasfire2759 • 2 years ago +1

      Maybe he can be truthful about Epstein!

    • @alexl4342 • 2 years ago +1

      Yeah, I'd like to hear Pinker on Lex again

    • @alexl4342 • 2 years ago +2

      @@joshuasasfire2759 Watch Joe Rogan's latest interview with Pinker, they talk about Epstein

    • @auditoryproductions1831 • 1 year ago +1

      I'd like to see Pinker on Lex again

    • @PeterBaumgart1a • 1 year ago +1

      Especially post-GPT-4. (I don't think he'd need to revise a lot, if anything.)

  • @colemclain3563 • 4 years ago +95

    Let's take a moment to congratulate Lex on how far he's come since 2018. Congratulations, Lex!

  • @tanmayjoshi108 • 4 years ago +185

    Steven Pinker has a '17th century genius' look

    • @Mufassahehe • 1 year ago +2

      He looks like a philosophy professor

    • @MrSidney9 • 1 year ago +3

      @@Mufassahehe He is, in effect, a philosopher. He actually looks like Voltaire

  • @alicethornburgh7552 • 4 years ago +67

    Outline!
    0:00 the meaning of life
    3:40 biological vs artificial neural networks
    6:06 consciousness
    9:30 existential threat / risks
    34:12 books early on in your life that had a profound impact on the way you saw the world

  • @velvetsprinkles • 1 year ago +17

    This interview definitely needs an update! Please have Steven back on.

    • @connorkapooh2002 • 9 months ago +1

      pleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeease lexxx pleaaaaase plleeeeeaseee pretty pleaaaaaaaaaaaaaaaaaaaase i would love for dr pinker to come back pleaaase

  • @sirAlfred1888 • 4 years ago +81

    26:35 That aged well

  • @dylanhirsch-shell9977 • 4 years ago +26

    It's interesting to see the evolution of Lex as an interviewer. In this very early episode, he is clearly nervous, as evidenced by how quickly he asks his questions and his tendency to interrupt Pinker. I bet Pinker would be a really great guest to have on again, with a more experienced and comfortable Lex asking deeply probing questions that really get at the heart of how Pinker thinks and what he truly believes.

    • @jasonbernstein2 • 1 year ago +1

      YES!

    • @theresaamick • 1 year ago

      I definitely notice how irked Pinker becomes when interrupted. I also see that at times Lex totally did interrupt, but at other times he really wanted Pinker to explain in more depth what he had just flown by. I guess Lex could still have waited until he completed the thought. Then again, Pinker isn't asking Lex questions the way the first couple of guests did. There is a chance that Pinker wouldn't have stopped at all if he wasn't interrupted. Definitely an interesting observation of the nuances of communication.
      I was watching a new interview and noticed when Lex was not listening because he was waiting to ask his next question (once or twice only). I suppose that learning to communicate well is imperative but probably takes a lifetime. One thing about Pinker, though: you can't help but notice his irritation at feeling interrupted. 😊
      I just heard Lex say, "Now give me a chance here...", so perhaps he felt he couldn't finish either. Interesting!

  • @alexl4342 • 2 years ago +9

    26:28 "I personally don't find it fun to talk about General AI threats because it's a waste of time, we have real things to worry about like pandemics...". This was recorded a year and a half before Covid hit.

  • @WeirdGuy4928 • 6 years ago +38

    You need a second camera to point at you.

  • @ericzong1189 • 4 years ago +7

    I think the point of AGI is not to be explicitly programmed; therefore we cannot program safety measures.

  • @Sphere723 • 5 years ago +35

    13:52 I think Pinker is probably quite wrong here. The idea that nuclear fission could be used in a weapon was fairly obvious right after it was discovered. A back-of-the-envelope calculation would tell you that you can get a city-sized (or, in Einstein's words, a "whole port") explosion. It's hard to imagine a situation where this fact of nature is never acted upon by anyone anywhere.

    • @kdaleboley • 5 years ago +11

      And to expand: what if a collection of engineers, or even a few very smart ones, decided to create an A.I. with the specific goal of human destruction? For A.I. to be dangerous it is not required that we lose control. It could be very dangerous indeed while operating exactly as designed. How about that, Professor?

    • @georget5874 • 5 years ago +2

      Atom bombs weren't a very good example, because obviously the people building them knew they couldn't be detonated unless someone dropped them. But his argument really is that people wouldn't build something they know could end the world if they knew it couldn't be controlled, which isn't that unreasonable. The main point here, though, should be that there is a lot of hype in AI, which suits a lot of people in industry and academia because it helps with funding, and that backprop, which the current wave of hype is built on, was invented in the 1960s; there haven't been any fundamental new 'discoveries' since then. Improvements in AI have come about mostly through faster computers and larger sets of training data. Science barely has any understanding of how consciousness works in the human mind, let alone how we might copy it. We are a long, long way from rogue AIs taking over the world...

    • @onetwothree4148 • 4 years ago +2

      I think we have already seen that he is correct though. It is much easier to program AI to not harm humans than it is to do anything useful.

    • @2CSST2 • 2 years ago

      @@onetwothree4148 You're arguing about something different. It's possible that AI never ends up being problematic or harmful (and I don't think you're warranted in declaring him right about that, btw; we haven't actually built a superintelligent general AI yet), but the point Pinker was making here is that people would probably never have invented the nuclear bomb if it wasn't for the context of WW2. I also happen to think he's wrong: he gives as examples other potential superweapons that were never built, but the examples he gives are very elaborate things like creating earthquakes.
      The plain fact he's missing here is that everyone understands more readily the power and immediate threat of a huge explosion. So I think the reason the nuclear bomb was invented and other weapons weren't wasn't the advent of WW2 (although that sure sped up the process), but simply that having a much bigger bomb than any other in existence is more immediately appealing than trying some subtle environmental manipulation.

  • @AmericanFire33 • 6 years ago +11

    I'm a trucker, and I don't find it soul-deadening. I find it liberating. It would be great if I had a computer co-driver. That would make a lot of sense.

    • @ManicMindTrick • 6 years ago +7

      For a person with Steven's cognitive abilities and opportunities in life, being a truck driver will seem like mind-deadening work. But he is sitting in an ivory tower. For an average Joe those jobs can be both fulfilling and meaningful, and they bring a good income to a family. What are most of you truck drivers going to do when you are replaced completely by self-driving systems? Re-education to be a software engineer? Yeah, right. We will see extreme divides between the haves and the people who are becoming practically useless, and truckers only represent a small section of this new class of people made redundant by technology. I see great poverty, drug addiction and misery moving forward, while the elite hide away in decadence and luxury.

    • @rpcruz • 5 years ago

      @@ManicMindTrick Are there no things in your life you want to buy but are too expensive? Fancy cars, vacations, spa treatments, clothes, shoes, etc. Plenty of things that truck drivers can do for others, if driving trucks becomes a non-option.

  • @beatthebag • 6 years ago +24

    Pinker is one of the best teachers of our time.

  • @gaoxiaen1 • 2 years ago +7

    Bring back Steven Pinker. How can you do only a 37-minute interview with him? I've read all of his books. By the way, I still have a few of the Time-Life Science series books.

  • @hueydockens4415 • 5 years ago +5

    Mr. Steven Pinker! I think you are a genius, and I wish I could see all of your documentaries and lectures. You do have a wonderful mind. Love your work, Mr. Linguistics. Oh lordy, I'm 70 years young and never had the opportunity for school and college; I had to work my ass off. I'm so proud, and it's an honor to get to see you do your job. Thanks and infinite prayers.

  • @hongz1787 • 6 years ago +24

    "Perception of fear is driven by imagined ability, not by data"

    • @lukeb8045 • 4 years ago

      close: "Perception of fear is driven by imagined control, not by data"

  • @rahulvats95 • 1 year ago +1

    Books recommended by S. Pinker:
    1) The Beginning of Infinity by David Deutsch
    2) A History of Force by James L. Payne
    3) One, Two, Three... Infinity by George Gamow
    4) Time-Life Science Library series
    5) Reflections on Language by Noam Chomsky
    6) Ever Since Darwin by Stephen Jay Gould
    7) George Miller's books on language and communication

  • @antonrudenko8242 • 5 years ago +10

    I used to share Pinker's excitement for the elimination of "back-breaking"/difficult jobs using automation & AI. However, this approach seems to discount the notion that human beings (at least some) are "beasts of burden".
    I think of the movie "Only the Brave", where Miles Teller's character's only path out of addiction and delinquency was to take on one of those "back-breaking jobs" as a firefighter. So automating this type of job, while appealing on the surface, might be a disservice to people who require this hardship to maintain their physical and mental well-being, counter-intuitive as it may sound...

    • @onetwothree4148 • 4 years ago +1

      You can have my burden if you want it. I think we're a long way from healthy workloads, at least in my industry...

    • @martinguila • 2 years ago +3

      I think we all need meaning, something to do, some goal to strive for.
      When you don't have to work, that doesn't mean your only alternative is to be a couch potato. It's a similar situation to being financially independent, and those in that situation don't generally sit staring at a wall. Instead they pursue what they find meaningful, and since they do what they like, they may in many cases work even more than other people.

  • @JTheoryScience • 5 years ago +18

    I enjoyed the questions more than the answers, for some strange reason. Fridman has a nice interview style, almost like a one-side-rehearsed, unbiased discussion. I suppose it's designed this way for a more educational direction? I also like how Fridman will paraphrase as a way of gaining comprehension and confirming his understanding of Pinker's response; it also helps me gain an additional perspective on each point myself. I look forward to more in the future.

  • @Pmc07AyeUrDa • 4 years ago +3

    The problem is not with a runaway AI that could turn on its creators. The problem is whether the intentions of the creator are good or bad.

  • @BernardPech • 1 year ago

    I highly recommend all the books written by Pinker. He is a very clear thinker and a prime example of using reason and empirical data, rather than emotions, to understand the world and human nature.

  • @deeplearningpartnership • 6 years ago +33

    Great interview - but I would like to see a bit more of Lex, especially when he's asking his questions to Steve.

    • @VinetaAglisa • 5 years ago

      I agree 100%. His interruptions made me angry...

    • @VinetaAglisa • 5 years ago

      I meant the interviewee interruptions when Lex was asking questions.

  • @lorenzo-agnes • 1 year ago +1

    One of your best guests. Engaging and enlightening.

  • @thisarawt • 4 years ago +2

    Another good convo, ticking them all off one by one. It's fun to read the comments! Thanks Lex. Keep up the good work.

  • @Dondlo46 • 2 years ago +3

    His arguments on AI are the best ones I've heard: just be mindful of what you create. AI shouldn't really be a problem if you do it properly.

    • @jimwheely6710 • 1 year ago +1

      How do you suggest we do this?

  • @NewportSolar • 1 year ago

    Enjoying this video in 2023.
    Wow, the podcast has come a long way! Well done Lex. 👏

  • @yushauthuman2633 • 1 year ago +1

    I was reading something, ended up doing research on him, and here I am 😊😊 Thanks,
    you made my day.

  • @rodrigoff7456 • 1 year ago +2

    I'd love to hear a revisit on those topics with him, now that LLMs are taking over

  • @sainath66666 • 6 years ago +13

    Want more want more want more want more want more want more want more such videos
    Awesome man

  • @Girlintherocket • 6 years ago +8

    Steven Pinker is so funny. Love F, "as Steven Pinker said...based on my interpretation 20 years ago." Wonderful interview Lex!

  • @classickettlebell2035 • 5 years ago +21

    Pinker keeps saying don't build an evil system, but he forgets there are evil people out there who will!

    • @rpcruz • 5 years ago +1

      And those evil people will use AI safety why?

    • @classickettlebell2035 • 5 years ago +2

      Ricardo Cruz exactly!

    • @liquidzen906 • 4 years ago +8

      Also forgetting that engineers aren't always in charge; a government can force an engineer to make something.

    • @higgledypiggledycubledy8899 • 3 years ago +3

      What he doesn't get is that good people are not sure how to (or indeed if it's possible to) build a safe system...

  • @petropzqi • 1 year ago +1

    Watching this in 2023 when GPT is starting to show evidence of sparks is very entertaining.

  • @matthewrobinson710 • 6 years ago +3

    I also have deep respect for Steven Pinker and his overall message; however, I am not convinced he deeply understands the problems entailed in coding utility functions that account for any possible misalignment of values. It is like making wishes with the devil: if your phrasing is ever so slightly off, unintended consequences could follow. It could be that the only way to get the specific phrasing right is to foresee ALL possible consequences.

  • @dscott333 • 1 year ago +1

    Going through all of Lex's back catalog now..

  • @MG-qh6wr • 1 year ago

    Really enjoyed Enlightenment Now. Hope you get Steven back on in the near future.

  • @jasonsomers8224 • 2 years ago +2

    I found Steven Pinker independently. Super excited to see he has been on your podcast.

  • @smallprion1256 • 4 years ago +1

    I love Lex's opening question!

  • @justinwatkins438 • 5 years ago +7

    I am more concerned with the goals of the creator than the creation...bro!

  • @PClanner • 6 years ago +7

    I would like to add to the advice given to you concerning AI destruction of the human race: if you do not adequately scope out ALL parameters and then oversee ALL outcomes, sloppy preparation will deliver a questionable product.

  • @luckybrandon • 6 years ago +5

    Finally, someone speaking some sense about the ridiculous hypothetical paperclip threat of AI. He beautifully articulates my thought that an AGI wouldn't be very intelligent at all if it were to blindly turn the universe into paperclips. I realize this is a hypothetical scenario intended to make a point, which is why I enjoyed hearing this discussion applied to other scenarios, like finding the shortest route that doesn't involve mowing down pedestrians... Great discussion, thanks for sharing.

    • @OriginalMindTrick • 6 years ago +1

      I feel this is a type of failure of human intuition. We are born into a world where intelligence and wisdom might not be exactly perfectly overlapping but that those at least are tied together. We fail to realize our brain only represents a tiny little corner on the larger map of possible minds, and it doesn't take much imagination to see very alien but still competent minds where there is no connection or where the connection is totally different. Extremely capable of problem-solving and yet with a totally different goal and drive architecture and the paper clip maximizer thought experiment is only a demonstration of this notion. Of course, this so-called Orthogonality Thesis could be wrong and I'm open to the idea but Pinker did not exactly do a good job of arguing against it. Peter Voss is the only person that has made logical points on this topic that argue against it from what I've come across thus far, but I'm still skeptical.

    • @mattheww797 • 6 years ago

      Wasps aren't particularly intelligent compared to humans, but they relentlessly create nests and reproduce.

  • @caterinadelgalles8783 • 3 years ago

    'Always expect the worst and you'll be hailed a prophet' - Tom Lehrer. I just acquired a great quote from a man I had never heard of.

  • @bakkaification • 6 years ago +2

    Hey Lex, loved your interview with Joe Rogan! Great convo! I don't know if you are familiar with Eckhart Tolle and his work, but I encourage you to read A New Earth; it's possibly one of the most insightful books I've ever read. The concept of getting rid of the ego needs to be addressed before humans do something dumb and start another war over who's got the bigger one... If you can contact him, I also think Joe would benefit greatly from this book, and perhaps he could convey its message to the masses. Hope all is well and your studies are going well!

  • @InfoJunky • 6 years ago +3

    Yesss Lex is going on Rogan in a few weeks?!?!

  • @captainpoil • 3 years ago +2

    Oh, to have 30 minutes to pick the man's mind. I just started reading The Better Angels..., and with a new book coming out in September, here's to hoping for a 3-hour podcast.

  • @deerwolfunlimited • 1 year ago

    I discovered Pinker in 2010. What a mind.

  • @ottofrank3445 • 2 years ago

    Great interview! Just one funny thing: Steven Pinker looks like Thomas Gottschalk, a former German TV presenter.

  • @ChrisSeltzer • 8 months ago

    Absolutely amazing how prescient Pinker was here.

  • @Elitecataphract • 3 years ago +3

    I feel pretty certain that history will write about Pinker very positively, but like Newton's big mistake of dabbling in alchemy, Pinker will be remembered as being very wrong about his AI predictions. He just doesn't understand well enough (in my opinion) the potential of a "self-aware" system and the almost incomprehensibly fast thinking it could do. It's not that it will necessarily seek to destroy us, but it might. The problem is that a self-aware AI, if ever conceived, wouldn't be designed to create optimal paper-clip production but would likely decide for itself what its goals are. Whether humans intended to make it self-aware or not, it might be possible. Of course, we need to learn more about what makes something conscious, but it doesn't appear to be an issue of simply more computing power. As Richard Dawkins pointed out in his interview with Lex, the cerebellum has a much higher neuron density than the cerebrum (and more neurons as well), but the cerebellum is not the part of the brain that holds any "consciousness"; it simply computes body movements and other unconscious functions. Once we understand what consciousness is or what makes something self-aware, then we might be able to have a more intelligent discussion about how to avoid it. Regardless, if it ever occurs, a self-aware computer could learn all human understanding of neuroscience, computer science, AI, and machine learning in a matter of minutes or hours. From there it could enhance itself, and then in minutes its enhanced self could do more, etc. It could become an existential threat overnight without anyone knowing. That is a potentially real scenario unless we stay on top of AI development and ensure that people don't make AI self-aware. It might not be possible to make AI self-aware by accident, but until we know more about that, we just don't know.

    • @ninaromm5491 • 1 year ago

      @Steven Reed: I like your point as it reads two years down the line. Haven't things gotten tangled and tricky beyond what was predicted...?

  • @colmnolan1 • 4 years ago +7

    Spot on about the potential for pandemics at 26:35 anyway!

    • @rok4937 • 3 years ago

      Indeed! In retrospect it sounds like a prediction.

    • @daszieher • 3 years ago +1

      @@rok4937 He's not the only one who had that on the radar.

  • @user-hh2is9kg9j • 5 years ago +1

    The fear of AI started as a joke and in popular movies. Now respected people are seriously talking about it. I am 100% with Steven Pinker; I have always held the opinions he just explained in this video.

    • @paweloneill5888 • 5 years ago

      You are wrong.

    • @user-hh2is9kg9j • 5 years ago

      @@paweloneill5888 We don't even have the theoretical technology to create human-like intelligence. We don't understand the brain that we want to replicate, and we don't know if a human-like intelligent unit would necessarily have any selfish motives... etc. It is a series of 100 ifs.

    • @paweloneill5888 • 5 years ago

      @@user-hh2is9kg9j We don't have to understand the brain to create an AI that far outperforms it. We already have fairly advanced autonomous learning machines. We are on the brink of creating quantum computers capable of computing in a few minutes or hours what would take all the standard computers in existence today more than 10,000 years. I'm not saying we are all going to die in a few years, but if you think there is no real threat of some US/Russian/Chinese AI going rogue, then you are naive. What do you think the first task of a Russian or Chinese military AI system will be? Hint: it won't be self-driving cars or a cure for cancer.

  • @exponent8562 • 6 years ago +7

    Great interview, I love Steve Pinker. Disagree a little on terrorism and a lot on AI. If I’m not mistaken (and not trying to prejudge childfree Pinker), there’s something about not having kids that may hinder the assessment of ‘long-term’ risks.

  • @pookellypoo • 5 years ago +1

    Excellent interview, great questions, wonderful engagement. 10/10 interviewer. Pinker was of course enlightening as usual!

  • @jfescobarbjf • 6 years ago +4

    I've already subscribed and listen to the podcasts!!! Great content

    • @brandomiranda6703 • 6 years ago

      which podcast?

    • @jfescobarbjf • 6 years ago

      @@brandomiranda6703 lexfridman.com/ai/ .... Search for it in Google Podcasts!

  • @KerryOConnor1 • 1 year ago

    "Its goals will be whatever we set its goals as." I find Pinker very enjoyable, but I have no idea how he just says that so casually.

  • @78skj • 1 year ago

    This discussion about AI was done 4 years ago. I wonder what Pinker's views would be on the impact social media has on society, especially on young people, who are more susceptible to social contagion. The algorithms bombard us with our own biases, keeping us trapped in an echo chamber. On a larger scale this does impact every aspect of our lives, especially the way we vote.

  • @aramchek • 5 years ago +1

    What gets overlooked, I think, is that AI doesn't understand us at all, and aside from limited applications in medicine, AI does NOT make life better in any conceivable way. My life is NOT enriched by people developing better means of tracking me, intruding upon my life with advertisements, invading my privacy, or any of the things it's actually used for.
    And since everyone has begun using it on the internet, I no longer get relevant information when I search for things; I get ads, or links to "popular" search results that have absolutely nothing to do with what I've searched for. This has the perhaps unintended side effect of effectively censoring information.

  • @ScienceAppliedForGood • 3 years ago

    This interview was very interesting and helpful.

  • @lorirodgers9474 • 1 year ago

    How interesting to hear today, on the cusp of GPT-5.

  • @martinkunev9911 • 4 years ago +4

    Pinker mentioned absolutely nothing that could address the concerns of AI researchers (Nick Bostrom, Eliezer Yudkowsky) about AI safety. "There's no fire alarm for AI" explains very well why we should not be blind optimists. You cannot reliably test an AI, as the "AI in a box" experiment shows. He seems to have no expertise in how software is written.

  • @tosvarsan5727 • 1 year ago

    I had not seen this one, and I must admit it was very good.

  • @carlosarayapaz6296 • 4 years ago

    Steve Pinker saying that we should be worried about important things, such as pandemics. God, he was right.

  • @shinnysud1 • 6 years ago +2

    You rock, Lex! Keep up the awesome work.

  • @parabolic_33 • 6 years ago +14

    There are some sneaky cuts in this video; curious why... at 20:55, for example.

    • @lexfridman • 6 years ago +24

      Good catch. Any edits are just long pauses with umms or the equivalent. I don't do it often, just when it jumps out as I listen afterwards. It's my OCD nature. I'm trying to ignore it more and more, and just post as is.

    • @SoGetMeNow • 6 years ago +17

      There is information nested in silence.

    • @lexfridman • 6 years ago +24

      @@SoGetMeNow Silence yes. Stuttering tangents of umms no. There is a grey area of course. And I have to make an artistic decision in that regard. Ultimately, the error in the original conversation is always mine as the person responsible for guiding it. Conversation is music, and I'm just now learning this. As an introvert, this is a difficult journey for me.

  • @saahuchintha • 4 years ago +1

    Me from 2020, when Steven Pinker says "we need to worry about other important things like pandemics, climate change and cyberattacks":
    Coronavirus, Australian forest fires and Anonymous come back...!!!🙊 This guy is a prophet....🙇‍♂️🙇‍♂️🙇‍♂️

  • @azad_agi • 3 years ago

    This was very useful. Thank you!

  • @andrewblomberg3100 • 3 years ago

    Great podcast; I just found you through Joe Rogan. So interesting to listen to. Thank you for doing this.

  • @Sam-we7zj • 2 years ago

    If the experience of the colour red is a mystery, then the answer to "will an AI experience red?" should be "I have no clue", not "probably someday".

  • @alchemist_one • 6 years ago +16

    I have nothing but respect for Steven Pinker and the message of his two most recent books. However, I'm a bit more concerned about tail risks of catastrophic events (AI-related or otherwise). Also, I didn't quite follow his line of thought about how AGI development would differ so much from evolutionary development in terms of adversarial qualities. Deep learning, like evolution, is driven by natural selection and gradient descent. Many recent successes in games such as Go and League of Legends rely on adversarial training.
    The same is true of both economies and geo-politics at large. If any given company or nation can gain an edge through a given strategy, competitors who choose not to adopt the strategy become relatively weaker. Engineering discipline might be able to ensure the safety of any given system, but the competitive dynamics provide dangerous incentives. Also, the complexity of systems often surpasses the ability of any one engineer to fully understand and I can say from first hand experimentation that genetic algorithms often yield solutions their creators don't understand.
    Any sort of system driven by natural selection will *inevitably* select for self-preservation and propagation. Enlightenment Now is a fantastic book, but thus far, I'm more swayed by Tyler Cowen's and Sam Harris's concerns about warfare and AGI, respectively.
    Looking forward to your appearance on the JRE podcast!

    • @myothersoul1953 • 6 years ago +5

      Machine evolution is not driven by natural selection. We select the machines we want to survive; nature doesn't.
      In biological evolution the selection criterion is survival; in A.I. the selection is by usefulness to humans and marketability. We don't make machines for the purpose of surviving on their own; we make machines to do tasks. Engineering A.I. and biological evolution are very different processes.

    • @mattheww797 • 6 years ago +4

      We create A.I. to dominate markets, which is by its nature an adversarial purpose. YouTube itself is a deep-learning A.I. system. Its goal is to get you addicted to clicking on the next video. But Google didn't foresee how the A.I. would go about doing this, and it so happened that it did so by serving viewers more exploitative videos; if you happened to watch a video on WWII, the next video it suggested might be an alt-right recruitment video. Google's and Microsoft's A.I. also picked up racist tendencies that became so bad that the companies had to step in to correct them.

    • @mattheww797 • 6 years ago +1

      What do you mean when you say genetic optimization?

    • @ryanfranks9441 • 6 years ago +2

      Pedro Abreu It's clear you are not educated in this.
      "Gradient descent and genetic optimization are completely different optimization".
      All trained neural networks use some form of gradient descent. Refinement methods never expose intermediate strategies developed inside the A.I. algorithm through training; they optimize the neural weight values generated through gradient-descent error accumulation. See (backpropagation, adversarial networks, refined training data, fitness functions).

  • @shirtstealer86 • 1 year ago

    So refreshing to hear someone point out the obvious fact that it's men, not women, who are the ones behaving like these murder robots we claim to be afraid of.

  • @Alp09111 • 6 years ago

    Nice interview, Lex!

  • @Vorsutus • 6 years ago

    Sam Harris' interview with Eliezer Yudkowsky (AI researcher and co-founder of the Machine Intelligence Research Institute in Berkeley, California) in WakingUp#116 contradicts so much of what Pinker says about the topic of AI. Badly designed AGI is easy to foresee happening for two reasons off the top of my head; Money and Security. A quick and dirty AGI will beat a slow and carefully designed AI to market by years. The immediate incentives are not on the side of "slow and careful" engineering. Also in WakingUp#53 Stuart Russell states that we don't know what some of the more advanced AI algorithms are doing half the time. Not hard to imagine it producing unexpected results once it's out in the wild.

  • @Cr4y7-AegisInquisitor
    @Cr4y7-AegisInquisitor 6 years ago

    oh didn't know Lex is going to be on the Rogan podcast!

  • @dh00mketu
    @dh00mketu 6 years ago +17

    Science has never been the problem; greed has.

  • @juniorv.c.1107
    @juniorv.c.1107 4 years ago

    Excellent, Lex!

  • @gracefulautonomy
    @gracefulautonomy 1 year ago

    20:34 The problem of replacing soul-deadening jobs with AI is not sourcing the funds to replace the workers' income. The problem is creating new jobs that are satisfying and meaningful.

  • @otthoheldring
    @otthoheldring 1 year ago

    Lex - you invariably ask your interlocutors: "What is the meaning OF life?" but never define what you mean by life, or by the question itself, as far as I know. 1. What do you mean by "meaning" in this context? 2. The question implies that there is "meaning" in the first place. Is there? How do you know? 3. By life, do you mean life as a phenomenon? Or just human life? Or just the life of any given person?
    I believe that life as a phenomenon has no more "meaning" than the universe, a star, air, or a pencil. But there can be meaning in one's life (vs. "of" life in general). It can be fulfillment (as Pinker said). Or satisfaction, connection, belonging, religion, feeling needed or appreciated, etc. As an aside, for most people through the ages, there was no meaning beyond staying alive.

  • @bilbojumper
    @bilbojumper 5 years ago

    Great interview

  • @montyoso
    @montyoso 1 year ago

    34:34 Steven Pinker book recommendations.

  • @michaelgreen8456
    @michaelgreen8456 6 years ago

    Amazing interview

  • @OldGamerNoob
    @OldGamerNoob 6 years ago +1

    This is a good point: keeping infrastructure out of the hands of any A.I. is a perfect solution to the A.I. apocalypse concept, and handing it over is something no one would likely do anyway.
    Even if an unwise/malicious actor placed a mechanical army under A.I.-coordinated strategy software that then bugs out and decides to wipe out humanity, it is almost inconceivable that the whole supply chain of materials for the factories building such armed machines, as well as the fuel for their upkeep, would also have been placed under the control of such an A.I.
    As long as such mechanized soldiers are not general-purpose enough in their structure to take control of said supply chain.

  • @volta2aire
    @volta2aire 6 years ago +1

    From natural stupidity to general artificial intelligence is a rocky road, absolutely!

  • @anonymous.youtuber
    @anonymous.youtuber 3 years ago +1

    Human stupidity scares me way more than artificial intelligence.

  • @citiblocsMaster
    @citiblocsMaster 4 years ago

    Lex invited Pinker on the show by teleporting him from the 17th century into this room

  • @penguinista
    @penguinista 5 years ago +1

    AI and nuclear weapons are similar in that they are both immensely powerful military technologies. Also, nuclear weapons could be used once a nuclear power is about to lose to AI-driven warfare. So the threat of AI is linked to the threat of nukes through human conflict.

  • @davidmoreno1397
    @davidmoreno1397 2 years ago

    Books Steven Pinker mentioned at the end as having had an impact on his life:
    The Beginning of Infinity by David Deutsch
    A History of Force by James Payne
    One Two Three... Infinity by George Gamow
    The Time-Life Science series
    On Language by Noam Chomsky
    The Selfish Gene by Richard Dawkins
    The Blind Watchmaker by Richard Dawkins

  • @goe54
    @goe54 6 years ago

    Here is what I have to say about AI, and I am amazed that this is not emphasised yet.
    Humans communicate mainly using finite sequences drawn from a finite set of symbols. This set of sequences is enumerable. Some questions arise.
    1. Can all the information of the universe be coded in an enumerable set of sequences?
    2. Is our thinking process enumerable in nature? Clearly animals have a thinking process without words.
    3. Are feeling processes different from thinking processes? How much of them can be mapped onto an enumerable set of sequences?
    4. What is intuition? Is it an enumerable process of our mind?
    5. Maybe analog computing is a better way to follow to be able to replicate the human mind.
    I don't know the answers, but I believe that, being caught in an enumerable universe, we will not be able to create an AI comparable to humans.
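The enumerability claim behind question 1 is easy to demonstrate concretely: all finite strings over a finite alphabet can be listed one by one in shortlex order (shorter strings first, then alphabetical), so every possible sentence eventually appears in the list. A minimal sketch:

```python
# Enumerate all finite strings over a finite alphabet in shortlex order:
# every length-1 string, then every length-2 string, and so on.
# Any given finite string appears after finitely many steps.
from itertools import count, product

def enumerate_strings(alphabet):
    for length in count(1):
        for combo in product(alphabet, repeat=length):
            yield "".join(combo)

gen = enumerate_strings("ab")
first_ten = [next(gen) for _ in range(10)]
print(first_ten)
# ['a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb']
```

Whether thought, feeling, or intuition can be captured by such a listing is exactly the open question the comment raises; the listing itself, though, is uncontroversial.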

    • @stevejordan7275
      @stevejordan7275 6 years ago

      I recommend to you - in the strongest possible language - the book *Our Mathematical Universe* by Max Tegmark.

    • @goe54
      @goe54 6 years ago

      Thanks Steve, I will check that.

  • @LofiWurld
    @LofiWurld 1 year ago +1

    Interview him again

  • @GroovismOrg
    @GroovismOrg 5 years ago +1

    Meaning of life: Was to gather knowledge in order to evolve ( our ultimate purpose!?!) Evolving consciousness can only involve some type of miracle, such was needed to have our organs evolve as needed. Drop entropy & have all humans unite with The One!! Groovism is the belief system!!

  • @CtrlAltJ
    @CtrlAltJ 5 years ago +24

    I love Pinker's books, but I have to say that his argument against the dangers of AI is not at the level I expected from a person with his knowledge of how daemons and intelligent systems work. He misses the nuances of the matter and assumes that it's impossible for superintelligence to be decoupled from 'human common sense' and that these systems will be developed by benevolent liberal entities. He also takes the position that this is something far away in the future that we need not worry about.

    • @MultiWolfxxx
      @MultiWolfxxx 5 years ago +2

      Absolutely. I find it painful to watch this level of naivety. As soon as some form of generalized AI autonomous system becomes possible, e.g. drones, people/terrorists will start releasing systems with horrible goals. At massive scale.

    • @CoolIHandIMatt
      @CoolIHandIMatt 5 years ago +7

      None of what you are saying is consistent with Pinker's argument. Perhaps it's you who is missing the nuance of his argument.

    • @joemorgese
      @joemorgese 5 years ago

      Darth, agreed! Very dangerous. Every time I swat a fly, I think about AI swatting me one day!

    • @gepisar
      @gepisar 5 years ago

      I'm inclined to agree. Both. Pinker seems to be saying, on first inspection, there is nothing to worry about, therefore no point in looking deeper. And Lex also says he is saving those lives; well, ok, but there are deeper questions, like the intractable position of gun ownership in the USA. It IS the fundamental position that the price of freedom comes with those deaths from shootings. Likewise, automate all those drivers' jobs... and take humans off the road, and out of every other potentially dangerous situation, and... what is the ripple effect of that?

  • @FlyingOctopus0
    @FlyingOctopus0 6 years ago +1

    I think the problem with programming AI not to go berserk is that it is difficult to define goals and constraints that do not have unintended solutions. AI making is more similar to government policy making than to engineering. Any kind of policy or law can be viewed as defining goals and constraints for humans, businesses, or other legal entities. If we frame AI problems in this way, then it is obvious that good engineering principles will not save us. To illustrate the connection further, imagine a government wants to encourage innovation: what policy should it introduce? We expect that if the correct policy is used, people will innovate and find solutions to various problems. That is a similar question to asking what objective we should give an AI so that it finds the solution we want. If we think we can manage AI, let's first ask if we can manage a city-sized number of people. There are tons of failed laws with loopholes that were exploited and caused real harm. AI could lead to similar situations.
    The problems with AI should be viewed using game theory, economics, and mechanism design. These disciplines deal with systems of many actors having different goals.
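A toy illustration of the loophole point above (all names and numbers here are made up for the sketch): if the measured objective is "no dirt visible", an optimizer is free to satisfy it by covering dirt rather than cleaning it, and blind search over policies finds such degenerate solutions just as readily as honest ones.

```python
# Toy "unintended solution": the measured objective rewards rooms with
# no VISIBLE dirt, so covering dirt with a rug scores as well as cleaning.
import random

ROOMS = 4
ACTIONS = [(verb, r) for verb in ("clean", "cover") for r in range(ROOMS)]

def run(policy):
    cleaned, covered = set(), set()
    for verb, room in policy:
        (cleaned if verb == "clean" else covered).add(room)
    measured = len(cleaned | covered)   # rooms with no visible dirt
    actual = len(cleaned)               # rooms that are really clean
    return measured, actual

# Blind search over random policies, judged only by the measured objective.
random.seed(0)
candidates = [random.choices(ACTIONS, k=ROOMS) for _ in range(500)]
best = max(candidates, key=lambda p: run(p)[0])

measured, actual = run(best)
print(measured, actual)   # measured score is maxed; actual may lag behind
```

Nothing in the search distinguishes a policy that cleans every room from one that hides the dirt: that distinction lives only in the intended objective, which was never written down, which is the law-with-loopholes situation the comment describes.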

  • @shirleycirio6897
    @shirleycirio6897 4 years ago +1

    wow, we still had single serve plastic water bottles back then!

  • @truthseeker2275
    @truthseeker2275 5 years ago +1

    I think the greatest risk is in stock trading, where the goal is to win no matter what the consequences, and where an AI race (one that is probably already running) could destroy economic systems. Here it will not be the engineers who fail to put in the safety systems, but the traders who will disable them.

  • @3DisFuntastic
    @3DisFuntastic 1 year ago

    I don't agree with Pinker here about the argument that "it would be stupid to build a system like that". If building such a system becomes simple enough that one person or a small group of people can build it, then it is almost guaranteed that some intelligent, psychotic people will want to build something like this to drag the whole of humanity, or life itself, down into their destructive psychosis. But I totally agree: if you cannot cope with it, enjoy life while we can.

  • @ArnoldWittman
    @ArnoldWittman 9 months ago

    Wonderful to see how Pinker, through rationality, can recognize the need for priorities: for example, that the number of car accidents per year exceeds the number of terrorist attacks, or that the risk of climate collapse, like that of nuclear war, is greater than the threat of AI.

  • @Poetry-Reads-and-Writes
    @Poetry-Reads-and-Writes 1 year ago

    ‘The jobs that will be made obsolete…. If we’re smart enough…we are smart enough to redistribute income.’ Pinker is smart, and probably ethical enough to care about finding ways to get people better jobs or income after AI has displaced them. I doubt that corporations, or the governments in bed with them, have much concern to do that.

  • @kevinfairweather3661
    @kevinfairweather3661 5 years ago

    Pinker is the man !

  • @alirezaabolghasemidehaqani7186
    @alirezaabolghasemidehaqani7186 2 years ago

    wow, I can't believe this was recorded before the pandemic, with all these references to it

  • @4G12
    @4G12 6 years ago

    Obsession with terrorism is absolutely rational as long as the church of money reigns supreme, as it does now. It's not about making yourself better off; it's about making sure everyone else remains worse off than yourself.

  • @betaneptune
    @betaneptune 6 years ago

    Good interview. Why so many blur-cuts, though?

  • @danschultz9681
    @danschultz9681 5 years ago +3

    "I don't see any signs that engineers will suddenly do idiotic things."