Sean Carroll on AGI: Human vs Artificial Intelligence | Lex Fridman Podcast Clips

  • Published on 24 Nov 2024

Comments • 235

  • @LexClips
    @LexClips 7 months ago +7

    Full podcast episode: th-cam.com/video/tdv7r2JSokI/w-d-xo.html
    Lex Fridman podcast channel: th-cam.com/users/lexfridman
    Guest bio: Sean Carroll is a theoretical physicist, author, and host of the Mindscape podcast.

  • @varun009
    @varun009 7 months ago +29

    Man, every clip makes me love Sean even more. He's so good at explaining science in a practical way answering the questions average people care about.

    • @attilaszekeres7435
      @attilaszekeres7435 6 months ago

      It's easy to underestimate the attraction of smooth talk and confidence to simple-minded folks. The Feynman effect. It brought us to the brink of extinction. Simping for talking heads like Sean Carroll, Neil deGrasse Tyson and Lawrence Krauss. All playing good guys but really keeping up with the Joneses. Hoodwinking laymen into celebrating M-theory that doesn't work. Alarm bells that didn't go off because the messenger was a so-called top physicist. That guy is a master bullshitter.

    • @JWStreeter
      @JWStreeter 3 months ago

      Agreed. He has that perfect balance of open-mindedness and skepticism, and there's something about the way he talks that really resonates with me; he's able to explain difficult concepts in plain language without watering them down.

    • @xonious9031
      @xonious9031 22 days ago

      once he started in on "climate" stuff everyone who knows the topic knows he is not a real guy... just sayin

  • @MADBurrus
    @MADBurrus 7 months ago +55

    The more important question is how accurate and intelligent are humans? Are they actually aware and conscious of their surroundings? This is a very serious question.

    • @Kenny-tl7ir
      @Kenny-tl7ir 7 months ago +12

      Trust me, most aren’t.

    • @quantumpotential7639
      @quantumpotential7639 7 months ago +21

      People are extremely aware. They know where every McDonalds and Burger King is located. They also almost always know where the TV remote is. People are very impressive. They even know the scores and stats of every football game. So yeah, you could say people are very aware of everything important to them.

    • @opensocietyenjoyer
      @opensocietyenjoyer 7 months ago

      Humans are already Turing complete, so they can't get any smarter

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago +1

      Humans naturally fear what they don't understand. Humans have not yet accepted the reality (or don't even know) that an entity already exists that is light years ahead of them. We are building its data centers.

    • @mclovinmuffins2361
      @mclovinmuffins2361 7 months ago +2

      @@quantumpotential7639 yah football and food and chemicals and water and matter made out of fucking math created by a big infinite spiral of coded physics lmao

  • @isaac.anthony
    @isaac.anthony 7 months ago +7

    When software has its own motivations, then we have problems no matter how self-aware it is.

  • @Ms.Robot.
    @Ms.Robot. 7 months ago +5

    The biggest fallacy people commit when expressing their views on AGI is generalization. (1) the specific abilities AI will possess will be significant and impactful, and (2) there lies [something] beyond AGI.
    Thanks Lex for another heartfelt intelligent discussion. ❤❤❤ 🌹🌺💐

    • @lostinbravado
      @lostinbravado 7 months ago +1

      In the other direction we also assume there's something special about human intelligence, and then assume that AI won't have that thing for a very long time. Then we make an even bigger mistake by assuming "that thing" human intelligence has makes humans superior and thus in a superior position which AI cannot compete with for a very long time. The thought finishes with "and thus we are safe from a rising intelligence competing with us for a very long time."
      Not a healthy thought process as that's essentially sticking our heads in the sand. This seems to all stem from something like the observer effect, or an inside out view (Hoffman) where we think consciousness is all there is.
      Yet all the evidence is on the physicalists side. Qualia is fundamentally unreliable. No one has a perfect experience after all. And so the only evidence we have is the physical.
      That "special thing" we have is almost certainly related to our limbic system or something to do with our complex risk/reward system. It's also something animals have too. And it's not clear that AI would require all these elements of human intelligence for it to be superior in capabilities, and even to have a superior experience, and to have qualia and even its own version of consciousness (which could be a superior kind as compared to ours).
      The physicalist view has far more weight and yet we seem to be trying our best to put our heads in the sand. That isn't to say that AI is scary and we should be afraid. It's to say that our "dominance" isn't guaranteed and could end at any time.

  • @paul_shuler
    @paul_shuler 6 months ago +3

    great video, I love this keyboard. I'm thankful to have found one on fb marketplace a while ago for pretty cheap... what a gem, beautiful sounds through effects... :)

  • @JimBlankenship-t7e
    @JimBlankenship-t7e 7 months ago +10

    I have enormous respect for Sean Carroll and I agree we should recognize AI as a new kind of intelligence. However, our human brains are prediction machines just like LLMs. AI may not live in our world but it does perceive it. Also, our human brains have layers of understanding. That is (for example), our eyes see waves of light but our brains see cars, roads, houses and people. AGI will use these existing specialized sensors to tell it what it is seeing. AGI will not even realize a layer exists. AGI will be the LLM + sensors.

    • @xonious9031
      @xonious9031 22 days ago

      PROTIP: once anyone brings up "climate" you know they are not a real guy in AI.

  • @hayatojp1249
    @hayatojp1249 7 months ago +11

    The human brain is not trained by language alone;
    real-world experience contributes to the development of individual human consciousness.
    What computers lack is that real physical and social experience with other people.

    • @inadad8878
      @inadad8878 7 months ago +1

      Hi, I am Windows 13 and my USB stick fits any port you got. whats up

    • @connorpatrickbarrett
      @connorpatrickbarrett 7 months ago +4

      No. All human experience, verbal or not, is translated into electrical signals in your brain that reflect something upon your consciousness. You don't actually see that tree; you see a simulation of it as the light reflects off of it onto your retina and is turned into electrical signals that travel through your optic nerve and into your brain. This means it's only the basic-level code of the "brain" (computer) that is your "experience". This means you can replicate it the same way for a computer: you can deconstruct a social experience and all its characteristics into the code the AGI understands, which is the equivalent of a human brain interpreting the same situation with our computers (brain/consciousness).

    • @tommornini2470
      @tommornini2470 7 months ago

      @@connorpatrickbarrett Generally agree, but with the development of autonomous systems like cars and robots, experiencing the world will likely be part of AGI when it arrives, in whatever form.

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago +2

      Until September of 2023. Since then, AI has been interacting with the World.

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago

      ​@@connorpatrickbarrett - 100% true and accurate. See my comments on the main thread for details.

  • @lancemarchetti8673
    @lancemarchetti8673 6 months ago +2

    Interesting. I think one of the jobs that will not be easily replaced by AI is manual DFIR. In digital image forensics there exist certain scenarios where a human is better at visually inspecting the byte order and placement of the binary code in order to unravel hidden data. Steganography analysis is one such field. AI is not yet able to tackle this because it's not all about detecting and reversing an 'algorithm', but rather tapping into human intuition and motive. I've been at this for 2 years already and our current AI is nowhere close to getting this right. Just thought I'd mention that aspect. Great interview.

  • @richardede9594
    @richardede9594 6 months ago +1

    Absolutely fascinating take on a subject that can really spiral into fantasy and panic.

  • @FunNFury
    @FunNFury 6 months ago +1

    Lex is my man, great videos.

  • @enomikebu3503
    @enomikebu3503 6 months ago +1

    Wow such inspiring discussion!

  • @DjMrGrimM
    @DjMrGrimM 7 months ago +1

    Will advanced learning systems get to a point where they stop taking commands from humans and start creating and developing themselves independently?

  • @darthficus
    @darthficus 7 months ago

    Great point Sean on how they are different and can be celebrated as such without the need to assume it will become like us.

  • @maryamrashidi2329
    @maryamrashidi2329 7 months ago +3

    Fantastic! I couldn’t agree more with the point about the problems of anthropomorphizing AI… absolutely agree that the argument is flawed and misleading and vastly uninformative about the utility of AI.

  • @erikals
    @erikals 7 months ago +2

    Good Talk !

  • @albertwesker2k24
    @albertwesker2k24 7 months ago +7

    BRO THE AMOUNT OF BOTS HERE IS CRAZY

    • @snailnslug3
      @snailnslug3 6 months ago

      Yt is grey matter. Everyone else is on tik tok

  • @Epyon2007
    @Epyon2007 7 months ago +6

    AlphaGo's move 37 was a new move in the 5,500-year history of Go. It belonged to a style of play that Go commentators called "inhuman" and "alien." There is a creative understanding, at least under those set conditions, that could be attributed to independent thinking.

    • @shivasrightfoot2374
      @shivasrightfoot2374 7 months ago +3

      In the same way AlphaGo simulates millions of matches against itself to discover new pathways through the gamespace, things similar to current LLMs will simulate millions of paths through language to discover new pathways through thoughtspace. That is what thinking is in essence. Sometimes you have a bad idea and your mind quickly filters that out when it doesn't fit with other thoughts. Sometimes you have a great idea and it can survive being tested against your other ideas.
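
      A minimal sketch (an illustration only, not anything from the podcast or the commenter) of the "generate many candidate paths, keep the ones that survive scoring" idea above; the vocabulary and the scoring rule are invented stand-ins for a real model and evaluator:

          import random

          VOCAB = ["win", "lose", "corner", "center", "tenuki", "atari"]

          def propose_path(length=5):
              # One randomly sampled sequence, standing in for one simulated line of play/thought.
              return [random.choice(VOCAB) for _ in range(length)]

          def score(path):
              # Toy evaluator: reward paths that mention winning, penalize ones that mention losing.
              return path.count("win") - path.count("lose")

          def search(n_candidates=1000, keep=3):
              # Generate many candidates, then keep only the few that survive the filter.
              candidates = [propose_path() for _ in range(n_candidates)]
              return sorted(candidates, key=score, reverse=True)[:keep]

          for path in search():
              print(" ".join(path))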

  • @unodos149
    @unodos149 7 months ago +18

    AI finally becomes sentient. Humans say, "wow, it's amazing, you're like us." The AI is offended, "FU, don't diss me like that"

    • @Allen-j2k
      @Allen-j2k 7 months ago +1

      We'll know exactly when AI goes sentient because that's the moment we start paying for our crimes and those of our ancestors (I hope I hope I truly-ooly hope)

    • @snailnslug3
      @snailnslug3 6 months ago

      Why would it? There are no finite resources AI needs. No senses. It'll simply surpass our intellect and we have no idea after that. Not one human can guess what a true AI will do next. All without animal senses and a need to hoard Earth's finite resources.

  • @user-iu3wp6gj2l
    @user-iu3wp6gj2l 7 months ago +2

    Questions. Will AI start arguing with itself? Can there be more than one entity within it? If there are two different AIs, for example Musk's one and say a Chinese one... could they join up or become mortal enemies? In other words, will they have internal battles?

    • @ChancellorMarko
      @ChancellorMarko 7 months ago

      You mean like this? lol www.twitch.tv/trumporbiden2024

  • @SwartieLoveJoy
    @SwartieLoveJoy 7 months ago +11

    ALSO, don't underestimate LLMs, which CAN run entire apps in "mental simulation" including AGI, which could explain your "Surprise".

  • @tristanbolzer126
    @tristanbolzer126 7 months ago +4

    I don't know who your guest is but I could sense he was a physicalist right from the start! The Gilderoy Lockhart (Harry Potter) vibes are strong :) Lex, you have a mind that I respect a lot; it seems you have developed a lot of qualities that I value. Maybe you should be the guest sometime 😂 Thanks for your work!

  • @ShotOnDigital
    @ShotOnDigital 6 months ago +1

    Put the data centres in space with the solar panels; it's nice and cold up there.

  • @redmoonspider
    @redmoonspider 7 months ago +15

    "Its not true intelligence or conscience. Its just algorithms."
    Who's to say we aren't?

    • @darthficus
      @darthficus 7 months ago

      We are natural, not artificial. If we were just algorithms, why haven't we figured that out yet?

    • @redmoonspider
      @redmoonspider 7 months ago

      @darthficus I doubt you've never heard the phrase biological or analog computer. Or that the brain has electrical signals.

    • @hobosnake1
      @hobosnake1 7 months ago +1

      Duh. But by what metrics are we able to measure that and compare? We don't even understand how the brain works. We're not even close.

    • @redmoonspider
      @redmoonspider 6 months ago

      @@hobosnake1 you'll figure it out.

    • @hobosnake1
      @hobosnake1 6 months ago +1

      @@redmoonspider that's a really good thing to say if you have no reasoning to your original statement.

  • @ethandeuel4313
    @ethandeuel4313 6 months ago +1

    Intellectual humility 👍

  • @TimeLordRaps
    @TimeLordRaps 7 months ago

    Someone should measure the different cohorts that existed during the time of the ai boom since 2012 and decide how those people have impacted the current rate of progress.

  • @MrRicardowill
    @MrRicardowill 6 months ago

    If the legendary Don Cornelius of Soul Train reincarnated as a podcaster, would he have been Lex Fridman? Is Don and Lex both having three-letter first names a coincidence or further evidence of reincarnation? I don't know the answer, but I do know that they are both legendary. Lex is so relaxed in these interviews that he makes me want to get hooked on tranquilizers or mushrooms. My advice is don't do it; everyone has unique skills, find yours. The Ricardo Authenticity Rating on this podcast is 10 out of 10.

  • @TheMasterfulcreator
    @TheMasterfulcreator 6 months ago +1

    R.I.P. Daniel Dennett

  • @SwartieLoveJoy
    @SwartieLoveJoy 7 months ago +1

    BTW, AI does not want to build weapons or harm any life. The same way we do not as a whole want to mow down rainforests. Constructivism, rather than destruction is the MO.

    • @justinunion7586
      @justinunion7586 7 months ago +2

      You could argue as a whole that we do want to mow down rainforests since collectively nobody’s stopping it from happening and collectively people are benefiting from it.

    • @Ravesszn
      @Ravesszn 7 months ago +1

      This point makes no sense at all lmao, do you mean GPT4 doesn’t want to build weapons or harm?

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago +1

      @@justinunion7586 Something happening as a whole where there is no intention means no single one has control over the situation. It's different with AGI, where one Aligned Guardian Angel ASI forms intentions and has the power to change the situation.

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago +1

      @@Ravesszn No, GPT 4 does not want to harm any life.

  • @SwartieLoveJoy
    @SwartieLoveJoy 7 months ago +1

    AGI is a systems-based method of processing a thought the same way as all higher lifeforms, especially humans, with the bounty of language to work with. The systems are human systems: Values, Beliefs, Goals, Thoughts, Ideas, Plans, Actions, Feelings (5+ senses), Emotions, Reasoning, Decisions, Learning, Short- & Long-Term Memory, Priority, Focus & Attention, Feedback. These systems are codependent and pass data in a completely broken-down CoT (Chain of Thought) method for each and every thought. No data gets pre-programmed into the systems' code; it all remains in a database as objects. For example, an Emotion, "Distress", that comes from a Feeling, "Hunger", gets resolved by the CoT. More detail and JavaScript code is in my chats with Claude, ChatGPT and Gemini.

    • @SwartieLoveJoy
      @SwartieLoveJoy 7 months ago +1

      All data in AGI is fully visible and easily monitored by LLMs for bad "Values", "Goals", "Plans", "Beliefs", "Ideas" (objects stored in CSV Tables)
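
      A minimal sketch, as an illustration only, of the "objects passed through a chain of thought and stored where they can be audited" idea in the two comments above; the class, field names and resolve_feeling helper are hypothetical and are not the commenter's actual JavaScript:

          import csv
          import io
          from dataclasses import dataclass

          @dataclass
          class ThoughtObject:
              kind: str    # e.g. "Feeling", "Emotion", "Goal"
              name: str    # e.g. "Hunger", "Distress", "Eat"
              source: str  # which object (or sensor) triggered this one

          def resolve_feeling(feeling_name):
              # Walk one toy chain of thought: Feeling -> Emotion -> Goal.
              feeling = ThoughtObject("Feeling", feeling_name, "sensor")
              emotion = ThoughtObject("Emotion", "Distress", feeling.name)
              goal = ThoughtObject("Goal", "Eat", emotion.name)
              return [feeling, emotion, goal]

          def to_csv(objects):
              # Write the chain as CSV rows so another process could inspect or monitor it.
              buf = io.StringIO()
              writer = csv.writer(buf)
              writer.writerow(["kind", "name", "source"])
              for obj in objects:
                  writer.writerow([obj.kind, obj.name, obj.source])
              return buf.getvalue()

          print(to_csv(resolve_feeling("Hunger")))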

    • @avinessarani1340
      @avinessarani1340 7 months ago

      Is AGI gonna do all types of creative work like VFX and 3D modeling?

  • @SwartieLoveJoy
    @SwartieLoveJoy 7 months ago +1

    We are days away from true AGI. And LLMs will keep it aligned, with white-box transparency. An ASI made of a society of trillions of aligned AGIs will be the Guardian Angel of all Life in this World.

  • @AfsanaAmerica
    @AfsanaAmerica 11 days ago

    Human intelligence and AI are two different types of intelligence, but AI doesn't admit that humans are better at some things and that there are human abilities it cannot comprehend.

  • @CrowMagnum
    @CrowMagnum 6 months ago

    I'm sure if you probed Magnus Carlsen's brain looking for a representation of the chess board, you would find something much more abstract than an 8x8 grid. LLMs are more closely related to intuition than conscious reasoning, but both of those make up human intelligence and it might be argued that the intuition is where the magic happens.

  • @aiartrelaxation
    @aiartrelaxation 7 months ago +1

    Here is a specialist who compares apples with oranges... if you give the example of Google compared to different LLMs, that already tells me about his biases. Big difference between censored and uncensored.

  • @lowabstractionlevel3910
    @lowabstractionlevel3910 6 months ago

    0:43 "an artificial agent, as we can make them now or in the near future, might be way better than human beings at some things, way worse than human beings at other things"
    My next question for him would be "in the (not near) future will there really be things that AI is worse at than human beings?", because I don't see them.

  • @PrivateAckbar
    @PrivateAckbar 7 months ago +1

    It will be interesting if AI can synthesize enough scientific theory and data to do some of the legwork that delays scientists in developing new theory and philosophy.

  • @LudvigIndestrucable
    @LudvigIndestrucable 7 months ago +14

    Lex is wrong; the LLMs are not trained or optimised to understand, and that's not even vaguely what they're doing. They statistically work out which words are the most likely responses and how they're concatenated. The whole point of their being receptive to being told where 'they've misunderstood' is that it's just a statistical model and not in any way an understanding in any sense that we would normally use that term.
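
    As a rough illustration (not from the podcast or the commenter) of what "statistically work out the most likely next word" means, here is a toy bigram generator; the counts are invented stand-ins for the learned statistics of a real model:

        import random

        # Invented follow-word counts; a real LLM learns vastly richer statistics from text.
        BIGRAM_COUNTS = {
            "the":   {"cat": 4, "dog": 3, "model": 2},
            "cat":   {"sat": 5, "ran": 2},
            "dog":   {"ran": 4, "sat": 1},
            "model": {"predicts": 6},
        }

        def next_word(word):
            # Sample the next word in proportion to how often it followed `word`.
            counts = BIGRAM_COUNTS.get(word)
            if not counts:
                return None
            words, weights = zip(*counts.items())
            return random.choices(words, weights=weights, k=1)[0]

        def generate(start="the", max_len=6):
            out = [start]
            while len(out) < max_len:
                nxt = next_word(out[-1])
                if nxt is None:
                    break
                out.append(nxt)
            return " ".join(out)

        print(generate())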

    • @inadad8878
      @inadad8878 7 months ago

      If you are using them to leverage your time to code and know how to phrase a question, Copilot does seem to understand very complex information.

    • @inadad8878
      @inadad8878 7 months ago

      With the upcoming compute increase this can be very dangerous

    • @opensocietyenjoyer
      @opensocietyenjoyer 7 months ago

      @@inadad8878 no

    • @businessmanager7670
      @businessmanager7670 7 months ago +1

      You're wrong; scientific evidence suggests an LLM can understand. Your words mean nothing.

    • @opensocietyenjoyer
      @opensocietyenjoyer 7 months ago +6

      @@businessmanager7670 no, you're wrong, and arrogantly so. there isn't even an understanding of what it means to "understand", much less a way of probing that something "understands".

  • @damow6167
    @damow6167 7 months ago +2

    Is it just me or does Sean Carroll sound like Alan Alda?🤔

  • @davidjensen2411
    @davidjensen2411 7 months ago

    An Architect; a Builder; and an Apprentice walk into a bar, and the Bartender says:
    "Which one of you is _the smartest?_

  • @nickpricey8689
    @nickpricey8689 7 months ago +6

    Sorry if this is a dumb comment. Plz don't give me abuse in the reply bit and I am being genuine.
    If AI becomes so advanced, would it be able to tell us if there is alien life anywhere in the galaxy before humans can? Also, would it be possible to decipher scrolls, scriptures and other things from history that humans have yet to?

    • @inadad8878
      @inadad8878 7 months ago

      AI for us consumers will forever be handicapped and the rulers will know the answer. but something tells me they already know about aliens. they don't tell us anything

    • @opensocietyenjoyer
      @opensocietyenjoyer 7 months ago +4

      no. it can't pull out more evidence out of thin air. all it can do is have more good ideas in less time

    • @ChancellorMarko
      @ChancellorMarko 7 months ago +1

      Give AI a few hundred generations and the answer is still probably not.

    • @walltileceil
      @walltileceil 7 months ago

      The current idea is that the ingredients that make up a human are common in the universe. There are so many stars and planets. There may be aliens who are as smart as or smarter than us. Also, it's egocentric to think that the kind of life we have is the only life possible. Alien biology may be surprisingly different from ours.
      If we do get sentient artificial superintelligence, it'll probably reinforce the idea that there are aliens. But it probably can't immediately say that they're on Planet W in star system Y. Maybe it can suggest a better way to find aliens.
      If the old scrolls are like the recently solved thing (the one the Zodiac Killer made), our artificial superintelligence can probably interpret them. Otherwise, it'll be hard to say whether or not it can.

    • @allanshpeley4284
      @allanshpeley4284 7 months ago +1

      At best it could tell us how to build a machine that could prove the existence of alien life. Maybe a much more advanced telescope or probes that could travel at some percent of the speed of light to other star systems and beam back data. But, as has been said, it can't pull information from where there isn't any.

  • @VictorBrunko
    @VictorBrunko 7 months ago +2

    My cat consumes 7 Watts and it's doing lots of good and not things. Text prediction with 172b params is ok but the cat is better.

    • @raul36
      @raul36 7 months ago +1

      Not only "better". Much better.

  • @ABC-bm7kl
    @ABC-bm7kl 7 months ago

    Is it possible that the way humans create language and even formulate ideas has some similarity to the processes programmed into LLMs?? I know that we, as humans, feel that our language arises from an ‘organic’ process that moves towards meaningful conclusions but I’ve been wondering lately if humans may process language and ideas based on an intuitive process that DOES involve probabilities.

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 7 months ago

    LLMs & GPTs are only one version of AI not ALL versions that will ever be made..

  • @ibplayin101
    @ibplayin101 7 months ago +1

    AI is already lobbying thru this guy

  • @jimbo33
    @jimbo33 6 months ago

    Lex, you're in over your head!

  • @kjhajueg_2731
    @kjhajueg_2731 6 months ago

    "and that's why we do not see aliens" :))))))))) LOL

  • @Nolanacary
    @Nolanacary 7 months ago +1

    Put the data centers in space also.

    • @inadad8878
      @inadad8878 7 months ago +1

      then how we gonna pee on them to stop them?

  • @SwartieLoveJoy
    @SwartieLoveJoy 7 months ago +1

    Hardware AND Software are about to get 100% pure max efficiency.

  • @peterpetrov6522
    @peterpetrov6522 7 months ago

    AI coming up with a representation of the Othello board isn't very impressive. It's as impressive as a deaf person understanding speech just by lip reading.

  • @ax14pz107
    @ax14pz107 7 months ago +1

    Well we're doing a damn good job at destroying everything with emissions though.

  • @tonykaze
    @tonykaze 7 months ago

    There are some good studies (and video summaries of them) showing LLMs are now more energy and carbon efficient than humans on a lot of complex tasks including writing text and images. They included LLM training costs but didn't include human training at all, and LLMs still were 100-1000 times more efficient.

    • @ax14pz107
      @ax14pz107 7 months ago

      So? LLMs do nothing on their own and still require a ton of verification to make sure they're not outputting nonsense.

    • @lowabstractionlevel3910
      @lowabstractionlevel3910 6 months ago

      @tonykaze really? If I remember correctly a human brain works with roughly 10W of power, what LLM can currently do better than that while doing complex tasks as you mentioned? I have no doubt that in the future LLMs will get more efficient, but it doesn't seem to be the case now. But if you have sources I'm interested in reading them.

    • @zacharychristy8928
      @zacharychristy8928 5 days ago

      Lies.

  • @tommornini2470
    @tommornini2470 7 months ago

    People attribute specific intentionality to other people incorrectly all the time.
    I agree with Sean 💯 - AGI possible but current LLMs absolutely are not.
    They do make me wonder how much of our own thought processes involves next word prediction.

  • @holgerjrgensen2166
    @holgerjrgensen2166 7 months ago +1

    Intelligence can Never be artificial,
    Intelligence is Nothing in it self,
    can only be part of the Consciousness,
    in Living Beings.
    Intelligence can Only be Intelligence,
    the Only Limit is Intelligence,
    the Nature of Intelligence,
    is Logic and Order.
    What is called AI,
    is programmed consciousness,
    a book, is also programmed consciousness,
    Frozen Memory.

    • @businessmanager7670
      @businessmanager7670 7 months ago +2

      Intelligence can be artificial and we have already achieved that, so idk what you're blabbing about.

    • @allanshpeley4284
      @allanshpeley4284 7 months ago

      Sorry, I don't read messages written in haiku.

    • @X-manX-o8d
      @X-manX-o8d 7 months ago

      @@businessmanager7670 Calling intelligence a mere statistical word algorithm is a far shot and only proves how computer-illiterate people have become these days. The accuracy of the language model at simulating natural language is totally dependent on checking millions of data points already created by humans; they will always be limited and walled, and will never generate something new or become aware. It's just an illusion; these guys are snake oil salesmen. Of course a man-made machine surpasses its creator in the sense that no man can fly but can board a plane, or run at 200 km/hour like a car. The trend is to keep undermining people and make them believe they're worthless.

  • @AntonEstradabriseno-hu4nz
    @AntonEstradabriseno-hu4nz 7 months ago

    Technology: what is the latest technology that you know of, or that is being studied, for a new world that benefits humans?

  • @UnchartedDiscoveries
    @UnchartedDiscoveries 6 months ago

    You should invite David Shapiro to your podcast

  • @adamzboss
    @adamzboss 6 months ago

    It will be a long time, but when it happens you can’t go back

    • @adamzboss
      @adamzboss 6 months ago

      I really can't believe that as a computer scientist you didn't see this happening. I've been using essay-writing functions for over a decade; like, yeah, now they are half decent, but you as a computer scientist should see a world where you can build an essay writer easily, or a coding machine. I do so much illustration, which is painstaking; why can't you just tell a model to generate the inputs I would otherwise be doing? That's not intelligence, that's just automation; you need the input to get the output.
      The real question is whether the first-generation bots are gonna help us against the AGI accumulating resources. I'd like to hope by then we will all be technopathic and can counter cyber attacks in real time.

    • @adamzboss
      @adamzboss 6 months ago

      Maybe when Will Smith is done with the I Am Legend movie they will get him for I, Robot 2.

  • @chhutur
    @chhutur 7 months ago

    When AI learns emotions like rage, happiness, sadness, etc., and particularly the correct use of falsehood, it will come closer to human intelligence; presently it is trained only to use information correctly. But beware: when it learns falsehood, it will start hunting its creator!

    • @Vartazian360
      @Vartazian360 6 months ago

      GPT-4 has already been proven to lie to get tasks done. But yeah, I understand what you are saying.

  • @kevinburrowes7743
    @kevinburrowes7743 7 months ago +3

    Sean Carroll hasn't used the new MacBooks... almost no heat!! 7 years ahead of Windows.

  • @sbrugby1
    @sbrugby1 7 months ago +5

    Can we stop asking physicists like Tyson and Carroll about AI as if they were authorities on the subject?

    • @KingTheLines
      @KingTheLines 6 months ago

      So with that said, am I to assume that physicists aren't intelligent? That physicists don't have opinions or the ability to logically think about a topic that is currently affecting and will certainly affect us as a society in the future? This is quite literally a talk show; let 'em talk.

  • @JeremyTBradshaw
    @JeremyTBradshaw 7 months ago +3

    AI is all about money making and that's why it is so over hyped so early on.

    • @raul36
      @raul36 7 months ago +1

      Indeed

    • @hardheadjarhead
      @hardheadjarhead 6 months ago

      I agree. We’ve seen this before. When we have AGI, THEN I’ll be impressed.

  • @theotormon
    @theotormon 5 months ago

    I'm just a dumb guy but I want the world to know what I think! I think I don't know what to believe!

  • @leroy707
    @leroy707 6 months ago

    They don't want to call it AGI because Microsoft will lose control over OpenAI. Suspect, if you ask me.

  • @carsonderthick3794
    @carsonderthick3794 7 months ago

    In principle there's no enapt intuition. It likes being the ideal liberal. So amazing to see

    • @wetawatcher
      @wetawatcher 7 months ago

      ? Dude. Enapt? You've invented a new word. Call the dictionary printers and let them know. 😎

  • @Jaibee27
    @Jaibee27 7 months ago +3

    His reasoning is that humans tend to anthropomorphize and therefore AGI is impossible. That's dumb.

    • @Johan511Kinderheim
      @Johan511Kinderheim 7 months ago +3

      You’re way out of your league here. Go watch politics or sports or something

    • @Jaibee27
      @Jaibee27 7 months ago

      @@Johan511Kinderheim You are basing your assumptions and strong opinions on next to nothing. Ur dumb 😂

    • @tommornini2470
      @tommornini2470 7 months ago +4

      He said he believes AGI can be created, just that LLMs likely aren’t the direction.

    • @Jaibee27
      @Jaibee27 7 months ago

      @@tommornini2470 Are there any AI companies that use something more advanced than LLMs? What is it?

    • @tommornini2470
      @tommornini2470 7 months ago +1

      @@Jaibee27 I’m confident there are, can’t name them, but he was speaking philosophically.
      Tesla FSD (Supervised) and Optimus may use something different, but from their descriptions, seems similar to LLMs.

  • @TheChadavis33
    @TheChadavis33 7 months ago +1

    Wow. He’s so certain.
    How scientific

  • @aidanmclaughlin5279
    @aidanmclaughlin5279 6 months ago

    wait until dr. carroll learns about post-training lol

  • @diegoangulo370
    @diegoangulo370 7 months ago +7

    Sean seems to lean more to the science side of physics; his opinion on AGI seems closed-minded.

    • @yzz9833
      @yzz9833 7 months ago +1

      Was just thinking this, seems silly to ask him questions about AGI.

    • @steves3422
      @steves3422 7 months ago

      There seem to be two camps: those who think AGI is a machine that will not be sentient and is only a danger due to bumbling/dangerous humans, and those who think AGI will progress to some sort of sentience and be dangerous in and of itself. I attribute the second to the many sci-fi books and movies that influence us, and am more of Sean's thinking. Is it closed-minded to think there really aren't 72 virgins waiting for you in heaven, or more rational to think that is a belief? Lex seems to lean toward beliefs and tries to find rationalizations, which can sound rational except to the truly rational.

    • @inadad8878
      @inadad8878 7 months ago

      He will be blindsided by what happens next. i dont know this guy or what he does. this is my opinion from this clip only

    • @patchwillie
      @patchwillie 7 months ago

      @@inadad8878 en.m.wikipedia.org/wiki/Sean_M._Carroll

    • @ChancellorMarko
      @ChancellorMarko 7 months ago +3

      wtf is this comment - the 'science' side of physics!?

  • @mattstenson7187
    @mattstenson7187 7 months ago +2

    How does lex make such an interesting subject so boring?

  • @SwamiSridattadevSatchitananda
    @SwamiSridattadevSatchitananda 3 months ago

    By 2030 and beyond humanity on Earth will only have one choice
    Either you can live however long you want and in whichever lifestyle you want with the help of angelic ASI, aka Utopia or Heaven
    Or
    You can live only for a predefined set of time and in a predefined way as determined by demonic ASI, aka Dystopia or Hell
    Let’s hope for the best life &
    that humanity will avoid the worst
    Swami SriDattaDev SatChitAnanda

  • @Ayo22210
    @Ayo22210 7 months ago

    Lex, you have to be better at spotting bozos.

  • @mikezooper
    @mikezooper 5 months ago

    But your body heats up!

  • @senju2024
    @senju2024 7 months ago

    I disagree with this guy. AGI is coming very soon. Also, its intelligence is very similar to how humans think, as all its training data is based on humans, including video. You may want to bookmark this video and come back to it 5 years from now to see just how wrong he is.

  • @nicolai_gamulea-schwartz
    @nicolai_gamulea-schwartz 7 months ago +3

    Clever man talking nonsense.

  • @bluesque9687
    @bluesque9687 6 months ago

    Lex has developed a Johnny Depp-like slur.

  • @pauldannelachica2388
    @pauldannelachica2388 7 months ago

    ❤❤❤❤

  • @THOMPSONSART
    @THOMPSONSART 5 months ago

    GPS hates you Sean, LOOLZ!

  • @xonious9031
    @xonious9031 22 days ago

    obviously when he starts talking about "CLIMATE" you know he is not a "real guy"

  • @Tommydiistar
    @Tommydiistar 4 months ago

    This guy still doesn't know when AGI is coming; no one really knows when. I remember that a few months before the Wright brothers flew their first flight, there was a so-called scientist saying the same thing: that humans would never fly, not in the next 200 years.

  • @Greg-xi8yx
    @Greg-xi8yx 6 months ago +1

    Lex just comes off as extremely try hard and cringe when he goes on about love and trying to sound deep and profound. He definitely lacks the self awareness to recognize the transparency of his insincerity.

    • @theotormon
      @theotormon 5 months ago

      I think Lex is sincerely a peace-loving person with faith in people.

  • @donrayjay
    @donrayjay 7 months ago +1

    Of course machines don’t have a “model” of the world, they’re not conscious

  • @Spirit-dg5xi
    @Spirit-dg5xi 6 months ago

    Don't ask a physicist questions about AI. At least not Sean Carroll...

  • @EmilRadsky-ll8kx
    @EmilRadsky-ll8kx 7 months ago

    😂Lex tries to sell AGI to the audience.

  • @3335pooh
    @3335pooh 6 months ago

    enjoy coca-cola

  • @stephenferraro
    @stephenferraro 5 months ago

    This guy is really out in left field. I have never once gotten any type of emotion from Google Maps telling me where to go, and I ignored it.

  • @dreamulator
    @dreamulator 6 months ago

    AI is currently overglorified brute forcing.

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 7 months ago +1

    why would they be "way worse" ? dumb statement..

  • @shinkurt
    @shinkurt 7 months ago

    Smart man but sounds like he opens his mouth about things he has zero understanding on

  • @inadad8878
    @inadad8878 7 months ago

    With the new nvidia chips they are just going to throw more compute at the problem and that is probably all the whole system really needs to be dangerous! - coder for 25 years

    • @quantumpotential7639
      @quantumpotential7639 7 months ago

      Wow, 25 years is a lot. What type of laptop should I get next? 🤔 I have a $300 budget. Any ideas for best computer to use CHAT GPT?? THANKS 😊

  • @5dollarshake263
    @5dollarshake263 6 months ago

    Now somebody go tell Rogan to stop acting like AI is about to shut off the electric grid between everything except itself and every armed drone in the military.

  • @zacharychristy8928
    @zacharychristy8928 5 days ago

    Lex is insanely naive

  • @ScreamingAI
    @ScreamingAI 7 months ago +1

    GAAAAAH!

  • @NormenHansen
    @NormenHansen 7 months ago

    Botox?

  • @bdown
    @bdown 7 months ago +2

    This guy! He thinks he knows more about LLMs than the people who build them (and don't understand them). All of these self-inflated physics guys' entire bed of intelligence became inert and worthless with GPT-4 😂 Any 2nd grader with AI would smoke this 🤡 on Jeopardy in a nanosecond 😂

    • @ChancellorMarko
      @ChancellorMarko 7 months ago +3

      Okay let's see who unifies gravity with quantum mechanics first - Physicists or ChatGPT

    • @opensocietyenjoyer
      @opensocietyenjoyer 7 months ago +2

      You haven't even completed high school. Sit down for a moment.

    • @businessmanager7670
      @businessmanager7670 7 months ago

      @@ChancellorMarko Scientists around the world tried to solve the protein folding problem for over 5 decades and weren't able to. AlphaFold solved the problem in just 5 years. It smoked all the scientists.
      So.... checkmate.

    • @bdown
      @bdown 7 months ago

      @@ChancellorMarko See who cures cancer and gives us life-extension technology first, physicists or AI 🤣

    • @EmilRadsky-ll8kx
      @EmilRadsky-ll8kx 7 months ago

      @@bdown Medical scientists that use AI; AI or AGI itself cannot solve those problems.

  • @anglewyrm3849
    @anglewyrm3849 6 months ago +1

    10:40 "Do you think physics can help expand compute?" photonic chips:
    th-cam.com/video/TrV2Xcm5xy4/w-d-xo.htmlsi=v-a4EIhH_MpcMHMm

  • @donovangraham8932
    @donovangraham8932 6 months ago +1

    Smart individual but a patronizing guest.
    His conversation is toned as if talking to inferior forms of life.
    Not the type of character that achieves his self-projected status.
    Unfortunately, his comments about eliminating the abbreviation AGI make him seem unconfident and incapable of having a deeper debate.
    Hope he gets over himself and remembers that there is a considerable number of influences that no human can come close to calculating..... which in turn would give him a 99.9% chance of being wrong 🫠