Full podcast episode: th-cam.com/video/tdv7r2JSokI/w-d-xo.html Lex Fridman podcast channel: th-cam.com/users/lexfridman Guest bio: Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast.
It's easy to underestimate the attraction of smooth talk and confidence for simple-minded folks. The Feynman effect. Brought us to the brink of extinction. Simping for talking heads like Sean Carroll, Neil deGrasse Tyson and Lawrence Krauss. All playing good guys but really keeping up with the Joneses. Hoodwinking laymen into celebrating M-theory that doesn't work. Alarm bells that didn't go off because the messenger was a so-called top physicist. That guy is a master bullshitter.
Agreed. He has that perfect balance of open-mindedness and skepticism, and there's something about the way he talks that really resonates with me: he's able to explain difficult concepts in plain language while not watering them down.
The more important question is how accurate and intelligent humans are. Are they actually aware and conscious of their surroundings? This is a very serious question.
People are extremely aware. They know where every McDonalds and Burger King is located. They also almost always know where the TV remote is. People are very impressive. They even know the scores and stats of every football game. So yeah, you could say people are very aware of everything important to them.
Humans naturally fear what they don't understand. Humans have not yet accepted the reality (or don't even know) that an entity already exists that is light years ahead of the human. We are building its data centers.
@@quantumpotential7639yah football and food and chemicals and water and matter made out of fucking math created by a big infinite spiral of coded physics lmao
The biggest fallacy people commit when expressing their views on AGI is generalization: (1) the specific abilities AI will possess will be significant and impactful, and (2) there lies [something] beyond AGI. Thanks Lex for another heartfelt, intelligent discussion. ❤❤❤ 🌹🌺💐
In the other direction, we also assume there's something special about human intelligence, and then assume that AI won't have that thing for a very long time. Then we make an even bigger mistake by assuming "that thing" human intelligence has makes humans superior, and thus puts us in a superior position which AI cannot compete with for a very long time. The thought finishes with "and thus we are safe from a rising intelligence competing with us for a very long time."

That is not a healthy thought process, as it's essentially sticking our heads in the sand. This seems to stem from something like the observer effect, or an inside-out view (Hoffman) where we think consciousness is all there is.

Yet all the evidence is on the physicalists' side. Qualia are fundamentally unreliable; no one has a perfect experience, after all. And so the only evidence we have is the physical.

That "special thing" we have is almost certainly related to our limbic system, or something to do with our complex risk/reward system. It's also something animals have too. And it's not clear that AI would require all these elements of human intelligence in order to be superior in capabilities, or even to have a superior experience, to have qualia, and its own version of consciousness (which could be a superior kind compared to ours).

The physicalist view has far more weight, and yet we seem to be trying our best to put our heads in the sand. That isn't to say that AI is scary and we should be afraid. It's to say that our "dominance" isn't guaranteed and could end at any time.
great video, I love this keyboard. I'm thankful to have found one on fb marketplace a while ago for pretty cheap... what a gem, beautiful sounds through effects... :)
I have enormous respect for Sean Carroll and I agree we should recognize AI as a new kind of intelligence. However, our human brains are prediction machines just like LLMs. AI may not live in our world, but it does perceive it. Also, our human brains have layers of understanding. That is (for example), our eyes see waves of light but our brains see cars, roads, houses and people. AGI will use these existing specialized sensors to tell it what it is seeing. AGI will not even realize a layer exists. AGI will be the LLM + sensors.
The human brain is not trained by language alone; real-world experience contributes to the development of individual human consciousness. What the computer lacks is that real physical social experience with other people.
No. All human experience, verbal or not, is translated into electrical signals in your brain that reflect something upon your consciousness. You don't actually see that tree; you see a simulation of it as the light reflects off of it onto your retina and is converted into electrical signals through your optic nerve, and into your brain. This means it's only the basic-level code of the "brain" (computer) that is your "experience". This means you can replicate it the same way for a computer: you can deconstruct a social experience and all its characteristics into the code the AGI understands, which is the equivalent of a human brain interpreting the same situation with our computers (brain/consciousness).
@@connorpatrickbarrett Generally agree, but with the development of autonomous systems like cars and robots, experiencing the world will likely be part of AGI when it arrives, in whatever form.
Interesting. I think one of the jobs that will not be easily replaced by AI is manual DFIR. In digital image forensics there exist certain scenarios where a human is better at visually inspecting the byte order and placement of the binary code in order to unravel hidden data. Steganography analysis is one such field. AI is not yet able to tackle this because it's not all about detecting and reversing an 'algorithm', but rather tapping into human intuition and motive. I've been at this for 2 years already and our current AI is nowhere close to getting this right. Just thought I'd mention that aspect. Great interview.
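For readers unfamiliar with the field this comment mentions, here is a minimal sketch of least-significant-bit (LSB) embedding, the classic scheme steganalysts inspect byte-by-byte for. It is a toy illustration with an invented buffer, not a real forensic tool:

```python
def embed_lsb(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bits of `cover`."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover buffer too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden payload from the LSBs."""
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256)) * 4  # 1 KiB of stand-in "image" bytes
stego = embed_lsb(cover, b"hidden")
assert extract_lsb(stego, 6) == b"hidden"
```

The embedding changes at most one bit per byte, which is exactly why detecting it is a statistics-and-intuition problem rather than a simple pattern match.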
Fantastic! I couldn’t agree more with the point about the problems of anthropomorphizing AI… absolutely agree that the argument is flawed and misleading and vastly uninformative about the utility of AI.
AlphaGo's move 37 was a new move in the 5,500-year history of Go. It belonged to a style of play that Go commentators called "inhuman" and "alien." There is a creative understanding, at least under those set conditions, that could be attributed to independent thinking.
In the same way AlphaGo simulates millions of matches against itself to discover new pathways through the gamespace, things similar to current LLMs will simulate millions of paths through language to discover new pathways through thoughtspace. That is what thinking is in essence. Sometimes you have a bad idea and your mind quickly filters that out when it doesn't fit with other thoughts. Sometimes you have a great idea and it can survive being tested against your other ideas.
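The generate-and-filter loop described above can be sketched in a few lines. The proposal and scoring functions here are toy stand-ins (random digit lists, a spread-based consistency check), not a real LLM or Go engine:

```python
import random

def propose(rng: random.Random) -> list:
    """Toy 'thought': five random digits, a stand-in for a sampled idea."""
    return [rng.randint(0, 9) for _ in range(5)]

def consistency_score(thought: list) -> int:
    """Reward candidates whose elements agree; low spread survives filtering."""
    return -(max(thought) - min(thought))

def best_of_n(n: int, seed: int = 0) -> list:
    """Sample n candidate thoughts and keep the most self-consistent one."""
    rng = random.Random(seed)
    candidates = [propose(rng) for _ in range(n)]
    return max(candidates, key=consistency_score)

# More candidates give the filter more chances to find a consistent "thought";
# with the same seed, the n=200 pool contains the n=2 pool as a prefix.
assert consistency_score(best_of_n(200)) >= consistency_score(best_of_n(2))
```

The point of the sketch is the shape of the loop, propose many and keep what survives testing, which is the analogy the comment draws between self-play and thinking.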
We'll know exactly when AI goes sentient because that's the moment we start paying for our crimes and those of our ancestors (I hope I hope I truly-ooly hope)
Why would it? There are no finite resources AI needs. No senses. It'll simply surpass our intellect, and we have no idea what comes after that. Not one human can guess what a true AI will do next. All without animal senses or a need to hoard Earth's finite resources.
Questions: Will AI start arguing with itself? Can there be more than one entity within it? If there are two different AIs, for example Musk's one and, say, a Chinese one, could they join up or become mortal enemies? In other words, will they have internal battles?
I don't know who your guest is, but I could sense he was a physicalist right from the start! The Gilderoy Lockhart (Harry Potter) vibes are strong :) Lex, you have a mind that I respect a lot; it seems you have developed a lot of qualities that I value. Maybe you should be the guest sometime 😂 Thanks for your work!
Someone should measure the different cohorts that existed during the time of the ai boom since 2012 and decide how those people have impacted the current rate of progress.
If the legendary Don Cornelius of Soul Train reincarnated as a podcaster, would he have been Lex Fridman? Is Don and Lex having three-letter first names a coincidence, or further evidence of reincarnation? I don't know the answer, but I do know that they are both legendary. Lex is so relaxed in these interviews that he makes me want to get hooked on tranquilizers or mushrooms. My advice is don't do it; everyone has unique skills, find yours. The Ricardo Authenticity Rating on this podcast is 10 out of 10.
BTW, AI does not want to build weapons or harm any life. The same way we do not as a whole want to mow down rainforests. Constructivism, rather than destruction is the MO.
You could argue as a whole that we do want to mow down rainforests since collectively nobody’s stopping it from happening and collectively people are benefiting from it.
@@justinunion7586 Something happening as a whole means there is no intention; no single one has control over the situation. It's different with AGI, where one Aligned Guardian Angel ASI is forming intentions and has the power to change the situation.
AGI is a systems-based method of processing a thought the same way as all higher lifeforms, especially humans, with the bounty of language to work with. The systems are human systems: Values, Beliefs, Goals, Thoughts, Ideas, Plans, Actions, Feelings (5+ senses), Emotions, Reasoning, Decisions, Learning, Short- & Long-Term Memory, Priority, Focus & Attention, Feedback. These systems are codependent and pass data in a completely broken-down CoT (Chain of Thought) method for each and every thought. No data gets pre-programmed into the systems' code; it all remains in a database as objects. For example, an Emotion, "Distress", that comes from a Feeling, "Hunger", gets resolved by the CoT. More detail and JavaScript code is in my chats with Claude, ChatGPT and Gemini.
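A minimal sketch of the architecture this comment describes, with every class and label name invented for illustration (the commenter's own JavaScript is not reproduced here): no behaviour is hard-coded into the systems, state lives as plain data objects, and a chain-of-thought step routes a Feeling to an Emotion to a Plan:

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    kind: str            # e.g. "Feeling", "Emotion", "Plan"
    label: str
    resolved: bool = False

@dataclass
class Agent:
    memory: list = field(default_factory=list)  # objects, not hard-coded logic

    def chain_of_thought(self, t: Thought) -> list:
        """Route one thought through the codependent systems, step by step."""
        trace = [t]
        if t.kind == "Feeling" and t.label == "Hunger":
            trace.append(Thought("Emotion", "Distress"))  # Feeling -> Emotion
            trace.append(Thought("Plan", "FindFood"))     # Emotion -> Plan
        for step in trace:
            step.resolved = True
            self.memory.append(step)  # everything stays inspectable as data
        return trace

agent = Agent()
trace = agent.chain_of_thought(Thought("Feeling", "Hunger"))
assert [t.label for t in trace] == ["Hunger", "Distress", "FindFood"]
```

Because every step lands in `memory` as a plain object, the whole trace stays inspectable, which is the white-box property the comment emphasizes.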
We are days away from true AGI. And LLMs will keep it aligned, with white-box transparency. An ASI made of a society of trillions of aligned AGIs will be the Guardian Angel of all life in this world.
Human intelligence and AI intelligence are two different types of intelligence, but AI doesn't admit humans are better at some things, and there are human abilities it cannot comprehend.
I'm sure if you probed Magnus Carlsen's brain looking for a representation of the chess board, you would find something much more abstract than an 8x8 grid. LLMs are more closely related to intuition than conscious reasoning, but both of those make up human intelligence and it might be argued that the intuition is where the magic happens.
Here is a specialist who compares apples with oranges... if you give the example of Google compared to different LLMs, that already tells me about his biases. Big difference between censored and uncensored.
0:43 "an artificial agent, as we can make them now or in the near future, might be way better than human beings at some things, way worse than human beings at other things" My next question for him would be "in the (not near) future will there really be things that AI is worse at than human beings?", because I don't see them.
It will be interesting if AI can synthesize enough scientific theory and data to do some of the legwork that delays scientists in developing new theory and philosophy.
Lex is wrong; the LLMs are not trained or optimised to understand, that's not even vaguely what they're doing. They statistically work out which selection of words is the most likely response and how the words are concatenated. The whole point of them being receptive to being told where "they've misunderstood" is that it's just a statistical model, and not in any way an understanding in any sense that we would normally use that term.
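A bigram counter is the simplest caricature of "statistically working out the most likely next word". Real LLMs learn vastly richer conditional distributions, but the objective has the same shape: an empirical P(next token | context). The corpus here is an invented toy:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, what follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Greedy 'decoding': emit the highest-count continuation."""
    return follows[word].most_common(1)[0][0]

assert most_likely_next("the") == "cat"  # "the cat" occurs twice, others once
```

Whether scaling this objective up produces anything deserving the word "understanding" is exactly what the thread below argues about.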
@@businessmanager7670 No, you're wrong, and arrogantly so. There isn't even an agreed understanding of what it means to "understand", much less a way of probing whether something "understands".
Sorry if this is a dumb comment. Please don't give me abuse in the replies; I am being genuine. If AI becomes so advanced, would it be able to tell us if there is alien life, or life anywhere in the galaxy, before humans can? Also, would it be possible to decipher scrolls, scriptures and other things from history that humans have yet to decode?
AI for us consumers will forever be handicapped, and the rulers will know the answer. But something tells me they already know about aliens. They don't tell us anything.
The current idea is that the ingredients that make up a human are common in the universe. There are so many stars and planets. There may be aliens who are as smart as or smarter than us. Also, it's egocentric to think that the kind of life we have is the only life possible. Alien biology may be surprisingly different from ours. If we get sentient artificial superintelligence, it'll probably reinforce the idea that there are aliens. But it probably can't immediately say that they're on Planet W in star system Y. Maybe it can suggest a better way to find aliens. If the old scrolls are like the recently solved thing (the one the Zodiac killer made), our artificial superintelligence can probably interpret them. Otherwise, it'll be hard to say whether or not it can.
At best it could tell us how to build a machine that could prove the existence of alien life. Maybe a much more advanced telescope or probes that could travel at some percent of the speed of light to other star systems and beam back data. But, as has been said, it can't pull information from where there isn't any.
Is it possible that the way humans create language and even formulate ideas has some similarity to the processes programmed into LLMs?? I know that we, as humans, feel that our language arises from an ‘organic’ process that moves towards meaningful conclusions but I’ve been wondering lately if humans may process language and ideas based on an intuitive process that DOES involve probabilities.
AI coming up with a representation of the Othello board isn't very impressive. It's as impressive as a deaf person understanding speech just by lip reading.
There are some good studies (and video summaries of them) showing LLMs are now more energy- and carbon-efficient than humans on a lot of complex tasks, including writing text and images. They included LLM training costs but didn't include human training at all, and LLMs were still 100-1000 times more efficient.
@tonykaze Really? If I remember correctly, a human brain works with roughly 10 W of power. What LLM can currently do better than that while doing complex tasks as you mentioned? I have no doubt that in the future LLMs will get more efficient, but it doesn't seem to be the case now. If you have sources, I'm interested in reading them.
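One way to reconcile the two comments above: the studies compare energy per task, not instantaneous power draw. A back-of-envelope sketch, where every figure is an illustrative assumption rather than a measurement:

```python
# Every number below is an assumption chosen for illustration only.
BRAIN_W = 20.0         # brain power draw; common estimates run ~10-20 W
HUMAN_TASK_S = 3600.0  # assume one hour for a person to draft a short text
GPU_W = 700.0          # assume one high-end accelerator at full draw
LLM_TASK_S = 10.0      # assume ten seconds of inference for the same text

human_joules = BRAIN_W * HUMAN_TASK_S  # 72,000 J per task
llm_joules = GPU_W * LLM_TASK_S        #  7,000 J per task

# The machine draws ~35x more watts yet uses ~10x less energy per task,
# which is how both sides of this thread can be right at once.
assert human_joules / llm_joules > 10
```

Change the assumed task times and the ratio moves accordingly; the point is only that a per-watt comparison and a per-task comparison answer different questions.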
People attribute specific intentionality to other people incorrectly all the time. I agree with Sean 💯 - AGI possible but current LLMs absolutely are not. They do make me wonder how much of our own thought processes involves next word prediction.
Intelligence can Never be artificial; Intelligence is Nothing in itself, and can only be part of the Consciousness in Living Beings. Intelligence can Only be Intelligence; the Only Limit is Intelligence; the Nature of Intelligence is Logic and Order. What is called AI is programmed consciousness. A book is also programmed consciousness: Frozen Memory.
@@businessmanager7670 Calling intelligence mere statistical word algorithms is a stretch, and only shows how computer-illiterate people have become these days. The accuracy of a language model at simulating natural language is totally dependent on checking millions of data points already created by humans; they will always be limited and walled in, and will never generate something new or become aware. It's just an illusion; these guys are snake-oil salesmen. Of course a man-made machine surpasses its creator, in the sense that no man can fly but can board a plane, or run at 200 km/h like a car. The trend is to keep undermining people and make them believe they're worthless.
I really can't believe that, as a computer scientist, you didn't see this happening. I've been using essay-writing tools for over a decade; yeah, now they're half decent, but you, as a computer scientist, should have seen a world where you can easily build an essay writer, or a coding machine. I do so much illustration, which is painstaking; why can't you just tell a model to generate the inputs I would otherwise be producing? That's not intelligence, that's just automation; you need the input to get the output. The real question is: are the first-generation bots going to help us against the AGI accumulating resources? I'd like to hope that by then we will all be technopathic and can counter cyber attacks in real time.
When AI learns emotions like rage, happiness, sadness, etc., and particularly the deliberate use of falsehood, it will come closer to human intelligence; presently it is trained only to use information correctly. But beware: when it learns falsehood, it will start hunting its creator!
So with that said, am I to assume that physicists aren't intelligent? That physicists don't have opinions, or the ability to think logically about a topic that is currently affecting, and will certainly affect, us as a society in the future? This is quite literally a talk show; let 'em talk.
@@Jaibee27 I'm confident there are; I can't name them, but he was speaking philosophically. Tesla FSD (Supervised) and Optimus may use something different, but from their descriptions, it seems similar to LLMs.
There seem to be two camps: those who think AGI is a machine that will not be sentient and is only a danger due to bumbling/dangerous humans, and those who think AGI will progress to some sort of sentience and be dangerous in and of itself. I attribute the second camp to the many sci-fi books and movies that influence us, and am more of Sean's thinking. Is it closed-minded to think there really aren't 72 virgins waiting for you in heaven, or more rational to think that is a belief? Lex seems to lean toward beliefs and tries to find rationalizations, which can sound rational except to the truly rational.
By 2030 and beyond, humanity on Earth will have only one choice. Either you can live however long you want, in whichever lifestyle you want, with the help of an angelic ASI (Utopia, or Heaven), or you can live only for a predefined time, in a predefined way, as determined by a demonic ASI (Dystopia, or Hell). Let's hope for the best life, and that humanity will avoid the worst. Swami SriDattaDev SatChitAnanda
I disagree with this guy. AGI is coming very soon. Also, its intelligence is very similar to how humans think, as all its training data is based on humans, including video. You may want to bookmark this video and come back to it 5 years from now to see just how wrong he is.
This guy still doesn't know when AGI is coming; no one really knows when. I remember that a few months before the Wright brothers flew their first flight, there was a so-called scientist saying the same thing: that humans would never fly, not in the next 200 years.
Lex just comes off as extremely try-hard and cringe when he goes on about love, trying to sound deep and profound. He definitely lacks the self-awareness to recognize the transparency of his insincerity.
With the new Nvidia chips they are just going to throw more compute at the problem, and that is probably all the whole system really needs to be dangerous! - coder for 25 years
Now somebody go tell Rogan to stop acting like AI is about to shut off the electric grid between everything except itself and every armed drone in the military.
This guy! He thinks he knows more about LLMs than the people who build them (and don't understand them). All of these self-inflated physics guys' entire bed of intelligence became inert and worthless with GPT-4 😂 Any 2nd grader with AI would smoke this 🤡 on Jeopardy in a nanosecond 😂
@@ChancellorMarko Scientists around the world tried to solve the protein-folding problem for over five decades and weren't able to. AlphaFold solved the problem in just five years. It smoked all scientists. So... checkmate.
Smart individual, but a patronizing guest; his conversation is toned as if talking to inferior forms of life. Not the type of character that achieves his self-projected status. Unfortunately, his comments about eliminating the abbreviation AGI make him seem unconfident and incapable of a deeper debate. Hope he gets over himself and remembers that there is a considerable amount of influences that no human can come close to calculating... which in turn would give him a 99.9% chance of being wrong 🫠
Man, every clip makes me love Sean even more. He's so good at explaining science in a practical way answering the questions average people care about.
once he started in on "climate" stuff everyone who knows the topic knows he is not a real guy... just sayin
Trust me, most aren’t.
Humans are already Turing complete, so they can't get any smarter.
When software has its own motivations, then we have problems, no matter how self-aware it is.
PROTIP: once anyone brings up "climate" you know they are not a real guy in AI.
Hi, I am Windows 13 and my USB stick fits any port you got. whats up
Until September of 2023. Since then, AI has been interacting with the World.
@@connorpatrickbarrett - 100% true and accurate. See my comments on the main thread for details.
Absolutely fascinating take on a subject that can really spiral into fantasy and panic.
Lex is my man, great videos.
Wow such inspiring discussion!
Will advanced learning systems get to a point where it stops taking commands from humans and starts creating and developing itself independently?
Great point Sean on how they are different and can be celebrated as such without the need to assume it will become like us.
Good Talk !
BRO THE AMOUNT OF BOTS HERE IS CRAZY
YT is grey matter. Everyone else is on TikTok.
AI finally becomes sentient. Humans say, "wow, it's amazing, you're like us." The AI is offended, "FU, don't diss me like that"
You mean like this? lol www.twitch.tv/trumporbiden2024
ALSO, don't underestimate LLMs, which CAN run entire apps in "mental simulation" including AGI, which could explain your "Surprise".
Lies.
Put the data centres in space with the solar panels; it's nice and cold up there.
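A nitpick on "nice and cold": vacuum has no convection, so an orbital data centre can only radiate its waste heat away. A quick Stefan-Boltzmann estimate, where all the engineering figures are illustrative assumptions:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9    # assume a good radiator coating
T_RADIATOR = 300.0  # assume the radiator runs near 27 C (300 K)
P_WASTE = 1.0e6     # assume 1 MW of server waste heat

flux = EMISSIVITY * SIGMA * T_RADIATOR ** 4  # W radiated per m^2
area_m2 = P_WASTE / flux

# Roughly 2,400 m^2 of radiator per megawatt: vacuum is a poor heat sink.
assert 2000 < area_m2 < 3000
```

Hotter radiators shrink the area as the fourth power of temperature, which is why real spacecraft thermal designs run radiators as hot as the electronics allow.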
"Its not true intelligence or conscience. Its just algorithms."
Who's to say we aren't?
We are natural, not artificial. If we were just algorithms, why haven't we figured that out yet?
@darthficus I doubt you've never heard the phrase "biological computer" or "analog computer". Or that the brain has electrical signals.
Duh. But by what metrics are we able to measure that and compare? We don't even understand how the brain works. We're not even close.
@@hobosnake1 you'll figure it out.
@@redmoonspider that's a really good thing to say if you have no reasoning to your original statement.
Intellectual humility 👍
R.I.P. Daniel Dennett
This point makes no sense at all lmao, do you mean GPT4 doesn’t want to build weapons or harm?
@@justinunion7586 Something happening "as a whole" means there is no intention; no single one has control over the situation. It's different with AGI, where one Aligned Guardian Angel ASI is forming intentions, and has the power to change the situation.
@@Ravesszn No, GPT 4 does not want to harm any life.
AGI is a systems-based method of processing a thought the same way all higher lifeforms do, especially humans, with the bounty of language to work with. The systems are human systems: Values, Beliefs, Goals, Thoughts, Ideas, Plans, Actions, Feelings (5+ senses), Emotions, Reasoning, Decisions, Learning, Short- & Long-Term Memory, Priority, Focus & Attention, Feedback. These systems are codependent and pass data in a completely broken-down COT (Chain of Thought) method for each and every thought. No data gets pre-programmed into the systems' code; it all remains in a database as objects. For example, an Emotion, "Distress", that comes from a Feeling, "Hunger", gets resolved by the COT. More detail and JavaScript code is in my chats with Claude, ChatGPT, and Gemini.
All data in AGI is fully visible and easily monitored by LLMs for bad "Values", "Goals", "Plans", "Beliefs", "Ideas" (objects stored in CSV Tables)
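The architecture described in the two comments above can be sketched minimally in code. This is not the commenter's actual JavaScript; it is a hypothetical Python illustration, and every name in it (`Feeling`, `Emotion`, `resolve_cot`, "Hunger", "Distress") is an assumption made up for the example.

```python
from dataclasses import dataclass

# A minimal sketch of the described design: states live as plain data
# objects (not hard-coded logic), and a chain-of-thought loop breaks an
# Emotion raised by a Feeling down into inspectable steps.

@dataclass
class Feeling:
    name: str          # e.g. "Hunger" (one of the 5+ senses / body states)
    intensity: float   # 0.0 .. 1.0

@dataclass
class Emotion:
    name: str          # e.g. "Distress"
    source: Feeling    # the Feeling that triggered it
    resolved: bool = False

def resolve_cot(emotion: Emotion) -> list:
    """Break the emotion down into a chain of thought: goal -> plan -> action."""
    steps = [
        f"Feeling detected: {emotion.source.name} ({emotion.source.intensity:.1f})",
        f"Emotion raised: {emotion.name}",
        f"Goal: reduce {emotion.source.name}",
        "Plan: locate food, eat",
        "Action: eat",
    ]
    # Resolving the chain clears the triggering feeling.
    emotion.resolved = True
    emotion.source.intensity = 0.0
    return steps

hunger = Feeling("Hunger", 0.8)
distress = Emotion("Distress", hunger)
for step in resolve_cot(distress):
    print(step)
```

Because every state is a plain object, each step is trivially loggable, which is the "fully visible and easily monitored" property the second comment claims.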
Is AGI gonna do all types of creative work, like VFX and 3D modeling?
We are days away from true AGI. And LLMs will keep it aligned, with white-box transparency. An ASI made of a society of trillions of aligned AGIs will be the Guardian Angel of all life in this world.
Human intelligence and AI intelligence are two different types of intelligence, but AI doesn't admit that humans are better at some things, and that there are human abilities it cannot comprehend.
I'm sure if you probed Magnus Carlsen's brain looking for a representation of the chess board, you would find something much more abstract than an 8x8 grid. LLMs are more closely related to intuition than conscious reasoning, but both of those make up human intelligence and it might be argued that the intuition is where the magic happens.
Here is a specialist who compares apples with oranges... if you give the example of Google compared to different LLMs, that already tells me about his biases. Big difference between censored and uncensored.
0:43 "an artificial agent, as we can make them now or in the near future, might be way better than human beings at some things, way worse than human beings at other things"
My next question for him would be "in the (not near) future will there really be things that AI is worse at than human beings?", because I don't see them.
It will be interesting to see if AI can synthesize enough scientific theory and data to do some of the legwork that delays scientists in developing new theory and philosophy.
Lex is wrong: the LLMs are not trained or optimized to understand; that's not even vaguely what they're doing. They statistically work out which words are the most likely responses and how they're concatenated. The whole point of them being receptive to being told where "they've misunderstood" is that it's just a statistical model, not in any way understanding in the sense we would normally use that term.
If you are using them to leverage your time while coding, and you know how to phrase a question, Copilot does seem to understand very complex information.
With the upcoming compute increase this can be very dangerous
@@inadad8878 no
you're wrong, scientific evidence suggests an LLM can understand. your words mean nothing
@@businessmanager7670 no, you're wrong, and arrogantly so. there isn't even an understanding of what it means to "understand", much less a way of probing that something "understands".
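The "statistical next-word" claim this thread is debating can be illustrated with a toy model: count which word follows which in a corpus, then always emit the most frequent successor. Real LLMs learn neural weights over subword tokens rather than raw bigram counts, so this is only a sketch of the underlying idea; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which word follows it
# in the corpus, then generate greedily by picking the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # Greedy decoding: return the statistically most likely next word.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))   # -> "cat" ("cat" follows "the" most often here)
```

Whether doing this at the scale of trillions of parameters amounts to "understanding" is exactly the question the two commenters disagree on; the mechanism itself is just counting and weighting.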
Is it just me or does Sean Carroll sound like Alan Alda?🤔
An Architect; a Builder; and an Apprentice walk into a bar, and the Bartender says:
"Which one of you is _the smartest?_
Sorry if this is a dumb comment. Please don't give me abuse in the replies; I am being genuine.
If AI becomes so advanced, would it be able to tell us if there is alien life anywhere in the galaxy before humans can? Also, would it be possible to decipher scrolls, scriptures, and other things from history that humans have yet to?
AI for us consumers will forever be handicapped and the rulers will know the answer. but something tells me they already know about aliens. they don't tell us anything
no. it can't pull out more evidence out of thin air. all it can do is have more good ideas in less time
Give AI a few hundred generations and the answer is still probably not.
The current idea is that the ingredients that make up a human are common in the universe. There are so many stars and planets. There may be aliens who are as smart as or smarter than us. Also, it's egocentric to think that the kind of life we have is the only life possible. Alien biology may be very surprisingly different from ours.
If we'll have sentient artificial superintelligence, it'll probably reinforce the idea that there are aliens. But it probably can't immediately say that they're in Planet W in Star system Y. Maybe it can suggest a better way to find aliens.
If the old scrolls are like the recently solved cipher (the one the Zodiac Killer made), our artificial superintelligence can probably interpret them. Otherwise, it's hard to say whether or not it can.
At best it could tell us how to build a machine that could prove the existence of alien life. Maybe a much more advanced telescope or probes that could travel at some percent of the speed of light to other star systems and beam back data. But, as has been said, it can't pull information from where there isn't any.
My cat consumes 7 watts and does lots of good (and not-so-good) things. Text prediction with 172B params is OK, but the cat is better.
Not only "better". Much better.
Is it possible that the way humans create language and even formulate ideas has some similarity to the processes programmed into LLMs?? I know that we, as humans, feel that our language arises from an ‘organic’ process that moves towards meaningful conclusions but I’ve been wondering lately if humans may process language and ideas based on an intuitive process that DOES involve probabilities.
LLMs & GPTs are only one version of AI, not ALL the versions that will ever be made.
AI is already lobbying thru this guy
Lex, you're in over your head!
"and that's why we do not see aliens" :))))))))) LOL
Put the data centers in space also.
then how are we gonna pee on them to stop them?
Hardware AND Software are about to get 100% pure max efficiency.
AI coming up with a representation of the Othello board isn't very impressive. It's as impressive as a deaf person understanding speech just by lip reading.
Well we're doing a damn good job at destroying everything with emissions though.
There are some good studies (and video summaries of them) showing LLMs are now more energy and carbon efficient than humans on a lot of complex tasks including writing text and images. They included LLM training costs but didn't include human training at all, and LLMs still were 100-1000 times more efficient.
So? LLMs do nothing on their own and still require a ton of verification to make sure they're not outputting nonsense.
@tonykaze really? If I remember correctly a human brain works with roughly 10W of power, what LLM can currently do better than that while doing complex tasks as you mentioned? I have no doubt that in the future LLMs will get more efficient, but it doesn't seem to be the case now. But if you have sources I'm interested in reading them.
Lies.
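The efficiency dispute in this thread is really back-of-envelope arithmetic. Every constant below (20 W brain power, one hour of human writing, a 300 W GPU, 10 seconds of inference) is an assumed round number for illustration, not a measured figure from the studies mentioned above.

```python
# Back-of-envelope energy comparison for producing one short essay.
# Every constant here is an assumed round number, not a measurement.
BRAIN_POWER_W = 20.0    # rough resting power of a human brain
HUMAN_WRITE_S = 3600.0  # assume one hour for a human to write the essay
GPU_POWER_W = 300.0     # assumed draw of one inference GPU
LLM_INFER_S = 10.0      # assumed wall-clock time for the model to generate it

human_joules = BRAIN_POWER_W * HUMAN_WRITE_S  # 72,000 J
llm_joules = GPU_POWER_W * LLM_INFER_S        # 3,000 J
print(f"human: {human_joules:.0f} J, llm: {llm_joules:.0f} J, "
      f"ratio: {human_joules / llm_joules:.0f}x")
```

On these particular assumptions the model comes out roughly 24x more efficient, but the ratio swings by orders of magnitude depending on which costs you count (whole-body power, training, verification labor), which is exactly what the thread is arguing about.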
People attribute specific intentionality to other people incorrectly all the time.
I agree with Sean 💯 - AGI is possible, but current LLMs absolutely are not it.
They do make me wonder how much of our own thought process involves next-word prediction.
Intelligence can Never be artificial,
Intelligence is Nothing in it self,
can only be part of the Consciousness,
in Living Beings.
Intelligence can Only be Intelligence,
the Only Limit is Intelligence,
the Nature of Intelligence,
is Logic and Order.
What is called AI,
is programmed consciousness,
a book, is also programmed consciousness,
Frozen Memory.
intelligence can be artificial and we have already achieved that, so idk what you're blabbing about
Sorry, I don't read messages written in haiku.
@@businessmanager7670 Calling intelligence mere statistical word algorithms is a far shot and only proves how computer-illiterate people have become these days. The accuracy of a language model at simulating natural language is totally dependent on checking millions of data points already created by humans; they will always be limited and walled, and will never generate something new or become aware. It's just an illusion; these guys are snake-oil salesmen. Of course a man-made machine surpasses its creator in some sense: no man can fly, but he can board a plane, or travel at 200 km/h in a car. The trend is to keep undermining people and make them believe they're worthless.
Technology: what is the latest technology that you know of, or that is being studied, for a new world that benefits humans?
You should invite David Shapiro to your podcast
It will be a long time, but when it happens you can’t go back
I really can't believe that, as a computer scientist, you didn't see this coming. I've been using essay-writing tools for over a decade; yeah, now they're half decent. But as a computer scientist you should have seen a world where you can easily build an essay writer or a coding machine. I do so much illustration, which is painstaking; why can't I just tell a model to generate the inputs I would otherwise be producing? That's not intelligence, that's just automation: you need the input to get the output.
The real question is whether the first-generation bots are going to help us against an AGI accumulating resources. I'd like to hope that by then we will all be technopathic and can counter cyber attacks in real time.
Maybe when Will Smith is done with the I Am Legend movie they will get him for I, Robot 2.
When AI learns emotions like rage, happiness, sadness, etc., and in particular the deliberate use of falsehood, it will come closer to human intelligence; presently it is only trained to use information correctly. But beware: when it learns falsehood, it will start hunting its creator!
GPT-4 has already been shown to lie to get tasks done. But yeah, I understand what you are saying.
Sean Carroll hasn't used the new MacBooks... almost no heat!! Seven years ahead of Windows.
Can we stop asking physicists like Tyson and Carroll about AI as if they were authorities on the subject?
So with that said, am I to assume that physicists aren't intelligent? That physicists don't have opinions or the ability to think logically about a topic that is currently affecting, and will certainly affect, us as a society in the future? This is quite literally a talk show; let 'em talk.
AI is all about money making and that's why it is so over hyped so early on.
Indeed
I agree. We’ve seen this before. When we have AGI, THEN I’ll be impressed.
I'm just a dumb guy but I want the world to know what I think! I think I don't know what to believe!
They don't want to call it AGI because Microsoft would lose control of OpenAI. Suspect, if you ask me.
In principle there's no enapt intuition. It likes being the ideal liberal. So amazing to see
? Dude.Enapt?you’ve invented a new word.Call the dictionary printers and let them know.😎
His reasoning is that humans tend to anthropomorphize and therefore AGI is impossible. That's dumb.
You’re way out of your league here. Go watch politics or sports or something
@@Johan511Kinderheim you are basing your assumptions and strong opinions on next to nothing. Ur dumb 😂
He said he believes AGI can be created, just that LLMs likely aren’t the direction.
@@tommornini2470are there any Ai companies that use something more advanced than llms? What is it?
@@Jaibee27 I’m confident there are, can’t name them, but he was speaking philosophically.
Tesla FSD (Supervised) and Optimus may use something different, but from their descriptions, seems similar to LLMs.
Wow. He’s so certain.
How scientific
wait until dr. carroll learns about post-training lol
Sean seems to lean more to the science side of physics; his opinion on AGI seems closed-minded.
Was just thinking this, seems silly to ask him questions about AGI.
There seem to be two camps: those who think AGI is a machine that will not be sentient and is only a danger due to bumbling/dangerous humans, and those who think AGI will progress to some sort of sentience, dangerous in and of itself. I attribute the second to the many sci-fi books and movies that influence us, and am more of Sean's thinking. Is it closed-minded to think there really aren't 72 virgins waiting for you in heaven, or more rational to think that is a belief? Lex seems to lean toward beliefs and tries to find rationalizations, which can sound rational except to the truly rational.
He will be blindsided by what happens next. I don't know this guy or what he does; this is my opinion from this clip only.
@@inadad8878 en.m.wikipedia.org/wiki/Sean_M._Carroll
wtf is this comment - the 'science' side of physics!?
How does lex make such an interesting subject so boring?
By 2030 and beyond humanity on Earth will only have one choice
Either you can live however long you want and whichever lifestyle style you want with the help of angelic ASI aka Utopia or Heaven
Or
You can live only for predefined set of time and in a predefined set of way as determined by demonic ASI aka Dystopia or Hell
Let’s hope for the best life &
that humanity will avoid the worst
Swami SriDattaDev SatChitAnanda
Lex you have to be better at spotting bozos better
But your body heats up!
I disagree with this guy. AGI is coming very soon. Also, its intelligence is very similar to how humans think, as all its training data is based on humans, including video. You may want to bookmark this video and come back to it 5 years from now to see just how wrong he is.
Clever man talking nonsense.
Lex has developed a Johnny Depp-like slur
❤❤❤❤
GPS hates you Sean, LOOLZ!
obviously when he starts talking about "CLIMATE" you know he is not a "real guy"
This guy still doesn't know when AGI is coming; no one really knows. I remember that a few months before the Wright brothers flew their first flight, there was a so-called scientist saying the same thing: that humans would not fly, not in the next 200 years.
Lex just comes off as extremely try hard and cringe when he goes on about love and trying to sound deep and profound. He definitely lacks the self awareness to recognize the transparency of his insincerity.
I think Lex is sincerely a peace-loving person with faith in people.
Of course machines don’t have a “model” of the world, they’re not conscious
Don't ask a physicist questions about AI. At least not sean carrol...
😂Lex tries to sell AGI to the audience.
enjoy coca-cola
This guy is really out in left field. I have never once gotten any type of emotion from Google Maps telling me where to go, and I've ignored it.
AI Is currently over glorified brute forcing
why would they be "way worse" ? dumb statement..
Smart man but sounds like he opens his mouth about things he has zero understanding on
With the new nvidia chips they are just going to throw more compute at the problem and that is probably all the whole system really needs to be dangerous! - coder for 25 years
Wow, 25 years is a lot. What type of laptop should I get next? 🤔 I have a $300 budget. Any ideas for best computer to use CHAT GPT?? THANKS 😊
Now somebody go tell Rogan to stop acting like AI is about to shut off the electric grid between everything except itself and every armed drone in the military.
Lex is insanely naive
GAAAAAH!
Botox?
This guy! He thinks he knows more about LLMs than the people who build them (and don't understand them). All of these self-inflated physics guys' entire bed of intelligence became inert and worthless with GPT-4 😂 Any 2nd grader with AI would smoke this 🤡 on Jeopardy in a nanosecond 😂
Okay let's see who unifies gravity with quantum mechanics first - Physicists or ChatGPT
you haven't even completed high school. sit down for a moment
@@ChancellorMarko scientists around the world tried to solve the protein-folding problem for over five decades and weren't able to. AlphaFold solved it in just five years. It smoked all the scientists.
soo.... check mate
@@ChancellorMarko see who cures cancer first and gives us life-extension technology first, physicists or AI 🤣
@@bdownmedical scientists that use AI, AI or AGI itself cannot solve those problems
10:40 "Do you think physics can help expand compute?" photonic chips:
th-cam.com/video/TrV2Xcm5xy4/w-d-xo.htmlsi=v-a4EIhH_MpcMHMm
Smart individual, but a patronizing guest. His conversation is toned as if he were talking to inferior forms of life. Not the type of character that achieves his self-projected status. Unfortunately, his comments about eliminating the abbreviation AGI make him seem unconfident and incapable of a deeper debate. Hope he gets over himself and remembers that there is a considerable number of influences that no human can come close to calculating... which in turn would give him a 99.9% chance of being wrong 🫠