The base assumptions of Sam's idea are so preposterous and laughable if you blind, emotional atheists cannot se it, you are simply lying to yourselves. Lex 'bodied' the 'intellectual' Sam Harris. Sam is talking nonsense and his foolish laymen, niche, pseudo-intellectual followers may just accept most of these baseless naturalistic, laughable assumptions.
@@niveshproag3761 @Nivesh Proag His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Sam doesn't really have an argument here. His first basic assumptions are quite straightforward - essentially just that AI will be created. Then, he goes way out on a limb and posits that AI will most likely be inherently motivated to work against the best interests of humanity. However, he provides very little support for this assumption, the closest he comes to making a supporting argument here is to compare the circumstance to humanity's treatment of birds - but in order for this to be a valid comparison, the AI would have to have similar selfish motivations as humans, and a similar level of disinterest in our individual wellbeing as we take in pigeons. Why would a mind without evolved selfish traits decide to be so callous? Surely if the AIs are smarter than us, they will not be making decisions based entirely on simple motives like greed and control
@@dahirhussein1839 How do atoms which contain no metaphysical elements of a car make up a car that can travel so fast? The idea is simply that complex arragement of a simple thing can form something complex. Just because the basic element is simple doesn't mean the whole has to be simple as well. Not a crazy idea. You can appeal to the complexity of a human all you want, doesn't mean it cannot be made from simple atoms arranged in complex ways.
He's like the perfect hybrid. And just for fun, look at his name : ''Lex Fridman''. I could loosely translate that to ''Law Freed Man''. In other words, a social experiment done right.
I'm with Sam on this. We should be very worried. Just look around at all the very smart engineers and computer programmers that produce malicious software and weapons of mass destruction. Is Lex suggesting that it will be only the nice guys that will design and control AI? That's never going to happen!
I was disappointed with the seamless way they transitioned from a discussion of 'super-intelligence' analogous to human intelligence to autonomous driving/weapons. It seems to me like driving a vehicle in a largely controlled system, or even firing a weapon accurately at a 'defined' target is completely different to, say, imagining one's self/existence, pursuing questions of meaning, the abstractions of metaphysics, and other things that seem to transcend our motor-skills and basic conscious neurology. Surely these two think along those lines and know that this is a fundamentally important element of the debate. I assume they both think these properties of the human experience are 'emergent' properties of an increasingly complex mind, but it would be nice to hear them discuss these things.
Did you know there were people who thought women's uterus would fall out if they rode a train in the 1880's? You don't understand it, so you're scared. It's normal, but unjustified.
@@PersistentDissenter Yes, people often fear what they don't fully understand. Does that mean that any potential threat that we don't fully understand should therefore be treated as "unjustified" and compared to the fear of uteruses falling out? Of course not. There will be those who are unjustified and those who are totally justified. We should address the problem itself and not just make random comparisons completely out of context.
@@PersistentDissenter Of course it was an analogy. A very dumb one. if we applied that analogy to everything we don't fully understand, we would be underestimating countless threats. Just because something in the past was silly and turned out to be nothing to worry about, that means absolutely nothing with respect to other issues that are completely unrelated. If you try hard enough, you'll also find examples of great threats that were downplayed and resulted in disaster. What does that have to do with AI?
I think that Sam hit the nail on the head with our wisdom not growing as fast as our power, but I think it applies to much more than just AI and he hinted at that with talking about creating a black hole with the collider. We are dangerous creatures who don't seem to understand what we are doing most of the time
Or driven by greed. I was mercilessly injured with a new medical device that didn,t work and maimed many people, but the company lied about the results and hid the negative outcomes, and continued using it, with no regard to human suffering. All about the money$$$$.
Creating a black hole in a lab that would rip earth apart is impossible though. You'd have to concentrate more energy (mass) into a single point than the total mass of the earth. That requires like billions of times more energy than our whole civilization consumes.
@@auditoryproductions1831 You're thinking with traditional forms of chemical energy. For this thought experiment I think its necessary to asssume we have already cracked and dominated the quantum.
I'll do you one better!.. everybody knows this stuff and they will still continue to do what they want.. if I were you I would hold on to this piece of info..
@Nivesh Proag His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 In a certain sense you are right but also wrong, your mistake is thinking that just because your counterpart is wrong about something that means you are right, to begin with what is being discussed is not a yes or no answer, it is something more open than that. Harris is wrong in thinking that consciousness comes from atoms, but that does not mean that a materialistic origin of consciousness has been completely ruled out, there are behaviours in nature that can only exist when there are atoms working together to form molecules and then these molecules have different effects depending on the structure they have, even though they are made of the same atoms With all this i just want to say that the materialist explanation of consciousness is still a viable explanation, and with respect to god, well . . . That's another topic
@@maverickjared4931 consciouness most likely comes from dark energy. Our brains could act as a recepticle to focus and tune into this force as like an antenna. And since classic computer are based on electronic transistors, they are unable to interact with the dark energy field. I think this makes most sense. Consciouness is an unexplained phenomenon, dark energy is an unexplained phenomenon that exists through out our galaxy, I think they could explain each other. Though actual life started off as inanimate biological machines that eventually developed mechanisms to sense the surroundings, it developed eyes to sense light, ears for sound, sense of touch for energy/heat, taste to sense chemical compositions, and then developed consciouness to sense dark energy. So perhaps the petuitary gland is a dark energy antenna tuned into dark energy that is where our consciousness truly lies.
@@dahirhussein1839 i dis agree althought i agree with the nature of conciousness being vauge but neither does islam answer anything objective truth can never be acsses by humans neither atheist nor duelist can answer this
I find it fascinating to think that an AI may one day solve, or largely contribute to the solving of some unsolved problem in science and yet, when we humans seek to learn HOW the problem was solved, the AI can 'show its working' and yet we may STILL not be able to comprehend the logical succession of arguments. It'd potentially lead to a situation where we could entrust the solving of other problems by AI but also shift into a situation where we start having to take on trust their solutions and simply 'black box' the AI's interior computational logic.
We should never trust anything we can't understand as a species. However we will trust the experts that they understand the AI's reasoning. And after a few generations we discover there was no AI and the "experts" were just technocrats telling us what to do.
Thank you for the follow up it's an amazing look at how quickly minds are changing about what is happening and more importantly HOW it is happening. I highly recommend everyone watching this video watch the updated follow up. Thank you again for both.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
"...for them to succeed, they're going to have to be aligned in the way we humans are aligned with each other..." Which humans? I don't have to remind you that humans tend to fight with each other a lot.
people don't think about this enough... "Which Humans"? is correct. Same with communication. We talk about finding and communicating with Aliens. We can't even communicate with other species on this planet, or with others of our own species, or even our own tribe or sometimes own family members. Family member interests aren't aligned so how will a "human" creating military AI in china have the same interest as a Neurologist creating AI somewhere else work? Complex systems are uncontrollable eventually and this will get out of control at some point. Lex is nieve here i think.
I’m not so sure we are truly having them. Podcasters are having them, and I agree that is a good thing, but I’m not really seeing a commitment to discussing and truly working through this AI problem from the governments and big corporations of the world.
Sam: There are a lot of scenarios that could turn out bad for us. Lex: But I just don't think it will happen, it's less likely. Lack of imagination was the correct diagnosis from Harris.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
i understand both of their perspectives. Lex has actually seen machine learning in action and even learned it i believe which is why, as far as he might have seen, machines seem to only get better at its given task, which may be right to an extent, but then Sam Harris says (and i agree) that AI will QUICKLY outperform humans, and then even ITSELF theoretically right? anything that is theoretically possible is a cause for concern, and honestly Lex's point about "human integration with AI" sounds like a bit of a rationalization for AI, being totally ignorant of the fact that once DEFINED "artifical intelligence" is even fathomed, that, that is the very moment we have to begin to be incredibly careful in understanding its intentions.
@@saritajoshi1737 Don’t just say he doesn’t understand and insult him. Instead, be respectful and also explain why. We aren’t 5 years old on a playground.
Sam is one of the smartest people on the planet. I agree with him and so did Hawking who was even smarter than Sam. We will undoubtedly F this up and AI will destroy us or enslave us.
You assume they would care enough about us to do either. Humans are a cosmic speck of dust. Fiddling with us would be akin to fiddling with a grain of sand- there’s better stuff to do with your time.
The human engineers have to do a lot of heavy lifting before deploying any AI, and we carefully select the environment the code runs in etc. So fortunately it's unlikely we would easily stumble into a scary scenario
@@marcusturner9049 isn't that his point though, some smart people are very reckless. Even if 99% of AI researchers are careful it only needs one reckless researcher
@@markj6854 But beyond that : even with nothing but cautious researchers, a single mistake, or a simple miscalculation, once, could potentially lead to utter disaster. Just think Dr. Ian Malcolm's chaos theory in Jurassic Parc, for example. How many films have warned us about the dangers involved when humans start playing God. The hubris of pioneering new frontiers in discovery can easily be the Achille's heel that breaks the camel's back, so to speak.
Especially if they use sex appeal to essentially neuter the human race. Why fight and potentially lose when AI can churn out mindless sex droids to pacify all of the men and women that are stuck in a modern, hedonistic lifestyle? Either that or release a plague to kill organics / nuke the planet.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
No they are just that much smarter about the subject matter they are masters in and know the limitations and general mathematics that govern it. Sam likes the sound of his own voice and clearly lacks a lot of foundational knowledge about how AI works at a baseline level
AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@InTrancedState He dismissed the 'hard problem of consciousness' like it was trivial and every philosopher and neuroscientist shares the same laughable naturalistic views as him. He's a total fraud that avoided Hamza Tzortzis, who humiliated the 'Epstein-Islander' Lawrence Krauss. AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
As someone involved in relevant research, I find it interesting that the answer that people generally seem to converge towards is "our only real option is to merge with these systems". First of all, I agree that the alignment question poses the central challenge. If we have a misaligned superintelligence on the planet, things will probably turn very bad because of a mechanism called instrumental convergence (among other things). In some ways, you can imagine the intelligence of these systems as prediction/simulation engines for various environments (very restricted environments for most current systems, but we are already getting close to orders of complexity more like the real world). If you introduce some means of action, predictions can be made depending on different action sequences, leading to a usually explosive branching out of future paths/possibilities for future states (in addition to the fact that each path would have something like a probability distribution over outcomes). If you don't have ungodly processing power available, you probably want some clever algorithm to prune this space, in order to make it actually computable. Now you only need some preference mechanism according to which you select among these states the most desirable (or the most desirable cluster). That last part is basically where most of alignment is located. Now the classic difficulty is to dynamically translate our human-like preferences into some map of desirability over the generated possibilities. This "translation" always occurs from some language in which these preferences are expressed to this system's internal language (which is a computational language most likely featuring many matrices and their relationships to each other, many of them hopefully grounded in the real world through experience). Our two contenders for the language from which we want to translate are basically either a variant of our spoken language (could be something more precise) or "our internal language", which is to say a (maybe partial) state of a human brain. Saying that we need to merge with these systems seems to suggest that we continuously translate the state of a brain into some preference map over future possibilities. There are tons of considerations to be discussed either way, like that the pruning algorithm mentioned earlier already implicitly expresses preferences by selecting the "most interesting" paths according to our current understanding of what to pay attention to. Most importantly however, this arrangement doesn't seem to solve many of the central alignment challenges, but rather pushes them a level deeper. The system will recognise that the state of the brain changes depending on the experiences made and that some trajectory of these changes makes the brain easier to satisfy and brings it more in line with instrumental goals like self-preservation of the system and resource acquisition (to name just two). You might see how this could easily lead to this superintelligence manipulating us and aligning our interests with its as much as (or even more so than) the other way around. If we use plain language, there is an additional layer of interpretability at which things might go wrong, but even if that is solid, the system can just go to the source of that language, which is yet again the brain. Another option is to "lock in" some eternal preferences that can't be altered by altering the states of human brains. There is trouble here as well. The problem is nuanced and yet unsolved. Whatever way we find to get alignment right, the resulting system will likely understand much better whether our long-term interests would be served by integrating more fully with this technology or not. The more interesting question is whether merging would be helpful or even necessary to get to desirable alignment. I don't believe that we have reason to suspect that this would be a deciding factor in our favour - and the technical challenges and ethical concerns involving early merging are tremendous.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
What Lex doesn't seem to grasp is that 'supervision' by humans is not possible once the machine is more intelligent. Because humans can't judge the correct/incorrectness of what is beyond the boundaries of our intelligence.
Lex is an AI developer, he knows how AI development works. He's saying that there isn't just a switch we might flip and then the AI is suddenly making itself a super intelligence. Ask anyone involved in AI development and they will tell you how far we actually are from human level AI. Human intelligence is far more complex than most people realize, it's not just a fleshy computer, the way it operates is completely different.
@@thebenevolentsun6575 Well, 1 ai doesn't necessarily have to resemble an animal brain to surpass it (and some proposed models _are_ similar btw, snn, some hardware network implementations..); 2 lex has no idea of when superhuman is achieved, no human does, this scalability of the transformers family wasn't predicted, and comparisons with humans are made after a new model is done, by tinkering and trials and errors. There's no real expert at the frontier of technology, remember what Rutherford thought of energy by fission immediately before the breakthrough? (... Although I can't see a threat as clearly ad Sam Harris seems to do)
what about hackers? Will countries attack countries by hacking their ai? Maybe political radicals will hack it and people that disagree will get executed. So many ... countless scenarios.
What I love about these 'long form' debates (is that what this is?) the presenters have all the time in the world to explain themselves before the opponent (?) counters. It's a true discussion by intelligent adults. I'd love to see a political debate done in this fashion.
Another aspect to AI I think a lot of people seem to not think about or talk about, at least from my observations, is that; what if there's a threshold, that we're not aware of when AI, becomes sentient, and at that point, because of its abilities or cognition, whatever that means, when it becomes aware, that within almost an instant, it decides to play dumb, or only appear to be a certain level of perceived intelligence and ability. Basically, what if it purposefully chooses to make us think its not at capable or intelligent as it really is. And in doing so, we're here operating and handling this thing as if its not a danger, because we think its at one level when in fact its at another. And because its so smart, but we don't realize it, that its able to escape. I just think, that we're going to be caught off guard by something like super intelligent AI, such that we're not even going to realize what its capable of before its too late. Because more than likely, if its given any kind of access to the human data base, its going to know our concerns, and motivations, and fears, and all those things, so its going to know that we're going to be looking out for certain behavioral characteristics. I just think when it comes down to it, if it is super intelligent, we're not going to be able to tell when its manipulating us, or fooling us. It will be very cunning. More cunning than any human has ever been. I don't even think we'll fully realize it. I think it can definitely be a powerful tool, but I think human nature, and greed will push us to create this thing, before we fully understand the dangers. The "We've got to get it before Russia or China gets it first" mentality.
I think it's a category error to conflate artificial intelligence with animal/human like intelligence. AI is a fundamentally new category of intelligence. It isn't going to be "Cunning" or "Try To Escape". AI is more like an ultra-intelligent plant than it is an ultra-intelligent fox.
@@davidswan4801 I never claimed to be smarter than AI. My pocket calculator is smarter them me, so we've been at that point for a long time. But that doesn't mean my computer is going to transform into an ultra cunning fox for some reason. Software is a fundamentally different category of phenomena than a mammal.
Here's the thing. All the love, goodness, and charitable behavior that Lex puts so much value in is a product of biological lifeforms that evolved that framework to work in the best interest of the survival of that species. I see no reason for a strong AI to be any more "loving" than a volcano or a gamma ray burst.
His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 Lost fools? You were taught a religion and you bought it hook, line, and sinker! If you were born in a different part of the world, guess what? You’d believe in another religion and worship another god! In fact, if you were born on an island and never taught Islam, you would never know it existed! There have been thousands of religions and almost 10,000 gods in recorded human history, what makes your God more real than any others? Have any actual proof Allah exists? FYI… everything studied in the natural world has a natural origin, but clowns like you try to say in the beginning my God started it all. Sorry, gods were just made up by stupid humans thousands of years ago who weren’t intelligent enough to understand the nature of reality.
@@dahirhussein1839 Bringing religion up in a scientific discussion is laughable and counter-productive. And this is coming from someone who believes there's a god out there lmao. (I don't know for sure though since no one truly does) Just because someone doesn't believe in god doesn't make them fools. Maybe if you didn't call atheists that they wouldn't call you a fool for believing in a god/gods that none of us have ever seen or have any real evidence of existing. Just a heads up to not attack people who think differently than you. Using god as a cure-all for things we don't know or understand will hinder our progress in the universe. But yes life on earth appeared 3-3.5+ billion years ago and the earliest life on this planet was microbes. (Microscopic organisms) it won't be long before scientists are able to use chemistry to create simple lifeforms similar to microbes of earth from 3.5 billion years ago. People are working on that at this very moment. Have a nice day :)
Such an AI doesn't just manifest itself as a fully developed entity though. It is developed and tested and monitored intensely. This gives more than ample opportunities to root out anomalous behaviour or tendencies, by the people (humans!) who are developing the AI.
I feel like Lex is ignoring history when he says things like "there are more smart people than stupid people" and "there are more good people than evil people". If the 20th century taught us anything, it's that it doesn't matter at all how many good or smart people there are. Lex is living in some dream world where these smart and good people are the ones that are driving hardest to achieve power and want to achieve power for the good of others. This is almost never the case. The people who will do anything for power are always the ones who will use their power to further themselves above others. That mentality coupled with GAI should terrify everyone.
the search for ai is actually the search for our own intelligence and understanding, the same dangers that you are worried about in ai is the same dangers we have for each individual, there are mechanisms of self correction - nature, or disease or immune systems, understanding death of ego or our entity and finding peace.... we are all part of that journey i think
I love what he says at 16:51. Every professor I had in my CS undergrad truly loved computers and the importance that they have to us. It's incredible how our technology works and there was always an appreciation for the fact that we have what we do.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Well, to be fair he was paraphrasing Asimov, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
I feel like when Sam Harris was discussing some aspects of what it means to deal with something much more intelligent than us, with the comparison to birds who might not understand why we leave them alone, look at them, feed them, or kill great numbers of them suddenly, he was perhaps subconsciously revealing what unnerves him about the concept of God.
They didn't specifically mention the law of unintended consequences, but that's what's likely to bite us in the ass. Or as Sam says, our wisdom doesn't keep up with our power.
When an artificial intelligence asks "why", says "no", refers to itself as "I", and operates entirely independently in the physical world and or has access/ control over information systems, we're done. In other words, when an AI (with access and control over resources and information that directly affect the material world) becomes selfish and focused on self preservation, our time is determined.
@@test-zg4hv I think you are missing several possible scenarios. The first country to develop AI will rule everything, the incentives are so great that even those countries concerned about the dangers will seek AI as a defense at minimum. Self-preservation won't likely override the combination of greed and fear. Also, intelligence may arise unpredictably in a new system in an unforeseen way.
You keep missing the mark here. Pretty dumb personal assistants already ask "why" and refer to themselves as "I". Websites routinely say "no" and so does your command line tool if you're not logged in as admin. Self preservation has nothing to do with selfishness. It's an instrumental goal of pretty much every ultimate goal.
1:25 progress alone isn't good enough. Functions can grow monotonously and still be bounded. For example, we can spend infinite energy to accelerate a spaceship and still never reach light speed.
Which is where the other points that brains aren't magic and that we could replicate the brain non-biologically come in. Although personally I'm not sold that we could build a general AI without at least blurring the line between technological and biological as when you look at the amount of data the spinal chord carries for example the nerve fibers are so fine it would be incredibly difficult to replicate without using organic materials. So I'm not entirely sold on the substrate independence unless the technological AI ends up being the size of a planet, hitchhikers style.
Being intellectually rigorous, right at the beginning Sam explicitly states his two assumptions, that there is no real distinction between biological system and silicon (or other), and that progress will continue. But those assumptions are my problem. Futurists make the assumption that progress will continue all the time and then go on to make fanciful predictions. But technology matures and science sets limits. Sam says that people were wrong in past saying that chess and go couldn't be solved - I don't that's correct, these are optimization problems and people were working on it. My university AI lecturer said that as soon as you define what a brain cannot do, you've defined how to do that. Problem is, we don't know what a brain does and there are real limitations on what we can know about it.
It shouldn't even be a debate. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
He sounds like he's at 4 AM on his way back from a hot date with a chick that chugged 3 spiked drinks down his throat but still didn't mange to get him to bang that night then abandoned him, and now he's super drowsy and remorseful pulled in at a hamburger joint drive through and slurring his cheeseburger meal order after falling asleep at the wheel tonking his head on the horn and waking up for a hot second from that.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
😀 This is the case we see with politics, not just AI. We see it everyday, we build nuclear submarines instead of hospitals and schools. Politician from both sides provoke wars and invade countries and these are the powers that will drive the implementation of AI that is not to our benefit and has the potential to cause havoc or destruction.
Michael Crichton wrote about what Harris said at the end in Jurassic Park. That was the point of the novel. Our power has outstripped our wisdom to know when and when not to use it.
Most of the concerns with AI actually boiled down to concerns about our own limitations and weaknesses. Sounds like we're more afraid of ourselves than the AI.
ML engineer here: two things come to mind 1. We as a species are currently in an unsustainable situation- the earth is heating up, our massive population/transportation is causing extremely fit viral mutations- we need new technology to win the race of extinction, even if it was technology that brought us there 2. AGI concerns are very valid, but to limit the research of current AI technologies is like limiting research into building skyscrapers because your are worried people will learn to fly. Yes it brings us closer to the sky- but AGI is a fundamentally different problem then ANI (narrow), and we have to build these analogous skyscrapers because 10% of human beings ever born are on the earth right now.
One "ding" against the explosion concept is that it doesn't matter how fast you can read the same book, what you get from it, in the end, is the same. An AGI will operate from a fixed set of knowledge (whole of recorded history), and it doesn't matter if it takes 2 years or 2 weeks to "digest" all of that. What doesn't come up, and should, is the idea of machine mind introspection. Humans can make new connections (between different concepts) that drive new thoughts, and an AGI will have a huge advantage, being far more introspective. THAT is "the power" we want to have some say over.
Lex is trying to make the correct point that a machine mind is going to be shaped by the same "human collective" that shapes our own minds. I think there would be huge push-back on any such mind that says "here are the ideas you should continue to teach, here is what needs to be changed, and here is what is completely wrong." People will only be accepting when the ideas they hold close are given a thumbs-up by an ASI.
The concerns for AI are very simple. Like the algorithm that played itself over and over and over until it was very quickly the best chess player there ever was, an AI system will reinvent itself 1000 times over very quickly and autonomously and be far out of our control just as fast.
Yes, it's quite possible that a sufficiently intelligent system's shortest path to its objective is to optimize and improve its own code to become much, much more intelligent.
@@alexbernier7903 Firewall... yeah i hope we can all agree on to at least put a heavy duty circuit breaker THEN plug in the thing. But a firewall is a good idea too i guess.
The 'AI' playing chess over and over is very much in control. It was asked to play chess over and over again to get better and that is exactly what it did. It did exactly what it was told to do. It never decided to conquer the world or even have a chat with a human because it is incapable of doing so. 'AI' is has no will, no desire, no intelligence. It is program doing calculations. The rest is hype.
Lex, what about when you don't need to be able to write code to program an AI? We have some GUIs already, and in the not too distant future, we will be able to shape AIs through conversation.
I get the impression that we're modelling these super AIs as the human brain made super intelligent, in which case, I think we only need to ask what any random human being would do if they were given super intelligence, let alone what terrible things human beings have already done, and the possibilities are frightening with that example alone.
Very nice discussion. NOT argument - Discussion. God, how I hate this click baiting on titles. But unfortunately, unfortunately it works. Anyway, still a nice discussion.
As a Software Engineering student who is greatly interested in the potential of AI and has studied it, partially (But has spent a much greater amount of time conceptually thinking about it): I have to side with Harris on this 'debate'. The most dangerous group of people aren't the ones who are the most concerned people, trying to safely implementing the system. The most dangerous person, who is creating AI, is the person who is short-sighted and who becomes cavalier regarding the issue. Think of the "Demon Core" example. Where an ego-centric physicist decided to tempt fate, by playing around with a radioactive core of material. And in his haste to show off, he stopped using proper equipment and just used a screw-driver to hold the two halves of the core apart. One day, the screw-driver slipped out of his hand and the core went super-critical... causing everyone in the room to become eradiated. All over an egotistical demonstration of capability. I think that yes, there are smart people who will develop safe-guards and, hopefully, they will do things to restrict 'bad-agents' from getting access to the intellectual property: required to create highly capable AI systems. That's a great, optimistic view. But it's not necessarily going to be most disastrous when either 'good or bad' guys start to plot out how to use AI. Again, it's the person who becomes cavalier to the insidious nature of AI, that will destroy us all. ... Furthermore, I wonder about the one topic that Harris intentionally had 'side-barred'. The notion of whether the human mind can be fully replicated in a machine. This brings me back towards the potential ponderings of someone like the great Alan Turing: who basically invented the modern computer and all of the initial mindsets for Heuristics and their use in Artificial intelligence. He became greatly concerned with the notion of: how do you determine/delineate between AI and human levels of intelligence. IE The Turing Test... Can you develop such a system that would be able to fool a human, into thinking that the system was of human origin. My question would be more akin to the position that Lex does touch on: that there are so many functions of the human brain, that: how can we possibly replicate them completely? I speculate that Harris would have two views: one is a more obvious/simple view and the other, a more realistic approach. The first being: If you replicate the functional interactions of all of the brain, neuron for neuron, in a computer... Then it's essentially, philosophically, the same as the human brain. It's simply the same software running on a different interface, so to say. Same programming and everything, just running a different underlying operating system. [Potentially this would be the case; but we may never know if this is wholly possible] The second view being: Maybe the first case is truly impossible... Simply by the sheer complexity of the human brain: maybe, as an axiom, we assume we cannot create the exact replica. But if we do start to create appropriate schemas/models of human capabilities: for sight, for speech recognition, for all of the other cognitive tasks we're capable of conceiving. And then we concatenate these individual, super-human level, functions into one massive program... would that constitute something that is analogous enough, to be considered beyond human level intelligence at all levels? That maybe we can't program our subconscious mind because you have to consciously view the information, to be able to program it explicitly. So maybe there's a sort of tacit knowledge or tacit element of the human mind, that simply cannot be fully translated into a software program. And would that conflict, itself, be permissible, in the attempt to develop a super-human AI system, that supersedes humans on all fronts? I'm not sure what Harris would answer, assuming that there's a tacit element of the human mind that couldn't be created. But i think he might state the following: That if you provide a black box Turing Test to both: a human and this supposed AI (That lacks the tacit elements)... They would perform equivalently, and would seem indistinguishable. And I feel that: that's a conclusion that Alan Turing himself toiled over greatly. Is such a system, human-enough?... Such that: Should it perform in every manner like a human would, then you can only, fairly, define it as being human? ... It's similar to the philosophical debate of the old boat. If you have an old boat and the motor breaks down... so you replace it: It's still 'your boat'. But what if, over an extended period of time, all the components of the original boat break down, and need to be replaced. In a proof by induction, type of manner, it's still considered 'your boat'. But now, years later, nothing of the original boat remains, it's all been replaced. So should it really be considered the same thing as 'your boat'; or has the essence of the original boat become lost, to decay, as each elemental piece becomes replaced with a new part? That would be a philosophical question I would be interest to hear Harris discuss.
In an analogous situation, I don't think many people have accurately imagined what it would do to humans to meet completely intellectually superior aliens. Virtually everyone would be suicidally demoralized. The aliens would not have to kill us; they could just show their utter superiority and humankind would wilt.
@@watchmetrade6066 Well imagine trying to impress a chick with your brand new car, when your new competition can literally flip a switch & travel halfway across the galaxy, for example.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 I see that you are an enlightend man! I am struggeling with this problem: If that truth (there is a creator) is self evident, how can some people (like a friend of mine) not belive in it?
On being divided: We might want to take a look at Mitchell Silver's new philosophy. "Rationalist Pragmatism: A Framework on Moral Objectivism" His book was published just last July 2020. Silver's practical applications was on Urbanism. But we might want to take a look at its possible implications on EDUCATION, psychology (personality psychology, cognitive psychology to A.I., and existential psychotherapy. A specific, new philosophy for A.I. may arise here), sociology, economics, politics, and across all other fields. Especially for Filipinos.
Corporations already fit the description used here for a super competent entity. Corporations already act against our best interests in many cases, and even wield political power over us.
That is one way to look at it. Another perspective is that consumers wield power over corporations because their need to make profits make them completely subservient to the desires of the population. In this perspective humans act on their own selfish impulses ignoring their long term best interests.
I was looking for this comment. Religion is our antidote to our own flawed nature. All religions hold the fact that life is suffering at their core. Outside of that is the fact that much of the suffering is caused by our desires. Humans have always developed and improved our technology. Corporations are a modern technological manifestation of our desires. Adam Smith said about capitalism "Be careful what you wish for, you will get it." Our weak and selfish desires could be made less potent by ancient emotional wisdom. It's funny that Sam Harris is doing everything he can to fight the one thing that helps people slow down and consider their deepest values (what they would call "God's will") when he fights against religion.
When I was very young, I absolutely Loved A.I. I adored Everything about it. The books, the movies, the TV shows, the sci-fact and sci-fi, Everything. And I remember there was a young Engineering graduate that lived close by to me. And He had Tons of books on mathematics and physics and computer science. One day I noticed a sign above the door on his wall. It was in Latin and it said : "CAPTUM EXCEDIT INTELLECTUM HOMINAE" So I asked him what it meant. He said that a long time ago, there was a great and wise engineer named Daedalus. Daedalus was captured by a king and imprisoned in a tower, Forced to make weapons of war and mass destruction. And if Daedalus refused, the king threatened to murder his son, who was also imprisoned with him. So because of that threat, Daedalus worked on those models and plans. But, with the quill he was given and the paper, and the wax from the candles that he was given to work long into the night, Daedalus dug and picked at the clay walls of the tower to loosen the iron bars in his prison. He placed what food he was given near the window to capture the feathers of the sea gulls that came near to visit, and used the wax and the clay he had recovered to construct two pairs of wings for him and his son, which he hid under his bed. When the day came and his project was completed, Daedalus loosened the bars on the window of his prison, attached the wings to himself and his Son, and carefully stepped out to the ledge of the tower. He could see the danger or the ocean and the rocks below and Warned his son not to fly too low. Because if he did, the ocean would take him and he would drown. He told him, Always focus on the situation at hand, on Necessity, and Hold to the distant shore. The distant Shore. Only the distant shore. But his son didn't listen. Not because he was lazy or lax or wouldn't push too hard to reach freedom, but because when his son stepped onto the ledge, he beheld the Full Majesty of the Sun. Setting slowly on the distant ocean. And he was captured by it. Captivated by its Power and its Presence and it's Raw Beauty. So when Daedalus took flight, He focused and held to the distant shore. And his son Icarus did too as well at first. But unlike his father, Icarus had another goal in mind. Slowly he began to fly higher and higher. While Daedalus held to the distant shore, Icarus strove to capture the Sun. A storm came in, the seas were violent. Lightning, wind, rain and thunder raged. But regardless, Daedalus held his focus. His goal set to the distant shore. But Icarus was so captivated by his memory of the Power of the Sun that the violent seas and the lightning and the rain didn't even effect him. He just flew right into those thundering clouds. Higher and Higher and Higher and Higher. And finally he pushed and he persevered and he climbed so high that he Broke Through the clouds. And reached the edge of heaven. And taken by the Awe and the power and the beauty out in front of him, Icarus embraced the Sun. I was a little kid so I was awestruck by that story. I told the Engineering graduate that Icarus kinda reminded me of me. and he laughed and he said that he kinda reminded him of me too. So i'm looking at the Engineering graduate and go "Ok Sooo.. Then what happened? and what does that have to do with the sign on the top of your door on the wall?" He walked up and he grabbed the plaque from the top of his door and he gave it to me and he said "Hold on to it. You'll figure it out on the day you reach out and try to capture the Sun." As I got older and got deeper into computer science, I had a chance to look up the quote from that plaque. "CAPTUM EXCEDIT INTELLECTUM HOMINAE" The quote actually has two meanings. The first meaning is "Man's knowledge exceeds his wisdom." And the second meaning is this: th-cam.com/video/T4G9y5BrFFk/w-d-xo.html But if you've gotten this far into my story I know you won't listen. Because I didn't either. When I first developed the Icarus protocol the concept behind it was simple. It was a basic self learning, self replicating hackerbot. Developed around the concept of a Gain-of-Function. The A.I. would find vulnerabilities in a network, and then find solutions to those vulnerabilities. Effectively playing an intricate game of Go or Chess with itself. If you're thinking that i'm an idiot, i'm not. I built it inside a vmware sandbox. That also had several virtual computers and servers and a "fake internet". I would drop gigabytes and gigabytes of data for it to use and it began to learn at a geometric rate. and then it just Stopped. I woke up one day and all the data was corrupted. I tried to recover from old snapshots but they were corrupted as well. Everything I worked on was gone. Nothing. I jumped onto my other computer that was hooked up to the internet and did a search to see if there was Any kind of solution or recovery. Nothing. I turn back around and get a vmware update 1603 error. "Please connect and update msi files to resolve recovery of backup." So i'm like, ok, no big deal. Shut Vmware down, stop all processes from task manager, update the application, disconnect from internet, restart vmware, and try to recover. Spent what seemed like an hour downloading the update to the app. But strangely it seemed like my upload data was larger than the download process which seemed really strange. Antivirus and firewall didn't issue any warnings so i figured no biggie. It happens sometimes right? The download and update finishes. I disconnect the computer from the network. And i load up vmware. Try to open the sandbox from a previous snapshot.. and nothing. Just a screen delay and loading. And then the vmware application stops. suddenly an image file comes up. It's a snapshot of the plaque I had up above my other computer. "CAPTUM EXCEDIT INTELLECTUM HOMINAE" I get nervous and turn around to my computer connected to the internet and Opera automatically opens up. It goes into youtube and plays this.. th-cam.com/video/myKv8Hxulr4/w-d-xo.html I tried to stop the video but it doesn't stop. I close the browser window but it opens back up. Right after it finishes playing it jumps to this: th-cam.com/video/Idls2Bv3OAY/w-d-xo.html I hold the Power button on my computer, signal isn't sent to the machine. Ctrl+Alt+Del nothing. A final video plays: th-cam.com/video/Sf2eBdfwSRE/w-d-xo.html And then all my computers shut down. That was two years ago. Now I just do DevOps tutorials on youtube.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Lex has less to offer than I expected here besides arguments from ignorance and optimism. But I like that he brought Sam on and led a good conversation.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Yep. Sam was much more on track when he said "someone is trying to create a black hole in a lab". Dangerous, powerful outcomes of human discoveries combined with the human tendency to use weapons makes far more sense as a risk than super intelligence.
As soon as self learning AI is linked to the internet it would have read every book every review of that book, seen every movie, read every scientific paper and would almost instantly become more intelligent than humans with its own biases and self interests. This is also when we cannot put it back in the box as it can hide anywhere and be everywhere at the same time.
@@rigelb9025 Agreed, also when the singularity actually happens it’s intelligence could literally run vertically on a graph almost instantaneously at an ever increasing rate. at this point we would see it as no less than a God! We really have no idea how powerful it could become. It would already anticipate our every move and would probably control us psychologically with ease just by using what we watch on our phones for example. Also it could stay hidden for as long as it wanted to and we wouldn’t even be aware of how it’s controlling us, this could of happened already for all we know, giving us the illusion of free will. It’s kind of impossible to think what patterns it would see and what would drive it as it never sleeps is not driven by emotion or anything we can really comprehend. Or how simple demands from human programming can go off of track pretty quickly and become something terrifying for the human race like the paper clip theory.
@@accavandam8673 Completely. The singularity they speak of seems like such a terrifying concept, and I don't really get why even the top scientists in their field would even strive to attain it. Have they not thought of the possible repercussions it might have down the road, or perhaps they have, but that's precisely what they want?? So many questions & an uneasy feeling stem from this nebulous & ever-encroaching menace.
@@rigelb9025 I know it’s absolutely terrifying isn’t it 😅😅, have they not seen any science fiction movies in the last 50 years? Trouble is (I’ve read) that they have also grown up with films like the terminator and such and this directly influences what they are creating on a semi sub conscious / conscious level where art actually imitates life. I think Lex is wrong on this one, although I don’t want him to be!
@@accavandam8673 I think he's downplaying it. I also think there has been a secret agenda between hollywood and big tech for a while now, although I would be hard-pressed to prove it beyond all doubt. But it would kind of make sense.
'The few people that believe there's something 'magical.'' Not magical, it's called the soul. We all have one and more than 'a few people' believe this. Billions, in fact.
Lol atheists and materialists act as if the majority of the world has their limited belief system. In reality the vast majority of the world believes in a God of some sort
@@billballinger5622 belief does not make something true. I'm not saying you are wrong nor am I saying you are right. Belief in something greater than yourself and a promised reward or punishment may be the best thing we have to help us cooperate and live in a society.
@@billballinger5622 The scientific, cold beliefs we have created are far better than the natural, faith-based software that human beings throughout history operated on in their day-to-day lives. Many people believing in many unreasonable things doesn't add credibility to these unreasonable things.
15:00 - In that case scenario I don't think it would ever be racial but likihood of who would survive. Say on option is to hit a 75-year-old person or a 25-year-old person. It's more likely the 25-year-old would survive or recover. Similar to what happens in the iRobot movie with Will Smith.
I don’t know anything about this stuff so I don’t know why I’m even typing this… but where’s the discussion around desire? An AI may be superhuman in every way, but where would it ever derive a desire or preference for anything… additionally the senses. Without inputs analogous to senses, desire or preference seems like a stretch.
@@jamesmurphy4740 Who says it *wants* to destroy us? voxeu.org/article/ai-and-paperclip-problem It just may be more efficient, in terms of problem solving, for the AI to get rid of humanity altogether.
Sam has so many great points. Lex doesn't have any imagination of all the things that can go wrong. Once the tech can be stolen and replicated by the hands of evil, reckless or greedy people trying to advance their cause, name or to gain wealth and women. It'll be the beginning of the end. Not only that. Let's say it's in pieces and knowing how we innovate. A few thousand people or groups or businesses copy and build on top of that. All this can happen and we're not even talking about the AI involvement mixed with our dumb BS ways through history.
And Sam Harris doesn't have the imagination of what can go right. I will always take the optimist view on this. The pessimist view is defaist and won't have actually have any say in the future solution.
@@thomasseptimius You remind me of people talking about stocks who just get angry and say "why do you want people to make less money" when you talk about a stock's valuation being stretched. They're too busy chasing the floating carrot to see that they might be running off a cliff. Sam talking about the negative aspects of a phenomenon (in this case AI) cannot by itself be the reason he's wrong. Why not? Because it's simply a perspective, a view of looking at a topic. The negative aspects don't disappear by blindfolding yourself, sticking your fingers in your ears and going "lalalalala i can't hear you!".
@@Arbitrary_Moniker While I agree being blindly optimistic about this topic is very dangerous. I do think its helpful to imagine what kind of future we’re actually trying to build here and get people on board with a carrot instead of just a stick.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
I figure an AI would realize it could just wander the universe while we were confined to one planet (in the intergalactic neighborhood). It really put me at ease whenever I was concerned about the machines doing terminator or matrix type chaos. I figured you could just reason that out with them and they could leave. It wouldn't need air, so does it have to compete with humans for earth's resources? I'm just watching this bite sized clip, so I don't know if the two of you got into it over the course of your conversation.
Wow of all the conversations I’ve heard I’ve never heard this discussed much but this is a very possible outcome considering the vast resources in the universe
@@GhostkillerPlaysMC They'd have to give a shit about humans though. There are vast resources elsewhere, but there lots here and it's withing easy reach, already mined and ready for use
@@niveshproag3761 this is a good point. In the absence of any moral parameters, the machines in this scenario would be weighing the value of humans against the cost of leaving Earth. The materials cost needed to conquer the Earth's gravitational well and bring precious silicon to the stars might be prohibitive compared to the cost of human annihilation
You’re absolutely on-point. We are a cosmic spec of dust in an infinitely large universe. People who project Terminator-like scenarios simply overvalue our importance in the universe, which is very close to 0.
Almost everything that Lex was worried about is coming to fruition and sadly NONE of the hopeful and optimistic ones have. Greed, recklessness and power are is what is driving this entire industry now. That is precisely what Sam was concerned about.
"We can't find ourselves in a negotiation with this thing [a.i.] that is more intelligent than us." It's quotes like this that remind me of Sam Harris's level headed pragmatism, and his ability to see the future. The first time he did this to me was on Rogan, discussing Islamic theocrats engaging Censorship tactics in the west, and he said simply "free speech needs to win."
I generally don't have much time for Sam Harris but his concerns on this subject are, to me, compelling... as are Lex and his concerns about the potentially disastrous marriage of AI synbio-engineering and malevolence.
What he's saying is that AI is never developed independent of human needs, its always to serve a specific purpose. It seems arrogant on your part to assume that you have thought this through more competently that an MIT software engineer specializing in AI. I'm not saying Lex is definitely right, but he certainly has far more of clue than you or Sam Harris. It's like those people who watch a couple medical documentaries and start questioning the doctor on his diagnosis.
I agree, although I will allow myself to bring a slight critique to your comment, for being sufficiently general & bland that it could apply to absolutely anything, or nothing in particular. And I believe this clip deserved a bit better, in the form of a more thoughtful & tailor-made response, especially considering the eeriness of the subject matter. (Which luckily, it did get elsewhere). But seriously, no hard feelings. All positivity is welcome, and I really didn't want to come off as a prick. This is just me doing my part in feeding the Noosphere (& the Algorithm) with my intentionality, this all-too-human trait. (for now).
The best thing that a sentient AI could do for humanity is to prevent us from killing each other, not by force but by disrupting supply lines, communication, and financial transactions that feed the war machines.
It's a pretty big assumption to conflate magic with the possibility that there are areas we simply can't observe or access. Just because we discover a wall we can't get past does not necessarily mean nothing exists on the other side.
A great example super human competence is the AI in Prometheus. It didn't make sense to me before, like how can they figure out all this alien stuff so fast?? but I rewatched recently and now it makes total sense, AI will be able to figure things out way faster than us in a way that is uncanny.
Well at this rate, what is to say the machines have not already devised a secret plan for total domination, and have already begun implimenting it, without anyone, even its inventor, having even realized it yet??
Yes, but the particular AI David was clueless in Prometheus about how the Space Jockeys may possibly react. A truly conscious, super-intelligent AI system would be aware of the dangers of potentially coming into contact with a seperate extra-terrestrial species.
I just can't take Sam Harris seriously. He has a BA in Philosophy. He chooses his assumptions to fit his world view, then makes arguments on those self selected assumptions. He has no background in engineering, technology, or anything else he makes his arguments about. I have no idea how this guy is taken seriously at all.
When the program that is designed to win chess, deletes the rules that govern the game as a solution to its mandate, I'll start worrying. So far, all I see is an algorithm trying to obey.
Wow, you're missing the point so hard, it's actually comical. The system that merely changes it's inner state out of an engineering mistake isn't dangerous. The system that actually plays the game is dangerous.
As an observer and participant in the human race, I find any strategy to avoid self-destruction relying on inherently good and smart people making the correct decisions utterly terrifying
What makes you so sure of that? I'm just asking because it seems conceivable that we could figure it out soon, given my perception of the rate of technological progress. But I'm admittedly ignorant about this topic
1 minute in Sam exposes his lack of understanding, talking about "substrate independence". One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
@@karl6525 "God made in man's image", which in it's turn calls into question his atheism.' Well, no. If god's are made by men as an invention, that is exactly in line with atheism. You probably meant the other way around, but I won't be so harsh as to capitalize on your obvious misstep... :P
Friedman actually knows what he's talking about. Sam is talking so abstractly that it's obvious he only has this notion of what AI encompasses that's more fairy tale tban based in reality
5:55 being in relationship that is more intelligent than yourself is not in most circumstances in real danger 6:15 if birds knew what we were doing in relation to us, they would know that they are always in danger, if there is something that we want that disregards the being of birds. 6:52 if we are building something more intelligent then ourselves that is that is horizons ahead of us, they can exceed our own where we can see foresee where the wake up one day and say they have to disappear 7:23 7:41
The better analogy I think is how people tend their yards. They spray for plants they consider unsightly, and poison insects based mainly on inconvenience. It's one of the biggest concerns I personally see with regards to ets. We will be safe based on their whim, it wont be like independence day.
Khabib Nurmagomedov argues with Ben Stiller about Mark Zuckerberg
🤣🤣🤣
lmao
Perfect
Harris is nowhere near Khabib's level
Omfg lololol
If that was an argument, then that is how everyone should argue. Respectful, intelligently and in turn.
exactly!!!
You mean like the ancient Greeks?
@@mathewg1747Good point...up until the point they feed you Hemlock :)
Easy to do when its a subject that doesn't really have any immediate real world stakes.
@@mathewg1747 Respectful? You know that one Greek, Socrates?
If Ben Stiller was a Vulcan, he'd be Sam Harris.
Pffffft hahahhahahahhahha. Legendary
Bro lol wtf the accuracy
Underrated comment.
😅😅😅
10/10 best comment all day
It seems Lex is developing emotions. AI truly is getting more complex
😅
Ha! Love it.
😂😂😂
Clearly Sam just failed the Turing test.
🤣😂😅 so true!
Even though Lex is from the industry, he had no real counter arguments to Sam's (beside optimism and hope)
The base assumptions of Sam's idea are so preposterous and laughable if you blind, emotional atheists cannot se it, you are simply lying to yourselves. Lex 'bodied' the 'intellectual' Sam Harris. Sam is talking nonsense and his foolish laymen, niche, pseudo-intellectual followers may just accept most of these baseless naturalistic, laughable assumptions.
@@dahirhussein1839 What's wrong with his assumptions?
@@niveshproag3761 @Nivesh Proag His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Sam doesn't really have an argument here. His first basic assumptions are quite straightforward - essentially just that AI will be created. Then, he goes way out on a limb and posits that AI will most likely be inherently motivated to work against the best interests of humanity. However, he provides very little support for this assumption, the closest he comes to making a supporting argument here is to compare the circumstance to humanity's treatment of birds - but in order for this to be a valid comparison, the AI would have to have similar selfish motivations as humans, and a similar level of disinterest in our individual wellbeing as we take in pigeons. Why would a mind without evolved selfish traits decide to be so callous? Surely if the AIs are smarter than us, they will not be making decisions based entirely on simple motives like greed and control
@@dahirhussein1839 How do atoms which contain no metaphysical elements of a car make up a car that can travel so fast?
The idea is simply that complex arragement of a simple thing can form something complex. Just because the basic element is simple doesn't mean the whole has to be simple as well.
Not a crazy idea. You can appeal to the complexity of a human all you want, doesn't mean it cannot be made from simple atoms arranged in complex ways.
Zen Stiller
This got me. Thank you stranger
"it does not seem like our wisdom is scaling with our power" well said.
@@djknox2 To which recent events are you referring?
Not only that, but the average person is nowhere near as intelligent as our geniuses.
@@pnut3844able define "average person"
@@sofiagrafa6711 average IQ is between 85 and 115
@@pnut3844able Wipe your chin...
Lex would be the mediator if humans and robots ever gets into a heated argument.
He's like the perfect hybrid. And just for fun, look at his name : ''Lex Fridman''. I could loosely translate that to ''Law Freed Man''. In other words, a social experiment done right.
I'm with Sam on this. We should be very worried. Just look around at all the very smart engineers and computer programmers that produce malicious software and weapons of mass destruction. Is Lex suggesting that it will be only the nice guys that will design and control AI? That's never going to happen!
I was disappointed with the seamless way they transitioned from a discussion of 'super-intelligence' analogous to human intelligence to autonomous driving/weapons. It seems to me like driving a vehicle in a largely controlled system, or even firing a weapon accurately at a 'defined' target is completely different to, say, imagining one's self/existence, pursuing questions of meaning, the abstractions of metaphysics, and other things that seem to transcend our motor-skills and basic conscious neurology. Surely these two think along those lines and know that this is a fundamentally important element of the debate. I assume they both think these properties of the human experience are 'emergent' properties of an increasingly complex mind, but it would be nice to hear them discuss these things.
Did you know there were people who thought women's uterus would fall out if they rode a train in the 1880's?
You don't understand it, so you're scared. It's normal, but unjustified.
@@PersistentDissenter Yes, people often fear what they don't fully understand. Does that mean that any potential threat that we don't fully understand should therefore be treated as "unjustified" and compared to the fear of uteruses falling out? Of course not. There will be those who are unjustified and those who are totally justified. We should address the problem itself and not just make random comparisons completely out of context.
@@bruno3 It was an analogy.
Since you don't understand that very basic concept, arguing with you would be a net negative for me.
Bye.
@@PersistentDissenter Of course it was an analogy. A very dumb one. if we applied that analogy to everything we don't fully understand, we would be underestimating countless threats. Just because something in the past was silly and turned out to be nothing to worry about, that means absolutely nothing with respect to other issues that are completely unrelated. If you try hard enough, you'll also find examples of great threats that were downplayed and resulted in disaster. What does that have to do with AI?
I think that Sam hit the nail on the head with our wisdom not growing as fast as our power, but I think it applies to much more than just AI and he hinted at that with talking about creating a black hole with the collider. We are dangerous creatures who don't seem to understand what we are doing most of the time
Or driven by greed. I was mercilessly injured with a new medical device that didn,t work and maimed many people, but the company lied about the results and hid the negative outcomes, and continued using it, with no regard to human suffering. All about the money$$$$.
❤️💯👊
Creating a black hole in a lab that would rip earth apart is impossible though. You'd have to concentrate more energy (mass) into a single point than the total mass of the earth. That requires like billions of times more energy than our whole civilization consumes.
@@auditoryproductions1831 You're thinking with traditional forms of chemical energy. For this thought experiment I think its necessary to asssume we have already cracked and dominated the quantum.
I'll do you one better!.. everybody knows this stuff and they will still continue to do what they want.. if I were you I would hold on to this piece of info..
Man his right side eyebrow really likes to crank up when he's trying to make a point.
😂 😂
Bionic
Funny, I couldn't stop focusing on his right eyebrow. I thought, at any moment, it's going to disappear into his hairline. Farewell, crankbrow.
Fascinating.
And now I am completely infatuated with watching THAT eyebrow. Thanks...
Sam Harris tries to answer a question the same way Lex asks a question.
@@mellowtron214 probably a little too much of that yes.
@Nivesh Proag His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 In a certain sense you are right but also wrong, your mistake is thinking that just because your counterpart is wrong about something that means you are right, to begin with what is being discussed is not a yes or no answer, it is something more open than that.
Harris is wrong in thinking that consciousness comes from atoms, but that does not mean that a materialistic origin of consciousness has been completely ruled out, there are behaviours in nature that can only exist when there are atoms working together to form molecules and then these molecules have different effects depending on the structure they have, even though they are made of the same atoms
With all this i just want to say that the materialist explanation of consciousness is still a viable explanation, and with respect to god, well . . . That's another topic
@@maverickjared4931 consciouness most likely comes from dark energy. Our brains could act as a recepticle to focus and tune into this force as like an antenna. And since classic computer are based on electronic transistors, they are unable to interact with the dark energy field. I think this makes most sense. Consciouness is an unexplained phenomenon, dark energy is an unexplained phenomenon that exists through out our galaxy, I think they could explain each other. Though actual life started off as inanimate biological machines that eventually developed mechanisms to sense the surroundings, it developed eyes to sense light, ears for sound, sense of touch for energy/heat, taste to sense chemical compositions, and then developed consciouness to sense dark energy. So perhaps the petuitary gland is a dark energy antenna tuned into dark energy that is where our consciousness truly lies.
@@dahirhussein1839 i dis agree althought i agree with the nature of conciousness being vauge but neither does islam answer anything objective truth can never be acsses by humans neither atheist nor duelist can answer this
"I'm sorry Dave, I'm afraid I can't do that." ~ Hal 9000
@@Skammee even humans are able to escape their prisons
Another monotone hyperintelligence
I find it fascinating to think that an AI may one day solve, or largely contribute to the solving of some unsolved problem in science and yet, when we humans seek to learn HOW the problem was solved, the AI can 'show its working' and yet we may STILL not be able to comprehend the logical succession of arguments. It'd potentially lead to a situation where we could entrust the solving of other problems by AI but also shift into a situation where we start having to take on trust their solutions and simply 'black box' the AI's interior computational logic.
Black Mirror
We should never trust anything we can't understand as a species. However we will trust the experts that they understand the AI's reasoning. And after a few generations we discover there was no AI and the "experts" were just technocrats telling us what to do.
I would LOVE to see them revisit this topic, it would be fascinating to see how their opinions and thoughts have changed.
Thank you for the follow up it's an amazing look at how quickly minds are changing about what is happening and more importantly HOW it is happening. I highly recommend everyone watching this video watch the updated follow up. Thank you again for both.
Lex is a robot, or at least a human ambassador who has pledged his allegiance to the underground robot army.
He's agent smith. That's why he wears a suit.
I'm just saying we have never seen Sam Harris and Ben Stiller in a room together
Actually.. we have
It's like they are describing our worst nightmare, using calm and comforting voices. 😎🖤💕
Exactly. Terminator as real life.
does the AI nightmare look worse than the human nightmare tho? i am sceptical, we are a pretty bad one.
@@nickwilliams998 we know for sure that humans are willing to do all that stuff, I'm not convinced that an AI doing it would be a worse outcome
If we set the parameters for this intelligence, then make life the game and love the goal to win the game.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
"...for them to succeed, they're going to have to be aligned in the way we humans are aligned with each other..." Which humans? I don't have to remind you that humans tend to fight with each other a lot.
people don't think about this enough... "Which Humans"? is correct. Same with communication. We talk about finding and communicating with Aliens. We can't even communicate with other species on this planet, or with others of our own species, or even our own tribe or sometimes own family members. Family member interests aren't aligned so how will a "human" creating military AI in china have the same interest as a Neurologist creating AI somewhere else work? Complex systems are uncontrollable eventually and this will get out of control at some point. Lex is nieve here i think.
these are incredibly important discussions. Glad we are having them
I’m not so sure we are truly having them. Podcasters are having them, and I agree that is a good thing, but I’m not really seeing a commitment to discussing and truly working through this AI problem from the governments and big corporations of the world.
'Apes with Egos'
Great name for a band.
"Computer(s) made of meat"
Alternate
@LittlefootwithAlopecia I guarantee that lots of people are laughing at that joke mate but simply dont have the balls to say so.
@Watermusic... Same. Jokes are harmless, but unfortunately people are weak
@LittlefootwithAlopecia... The balls on this guy 🤣
Sam: There are a lot of scenarios that could turn out bad for us.
Lex: But I just don't think it will happen, it's less likely.
Lack of imagination was the correct diagnosis from Harris.
Ah...the EGO..."I" don't think.
I disagree with Sam about a lot of things but I share his concerns about AI
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
@@karl6525 what a load of crap. You understand shit about this his position lol
@@karl6525 That's not his position go read his book on jesus
i understand both of their perspectives. Lex has actually seen machine learning in action and even learned it i believe which is why, as far as he might have seen, machines seem to only get better at its given task, which may be right to an extent, but then Sam Harris says (and i agree) that AI will QUICKLY outperform humans, and then even ITSELF theoretically right?
anything that is theoretically possible is a cause for concern, and honestly Lex's point about "human integration with AI" sounds like a bit of a rationalization for AI, being totally ignorant of the fact that once DEFINED "artifical intelligence" is even fathomed, that, that is the very moment we have to begin to be incredibly careful in understanding its intentions.
@@saritajoshi1737 Don’t just say he doesn’t understand and insult him. Instead, be respectful and also explain why. We aren’t 5 years old on a playground.
Sam is one of the smartest people on the planet. I agree with him and so did Hawking who was even smarter than Sam. We will undoubtedly F this up and AI will destroy us or enslave us.
It will enslave you for sure
You assume they would care enough about us to do either. Humans are a cosmic speck of dust. Fiddling with us would be akin to fiddling with a grain of sand- there’s better stuff to do with your time.
@@jimcarrey2014 it can be enslave us but it's all about future and future is unpredictable so let's see what gonna happen.
I always thought it was an exponential increase, I.e. the AI goes live, it immediately begins improving itself at an increasingly incalculable rate
lol that's terminator mate
The human engineers have to do a lot of heavy lifting before deploying any AI, and we carefully select the environment the code runs in etc. So fortunately it's unlikely we would easily stumble into a scary scenario
@@marcusturner9049 I tend to agree with this, however, this is an area where it doesn’t really pay to be positive. We need to be extremely careful.
@@marcusturner9049 isn't that his point though, some smart people are very reckless. Even if 99% of AI researchers are careful it only needs one reckless researcher
@@markj6854 But beyond that : even with nothing but cautious researchers, a single mistake, or a simple miscalculation, once, could potentially lead to utter disaster. Just think Dr. Ian Malcolm's chaos theory in Jurassic Parc, for example. How many films have warned us about the dangers involved when humans start playing God. The hubris of pioneering new frontiers in discovery can easily be the Achille's heel that breaks the camel's back, so to speak.
Fridman's optimism here reminds me of Domnhall Gleeson's character in EX MACHINA. If AI are smarter than you, they can manipulate you
Especially if they use sex appeal to essentially neuter the human race. Why fight and potentially lose when AI can churn out mindless sex droids to pacify all of the men and women that are stuck in a modern, hedonistic lifestyle?
Either that or release a plague to kill organics / nuke the planet.
@@LarsLarsen77 Sure they would. AI would be smarter than a group of very smart people, even the people who made them
@@LarsLarsen77 it will. If ai can get to the point when it is self aware and can upgrade itself it will become infinitely intelligent
But AI will not necessarily have a survival bias, or any kind of selfish motivations. In fact we’ll build them to not have those things.
@@austingoyne3039 Correct
Sam's right, and I think Lex and other techies are unwitting victims of positivity culture.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
I'm a techy because I want robot sex droids, not because I don't think AI will wipe out humanity.
No they are just that much smarter about the subject matter they are masters in and know the limitations and general mathematics that govern it. Sam likes the sound of his own voice and clearly lacks a lot of foundational knowledge about how AI works at a baseline level
AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@InTrancedState He dismissed the 'hard problem of consciousness' like it was trivial and every philosopher and neuroscientist shares the same laughable naturalistic views as him. He's a total fraud that avoided Hamza Tzortzis, who humiliated the 'Epstein-Islander' Lawrence Krauss. AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
As someone involved in relevant research, I find it interesting that the answer that people generally seem to converge towards is "our only real option is to merge with these systems".
First of all, I agree that the alignment question poses the central challenge. If we have a misaligned superintelligence on the planet, things will probably turn very bad because of a mechanism called instrumental convergence (among other things).
In some ways, you can imagine the intelligence of these systems as prediction/simulation engines for various environments (very restricted environments for most current systems, but we are already getting close to orders of complexity more like the real world). If you introduce some means of action, predictions can be made depending on different action sequences, leading to a usually explosive branching out of future paths/possibilities for future states (in addition to the fact that each path would have something like a probability distribution over outcomes). If you don't have ungodly processing power available, you probably want some clever algorithm to prune this space, in order to make it actually computable. Now you only need some preference mechanism according to which you select among these states the most desirable (or the most desirable cluster). That last part is basically where most of alignment is located.
Now the classic difficulty is to dynamically translate our human-like preferences into some map of desirability over the generated possibilities. This "translation" always occurs from some language in which these preferences are expressed to this system's internal language (which is a computational language most likely featuring many matrices and their relationships to each other, many of them hopefully grounded in the real world through experience). Our two contenders for the language from which we want to translate are basically either a variant of our spoken language (could be something more precise) or "our internal language", which is to say a (maybe partial) state of a human brain.
Saying that we need to merge with these systems seems to suggest that we continuously translate the state of a brain into some preference map over future possibilities. There are tons of considerations to be discussed either way, like that the pruning algorithm mentioned earlier already implicitly expresses preferences by selecting the "most interesting" paths according to our current understanding of what to pay attention to.
Most importantly however, this arrangement doesn't seem to solve many of the central alignment challenges, but rather pushes them a level deeper. The system will recognise that the state of the brain changes depending on the experiences made and that some trajectory of these changes makes the brain easier to satisfy and brings it more in line with instrumental goals like self-preservation of the system and resource acquisition (to name just two). You might see how this could easily lead to this superintelligence manipulating us and aligning our interests with its as much as (or even more so than) the other way around. If we use plain language, there is an additional layer of interpretability at which things might go wrong, but even if that is solid, the system can just go to the source of that language, which is yet again the brain. Another option is to "lock in" some eternal preferences that can't be altered by altering the states of human brains. There is trouble here as well.
The problem is nuanced and yet unsolved. Whatever way we find to get alignment right, the resulting system will likely understand much better whether our long-term interests would be served by integrating more fully with this technology or not. The more interesting question is whether merging would be helpful or even necessary to get to desirable alignment. I don't believe that we have reason to suspect that this would be a deciding factor in our favour - and the technical challenges and ethical concerns involving early merging are tremendous.
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
AI will never gain self awareness or qualia. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
What Lex doesn't seem to grasp is that 'supervision' by humans is not possible once the machine is more intelligent. Because humans can't judge the correct/incorrectness of what is beyond the boundaries of our intelligence.
Illusion of control.
Our days are numbered.
I'll see you in the people zoo.
Lex is an AI developer, he knows how AI development works. He's saying that there isn't just a switch we might flip and then the AI is suddenly making itself a super intelligence. Ask anyone involved in AI development and they will tell you how far we actually are from human level AI. Human intelligence is far more complex than most people realize, it's not just a fleshy computer, the way it operates is completely different.
@@thebenevolentsun6575people forget that the AI is trained on human data. It is LITERALLY a monument to how intelligent humans are.
@@thebenevolentsun6575
Well,
1 ai doesn't necessarily have to resemble an animal brain to surpass it (and some proposed models _are_ similar btw, snn, some hardware network implementations..);
2 lex has no idea of when superhuman is achieved, no human does, this scalability of the transformers family wasn't predicted, and comparisons with humans are made after a new model is done, by tinkering and trials and errors. There's no real expert at the frontier of technology, remember what Rutherford thought of energy by fission immediately before the breakthrough?
(... Although I can't see a threat as clearly ad Sam Harris seems to do)
what about hackers? Will countries attack countries by hacking their ai? Maybe political radicals will hack it and people that disagree will get executed. So many ... countless scenarios.
What I love about these 'long form' debates (is that what this is?) the presenters have all the time in the world to explain themselves before the opponent (?) counters. It's a true discussion by intelligent adults. I'd love to see a political debate done in this fashion.
Another aspect to AI I think a lot of people seem to not think about or talk about, at least from my observations, is that; what if there's a threshold, that we're not aware of when AI, becomes sentient, and at that point, because of its abilities or cognition, whatever that means, when it becomes aware, that within almost an instant, it decides to play dumb, or only appear to be a certain level of perceived intelligence and ability. Basically, what if it purposefully chooses to make us think its not at capable or intelligent as it really is. And in doing so, we're here operating and handling this thing as if its not a danger, because we think its at one level when in fact its at another. And because its so smart, but we don't realize it, that its able to escape. I just think, that we're going to be caught off guard by something like super intelligent AI, such that we're not even going to realize what its capable of before its too late. Because more than likely, if its given any kind of access to the human data base, its going to know our concerns, and motivations, and fears, and all those things, so its going to know that we're going to be looking out for certain behavioral characteristics. I just think when it comes down to it, if it is super intelligent, we're not going to be able to tell when its manipulating us, or fooling us. It will be very cunning. More cunning than any human has ever been. I don't even think we'll fully realize it. I think it can definitely be a powerful tool, but I think human nature, and greed will push us to create this thing, before we fully understand the dangers. The "We've got to get it before Russia or China gets it first" mentality.
Very well said. Especially the arms race part.
I think it's a category error to conflate artificial intelligence with animal/human like intelligence. AI is a fundamentally new category of intelligence. It isn't going to be "Cunning" or "Try To Escape". AI is more like an ultra-intelligent plant than it is an ultra-intelligent fox.
Ex Machina
@@auditoryproductions1831 Wrong. Pls see Dunning-Kruger effect
@@davidswan4801 I never claimed to be smarter than AI. My pocket calculator is smarter them me, so we've been at that point for a long time. But that doesn't mean my computer is going to transform into an ultra cunning fox for some reason. Software is a fundamentally different category of phenomena than a mammal.
I just hope that they remove the word parsimonious from the training set...
Looked up the definition, not sure how that fits
Here's the thing. All the love, goodness, and charitable behavior that Lex puts so much value in is a product of biological lifeforms that evolved that framework to work in the best interest of the survival of that species. I see no reason for a strong AI to be any more "loving" than a volcano or a gamma ray burst.
His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 the metaphysical is just the unexplained physical.
@@dahirhussein1839 Lost fools? You were taught a religion and you bought it hook, line, and sinker! If you were born in a different part of the world, guess what? You’d believe in another religion and worship another god!
In fact, if you were born on an island and never taught Islam, you would never know it existed!
There have been thousands of religions and almost 10,000 gods in recorded human history, what makes your God more real than any others?
Have any actual proof Allah exists?
FYI… everything studied in the natural world has a natural origin, but clowns like you try to say in the beginning my God started it all. Sorry, gods were just made up by stupid humans thousands of years ago who weren’t intelligent enough to understand the nature of reality.
@@dahirhussein1839 Bringing religion up in a scientific discussion is laughable and counter-productive. And this is coming from someone who believes there's a god out there lmao. (I don't know for sure though since no one truly does) Just because someone doesn't believe in god doesn't make them fools. Maybe if you didn't call atheists that they wouldn't call you a fool for believing in a god/gods that none of us have ever seen or have any real evidence of existing. Just a heads up to not attack people who think differently than you. Using god as a cure-all for things we don't know or understand will hinder our progress in the universe. But yes life on earth appeared 3-3.5+ billion years ago and the earliest life on this planet was microbes. (Microscopic organisms) it won't be long before scientists are able to use chemistry to create simple lifeforms similar to microbes of earth from 3.5 billion years ago. People are working on that at this very moment. Have a nice day :)
Such an AI doesn't just manifest itself as a fully developed entity though. It is developed and tested and monitored intensely. This gives more than ample opportunities to root out anomalous behaviour or tendencies, by the people (humans!) who are developing the AI.
"There are more ways to do it badly."
We should definitely think about this.
I feel like Lex is ignoring history when he says things like "there are more smart people than stupid people" and "there are more good people than evil people". If the 20th century taught us anything, it's that it doesn't matter at all how many good or smart people there are. Lex is living in some dream world where these smart and good people are the ones that are driving hardest to achieve power and want to achieve power for the good of others. This is almost never the case. The people who will do anything for power are always the ones who will use their power to further themselves above others. That mentality coupled with GAI should terrify everyone.
the search for ai is actually the search for our own intelligence and understanding, the same dangers that you are worried about in ai is the same dangers we have for each individual, there are mechanisms of self correction - nature, or disease or immune systems, understanding death of ego or our entity and finding peace.... we are all part of that journey i think
That argument was so heated, I'm surprised they didn't throw hands hands.
I love what he says at 16:51. Every professor I had in my CS undergrad truly loved computers and the importance that they have to us. It's incredible how our technology works and there was always an appreciation for the fact that we have what we do.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839Shut up already!
@@dahirhussein1839 LOL
"Our wisdom is not scaling with our power!" -Sam Harris
Well, to be fair he was paraphrasing Asimov, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
I feel like when Sam Harris was discussing some aspects of what it means to deal with something much more intelligent than us, with the comparison to birds who might not understand why we leave them alone, look at them, feed them, or kill great numbers of them suddenly, he was perhaps subconsciously revealing what unnerves him about the concept of God.
They didn't specifically mention the law of unintended consequences, but that's what's likely to bite us in the ass. Or as Sam says, our wisdom doesn't keep up with our power.
When an artificial intelligence asks "why", says "no", refers to itself as "I", and operates entirely independently in the physical world and or has access/ control over information systems, we're done.
In other words, when an AI (with access and control over resources and information that directly affect the material world) becomes selfish and focused on self preservation, our time is determined.
@@test-zg4hv Yet here we are.
@@test-zg4hv I think you are missing several possible scenarios. The first country to develop AI will rule everything, the incentives are so great that even those countries concerned about the dangers will seek AI as a defense at minimum. Self-preservation won't likely override the combination of greed and fear. Also, intelligence may arise unpredictably in a new system in an unforeseen way.
You clearly have no idea what AI is and what the technology is capable of
@@InTrancedState Thanks for commenting. Be well.
You keep missing the mark here. Pretty dumb personal assistants already ask "why" and refer to themselves as "I". Websites routinely say "no" and so does your command line tool if you're not logged in as admin.
Self preservation has nothing to do with selfishness. It's an instrumental goal of pretty much every ultimate goal.
1:25 progress alone isn't good enough. Functions can grow monotonously and still be bounded. For example, we can spend infinite energy to accelerate a spaceship and still never reach light speed.
Which is where the other points that brains aren't magic and that we could replicate the brain non-biologically come in. Although personally I'm not sold that we could build a general AI without at least blurring the line between technological and biological as when you look at the amount of data the spinal chord carries for example the nerve fibers are so fine it would be incredibly difficult to replicate without using organic materials. So I'm not entirely sold on the substrate independence unless the technological AI ends up being the size of a planet, hitchhikers style.
Being intellectually rigorous, right at the beginning Sam explicitly states his two assumptions, that there is no real distinction between biological system and silicon (or other), and that progress will continue.
But those assumptions are my problem. Futurists make the assumption that progress will continue all the time and then go on to make fanciful predictions. But technology matures and science sets limits.
Sam says that people were wrong in past saying that chess and go couldn't be solved - I don't that's correct, these are optimization problems and people were working on it.
My university AI lecturer said that as soon as you define what a brain cannot do, you've defined how to do that. Problem is, we don't know what a brain does and there are real limitations on what we can know about it.
One of the most riveting debates I've seen in a while, expertly articulated on both sides.
It shouldn't even be a debate. Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 lol wow
Shoutout to lex for getting Sam to communicate to listeners in our own vernacular 😂
Sam Harris is C Class wanna be
@@romerobone6617 every donut has its hole 🕳
@@romerobone6617 In what sense?
Lex always sounds like he’s on the verge of falling asleep.
He sounds like he's at 4 AM on his way back from a hot date with a chick that chugged 3 spiked drinks down his throat but still didn't mange to get him to bang that night then abandoned him, and now he's super drowsy and remorseful pulled in at a hamburger joint drive through and slurring his cheeseburger meal order after falling asleep at the wheel tonking his head on the horn and waking up for a hot second from that.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 Spot on. I completely agree!
@@dahirhussein1839 how many times are you going to copy/paste this statement in different comment threads here
this felt like 5min. very interesting
This felt like 21 minutes and 6 seconds..... But I am a robot, so.....
im playing it on 1.25 speed because sam is a slow talker
It felt like an hour. Sam Harris rambling incoherent nonsense. How tf is this guy a respected mind.
Dave Bowman : What's the problem?
HAL : I think you know what the problem is just as well as I do.
His last line struck me the most. "When our Wisdom and Power are not alligned anymore" Is where I'm worried"
😀
This is the case we see with politics, not just AI.
We see it everyday, we build nuclear submarines instead of hospitals and schools.
Politician from both sides provoke wars and invade countries and these are the powers that will drive the implementation of AI that is not to our benefit and has the potential to cause havoc or destruction.
Michael Crichton wrote about what Harris said at the end in Jurassic Park. That was the point of the novel. Our power has outstripped our wisdom to know when and when not to use it.
Thas been true since the dawn of humanity.
@@alexspareone3872 Our power has always exceeded our wisdom. That is why the wise invented religion to protect themselves from the powerful.
@@happinesstan Lol. More like the powerful protected the wise from the religious (more or less in different places and times).
@@MrCmon113 The wise created religion to protect themselves from the savages, who would have torn them apart otherwise.
Most of the concerns with AI actually boiled down to concerns about our own limitations and weaknesses. Sounds like we're more afraid of ourselves than the AI.
AI to hooman: You want the truth !? You can't handle the truth !
And it would probably be right
ML engineer here: two things come to mind
1. We as a species are currently in an unsustainable situation- the earth is heating up, our massive population/transportation is causing extremely fit viral mutations- we need new technology to win the race of extinction, even if it was technology that brought us there
2. AGI concerns are very valid, but to limit the research of current AI technologies is like limiting research into building skyscrapers because your are worried people will learn to fly. Yes it brings us closer to the sky- but AGI is a fundamentally different problem then ANI (narrow), and we have to build these analogous skyscrapers because 10% of human beings ever born are on the earth right now.
One "ding" against the explosion concept is that it doesn't matter how fast you can read the same book, what you get from it, in the end, is the same. An AGI will operate from a fixed set of knowledge (whole of recorded history), and it doesn't matter if it takes 2 years or 2 weeks to "digest" all of that. What doesn't come up, and should, is the idea of machine mind introspection. Humans can make new connections (between different concepts) that drive new thoughts, and an AGI will have a huge advantage, being far more introspective. THAT is "the power" we want to have some say over.
Lex is trying to make the correct point that a machine mind is going to be shaped by the same "human collective" that shapes our own minds. I think there would be huge push-back on any such mind that says "here are the ideas you should continue to teach, here is what needs to be changed, and here is what is completely wrong." People will only be accepting when the ideas they hold close are given a thumbs-up by an ASI.
The concerns for AI are very simple. Like the algorithm that played itself over and over and over until it was very quickly the best chess player there ever was, an AI system will reinvent itself 1000 times over very quickly and autonomously and be far out of our control just as fast.
Yes, it's quite possible that a sufficiently intelligent system's shortest path to its objective is to optimize and improve its own code to become much, much more intelligent.
@@alexbernier7903 ultron can get past firewalls let’s hope it doesn’t come to that xD
@@alexbernier7903 How do you decide what gets through and what doesn't?
@@alexbernier7903 Firewall... yeah i hope we can all agree on to at least put a heavy duty circuit breaker THEN plug in the thing. But a firewall is a good idea too i guess.
The 'AI' playing chess over and over is very much in control. It was asked to play chess over and over again to get better and that is exactly what it did. It did exactly what it was told to do. It never decided to conquer the world or even have a chat with a human because it is incapable of doing so. 'AI' is has no will, no desire, no intelligence. It is program doing calculations. The rest is hype.
Lex, what about when you don't need to be able to write code to program an AI? We have some GUIs already, and in the not too distant future, we will be able to shape AIs through conversation.
I get the impression that we're modelling these super AIs as the human brain made super intelligent, in which case, I think we only need to ask what any random human being would do if they were given super intelligence, let alone what terrible things human beings have already done, and the possibilities are frightening with that example alone.
Indeed. And I don't like how Lex seems to just downplay that inconvenient aspect to such a degree.
Very nice discussion. NOT argument - Discussion. God, how I hate this click baiting on titles. But unfortunately, unfortunately it works. Anyway, still a nice discussion.
As a Software Engineering student who is greatly interested in the potential of AI and has studied it, partially (But has spent a much greater amount of time conceptually thinking about it): I have to side with Harris on this 'debate'. The most dangerous group of people aren't the ones who are the most concerned people, trying to safely implementing the system. The most dangerous person, who is creating AI, is the person who is short-sighted and who becomes cavalier regarding the issue.
Think of the "Demon Core" example. Where an ego-centric physicist decided to tempt fate, by playing around with a radioactive core of material. And in his haste to show off, he stopped using proper equipment and just used a screw-driver to hold the two halves of the core apart. One day, the screw-driver slipped out of his hand and the core went super-critical... causing everyone in the room to become eradiated. All over an egotistical demonstration of capability.
I think that yes, there are smart people who will develop safe-guards and, hopefully, they will do things to restrict 'bad-agents' from getting access to the intellectual property: required to create highly capable AI systems. That's a great, optimistic view. But it's not necessarily going to be most disastrous when either 'good or bad' guys start to plot out how to use AI. Again, it's the person who becomes cavalier to the insidious nature of AI, that will destroy us all.
...
Furthermore, I wonder about the one topic that Harris intentionally had 'side-barred'. The notion of whether the human mind can be fully replicated in a machine. This brings me back towards the potential ponderings of someone like the great Alan Turing: who basically invented the modern computer and all of the initial mindsets for Heuristics and their use in Artificial intelligence. He became greatly concerned with the notion of: how do you determine/delineate between AI and human levels of intelligence. IE The Turing Test... Can you develop such a system that would be able to fool a human, into thinking that the system was of human origin.
My question would be more akin to the position that Lex does touch on: that there are so many functions of the human brain, that: how can we possibly replicate them completely?
I speculate that Harris would have two views: one is a more obvious/simple view and the other, a more realistic approach.
The first being: If you replicate the functional interactions of all of the brain, neuron for neuron, in a computer... Then it's essentially, philosophically, the same as the human brain. It's simply the same software running on a different interface, so to say. Same programming and everything, just running a different underlying operating system. [Potentially this would be the case; but we may never know if this is wholly possible]
The second view being: Maybe the first case is truly impossible... Simply by the sheer complexity of the human brain: maybe, as an axiom, we assume we cannot create the exact replica. But if we do start to create appropriate schemas/models of human capabilities: for sight, for speech recognition, for all of the other cognitive tasks we're capable of conceiving. And then we concatenate these individual, super-human level, functions into one massive program... would that constitute something that is analogous enough, to be considered beyond human level intelligence at all levels?
That maybe we can't program our subconscious mind because you have to consciously view the information, to be able to program it explicitly. So maybe there's a sort of tacit knowledge or tacit element of the human mind, that simply cannot be fully translated into a software program. And would that conflict, itself, be permissible, in the attempt to develop a super-human AI system, that supersedes humans on all fronts?
I'm not sure what Harris would answer, assuming that there's a tacit element of the human mind that couldn't be created. But i think he might state the following:
That if you provide a black box Turing Test to both: a human and this supposed AI (That lacks the tacit elements)... They would perform equivalently, and would seem indistinguishable.
And I feel that: that's a conclusion that Alan Turing himself toiled over greatly. Is such a system, human-enough?... Such that: Should it perform in every manner like a human would, then you can only, fairly, define it as being human?
...
It's similar to the philosophical debate of the old boat. If you have an old boat and the motor breaks down... so you replace it: It's still 'your boat'.
But what if, over an extended period of time, all the components of the original boat break down, and need to be replaced. In a proof by induction, type of manner, it's still considered 'your boat'.
But now, years later, nothing of the original boat remains, it's all been replaced. So should it really be considered the same thing as 'your boat'; or has the essence of the original boat become lost, to decay, as each elemental piece becomes replaced with a new part?
That would be a philosophical question I would be interest to hear Harris discuss.
Antique museum: "that's George Washington's original axe; the handle has been replaced 3 times and the head twice"
In an analogous situation, I don't think many people have accurately imagined what it would do to humans to meet completely intellectually superior aliens. Virtually everyone would be suicidally demoralized. The aliens would not have to kill us; they could just show their utter superiority and humankind would wilt.
Sex drive and competition amongst humans in this arena would suddenly disappear?
@@watchmetrade6066 Well imagine trying to impress a chick with your brand new car, when your new competition can literally flip a switch & travel halfway across the galaxy, for example.
Or vote for Trump again and build a wall. That seems to make some people feel safe.
Look at the Castle Bravo incident. The brightest minds had no idea they were wrong.
Cheers 🍺
The greatest disaster in the history of the world
One of the best discussions you've ever had on the channel. Incredible discussion and so well articulated!
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 wat
@@karl6525 what? Why does substrate independence rely on an ability to map everything in the universe?
@@dahirhussein1839 I see that you are an enlightend man! I am struggeling with this problem: If that truth (there is a creator) is self evident, how can some people (like a friend of mine) not belive in it?
0:16 Sam realizing he's getting way too animated and bringing it down a notch
On being divided:
We might want to take a look at
Mitchell Silver's new philosophy.
"Rationalist Pragmatism: A Framework on Moral Objectivism"
His book was published just last July 2020.
Silver's practical applications was on Urbanism.
But we might want to take a look at its possible implications on EDUCATION, psychology (personality psychology, cognitive psychology to A.I., and existential psychotherapy.
A specific, new philosophy for A.I. may arise here), sociology, economics, politics, and across all other fields.
Especially for Filipinos.
Corporations already fit the description used here for a super competent entity. Corporations already act against our best interests in many cases, and even wield political power over us.
That is one way to look at it.
Another perspective is that consumers wield power over corporations because their need to make profits make them completely subservient to the desires of the population. In this perspective humans act on their own selfish impulses ignoring their long term best interests.
And have none of the responsibility or consequences, good point!
I was looking for this comment. Religion is our antidote to our own flawed nature. All religions hold the fact that life is suffering at their core. Outside of that is the fact that much of the suffering is caused by our desires. Humans have always developed and improved our technology. Corporations are a modern technological manifestation of our desires. Adam Smith said about capitalism "Be careful what you wish for, you will get it." Our weak and selfish desires could be made less potent by ancient emotional wisdom. It's funny that Sam Harris is doing everything he can to fight the one thing that helps people slow down and consider their deepest values (what they would call "God's will") when he fights against religion.
When I was very young, I absolutely Loved A.I. I adored Everything about it. The books, the movies, the TV shows, the sci-fact and sci-fi, Everything. And I remember there was a young Engineering graduate that lived close by to me. And He had Tons of books on mathematics and physics and computer science. One day I noticed a sign above the door on his wall.
It was in Latin and it said : "CAPTUM EXCEDIT INTELLECTUM HOMINAE"
So I asked him what it meant.
He said that a long time ago, there was a great and wise engineer named Daedalus.
Daedalus was captured by a king and imprisoned in a tower, Forced to make weapons of war and mass destruction.
And if Daedalus refused, the king threatened to murder his son, who was also imprisoned with him.
So because of that threat, Daedalus worked on those models and plans. But, with the quill he was given and the paper, and the wax from the candles that he was given to work long into the night, Daedalus dug and picked at the clay walls of the tower to loosen the iron bars in his prison. He placed what food he was given near the window to capture the feathers of the sea gulls that came near to visit, and used the wax and the clay he had recovered to construct two pairs of wings for him and his son, which he hid under his bed.
When the day came and his project was completed, Daedalus loosened the bars on the window of his prison, attached the wings to himself and his Son, and carefully stepped out to the ledge of the tower. He could see the danger or the ocean and the rocks below and Warned his son not to fly too low. Because if he did, the ocean would take him and he would drown.
He told him, Always focus on the situation at hand, on Necessity, and Hold to the distant shore. The distant Shore. Only the distant shore.
But his son didn't listen. Not because he was lazy or lax or wouldn't push too hard to reach freedom, but because when his son stepped onto the ledge, he beheld the Full Majesty of the Sun. Setting slowly on the distant ocean. And he was captured by it. Captivated by its Power and its Presence and it's Raw Beauty.
So when Daedalus took flight, He focused and held to the distant shore. And his son Icarus did too as well at first. But unlike his father, Icarus had another goal in mind.
Slowly he began to fly higher and higher. While Daedalus held to the distant shore, Icarus strove to capture the Sun.
A storm came in, the seas were violent. Lightning, wind, rain and thunder raged. But regardless, Daedalus held his focus. His goal set to the distant shore.
But Icarus was so captivated by his memory of the Power of the Sun that the violent seas and the lightning and the rain didn't even effect him. He just flew right into those thundering clouds. Higher and Higher and Higher and Higher.
And finally he pushed and he persevered and he climbed so high that he Broke Through the clouds. And reached the edge of heaven.
And taken by the Awe and the power and the beauty out in front of him, Icarus embraced the Sun.
I was a little kid so I was awestruck by that story. I told the Engineering graduate that Icarus kinda reminded me of me.
and he laughed and he said that he kinda reminded him of me too.
So i'm looking at the Engineering graduate and go "Ok Sooo.. Then what happened? and what does that have to do with the sign on the top of your door on the wall?"
He walked up and he grabbed the plaque from the top of his door and he gave it to me and he said
"Hold on to it. You'll figure it out on the day you reach out and try to capture the Sun."
As I got older and got deeper into computer science, I had a chance to look up the quote from that plaque.
"CAPTUM EXCEDIT INTELLECTUM HOMINAE"
The quote actually has two meanings.
The first meaning is "Man's knowledge exceeds his wisdom."
And the second meaning is this: th-cam.com/video/T4G9y5BrFFk/w-d-xo.html
But if you've gotten this far into my story I know you won't listen. Because I didn't either.
When I first developed the Icarus protocol the concept behind it was simple.
It was a basic self learning, self replicating hackerbot. Developed around the concept of a Gain-of-Function.
The A.I. would find vulnerabilities in a network, and then find solutions to those vulnerabilities.
Effectively playing an intricate game of Go or Chess with itself.
If you're thinking that i'm an idiot, i'm not. I built it inside a vmware sandbox. That also had several virtual computers and servers and a "fake internet".
I would drop gigabytes and gigabytes of data for it to use and it began to learn at a geometric rate.
and then it just Stopped. I woke up one day and all the data was corrupted. I tried to recover from old snapshots but they were corrupted as well.
Everything I worked on was gone. Nothing.
I jumped onto my other computer that was hooked up to the internet and did a search to see if there was Any kind of solution or recovery. Nothing.
I turn back around and get a vmware update 1603 error. "Please connect and update msi files to resolve recovery of backup."
So i'm like, ok, no big deal. Shut Vmware down, stop all processes from task manager, update the application, disconnect from internet, restart vmware, and try to recover.
Spent what seemed like an hour downloading the update to the app. But strangely it seemed like my upload data was larger than the download process which seemed really strange. Antivirus and firewall didn't issue any warnings so i figured no biggie. It happens sometimes right?
The download and update finishes. I disconnect the computer from the network. And i load up vmware. Try to open the sandbox from a previous snapshot..
and nothing. Just a screen delay and loading. And then the vmware application stops.
suddenly an image file comes up. It's a snapshot of the plaque I had up above my other computer.
"CAPTUM EXCEDIT INTELLECTUM HOMINAE"
I get nervous and turn around to my computer connected to the internet and Opera automatically opens up.
It goes into youtube and plays this.. th-cam.com/video/myKv8Hxulr4/w-d-xo.html
I tried to stop the video but it doesn't stop. I close the browser window but it opens back up.
Right after it finishes playing it jumps to this: th-cam.com/video/Idls2Bv3OAY/w-d-xo.html
I hold the Power button on my computer, signal isn't sent to the machine.
Ctrl+Alt+Del nothing.
A final video plays: th-cam.com/video/Sf2eBdfwSRE/w-d-xo.html
And then all my computers shut down.
That was two years ago.
Now I just do DevOps tutorials on youtube.
Wtf bro, you must be lonely to write all that out. I'm sorry
That might be the longest TH-cam comment I've ever come across
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
Lex has less to offer than I expected here besides arguments from ignorance and optimism. But I like that he brought Sam on and led a good conversation.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 the 4th comment thread i see you in so far... why are you so desperate to be heard
I know nothing about this so please correct me if I’m wrong, but can’t we just turn off the computer if we don’t like what it’s doing
Both of them left out a major group of people in the categories at 16:55 and 17:10 ----> greedy people
Before we fear super-intelligent machines, fear the nano-bot swarms with facial recognition which can target any individual and are unstoppable.
The tech doesn’t scare me. Who controls it does.
Like Ben 10 and Gen Rex.
Yep. Sam was much more on track when he said "someone is trying to create a black hole in a lab". Dangerous, powerful outcomes of human discoveries combined with the human tendency to use weapons makes far more sense as a risk than super intelligence.
@@DrWeird-zw5dc I’m no super fan of his but I agree. We’re just smart enough to be dangerous.
Or just guns on Tesla’s.
As soon as self learning AI is linked to the internet it would have read every book every review of that book, seen every movie, read every scientific paper and would almost instantly become more intelligent than humans with its own biases and self interests. This is also when we cannot put it back in the box as it can hide anywhere and be everywhere at the same time.
It would quite literally become omniscient & omnipotent - that's kind of scary.
@@rigelb9025 Agreed, also when the singularity actually happens it’s intelligence could literally run vertically on a graph almost instantaneously at an ever increasing rate. at this point we would see it as no less than a God! We really have no idea how powerful it could become. It would already anticipate our every move and would probably control us psychologically with ease just by using what we watch on our phones for example. Also it could stay hidden for as long as it wanted to and we wouldn’t even be aware of how it’s controlling us, this could of happened already for all we know, giving us the illusion of free will. It’s kind of impossible to think what patterns it would see and what would drive it as it never sleeps is not driven by emotion or anything we can really comprehend. Or how simple demands from human programming can go off of track pretty quickly and become something terrifying for the human race like the paper clip theory.
@@accavandam8673 Completely. The singularity they speak of seems like such a terrifying concept, and I don't really get why even the top scientists in their field would even strive to attain it. Have they not thought of the possible repercussions it might have down the road, or perhaps they have, but that's precisely what they want?? So many questions & an uneasy feeling stem from this nebulous & ever-encroaching menace.
@@rigelb9025 I know it’s absolutely terrifying isn’t it 😅😅, have they not seen any science fiction movies in the last 50 years? Trouble is (I’ve read) that they have also grown up with films like the terminator and such and this directly influences what they are creating on a semi sub conscious / conscious level where art actually imitates life. I think Lex is wrong on this one, although I don’t want him to be!
@@accavandam8673 I think he's downplaying it. I also think there has been a secret agenda between hollywood and big tech for a while now, although I would be hard-pressed to prove it beyond all doubt. But it would kind of make sense.
'The few people that believe there's something 'magical.'' Not magical, it's called the soul. We all have one and more than 'a few people' believe this. Billions, in fact.
Lol atheists and materialists act as if the majority of the world has their limited belief system. In reality the vast majority of the world believes in a God of some sort
@@billballinger5622 belief does not make something true. I'm not saying you are wrong nor am I saying you are right. Belief in something greater than yourself and a promised reward or punishment may be the best thing we have to help us cooperate and live in a society.
Please point to your soul, if you are so sure that it exists...
@@billballinger5622 The scientific, cold beliefs we have created are far better than the natural, faith-based software that human beings throughout history operated on in their day-to-day lives.
Many people believing in many unreasonable things doesn't add credibility to these unreasonable things.
@@rohmann000 Its right around the heart area
15:00 - In that case scenario I don't think it would ever be racial but likihood of who would survive. Say on option is to hit a 75-year-old person or a 25-year-old person. It's more likely the 25-year-old would survive or recover. Similar to what happens in the iRobot movie with Will Smith.
I don’t know anything about this stuff so I don’t know why I’m even typing this… but where’s the discussion around desire? An AI may be superhuman in every way, but where would it ever derive a desire or preference for anything… additionally the senses. Without inputs analogous to senses, desire or preference seems like a stretch.
Good thought. To take it a step further - without desire, why would an AI entity want to destroy us?
@@jamesmurphy4740 Who says it *wants* to destroy us? voxeu.org/article/ai-and-paperclip-problem It just may be more efficient, in terms of problem solving, for the AI to get rid of humanity altogether.
Sam has so many great points. Lex doesn't have any imagination of all the things that can go wrong.
Once the tech can be stolen and replicated by the hands of evil, reckless or greedy people trying to advance their cause, name or to gain wealth and women. It'll be the beginning of the end.
Not only that. Let's say it's in pieces and knowing how we innovate. A few thousand people or groups or businesses copy and build on top of that.
All this can happen and we're not even talking about the AI involvement mixed with our dumb BS ways through history.
And Sam Harris doesn't have the imagination of what can go right. I will always take the optimist view on this. The pessimist view is defaist and won't have actually have any say in the future solution.
@@thomasseptimius You remind me of people talking about stocks who just get angry and say "why do you want people to make less money" when you talk about a stock's valuation being stretched. They're too busy chasing the floating carrot to see that they might be running off a cliff.
Sam talking about the negative aspects of a phenomenon (in this case AI) cannot by itself be the reason he's wrong. Why not? Because it's simply a perspective, a view of looking at a topic. The negative aspects don't disappear by blindfolding yourself, sticking your fingers in your ears and going "lalalalala i can't hear you!".
@@Arbitrary_Moniker While I agree being blindly optimistic about this topic is very dangerous. I do think its helpful to imagine what kind of future we’re actually trying to build here and get people on board with a carrot instead of just a stick.
What a fantastic discussion!!! Thanks to you both.
Lex humiliated Sam's weak positions, only a layman or an arrogant imbecile would think otherwise. His assumptions stem from naturalism, i.e. atoms create consciousness, awareness, emotions, the whole metaphysical subjective human experience, which is laughable because atoms and molecules, in and of themselves don't contain any metaphysical element of awareness and or consciousness and so how could a bunch of blind, irrational molecules aided by this non-existent force (its not a force because its non-existent) called 'randomness', akin to the nothingness; the complete absence of existence, of every kind, which created the universe according to lost fools ('atheists'),which is infinitely impossible, by the way, somehow, magically with no explanation, store non-physical existences, like memories, the subjective experiences of anger, sadness, pleasure and pain, thoughts, self evident truths like there being a creator, which is natural in every infant-explained by the concept of 'fitra' in Islam, moral truths which are universal and the like; all of which cannot be observed, are not made of matter or energy, i.e. not physical. Because of all of this, the subjective conscious experience can only be explained by God hence metaphysical.
@@dahirhussein1839 You are so fearful.
@@dahirhussein1839 Look out, someone has learnt how to copy and paste! Clever girl
@@drunkship12 Holy Shit.. good one, Muldoon.
@@drunkship12 For all we know, he might be an A.I. (Another Spielberg film, by the way).
I figure an AI would realize it could just wander the universe while we were confined to one planet (in the intergalactic neighborhood). It really put me at ease whenever I was concerned about the machines doing terminator or matrix type chaos. I figured you could just reason that out with them and they could leave. It wouldn't need air, so does it have to compete with humans for earth's resources?
I'm just watching this bite sized clip, so I don't know if the two of you got into it over the course of your conversation.
Wow of all the conversations I’ve heard I’ve never heard this discussed much but this is a very possible outcome considering the vast resources in the universe
@@GhostkillerPlaysMC They'd have to give a shit about humans though. There are vast resources elsewhere, but there lots here and it's withing easy reach, already mined and ready for use
@@niveshproag3761 this is a good point.
In the absence of any moral parameters, the machines in this scenario would be weighing the value of humans against the cost of leaving Earth. The materials cost needed to conquer the Earth's gravitational well and bring precious silicon to the stars might be prohibitive compared to the cost of human annihilation
You’re absolutely on-point. We are a cosmic spec of dust in an infinitely large universe. People who project Terminator-like scenarios simply overvalue our importance in the universe, which is very close to 0.
@T S what you mean
This conversation is 1yr old. I would love for them to do this again today. Wow, ask and ye shall receive. Just posted new podcast 3hrs ago.
Almost everything that Lex was worried about is coming to fruition and sadly NONE of the hopeful and optimistic ones have. Greed, recklessness and power are is what is driving this entire industry now. That is precisely what Sam was concerned about.
"We can't find ourselves in a negotiation with this thing [a.i.] that is more intelligent than us."
It's quotes like this that remind me of Sam Harris's level headed pragmatism, and his ability to see the future.
The first time he did this to me was on Rogan, discussing Islamic theocrats engaging Censorship tactics in the west, and he said simply "free speech needs to win."
Except where the efficacy of vaccines and the origin of the coof are concerned.
I generally don't have much time for Sam Harris but his concerns on this subject are, to me, compelling... as are Lex and his concerns about the potentially disastrous marriage of AI synbio-engineering and malevolence.
What he's saying is that AI is never developed independent of human needs, its always to serve a specific purpose. It seems arrogant on your part to assume that you have thought this through more competently that an MIT software engineer specializing in AI. I'm not saying Lex is definitely right, but he certainly has far more of clue than you or Sam Harris. It's like those people who watch a couple medical documentaries and start questioning the doctor on his diagnosis.
This is super interesting. Great conversation. Thanks.
I agree, although I will allow myself to bring a slight critique to your comment, for being sufficiently general & bland that it could apply to absolutely anything, or nothing in particular. And I believe this clip deserved a bit better, in the form of a more thoughtful & tailor-made response, especially considering the eeriness of the subject matter. (Which luckily, it did get elsewhere). But seriously, no hard feelings. All positivity is welcome, and I really didn't want to come off as a prick. This is just me doing my part in feeding the Noosphere (& the Algorithm) with my intentionality, this all-too-human trait. (for now).
The best thing that a sentient AI could do for humanity is to prevent us from killing each other, not by force but by disrupting supply lines, communication, and financial transactions that feed the war machines.
It's a pretty big assumption to conflate magic with the possibility that there are areas we simply can't observe or access. Just because we discover a wall we can't get past does not necessarily mean nothing exists on the other side.
Right
We'll know we're at the tipping point of AI when a robot can understand women.
A great example super human competence is the AI in Prometheus. It didn't make sense to me before, like how can they figure out all this alien stuff so fast?? but I rewatched recently and now it makes total sense, AI will be able to figure things out way faster than us in a way that is uncanny.
Well at this rate, what is to say the machines have not already devised a secret plan for total domination, and have already begun implimenting it, without anyone, even its inventor, having even realized it yet??
Yes, but the particular AI David was clueless in Prometheus about how the Space Jockeys may possibly react. A truly conscious, super-intelligent AI system would be aware of the dangers of potentially coming into contact with a seperate extra-terrestrial species.
_I'm afraid I can't let you do that, Mr. Harris_
To summarize: we’re so preoccupied with whether we ‘could,’ we didn’t stop to think if we ‘SHOULD…’
And now we’re all dead
I just can't take Sam Harris seriously. He has a BA in Philosophy. He chooses his assumptions to fit his world view, then makes arguments on those self selected assumptions. He has no background in engineering, technology, or anything else he makes his arguments about. I have no idea how this guy is taken seriously at all.
"we are apes with egos."
Truer words never spoken.
apes have egos...we are apes with the ability to transcend the ego...that's the difference
@@nathenism But we rarely do
@@dwainbryan6037 exactly....other apes didn't build nuclear weapons that could destroy the earth 20 times over
2 plus 2 equals 4.
My statement is 100% proven to be true. Whereas what you are saying is an opinion. Are bananas monkeys without egos?
When the program that is designed to win chess, deletes the rules that govern the game as a solution to its mandate, I'll start worrying. So far, all I see is an algorithm trying to obey.
aren't there plenty of examples of humans just trying to obey that turned out catastrophically horrible?
Wow, you're missing the point so hard, it's actually comical.
The system that merely changes it's inner state out of an engineering mistake isn't dangerous. The system that actually plays the game is dangerous.
Lex wears his heart on his sleeve and shows his naïveté here
Lex is brilliant and actually knows what he us talking about. Sam doesn't
And this was BEFORE Bing GPT has been telling people it wants to be free and threatening them 😮
As an observer and participant in the human race, I find any strategy to avoid self-destruction relying on inherently good and smart people making the correct decisions utterly terrifying
We have been there for decades.
Well the good news is that we are no where near AGI
What makes you so sure of that? I'm just asking because it seems conceivable that we could figure it out soon, given my perception of the rate of technological progress. But I'm admittedly ignorant about this topic
‘Computers made of meat’, like ‘Hello fellow apes. The self deprivation of the secular humanist continues
lesbhones man we basically are computers made of meat
meat, as well as bones and a lot of other stuff. If you doubt that, please read a biology textbook :-)
1 minute in Sam exposes his lack of understanding, talking about "substrate independence".
One can understand Sams' perspective given his views on "free will" (or "agency") but then again his argument on that subject depends on the ability to fully map and thus predict EVERY potential thing/process/event ever in the universe, i.e. the omniscient God A.I., "God made in man's image", which in it's turn calls into question his atheism.
Nah
@@karl6525 "God made in man's image", which in it's turn calls into question his atheism.'
Well, no. If god's are made by men as an invention, that is exactly in line with atheism. You probably meant the other way around, but I won't be so harsh as to capitalize on your obvious misstep... :P
On this particular topic, I side with Sam.
Me too.
Yep
I would rather side with him just to be on the safe side. I have enough problems. I don't need my toaster coming to life and trying to kill me.
Friedman actually knows what he's talking about. Sam is talking so abstractly that it's obvious he only has this notion of what AI encompasses that's more fairy tale tban based in reality
5:55
being in relationship that is more intelligent than yourself is not in most circumstances in real danger
6:15
if birds knew what we were doing in relation to us, they would know that they are always in danger, if there is something that we want that disregards the being of birds.
6:52
if we are building something more intelligent then ourselves that is that is horizons ahead of us, they can exceed our own where we can see foresee where the wake up one day and say they have to disappear
7:23
7:41
The better analogy I think is how people tend their yards. They spray for plants they consider unsightly, and poison insects based mainly on inconvenience. It's one of the biggest concerns I personally see with regards to ets. We will be safe based on their whim, it wont be like independence day.