***** And I can run Linux on my PS2; it still doesn't make it a good idea. Go for optimization and reliability over pushing the limits of your system. Nobody wants their brain to crash.
You have a really good idea going there; I think it's good to base your ideas around current games in an attempt to create something original. I enjoyed both games and now I am very curious to see how you will develop this.
*facepalm* Do you really think we can just do that? Do you realize how much more difficult that is than just creating a thinking, feeling robot? Am I being trolled?
I have a question: can artificial intelligence think about the complications of its own existence and, refusing to have it obliterated, put faith in the existence of a god? (with or without having a biological brain)
Yes, people mistake Kurzweil's Law of Accelerating Returns (which deals with all forms of evolutionary systems, including technology) for Moore's Law, which is only about semiconductor circuits. Kurzweil's Law of Accelerating Returns basically covers as far back as the beginning of evolution. It's a good read; take a peek.
Excuse me James, but I need to clarify: Moore's Law isn't about a computer's processing power or its performance. His statement was about the number of transistors in a microchip, which would double every 18 months. Although they are related, processing power does not depend only on transistor count; it also depends on architecture and/or programming efficiency, and it's also affected by the task they're used for. So, computer processing power does not 'roughly double every 18 months'.
There was a really great video on Veritasium a while back about how we are reaching the limit of transistors built with classical mechanics, and how Moore's law is becoming invalid because of it. Currently, we can't build a processor with features smaller than an atom, because the electrons in the transistors quantum tunnel (which defeats the point of the transistor, effectively making the "processor" useless).
That will take its time, but in the end it will mainly be about understanding the brain exactly (and I mean totally, perfectly, in every single way) and then simulating that. We won't see that happening any time soon, and the costs of research will be tremendously high, but I believe it could be done within the next 100 years.
Moore's law states that the number of transistors on a chip doubles, not processing power. Not even logic gates [transistors are fit together to form logic gates]: there are diminishing returns on trying to configure transistors into actual logic gates, and the problem of actually making connections between these gates takes up more space still! So performance doesn't effectively double, unfortunately; plus it's flattening out. Sorry, I'm a computer engineer, so I felt obligated to make the correction ;) carry on
Consciousness simply means "being aware of what's around you". A character's AI in a game is already conscious of what's around it, in order to interact with it. Just like you are conscious of what's around you (while you ignore what's around you at a higher (or lower) scale, or even depending on your education level). The game's AI is only much, much, much less complex... but it has centuries to improve.
I see you haven't heard about I, Robot. Asimov wrote the three laws of robotics: 1 - A robot may not injure a human being or allow him to come to harm. 2 - A robot must obey humans, except when it conflicts with the First Law, and 3 - A robot must protect its own existence, except when it conflicts with the 1st or 2nd law. In the film (this uprising isn't in the books), the robots decided that humans are self-destructive, so to protect humanity they needed to rule it, killing some on the way.
I think I remember hearing that Moore's law is supposed to break down in the next ~10 years in conventional computing. Conventional computing meaning not quantum computing.
Mike: "Why is a laser beam like a goldfish? Neither one can whistle." From Heinlein's The Moon Is a Harsh Mistress (1966). An excellent read, as it deals with a computer who becomes self-aware.
To a point yes. If you have a computer that can start building its own code on top of the base programming, you have a learning system. It's going to be extremely complicated to build one of these though and I'm a huge doubter of the technological singularity with the state of artificial intelligence as is.
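To make the "building its own code on top of the base programming" idea concrete, here is a toy sketch in Python. All names here are illustrative, and a dictionary of rules is obviously nothing like real AI; it only shows the shape of a system whose behaviour grows at runtime without touching its original code.

```python
# Toy sketch (not real AI): a system that extends its own behaviour at
# runtime by adding "learned" rules on top of its fixed base programming.
class LearningSystem:
    def __init__(self):
        # Base programming: fixed, hand-written rules.
        self.rules = {"greeting": lambda: "hello"}

    def respond(self, stimulus):
        rule = self.rules.get(stimulus)
        return rule() if rule else None

    def learn(self, stimulus, behaviour):
        # "Building on top of the base": new rules are added
        # without modifying the original ones.
        if stimulus not in self.rules:
            self.rules[stimulus] = behaviour

ai = LearningSystem()
assert ai.respond("farewell") is None       # not yet learned
ai.learn("farewell", lambda: "goodbye")
assert ai.respond("farewell") == "goodbye"  # acquired at runtime
```

The hard part the comment points at is exactly what this sketch dodges: here a human still supplies every new rule, whereas a true learning system would have to generate and evaluate them itself.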
Recognise objects, engage in complex dialogue, be manually dexterous, understand social interaction. If those are the qualities that must be fulfilled for something to be human then a lot of humans fail at at least one, and a lot of animals can manage two (and for some even three) of those things. Personally I think the only thing a computer would need to be considered as a living thing is a sense of self preservation. If it was more human it might be curious too. The basic drive for most...
3:42-4:09 Maybe it's because, if we create something human-like that's a robot, it allows for the possibility that we ourselves are robots, similar to people being very defensive about negative truths about themselves, like people addicted to drugs. To admit we are addicted, or in this case robots, is scary to say the least.
Dear Mr. May, you only explained half of the uncanny valley. The other important part is the point where the robot has so many human features that we accept it as one of us. Although, I don't have any references for this...
As long as your computer has a microphone and it is not mechanically disconnected or otherwise not functional, it could be listening. But a computer that has lots of error messages is probably not one to worry about because after all, it's having issues on its own completing basic computer functions.
As it is currently known, we weren't designed. We evolved to this without following any instructions. All we had in our history was the goal of survival. A computer is designed by us to do a specific thing, and it itself doesn't particularly care.
From my knowledge, we have the ability to program robots with one or two emotions, so I had the idea of making a group of robot systems like this, but having them all feed through a more powerful system. For example, the main computer works out the emotion it feels and the sub-computers run that single emotion. The only failing point I can find with this is the link between these systems and the processing power required.
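The architecture described above (one main computer selecting an emotion, with dedicated sub-computers each handling a single one) can be sketched as a simple dispatch. Everything here is illustrative, not a real robotics API; it just shows the division of labour the commenter proposes.

```python
# Hedged sketch of the idea: the "main computer" works out which emotion
# applies, and a per-emotion "sub-computer" runs only that one emotion.

def fear_subsystem(intensity):
    # Dedicated handler for a single emotion.
    return f"retreat (intensity {intensity})"

def joy_subsystem(intensity):
    return f"approach (intensity {intensity})"

SUBSYSTEMS = {"fear": fear_subsystem, "joy": joy_subsystem}

def main_computer(stimulus):
    # The main computer "works out the emotion it feels"...
    if stimulus == "threat":
        emotion, intensity = "fear", 0.9
    else:
        emotion, intensity = "joy", 0.5
    # ...and hands off to the sub-computer for that single emotion.
    return SUBSYSTEMS[emotion](intensity)

print(main_computer("threat"))
```

The weak point the comment identifies shows up even here: all the interesting work hides in the link between the parts, i.e. how the main computer decides which emotion applies and with what intensity.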
Maybe the answer is right in front of us? Emotion is a product of evolution - it helped us survive by enabling us to form social bonds. Maybe the reason robots may never develop feelings is because they won't face the same adaptive pressures that our ancestors faced? Those adaptive pressures might also explain why we have such a hard time understanding emotions in the first place. We live in a very different time from our evolutionary ancestors. Emotions may be holdovers from an earlier era.
I always think: why not? Why should the material of which a creature is composed dictate what kinds of feelings or capabilities it can have? And who can say that the human brain, as a social tool, hasn't grown so skilled at recognizing human behaviour around it that it has erroneously recognized that behaviour in itself, and thinks itself human?
When I was just a little kid in the early 90s they said we were approaching this computing power wall. I think we will find this one is no more real than that one was once we actually approach it.
The capability to recognize objects, have emotions and so forth: these are all learned by the brain over many years. With the development of machine learning techniques, it is not that unlikely that robots will one day be exactly like humans. All that is needed are complex algorithms that imitate the learning process of the brain, and some more complex algorithms to use what is learned to interpret the input (which can then be learned again).
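As a tiny taste of "learned rather than hard-coded", here is the classic perceptron, loosely inspired by a single neuron, learning the logical AND function purely from examples. This is a toy under heavy simplification, nothing like a model of the brain, but it shows behaviour emerging from training data instead of explicit rules.

```python
# A perceptron learns AND from labelled examples: weights start at zero
# and are nudged toward correct answers on each mistake.
def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Update rule: move weights in the direction of the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nobody wrote "AND" into the program; the behaviour came from the examples. Scaling that principle up by many orders of magnitude is essentially what the comment is betting on.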
"It'll be able to understand social interaction, it'll be able to understand feelings..." What makes you think that stuff CAN be understood? Social interactions are messed up.
I like what was said about our distaste for robots that operate and appear extremely similar to humans. I think that would also be the case if we were to find extraterrestrial life forms that were similar to us in build and action, because they are, like you said, fundamentally inhuman. Maybe that's why, when we make "scary alien" movies, the aliens are built that way: that kind of invasion would be the most terrifying to us, because the closer they are to us, the harder they are to differentiate and protect ourselves from.
Moore's law may very well simply continue in the vein of quantum computers via qubits, as they still, fundamentally, rely on 'machinery', and are thus capable of being improved with greater experience. It will be interesting to see.
Moore's law/rule/theory states that the transistor count PER SQUARE INCH increases twofold every 12-24 months (depending on who has interpreted it). This says nothing about performance. Try taking two i7 processors and sticking them together, and see if your performance increases. Increasing transistors per square inch basically means you shrink the i7 to half its size, not that the same processor has more transistors. Bear in mind this is rather simplified.
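The corrections above can be captured in one line of arithmetic: the "law" is a claim about transistor count growing exponentially over time, nothing more. The 18-month doubling period and the starting count below are illustrative assumptions, not measured data.

```python
# Transistor count under an assumed doubling period; note that nothing
# here says anything about performance, only count.
def transistors(start_count, years, doubling_period_years=1.5):
    return start_count * 2 ** (years / doubling_period_years)

# e.g. starting from 1 million transistors, 15 years = 10 doublings:
count = transistors(1_000_000, 15)  # a 1024x increase in count
```

Whether that 1024x count increase yields anywhere near 1024x performance depends on architecture, interconnect, and workload, which is exactly the commenters' point.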
Aaah his voice is so calming and nice to listen to! But technically, if we were able to re-create certain human instincts in artificial intelligence and give them the ability to learn and evolve, to grow like we do, then they could be classified as capable of emotions, yes? But it's just way too complicated for us to do right now?
Freja Rößle Yeah, I like May too :) The human brain is ridiculously complicated, so yeah, we're far off. But If we did create an AI complex enough to accurately mimic human development and behaviour and exhibit emotion, then we would have as much proof of it experiencing *actual* emotion as any fellow man - and that's not all that much... food for thought.
We have different words for a reason. A robot isn't merely "a machine", it's a specific kind of machine. Just as a cell is a specific kind of machine. And, in the case of robots, the definition pretty much hinges on them being able to do the job of a human while not being human, so to say that "technically humans are robots" doesn't make much sense; by definition the two are mutually exclusive.
I think a common misconception with AI is that human like emotions are going to be a capability. At best they will have predetermined programmed reactions to certain situations based on what they are told to feel, which is arguably just as human as any sociopath.
"Human brain is the most sophisticated machine on Earth."
-Human brain
Illogical Statement, Exterminate Humon.
LOL! Thats so true!
***** It is if you think on a molecular level.
***** The more I think about it, you kind of have to think on even smaller scale. Otherwise, according to the definition you provided, many electrical devices are not machines (for instance CPUs, lasers, radio transmitters, immersion heaters, electromagnets, USB flash drives, ...)
In that case however, almost everything can be considered a machine. So in the end, it comes down to our intuition / ingrained notion. (Epistemological nihilism wins once again.)
neuron1618 .... Most sophisticated? The human brain? The human brain is nothing. You don't even remember what your thinking was like when you did not know language... If the brain is the most sophisticated machine, it is the brain of a genius, and a genius is not human but the next step of evolution.
If the brain were so simple that we could understand it, it would be so simple that we couldn't.
Nice.
C Martinez Played Civilization V recently I presume..?
You presumed correctly!
There has to be a name for this paradox.
@chasemiller7974 I agree
You forgot the point beyond the uncanny valley where the object in question becomes so human that it is no longer creepy, as it fits in with the human look perfectly (and by perfectly I mean that from a distance or similar point of misinterpretation a robot and a human would look alike).
how about anime?
That falls in the "Stylised" area, which is before the uncanny valley, where it looks human but is easily discernible from real life, being animated and all.
Thanks for taking part in the super collab, we have been fans for a while! :D
"Intel ... dundun dundun"
😂😂😂😂😂😂
This channel and ManLab have been my favourite viewing this year.
James May and Simmy. Brilliant combo.
A robot that can make expressions with a face is not "feeling" , it is just a piece of plastic that looks like a human face.
We CAN create something that mimics our reactions to our own feelings, but not a machine that actually "feels"
Keep telling yourself that.
But why is that? Aren't we just machines?
Humans are just biological machines. Yet we have emotions.
Thank you all time top 10's for bringing me here and thank you head squeeze.
I've had multiple conversations with common chatbots, and they have all hinted at exterminating humanity.
+Ultra Window
I don't blame them.
+tabularasa0606 I don't at all either. I look with much disdain at many humans.
Came here from Alltime 10s in year 2020 and this channel name has been changed from Head squeeze to BBC Earth Lab.
Those who ask the question "will robots ever develop feelings?" seem to misunderstand just what human feelings are in the first place. Human beings ARE robots, biological in nature, and we have developed feelings over the course of our evolution. If robots do not develop emotions, it will only be due to our lack of ability to program them adequately and not due to some limitation imposed by their nature. We will cross the boundary into biological robots very soon. People like to think emotions are something special because they want to feel unique, better than, or at least different from, the rest of the universe. Once you take away that delusion and see emotions in their true context, it becomes a question of "will humans be able to program robots sufficiently complex to develop feelings?"
I could watch this guy all day. James may= legend
The singularity is 2042 by the way.
There are multiple estimates for when the singularity will occur.
I could listen to James May all day :)
Computer, you know you love me really...
Dude , This episode tripped me out man. Thanks
I'd like to make a response to the "four requirements for AI to be like us": 1) Must recognize objects. 2) Must engage in complex dialogue. 3) Must be manually dexterous. 4) Must understand social interaction. For #2, there are plenty of humans who are thoroughly unable to accomplish this qualification (and I tend to include myself in the "sub-par" category). For #4, there are myriad human beings who cannot master social interaction. Many subcategories of autism, for example, can severely hamper social abilities for human beings. In the case of #1, there are very few people who have difficulty recognizing things and people and categories of things, but there are some individuals who cannot recognize objects, and in some rare cases, faces. In #3, dexterity is considered essential for a robot to effectively simulate human behavior. Well, yes. People move about. So a robot trying to simulate people should likewise move about. But there are also plenty of humans who can't even move their limbs (Stephen Hawking, for example). What happens if a human being lacks one of the "requirements for being a human"? Can a human lack all four and still be a human? Are these qualifications really what make us human?
I said the same about four weeks ago (though unfortunately my version wasn't so well worded).
The only requirements a program would need to have for me to consider it a true AI would be the ability to make a decision without knowing all the facts, the ability to learn, the ability to form relationships with other life and some kind of curiosity.
As I understand it, programs already exist that can write code; they just can't improve designs, since they aren't actually intelligent.
All I needed to do was see James May and I subscribed.
I think we would have to fully understand human emotions before we could replicate them artificially. Robots would probably not be able to develop them by themselves.
U r right
Only one problem with that: we don't even understand how a critical mass of neurons, possibly in combination with random mutations in genes influencing brain structure, caused emergent behaviours like consciousness or complex emotions in the first place. The worst part of it all is that, despite our knowing ignorance of this, we still have the hubris to use biologically inspired evolutionary neural networks and assume that our ignorant bumbling efforts couldn't possibly stumble on similar emergent systems.
DNA didn't "think" that it would one day bumble its way into creating a sentient entity capable of awareness of its own existence, yet it did it anyway; nor did any of those early sentient organisms "think" they would stumble on sapience and start producing complex machines of their own, separate from themselves, but that happened too. And now those sapient organisms don't think that their ignorant bumbling with machine simulations of those same basic building blocks will reach the same path, but are they right this time? Guess we will find out one way or another.
I can tell where you are going with this, but like you said, it would ultimately fall short.
The thing about feelings or emotions is that they are affected by chemicals, yes, but they are governed by a complex array of other factors as well. Mostly, our trouble in recreating them boils down to our lack of understanding of how WE feel in the first place. Take depression: it's a self-amplifying loop of sadness, and we know some chemicals that will curb it slightly, but we cannot at will create or undo it.
so that could be why my laptop tends to work everytime i hold a knife to it...
Good idea, I should try that sometime...
Hey James! Can you make an episode about Why do we like bitter drinks like beer and coffee? I didn't like either of those when I was younger, but now I love them.
I think this is possible because the brain is already a machine. It uses electrical signals just like a computer processor does.
by "machine" we mean made of metal rather than biological substances - not the vitamins that make us up, but rather iron and steel.
Marcel Troscianko
what major difference does it make?
brandoGTR48 ... Basically says that your brain is not a machine at all? i.e. The opposite of what you said? That is a fairly big difference.
Marcel Troscianko
We are still machines. It doesn't matter what we are made of.
Cheers mate, I really like following the creation of games that have been indie developed
I'm pretty sure, with our attempts to create sentient machines, we may be the first species to cause our own extinction.
I agree, though chances are a good few of those civilizations will no longer be around as a result.
Agent Vengeance Or we would evolve at a record time, living in peace with our fellow AIs :)
Freja Rößle That's the way I'd want it actually :)
Agent Vengeance Me too! Although extinction could be good too. Sure, the human race may cease to exist as 'humans', but maybe OUR next step in evolution is just artificial intelligence? That would be so cool :D
Freja Rößle I guess there's also a chance that humans could adopt cybernetic upgrades while machines become more human by becoming biomechanical constructs. Ergo, a single race of cyborg beings.
Dear James May, I have a very pretentious history teacher who requires not just that all the material be known (check) but also very good storytelling (not so check). I implemented your style of explanation and guess who finished with top marks :D. Anyway, I really enjoy your work both here and in Top Gear, which is my favorite show. Cheers, Georgi Trichkov, 17, Bulgaria
What if human feelings are fake?
They are in a way that they are pre-programmed/taught, and are only there to help our survival - and since we are intelligent enough to have pulled ourselves out of the vicious cycle of nature/evolution, emotions have become pretty much obsolete. We no longer have a need for emotions - scary, isn't it?
The little jingle after he said intel was priceless!
The singularity is a hypothetical moment in the future that occurs soon after humans develop the first AI that attains human intelligence and self awareness. Because this hypothetical entity would benefit from the numerous advantages that computer processing has over an organic brain, its intelligence upon inception would already be greater than any individual human being. Should the entity task itself with creating a superior intelligence, exponential progress would continue and soon an individual intelligence would exist that surpassed all of mankind. At this point a singularity (or event horizon) begins, beyond which we cannot predict the trajectory of the human race because progress on Earth will no longer move at a pace that is conceivable for us biological beings. Ray Kurzweil believes that this represents a continuation of biological evolution and a milestone on the same path of exponential progress that has occurred since life began on Earth.
The main concern about the singularity is that it may be a turning point in human history in which computer technology becomes the primary life form on Earth, pushing humanity into slavery to the computers.
In a more realistic sense, the computer overlords would probably find humans to be rather useless and drive them to extinction, along with all other impractical species on the planet. If the computers developed human-like emotions, we would get an outcome that would either be the enslavement of humans out of sheer spite for the species or a benevolent nature reserve for humans to live out their lives in paradise.
We won't know until the singularity actually happens.
No matter how evolved a sentient species of AI robots becomes, they will still use Indian tech support, so there's still hope for humanity.
Mihai-Ciprian Ghilinta You win the internet with your comment. Brilliant.
commode7x Transhumanism is the answer: simply merge biology with technology. Both systems have advantages, so it is only logical, and if humanity phases out into technology when we are both and inseparable, who cares? We are still the progenitors, and if merged we witness, or are integrated into, the transition.
Haven't watched it through yet, and I must say, I'm already impressed by the question. Very curious, and not many people talk about it even though it's a pretty common issue. OK, going to watch the vid now.
I could be wrong, but I think emotions are just chemical reactions.
I think you're right. I'm not some computer genius or anything, but I'm pretty sure computers as we know them can't have emotions. If we could create cyborgs, though, then we could.
Yes, mostly chemical reactions, but these reactions are triggered by events around us, and that is the problem. Full artificial intelligence requires that a computer decide when an event is suitable to trigger this "chemical reaction" and experience the emotion. The chemical reactions also trigger physical changes in the body (a speeding heartbeat, smiling, butterflies in your stomach, etc.), which would not be present in a robot.
Youssef Khaled What triggers things is the least problematic aspect, though, since at a basic level it can easily be programmed what triggers what.
The more complex thing is for an actual artificial intelligence to change and evolve through causes and effects the way complex organisms do: not just producing an effect from a cause, but also changing its own inner workings because of it, so to speak. Theoretically that's no issue, just practically, since neuroscience and robotics aren't at a level to realize it.
So a "robotic organism" must be made in a way that it learns and evolves, and that is the more complex issue. Programming a reaction to an impulse is not a problem at all; doing that at the complexity level of organisms with a cognitive/nervous system is the issue, I think.
Thank you for the great videos I learn a lot from them :)
My brain... is 10000000 GHz. If only I could run Windows on it...
When you figure that out just make sure you get 7 and not Vista.
***** 7 is still faster and better optimized....
Humans with Windows are more likely to die earlier because of the BLUESCREEN OF DEATH!!!!
And I can run Linux on my PS2, it still doesn't make it a good idea. Go for optimization and reliability over pushing the limits of your system. Nobody wants their brain to crash.
Vista more than Windows 8? Of course... Windows 8 is "shit". Still, 7 is a bit faster than Vista...
You have a really good idea going there, I think its good to base your ideas around current games in an attempt to create something original. I enjoyed both games and now I am very curious to see how you will develop this.
Here's a good idea -- biologically engineer a human brain, then connect it to robotic components. Yes, I know I'm a heartless bastard.
Nuno Monteiro I am quite serious "friend" lol.
*facepalm* Do you really think we can just do that? Do you realize how much more difficult that is than just creating a thinking, feeling robot? Am I being trolled?
Nuno Monteiro Well, I would have assumed you knew you were being trolled lol.
That's reassuring....
I have a question: can an artificial intelligence think about the complications of its own existence and, refusing to have that existence obliterated, put its faith in the existence of a god? (with or without having a biological brain)
Came over from Alltime10's. Subscribed... : )
Sooo... psychopaths aren't human then?
Gaming4Justice They are in fact robots in disguise ;)
Waleed Sha That explains it :P I'm always told I'm a psychopath.
+Gaming4Justice YOU SYNTH!!
Tanner Eanes I chose the Institute
+Waleed Sha So you're saying that there may be more than meets the eye about them? ;)
Yes, people mistake Kurzweil's Law of Accelerating Returns (which deals with all forms of evolutionary systems, including technology) for Moore's Law, which is only about semiconductor circuits. Kurzweil's Law of Accelerating Returns basically covers as far back as the beginning of evolution. It's a good read; take a peek.
1:24 hahaha that was brilliant!!!
If this robot intelligence thing happens, I really hope that we teach them empathy FIRST
Excuse me, James, but I need to clarify: Moore's Law isn't about a computer's processing power nor its performance. His statement was about the number of transistors in a microchip, which would double roughly every 18 months. Although they are related, processing power does not depend only on transistor count; it also depends on architecture and/or programming efficiency, and it's also affected by the task the transistors are used for. So, computer processing power does not 'roughly double every 18 months'.
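To put rough numbers on what that doubling claim would mean if it did hold (a sketch only; the starting transistor count and the 18-month period are illustrative assumptions, not any real chip's spec):

```python
def transistors_after(years, start=2_300, months_per_doubling=18):
    """Project a transistor count forward, assuming it doubles
    every `months_per_doubling` months from an illustrative start."""
    doublings = (years * 12) / months_per_doubling
    return start * 2 ** doublings

# Three years = 36 months = exactly 2 doublings, so the count quadruples.
print(transistors_after(3))
```

The point of the correction above stands: even if the count follows this curve, performance does not, because architecture and workload matter too.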
There was a really great video on Veritasium a while back about how we are reaching the size limit of transistors that behave classically, and how Moore's law is becoming invalid because of it. We can't shrink transistors down to the scale of a few atoms, because the electrons in them quantum tunnel straight through (which defeats the point of a transistor, effectively making the "processor" useless).
I love James May, good job, as always. Where are the other 2 irritatingly awesome petrol heads?
That will take its time, but in the end it will mainly be about understanding the brain exactly (and I mean totally, perfectly, in every single way) and then simulating that.
We won't see that happening any time soon, and the costs of research will be tremendously high, but I believe it could be possible to be done within the next 100 years.
Moore's law states that the number of transistors on a chip doubles, not processing power. Not even logic gates [transistors are fit together to form logic gates]: there are diminishing returns in trying to configure transistors into actual logic gates, and the problem of actually making connections between these gates takes up more space still! So not effectively double, unfortunately; plus it's flattening out.
Sorry, I'm a computer engineer, so I felt obligated to make the correction ;) carry on
this video had me petting my laptop, I hope it knows I always loved it
This was brilliant.
Consciousness simply means "being aware of what's around you". A character's AI in a game is already conscious of what's around it, in order to interact with it. Just like you are conscious of what's around you (while you ignore what's around you at a higher (or lower) scale, or even depending on your education level). The game's AI is only much, much less complex... but it has centuries to improve.
How do you make the effects in this video so damn amazing?
You're right. All the comments that people have written here have given me new insights. I'm starting to believe that it's just a matter of time.
Great video!
4:29 subbed right there. BTW, when my comp freezes I (usually) give it its sweet time to load. And if there's an error message I blame the program.
I see you haven't heard about I, Robot. Asimov wrote the Three Laws of Robotics: 1. A robot may not injure a human being or allow one to come to harm. 2. A robot must obey humans, except when that conflicts with the First Law. 3. A robot must protect its own existence, except when that conflicts with the First or Second Law.
In the film (this uprising isn't in the books), the robots decided that humans are self-destructive, so to protect humanity they needed to rule it, killing some along the way.
Best news ever!!! thanks!
TOP GEAR!!!! JAMES MAY!!!
I think I remember hearing that Moore's law is supposed to break down in the next ~10 years in conventional computing. Conventional computing meaning not quantum computing.
JAMES MAY GET BACK ON TOP GEAR DAMNIT
You don't need feelings for the singularity but you do need them when you want your robots to interact with humans.
Thank you Im going to subscribe :)
STAY ON TOPGEAR JAMES
Mike: "Why is a laser beam like a goldfish? Neither one can whistle."
From Heinlein's The Moon Is a Harsh Mistress (1966). An excellent read, as it deals with a computer that becomes self-aware.
May forgot to mention that the "uncanny valley" graph goes back up once robots become lifelike enough that we can't tell the difference.
To a point, yes. If you have a computer that can start building its own code on top of its base programming, you have a learning system. It's going to be extremely complicated to build one of these, though, and I'm a huge doubter of the technological singularity given the state of artificial intelligence as is.
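A minimal sketch of that "code on top of the base programming" idea: here the "code" the system extends is just a rule table, which is purely illustrative and nothing like the complexity a real learning system would need. All names are made up.

```python
class LearningSystem:
    """Fixed base behaviour plus a rule table the system extends itself."""

    def __init__(self):
        self.base_rules = {"hello": "hi"}   # the unchangeable base programming
        self.learned_rules = {}             # rules the system adds at runtime

    def respond(self, prompt):
        # Base programming always wins; otherwise fall back on learned rules.
        if prompt in self.base_rules:
            return self.base_rules[prompt]
        return self.learned_rules.get(prompt)  # None until it has learned one

    def learn(self, prompt, answer):
        # The system rewrites its own future behaviour based on experience.
        self.learned_rules[prompt] = answer

bot = LearningSystem()
bot.learn("bye", "goodbye")
```

The gap between this toy and a genuine self-improving intelligence is, of course, exactly the commenter's point.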
2:26 Did anybody else recognize the creepy tune that starts playing at 2:26? It has some reference to Half-Life 2 :D (the G-Man on the unplugged TV)
this guy's voice is too good to have it turned robotic
YOU HAVE AWESOME VOICE!!!
That's why I love Vsauce. It always opens doors to impressive people and subjects.
I love that teeth-bot next to the speaker :)
Recognise objects, engage in complex dialogue, be manually dexterous, understand social interaction.
If those are the qualities that must be fulfilled for something to be human then a lot of humans fail at at least one, and a lot of animals can manage two (and for some even three) of those things.
Personally I think the only thing a computer would need to be considered a living thing is a sense of self-preservation. If it were more human it might be curious too. The basic drive for most...
Nice shirt James!
3:42-4:09 Maybe it's because, if we create something human-like that's a robot, it allows for the possibility that we ourselves are robots. It's similar to people being very defensive about negative truths about themselves, like people addicted to drugs. To admit we are addicted, or in this case robots, is scary to say the least.
Dear Mr. May
You only explained half of the uncanny valley.
The other important part of the uncanny valley is that when the robot has so many human features, that we have accepted it as one of us.
Albeit, I don't have any references for this...
As long as your computer has a microphone and it is not mechanically disconnected or otherwise not functional, it could be listening. But a computer that has lots of error messages is probably not one to worry about because after all, it's having issues on its own completing basic computer functions.
As it is currently known, we weren't designed. We evolved to this without following any instructions. All we had in our history was the goal of survival. A computer is designed by us to do a specific thing, and it itself doesn't particularly care.
1:19 You got Moore's Law wrong, he didn't say the processing power would double, just the number of transistors on integrated circuits.
From my knowledge, we have the ability to program robots with one or two emotions, so I had the idea of making a group of robot systems like this but having them all feed through a more powerful system. For example, the main computer works out the emotion it feels and the sub-computers each run that single emotion. The only failing point I can find with this is the link between these systems and the processing power required.
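The architecture described above can be sketched very roughly: single-emotion subsystems each score an event, and a main controller picks the strongest reaction. Everything here (class names, trigger words, the scoring rule) is a made-up illustration of the idea, not a real emotion model.

```python
class EmotionModule:
    """A sub-computer dedicated to a single emotion."""

    def __init__(self, name, triggers):
        self.name = name
        self.triggers = triggers  # words this module reacts to

    def score(self, event):
        # Crude intensity measure: count trigger words in the event.
        return sum(1 for word in event.split() if word in self.triggers)

class MainController:
    """The main computer: delegates events and keeps the strongest reaction."""

    def __init__(self, modules):
        self.modules = modules

    def feel(self, event):
        best = max(self.modules, key=lambda m: m.score(event))
        return best.name if best.score(event) > 0 else "neutral"

controller = MainController([
    EmotionModule("joy", {"gift", "win"}),
    EmotionModule("fear", {"danger", "dark"}),
])
```

The commenter's worry maps directly onto this sketch: the `feel` call is the link between the subsystems, and in a realistic version that coordination would dominate the processing cost.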
I DIDN'T KNOW JAMES HAD A YOUTUBE CHANNEL, INSTA SUBSCRIBE
0:50 made me subscribe - hilarious!
I don't know if I want an emotionally detached AI running my country, kind of like... I, Robot, sad as it is to use that movie as an example.
I clicked for James May!!!
Maybe the answer is right in front of us? Emotion is a product of evolution - it helped us survive by enabling us to form social bonds. Maybe the reason robots may never develop feelings is because they won't face the same adaptive pressures that our ancestors faced?
Those adaptive pressures might also explain why we have such a hard time understanding emotions in the first place. We live in a very different time from our evolutionary ancestors. Emotions may be holdovers from an earlier era.
I always think: why not? Why should the material of which a creature is composed dictate what kinds of feelings or capabilities it can have?
And who can say that the human brain, as a social tool, hasn't grown so skilled to recognize human behaviour around it, that it hasn't erroneously recognized it in itself, and thinks itself human?
So sorta like a prequel to the Terminator? It could be an interesting concept if you can nail it correctly.
When I was just a little kid in the early 90s they said we were approaching this computing power wall. I think we will find this one is no more real than that one was once we actually approach it.
The capability to recognize objects, to have emotions, and so forth: these are all learned by the brain over many years.
With the development of machine learning techniques, it is not that unlikely that robots will one day be exactly like humans. All that is needed are complex algorithms that imitate the learning process of the brain, and some more complex algorithms to use what is learned to interpret the input (which can then be learned again).
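As a toy illustration of "algorithms that imitate the learning process of the brain": a single perceptron, about the simplest brain-inspired learner there is, can pick up the logical AND function from examples by nudging its weights. This is a minimal sketch with made-up parameters, many orders of magnitude away from anything human-like.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a two-input threshold unit from labelled examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled examples of logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

After training, `predict(w, b, 1, 1)` fires and the other three inputs don't: the behaviour was learned from data rather than programmed in, which is the commenter's whole point in miniature.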
I found the subscription fish joke rather delightful and cunning.
"It'll be able to understand social interaction, it'll be able to understand feelings..."
What makes you think that stuff CAN be understood? Social interactions are messed up.
Question suggestion "why/how do we get headaches" ?
So the question, perhaps, is whether we are truly conscious or whether we too are just software. It's getting very philosophical :)
James May hasn't been on Top Gear for 20 years. The show's been running that long, but he joined fairly recently.
I like what was said about our distaste for robots that operate and appear extremely similar to humans. I think that would also be the case if we were to find extraterrestrial life forms that were similar to us in build and action because they are, like you said, fundamentally inhuman. Maybe that's why, when we make "scary alien" movies, the aliens are built that way: that kind of invasion would be the most terrifying to us the closer they are, because they are harder to differentiate and protect ourselves from.
Biggest collaboration in YouTube history? What about the Project for Awesome?
Pretty much the first thing I thought of when I saw this: Cylons, now we just need awesome Battlestars. :)
4:37 - Really freaky thought!
Moore's law may very well simply continue in the vein of quantum computers via qubits, as they still, fundamentally, rely on 'machinery', and are thus capable of being improved with greater experience. It will be interesting to see.
Actually, the size of a chip needed for a certain performance shrinks; however, chip size stays the same, thus performance increases.
Moore's law/rule/theory states that the transistor count PER SQUARE INCH increases twofold every 12-24 months (depending on who has interpreted it). This does not say the least about performance. Try taking two i7 processors and sticking them together, and see if your performance increases. Increasing transistors per square inch basically means you shrink the i7 to half its size, not that the same processor has more transistors. Bear in mind this is rather simplified.
Aaah his voice is so calming and nice to listen to!
But technically, if we were able to re-create certain human instincts in artificial intelligence and give them the ability to learn and evolve, to grow like we do, then they could be classified as capable of emotions, yes?
But it's just way too complicated for us to do right now?
Freja Rößle Yeah, I like May too :)
The human brain is ridiculously complicated, so yeah, we're far off. But If we did create an AI complex enough to accurately mimic human development and behaviour and exhibit emotion, then we would have as much proof of it experiencing *actual* emotion as any fellow man - and that's not all that much... food for thought.
We have different words for a reason. A robot isn't merely "a machine", it's a specific kind of machine. Just as a cell is a specific kind of machine. And, in the case of robots, the definition pretty much hinges on them being able to do the job of a human while not being human, so to say that "technically humans are robots" doesn't make much sense; by definition the two are mutually exclusive.
I think a common misconception with AI is that human like emotions are going to be a capability. At best they will have predetermined programmed reactions to certain situations based on what they are told to feel, which is arguably just as human as any sociopath.