A couple things I forgot to add...
Saying AI learns at a much slower rate than humans is correct in some cases but wrong in others. To say a human learns to drive or masters a game in minutes is wrong; it takes millions of stored patterns to do those tasks, so the human walks in with a lifetime advantage. You can compare the two, but it is misleading here, since one is task-based and can be changed with programming while the other keeps the same structure for every task. A program may take longer to learn a game, but it can't turn around and cook supper with the same underlying structure that runs the game. Task-based AI will always outperform "humans" at its one task, partly because we have to cook supper too. So while you can compare the two at certain points, they are different and need to be treated that way.
Learning to walk, talk, and go to school took more time than it took to program the code, for example (ignoring the years of science that went into it). And which is smarter... the person or the machine? So I keep them separate.
And as for object and image inference: the reason my mind chose to rebuild the image instead of looking for a whip and chair or a leash was that it had no positive associations telling it the image was correct; nothing in my mind's association processes said the image should be there. Because of that it returned a negative and redrew the scene instead of taking what was input and running with it.
So for image inference you grab the closest matching objects and draw the image. The mind then checks whether anything in the image returns a negative, and if not, it accepts the image as positive.
To do that you need a data set similar to the mind's, or you have to build one. The problem is that we can move around, change viewpoint, and so on to figure a scene out, while a static image cannot, and we also have a lifetime of data behind us. That often makes it difficult.
So basically you do basic inference to grab the nearest matching object and then run a check against past experience. You also have to handle the other things that pop up, such as when the system accepts things it should not have. The mind learns to do all of this because we are used to our inputs giving us false perceptions of the world, so we naturally filter out the difference, at least most of the time. But teaching robots to ignore the inputs they should ignore, the way humans do, is not on the development list yet.
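As a rough illustration of that grab-the-nearest-match-then-check idea, here is a minimal sketch. The object memory, the similarity score, and the plausibility rules are all hypothetical stand-ins, not anything the mind (or any particular system) actually uses.

```python
# Toy sketch: nearest-match inference followed by a consistency check against context.
# All data structures here are illustrative placeholders.

def similarity(observation, obj):
    """Toy similarity: count of shared features (stand-in for real perception)."""
    return len(set(observation["features"]) & set(obj["features"]))

def nearest_match(observation, memory):
    """Return the stored object most similar to the raw observation."""
    return max(memory, key=lambda obj: similarity(observation, obj))

def consistent_with_context(obj, context):
    """Check the guess against past experience: does anything return a negative?"""
    return not any(rule(obj, context) for rule in context["negative_rules"])

def rebuild_from_parts(observation):
    """Fallback: reconstruct the scene from individual features instead of the whole."""
    return {"label": "unknown", "features": observation["features"]}

def infer(observation, memory, context):
    guess = nearest_match(observation, memory)
    if consistent_with_context(guess, context):
        return guess                         # accepted as positive
    return rebuild_from_parts(observation)   # negative returned: redraw from details

memory = [
    {"label": "dog",   "features": {"rounded_shoulders", "dark", "low"}},
    {"label": "chair", "features": {"legs", "seat", "back"}},
]
context = {
    # a dog in this hallway at this hour would be implausible
    "negative_rules": [lambda obj, ctx: obj["label"] == "dog"],
}
observation = {"features": {"rounded_shoulders", "dark", "pocket_pattern"}}
print(infer(observation, memory, context))   # nearest match "dog" is rejected, scene is rebuilt
```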
But as far as I know, that is how the mind does it. I won't say AGI, because that doesn't exist. Learned it all with bubble gum and string, I did. Lol.
For instance, if you looked up and saw a sailboat in the sky... you would do a double take, because your perceived state says it should not exist. Then you look again and find out it is a cloud, a telephone pole, and the neighbor's sheets hanging in the wind. But if on the double take you cannot find a negative, you just accept it, even if you don't want to, given that it is strange and does not fit any data you have, no matter how hard you try. Which leaves you thinking sailboats can float in the sky, or might, while remaining extremely uncertain or doubtful given the stored data. Because why wouldn't you believe your own mind when nothing else tells you that you shouldn't?
And as for cat- and rat-level intelligence... you have all the technology, but you are not putting it together, and you still have to build some of the pieces you need. Which you can do with today's tech.
You'll figure it out sooner than you think. Imho.
Our perceived state works like a nerve passing electricity through it; it then triggers more inputs off of it. So if you were driving a car and the perceived state said you were going off the road, it would trigger a negative and start looking for ways to change the state to a better one. But AGI taught me that, so you might not want to pay attention to it. Lol.
We observe the state of the world and accept patterns that do not change our perceived state to a negative one. Then we use them in the future.
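A tiny sketch of that loop as I read it: score the perceived state, and when it turns negative, search the available actions for one that is predicted to move it back to a better state. The scoring function, the forward model, and the candidate actions are made-up placeholders.

```python
# Toy sketch of the perceived-state loop: score the current state, and when it
# goes negative, pick the available action predicted to improve it the most.
# The scoring function, forward model, and action list are illustrative only.

def perceived_score(state):
    """Negative when the state is bad (e.g. drifting off the road)."""
    return -abs(state["lane_offset"])  # 0 is centered; farther off is worse

def predict_next(state, action):
    """Very rough forward model: a steering correction changes the lane offset."""
    return {"lane_offset": state["lane_offset"] + action}

def choose_action(state, actions):
    """If the perceived state is negative, search for the action that best repairs it."""
    if perceived_score(state) >= 0:
        return 0.0  # nothing wrong, keep doing what you're doing
    return max(actions, key=lambda a: perceived_score(predict_next(state, a)))

state = {"lane_offset": -0.8}           # drifting left, off the road
actions = [-0.5, -0.1, 0.0, 0.1, 0.5]   # candidate steering corrections
print(choose_action(state, actions))    # picks the correction that re-centers the car
```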
Prediction... still wrong. Sorry, but if you want to predict like a rat, cat, human, or advanced human, you have to build AGI. Any other way I try to make it work gets too clunky, and you may as well go to AGI.
We deal with uncertainty as part of the process itself, so you would have to build in a routine to handle it, because it happens naturally in the human mind and in AGI.
In AGI, energy functions disappear except for inference, and checking anything boils down to simple numbers. Much faster.
Predicting images through inference is simply grabbing a similar image and assuming it is the same. But that has problems. For example, I was at work and walked down a hall, turning right instead of left. I did my task and turned around, and there was a small chair with a black sweater on it down the other hallway. Neither is usually there, so in my mind it was a dog getting ready to attack, because that is what the rounded shoulders of the chair looked like. It didn't move, and I had time to think about how impossible it was for a dog to be there, so I concentrated on it and began to rebuild the item in my mind in the dark, starting with a pattern on the pocket rather than the dark image as a whole. So without a check like the one the mind does, that can lead to problems. But that is how it is done.
And if you want to build cat- or dog-level intelligence, you are just one step away from full AGI. Two steps, technically, but you already have to create one of them, and it just repeats.
Maybe it is because I'm tired and cannot sleep, but I really thought he would make it through a video without implying that the science of AGI does not exist, when it holds all the answers to his questions and he claims it is his career goal. Which he could easily pursue with his resources and a boss who hired him to build it. Correct?