Thank you for this interview. I always look forward to seeing Geoffrey Hinton. He is a living genius, and it's always a pleasure to listen to his British accent.
The Algorithm brought me here and it is the best thing that happened to me today. What a gem! Keep it up, Mister!
He is incredibly smart .. I wonder why he isn't writing a book to share his vast knowledge with the public ..
Recently I read a book by Jeff Hawkins, and I would be interested in reading a book written by Hinton.
Thank you for this content! Probably the best intuitive explanation of why transformers are so good! Geoff is the G.O.A.T.! I love how he shares his knowledge
Very little to share other than blatant obfuscation and meaningless blather.
Sad to see such great content not getting enough exposure
😶😚
0:37 I really wish Tomáš Mikolov had been there, winning the Turing Award with them.. we all do 🇨🇿🇸🇰
Thanks for letting the great man speak and not interrupting him too much, like other dunces do.
Great discussion, thanks! He says "Reinforcement learning is like an icing on the cake. You don't want to learn how the world works using reinforcement signals. Most of the learning is going to be unsupervised learning". I can't help but think about the question "What's at the intersection of unsupervised and reinforcement learning?".
ChatGPT!
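That one-word answer points at a real recipe: systems like ChatGPT are first pretrained with unsupervised next-token prediction and only afterwards nudged with reinforcement learning from human feedback, which matches Hinton's "icing on the cake" framing. Below is a toy sketch of that two-phase split; the one-logit-per-token "model", the random corpus, and the reward function are hypothetical stand-ins, not anyone's actual training code.

```python
# Toy sketch: unsupervised pretraining followed by RL fine-tuning.
# Everything here (the tiny model, corpus, reward) is a made-up stand-in.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 5
theta = rng.normal(size=VOCAB)            # "model": one logit per token

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Phase 1: unsupervised learning -- fit the token statistics of raw text
# by minimizing cross-entropy (no rewards, no labels beyond the text).
corpus = rng.integers(0, VOCAB, size=1000)    # stand-in for unlabeled text
for tok in corpus:
    p = softmax(theta)
    grad = p.copy(); grad[tok] -= 1.0         # d(cross-entropy)/d(theta)
    theta -= 0.01 * grad

# Phase 2: reinforcement learning -- a REINFORCE-style update on a scalar
# reward, nudging the already-pretrained model (the "icing on the cake").
def reward(tok):                              # hypothetical preference signal
    return 1.0 if tok == 3 else 0.0

for _ in range(500):
    p = softmax(theta)
    tok = rng.choice(VOCAB, p=p)              # sample an "utterance"
    grad_logp = -p; grad_logp[tok] += 1.0     # d(log p[tok])/d(theta)
    theta += 0.05 * reward(tok) * grad_logp   # ascend expected reward

print(softmax(theta))  # probability mass shifts toward the rewarded token
```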
Interesting to look back on this video two years later, after the release of new models like GPT-4
Wow, the density of information in this video requires me to watch it multiple times haha
Quality content, thanks for sharing
Thanks for sharing the video.
You are very good, thank you for providing such an interview
Thx for the content!
Such great content
Very nice, enjoyed it, thank you!
Sets don't exist; they are a navigational-relation metaphor. Plus, discrete systems cannot scale. Hence, wrong direction.
It was a really nice conversation!!!!
Nice episode
This great man helped the world start a new industrial revolution.
Super, thanks👏👏
Hinton JUST CANNOT talk about combined models. Bummer as I'd love to hear his impressions but somewhat understandable given the amount of NDAs he most likely signed over the past decade. But thanks Craig for trying at least 3 times to ask him about it haha :D
By combined models do you mean things like putting together SimCLR and capsules and transformers? He is talking about that now: arxiv.org/abs/2102.12627 if that is what you mean.
@runvnc208 Isn't that just theoretical work? He didn't actually describe a "combined model" in that paper.
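The paper linked above is indeed a conceptual proposal rather than a trained system, but the SimCLR ingredient such combinations lean on is concrete. As a rough illustration (not code from the paper), here is a minimal NumPy sketch of a SimCLR-style contrastive (NT-Xent) loss, which pulls together the embeddings of two augmented views of the same image while pushing apart all other pairs:

```python
# Minimal sketch of a SimCLR-style contrastive loss (NT-Xent).
# Illustrative only; not taken from arxiv.org/abs/2102.12627.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau                                # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))                       # stand-in embeddings
print(nt_xent(views + 0.1 * rng.normal(size=views.shape), views))
```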
Keep it up!
I don't do much AI, so I have to infer some terms, but in regards to back relaxation and greedy bottom-up: if that means the first uses small parts to build larger ones and the second grabs the largest object closest to it, I can tell you the mind uses both. It needs the first to build anything and the second for speed. (A toy sketch of the greedy reading follows this comment.)
To understand this, we can build a robot. We put in cameras for eyes, running at a frame rate of 30 fps for example purposes. You would then have to build up every object, then the concepts, then the movements, then the action concepts, etc., all from scratch at a rate of 30 fps.
I ain't that smart lol.
So basically it grabs the most similar thing, or works through inference, to build it all. It uses both at the same time, which is why in the end both will work the same as long as they have the needed data. We can do this through hardware. And so does the mind.
And all children, given a "normal" functioning mind, can do what Newton did. You just have to have the same associations and goals. Though life circumstances, inputs, etc., can prevent it.
If you can teach it, a "normal" mind can learn it. Otherwise you couldn't teach it. Just saying. ;-)
Most of the first capsule talk was correct. AI is a collection of tasks to do what a mind does naturally, like creating shortcuts. So that is on the right track.
I still think you should all go for AGI, since that, in essence, is what you are trying to achieve. Both are needed, but what you really want is AGI with no ability to choose outcomes you do not want. So you create a task and attach various levels of positive or negative associations to it, to shape whatever you want to do based on the core. You need the core without running a bunch of tasks that slow down the computing process. And the mind does it best, since it's a piece of meat that can do all we see.
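Taking the comment's loose reading of "greedy bottom-up" (grab the nearest parts and merge them into a larger whole), here is a toy sketch of that idea as agglomerative grouping; the random 2D "parts" and the midpoint merge rule are illustrative assumptions, not anything from the interview:

```python
# Toy illustration of greedy bottom-up grouping: repeatedly merge the
# two closest parts into a larger whole. Purely illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)
parts = [np.array(p, dtype=float) for p in rng.normal(size=(6, 2))]

while len(parts) > 1:
    # Greedy step: pick the closest pair of current parts...
    i, j = min(itertools.combinations(range(len(parts)), 2),
               key=lambda ij: np.linalg.norm(parts[ij[0]] - parts[ij[1]]))
    # ...and merge them into one larger "object" (their midpoint here).
    merged = (parts[i] + parts[j]) / 2
    parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    print(f"merged pair -> {len(parts)} parts remain")
```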
Incredible insights
This video should have millions of views and thousands of likes. Too bad it's in English 👍👏
What language would you suggest? I can dub it and re-upload.
I speak Spanish, but I'm afraid the mobile device I use doesn't have this feature (dubbing); only the subtitles, and they are not good. Anyway, thanks for the recommendation, be well 👍🙂
If an AI is programmed beforehand, what makes you expect a programmed machine can evolve past its programming? You aren't even sure whether human evolution is possible, much less machine evolution.
useful
Will AI ever translate what North Americans speak into English?
Great interviewee, but not so great an interviewer. I think you should learn how to structure an interview better.
And he just quit today
🙏
I didn't know he was a failed carpenter.
Does Geoff Hinton believe in God???
This is really stupid and it shows just how easily Satan can use people - even the smart ones...