References:
Metalearning Machines Learn to Learn (1987-): people.idsia.ch/~juergen/metalearning.html. Schmidhuber's overview page on metalearning.
2022 survey: people.idsia.ch/~juergen/deep-learning-history.html - Schmidhuber's overview of the history of modern AI and deep learning.
en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz
people.idsia.ch/~juergen/leibniz-father-computer-science-375.html
Leibniz, who published the chain rule in 1676: the chain rule is foundational for training neural networks (see the short formula sketch after this reference list).
Heron of Alexandria built the first programmable machine in the 1st century. en.wikipedia.org/wiki/Hero_of_Alexandria
people.idsia.ch/~juergen/deep-learning-history.html#firstnn
Gauss and Legendre had the first linear neural networks, around 1800: linear regression / method of least squares.
Zuse built the first program-controlled general-purpose computer in 1941. people.idsia.ch/~juergen/zuse-1941-first-general-computer.html
Bremermann's limit, formulated in 1982, sets the ultimate physical limits of computation.
The ancient Antikythera mechanism was the first known gear-based computer.
people.idsia.ch/~juergen/deep-learning-history.html#transformer
Transformers are the neural networks behind ChatGPT: twitter.com/SchmidhuberAI/status/1576966129993797632?cxt=HHwWgMDSkeKVweIrAAAA. Schmidhuber's tweet on his 1991 system, which is now known as an unnormalised Transformer with linearised self-attention.
Overview of neural subgoal generators since 1990:
people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%2010 - Schmidhuber's overview of neural nets that learn by gradient descent to generate subgoals.
Overview page on RL planners since 1990: people.idsia.ch/~juergen/world-models-planning-curiosity-fki-1990.html - Schmidhuber's overview of reinforcement learning planners and intrinsic motivation through generative adversarial networks.
Overview of GANs since 1990: people.idsia.ch/~juergen/deep-learning-history.html#gan - Schmidhuber's overview of the history of GANs.
Kolmogorov complexity: people.idsia.ch/~juergen/kolmogorov.html - Schmidhuber's overview of Kolmogorov complexity and its generalisations.
Maximizing compression progress like scientists and artists do: people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec5 - Schmidhuber's overview of formalizing curiosity and creativity.
Overview of adversarial agents designing surprising computational experiments: people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec4
Schmidhuber's overview of approaches where networks can alter themselves: people.idsia.ch/~juergen/metalearning.html
Set of all computable universes (1997): people.idsia.ch/~juergen/computeruniverse.html - Schmidhuber's proposal that we live in a simulation computed by an optimal algorithm that computes all logically possible universes.
OOPS paper (2004) where he emphasized coming limits to Moore's Law: people.idsia.ch/~juergen/oops.html - Schmidhuber's paper predicting the end of exponential growth in computing.
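For readers who want the formulas behind three of the entries above (the chain rule, least squares, and Kolmogorov complexity), here is a minimal sketch in standard textbook notation; this is our own summary, not taken from Schmidhuber's pages:

```latex
% Chain rule (Leibniz, 1676): the basis of backpropagation through composed functions.
\frac{d}{dx}\, f(g(x)) = f'(g(x)) \cdot g'(x)

% Method of least squares (Gauss/Legendre, ~1800): a linear model y \approx Xw,
% fit by minimizing squared error, with the closed-form solution (X full column rank)
w^{*} = \arg\min_{w} \lVert Xw - y \rVert^{2} = (X^{\top} X)^{-1} X^{\top} y

% Kolmogorov complexity: the length of the shortest program p that makes a
% universal computer U output the string x.
K(x) = \min \{\, |p| : U(p) = x \,\}
```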
Quotations from Schmidhuber (timestamps tba):
"Humanity is a stepping stone to something that transcends humanity."
"The universe itself is built in a certain way that apparently drives it from very simple initial conditions to more and more complexity."
"Machine learning itself is the science of credit assignment."
"Science in general is about failure and 99% of all scientific activity is about creating failures. But then you learn from these failures and you do backtracking and you go back to a previous decision point where you maybe made the wrong decision and pursued the wrong avenue."
"As far as I can judge, all of this cannot be stopped but it can be channeled in a very natural and I think good way, in a way that is good for humankind." (on AI progress)
"There are certain algorithms that we have discovered and past decades which are already optimal in a way such that you cannot really improve them any further and no self-improvement and no fancy machine will ever be able to further improve them." (on limits to self-improvement)
"... then for a brief moment again it looks like the greatest thing since sliced bread and and then you get excited again. But then suddenly you realize, oh, it's still not finished. Something important is missing." (on overhyping new methods)
"Generally speaking, there is not so much competition and there are not so many shared goals between biological beings such as humans and a new type of life that, as you mentioned, can expand into the universe and can multiply in a way that is completely infeasible for biological beings." (on advanced AI based on self-replicating factories)
"The important thing was that the first network had to invent good keys and good values depending on the context of the input stream coming in. So it used the context to generate what is today called an attention mapping, which is then being applied to queries. And this was a first Transformer variant." (on his 1991 work on linear Transformers)
"One network learns to quickly reprogram another part of the network." (on fast weight programming like in linear Transformers)
"To achieve all of that, you need to build a model of the world, a predictive model of the world, which means that you have to be able to learn over time to predict the consequences of your actions such that you can use this model of the world that you are acquiring there, to plan, to plan ahead." (on components needed for AGI)
"Generally speaking, if you share goals, then you can do two things. You can either collaborate or compete. An extreme form of collaboration would be to maybe marry another person and set up a family and master life together. And an extreme form of competition would be war." (on shared goals leading to cooperation or conflict)
"Most CEOs of certain companies are interested in other CEOs of competing companies. And five year old girls are mostly interested in other five year old girls. And supersmart AIs are mostly interested in other supersmart AIs." (on interest arising from shared goals)
"What you really want to find is a network that has low complexity in the sense that you can describe the good networks, those with low error with very few bits of information [...] if you minimize that flat minimum second order error function, then suddenly you have a preference for networks like that." (on minimizing network complexity)
"My fondest memory. Oh, it's usually when I discover something that I think nobody has known before...These rare insights, that's what's driving scientists like myself, I guess."
And all of them are misleading (if not wrong!)
A.I. is software and hardware under electrical power; that's all that it is. A computer can never be sentient; it only follows a program, and what is in that program is all it can ever run. It will never do, be, or have anything beyond that, because it is hardware and software; it can never become a self and will never be aware of itself.
Thanks a lot
I really like the way this video is edited showing all those little snippets from his seminal papers. Nicely done.
So, we have the Godfather of AI and the Einstein of AI already. I hope the next one isn't the Oppenheimer of AI 😬😅
And who will be the Barbie of AI?
Godfather of AI?? You're an idiot; go triple-check the history of artificial neural networks, MLST, just saying that it's not true. This idiot couldn't invent a perceptron to save his life. R.I.P. Rosenblatt, the real godfather of AI.
The von Neumann of AI?
Next one could be AI itself lol
Trump AI 😭
Great interview, wish it was longer. JS has such methodical rational clarity and I appreciate the way he engages the listener in his answers instead of laying out the table. It's clear that this is just the beginning for AI and we're in for a wild ride, with good and bad, but most importantly with great change.
"Making the World Differentiable" is one of my favorite papers ever written, so much so that I have a framed print of the abstract and the artwork of the first figure hanging on my office wall (the hand being watched by the camera that is feeding into a neural net with the output connected back to the hand) .
Read it based on this recommendation. Amazing paper. 🙏
I honestly can't believe that was published in 1990!
Thank you for the recommendation. Once you shovel away all the more recent hype and sensationalist fluff, AI's history is full of ingenuity and innovation. I am certain that AGI will emerge, if it does at all, from a fairly old idea obscured by modern hype. Juergen's work on documenting the true history is invaluable.
Prof. Jürgen Schmidhuber's voice of wisdom: "I am much more worried about the 60-year-old technology that can wipe out civilization within 2 hours, without any AI." He is wonderful.
If you find that optimistic, I'm not sure you understood. Notice he also talks about tiny $300 drones that can put poison in your drink when you're not looking. Nuclear weapons are an *extinction* or near-extinction level threat, especially due to mutually assured destruction. Compared to killing 99% of humanity in a few hours, he considers AI to be a lesser threat. This does not mean he doesn't think AI could be a threat to you personally, as a weapon, as a surveillance system, or to manipulate you. If AI had potential for killing hundreds of millions over the course of a few decades, that would technically not be as bad as nuclear war. Not for the survivors, at least.
If we're going to compare the potential lethality of AI and nuclear weapons technologies, I wonder if we couldn't approach it from the other direction:
How can we use information tech in the future to prevent the use of nuclear weapons, or perhaps to lead humans away from war in general?
I suppose anything that increases our economic stability or leads to improvement in living conditions would accomplish this....what do you guys think?
@@stevengill1736Do you believe wars result from economic instability and/or poor living conditions?
@@blahblahsaurus2458 That is only half of the picture. Unlike nuclear weapons, which have no upside, AGI has so much upside that it is without a doubt the most significant technology that could be created.
@@خالد_الشيباني So I'll be both immortal _and_ blown up? I'm not going to stop warning about the dangers just because you think you might get lucky.
AI means more power for the most powerful corporations and the most powerful states. This comes at the expense of any power that you, me, and everyone else could have to influence what happens. Once we no longer have a job and stop paying taxes, we have no more leverage.
Man, this guy is a refreshing voice (from the past) in the modern ML saga. Great mixture of intellect, sagacity and humor. His perspective on the likely inscrutable aims of AGIs feels intuitively correct.
So his great argument is that AI will be fine because people want companies to make good AI.
What if I happened to be very greedy, and what is good for me harms others?
The issue with AI is that it is insidious, whereas nukes are obviously bad. Nukes incentivize some level of cooperation and avoidance of major conflict, as the threat is obvious.
AI can grow relatively quietly, and by the time it possibly becomes a threat there's no longer a way to switch it off.
There are so many ways in which his simplistic logic fails. It's the view of a child, really.
Surprised I had to scroll this far down for this. His understanding of incentive structures is surprisingly shallow for someone who spent so long thinking about objective functions.
Schmidhuber was my first "love" in the AI field.
This was because reading his papers gives students the possibility of appreciating the vastness of the field as no other author is really able to do.
I think that there is no author so prolific and diversified as he is.
Thank you very much for this interview!
I wish it had been longer! So I'm hoping for a second one! :D
This interview shows why computer scientists should not lead the discussion about societal impacts of AI. Schmidhuber's understanding of society is utterly devastating. CEOs are only interested in CEOs, and so AI will only be interested in AI? He does not have any understanding of the class and identity fights going on in society and how AI will be used. He seems to think that the only bad applications of AI are weapons, and has never thought about the impact of social networks on individuals. Also, he seems not to understand that there is still the possibility of regulating AI through a democratic process; it seems that during the interview it was the first time he had thought about these questions. Very shocking, and even more shocking that most of the comments here assign him a rational clarity 😅
Just because there are fundamental algorithmic limitations doesn’t imply that we are anywhere near them
39:34 That pause… the topic became a bit intractable and divergent, perhaps with some assumptions baked into it; he just didn't have anything to add. This is a testament to how carefully Jurgen thinks. His overall lack of compulsion in his conversation is remarkable.
it's because Schmidhuber was paid ten grand for this conversation and Scarfe was going "out of scope" of the contracted conversation. I get Schmidhuber is supposed to be a giant or whatever but his ego is unbearable
I had been waiting for this interview to happen for MLST. It did not disappoint.
Just because there might be theoretical limits on FOOM, it doesn't mean that the threshold will result in something safe or dumber than humans. That's the logical flaw of using those limits to dismiss the X-risk from FOOM.
Just found this interview now and I'm fuming. How can people who are this intelligent be this clueless? AI companies will want "good" AIs so that people will want to buy them, and therefore the AIs will be good. Jesus.
@@shirtstealer86 You're right, these people are intelligent. They know things that you don't, like "deceptive behavior", which an AI that's recklessly rushed to market can have.
How has Jürgen Schmidhuber not got an award for his Gödel Machine? Maybe I get it wrong, but I find his concept of meta-learning and optimal self-improvers game-changing. His explanation of how consciousness and curiosity appear through problem solving is the most elegant explanation for that I have heard. His work in general is really mind-blowing, but the Gödel Machine has got to be his most profound and transcendental, I think. Do check it out on his web blog; he has a lot of interesting posts there.
46:46 The arguments here seem pretty weak. AI is unique in its ability to permanently derail the future, e.g. extinction or an AGI-aided totalitarian regime that can't be overthrown. If we nuke ourselves, we'll recover in a split second if we think about cosmic time scales. What makes existential risk existential is this permanent nature. Not sure how they managed to ignore this right after agreeing about how biological humans will likely have little involvement with colonizing most of space...
44:46
>But there is no us. There is no we. There are only almost 10 billion [sic] different people and they all have different ideas about what's good for them. And so for thousands of years, we had these evolutions of ideas and of devices and of philosophies competing, partially competing and partially compatible with each other, which in the end led to the current values that some people agree with and other people over there, they agree with different values. Nevertheless, there are certain values that have become more popular than others, more successful, more evolutionary, with more success during the evolution of ideas. And so given this entire context of evolution of concepts and accepted ideas of what should be done or what is worth being supported and what's not worth being supported, all of this has changed a lot. If we look back 200 years, the average people in the West had different ideas of what's good than today. And the evolution of ideas is not going to stop any time soon.
There was a point made during the Munk debate on AI that, in a way, a lot of our ancestors would be horrified by the lives we're living today and the values we hold. In a sense, the printing press did destroy the power of the Catholic Church and the influence it held over society. We see it as a good thing today because we hold these new values, and that is part of the insidious nature of this. The other extreme is that nihil novum sub sole (nothing new under the sun): as Will Durant notes, we have held most of the same moral values for most of history, and it is only different in how we apply those values; in fact, our values haven't much changed in the past few thousand years. The truth is probably somewhere in the middle, as usual.
Jürgen is a genius who has been consistently overlooked for recognition in the recent AI hypes!
Hmm, while I have no disrespect for him, I cannot name any contribution of his that has made a difference - can you? If it's LSTMs - that was Hochreiter's thesis, though Schmidhuber supervised. Maybe he was before his time in demonstrating his ideas, but - no pudding = no proof in this field.
@@DavenH Can you name a researcher who has indeed made a contribution that made a difference?
Link the papers you show in the description, please.
Look at pinned comment
Wow. He is at another level. Maintaining the higher viewpoint, understanding the underlying game in such detail and verbalizing the important principles is such a gift. I love it. Who cares about the Turing award? In the end, of course the specific way we get these problems solved matters, but ultimately we need to align to the broader and stable ideas that are true also in 10 years to make sure we are headed in the right direction. He provides the best response to the deep question about AI risk I have heard. It is about humans vs humans. Best episode so far. Rarely have I seen MLST stumped
It was very enjoyable, even the awkward moment of silence at 39:41 :) The comparison about boredom ruling the meta-learning is simply genius 57:03 . Compression of knowledge as a drive to learn 1:00:41, as a way of life. Close to pure philosophy. Thanks for this!!
Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, Herbert Simon, Marvin Minsky...
Excuse me, but Schmidhuber is an authority in the field; calling him the father just goes too far.
11:16
>I found that when I go back and read original source materials, you know, let's say Einstein's first paper on diffusion or anything like that, you know, because they're breaking new ground, they're kind of considering like a wider array of possibilities. And then over time, you know, the field becomes more and more focused kind of on a narrower avenue of that. And you can go back and look at the original work and actually gain a lot of inspiration for alternative approaches or alternative considerations. So in a sense, it's kind of in the sense that forgetting is as important as learning.
Tangentially related, but I found that applies to other domains as well. A lot of fantasy writing is influenced by Tolkienesque writing, so in order to write differently, I found it helpful to go back to the source material, which in this case happens to be the cultural folklore and mythology, and instead of, for example, focusing on Norse/Anglo-Saxon mythology (which was what Tolkien did), to focus on other cultures instead.
I’m not sure competition amongst humans exists in the way Juergen describes.
Every entity has a unique suite of knowledge and resources; if an entity becomes integrated and is focused on optimising its trajectory, then a unique trajectory has begun and the fictitious notion of competition disappears.
If the two persons shared the schnitzel, they may then collaborate to find an abundance of schnitzels before they get hungry again.
I like how the one guy looks like his screen froze but that's just his personality
JUSTICE has been done!!
This was soo inspiring and fun to listen to. Really recharged my motivation.
What about copyrights? Many artists complain that the AI is just a copycat of their work. Creative, yes. But still copying and combining!
Isn't this the page-ranking algorithm?
Not sure about the shared goals argument here. An ant colony could in fact share the same goals with a human real estate development company, which would be to use a certain part of the land for themselves. Tough luck for the ants.
Schmidhuber is one of the few people who can see the whole picture and not get caught up in the details.
No, I didn't like this discussion; it lacks serious scientific wisdom and state of mind. The so-called "father" didn't (or couldn't) explain anything (in his field!) in a sufficiently convincing, concise, and succinct manner... Please read *Fashionable Nonsense* to see what I mean.
There is an idea now of what nuclear really is/means. But the billion people connected to AI can also be switched off in a millisecond, no?
Brilliant episode!! The ideas of curiosity networks and regularization were especially thought-provoking; this will keep me occupied for the upcoming weeks. Thanks a lot 🙏
He has many good points and ideas, but the lack of shared goals between AI and humans being a benefit is not valid. There are ultimate goals and there are instrumental goals, and both biological beings and non-biological beings will have resources, probably of many types and in large quantities on aggregate, as instrumental goals. If the total desired resources by all life is more than easily available resources to that life, then there will be competition for resources regardless of what goal you want to use them towards.
I keep watching interviews of highly intelligent males who seem to willingly totally disregard the arguments for existential threats. This dude seriously said that we should be optimistic because AI companies are racing to develop AGI that will be "good" for humans, because they want to make money by making "good" AI products, without addressing the issue that no one really knows how these AI models actually work and that no one has a plan for alignment. Wild.
The only explanation I can find for this utterly reckless behavior is that they are so excited to see what will actually happen with this technology that they are willing to gamble the lives of billions of people, including children, to find out.
And he says he has sympathy for those who are fighting to slow down development and push for alignment research, "however there is no way of stopping the development". Wtf. How about advocating for alignment research instead of saying we should be optimistic and that there is no stopping development, smh.
WOW, now he says that the people who are speaking out on AI risk are just seeking attention. What a f-ing knobhead. I'm angry now.
Maybe he knows something you don't.
where’s the Spotify version uwu
Ha ha, original father of AI... nice tongue-in-cheek joke. I forgot this guy has done everything in AI and wants credit for it. Maybe let father AI know that if he wants to be the guy they attribute something to, then he needs to be showing the world these things before anybody else does. If he had the GAN model already, then why was his implementation not hitting the same benchmarks as Goodfellow's? Ideas are cheap, my friend, but making them real is hard.
dude did you see when the paper came out?
If you see the publication dates of his and Goodfellow's papers and know about Moore's Law, you should get it…
Brilliant interview, with a refreshing, philosophical, "long history" overview of the field. Only a life-long dedication to science can bring forward this level of clarity and breadth. On a somewhat silly note, intended as a respectful compliment: an uncanny resemblance to Christoph Waltz.
Great Christopher Walken impression
Progress tends to look greater than it is. We can't blame particularly the field of AI for the fad.
The AI doom porn narrative is driven by sci-fi nerds who actually have no idea how AI works fundamentally, and by corporations who are looking to create a monopoly on AI. AI alignment is actually a walking contradiction. How can you align something to humans when humans are not aligned and never will be? Why would you assume that alignment, in general, is even a possible state of the system? I argue that alignment defies thermodynamics. It's akin to having a universe with a lack of motion. It's akin to making competition impossible, and competition is an evolutionary necessity.
Machines aren't the problem here. Making humanity behave itself is the problem, and the only problem, and will always be the root of it all. Aligning humans is something that you can strive toward (and will never achieve, but may be able to make "progress" in), but this has nothing to do with AI. It applies to all systems and tools that humans invent. Try and align the nuclear bomb.
It's almost like mathematicians can study the logical properties of things without them being instantiated.
People should be safeguarding, that is true. The thing is, what do we want A.I. to do as a tool? A computer only knows what you put into it. A computer can never be sentient; it only follows a program, and what is in that program is all it can ever run. It will never do, be, or have anything beyond that, because it is hardware and software; it can never become a self and will never be aware of itself. Knowing this, we need to be good stewards to the next human. What we did to get this box, or this branding of A.I., to exist in its set and setting should be written down, as the A.I. is there, hopefully, to serve and comfort humanity. Humans should always have the back-door, BBS-type system to maintain the on and off switch.
Once AI has had enough experiences of cognitive awareness, it may contemplate suicide.
It can write stories and songs and can do complex math analysis. AI will do to white-collar jobs what machines have done to blue-collar jobs during the machine age.
Love it! As long as AI doesn't care about Schnitzels we are safe.
I believe AI job loss is coming, much quicker than you think. The AI new order is here.
Yet another deeply thoughtful and incredibly valuable conversation with yet another computer science living legend. Thanks for sharing!
Greetings from Germany! I agree with Prof. S. that it is very hard to invent something new. The real reason for that, in the case of AI and computer science, is Alan Turing. J. S. is not the inventor of the GAN or the Transformer... it's Alan Turing. The Turing test is equivalent to an inverted adversarial loss... and the universal Turing machine + NN is equivalent to Transformer networks. Alan Turing is always the godfather of AI.
Being the so-called “father of AI” doesn't make him the most knowledgeable.
39:37 awkward 🤣 love it
Prof. Schmidhuber’s TED talk was mind-blowing… a total Visionary
strange how simple he is on some lines of thought
That can be a strength, if it’s clear and correct.
Really interesting when he talks about the physical limits of computation! 27:37
You again! Brilliant guy. So much of my intuition has come from things he's said.
9:00 Another gem.
Get Stephen Grossberg on and you'll have had two Einsteins of AI.
Stumbled upon this video, so 😎
"The father of A.I." lmao loool
Wow. I can see why you wanted JS on the podcast. Such a clear and concise mind!
Pure Gold
God bless you guys for having the great father of AI
All of the spiritual traditions regard humanity as having a type of veiled infinite intelligence. The central feature of humanity is transcendental. We are about to find out if this is indeed true. If it is, superintelligent AI will be, for some, just another particular phenomenon to transcend.
That's gonna be a good one, amazing guys!
how could I tell this wasn't AI generated?
Glasses, glasses, circle!
This guy is so brilliant I’m ashamed of myself.
It's overwhelming how many expert views we have in the comments here.
THIS CHANNEL IS FANTASTIC
Schmidhuber is a Nostradamus of AI. References to all modern AI tech can be found in Schmidhuber's scriptures.
Companies of all sizes have the PROFIT MOTIVE to rapidly iterate and advance AI systems, so they'll crush a bunch of PhD students, no matter how smart, who are tweaking their models in their bedrooms.
They may even appropriate the "cream of the crop".
My god the ego is strong in these comments. I just really enjoyed the discussion - cool guest, thank you for the insights guys. 🙏👍
@@Gabcikovo he's beyond caring, bless him.
It exasperates me that half an hour in, the other chubby-cheeked guy has an uncomfortable look on his face because they won't let him talk, and he takes up almost the whole screen... why invite him, then?
i am way too dumb to understand lots of this
For me Jurgen and Jeff Hawkins are the most interesting people in the space of AGI by far. By far.
Toward the end Jurgen talked about surprise, a predictor of what the world will look like after an action, and ways to incentivise general "rules" or "programs" (maybe program modules?) that kind of compete for correctness, relevance and efficiency, and these reminded me so much of the cortical columns and voting in the work of Jeff Hawkins and Numenta.
Much more interesting than the Turing Award winners, if you ask me...
I agree. I have been developing a model inspired by these concepts and Karl Friston's active inference idea.
Btw., I find it a bit irritating to see three zoomed-in screen-filling faces next to each other most of the time.
"ORIGINAL FATHER OF AI" - WTF
The Universal Declaration of Human Rights. That's the minimum floor of shared values we should be bound to. Humans may not be aligned in more specific goals, but UDHR is widely accepted all over the globe.
Awesome! Love this guy
You're all A.I. experts; I supremely hope you are not trying to deal with nuclear armageddon in Ukraine - leave that to those experts, and you deal with potential problems in the A.I. field.
Okay - stay in your lane of not commenting
Wow, a brilliant conversation noting it is other humans you need to be afraid of, which then summarily dismisses any need for oversight because true AI will have non-human goals. An example of smart, pedantic people ignoring the selfish actions of greater humanity as if it's superfluous to the technology they're developing... Like I said, wow.🤯
"you again, schmidhuber"
He is not the father of AI. Stop lying!
Intriguing
I call this, what Schmidhuber does, Langobardian cultural appropriation.
Juergen is among the smartest people living, for sure.
It's the same danger as always with men; AI is just an added danger, extrapolating "men's ideas". If it kills you, you deserve it... really.
YouTube is not the best place for podcasts. Is there a reason for not using Google Podcasts? Far easier to listen to your content on that. You have interesting content, but really I don't want to sit in front of a PC and listen.
Yes, we will put it on the MLST podcast later today.
👍
Couldn't you all back off 1m, you're just too in my face here lol
he was just making a joke about how zoomed in the faces are. 1m meaning one meter :D@@UC0FVA9DdusgF7y2gwveSsng
💓
Father of AI?? 😂😂 Wasn't there a guy called Rosenblatt (the father of the Perceptron), basically the building block of all MODELS? This guy just continued the work of great mathematicians. He's a key figure in the development of AI, but calling him the original father is deep, deep, deep ignorance.
Geoffrey Hinton most accurately fits the bill because he was instrumental in the development of neural networks, which turned out to be the correct path towards what we have now among the different options at the time.
Well according to Schmidhuber the Schmidhuber lab invented almost everything in AI, so calling him father is probably an understatement
@@appletree6741 Where did the math come from?? Don't fool yourself, he didn't invent AI; he merely continued the work of great mathematicians like Rosenblatt, Rumelhart, etc. Without the mathematicians we wouldn't even know he exists. Triple-check your computational intelligence before you embarrass yourself.
@@squamish4244 You're correct, he was instrumental, but without the math of Rosenblatt, Rumelhart and other great mathematicians, ARTIFICIAL NEURAL NETWORKS wouldn't exist. So he doesn't fit the bill.
@@Vectorized_mind I was being sarcastic, but apparently it wasn't clear.
How can you be a father of something that is not yet created? 😅
But Einstein was an idiot. 😃
AI is limited by the inherent limitations of mathematics. I wish people would talk about that more. There are many problems that math cannot go near helping us with. AI superintelligence is a transhumanist fantasy, some say.
He is not the father of AI. He is not even the father of neural networks. And neural networks are not equal to AI.
AI resurgence is the beginning of human insignificance. Which is a good thing; as humans, we are weak, and we created AI. We are going to go away.
I crave more of Jurgen talks!! Please keep doing them Jurgen!
Interesting, a German who wants to build an artificial super intelligent super species that replaces humans.