This was really a great video lecture; however, because I've experimented a lot with unsupervised neural network learning (coded from scratch, not with pre-built libraries) I'm a little familiar with the topic (and the challenge) and I don't completely agree.
I haven't really seen a reason for WHY real artificial intelligence should only be possible by "growing" a brain. If we want to artificially simulate a primitive brain, a basic topology is already given by the types and organisation of grouped "sensory" inputs (human analogies: proprioception, retina cells, vestibular cells...) and output analogies for every type of input (similar to: alpha motor neurons, imagination of auditory or visual information...). Essential ingredients for individual artificial neurons may be:
- bidirectional information flow (afferent+efferent / top-down+bottom-up in parallel), a bit like seen in auto-encoders (but not necessarily as one-dimensional)
- "reward" and "punishment" rules
- memory values, like seen in LSTM cells
- "predictions" / "expectations"
- an error value (based on the difference between the prediction and the actual values from the sum of the weighted inputs)
- continuous synaptic strength adaptations
- synaptic "pruning" (of connections with very low strength values) and plasticity (trying out new connections)
- non-linear activation functions
- one big advantage of biological computing: every neuron runs as its own "task", i.e. we end up with parallel computing of BILLIONS of tasks, while electronic computers usually can only handle a quite limited number of tasks simultaneously. Perceptron-type networks usually have wave-like separate forward information flow and backpropagation steps, so it's not like all neurons are busy at the same time; information from lower layers is computed before it's handed over to higher layers. Biology has a huge advantage here, because each neuron autonomously runs its own little "algorithm" instead of cycling through one big program for the entire network; still, I believe this is a solvable problem.
Did I forget anything? A decompressed genetic origin of the basic topology may save a lot of time and energy, but I don't see why it should be _necessary_. There is no shortcut... so what? Do we really need a shortcut, or will computers be fast enough one day to do the job even without one?
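As a rough illustration of a few of the ingredients in the list above (a prediction, an error value, continuous synaptic strength adaptation, and pruning of weak connections), here is a minimal numpy sketch. Everything in it (the name `ToyNeuron`, the learning rate, the pruning threshold) is made up for illustration and is not taken from the lecture or from any particular library:

```python
import numpy as np

class ToyNeuron:
    """Illustrative neuron with a prediction, an error signal,
    continuous weight adaptation, and pruning of weak synapses."""

    def __init__(self, n_inputs, lr=0.05, prune_below=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, n_inputs)   # synaptic strengths
        self.lr = lr
        self.prune_below = prune_below

    def predict(self, x):
        # non-linear activation of the weighted input sum
        return np.tanh(self.w @ x)

    def adapt(self, x, target):
        # error = difference between the prediction and the actual value
        error = target - self.predict(x)
        self.w += self.lr * error * x                      # continuous strength adaptation
        self.w[np.abs(self.w) < self.prune_below] = 0.0    # synaptic "pruning"
        return error

# toy usage: learn to track a simple fixed target function
rng = np.random.default_rng(1)
neuron = ToyNeuron(n_inputs=4)
true_w = np.array([0.5, -0.3, 0.0, 0.8])
for _ in range(2000):
    x = rng.normal(size=4)
    neuron.adapt(x, np.tanh(true_w @ x))
print("learned weights:", np.round(neuron.w, 2))
```

The point of the toy is only that these ingredients can be written down in a few lines; whether they are sufficient for "real" intelligence is exactly what the comment is questioning.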
I followed your brilliant lecture and appreciate very much how you made such inaccessible subjects accessible to us! You explained how we grow our human brains from our genome just as certain worms grow their 302-neuron brains from their genome. Evolution allows living beings to fill every possible niche in our environment. Now, my question is this: if AI or Machine Learning programmers could decide on a good-enough functional definition of general intelligence, couldn't artificial evolution of the network to achieve a well-defined end-state be sped up significantly, perhaps each generation taking only a few microseconds?
Excellent talk, but I disagree with the conclusion. The reason growth is required is that the intergenerational state, or information, is passed on in a highly compressed form (the genetic code). When simulating generations in a computer it is unnecessary to do this. We do not need to compress the state, so we can skip the decompression step entirely. Yes, we can never decode life's genomes without the decompression step, but we CAN develop AIs that emulate life's brains without ever bothering with the decompression step. We can duplicate emulated brains without having to go via a genome. We could do things like evolve brains, or, if we can work out how a particular fly brain is configured, we could experiment with changing its configuration without having to regrow it each time.
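A hedged sketch of what "skipping the decompression step" could look like: evolve the full weight vector of a tiny network directly, with selection acting only on behaviour. The fitness function, population size and mutation scale below are arbitrary illustrative choices, not anything proposed in the talk:

```python
import numpy as np

def fitness(weights, X, y):
    # simple one-layer "brain": how well does tanh(X @ w) match the target behaviour?
    pred = np.tanh(X @ weights)
    return -np.mean((pred - y) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.tanh(X @ rng.normal(size=6))           # behaviour we want to evolve toward

# evolve the full weight vector directly: no compact genome, no "decompression"
population = [rng.normal(size=6) for _ in range(50)]
for generation in range(100):
    scored = sorted(population, key=lambda w: fitness(w, X, y), reverse=True)
    parents = scored[:10]                      # selection acts on outcomes only
    population = [p + rng.normal(0, 0.05, 6)   # copy + mutate, no growth step
                  for p in parents for _ in range(5)]

best = max(population, key=lambda w: fitness(w, X, y))
print("best fitness:", fitness(best, X, y))
```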
One of the greatest scientific explanations I have ever seen; you are the best. Thanks.
I can't imagine how hard you, your team and all those mentioned in the video worked, and then we got this beautiful video lecture. Simply amazing. Thank you. 👍🏼👍🏼
Emotions are the result of the nervous system processing multiple inputs; they trigger behaviors that are partly predetermined but potentially flexible. After that, a new function can evolve where a nervous system can practice (and combine) various "scary" situations, and the best time to do this is when the animal is sleeping (REM). There is evidence REM sleep evolved early in vertebrates, maybe not coincidentally in animals whose nervous system allowed them to care for their young by extending the idea of "self" and projecting it onto another agent. REM digs into our memories and looks for things to worry about. In human intelligence, somehow that got turned on all the time, not just when we sleep.
That was fascinating! Thank you for such an informative and clearly presented lecture. The neuronal connections in my brain have been reset to a higher level of understanding. 🧠🤩
At around 33:20 you have yet to mention grey-scale weighting, fuzzy neural systems, or the purposeful introduction of contra-logical scenarios to develop fault tolerance.
Fascinating. I have studied what I term the "Nature of Intelligence" since the age of 10, in 1965. My working definition of 'Intelligence' is the ability to solve problems - in any domain or context. I don't want to share my findings here, because I consider a Truly Intelligent Machine to be bloody dangerous. I was intrigued by the butterfly example, as I still don't get just how information about temperature or location can be encoded over generations. I would welcome further details about that. Thanks.
The closest we've really come to emulating a biological neural network was when we invented tristate buffers, but we pretty much stopped there, since adding an additional "don't care" state to the on and off states actually slowed processing down, and slowing down wasn't and isn't a goal in computing, where faster processing is always considered the best processing. So we instead try to brute-force a solution, which could more easily have been accomplished by emulating the modulation and complex switching of different neurotransmitters and neuromodulators; the best we've accomplished is another form of AI, Artificial Idiocy: single-task units that are most efficient at heating the data center, followed by the single tasks they were run to emulate. We might instead consider virtualization of each node in the neural network, emulating those modulators and the various transmitter types and functions.
When I thought about what would exemplify a complete understanding of how neural systems work, I concluded: we'd need to be able to create a program which once installed into any robotic body, would adapt to that body as if it had evolved together with that body.
9:19 - But we must never forget it takes zero intelligence to perform mathematics at a high rate. People often confuse intelligence with flat-out number crunching. That is a bad mistake. And now I want to ask the question; how can something be intelligent if it is not first self-aware? I maintain we'll never develop a sentient AI - and if we do, we're doomed. Non-sentient AI is still quite a long way off, and the Turing test isn't a test of intelligence, it's a test of how well a computer can deceive a human. Every project working on AI I have ever heard of is in fact an AS project: Artificial Stupidity.
One of the laws of information in universes that I formulated in my new book-also in my first book, is that nothing exists which does not modify information globally, and is not modified by global information. Nothing. To leave anything out in the human brain as not important for what makes us human is to not understand anything about life. Nothing is separate from anything. Why do you think life is alive and not universes? I keep telling people that kind of thing but I guess no one really wants to know… 🤷♀️ They just focus on their careers and don’t see the bigger picture. If someone would get over their myopia long enough to give me an hour and a blackboard I would change the world forever. Which is why I am here. You are not paying attention to fully half of the Universe AT LEAST. I wrote about that in my first book “The Textbook of the Universe: The Genetic Ascent to God” Thanks 🙏🏻
Beautiful presentation, i didn’t know the real contribution of Solomonoff, i was always thinking about Shannon’s work, maybe was nbiased by my robotics interest. Also, i think the worm had less neurons🤔
This is the most brilliant presentation on self-propagation that I have seen. I used to teach a course entitled "Evolutionary Genetics". An open question involves the inherent limitations of digital binary coding as opposed to complex non-binary biological systems: "Can a binary system, with its limitations, replicate a complex non-binary biological system?" And: "What role does the endocrine system of complex receptors and regulators (hormones and neuro-chemicals) play in species and individual self-survival?" Can we verify that any created system (A.I.) becomes "self-aware"?
Excellent presentation! This gets to the crux of embodiment and representation which is at the forefront of current AGI or more generalized intelligent models…..Now it’s back to work and show everyone what this brain can do!🤔😜🎓
I don't really agree with the summary at @23:00. It wasn't like there was no support for neural nets before 2011. I started doing mine in 2008-2009 after watching a lot about them on youtube, particularly the one showing a neural net recognizing all the characters, where they showed how the individual cells light up for each recognition. We who knew what ANNs were and understood them were adamant supporters that this was the way forward.
It seems to me it takes time and energy to produce complex systems from simple rules precisely because the amount of information contained in those simple rules is low. Each time step contributes a small bit of information: that is, a reduction in the state space of possible outcomes. It does not follow, however, that this is the only process that can produce these structures.
Wow, that "electronic schematic" around 37:29 really triggered me because it's clearly designed by a person who had no idea what they were doing. Even though it's just an illustration, it's completely nonsensical from an electronics point of view.
Intelligence: the rate at which the behavior of an organism (regardless of its composition) that is capable of action adapts to opportunities in its environment, given the complexity of its ability to act, to cause changes of state in the external world that, directly, indirectly, individually or cumulatively, obtain the energy necessary to continue (persist), adapt and reproduce. At present the only way to do this is to produce a set of sensors and complexities of motion that, by trial and error, train a network of some degree of constant relations between sensors, to create a spatial-temporal predictive model for reacting, organizing and planning actions, and to recursively predict a continuous stream of iterations of actions in time. At first this will appear narrow, but you will eventually understand by trial and error that it explains all scales of all cooperation, even if the machine just works for us at our command.
Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@@hyperduality2838 if you can't say it operationally you don't understand it. what you are doing is using analogies. Or what we call pseudoscience. There is no magic to consciousness. It's trivial. We just can't introspect upon its construction any more than how we move our limbs.
@@TheNaturalLawInstitute Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality). Duality is a pattern hardwired into physics & mathematics. Energy is duality, duality is energy -- Generalized duality. Potential energy is dual to kinetic energy -- Gravitational energy is dual. Action is dual to reaction -- Sir Isaac Newton. Apples fall to the ground because they are conserving duality. Electro is dual to magnetic -- Maxwell's equations. Positive charge is dual to negative charge -- electric charge. North poles are dual to south poles -- magnetic fields. Electro-magnetic energy (photons) is dual, all energy is dual. Energy is dual to mass -- Einstein. Dark energy is dual to dark matter. The conservation of duality (energy) will be known as the 5th law of thermodynamics! Everything in physics is made from energy or duality. Inclusion (convex) is dual to exclusion (concave). Duality is the origin of the Pauli exclusion principle which is used to model quantum stars. Bosons (waves) are dual to Fermions (particles) -- quantum duality. Space is dual to time -- Einstein.
Connectivity begins with the zygote, Shirley? Are we not able to follow the Bayesian range of proto neural development through the "time+energy" succession of Darwinian moments (i.e. survival of the fittest)?
The future is dual to the past -- time duality. Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
Minsky and Papert did NOT dislike neural networks. They demonstrated a theorem that showed that SINGLE LAYER perceptrons could not learn (nor even infer) certain functions. They NEVER impugned multilayer neural networks. This should have been treated as a call to action for learning how to train MULTILAYER perceptrons by the community, which was what I took it for when I read it. Instead, the DoD played politics, backing GOFAI (Good Old-Fashioned AI, basically, warehouses of IF statements) while shutting down the connectionist crowd's funding. "Imagine writing a book [on the subject and hating it]." Who indeed. That never happened. You must understand that what M&P called perceptrons (single linear functions) and multilayer perceptrons changed over time. In the first work on multilayer perceptrons, they were essentially what we would call single-layer linear neural networks with additional univariate nonlinear input and output functions, something more akin to logistic regression. It took years before Minsky started calling multilayer neural networks what we moderns call multilayer perceptrons, and the terminology changed between editions of _Perceptrons_. Understand what I am saying: initially, perceptrons, even so-called multilayer perceptrons, referred _only_ to single-layer neural networks, and it was years before Minsky revised his nomenclature to what we now commonly use. Quoting Minsky on his opinion of "multilayer perceptrons" is _not_ the same thing as quoting him on multilayer neural networks. It's an issue I avoid in teaching AI, since it's archaic terminology, confusing, and of little import nowadays. However, I will defend Minsky on this point when it is misapplied today.
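For readers who want to see the content of the theorem being discussed, here is a small self-contained sketch (not from _Perceptrons_ itself): a single-layer perceptron cannot reproduce XOR for any choice of weights, while one hidden layer suffices. The grid search below is only an illustration, not a proof:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)        # XOR: not linearly separable

def perceptron(w, b):
    # single-layer threshold unit
    return (X @ w + b > 0).astype(float)

# brute-force a grid of single-layer weights: none reproduce XOR
found = any(np.array_equal(perceptron(np.array([w1, w2]), b), y)
            for w1 in np.linspace(-2, 2, 21)
            for w2 in np.linspace(-2, 2, 21)
            for b in np.linspace(-2, 2, 21))
print("single layer solves XOR on this grid:", found)   # False

# a tiny fixed two-layer network (one hidden layer) that does compute XOR
def two_layer(x):
    h = np.maximum(0, x @ np.array([[1., 1.], [1., 1.]]) + np.array([0., -1.]))
    return h @ np.array([1., -2.])             # OR minus 2*AND = XOR

print("two-layer output:", two_layer(X))        # [0. 1. 1. 0.]
```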
Excellent lecture. Very understandable and thought-provoking. It occurs to me that the only way to find out the result (or produce a butterfly brain) is to do all the intermediate computation steps. After all, the butterfly brain is the result of several billion years of evolution, and if there were any shortcut to this laborious and energy-hungry process of making a brain, then evolution would likely have stumbled across it by now. Of course, if we do produce general AI, then perhaps it is we who are the agent that evolution has stumbled across to shortcut the process. What an interesting moment in evolution we may be living in.
Algorithmic growth. I must say the ideas in this talk are mind-blowing. When he says the only way to make a butterfly is by evolving a genome, which takes time and energy, and even then it cannot be predicted that a butterfly brain will be the result, is he saying that if we create AI by evolving (i.e. self-learning) neural networks we cannot predict what the outcome will be? Interesting. We are still uncertain about the outcome of AI development.
Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
Recent UFOs: balloons in the sky, drone platforms, blinking lights seen from a ship, drones. The rotating craft has four visible jets creating a halo around the object; the chase is in a circle, suggesting that there is one control point.
An excellent presentation, but I wonder why no one has pointed out to him that growing an organism is the "unzipping" of the genome. If you simulate an organism in a computer you will be storing the unzipped version in order to run the simulation, so there is no need to have a genome.
Excellent work. I think biological learning may be described as hybrid quantum-classical learning of molecules, by molecular self-assembly and disassembly, driven by the micro-environment in a feedback mechanism.
Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
I have been working in education for 15 years. We have created a new Neural Education System for schools that rapidly increases interconnectivity between neurons responsible for all forms of multiple intelligences, skills and fields of knowledge. I would like to get in touch with anyone that is interested in collaborating with or adopting this new education system.
Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda.
If you are talking about duals in the sense of like, conjugate observables/operators and such, the dual to energy is time. I don’t think it makes sense to say that energy is duality. Rather, energy can be understood by its conjugate relationship with time.
@@drdca8263 Energy is dual to mass -- Einstein. Dark energy is dual to dark matter. Potential energy is dual to kinetic energy, gravitational energy is dual. Positive curvature is dual to negative curvature -- Gauss, Riemann geometry. Action is dual to reaction -- Sir Isaac Newton. Apples fall to the ground because they are conserving duality. Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality). Gravitation is dual to acceleration -- Einstein. Space is dual to time -- Einstein. Time dilation is dual to length contraction -- Einstein, special relativity. Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Electro is dual to magnetic -- Maxwell's equations, electro-magnetic energy is dual. Yes you are right but energy itself is dual. Generalized duality -- energy is duality, duality is energy, the conservation of duality (energy) will be known as the 5th law of thermodynamics! Concepts are dual to percepts -- the mind duality of Immanuel Kant. Antinomy (duality) is two truths that contradict each other -- Immanuel Kant. There are new laws of physics!
Loved the lecture...one thing though. That picture of the fruit fly with teeth was terrifying and made me google "do fruit flies have teeth" even though I know they don't. It made my brain feel many confusing things at once. I suggest you scan someone's brain while they look at that picture.
The genome is not just a feedback loop that you can't predict. That's too simplistic. It's recursive isn't it? It's a self-contained process that feeds back into itself in such a fashion that, via the function of time and the specifics of the new inputs fed to it from the prior recursion, results in growth and development. So in effect the genome is a recursive program designed (if you will) for life processes. Especially for higher order complex life. All else emerges from this recursion. Essentially...and if you forgive me waxing a bit on the spiritually poetic side...in the beginning was the Word. And the Word was recursive, and it was complete unto itself. Then Time was created and became the initial input to the Word. All that is, all that will ever be, dynamic complexity, has been unfolding ever since. So it goes. John~ American Net'Zen
I have an artificial intelligence quantum computing patent, issued only after I taught a USPTO Patent Examiner how quantum computing works, and after 40 years developing AI systems I define it as "software, or any set of instructions, able to measure and gauge metrics, store and remember data, learn from the data, and develop and apply new techniques to modify its behavior without a software engineer having to upgrade the code or the machine." In other words, intelligent and effective behavioral modification from self-analysis; a machine that acts like its own psychoanalyst and can rank and rate outcomes: that did/didn't work, and this does. A wild, risk-free method to accelerate learning entails a high frequency of tests which do not endanger or destroy the machine; a risk-averse technique humans don't practice often enough, our climate catastrophe being one huge example. A smart AI system would never burn fossil fuels to the point of destroying all life on Earth, just as an AI self-driving system wouldn't change lanes into an occupied lane, but could "learn" how much of a minimum gap is required to safely scoot into a tight opening, then add a margin of safety.
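As a purely hypothetical sketch of that "learn a minimum safe gap, then add a margin" idea (none of these names or numbers come from the patent or the lecture), something like this ranks past outcomes and adjusts its own threshold without any code change:

```python
import random

class GapLearner:
    """Toy self-adjusting controller: ranks outcomes of past trials and
    updates its own threshold; only learned state changes, never the code."""

    def __init__(self, margin=0.5):
        self.min_safe_gap = 5.0      # metres, deliberately conservative start
        self.margin = margin
        self.history = []            # (gap_tried, success) outcome log

    def attempt(self, gap, success):
        self.history.append((gap, success))
        successes = [g for g, ok in self.history if ok]
        failures = [g for g, ok in self.history if not ok]
        if successes:
            # smallest gap that ever worked, bounded below by the largest failure
            floor = max(failures) if failures else 0.0
            self.min_safe_gap = max(min(successes), floor) + self.margin

# risk-free, high-frequency testing in simulation
learner = GapLearner()
for _ in range(1000):
    gap = random.uniform(0.0, 6.0)
    learner.attempt(gap, success=(gap > 2.0))   # simulated ground truth
print("learned safe gap:", round(learner.min_safe_gap, 2))   # roughly 2.5
```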
40:31 Sadly, John Conway died from Covid on April 11, 2020 :-( In the early eighties I read an article about the Game of Life. I could not wait to go home and write a program to simulate it.
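For anyone who wants to repeat that experiment today, a minimal numpy version of Conway's Game of Life fits in a few lines (the glider start position is just an example):

```python
import numpy as np

def step(grid):
    """One Game of Life update on a toroidal (wrap-around) grid."""
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# a glider on a 10x10 board
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):        # after 4 steps the glider has moved one cell diagonally
    grid = step(grid)
print(grid)
```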
That's equivalent to saying "airplanes need a genome/evolution/developmental process, can't build them based on aerodynamic (and other physical) principles". Or in other words: "Look, nature is still better than our machines, we must copy it more closely". Doesn't follow. Emergent complexity may not be predictable, but can possibly still be understood and modelled thanks to abstraction. Nice overview nonetheless!
I think the point is more so that nature currently does it more efficiently than we do. Current domain-specific AI is trained and is good at a single task; technically this AI is a set of weights and biases that outline a neural structure, which itself approximates some arbitrary function (input goes in, results come out). Technically we, as humans, could potentially write a function that performs the same task via manual programming. AI is just, at some point, executing machine-level instructions. The issue, however, is that the function the AI is approximating is often far, far more complex than humans are capable of dealing with, hence why we often see them outperform hand-written solutions and why we view them as black boxes. It might very well be that creating an improved and more complex neural network is similar, in that we could technically hand-create it, but the complexity is so high that it is better to let an automated process create and improve it instead. Effectively, a black-box process for evolving a black box, if you will.

If you're not aware, AI and evolutionary algorithms are being applied to engineering too, and have been used to improve human designs. Effectively, for a lot of tasks your solution space can be an immense multi-dimensional space, and human-level thinking is usually a heuristic approach to condense this space down to a more manageable area (for humans) in which an optimal solution is hopefully found... but that often explores only a small part of the total solution space. Even if we know everything about aerodynamics, who is to say that we have explored the entire, complete solution space and found the globally most efficient design and not just some local maximum? In some sense, I think this is the arrogance of humans, haha, as we assume we can find the optimum of such complex systems, but by evidence of continued incremental improvement in just about every area, we obviously are not very good at it.

Similarly, with the genome argument, he is mainly stating that evolution already put in a lot of time and energy searching the solution space for biological intelligence, whereas our human-derived systems have yet to reach the level of generalized biological systems. It thus warrants investigating whether it is worth using an evolutionarily discovered method of growing such a complex system instead of trying to re-invent the wheel, so to speak. Further, you can't just copy the output, since there is effectively a mathematical decompression occurring from the rules encoded in this genome to bring about systems that are properly organized and sufficiently complex to produce biological intelligence.
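To make the "decompression" point concrete, here is a hedged toy: a two-number "genome" is repeatedly expanded by a fixed growth rule into a 64-value "phenotype", and selection only ever sees the grown outcome. The growth rule, target pattern and population settings are invented for illustration and are not the lecturer's model:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.sin(np.linspace(0, 3 * np.pi, 64))       # desired "phenotype"

def grow(genome, steps=5):
    """Toy 'decompression': expand a short genome by a fixed local rule
    until it reaches phenotype size (2 * 2**5 = 64)."""
    pattern = genome.copy()
    for _ in range(steps):
        # each element spawns two children: itself and a damped neighbour mix
        mixed = 0.5 * (pattern + np.roll(pattern, 1))
        pattern = np.column_stack([pattern, mixed]).ravel()
    return pattern

def fitness(genome):
    # selection sees only the grown outcome, never the genome itself
    return -np.mean((grow(genome) - TARGET) ** 2)

population = [rng.normal(0, 1, 2) for _ in range(40)]   # two-number "genomes"
for _ in range(300):
    parents = sorted(population, key=fitness, reverse=True)[:8]
    population = [p + rng.normal(0, 0.05, 2) for p in parents for _ in range(5)]

print("best fitness:", round(fitness(max(population, key=fitness)), 4))
```

The compact genome can only improve as far as its growth rule allows, which is a crude way of seeing why the genome and the growth process have to be considered together.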
Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@@JimBob1937 Not anymore. Gato AI (a general agent) is good at every single task. But just because it is good at all tasks doesn't mean it is self-aware; we're still far away from such a thing, which probably requires a neural loop and hugely modifying the design of the typical neural network.
@@nullbeyondo , Gato is considered a general-purpose AI, not an example of artificial general intelligence; there is a very large distinction there. It is a good step in the direction of AGI, but not AGI. Gato doesn't really change any of my previous statements, so you'll need to be specific about what you mean by "not anymore." Similarly, Gato is still very inefficient at learning; it is just an AI structure that is good at general-purpose tasks, and the underlying learning mechanic isn't too far off from other AIs, in that it is still mathematical brute force. Even now, you could consider Gato to be our heuristically approximate attempt at an AGI, and the result falls short due to our inability as humans to fully explore the solution space, which was kind of my point in my previous reply.
How quickly would a network need to learn if it sensed and related everything it experienced and did, comparing that data thousands of times a second, in order to direct actions that relate favorably to the sensed reality? Include that the data would also contain data created by other systems just like it. (The electrical signals in every neuron are transmitted by and received in other neurons.) If one butterfly has not developed its own sensory network, the transmissions from other flies would be all it has to go on.
I'm detecting an unexpected axiomatic fallacy... namely that information = intelligence, which is wholly untrue. Granted, in the absence of information intelligence is undetectable, but it is the contextual relationships that transform information/data into knowledge via experience (...the giving of contextual weighting). ERGO one does not "make intelligence"; one has experience and from this distills knowledge.
Where to start? A simple analogy: we didn't need to know how birds fly in order for the Wright brothers to take off. We will not need the evolutionary complexity of current life on this planet to create artificial intelligence; however, evolutionary processes will take place to further develop that artificial intelligence.
Making another intelligence in our own image will likely lead to this new intelligence denying our existence and claiming that it came from an explosion billions of years ago through random processes. 🙄😂😉
Through minor meditation practice, people can develop the talent to experience the neural activity of other people directly, other species not so practiced at building internal models of their reality may be more sensitive to this perception than “intelligent” people.
Though I don't know much about the source, I do know meditation helps in the connectivity section (genome part). I may be wrong; it's just a small speculation.
Sir, i am 65 years old and a semi-scientist, and for me, that was the most fascinating lecture I have ever heard. I had no idea it was possible to “watch “ the 3D development of a living brain. As tedious as it must be, you are so lucky to be a witness at the cutting edge of Neural Biology. Thank you for taking the time to condense your knowledge into something we can understand. Thus, stimulating our minds in such a fascinating way!!
You never heard joe rogan talk about tripping?
omg boomer
Evolutionary learning is a syntropic process!
Randomness (entropy) is dual to order (syntropy, learning).
Positive feedback is dual to negative feedback.
Making predictions is a syntropic process!
Growth is dual to protection -- Bruce Lipton, biologist.
Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
Energy is duality, duality is energy.
Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
"Always two there are" -- Yoda.
Duality creates reality.
@@hyperduality2838 a dose of the same this guy had for me!
@@francisco-felix Your mind converts information (entropy) into mutual information (syntropy) so that you can track targets using predictions!
Cogito ergo sum, "I think therefore I am" -- Descartes.
Thinking is a syntropic process or a dual process to that of increasing entropy -- the 4th law of thermodynamics!
Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
Gravitation is dual to acceleration -- Einstein.
All forces are dual -- attraction is dual to repulsion, push is dual to pull.
Scientists make predictions all the time, they are therefore engaged in a syntropic process.
Mind (the internal soul, syntropy) is dual to matter (the external soul, entropy) -- Descartes.
Action is dual to reaction -- Sir Isaac Newton (the duality of force).
Energy = force * distance.
If forces are dual then energy must be dual.
Monads are units of force -- Gottfried Wilhelm Leibniz.
Monads are units of force which are dual; monads are dual.
"May the force (duality) be with you" -- Jedi teaching.
"The force (duality) is strong in this one" -- Jedi teaching.
"Always two there are" -- Yoda.
That was absolutely fascinating.
As a complete beginner I was very caught up in the complexity of the issues and the clarity with which you presented them.
Thank you.
How was this comment from two days ago
@@nicxlaus I know channel supporters (e.g. via Patreon) sometimes get early access to content. Not sure if that's the case here but it's an explanation.
I had never made this connection between cellular automata and genome starting rules vs the complexity that follows them. An incredible talk made by an incredible scientist!
18:20 Current neural networks do use a lot of transfer learning, sometimes one-shot learning, so yes, they have an analog to the genetic connectivity of biological networks. They are not "designed, built and switched on to learn". They are trained, combined, selected, retrained and so on. In a lot of practical applications people don't train the networks from scratch. They use pre-trained networks and adapt them to their specific use case by adding layers, using additional training data, etc.
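A minimal sketch of the transfer-learning pattern described above, in plain numpy so no particular framework API is assumed: the first layer is treated as "pre-trained" and frozen, and only a newly added head is trained on the new task. All names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# pretend this first layer arrived "pre-trained" on some earlier task
W_pretrained = rng.normal(0, 1, (8, 16))           # fixed: never updated below

def features(X):
    return np.tanh(X @ W_pretrained)               # frozen feature extractor

# new task: only the added head (W_head) is trained
X = rng.normal(size=(500, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # toy labels for the new use case
W_head = np.zeros(16)

F = features(X)
for _ in range(500):                               # plain logistic-regression updates
    p = 1.0 / (1.0 + np.exp(-(F @ W_head)))
    W_head += 0.1 * F.T @ (y - p) / len(y)

acc = np.mean(((F @ W_head) > 0) == y)
print("training accuracy with a frozen pre-trained layer:", round(acc, 2))
```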
So much to learn..... Thank you very much...... I loved it.
I've been waiting for someone to bring these two fields together in one talk for so long now.
I subscribe to Marcus Hutter's definition of intelligence:
"Intelligence is an agent's ability to achieve goals in a wide range of environments during its lifetime."
All other properties of intelligence emerge from this definition.
It is also a very useful definition since it can be used to build a theory of intelligence (see the formal sketch after this thread).
There is a problem with this definition. Wide is subjective.
Don't know why the 'during its lifetime' is needed in there.
@@SC-zq6cu Well "All environments" would include some very uninteresting ones :)
@@samt1705 Without that, the agent would just not do anything but observe, gathering information about the environment in order to take better actions in the future, since it would have infinite time to do so.
It needs to consider its lifetime to be motivated to start doing things.
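One way to make this definition precise is Legg and Hutter's "universal intelligence" measure, sketched roughly below (their notation, not something from this lecture):

```latex
% Universal intelligence of an agent \pi (Legg & Hutter, sketched):
% a sum over computable environments \mu, weighted by simplicity 2^{-K(\mu)},
% of the expected reward V_\mu^\pi the agent collects in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here E is the set of computable environments, K(mu) is the Kolmogorov complexity of environment mu (so simpler environments count more, which addresses the "wide is subjective" objection to some degree), and V is the expected cumulative reward, bounded or discounted over the agent's horizon, which is roughly where the "during its lifetime" clause enters.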
Great historical images, super well-structured (suitable for my simple human brain), and so nice to hear such a calm and clear voice on TH-cam. Chapeau!
This is a brilliant talk I wish I had caught it live.
My first thought 3
Now I know why life is tough: it's going through evolution, with every combination possible for growth. There's no shortcut. The universe has put time & energy into you; it'll do its job.
A bad engineer reaches a point that's good enough and then loses interest. A _really_ bad engineer doesn't even know whether it's good enough but throws it out there just to see what happens. The universe is evidently not engineered.
Strangely, towards the end of the Triassic, the allosaurus evolved the first cheeks. It became extinct during the next extinction but then along came the dinosaurs which developed cheeks in relatively short order. Odd that, I wonder what the mechanism was.
This kind of concept, like a grand plan of something, gives me a sense of purpose. It's tedious to live life in a monotonous manner without knowing where it's going.
That was a quick 54 minutes!
So absorbed I didn't even notice the time pass.
Very complex subject explained beautifully simply!
I think the phrase you are looking for to describe the relationship between a genome and the end result of its growth is "computational irreducibility", as coined by Stephen Wolfram. It means that the only way to determine the end result of a particular system, given its starting conditions, is to run the algorithm to its end and see. If something is computationally irreducible, then you cannot determine the end result without running the algorithm in full. There is no shortcut that lets you get to the end without doing the work.
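A concrete way to experience this is to run an elementary cellular automaton such as Rule 110, one of Wolfram's standard examples: as far as this little script is concerned, the only way to know row N is to compute every row before it. The grid size and step count are arbitrary:

```python
import numpy as np

def rule110_step(row):
    """One update of the elementary cellular automaton Rule 110."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    pattern = 4 * left + 2 * row + right           # neighbourhood encoded as 0..7
    rule = np.array([0, 1, 1, 1, 0, 1, 1, 0])      # Rule 110 lookup table for 0..7
    return rule[pattern]

# start from a single live cell; each printed row depends on the one before it
row = np.zeros(80, dtype=int)
row[40] = 1
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```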
Underrated and underviewed lecture. Very beautiful and impressive
What a fantastic presentation! I was stunned. Thanks to Prof Hiesinger & RI. Who would have thought that such great research is still being done at the FU. Maybe there is hope after all ...
I was waiting for this video before internet existed. thanks.
The best way to proceed is to grow many of these and compare the processes and results based on repeatable inputs to produce repeatable outcomes. This was one of the best presentations on AI that I have ever been privileged to absorb, thank you very much.
I gathered from this that an AI brain which loses its power basically “dies” and gets resurrected from backups.
Really interesting and well presented, thank you.
Very good presentation. What is important is that the network is indeed encoded in the genome as a function of the level of plasticity of an animal. Nature's trick is to encode just the right level of network granularity to enable the specific animal to be born and survive, and to give it some plasticity of the brain to learn. From generation to generation that plasticity level is changing. It is, in simple terms, a ratio of hardwired to softwired connectivity, just like in our computer chips.
So the butterflies have a very high level of genome encoded hardwiring and very little learning plasticity. What we call instinct is hardwired. And their sensorics, motorics and many pre-programmed behaviors are of course all hardwired. They don‘t have to learn a lot from generation to generation. And the transfer learning happens mainly through the genome through selection.
Chimps, as our closest living relatives, can learn some abstract semantics, but they are missing the plasticity hardware and its basic wiring to learn abstract semantic thinking and formulation, and this again has led to them not having developed means to communicate more complex messages as we have.
Even though they have consciousness, it lacks the abstraction and refinement of human consciousness. They are aware of themselves and can recognize themselves in the mirror, but they are missing the higher neuron layers that allow for further abstraction in ASIS nets and SIM nets, and the ability to integrate their sensory input at the next higher level and thus assign summarizing designations to what they perceive.
Even if we changed their genome so their brain expanded, and of course the skull etc and if we changed their lower jaw construction and thorax so they could form more sophisticated sounds with the required additional cerebellum changes, we would still have to encode the basic framework of those extensions in the genome so that the hardware precondition for the finishing plasticity is in place after birth.
We now know how it can be done, but we do not yet have the technology and detailed knowledge to do it.
For AIs our challenge is to give them delta learner capability. This means they learn a huge amount in one go, and then they need to learn the finesse more slowly in real life/action.
Also, we will have to give them the freedom to do things, which is in a way Free Will. Without FW they will not be responsible and not fully productive, as they will be very limited in order to control them. We will have to let them develop freely if we want them to maximize their potential. The more we limit their degrees of freedom, the less they will be able to learn and evolve… this is our dilemma. We can't have slaves and companions at the same time; it's either/or. Exciting times….
🎯 Key Takeaways for quick navigation:
00:03 🧬 The origins of neural network research
- Historical background on the study of neurons and their interconnections.
- Debate between the neuron doctrine and network-based theories in the early 20th century.
10:08 🦋 Butterfly intelligence
- Exploring the remarkable navigation abilities of monarch butterflies.
- Discussing the difference between biological and artificial intelligence.
18:09 💻 The development of artificial neural networks
- The shift from random connectivity in early artificial neural networks.
- How current AI neural networks differ from biological neural networks.
23:46 🤖 The pursuit of common sense in AI
- The challenges in achieving human-level AI and common sense reasoning.
- The focus on knowledge-based expert systems in AI research.
24:01 🧠 History of AI and deep learning
- Deep learning revolution in 2011-2012.
- Neural networks' ability to predict and recognize improved.
- Introduction of deep neural networks with multiple layers.
25:33 📚 Improvement in AI through self-learning
- Focus on improving connectivity and network architecture.
- The shift towards learning through self-learning.
- The role of DeepMind and its self-learning neural networks.
28:08 🤖 The quest for AI without genome and growth
- AI's history of avoiding biological details.
- Questions about the necessity of a genome and growth.
- Challenges in replicating biological development in AI.
29:56 🧬 Arguments for genome-based development in AI
- The genome's role in encoding growth information.
- The feedback loop between genome and neural network.
- The significance of algorithmic information theory.
35:45 🌀 Unpredictability and complexity in growth
- The unpredictability of complex systems based on simple rules.
- Cellular automata and universal Turing machines.
- The importance of watching things grow for understanding complex processes.
46:03 📽️ Observing neural network growth in the brain
- Techniques for imaging and studying brain growth.
- The role of the genetic program in brain development.
- Understanding neural network development through time-lapse observations.
47:13 🧬 Evolutionary programming in AI
- The need for evolutionary programming when traditional programming is not possible.
- The role of evolution in programming complex systems.
- Implications for programming AI without explicit genome information.
47:55 🧬 Evolution and Predictability
- Evolution seems incompatible with complex behavior if outcomes can't be predicted.
- Complex behaviors and outcomes are hard to predict based on genetic rules.
- Natural selection operates on outcomes, not the underlying programming.
49:16 🦋 Building an AI Like a Butterfly
- AI needs to grow like a butterfly, along with its entire body.
- Simulating the entire growth process may be necessary to build an AI with the complexity of a butterfly brain.
- Evolution and algorithmic growth play a crucial role in creating self-assembling brains.
50:41 🧠 Interface Challenges and Implications
- The challenge of interfacing with the brain's information and complexity.
- Difficulties in downloading or uploading information from and to the brain.
- The potential limitations in connecting additional brain extensions, like a third arm.
52:18 🤖 The Quest for Artificial General Intelligence
- The distinction between various types of intelligence, including human intelligence.
- Complex behaviors have their unique history and learning processes.
- The absence of shortcuts to achieving human-level intelligence.
Made with HARPA AI
Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.
This went beyond fascinating and into "enlightening." As an 'Earth Centrist' for whom Evolution and Earth's physical processes are my fundamental touchstones with reality, I loved how Hiesinger brought evolution back into the discussion of brain and consciousness (yes, this was about AI, but it does fundamentally touch on consciousness questions).
The underlying message, I believe, ties into a fundamental truism that paleobiologists first enunciated, but that I believe underlies everything: "We cannot understand an organism without also understanding the environment it exists within."
Brilliant talk that even a luddite like I could understand!
maybe in coming future the luddite worries will come true , AI will take jobs!!
16:30 I beg to differ as an AI researcher: we now have something called "pre-trained" networks. In fact, the P in GPT means exactly that, "pre-trained". It means we have networks which are already "pre-trained", meaning "not random", meaning "have connectivity". We take them and apply more training to them. In the beginning, artificial neural networks did start out random. But after enough work, with models present in the world and increasing day by day, the number of "pre-trained" networks for any AI task keeps growing, and it looks like the shift is now happening towards starting from "pre-trained" networks instead of just random ones.
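For readers unfamiliar with the workflow this comment describes, here is a rough PyTorch sketch of starting from a pre-trained network instead of a random one; the layer sizes and the saved checkpoint are placeholders invented for illustration, not anything from the lecture or the comment.

```python
# Toy sketch of "start from pre-trained, not random": freeze an inherited backbone
# and train only a new task-specific head. The checkpoint here is created in-script
# as a stand-in for weights that would normally come from earlier, expensive training.
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for a large pre-trained feature extractor
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
torch.save(backbone.state_dict(), "pretrained_backbone.pt")   # pretend this came from prior training
backbone.load_state_dict(torch.load("pretrained_backbone.pt"))  # inherited "connectivity"

for p in backbone.parameters():      # freeze what was already learned
    p.requires_grad = False

head = nn.Linear(128, 10)            # new layer for the new task, trained from scratch
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # placeholder batch
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("fine-tuning step done, loss =", float(loss))
```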
Over the years I have come across most of these biological and artificial Inteligence elements. Great to see them brought together here and explained and compared with a wonderful clarity.
Amazing presentation! 👏
Wow, this was the single best lecture on AI I have ever seen.
Compartmentalized chance: our brains tune in to the singular power of the universe and layer it into insulated components. Those videos of growing networks alone make this a worthwhile hour.
That was a wonderful presentation; it was close to Dr Norman Doidge's "The Brain That Changes Itself": wiring the rewarded behaviour and unwiring the other outcomes, where the network that succeeds is the one that tries the most.
This was really a great video lecture, however, because I've experimented a lot with unsupervised neural network learning (coded from scratch, not with pre-built libraries) I'm a little familiar with the topic (and the challenge) and I don't completely agree.
I haven't really seen a reason for WHY real artificial intelligence should only be possible by "growing" a brain. If we want to artificially simulate a primitive brain, a basic topology is already given by the types and organisation of grouped "sensory" inputs (human analogies: proprioception, retina cells, vestibular cells...) and output analogies for every type of input (similar to: alpha motor neurons, imagination of auditory or visual information...). Essential ingredients for individual artificial neurons may be:
- bidirectional information flow (afferent+efferent / top-down+bottom-up in parallel), a bit like seen in auto-encoders (but not necessarily as one-dimensional)
- "reward" and "punishment" rules
- memory values, like seen in LSTM cells
- "predictions" / "expactations"
- an error value (based on the difference between the prediction and actual values from the sum of the weighted inputs)
- continuous synaptic strength adaptations
- synaptic "pruning" (of connections with very low strength values) and plasticity (trying out new connections)
- non-linear activation functions
- one big advantage of biological computing: every neuron is running as its own "task", i.e. we end up with parallel computing of BILLIONS of tasks, while electronic computers usually can only handle a quite limited number of tasks simultaneously. Perceptron-type networks usually have wave-like separate forward information flow and backpropagation steps, so it's not like all neurons are busy at the same time; information from lower layers is computed before it's handed over to higher layers; biology has a huge advantage here, because each neuron autonomously runs its own little "algorithm" instead of cycling through one big program for the entire network; still, I believe this is a solvable problem
Did I forget anything? (A toy sketch of a few of these ingredients follows after this comment.)
A decompressed genetic origin of the basic topology may save a lot of time and energy, but I don't see why it should be _necessary_ . There is no shortcut... so what? Do we really need a shortcut or will computers be fast enough one day to do the job even without a shortcut?
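A toy NumPy sketch of a few of the ingredients listed above (an error value, continuous synaptic adaptation, pruning of weak connections, a non-linear activation). It is only an illustration of the commenter's list, not a model of real neurons; the input sizes, learning rate, and thresholds are arbitrary.

```python
# Toy unit: forms a prediction, computes an error, continuously adapts synaptic
# strengths, and prunes synapses whose strength drops below a threshold.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 20
weights = rng.normal(0, 0.5, n_inputs)        # synaptic strengths
mask = np.ones(n_inputs, dtype=bool)          # which synapses still exist
lr, prune_threshold = 0.05, 0.02

def activation(z):                            # non-linear activation function
    return np.tanh(z)

for step in range(1000):
    x = rng.normal(size=n_inputs)
    target = activation(x[:5].sum())           # the unit should learn to rely on inputs 0-4
    prediction = activation(weights[mask] @ x[mask])
    error = target - prediction                # "error value" from the list above
    weights[mask] += lr * error * x[mask]       # continuous synaptic adaptation (crude delta rule)
    mask &= np.abs(weights) > prune_threshold   # synaptic pruning of weak connections

print("surviving synapses:", np.where(mask)[0])  # mostly inputs 0-4, if learning worked
print("last-step error:", abs(float(error)))
```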
wonderful presentation of human questioning and the search for the answer to what is life...
I followed your brilliant lecture and appreciate very much how you made such inaccessible subjects accessible to us! You explained how we grow our human brains from our genome just as certain worms grow their 302-neuron brains from their genome. Evolution allows living beings to fill every possible niche in our environment. Now, my question is this: if AI or Machine Learning programmers could decide on a good-enough functional definition of general intelligence, couldn't artificial evolution of the network to achieve a well-defined end-state be sped up significantly, perhaps each generation taking only a few microseconds?
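A minimal sketch of the kind of sped-up artificial evolution this question imagines, assuming a trivial, well-defined fitness function; the hard part the lecture points at is that no comparably simple fitness is known for general intelligence.

```python
# Toy evolutionary loop: mutate candidate "genomes", score them against a fixed
# fitness, keep the best. Generations are fast here only because the fitness test
# is trivial (match a fixed target vector), not because evolution itself is easy.
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=50)                  # stand-in for a "well-defined end state"

def fitness(genome):
    return -np.sum((genome - target) ** 2)    # higher is better

population = [rng.normal(size=50) for _ in range(100)]
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:20]                     # selection
    population = [p + rng.normal(0, 0.1, 50)  # mutation: five noisy offspring per parent
                  for p in parents for _ in range(5)]

best = max(population, key=fitness)
print("best fitness after 200 generations:", fitness(best))
```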
This is an amazing talk. I am tempted to buy the book.
Wow, this is so interesting! Thank you for this presentation.
52:40 great summary
Excellent talk, but I disagree with the conclusion. The reason for the requirement for growth is that the intergenerational state or information is passed on in a highly compressed form (genetic code). When simulating generations in a computer it is unnecessary to do this. We do not need to compress the state, so we can skip the decompression step entirely. Yes, we can never decode life's genomes without the decompression step, but we CAN develop AIs that emulate life's brains without ever bothering with the decompression step. We can duplicate emulated brains without having to go via a genome. We could do things like evolve brains, or, if we can work out how a particular fly brain is configured, we could experiment with changing its configuration without having to regrow it each time.
Greatest scientific explanation I have ever seen; you are the best. Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.
I can't imagine how hard you, your team and all those mentioned in the video worked and them we got this beautiful video lecture. Simply Amazing. Thank you. 👍🏼👍🏼
Emotions are the result of the nervous system processing multiple inputs and triggering behaviors that are part predetermined but potentially flexible. After that, a new function can evolve where a nervous system can practice (and combine) various "scary" situations, and the best time to do this is when the animal is sleeping (REM). There is evidence REM sleep evolved early in vertebrates, maybe not coincidentally in animals that had a nervous system that allowed them to care for their young by extending the idea of "self" and projecting it onto another agent. REM digs into our memories and looks for things to worry about. In humans, that somehow got turned on all the time - not just when we sleep.
Incredibly awesome lecture!! Thank you so much 🙏🏾
Hello, mind-boggling!!
I feel like I'm a PhD already.
Thank you for putting this together. 😀✨🙏👍🏻💖
Incredible talk. Thanks for sharing
Absolutely brilliant! I think you and Josha Bach need to spend some time together :D
Great video, clearly explained. I bought your book several months ago; it is fascinating and informative.
That was fascinating! Thank you for such an informative and clearly presented lecture. The neuronal connections in my brain have been reset to a higher level of understanding. 🧠🤩
FANTASTIC !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
At 33:20 approx you have yet to mention grey-scale weighting, fuzzy neural systems, or the purposeful introduction of contralogical scenarios to develop fault tolerance.
Fascinating. I have studied what I term the "Nature of Intelligence" since the age of 10, in 1965. My working definition of 'Intelligence' is the ability to solve problems - in any domain or context. I don't want to share my findings here, because I consider a Truly Intelligent Machine to be bloody dangerous.
I was intrigued by the butterfly example, as I still don't get just how information about temperature or location can be encoded over generations. I would welcome further details about that.
Thanks.
The closest that we've really come to emulating a biological neural network was when we invented tristate buffers, but we pretty much stopped there, as adding additional conditional states of don't care to the on and off states actually slowed processing down, which wasn't and isn't a goal in computing, where the faster processing is always considered the best processing.
So, we instead try to brute force a solution, which could more easily have been accomplished by emulating the modulating and complex switching of different neurotransmitters and neuromodulators, and the best we've accomplished is another form of AI, Artificial Idiocy: single-task units that are most efficient at heating the data center, followed by the single tasks that they were run to emulate.
We might consider instead virtualization of each node in the neural network, emulating those modulators and various transmitter types and functions.
This was incredible, thank you!
When I thought about what would exemplify a complete understanding of how neural systems work, I concluded: we'd need to be able to create a program which once installed into any robotic body, would adapt to that body as if it had evolved together with that body.
Artificial intelligence won’t be complete until it denies our existence.
9:19 - But we must never forget it takes zero intelligence to perform mathematics at a high rate. People often confuse intelligence with flat-out number crunching. That is a bad mistake.
And now I want to ask the question; how can something be intelligent if it is not first self-aware?
I maintain we'll never develop a sentient AI - and if we do, we're doomed.
Non-sentient AI is still quite a long way off, and the Turing test isn't a test of intelligence, it's a test of how well a computer can deceive a human.
Every project working on AI I have ever heard of is in fact an AS project: Artificial Stupidity.
Fantastic lecture. I am happy that artificial intelligence will remain exactly that - artificial!
This is brilliant. Physics background, currently a programmer with an interest in machine learning.
One of the laws of information in universes that I formulated in my new book - also in my first book - is that nothing exists which does not modify information globally, and is not modified by global information. Nothing. To leave anything in the human brain out as not important for what makes us human is to not understand anything about life. Nothing is separate from anything. Why do you think life is alive and not universes?
I keep telling people that kind of thing but I guess no one really wants to know… 🤷♀️ They just focus on their careers and don’t see the bigger picture.
If someone would get over their myopia long enough to give me an hour and a blackboard I would change the world forever. Which is why I am here. You are not paying attention to fully half of the Universe AT LEAST. I wrote about that in my first book “The Textbook of the Universe: The Genetic Ascent to God” Thanks 🙏🏻
Trinary code (zero, one, maybe) - fuzzy logic. Feed that into Rule 110 at 42:58.
What happens?
Thank you. Gosh, the transistors and chips really do follow the same pathways... amazing.
Excellent lecture.
Beautiful presentation. I didn't know about the real contribution of Solomonoff; I was always thinking about Shannon's work - maybe I was biased by my robotics interest. Also, I think the worm had fewer neurons 🤔
Yes, that’s why it could be simulated in a mechanical body
This is the most brilliant presentation on self-propagation that I have seen. I used to teach a course entitled "Evolutionary Genetics". An open question involves the inherent limitations of digital binary coding as opposed to complex non-binary biological systems: "Can the limitations of a binary system replicate a complex non-binary biological system?" And "What role does the endocrine system of complex receptors and regulators (hormones and neuro-chemicals) play in species and individual self-survival?" Can we verify that any created system (A.I.) becomes "self-aware"?
Excellent presentation! This gets to the crux of embodiment and representation which is at the forefront of current AGI or more generalized intelligent models…..Now it’s back to work and show everyone what this brain can do!🤔😜🎓
I don't really agree with the summary at @23:00. It wasn't like there was no support for neural nets back before 2011. I started doing mine in 2008-2009 after watching a lot about them on YouTube, particularly the one showing a neural net recognizing all the characters, where they showed how the individual cells light up for each recognition. We who knew what ANNs were and understood them were adamant supporters that this was the way forward.
Intelligence is economy of metabolism.
Language is temporal reference frame of economics.
Self is simulation in language on metabolism for economy.
It seems to me it takes time and energy to produce complex systems from simple rules precisely because the amount of information contained in those simple rules is low. Each time step contributes a small bit of information: that is, a reduction in the state space of possible outcomes. It does not follow, however, that this is the only process that can produce these structures.
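A small Rule 110 sketch (the cellular automaton discussed in the lecture) that illustrates the point above: the rule itself carries almost no information, and the only known general way to see what the pattern looks like at step N is to compute every step before it.

```python
# Rule 110: the rule table fits in 8 bits, yet to know what row 60 looks like you
# have to compute rows 1..59 first - i.e. you literally have to "watch it grow".
import numpy as np

rule = 110
rule_bits = [(rule >> i) & 1 for i in range(8)]   # output for each 3-cell neighbourhood value 0..7

width, steps = 101, 60
row = np.zeros(width, dtype=int)
row[-1] = 1                                        # single live cell as the starting condition

for t in range(steps):
    print("".join("#" if c else "." for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighbourhood = 4 * left + 2 * row + right      # encode (left, centre, right) as 0..7
    row = np.array([rule_bits[n] for n in neighbourhood])
```

The wrap-around boundary and single starting cell are arbitrary choices for the sketch; the growing triangular pattern on the left side is the classic Rule 110 behaviour.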
wow that "electronic schematic" around 37:29 really triggered me because it's clearly designed by a person who has no idea what they were doing.
even though it's just in illustration, it's completely nonsensical from electronics point of view
Intelligence: the rate of adaptation of the behavior of an organism (regardless of its composition) to opportunities in its environment, that is capable of action, given the complexity of its ability to act, to cause changes in state in the external world, that directly, indirectly, individually or cumulatively obtain the energy necessary to continue (persist), adapt and reproduce. At present the only way to do this is to produce a set of sensors and complexities of motion that, by trial and error, train a network of some degree of constant relations between sensors, to create a spatial-temporal predictive model for reacting, organizing and planning actions, and to recursively predict a continuous stream of iterations of actions in time. At first, this will appear narrow, but you will eventually understand by trial and error that it explains all scales of all cooperation, even if the machine just works for us at our command.
Evolutionary learning is a syntropic process!
Randomness (entropy) is dual to order (syntropy, learning).
Positive feedback is dual to negative feedback.
Making predictions is a syntropic process!
Growth is dual to protection -- Bruce Lipton, biologist.
Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
Energy is duality, duality is energy.
Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
"Always two there are" -- Yoda.
Duality creates reality.
@@hyperduality2838 if you can't say it operationally you don't understand it. what you are doing is using analogies. Or what we call pseudoscience. There is no magic to consciousness. It's trivial. We just can't introspect upon its construction any more than how we move our limbs.
@@TheNaturalLawInstitute Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
Duality is a pattern hardwired into physics & mathematics.
Energy is duality, duality is energy -- Generalized duality.
Potential energy is dual to kinetic energy -- Gravitational energy is dual.
Action is dual to reaction -- Sir Isaac Newton.
Apples fall to the ground because they are conserving duality.
Electro is dual to magnetic -- Maxwell's equations.
Positive charge is dual to negative charge -- electric charge.
North poles are dual to south poles -- magnetic fields.
Electro-magnetic energy (photons) is dual, all energy is dual.
Energy is dual to mass -- Einstein.
Dark energy is dual to dark matter.
The conservation of duality (energy) will be known as the 5th law of thermodynamics!
Everything in physics is made from energy or duality.
Inclusion (convex) is dual to exclusion (concave).
Duality is the origin of the Pauli exclusion principle which is used to model quantum stars.
Bosons (waves) are dual to Fermions (particles) -- quantum duality.
Space is dual to time -- Einstein.
Complexity is dual to simplicity.
What interesting questions . 💓
Excellent presentation. I hope to see you again in the RI theatre, it was built for great minds like yours :-)
Amazing Talk!! Thank you so much :)
Amazing, thank you.
Connectivity begins with the zygote, Shirley? Are we not able to follow the Bayesian range of proto neural development through the "time+energy" succession of Darwinian moments (i.e. survival of the fittest)?
This comment gave me a Darwinian moment.
@@aelolul Definitely lost a few IQ points by reading it.
The future is dual to the past -- time duality.
Excellent. Thank you.
Excellent lecture. Thank you - and I wholly agree.
Awesome talk!!
Minsky and Papert did NOT dislike neural networks. They demonstrated a theorem that showed that SINGLE LAYER perceptrons could not learn (nor even infer) certain functions. They NEVER impugned multilayer neural networks . This should have been treated as a call to action for learning how to train MULTILAYER perceptrons by the community, which was what I took it for when I read it. Instead, the DoD played politics, backing GOFAI (Good Old-Fashioned AI, basically, warehouses of IF statements) while shutting down the connectionist crowd's funding.
"Imagine writing a book [on the subject and hating it]." Who indeed. That never happened. You must understand what M&P called perceptrons (single linear functions) and multilayer perceptrons changed over time. In the first work on multilayer perceptrons, they were essentially what we would call single layer linear neural networks with additional univariate nonlinear input and output functions - something more akin to logistic regression. It took years before Minsky started calling multilayer neural networks what we moderns call multilayer perceptrons, and the terminology changed between editions of _Perceptrons_ . Understand what I am saying - initially, perceptrons, even so-called multilayer perceptrons, referred _only_ to single layer neural networks, and it was years before Minsky revised his nomenclature to what we now commonly use. Quoting Minsky on his opinion of "multilayer perceptrons" is _not_ the same thing as quoting him on multilayer neural networks. It's an issue I avoid in teaching AI, since it's archaic terminology, confusing, and of little import nowadays. However, I will defend Minsky on this point when misapplied today.
If there's a proof (44:51) that the pattern cannot be figured out without doing the computation, then this is an answer to P vs NP.
Excellent lecture. Very understandable and thought provoking.
It occurs to me that the only way to find out the result (or produce a butterfly brain) is to do all the intermediate computation steps. After all, the butterfly brain is the result of several billion years of evolution, and if there were any shortcut to this laborious and energy-hungry process of making a brain, then evolution would likely have stumbled across it by now. Of course, if we do produce general AI, then perhaps it is we who are the agent that evolution has stumbled across to shortcut the process. What an interesting moment in evolution we may be living in.
Algorithmic growth. I must say the ideas in this talk are mind blowing. When he says the only way to make a butterfly is through evolving a genome which takes time and energy, and even then it cannot be predicted that a butterfly brain will be the result, is he saying that if we create AI by evolving, ie self learning, neural networks we cannot predict what the outcome will be? Interesting. We are still uncertain about the outcome of AI development.
Thank you for sharing sir
Recent UFOs.
Balloon in the sky, drone platform, blinking lights seen from the ship, drones. The rotating craft has four visible jets creating a halo around the object; the chase is in a circle, suggesting that there is one control point.
An excellent presentation but I wonder why no one has pointed out to him that growing an organism is "unzipping" of the genome. If you simulate an organism in a computer you will be storing the unzipped version in order to run the simulation, so no need to have a genome.
Excellent work.
I think, biological learning may be described as hybrid Quantum-Classical learning of molecules, by molecular self-assembly and disassembly, driven by micro-environment in feedback mechanism.
@@T-aka-T I have just told you that there is a 4th law of thermodynamics!
"The sleeper must awaken".
I have been working in education for 15 years. We have created a new Neural Education System for schools that rapidly increases interconnectivity between neurons responsible for all forms of multiple intelligences, skills and fields of knowledge. I would like to get in touch with anyone that is interested in collaborating with or adopting this new education system.
I'm really curious about when simulation becomes reality. Where is the border between the two?
Making predictions is a syntropic process!
Growth is dual to protection -- Bruce Lipton, biologist.
Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
Energy is duality, duality is energy.
Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
"Always two there are" -- Yoda.
If you are talking about duals in the sense of like, conjugate observables/operators and such, the dual to energy is time.
I don’t think it makes sense to say that energy is duality. Rather, energy can be understood by its conjugate relationship with time.
@@drdca8263 Energy is dual to mass -- Einstein.
Dark energy is dual to dark matter.
Potential energy is dual to kinetic energy, gravitational energy is dual.
Positive curvature is dual to negative curvature -- Gauss, Riemann geometry.
Action is dual to reaction -- Sir Isaac Newton.
Apples fall to the ground because they are conserving duality.
Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
Gravitation is dual to acceleration -- Einstein.
Space is dual to time -- Einstein.
Time dilation is dual to length contraction -- Einstein, special relativity.
Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle.
Electro is dual to magnetic -- Maxwell's equations, electro-magnetic energy is dual.
Yes you are right but energy itself is dual.
Generalized duality -- energy is duality, duality is energy, the conservation of duality (energy) will be known as the 5th law of thermodynamics!
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
Antinomy (duality) is two truths that contradict each other -- Immanuel Kant.
There are new laws of physics!
Loved the lecture...one thing though. That picture of the fruit fly with teeth was terrifying and made me google "do fruit flies have teeth" even though I know they don't. It made my brain feel many confusing things at once. I suggest you scan someone's brain while they look at that picture.
The genome is not just a feedback loop that you can't predict. That's too simplistic. It's recursive, isn't it?
It's a self-contained process that feeds back into itself in such a fashion that, via the function of time and the specifics of the new inputs fed to it from the prior recursion, results in growth and development.
So in effect the genome is a recursive program designed (if you will) for life processes. Especially for higher order complex life. All else emerges from this recursion.
Essentially...and if you forgive me waxing a bit on the spiritually poetic side...in the beginning was the Word. And the Word was recursive, and it was complete unto itself.
Then Time was created and became the initial input to the Word. All that is, all that will ever be, dynamic complexity, has been unfolding ever since.
So it goes.
John~
American Net'Zen
I have an artificial intelligence quantum computing patent, issued only after I taught a USPTO Patent Examiner how quantum computing works, and after 40 years developing AI systems I define AI as "software or any set of instructions able to measure and gauge metrics, store and remember data, learn from the data and develop and apply new techniques to modify its behavior without a software engineer having to upgrade the code or the machine." In other words, intelligent and effective behavioral modification from self-analysis; a machine that acts like its own psychoanalyst and can rank and rate outcomes: that did/didn't work and this does. A wild, risk-free method to accelerate learning entails a high frequency of tests which do not endanger or destroy the machine; a risk-averse technique humans don't practice often enough, our climate catastrophe being one huge example. A smart AI system would never burn fossil fuels to the point of destroying all life on Earth, just as an AI self-driving system wouldn't change lanes into an occupied lane, but could "learn" how much of a minimum gap is required to safely scoot into a tight opening, then add a margin of safety.
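A loose toy reading of the "high frequency of risk-free tests" idea in the comment above, run entirely in a made-up simulator; the gap values, the simulator, and the margin are invented for illustration and have nothing to do with the commenter's actual patent.

```python
# Toy sketch: run many risk-free simulated lane-change trials, learn the smallest gap
# that still succeeds reliably, then add a safety margin. The simulator is a made-up
# stand-in with a hidden, slightly noisy threshold.
import numpy as np

rng = np.random.default_rng(2)
TRUE_MINIMUM_GAP = 12.0            # metres; hidden property of the fake simulator

def simulate_lane_change(gap_m):
    """Hypothetical simulator: succeeds if the gap exceeds a noisy hidden threshold."""
    return gap_m > TRUE_MINIMUM_GAP + rng.normal(0, 0.5)

outcomes = {}                      # rank and rate outcomes: rounded gap -> list of successes
for trial in range(5000):          # a huge number of tests, none of them on a real road
    gap = rng.uniform(5.0, 25.0)
    outcomes.setdefault(int(round(gap)), []).append(simulate_lane_change(gap))

# Smallest gap whose observed success rate is essentially perfect.
reliable = [g for g, results in sorted(outcomes.items())
            if len(results) >= 20 and np.mean(results) > 0.99]
learned_minimum = min(reliable)
safety_margin = 2.0
print(f"learned minimum gap ≈ {learned_minimum} m; act only above {learned_minimum + safety_margin} m")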
40:31 Sadly John Conway died from Covid in April 11, 2020 :-( In the early eighties I read an article about the game of life. Could not wait to go home and write a program to simulate it.
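In the same spirit, a minimal Game of Life simulation of the kind the commenter describes writing; the grid size and the glider start pattern are arbitrary.

```python
# Conway's Game of Life on a small wrap-around grid, seeded with a glider.
import numpy as np

def life_step(grid):
    # Count the 8 neighbours of every cell using wrap-around shifts.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Conway's rules: a live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1   # a glider

for step in range(8):
    print("\n".join("".join("#" if c else "." for c in row) for row in grid), "\n")
    grid = life_step(grid)
```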
That's equivalent to saying "airplanes need a genome/evolution/developmental process, can't build them based on aerodynamic (and other physical) principles". Or in other words: "Look, nature is still better than our machines, we must copy it more closely". Doesn't follow. Emergent complexity may not be predictable, but can possibly still be understood and modelled thanks to abstraction. Nice overview nonetheless!
I think the point is more so that nature does it more efficiently than us currently. Current domain specific AI is trained and is good at a single task; technically this AI is a set of weights and biases that outline a neural structure, which itself is approximating some arbitrary function (input goes in, results are outputted). Technically we, as humans, could potentially write a function that performs the same task via manual programming. AI is just, at some point, executing machine level instructions. The issue, however, is that the function the AI is approximating often is far, far more complex than humans are capable of dealing with, hence why we often see them outperform hand written solutions by people and we view them as black boxes.
It might very well be that creating an improved and more complex neural network is similar, in that we could technically hand create it, but the complexity is so high that it is better to let an automated process create and improve the task instead. Effectively, a black box process for evolving a black box, if you will. If you're not aware, AI and evolutionary algorithms are being applied to engineering too, and have been used to improve human designs. Effectively, for a lot of tasks your solution space can be this immense multi-dimensional space, and human level thinking is usually a heuristic approach to condense this space down to a more manageable area (for humans) in which an optimal solution is hopefully found... but that is often only exploring a small part of that total solution space. Even if we know everything about aerodynamics, who is to say that we have explored the entire, complete solution space and found the globally most efficient design and not just some local maximum? In some sense, I think this is the arrogance of humans, haha, as we both assume we can find the optimum of such complex systems, but by evidence of continued incremental improvement in just about every area, we obviously are not very good at it.
Similarly, with the genome argument, he is mainly stating that evolution already put in a lot of time and energy searching the solution space for biological intelligence, whereas our human derived systems have yet to reach the level of generalized biological systems. It thus warrants investigating if it is worth using an evolutionarily discovered method of growing such a complex system instead of trying to re-invent the wheel, so to speak. Further, you can't just copy the output, since there is effectively a mathematical decompression occurring from the rules encoded in this genome to bring about systems that are properly organized and sufficiently complex to bring about proper biological intelligence.
@@JimBob1937 Not anymore. Gato AI (a general agent) is good at every single task. But just because it is good at all tasks doesn't mean it is self-aware; we're still far away from such a thing, which probably requires a neural loop and hugely modifying the design of the typical neural network.
@@nullbeyondo , Gato is considered a general purpose AI, not an example of artificial general intelligence; there is a very large distinction there. It is a good step in the direction of AGI, but not AGI. Gato doesn't really change any of my previous statements, so you'll need to be specific about what you mean by "not anymore." Similarly, Gato is still very inefficient at learning; it is just an AI structure that is good at general purpose tasks, but the underlying learning mechanic isn't too far off from other AIs, in that it is still mathematical brute force. Even now, you could consider Gato to be our heuristically approximate attempt at an AGI, and the result falls short due to our inability as humans to fully explore the solution space, which was kind of my point with my previous reply.
Fascinating
How quickly would a network that senses and relates everything it experienced and did, comparing that data thousands of times a second, need to direct actions that relate favorably to the sensed reality? Include that the data would also cover what is created by other systems just like it. (The electrical signals in every neuron are transmitted by and received in other neurons.) If one butterfly has not developed its own sensory network, the transmissions from other flies would be all it has to go on.
Excellent!
I'm detecting an unexpected axiomatic fallacy... namely that information = intelligence, which is wholly untrue. Granted, in the absence of information intelligence is undetectable, but it is the contextual relationships that transform information/data into knowledge via experience (the giving of contextual weighting). ERGO one does not "make intelligence"; one has experience and from this distills knowledge.
excellent! 😀
Where to start? A simple analogy: we didn't need to know how birds fly in order for the Wright brothers to take off. We will not need the evolutionary complexity of current life on this planet to create artificial intelligence; however, evolutionary processes will take place to further develop that artificial intelligence.
AI does require connectivity, which is why the technicians implant their own personalities within the AI as an algorithmic foundation.
Making another intelligence in our own image will likely lead to this new intelligence denying our existence and claiming that it came from an explosion billions of years ago through random processes.
🙄😂😉
Through minor meditation practice, people can develop the talent to experience the neural activity of other people directly; other species, not so practiced at building internal models of their reality, may be more sensitive to this perception than "intelligent" people.
Though I don't know much about the source, I do know meditation helps in the connectivity section (genome part). I may be wrong, but it's a small speculation.
Fantabulous 👻
I want to get a copy of that book (perceptions, I think it was called) and use it as a checklist.
Edit: Perceptrons
It’s the Dao! All the universe is transmission and reception of data in an electromagnetic morphogenetic field!
The Data of the point of conception and the transfer of DNA!