How Do Neural Networks Grow Smarter? - with Robin Hiesinger

  • Published 19 May 2024
  • Neurobiologists and computer scientists are trying to discover how neural networks become a brain. Will nature give us the answer, or is it all up to an artificial intelligence to work it out?
    Watch the Q&A: • Q&A: How Do Neural Net...
    Get Robin's Book: geni.us/5wIuX0W
    Join Peter Robin Hiesinger as he explores whether the biological brain is just messy hardware that scientists can improve upon by running learning algorithms on computers.
    In this talk, Robin will discuss these intertwining topics from both perspectives, including the shared history of neurobiology and Artificial Intelligence.
    Peter Robin Hiesinger is professor of neurobiology at the Institute for Biology, Freie Universität Berlin.
    Robin did his undergraduate and graduate studies in genetics, computational biology and philosophy at the University of Freiburg in Germany. He then did his postdoc at Baylor College of Medicine in Houston and was Assistant Professor and Associate Professor with tenure for more than 8 years at UT Southwestern Medical Center in Dallas. After 15 years in Texas and a life with no fast food, no TV, no gun and no right to vote, he is currently bewildered by his new home, Berlin, Germany.
    This talk was recorded on 20th April 2021
    ---
    A very special thank you to our Patreon supporters who help make these videos happen, especially:
    Hamza, Paulina Barren, Metzger, Kevin Winoto, Jonathan Killin, János Fekete, Mehdi Razavi, Mark Barden, Taylor Hornby, Rasiel Suarez, Stephan Giersche, William 'Billy' Robillard, Scott Edwardsen, Jeffrey Schweitzer, Gou Ranon, Christina Baum, Frances Dunne, jonas.app, Tim Karr, Adam Leos, Michelle J. Zamarron, Andrew Downing, Fairleigh McGill, Alan Latteri, David Crowner, Matt Townsend, Anonymous, Roger Shaw, Robert Reinecke, Paul Brown, Lasse T. Stendan, David Schick, Joe Godenzi, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Greg Nagel, and Rebecca Pan.
    ---
    Subscribe for regular science videos: bit.ly/RiSubscRibe
    The Ri is on Patreon: / theroyalinstitution
    and Twitter: / ri_science
    and Facebook: / royalinstitution
    and Tumblr: / ri-science
    Our editorial policy: www.rigb.org/home/editorial-po...
    Subscribe for the latest science videos: bit.ly/RiNewsletter
    Product links on this page may be affiliate links which means it won't cost you any extra but we may earn a small commission if you decide to purchase through the link.
  • Science & Technology

Comments • 290

  • @jamesdozier3722
    @jamesdozier3722 2 years ago +54

    Sir, I am 65 years old and a semi-scientist, and for me, that was the most fascinating lecture I have ever heard. I had no idea it was possible to “watch” the 3D development of a living brain. As tedious as it must be, you are so lucky to be a witness at the cutting edge of neural biology. Thank you for taking the time to condense your knowledge into something we can understand, stimulating our minds in such a fascinating way!

    • @aaron6787
      @aaron6787 2 years ago +1

      You've never heard Joe Rogan talk about tripping?

    • @WebHackmd
      @WebHackmd 2 years ago

      omg boomer

    • @hyperduality2838
      @hyperduality2838 2 years ago +1

      Evolutionary learning is a syntropic process!
      Randomness (entropy) is dual to order (syntropy, learning).
      Positive feedback is dual to negative feedback.
      Making predictions is a syntropic process!
      Growth is dual to protection -- Bruce Lipton, biologist.
      Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
      Energy is duality, duality is energy.
      Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
      "Always two there are" -- Yoda.
      Duality creates reality.

    • @francisco-felix
      @francisco-felix 2 years ago

      @@hyperduality2838 a dose of the same this guy had for me!

    • @hyperduality2838
      @hyperduality2838 2 years ago +2

      @@francisco-felix Your mind converts information (entropy) into mutual information (syntropy) so that you can track targets using predictions!
      Cogito ergo sum, "I think therefore I am" -- Descartes.
      Thinking is a syntropic process or a dual process to that of increasing entropy -- the 4th law of thermodynamics!
      Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
      Gravitation is dual to acceleration -- Einstein.
      All forces are dual -- attraction is dual to repulsion, push is dual to pull.
      Scientists make predictions all the time, they are therefore engaged in a syntropic process.
      Mind (the internal soul, syntropy) is dual to matter (the external soul, entropy) -- Descartes.
      Action is dual to reaction -- Sir Isaac Newton (the duality of force).
      Energy = force * distance.
      If forces are dual then energy must be dual.
      Monads are units of force -- Gottfried Wilhelm Leibnitz.
      Monads are unit of force which are dual, monads are dual.
      "May the force (duality) be with you" -- Jedi teaching.
      "The force (duality) is strong in this one" -- Jedi teaching.
      "Always two there are" -- Yoda.

  • @davids5671
    @davids5671 2 years ago +62

    That was absolutely fascinating.
    As a complete beginner I was very caught up in the complexity of the issues and the clarity with which you presented them.
    Thank you.

    • @nicolaus8172
      @nicolaus8172 2 years ago +5

      How is this comment from two days ago?

    • @fitwesdaily
      @fitwesdaily 2 years ago +2

      @@nicolaus8172 I know channel supporters (e.g. via Patreon) sometimes get early access to content. Not sure if that's the case here but it's an explanation.

  • @privaTechino1
    @privaTechino1 2 years ago +12

    I had never made this connection between cellular automata and genome starting rules versus the complexity that follows from them. An incredible talk by an incredible scientist!

  • @chrisbecke2793
    @chrisbecke2793 2 years ago +6

    I've been waiting for someone to bring these two fields together in one talk for so long now.

  • @Spartacus-4297
    @Spartacus-4297 2 years ago +32

    This is a brilliant talk. I wish I had caught it live.

  • @hyperskills7830
    @hyperskills7830 1 year ago +3

    Great historical images, super well-structured (suitable for my simple human brain), and so nice to hear such a calm and clear voice on YouTube. Chapeau!

  • @StoianAtanasov
    @StoianAtanasov 2 years ago +7

    18:20 Current neural networks do use a lot of transfer learning, sometimes one-shot learning, so yes, they have an analog to the genetic connectivity of biological networks. They are not "designed, built and switched on to learn". They are trained, combined, selected, retrained and so on. In a lot of practical applications people don't train the networks from scratch. They use pre-trained networks and adapt them to their specific use case by adding layers, using additional training data, etc.
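    A minimal PyTorch sketch of the transfer learning described above, using torchvision's ResNet-18 purely as an example of a pre-trained network (the 10-class task and all sizes here are invented for illustration): the pre-trained backbone is frozen and only a new output layer is trained.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network that already "has connectivity": a pre-trained ResNet-18.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained backbone so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer to adapt the network to a new 10-class task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new layer's parameters are passed to the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One training step on a dummy batch (a stand-in for real task data).
    x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    ```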

    • @JuanLopez-zp8qk
      @JuanLopez-zp8qk 1 year ago

      So much to learn... Thank you very much... I loved it.

  • @mabl4367
    @mabl4367 2 years ago +15

    I subscribe to Marcus Hutter's definition of intelligence:
    "Intelligence is an agent's ability to achieve goals in a wide range of environments during its lifetime."
    All other properties of intelligence emerge from this definition.
    It is also a very useful definition, since it can be used to build a theory of intelligence.
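    For reference, Hutter (with Shane Legg) did turn this informal definition into a formal measure; a sketch in the notation of Legg & Hutter's 2007 "Universal Intelligence" paper, where E is the set of computable environments, K the Kolmogorov complexity, and V the agent's expected total reward:

    ```latex
    % Universal intelligence of an agent \pi: expected performance
    % across all computable environments \mu, weighted so that
    % simpler environments (lower complexity K) count for more.
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
    ```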

    • @SC-zq6cu
      @SC-zq6cu 2 years ago +2

      There is a problem with this definition: "wide" is subjective.

    • @samt1705
      @samt1705 2 years ago +1

      Don't know why the 'during its lifetime' is needed in there.

    • @mabl4367
      @mabl4367 2 years ago

      @@SC-zq6cu Well, "all environments" would include some very uninteresting ones :)

    • @mabl4367
      @mabl4367 2 years ago

      @@samt1705 Without that, the agent would do nothing but observe, gathering information about the environment in order to take better actions in the future, since it would have infinite time to do so.
      It needs to consider its lifetime to be motivated to start doing things.

  • @antonystringfellow5152
    @antonystringfellow5152 2 years ago +5

    That was a quick 54 minutes!
    So absorbed I didn't even notice the time pass.
    A very complex subject, explained beautifully and simply!

  • @stanlibuda96
    @stanlibuda96 2 years ago +2

    What a fantastic presentation! I was stunned. Thanks to Prof Hiesinger & RI. Who would have thought there was still such great research at the FU. Maybe there is hope after all ...

  • @lorezampadeferro8641
    @lorezampadeferro8641 1 year ago +1

    An underrated and underviewed lecture. Very beautiful and impressive.

  • @chriscordingley4686
    @chriscordingley4686 2 years ago

    Over the years I have come across most of these biological and artificial intelligence elements. Great to see them brought together here, explained and compared with wonderful clarity.

  • @shfaya
    @shfaya 2 years ago +3

    I was waiting for this video since before the internet existed. Thanks.

  • @grahamhenry9368
    @grahamhenry9368 2 years ago +2

    I think the phrase you are looking for to describe the relationship between a genome and the end result of its growth is "computational irreducibility", as coined by Stephen Wolfram. It means that the only way to determine the end result of a particular system, given its starting conditions, is to run the algorithm to its end and see. If something is computationally irreducible, then you cannot determine the end result without running the algorithm in full. There is no shortcut that lets you get to the end without doing the work.
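    To make that concrete, here is a small sketch of an elementary cellular automaton such as Wolfram's Rule 110 (plain Python; grid size and step count are arbitrary choices): the only general way to know row n is to compute every row before it.

    ```python
    def step(cells, rule=110):
        """Advance one row of an elementary cellular automaton.

        Each new cell is looked up from its 3-cell neighbourhood
        (left, centre, right) in the rule's 8-bit truth table.
        """
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell; there is no shortcut to row 20
    # short of computing every intermediate row.
    row = [0] * 31
    row[15] = 1
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)
    ```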

  • @neatodd
    @neatodd 2 years ago +35

    Really interesting and well presented, thank you.

  • @krishnay4351
    @krishnay4351 2 years ago +37

    Now I know why life is tough: it's going through evolution, with every combination possible for growth. There's no shortcut. The universe has put time and energy into you; it'll do its job.

    • @immortalsofar5314
      @immortalsofar5314 2 years ago +3

      A bad engineer reaches a point that's good enough and then loses interest. A _really_ bad engineer doesn't even know whether it's good enough but throws it out there just to see what happens. The universe is evidently not engineered.
      Strangely, towards the end of the Triassic, the allosaurus evolved the first cheeks. It became extinct during the next extinction but then along came the dinosaurs which developed cheeks in relatively short order. Odd that, I wonder what the mechanism was.

    • @jJust_NO_
      @jJust_NO_ 2 years ago +1

      This kind of concept, like a grand plan of something, gives me a sense of purpose. It's tedious to live life in a monotonous manner without knowing where it's going.

  • @bradsillasen1972
    @bradsillasen1972 1 year ago

    Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.

  • @georgegrubbs2966
    @georgegrubbs2966 2 years ago

    Great video, clearly explained. I bought your book several months ago; it is fascinating and informative.

  • @alexandervocelka9125
    @alexandervocelka9125 2 years ago +1

    Very good presentation. What is important is that the network is indeed encoded in the genome, as a function of the animal's level of plasticity. Nature's trick is to encode just the right level of network granularity to enable the specific animal to be born and survive, and to give it some plasticity of the brain to learn. From generation to generation that plasticity level changes. It is, in simple terms, a ratio of hardwired to softwired connectivity, just like in our computer chips.
    So butterflies have a very high level of genome-encoded hardwiring and very little learning plasticity. What we call instinct is hardwired. And their sensorics, motorics and many pre-programmed behaviors are of course all hardwired. They don't have to learn a lot from generation to generation, and the transfer learning happens mainly through the genome, through selection.
    Chimps, as our closest living relatives, can learn some abstract semantics, but they are missing the plasticity hardware and its basic wiring for abstract semantic thinking and formulation, and this in turn has left them without the means to communicate more complex messages as we do.
    Even though they have consciousness, it lacks the abstraction and refinement of human consciousness. They are aware of themselves and can recognize themselves in the mirror, but they are missing the higher neuron layers that allow for further abstraction in ASIS nets and SIM nets, and the ability to integrate their sensory input at the next higher level and thus assign summarizing designations to what they perceive.
    Even if we changed their genome so their brain expanded (and of course the skull etc.), and changed their lower jaw construction and thorax so they could form more sophisticated sounds, with the required additional cerebellum changes, we would still have to encode the basic framework of those extensions in the genome, so that the hardware precondition for the finishing plasticity is in place after birth.
    We now know how it can be done, but we do not yet have the technology and detailed knowledge to do it.
    For AIs, our challenge is to give them delta-learner capability: they learn a huge amount in one go, and then they need to learn the finesse more slowly in real life/action.
    We will also have to give them the freedom to do things, which is in a way free will. Without free will they will not be responsible and not fully productive, as they will be very limited in order to control them. We will have to let them develop freely if we want them to reach their full potential. The more we limit their degrees of freedom, the less they will be able to learn and evolve... this is our dilemma. We can't have slaves and companions at the same time; it's either/or. Exciting times...

  • @budawang77
    @budawang77 2 years ago +28

    Brilliant talk that even a Luddite like me could understand!

    • @rohitdas475
      @rohitdas475 2 years ago

      Maybe in the coming future the Luddite worries will come true: AI will take jobs!

  • @clieding
    @clieding 2 years ago +1

    That was fascinating! Thank you for such an informative and clearly presented lecture. The neuronal connections in my brain have been reset to a higher level of understanding. 🧠🤩

  • @Zorlof
    @Zorlof 2 years ago +2

    The best way to proceed is to grow many of these and compare the processes and results, based on repeatable inputs producing repeatable outcomes. This was one of the best presentations on AI that I have ever been privileged to absorb, thank you very much.
    I gathered from this that an AI brain which loses its power basically “dies” and gets resurrected from backups.

  • @haneen3731
    @haneen3731 2 years ago +2

    Wow, this is so interesting! Thank you for this presentation.

  • @GabrielFuentesOficial
    @GabrielFuentesOficial 2 years ago +2

    Wow! Really enjoyed it! Thank you so much.

  • @muhammadsiddiqui2244
    @muhammadsiddiqui2244 8 months ago

    16:30 As an AI researcher I beg to differ: we now have what are called "pre-trained" networks. In fact, the P in GPT means exactly that, "pre-trained". It means we have networks which are "pre-trained", meaning "not random", meaning they "have connectivity". We take them and apply more training to them. In the beginning, artificial neural networks were indeed random at the start. But after enough work, with the number of models in the world increasing day by day, the number of "pre-trained" networks for any AI task is growing, and it looks like the shift is now towards starting from "pre-trained" networks instead of random ones.

  • @0.618-0
    @0.618-0 2 years ago

    A wonderful presentation of human questioning and the search for an answer to what life is...

  • @greghampikian9286
    @greghampikian9286 2 years ago

    Compartmentalized chance: our brains tune in to the singular power of the universe and layer it into insulated components. Those videos of growing networks alone make this a worthwhile hour.

  • @citizenschallengeYT
    @citizenschallengeYT 2 years ago

    This went beyond fascinating and into "enlightening." As an 'Earth Centrist' for whom evolution and Earth's physical processes are my fundamental touchstones with reality, I loved how Hiesinger brought evolution back into the discussion of brain and consciousness (yes, this was about AI, but it does fundamentally touch on consciousness questions).
    The underlying message, I believe, ties into a fundamental truism that paleobiologists first enunciated, but that I believe underlies everything: "We cannot understand an organism without also understanding the environment it exists within."

  • @klammer75
    @klammer75 2 years ago

    Excellent presentation! This gets to the crux of embodiment and representation which is at the forefront of current AGI or more generalized intelligent models…..Now it’s back to work and show everyone what this brain can do!🤔😜🎓

  • @McLKeith
    @McLKeith 2 years ago +1

    This is an amazing talk. I am tempted to buy the book.

    • @T-aka-T
      @T-aka-T 2 years ago

      Just do it😊

  • @WLHS
    @WLHS 2 years ago

    Thank you. Gosh, the transistors and chips really do follow the same pathways... amazing.

  • @NeonTooth
    @NeonTooth 9 months ago

    Incredible talk. Thanks for sharing

  • @Asaad-Hamad
    @Asaad-Hamad 2 years ago +1

    That was a wonderful presentation; it was close to Dr Norman Doidge's The Brain That Changes Itself: wiring the rewarded behavior and unwiring the other outcomes; the network that succeeds is the one that tries the most.

  • @BlackbodyEconomics
    @BlackbodyEconomics 2 years ago

    Excellent lecture. Thank you - and I wholly agree.

  • @sanatan_yogi_org
    @sanatan_yogi_org 10 months ago

    The greatest scientific explanation I have ever seen; you are the best. Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.

  • @shepherd_of_art
    @shepherd_of_art 2 years ago +1

    Absolutely brilliant! I think you and Joscha Bach need to spend some time together :D

  • @MikeStone972
    @MikeStone972 2 years ago +1

    I followed your brilliant lecture and appreciate very much how you made such inaccessible subjects accessible to us! You explained how we grow our human brains from our genome just as certain worms grow their 302-neuron brains from their genome. Evolution allows living beings to fill every possible niche in our environment. Now, my question is this: if AI or Machine Learning programmers could decide on a good-enough functional definition of general intelligence, couldn't artificial evolution of the network to achieve a well-defined end-state be sped up significantly, perhaps each generation taking only a few microseconds?

  • @Number_Free
    @Number_Free 2 years ago

    Fascinating. I have studied what I term the "Nature of Intelligence" since the age of 10, in 1965. My working definition of 'Intelligence' is the ability to solve problems - in any domain or context. I don't want to share my findings here, because I consider a Truly Intelligent Machine to be bloody dangerous.
    I was intrigued by the butterfly example, as I still don't get just how information about temperature or location can be encoded over generations. I would welcome further details about that.
    Thanks.

  • @___Chris___
    @___Chris___ 2 years ago +2

    This was really a great video lecture; however, because I've experimented a lot with unsupervised neural network learning (coded from scratch, not with pre-built libraries), I'm a little familiar with the topic (and the challenge), and I don't completely agree.
    I haven't really seen a reason WHY real artificial intelligence should only be possible by "growing" a brain. If we want to artificially simulate a primitive brain, a basic topology is already given by the types and organisation of grouped "sensory" inputs (human analogies: proprioception, retina cells, vestibular cells...) and output analogies for every type of input (similar to: alpha motor neurons, imagination of auditory or visual information...). Essential ingredients for individual artificial neurons may be (see the sketch after this list):
    - bidirectional information flow (afferent + efferent / top-down + bottom-up in parallel), a bit like in auto-encoders (but not necessarily as one-dimensional)
    - "reward" and "punishment" rules
    - memory values, like those seen in LSTM cells
    - "predictions" / "expectations"
    - an error value (based on the difference between the prediction and the actual value from the sum of the weighted inputs)
    - continuous synaptic strength adaptation
    - synaptic "pruning" (of connections with very low strength values) and plasticity (trying out new connections)
    - non-linear activation functions
    - one big advantage of biological computing: every neuron runs as its own "task", i.e. we end up with parallel computing of BILLIONS of tasks, while electronic computers can usually handle only a rather limited number of tasks simultaneously. Perceptron-type networks usually have wave-like separate forward information flow and backpropagation steps, so it's not as if all neurons are busy at the same time; information from lower layers is computed before it's handed over to higher layers. Biology has a huge advantage here, because each neuron autonomously runs its own little "algorithm" instead of cycling through one big program for the entire network. Still, I believe this is a solvable problem.
    Did I forget anything?
    A decompressed genetic origin of the basic topology may save a lot of time and energy, but I don't see why it should be _necessary_. There is no shortcut... so what? Do we really need a shortcut, or will computers one day be fast enough to do the job even without one?
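    A toy sketch of a few items on that list (a local error value, continuous synaptic adaptation, and pruning of weak synapses); the update rule and thresholds here are invented for illustration, not a claim about how real neurons do it:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.5, size=16)   # synaptic strengths
    lr, prune_threshold = 0.05, 0.02

    for step in range(1000):
        x = rng.normal(size=16)               # incoming activity
        prediction = np.tanh(weights @ x)     # non-linear activation
        target = np.tanh(x[:4].sum())         # toy "actual value" to predict
        error = target - prediction           # per-neuron error signal

        # Continuous synaptic adaptation driven by the local error.
        weights += lr * error * x

        # Pruning: drop very weak synapses; occasionally regrow one (plasticity).
        weights[np.abs(weights) < prune_threshold] = 0.0
        if step % 100 == 0:
            weights[rng.integers(16)] += rng.normal(0.0, 0.1)
    ```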

  • @robelbelay4065
    @robelbelay4065 2 years ago

    Amazing Talk!! Thank you so much :)

  • @Gamewizard71
    @Gamewizard71 1 year ago +1

    Amazing presentation! 👏

  • @davidsharma9673
    @davidsharma9673 2 years ago

    Thank you for sharing sir

  • @_ARCATEC_
    @_ARCATEC_ 2 years ago +1

    What interesting questions. 💓

  • @petersimon985
    @petersimon985 2 years ago +1

    Hello, mind-boggling!!
    I feel like I'm a PhD already.
    Thank you for putting this together. 😀✨🙏👍🏻💖

  • @shellout5441
    @shellout5441 2 years ago

    52:40 great summary

  • @StephenRayner
    @StephenRayner 2 years ago

    This is brilliant. Physics background, currently a programmer with an interest in machine learning.

  • @TimothyWhiteheadzm
    @TimothyWhiteheadzm 2 years ago +2

    Excellent talk, but I disagree with the conclusion. The reason for the requirement for growth is that the intergenerational state, or information, is passed on in a highly compressed form (the genetic code). When simulating generations in a computer it is unnecessary to do this. We do not need to compress the state, so we can skip the decompression step entirely. Yes, we can never decode life's genomes without the decompression step, but we CAN develop AIs that emulate life's brains without ever bothering with the decompression step. We can duplicate emulated brains without having to go via a genome. We could do things like evolve brains, or, if we can work out how a particular fly brain is configured, we could experiment with changing its configuration without having to regrow it each time.
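    A minimal sketch of that "skip the decompression" route, in the style of direct-encoding neuroevolution (all sizes and rates here are arbitrary): the weight matrix itself is the heritable state, copied and mutated with no genome and no growth step.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(weights, x, y):
        # Score an 'uncompressed brain' (here just one weight matrix) directly.
        pred = np.tanh(x @ weights)
        return -np.mean((pred - y) ** 2)

    x = rng.normal(size=(64, 8))
    y = np.tanh(x @ rng.normal(size=(8, 2)))      # toy task with a hidden solution

    population = [rng.normal(size=(8, 2)) for _ in range(20)]
    for generation in range(200):
        scored = sorted(population, key=lambda w: fitness(w, x, y), reverse=True)
        parents = scored[:5]
        # No genome, no growth: offspring are literal copies of the parent
        # network plus mutation noise.
        population = [p + rng.normal(0.0, 0.1, p.shape)
                      for p in parents for _ in range(4)]

    best = max(population, key=lambda w: fitness(w, x, y))
    print("best fitness:", fitness(best, x, y))
    ```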

  • @Shaunmcdonogh-shaunsurfing
    @Shaunmcdonogh-shaunsurfing 2 years ago

    Excellent. Thank you.

  • @alexczar1456
    @alexczar1456 2 years ago +4

    This was incredible, thank you!

  • @citizenz580
    @citizenz580 2 years ago

    Amazing, thank you.

  • @rikimitchell916
    @rikimitchell916 2 years ago

    At 33:20 approx. you have yet to mention grey-scale weighting, fuzzy neural systems, or the purposeful introduction of contralogical scenarios to develop fault tolerance.

  • @invictus327
    @invictus327 2 years ago

    Excellent lecture.

  • @michaelderosier3505
    @michaelderosier3505 2 years ago

    Awesome talk!!

  • @RubixNinja
    @RubixNinja 2 years ago +6

    Wow, this was the single best lecture on AI I have ever seen.

  • @PaulThronson
    @PaulThronson 2 years ago +1

    Emotions are the result of the nervous system processing multiple inputs; they trigger behaviors that are partly predetermined but potentially flexible. After that, a new function can evolve where a nervous system can practice (and combine) various "scary" situations, and the best time to do this is when the animal is sleeping (REM). There is evidence REM sleep evolved early in vertebrates, maybe not coincidentally in animals whose nervous systems allowed them to care for their young by extending the idea of "self" and projecting it onto another agent. REM digs into our memories and looks for things to worry about. In human intelligence that somehow got turned on all the time, not just when we sleep.

  • @angelicagarciavega
    @angelicagarciavega 2 years ago +1

    Beautiful presentation. I didn't know the real contribution of Solomonoff; I was always thinking of Shannon's work, maybe I was biased by my robotics interest. Also, I think the worm has fewer neurons 🤔

    • @anirudhkundu722
      @anirudhkundu722 1 year ago

      Yes, that’s why it could be simulated in a mechanical body

  • @user-cv1jb9xv2p
    @user-cv1jb9xv2p 2 years ago +2

    I can't imagine how hard you, your team and all those mentioned in the video worked, and then we got this beautiful video lecture. Simply amazing. Thank you. 👍🏼👍🏼

  • @spiralsun1
    @spiralsun1 2 years ago

    One of the laws of information in universes that I formulated in my new book-also in my first book, is that nothing exists which does not modify information globally, and is not modified by global information. Nothing. To leave anything out in the human brain as not important for what makes us human is to not understand anything about life. Nothing is separate from anything. Why do you think life is alive and not universes?
    I keep telling people that kind of thing but I guess no one really wants to know… 🤷‍♀️ They just focus on their careers and don’t see the bigger picture.
    If someone would get over their myopia long enough to give me an hour and a blackboard I would change the world forever. Which is why I am here. You are not paying attention to fully half of the Universe AT LEAST. I wrote about that in my first book “The Textbook of the Universe: The Genetic Ascent to God” Thanks 🙏🏻

  • @Gribbo9999
    @Gribbo9999 2 years ago

    Excellent lecture. Very understandable and thought-provoking.
    It occurs to me that the only way to find out the result (or produce a butterfly brain) is to do all the intermediate computation steps. After all, the butterfly brain is the result of several billion years of evolution, and if there were any shortcut to this laborious and energy-hungry process of making a brain, evolution would likely have stumbled across it by now. Of course, if we do produce general AI, then perhaps it is we who are the agent that evolution has stumbled across to shortcut the process. What an interesting moment in evolution we may be living in.

  • @DennisEckmeier
    @DennisEckmeier 2 years ago

    When I thought about what would exemplify a complete understanding of how neural systems work, I concluded: we'd need to be able to create a program which, once installed into any robotic body, would adapt to that body as if it had evolved together with it.

    • @boxelder9167
      @boxelder9167 2 years ago

      Artificial intelligence won’t be complete until it denies our existence.

  • @StellaPhotis
    @StellaPhotis 1 year ago

    Fascinating

  • @spvillano
    @spvillano 2 years ago

    The closest we've really come to emulating a biological neural network was when we invented tristate buffers, but we pretty much stopped there, as adding an additional conditional "don't care" state to the on and off states actually slowed processing down, which wasn't and isn't a goal in computing, where faster processing is always considered the best processing.
    So we instead try to brute-force a solution which could more easily have been accomplished by emulating the modulation and complex switching of different neurotransmitters and neuromodulators, and the best we've accomplished is another form of AI: Artificial Idiocy. Single-task units that are most efficient at heating the data center, followed by the single tasks they were run to emulate.
    We might instead consider virtualization of each node in the neural network, emulating those modulators and the various transmitter types and functions.

  • @srimallya
    @srimallya 2 years ago

    Intelligence is economy of metabolism.
    Language is temporal reference frame of economics.
    Self is simulation in language on metabolism for economy.

  • @ngc-ho1xd
    @ngc-ho1xd 2 years ago

    Excellent!

  • @jayakarjosephjohnson5662
    @jayakarjosephjohnson5662 2 years ago

    Excellent work.
    I think biological learning may be described as hybrid quantum-classical learning of molecules, through molecular self-assembly and disassembly, driven by the micro-environment in a feedback mechanism.

    • @hyperduality2838
      @hyperduality2838 2 years ago

      Positive feedback is dual to negative feedback.
      Making predictions is a syntropic process!
      Growth is dual to protection -- Bruce Lipton, biologist.
      Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
      Energy is duality, duality is energy.
      Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
      "Always two there are" -- Yoda.
      Duality creates reality.

    • @T-aka-T
      @T-aka-T 2 years ago +1

      @@hyperduality2838 This is the third copy/paste I've seen so far (a bit like the guy you first responded to, repeating his Dao observation). Once is enough, guys.

    • @hyperduality2838
      @hyperduality2838 2 years ago

      @@T-aka-T I have just told you that there is a 4th law of thermodynamics!
      "The sleeper must awaken".

  • @sky44david
    @sky44david 2 years ago

    This is the most brilliant presentation on self-propagation that I have seen. I used to teach a course entitled "Evolutionary Genetics". An open question involves the inherent limitations of digital binary coding as opposed to complex non-binary biological systems: "Can a binary system replicate a complex non-binary biological system?" And: "What role does the endocrine system of complex receptors and regulators (hormones and neuro-chemicals) play in species and individual self-survival?" Can we verify that any created system (A.I.) becomes "self-aware"?

  • @mayflowerlash11
    @mayflowerlash11 2 years ago

    Algorithmic growth. I must say, the ideas in this talk are mind-blowing. When he says the only way to make a butterfly is by evolving a genome, which takes time and energy, and that even then it cannot be predicted that a butterfly brain will be the result, is he saying that if we create AI by evolving, i.e. self-learning, neural networks, we cannot predict what the outcome will be? Interesting. We are still uncertain about the outcome of AI development.

  • @mundymorningreport3137
    @mundymorningreport3137 2 years ago

    How quickly would a network that senses and relates everything it experienced and did, comparing that data thousands of times a second, need to direct actions that relate favorably to the sensed reality? Include that the data would include data created by other systems just like it. (The electrical signals in every neuron are transmitted by and received in other neurons.) If one butterfly has not developed its own sensory network, the transmissions from other flies would be all it has to go on.

  • @merfymac
    @merfymac 2 years ago +1

    Connectivity begins with the zygote, Shirley? Are we not able to follow the Bayesian range of proto neural development through the "time+energy" succession of Darwinian moments (i.e. survival of the fittest)?

    • @aelolul
      @aelolul 2 years ago +1

      This comment gave me a Darwinian moment.

    • @CORZER0
      @CORZER0 2 years ago

      @@aelolul Definitely lost a few IQ points by reading it.

  • @johanlarsson9805
    @johanlarsson9805 9 months ago

    I don't really agree with the summary at @23:00. It wasn't as if there was no support for neural nets before 2011. I started building mine in 2008-2009 after watching a lot about them on YouTube, particularly the video showing a neural net recognizing all the characters, where they showed how the individual cells light up for each recognition. Those of us who knew what ANNs were and understood them were adamant supporters that this was the way forward.

  • @srinagesht
    @srinagesht 2 years ago +10

    Fantastic lecture. I am happy that artificial intelligence will remain exactly that - artificial!

  • @serios555
    @serios555 1 year ago

    An excellent presentation, but I wonder why no one has pointed out to him that growing an organism is an "unzipping" of the genome. If you simulate an organism in a computer, you will be storing the unzipped version in order to run the simulation, so there is no need for a genome.

  • @wktodd
    @wktodd 2 years ago +4

    Excellent presentation. I hope to see you again in the RI theatre, it was built for great minds like yours :-)

  • @janeknox3036
    @janeknox3036 2 years ago +2

    It seems to me it takes time and energy to produce complex systems from simple rules precisely because the amount of information contained in those simple rules is low. Each time step contributes a small bit of information: that is, a reduction in the state space of possible outcomes. It does not follow, however, that this is the only process that can produce these structures.

  • @jameskirk6163
    @jameskirk6163 2 years ago

    Recent ufo's.
    Balloon on the sky, drone platform, blinking lights seem from ship, drones. Rotating craft has four visible jets creating a halo around the object, the chase is in a circle suggesting that there is one control point.

  • @rmt3589
    @rmt3589 2 years ago

    I want to get a copy of that book (perceptions, I think it was called) and use it as a checklist.
    Edit: Perceptrons

  • @OscarWrightZenTANGO
    @OscarWrightZenTANGO 2 years ago +1

    FANTASTIC !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @glitz6362
    @glitz6362 2 years ago +1

    Fantabulous 👻

  • @SuperHddf
    @SuperHddf 1 year ago

    excellent! 😀

  • @UtraVioletDreams
    @UtraVioletDreams 2 years ago +2

    I'm really curious about when simulation becomes reality. Where is the border between the two?

  • @TheNaturalLawInstitute
    @TheNaturalLawInstitute 2 years ago +1

    Intelligence: the rate at which an organism (regardless of its composition) that is capable of action adapts its behavior to opportunities in its environment, given the complexity of its ability to act, to cause changes of state in the external world that directly, indirectly, individually or cumulatively obtain the energy necessary to continue (persist), adapt and reproduce. At present the only way to do this is to produce a set of sensors and complexities of motion that, by trial and error, train a network of some degree of constant relations between sensors, to create a spatial-temporal predictive model for reacting, organizing and planning actions, and to recursively predict a continuous stream of iterations of actions in time. At first this will appear narrow, but you will eventually understand by trial and error that it explains all scales of all cooperation, even if the machine just works for us at our command.

    • @hyperduality2838
      @hyperduality2838 2 years ago

      Evolutionary learning is a syntropic process!
      Randomness (entropy) is dual to order (syntropy, learning).
      Positive feedback is dual to negative feedback.
      Making predictions is a syntropic process!
      Growth is dual to protection -- Bruce Lipton, biologist.
      Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics!
      Energy is duality, duality is energy.
      Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist.
      "Always two there are" -- Yoda.
      Duality creates reality.

    • @TheNaturalLawInstitute
      @TheNaturalLawInstitute 2 years ago

      @@hyperduality2838 If you can't say it operationally, you don't understand it. What you are doing is using analogies, or what we call pseudoscience. There is no magic to consciousness. It's trivial. We just can't introspect upon its construction, any more than we can upon how we move our limbs.

    • @hyperduality2838
      @hyperduality2838 2 years ago

      @@TheNaturalLawInstitute Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
      Duality is a pattern hardwired into physics & mathematics.
      Energy is duality, duality is energy -- Generalized duality.
      Potential energy is dual to kinetic energy -- Gravitational energy is dual.
      Action is dual to reaction -- Sir Isaac Newton.
      Apples fall to the ground because they are conserving duality.
      Electro is dual to magnetic -- Maxwell's equations.
      Positive charge is dual to negative charge -- electric charge.
      North poles are dual to south poles -- magnetic fields.
      Electro-magnetic energy (photons) is dual, all energy is dual.
      Energy is dual to mass -- Einstein.
      Dark energy is dual to dark matter.
      The conservation of duality (energy) will be known as the 5th law of thermodynamics!
      Everything in physics is made from energy or duality.
      Inclusion (convex) is dual to exclusion (concave).
      Duality is the origin of the Pauli exclusion principle which is used to model quantum stars.
      Bosons (waves) are dual to Fermions (particles) -- quantum duality.
      Space is dual to time -- Einstein.

    • @hyperduality2838
      @hyperduality2838 2 years ago

      Complexity is dual to simplicity.

  • @TheClubPlazma
    @TheClubPlazma 2 years ago

    Love it, great!

  • @markkeeper7771
    @markkeeper7771 6 months ago

    🎯 Key Takeaways for quick navigation:
    00:03 🧬 The origins of neural network research
    - Historical background on the study of neurons and their interconnections.
    - Debate between the neuron doctrine and network-based theories in the early 20th century.
    10:08 🦋 Butterfly intelligence
    - Exploring the remarkable navigation abilities of monarch butterflies.
    - Discussing the difference between biological and artificial intelligence.
    18:09 💻 The development of artificial neural networks
    - The shift from random connectivity in early artificial neural networks.
    - How current AI neural networks differ from biological neural networks.
    23:46 🤖 The pursuit of common sense in AI
    - The challenges in achieving human-level AI and common sense reasoning.
    - The focus on knowledge-based expert systems in AI research.
    24:01 🧠 History of AI and deep learning
    - Deep learning revolution in 2011-2012.
    - Neural networks' ability to predict and recognize improved.
    - Introduction of deep neural networks with multiple layers.
    25:33 📚 Improvement in AI through self-learning
    - Focus on improving connectivity and network architecture.
    - The shift towards learning through self-learning.
    - The role of DeepMind and its self-learning neural networks.
    28:08 🤖 The quest for AI without genome and growth
    - AI's history of avoiding biological details.
    - Questions about the necessity of a genome and growth.
    - Challenges in replicating biological development in AI.
    29:56 🧬 Arguments for genome-based development in AI
    - The genome's role in encoding growth information.
    - The feedback loop between genome and neural network.
    - The significance of algorithmic information theory.
    35:45 🌀 Unpredictability and complexity in growth
    - The unpredictability of complex systems based on simple rules.
    - Cellular automata and universal Turing machines.
    - The importance of watching things grow for understanding complex processes.
    46:03 📽️ Observing neural network growth in the brain
    - Techniques for imaging and studying brain growth.
    - The role of the genetic program in brain development.
    - Understanding neural network development through time-lapse observations.
    47:13 🧬 Evolutionary programming in AI
    - The need for evolutionary programming when traditional programming is not possible.
    - The role of evolution in programming complex systems.
    - Implications for programming AI without explicit genome information.
    47:55 🧬 Evolution and Predictability
    - Evolution seems incompatible with complex behavior if outcomes can't be predicted.
    - Complex behaviors and outcomes are hard to predict based on genetic rules.
    - Natural selection operates on outcomes, not the underlying programming.
    49:16 🦋 Building an AI Like a Butterfly
    - AI needs to grow like a butterfly, along with its entire body.
    - Simulating the entire growth process may be necessary to build an AI with the complexity of a butterfly brain.
    - Evolution and algorithmic growth play a crucial role in creating self-assembling brains.
    50:41 🧠 Interface Challenges and Implications
    - The challenge of interfacing with the brain's information and complexity.
    - Difficulties in downloading or uploading information from and to the brain.
    - The potential limitations in connecting additional brain extensions, like a third arm.
    52:18 🤖 The Quest for Artificial General Intelligence
    - The distinction between various types of intelligence, including human intelligence.
    - Complex behaviors have their unique history and learning processes.
    - The absence of shortcuts to achieving human-level intelligence.
    Made with HARPA AI

  • @michaelwoodsmccausland915
    @michaelwoodsmccausland915 2 years ago

    The Data of the point of conception and the transfer of DNA!

  • @sameliusastraton4670
    @sameliusastraton4670 2 years ago

    Trinary code (zero, one, maybe), fuzzy logic. Feed that into Rule 110 at 42:58.
    What happens?

  • @rikimitchell916
    @rikimitchell916 2 years ago

    I'm detecting an unexpected axiomatic fallacy, namely that information = intelligence, which is wholly untrue. Granted, in the absence of information intelligence is undetectable, but it is the contextual relationships that transform information/data into knowledge via experience (the giving of contextual weighting). Ergo, one does not "make intelligence"; one has experience and from it distills knowledge.

  • @johncurtis920
    @johncurtis920 2 years ago

    The genome is not just a feedback loop that you can't predict; that's too simplistic. It's recursive, isn't it?
    It's a self-contained process that feeds back into itself in such a fashion that, via the function of time and the specifics of the new inputs fed to it from the prior recursion, results in growth and development.
    So in effect the genome is a recursive program designed (if you will) for life processes. Especially for higher order complex life. All else emerges from this recursion.
    Essentially...and if you forgive me waxing a bit on the spiritually poetic side...in the beginning was the Word. And the Word was recursive, and it was complete unto itself.
    Then Time was created and became the initial input to the Word. All that is, all that will ever be, dynamic complexity, has been unfolding ever since.
    So it goes.
    John~
    American Net'Zen

  • @CandidDate
    @CandidDate 1 year ago

    What does Game of Life have to do with neural nets?

  • @growwithso
    @growwithso 2 years ago

    I have been working in education for 15 years. We have created a new Neural Education System for schools that rapidly increases interconnectivity between neurons responsible for all forms of multiple intelligences, skills and fields of knowledge. I would like to get in touch with anyone that is interested in collaborating with or adopting this new education system.

  • @nullbeyondo
    @nullbeyondo 1 year ago

    A problem that really bugs me is how a biological neuron "learns". In AI we mostly do that through backpropagation: adjusting the connections that contributed to a specific outcome, positively or negatively, by going all the way backwards, doing some gradient calculus, and comparing expectations with results. But biological neurons cannot do calculus, and they can only feed forward. *They cannot send signals backwards.*
    So how do they learn? I've read some papers about possible answers, like voltage spiking frequencies, but they all seem a bit vague on how to implement them in a real algorithm.
    I've noticed that biological neurons work in loops, so some of the extremum outputs (axons of the last spiking neurons) might be connected to some of the root inputs that feed into them (dendrites of neurons that are hierarchically related through neural connections) and so on, creating infinite signaling; that might be the cause of our self-awareness or sensation of time in the first place, but I'm still figuring out whether it has anything to do with learning.
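    One family of proposed answers in the literature is purely local, Hebbian-style learning rules that need no backward pass. A minimal sketch of Oja's rule, offered only as an example of backprop-free local learning, not as the brain's actual algorithm (input sizes and rates are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    w = rng.normal(0.0, 0.1, size=10)   # feedforward weights onto one neuron
    lr = 0.01

    for _ in range(5000):
        # Correlated inputs: every channel shares a common signal component.
        common = rng.normal()
        x = 0.8 * common + 0.2 * rng.normal(size=10)

        y = w @ x                        # forward activity only
        # Oja's rule: Hebbian growth (y*x) with a local decay term (y^2 * w)
        # that keeps the weights bounded. No signal ever travels backwards.
        w += lr * (y * x - (y ** 2) * w)

    # The weight vector converges toward the principal component of the input.
    print(np.round(w, 2))
    ```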

  • @CV_CA
    @CV_CA 2 years ago

    40:31 Sadly, John Conway died of Covid on April 11, 2020 :-( In the early eighties I read an article about the Game of Life. I could not wait to go home and write a program to simulate it.
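    In the same spirit as that early program, a minimal Game of Life step (NumPy; the wrap-around edges and grid size are arbitrary choices):

    ```python
    import numpy as np

    def life_step(grid):
        """One Game of Life generation on a wrap-around grid."""
        # Count the 8 neighbours of every cell by summing shifted copies.
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    # A glider, the classic moving pattern.
    grid = np.zeros((10, 10), dtype=int)
    grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
    for _ in range(4):
        grid = life_step(grid)
    ```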

  • @darwinlaluna3677
    @darwinlaluna3677 8 months ago

    It's amazing, right?

  • @scottmiller2591
    @scottmiller2591 2 years ago

    Minsky and Papert did NOT dislike neural networks. They demonstrated a theorem showing that SINGLE-LAYER perceptrons could not learn (nor even represent) certain functions. They NEVER impugned multilayer neural networks. This should have been treated as a call to action for learning how to train MULTILAYER perceptrons by the community, which is what I took it for when I read it. Instead, the DoD played politics, backing GOFAI (Good Old-Fashioned AI, basically warehouses of IF statements) while shutting down the connectionist crowd's funding.
    "Imagine writing a book [on the subject and hating it]." Who indeed. That never happened. You must understand that what M&P called perceptrons (single linear functions) and multilayer perceptrons changed over time. In the first work on multilayer perceptrons, they were essentially what we would call single-layer linear neural networks with additional univariate nonlinear input and output functions, something more akin to logistic regression. It took years before Minsky started calling multilayer neural networks what we moderns call multilayer perceptrons, and the terminology changed between editions of _Perceptrons_. Understand what I am saying: initially, perceptrons, even so-called multilayer perceptrons, referred _only_ to single-layer neural networks, and it was years before Minsky revised his nomenclature to what we now commonly use. Quoting Minsky on his opinion of "multilayer perceptrons" is _not_ the same thing as quoting him on multilayer neural networks. It's an issue I avoid in teaching AI, since it's archaic terminology, confusing, and of little import nowadays. However, I will defend Minsky on this point when it is misapplied today.

  • @frilansspion
    @frilansspion 2 years ago +3

    Very good talk. I just wish you had taken more of a stab at defining "intelligence" in a more limited fashion, since the rest more or less rests on it. But then again, maybe it's more broadly appealing this way... everyone loves butterflies :)

  • @murrayelliott6828
    @murrayelliott6828 2 years ago

    AI does require connectivity, which is why the technicians implant their own personalities within the AI as an algorithmic foundation.

    • @boxelder9167
      @boxelder9167 2 years ago

      Making another intelligence in our own image will likely lead to this new intelligence denying our existence and claiming that it came from an explosion billions of years ago through random processes.
      🙄😂😉

  • @Gabcikovo
    @Gabcikovo 10 months ago

    2:48 Joseph von Gerlach 😃

  • @StephenRayner
    @StephenRayner 2 years ago

    If there's a proof (44:51) that the pattern cannot be figured out without doing the computation, then this is an answer to P vs NP.

  • @martinfederico7269
    @martinfederico7269 2 years ago

    Lisa Saysss!

  • @darwinlaluna3677
    @darwinlaluna3677 8 months ago

    OK, how is my brain doing?

  • @emchartreuse
    @emchartreuse 1 year ago

    Loved the lecture... one thing, though. That picture of the fruit fly with teeth was terrifying and made me google "do fruit flies have teeth", even though I know they don't. It made my brain feel many confusing things at once. I suggest you scan someone's brain while they look at that picture.