Thinking About Thinking
Reflections from the British Ambassador to Italy, Lord Llewellyn
Remarks from The Right Honourable Lord Llewellyn, British Ambassador to Italy, at the 5th International Convention on the Mathematics of Neuroscience and Artificial Intelligence, Rome, 2024 (neuromonster.org).
Recorded and hosted with generous funding from the Kavli Foundation, Gatsby Foundation, Templeton Foundation, Harvard University, European Research Council, Artificial Intelligence Journal, and Google DeepMind.
© Thinking About Thinking, Inc., a 501(c)(3) nonprofit registered in New Jersey, USA.
thinkingaboutthinking.org
Views: 75

Videos

Implicit generative models using kernel similarity matching - Shubham Choudhary (Harvard)
395 views · 3 months ago
Virtual talk.
A comp model of learning flexible navigation in a maze by layout-conforming replay of place cells
104 views · 3 months ago
Virtual talk by Yuanxiang Gao (Institute of Theoretical Physics, Chinese Academy of Sciences).
Seeing through another’s eyes - modeling & correcting for individual differences in color appearance
130 views · 3 months ago
Virtual talk by Camilla Simoncelli (University of Nevada, Reno, USA).
Cognitive Acausality Principle and a new kind of Phenomenological Mathematics - Michael A. Popov
307 views · 3 months ago
Virtual talk.
Local prediction-learning in high-dimensional spaces enables neural networks to plan-Wolfgang Maass
613 views · 3 months ago
Keynote talk by Professor Wolfgang Maass (Technische Universität Graz).
Symmetry and Universality - Dr Sophia Sanborn (Science)
10K views · 3 months ago
Invited talk.
Modeling sensorimotor circuits with ML: hypotheses, inductive biases, latent noise and curricula
78 views · 3 months ago
Spotlight talk by Alexander Mathis (EPFL).
Fast and slow synaptic plasticity enables concurrent control and learning - Brendan Bicknell (UCL)
164 views · 3 months ago
Spotlight talk.
Competition between reactivating memories mediates long-delay credit assignment - Subhadra Mokashe
50 views · 3 months ago
Spotlight talk.
Cell types and layers shape the geometry of neural representations: a biophysical model of neocortex
89 views · 3 months ago
Spotlight talk by Steeve Laquitaine (EPFL).
The Ontogeny of the Grid Cell Network - the Topology of Neural Representations -Erik Hermansen
192 views · 3 months ago
Spotlight talk.
Confidence estimation and second-order errors in cortical circuits - Arno Granier
41 views · 3 months ago
Spotlight talk.
Clones of biological agents solving cognitive tasks: brain computation paradigms-Sofia Raglio
93 views · 3 months ago
Spotlight talk.
Neural subspaces in three Parietal areas during reaching planning and execution - Stefano Diomedi
85 views · 3 months ago
Spotlight talk.
Neural Prioritisation of Past Solutions Supports Generalisation - Sam Hall-McMaster (Harvard)
26 views · 3 months ago
Online network reconfiguration: non-synaptic learning in RNNs - Cristiano Capone (ISS)
34 views · 3 months ago
Is visual cortex really “language-aligned”? Colin Conwell (Johns Hopkins)
84 views · 3 months ago
A normative account of the psychometric function, and changes with stimulus and reward distributions
31 views · 3 months ago
Modeling behavioral imprecision from neural representations - Matteo Alleman (Columbia)
29 views · 3 months ago
Optimal mental representation of social networks explains biases in social learning and perception
40 views · 3 months ago
Discovery of Cognitive Strategies for Information Sampling with Deep Neural Cognitive Modelling
160 views · 3 months ago
Grounding Language about Belief in a Bayesian Theory-of-Mind - Lance Ying (Harvard)
51 views · 3 months ago
The maximum occupancy principle as a generative model of realistic behavior - Jorge Ramírez-Ruiz
70 views · 3 months ago
Better modeling of human vision by incorporating robustness to blur in convolutional neural networks
36 views · 3 months ago
Extensions of the Hierarchical Gaussian Filter to Wiener diffusion processes - Antonino Visalli
53 views · 3 months ago
Dynamic computational phenotyping of human cognition- Roey Schurr (Harvard)
35 views · 3 months ago
Policy regularization in the brain enables robustness and flexibility - Lucy Lai (Harvard)
75 views · 3 months ago
Developmental differences in exploration reveal differences in structure inference - Nora Harhen
27 views · 3 months ago
Latent learning progress guides hierarchical goal selection in humans - Gaia Molinaro (UC Berkeley)
252 views · 3 months ago

Comments

  • @AzarMoayedi · 2 months ago

    ❤❤👍

  • @azaderahimi6701 · 2 months ago

  • @MDNQ-ud1ty · 3 months ago

    While this might be pedantic, the transformations shown are not symmetries, because the object has to be exactly the same afterwards. Putting colors and the F on the hexagon prevents the rotations from being symmetries. You have to literally get back the same object you started with; any difference prevents the transformation from being a mathematical symmetry: X = TX. That is, the symmetries of an object are its fixed points: the set of all transformations that leave the object fixed is its set of symmetries. Typically one ignores labels used to keep track of the "fixed points"; one can consider the colors as labels of the vertices, but the F is not rotationally symmetric. If one treats the F as simply a sort of "compass" that indicates the rotation or mirroring, then it is OK, but one has to understand that the color and the symbol are not part of the object under consideration.

    The idea of symmetry is rather symbolic, in that if an object is invariant under certain transformations, then one cannot tell the difference between it and any other symmetric state it may be in. The group of symmetries helps one understand the possible states and the structure in how the object can move around among these "symmetric states". One may wonder why it matters that the object be exactly the same in each symmetry. It matters because the object may not actually be identical but is "perceived to be" when one ignores certain features. So in actuality there are differences, but if you cannot tell the difference in some scenario, you can still count the possible differences using the symmetries.

    E.g., imagine a square with painted sides. If you project the square onto a wall, the colored sides all appear black and you cannot tell them apart. You know the symmetry group, which is D4. But if the square does have colored sides, you know that D4 counts the possible ways those colors could be hidden. That is, the colored square has no nontrivial symmetries, but the non-colored one has the D4 symmetries. This tells you how many possibilities there are (and what they could be) for what the non-colored square could represent. You may then be able to use this to further understand the square in a larger context. So understanding symmetries is a way to understand something about the innate structure of an object while ignoring any "attached details" (as the details can only be attached in specific ways, according to the symmetries). The details are non-essential modifications, while symmetries are concerned with essential structure. If an object has the same symmetry group as another, it is likely the same as, or "isomorphic" to, it.

    • @NightmareCourtPictures · 2 months ago

      Ya bro, I think this was a bit pedantic 😂 Because I think it was obvious why she used colors and the letter F in her symmetry slide (to differentiate the transformations from each other) for people in the audience. Rather than just showing six identical hexagons (which would look exactly the same, since they are invariant under the rotations), the added color and letter show exactly how much each one is rotated.

    • @MDNQ-ud1ty · 2 months ago

      @@NightmareCourtPictures It's only obvious to someone who understands what a group is. If someone doesn't understand what a group is and doesn't have some experience, they won't get it. The only reason you get it is that you have some past experience with it; it's only pedantic to you. The fact that she explains what a group is suggests there are people in the audience who do not understand it (else she wouldn't have wasted her time).
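The colored-vs-plain square argument in this thread can be checked mechanically. A minimal Python sketch (the vertex numbering and the encoding of D4 as index permutations are my own illustrative choices, not from the talk or the thread):

```python
# D4, the symmetry group of the square, written as permutations of the
# vertex indices 0..3 (numbered counterclockwise): 4 rotations + 4 reflections.
ROTATIONS = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]
REFLECTIONS = [(3, 2, 1, 0), (1, 0, 3, 2), (0, 3, 2, 1), (2, 1, 0, 3)]
D4 = ROTATIONS + REFLECTIONS

def transform(perm, coloring):
    """Move the square: vertex i ends up showing the color coloring[perm[i]]."""
    return tuple(coloring[i] for i in perm)

def symmetries(coloring):
    """Transformations in D4 that leave the colored square literally fixed (X = TX)."""
    return [g for g in D4 if transform(g, coloring) == coloring]

# The wall-projection example: colors hidden, every vertex looks black.
plain = ("black", "black", "black", "black")
# The painted square: four distinguishable vertices.
painted = ("red", "green", "blue", "yellow")

print(len(symmetries(plain)))    # 8: the full group D4
print(len(symmetries(painted)))  # 1: only the identity survives
```

The painted square keeps only the identity, while the uniformly projected square keeps all eight elements of D4, matching the point that D4 then counts the ways the hidden coloring could sit.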

  • @gabrielescheler2522 · 3 months ago

    Insanity squared

  • @candyblack4287 · 3 months ago

    nice talk and handsome speaker :)

  • @Funnyguy992 · 3 months ago

    😍

  • @vtrandal · 3 months ago

    These two talks (Sophia Sanborn and Michael Bronstein on geometric deep learning) complement each other regarding symmetry of the convolution operator. [ th-cam.com/video/w6Pw4MOzMuo/w-d-xo.htmlsi=NAZq0xXlcBlD07FW ]

  • @dennisalbert6115 · 3 months ago

    Incorporate constructor theory

  • @laulaja-7186 · 3 months ago

    Thanks for posting!

  • @ruhtranortep · 3 months ago

    This work connects quite strongly across fields. There seems to be some work to be done with toroidal harmonic groups?

  • @MDNQ-ud1ty · 3 months ago

    It is very likely that the reason the same patterns show up is the machinery of NNs, which involves matrices (inner product spaces), integrals, etc. These are simply the generalized basis functions for NNs. E.g., it is similar to how a box will shape the energy it contains and limit it to a fixed set of discrete frequencies, or the same with a vibrating string. The machinery of NNs is effectively using the same fundamental mathematics that has always been used. It's the type of mathematics we use that unifies everything we do. The question is whether there are other types of "mathematics" that are completely different. It seems it is our mathematics that is universal, rather than anything in particular that uses it. That it shows up biologically is, IMO, what proves our mathematics is complete and general. Of course, once one realizes that our mathematics was created by biological machines, it really isn't that much of a leap (hence mathematics is simply an emergence of us, and "us" is biological).

  • @Adhil_parammel · 3 months ago

    He is named after Ilya.

  • @WalterSamuels · 3 months ago

    Interestingly, the picture at 21:25 looks like the same patterns you see with cymatics. They also match the probability-distribution clouds of the atomic orbitals of an electron. The Wikipedia page for atomic orbitals has a nice picture.

  • @hareshsingh8168 · 3 months ago

    A really fantastic talk. Thank you.

  • @rubncarmona · 3 months ago

    This is great! I've been thinking about this a lot, so it's very validating as well. Thanks for the talk and upload.

  • @SonnyGeorgeVlogs · 3 months ago

    👏

  • @aaomms7986 · 3 months ago

    OMG, I love this kind of topic

  • @BartvandenDonk · 3 months ago

    Very interesting. Nature has its invariants, even on the atomic scale. I wonder where this will lead.

  • @ZhanMorli · 3 months ago

    Hello. Einstein created the theory of relativity, but he himself did not cite the Michelson–Morley experiment to confirm his theory. Yet he had a dream of performing the same experiment on a train or an airplane. A request: help find someone who wants to become the author of an invention. At a fiber-optic gyroscope factory in China it may be possible to arrange production of a "hybrid gyroscope" device. These devices would be used as teaching aids in schools and universities. It may also be possible, with the help of a "hybrid gyroscope", to make scientific discoveries in astronomy, astrophysics, cosmology, higher theoretical physics, …

  • @amortalbeing · 3 months ago

    Thanks, this was very interesting.

  • @ATH42069 · 3 months ago

    those bananas hit

  • @nonamenoname1942 · 3 months ago

    1:22 You know, it absolutely wouldn't hurt to remind the public that the table on the right is the creation of the Russian chemist Dmitri Mendeleev.

    • @amortalbeing · 3 months ago

      what table?

    • @nonamenoname1942 · 3 months ago

      @@amortalbeing Periodic table of the elements

  • @Tazerthebeaver · 3 months ago

    Fantastic video, thank you.

  • @GerardSans · 3 months ago

    While his thesis has some merits, it fails when comparing human beings with AI. The most important error is that we don't share a ground truth, or, from a semiotic perspective, a referent. Whatever the representation, it is flawed if it points to something different or is not grounded in the same thing. AI is grounded in its training data, not reality. E.g., science fiction, social media, and Internet sources are all biased, subjective, and not necessarily grounded in reality.

    • @azi_and_razi · 3 months ago

      But what is the source of those biases? Humankind. So are we really grounded in reality, or maybe just in our flawed representation of reality? Maybe it's better for AI to be grounded in training data produced by us rather than in true reality? Are you sure AI would be aligned with us if it were grounded in the real world?

    • @jks234 · 3 months ago

      Sounds like you're explaining how the AI and humans do the same thing. When I speak English with you and you can understand me, where is the ground truth of English? Something quite akin to training data. There is no reality of English. English has and will shift and change. The words we use are simply the words we hear others use and we adjust with them. (No cap, foreal.) We use training data to inform inference. "Ground truth" is a sorting exercise. In a way, it can be thought of as "meta-training data". Where we have evaluated the information out there and chosen the ones we deem acceptable. We cannot get much better than that. No human automatically learns "ground truth". We use other accepted measurements to define ground truth. And then, using these tools we deem objective, we collectively agree upon ground truth. But I wasn't the one that learned that. I once again, learned it using the tribal collective of training data.

    • @azi_and_razi · 3 months ago

      @@jks234 So do you assume language is biased, but somehow people learn ground truth, and the ground truth resides in our brains? People are not objective beings. We are all subjective, and the best we can do is intersubjectivity. Even with science.

    • @davidebic · 3 months ago

      I don't know if you have ever heard of the Platonic Representation Hypothesis. A little while back a paper came out showing that, as model size increases, the distances between the same concepts in LLMs and image-generation networks converge to some quantity. I.e., even though LLMs and diffusion models work on completely different data types (text and images), they converge to the same inner representation of the world as they get better. Still a hypothesis, but there was some good statistical evidence that, at least for now, this is happening.
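Convergence claims like this are usually made quantitative with a representation-similarity index; linear CKA (centered kernel alignment) is a common choice because it ignores rotations of the embedding basis. A minimal sketch with synthetic stand-ins for model embeddings (the matrices and sizes here are illustrative assumptions, not data from the paper):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of the same n stimuli.

    X: (n, d1), Y: (n, d2). Returns a score in [0, 1]; 1 means identical
    geometry up to rotation/reflection and isotropic scaling.
    """
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))            # stand-in for "model A" embeddings
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
Y_rotated = X @ Q                             # same geometry, different basis
Y_unrelated = rng.standard_normal((100, 32))  # an unrelated representation

print(linear_cka(X, Y_rotated))    # 1.0 up to float error: geometry preserved
print(linear_cka(X, Y_unrelated))  # well below 1 for independent representations
```

Because the score is basis-invariant, two models can agree under CKA even when their raw embedding coordinates look nothing alike, which is the sense of "same inner representation" the hypothesis uses.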

  • @Lolleka · 3 months ago

    There's a PNAS paper observing how almost all neural nets learn the same manifold. techxplore.com/news/2024-03-replica-theory-deep-neural-networks.pdf

  • @mattwillis3219 · 3 months ago

    A 25-min video looking at how a DNN represents a 4F system; I mean, that is literally how they were designed... What has this team done except check that the math works? Oh yeah, in mouse brains we see patterns similar to the "Gabor filters", lol.

    • @alexkonopatski429 · 3 months ago

      Hey, what do you mean by "how they were designed"? Do you have a paper about that or can you elaborate a little more? Have a nice day!

    • @MartinDlabaja · 3 months ago

      Even if something is obvious to you, it is very hard to come up with formalized models. Everyone knows orbits are round. But math is not obvious.

    • @ATH42069 · 3 months ago

      Instead of flaunting your intellect and scoffing at the work of others, you ought to try helping others experience astonishment through your observations of reality.

    • @volpir4672 · 3 months ago

      @@alexkonopatski429 CCD manufacture; go down that rabbit hole

    • @WalterSamuels · 3 months ago

      What have you done? I don't see your contributions. I found this video very interesting and enlightening.

  • @rubncarmona · 3 months ago

    I've been thinking about Fourier-like components in neural networks, and now I see this talk. Amazing; I didn't expect them to come up in such a meaningful way.

  • @devmehta4144 · 3 months ago

    AMAZING

  • @cembirler · 3 months ago

    Such an interesting topic and a great presentation!

  • @suvrotica · 3 months ago

    This was a great presentation. One of the best I have seen in a long time.

  • @GerardSans · 3 months ago

    There is already enough evidence in AI against converging learning. Internal representations in the latent space, via high-dimensional embeddings, are not compatible even between models of the same family, let alone making any links with the human brain, which are highly speculative anyway and not supported by science.

    • @GerardSans · 3 months ago

      If anything, the plasticity of the brain indicates that each brain creates its own internal structures, which obviously can't possibly converge across individuals or similar evolutionary species. There's no fundamental or universal representation of knowledge.

  • @paulbaker4491 · 3 months ago

    Very interesting. Thanks for sharing.

  • @LuisAware · 3 months ago

    Very interesting, bringing system 1/2 to synapses 🎉

  • @AryaKode · 3 months ago

    He said "timber", not "timbre" 😢. I jest; great presentation.

    • @jks234 · 3 months ago

      Just looked it up for anyone curious. Timber is pronounced as you would expect. (Tim Burr) It means wood material. Timbre is pronounced "tamburr". And it means quality of voice. I learned from your comment. I also learned quite late that melee is may-lay. Not mee-lee.

  • @robyexpert · 3 months ago

    Great presentation