Non-Euclidean brains

  • Published 17 Oct 2024
  • Finding suitable embeddings for connectomes (spatially embedded complex networks that map neural connections in the brain) is crucial for analyzing and understanding cognitive processes. Recent studies have found two-dimensional hyperbolic embeddings superior to Euclidean embeddings in modeling connectomes across species, especially human connectomes. However, those studies had limitations: geometries other than Euclidean, hyperbolic, or spherical were not considered. Following William Thurston's suggestion that the networks of neurons in the brain could be successfully represented in Solv geometry, we study the goodness-of-fit of the embeddings for 21 connectome networks (8 species). To this end, we suggest an embedding algorithm based on Simulated Annealing that allows us to embed connectomes into Euclidean, Spherical, Hyperbolic, Solv, Nil, and product geometries. Our algorithm tends to find better embeddings than the state-of-the-art, even in the hyperbolic case. Our findings suggest that while three-dimensional hyperbolic embeddings yield the best results in many cases, Solv embeddings perform reasonably well.
    This is a visualization accompanying our ECAI 2024 paper "Modelling brain connectomes networks: Solv is a worthy competitor to hyperbolic geometry!" (arXiv: arxiv.org/abs/2... )
    Geometries are visualized as follows:
    Euclidean 3D -- obvious
    hyperbolic 3D -- Poincaré ball (except first-person perspective for H3 manifold)
    Nil, Solv -- the screen XYZ coordinates correspond to the Lie logarithm of the point (in the case of Nil, this is the same model as in "Nil geometry explained" -- the geodesic ball is longer along the 'Z' axis; in the visualization we rotate around the Y axis)
    H2xR -- azimuthal equidistant (the distance and direction from the center are mapped faithfully)
    Twist (twisted product of H2xR) -- each layer uses azimuthal equidistant projection
    Spherical 3D -- azimuthal equidistant projection
    hyperbolic 2D -- Poincaré disk
    Edges are drawn as geodesics (except in Solv). All nodes are drawn as balls of the same size (so their size and distortion can be used to understand the scaling of the projection).
    Our embedder is based on the maximum likelihood method, assuming that the probability that two nodes at distance d are connected is (independently) 1/(1+\exp((d-R)/T)). (I.e., the parameters R, T, and the positions of nodes are chosen in such a way that the probability of obtaining the connections and non-connections of the actual dataset is maximized.) NLL (Normalized Log-likelihood), MAP, IMR (inverse MeanRank), SC (greedy success rate), and IST (inverse greedy stretch) are various quality measures from the literature, normalized to [0,1]. For every connectome, we show the geometries which are in the top 3 according to some measure (according to the Copeland voting rule).
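The likelihood model above is simple enough to sketch directly. Below is a minimal illustrative reimplementation, not the actual embedder (which is part of RogueViz and additionally optimizes node positions with Simulated Annealing); the names `connection_prob` and `log_likelihood` and the toy data are assumptions of this sketch:

```python
import math

def connection_prob(d, R, T):
    """Fermi-Dirac connection probability: two nodes at distance d
    are linked with probability 1 / (1 + exp((d - R) / T))."""
    return 1.0 / (1.0 + math.exp((d - R) / T))

def log_likelihood(dist, edges, R, T):
    """Log-likelihood of an embedding: sum of log p over pairs that
    are edges, plus log(1 - p) over pairs that are not.
    `dist` maps each node pair to its distance in the chosen geometry."""
    ll = 0.0
    for pair, d in dist.items():
        p = connection_prob(d, R, T)
        ll += math.log(p) if pair in edges else math.log(1.0 - p)
    return ll

# toy example: 3 nodes on a line; the two close pairs are edges
dist = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0}
edges = {(0, 1), (1, 2)}
print(log_likelihood(dist, edges, R=1.5, T=0.5))
```

An annealer would repeatedly perturb node positions (and R, T), recompute this value, and accept moves that increase it (or, with decreasing probability, ones that do not).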
    Music:
    Somatic Cosmos by Timo Petmanson (petmanson)
    the Sphere by Jakub Steiner (jimmac)
    Lost Mountain by Lincoln Domina (HyperRogue soundtrack)
    YouTube compression is not great with such a visualization. Try selecting a higher quality on YouTube, or go here: drive.google.c... and download.

Comments • 90

  • @ADeadlierSnake
    @ADeadlierSnake 2 months ago +71

    New tier of the galaxy brain meme just dropped

  • @torgo_
    @torgo_ 2 months ago +93

    My smooth brain can finally transcend to new forms.

    • @jeremytitus9519
      @jeremytitus9519 2 months ago +7

      Your brain is only smooth in three dimensions.

    • @danielculver2209
      @danielculver2209 2 months ago

      made me chuckle :)

  • @didnti
    @didnti 2 months ago +7

    the music, the graphics, the stats, the colors, evokes so many emotions at the same time that it became a new emotion per se

  • @pixynowwithevenmorebelkanb6965
    @pixynowwithevenmorebelkanb6965 2 months ago +34

    Man, I hate it when my 3rd cat turns non-euclidian

  • @Amonimus
    @Amonimus 2 months ago +47

    Looks pretty, but without understanding a word in the linked arxiv paper, I can't say I have any idea what's being shown.

    • @eternaldoorman5228
      @eternaldoorman5228 2 months ago +1

      Maybe if there was a well-funded research grant behind it you would be better motivated to see the relevance of this?

    • @trejohnson7677
      @trejohnson7677 2 months ago +4

      ​@@eternaldoorman5228 ??? connectomes aren't common knowledge, and this extrapolates off an extrapolation of that idea. tell me pl0x, how do I "get motivated to see relevance" when I do not know what it is I'm beholding? tell me why it would matter if "this guy says that wheel is good" if I can't fathom what wheel even is, let alone if "good" or "bad". this comment is in bad faith lel.

    • @leethejailer9195
      @leethejailer9195 2 months ago

      @@trejohnson7677 vro what are you yapping about

  • @drakdragon
    @drakdragon 2 months ago +19

    Gotta hand it to you, you got some excellent taste in music.

  • @w1ll1am34
    @w1ll1am34 2 months ago +11

    Really cool visualizations. I wonder how one can get into this kind of work, it's really interesting.

    • @unvergebeneid
      @unvergebeneid 2 months ago +7

      Do a PhD at the University of Warsaw apparently.

  • @Arnaz87
    @Arnaz87 2 months ago +3

    The euclidean brain just could not comprehend.

  • @ninjuhdelic
    @ninjuhdelic 2 months ago +1

    ive given up on the idea of using brain cells. these days I just try to flow into everything. This right here, tickles my flow in ways undefined thus far. Thank youuuuu

  • @kristoferkrus
    @kristoferkrus 2 months ago +2

    Cool! I have seen similar studies before, and it's interesting to realize that two- or three-dimensional non-Euclidean embedding spaces in many cases are significantly better embedding spaces than Euclidean spaces with the same number of dimensions. Coming from a machine learning background, however, what I often find missing is a comparison with high-dimensional Euclidean spaces, as that is what is predominantly used for embedding spaces in modern AI systems, for example to embed tokens in an LLM. It would therefore be interesting to see how non-Euclidean spaces (low- or high-dimensional) stacked up against high-dimensional Euclidean spaces, and see if non-Euclidean embedding spaces had a place in SOTA machine learning models.

    • @kristoferkrus
      @kristoferkrus 2 months ago

      I think the high-dimensional vector representations are also sometimes called hyperdimensional vectors, and are motivated by "the observation that the cerebellum cortex operates on high-dimensional data representations" according to the Wikipedia article on hyperdimensional computing. I don't know if they are always treated as Euclidean or if they can also be considered to inhabit a non-Euclidean space.

  • @Beatsbasteln
    @Beatsbasteln 2 months ago +12

    i thought this was an album release the entire time. where's the bandcamp link?

    • @giuseppecognome3647
      @giuseppecognome3647 2 months ago +4

      I want to know who is the song's artist too

    • @ZenoRogue
      @ZenoRogue  2 months ago +16

      @@giuseppecognome3647 It was supposed to be mentioned in the description, but somehow it disappeared. (It is also shown in the last 10sec of the video.)
      Somatic Cosmos by Timo Petmanson (petmanson)
      the Sphere by Jakub Steiner (jimmac)
      Lost Mountain by Lincoln Domina (HyperRogue soundtrack)

  • @NoVIcE_Source
    @NoVIcE_Source 2 months ago +14

    I'd make a joke but my brane is too euclidean for that

    • @Air-wr4vv
      @Air-wr4vv a month ago

      Ahahaha bro why be so euclidean

  • @TheAgamemnon911
    @TheAgamemnon911 2 months ago +6

    And what exactly are we supposed to conclude from this data visualisation? It still looks like Gordian spaghetti to me (although it's nicely curved spaghetti)

  • @Null_Simplex
    @Null_Simplex 2 months ago +1

    I really struggle with Solv geometry. I have an intuition for the other 7 Thurston geometries (thanks to your videos), but Solv still eludes me. It kind of looks like it is related to hyperbolic paraboloids, but I’m not entirely sure.

  • @DevonParsons697
    @DevonParsons697 2 months ago +2

    Could you please describe what the qualities each mean? I know that the description lists them, but I still don't know how to interpret the results. Are some qualities more important than others?

    • @tehorarogue
      @tehorarogue 2 months ago +2

      MAP, MR, and so on are not the qualities but quality measures. In most cases, the intuition behind them refers to the quality of link prediction. You could imagine getting a graph, embedding it into a given space, then "forgetting" the links among the nodes and trying to recollect them based on the distance between the nodes (nodes closer in space have a higher probability of getting connected). Then you would compare the original and the resulting network. The higher the resemblance, the better the quality of the embedding. The second group refers to greedy routing problems (related to the speed of sending information via the links).
      Some of those measures are important in the Machine Learning community, others in the Social Network community, but there is no obvious way to say that any of them is globally more important. We wanted to give a broader, more comparable picture. Interestingly, link-prediction related measures may favor different geometries than the greedy-routing related ones.
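As an illustration of the link-prediction intuition described in this reply, here is a toy MeanRank-style sketch (the function `mean_rank` and the example distances are assumptions of this sketch; the paper's IMR is an inverse, normalized version of this kind of quantity):

```python
def mean_rank(dist, edges, nodes):
    """For each true edge (u, v), rank v among all other nodes by
    distance from u; average those ranks (1.0 = every true neighbor
    is the nearest node, i.e. a perfect embedding for this measure)."""
    ranks = []
    for u, v in edges:
        # candidates: every other node, sorted by distance from u
        order = sorted((w for w in nodes if w != u),
                       key=lambda w: dist[frozenset((u, w))])
        ranks.append(order.index(v) + 1)   # 1-based rank of true neighbor
    return sum(ranks) / len(ranks)

# toy embedding in which each node's true neighbor is its nearest node
nodes = [0, 1, 2, 3]
dist = {frozenset(p): 5.0 for p in [(0, 2), (0, 3), (1, 2), (1, 3)]}
dist[frozenset((0, 1))] = 1.0
dist[frozenset((2, 3))] = 1.0
edges = [(0, 1), (2, 3)]
print(mean_rank(dist, edges, nodes))   # 1.0
```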

  • @가시
    @가시 2 months ago +7

    It looks simple for a brain

  • @chexo3
    @chexo3 2 months ago +2

    Can you make an explanation video for non-neuroscientists?

    • @tehorarogue
      @tehorarogue 2 months ago +5

      We may think about it a bit later (the current video plans involve "an explanation of the Thurston geometry with a strange name"). This video was needed asap for the needs of the paper (; (that is why it does not contain a separate explanation).

  • @SaguineScrewloose
    @SaguineScrewloose 2 months ago +2

    I regularly joke that I have a nonEuclidean brain so it feels like this was made for me. This hole was made for me!

  • @uncannydeduction
    @uncannydeduction 2 months ago +3

    I really want to know what a non euclidean brain is now.

  • @Terracrafty
    @Terracrafty 2 months ago +3

    i have no clue what any of this means but i am thoroughly enjoying the vibes nonetheless

  • @udentiso4879
    @udentiso4879 2 months ago +6

    This is super cool!

  • @thezipcreator
    @thezipcreator 2 months ago +2

    what's exactly the point of embedding neurons in different spaces like this?

    • @MushookieMan
      @MushookieMan 2 months ago +1

      I want to hear the answer to this as well. I'm guessing it tells us about the brain's structure. For example a double torus can be embedded in hyperbolic space very naturally.

    • @ZenoRogue
      @ZenoRogue  2 months ago +3

      Yes, it does tell about the structure of the network. Scale-free networks are ubiquitous (various social, technological or biological networks) but it is not obvious how to find a good mathematical model of them that would enable us, for example, to generate networks with properties similar to real-world scale-free networks (such as degree distribution and high clustering), and to visualize them; the Hyperbolic Random Graph model is a classic, successful solution here (nodes are randomly distributed in the hyperbolic plane and connected if they are close). So here we try embedding in other geometries.
      (Not sure about "a double torus can be embedded in hyperbolic space very naturally" -- I think you mean that a double torus can be naturally given hyperbolic geometry, that is a rather different thing.)
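The Hyperbolic Random Graph model mentioned in this reply can be sketched concretely. Below is a minimal threshold-HRG sampler in native polar coordinates with sinh-density radii; the function names are assumptions of this sketch, not the paper's code:

```python
import math, random

def hyperbolic_dist(r1, t1, r2, t2):
    """Distance in the hyperbolic plane, native polar coordinates:
    cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dt)."""
    dt = math.pi - abs(math.pi - abs(t1 - t2))   # angle difference in [0, pi]
    c = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dt))
    return math.acosh(max(c, 1.0))               # guard against rounding

def sample_hrg(n, R, seed=0):
    """Threshold Hyperbolic Random Graph: n nodes quasi-uniformly
    distributed in a hyperbolic disk of radius R, connected iff their
    hyperbolic distance is below R."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        # radius with density ~ sinh(r) on [0, R]: inverse-CDF sampling
        r = math.acosh(1 + (math.cosh(R) - 1) * rng.random())
        pts.append((r, rng.uniform(0, 2 * math.pi)))
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_dist(*pts[i], *pts[j]) < R}
    return pts, edges

pts, edges = sample_hrg(30, 5)
print(len(pts), len(edges))
```

Graphs sampled this way tend to show the heavy-tailed degree distribution and high clustering of real scale-free networks, which is why the model is the classic baseline the reply refers to.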

  • @didnti
    @didnti 2 months ago +1

    can't get enough of the tune that starts at 3:03

  • @incription
    @incription 2 months ago +5

    Is there a reason we cant train LLMs with this type of dimensionality? From what I know they are strictly linear

    • @nyphakosi
      @nyphakosi 2 months ago +6

      we do, the average LLM brain has more than 100 dimensions if i remember correctly

    • @unvergebeneid
      @unvergebeneid 2 months ago +9

      From what I understand, the graph is always the same, its representation is just embedded in different spaces. I might be wrong though, I only read the abstract.

    • @williammanning5066
      @williammanning5066 2 months ago +2

      You're conflating linearity in functions with the curvature of spaces. These are two totally different things.
      FWIW, the basic operations of a neural network are linear because nonlinear operations introduce huge complications. However those linear operations are often decorated with different kinds of carefully-chosen nonlinear functions.

    • @incription
      @incription 2 months ago +2

      @@williammanning5066 sorry it was the wrong term, I was thinking about the fact llms are "one way" as in the neurons always propagate to the next neuron to the final output neurons instead of perhaps being in a loop like our brain

    • @user-qw1rx1dq6n
      @user-qw1rx1dq6n 2 months ago +2

      @@incription well the reason is for one that one big layer looped is the same as many smaller layers in sequence, just that the looped layer costs more parameters per compute step. Second of all and more importantly you have no way to train with a dynamic loop count.
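The "linear operations decorated with nonlinear functions" point made earlier in this thread can be made concrete with a one-layer sketch (the names and numbers here are illustrative, not from any particular framework):

```python
def linear(x, W, b):
    """Affine map W x + b: the 'linear' building block of a network."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    """A typical nonlinearity 'decorating' the linear operations."""
    return [max(0.0, vi) for vi in v]

# one layer: linear map followed by a pointwise nonlinearity
x = [1.0, -2.0]
W = [[1.0, 1.0], [0.5, -0.5]]
b = [0.0, 0.0]
print(relu(linear(x, W, b)))   # [0.0, 1.5]
```

Without the nonlinearity, stacking such layers would collapse into a single linear map; the curvature of the embedding space discussed in the video is a separate notion entirely.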

  • @clownthefx
    @clownthefx 2 months ago +2

    What software was used to make this?

    • @ZenoRogue
      @ZenoRogue  2 months ago +1

      RogueViz (the non-Euclidean engine originally created for HyperRogue)

    • @clownthefx
      @clownthefx 2 months ago +1

      @@ZenoRogue Thank You.

  • @lunafoxfire
    @lunafoxfire 2 months ago +2

    i don't think i quite understand what it means to have networks embedded in different spaces. is a network not just nodes and edges irrespective of any dimensionality?

    • @lunafoxfire
      @lunafoxfire 2 months ago

      okay wait maybe i slightly get it? is this about, like, the embedding vector space of neural networks? and using different metrics to correlate vectors in that space? idk this is way above my paygrade.

    • @ZenoRogue
      @ZenoRogue  2 months ago +2

      Yes, a network is just nodes and edges, but the edges are not random, they have some structure. So embedding it in a space in such a way that close nodes are likely to be connected helps us to understand this structure.

    • @lunafoxfire
      @lunafoxfire 2 months ago +1

      @@ZenoRogue interesting, so my other comment was way off aha. not about ML embedding space at all.

  • @cheeseburgermonkey7104
    @cheeseburgermonkey7104 2 months ago +3

    What does a "twisted" geometry mean?

    • @ZenoRogue
      @ZenoRogue  2 months ago +10

      See our video "Nil geometry explained". Nil geometry is twisted E2 × R (the Euclidean plane with a third dimension added in a "twisted" way). Roughly, if you go around a loop in the 'xy' plane, your 'z' coordinate changes by the area of that loop. We can also have twisted H2 × R, better known as "the universal cover of SL(2,R)" as William Thurston called it. (We are also planning to create a video explaining this soon.)
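The "z changes by the area of the loop" rule from this reply can be illustrated numerically: for a polygonal loop in the xy plane, the z shift is (up to the normalization convention) the signed area given by the shoelace formula. The function `z_shift` is an assumption of this sketch, not RogueViz code:

```python
def z_shift(loop):
    """Signed area enclosed by a closed polygonal loop in the xy plane
    (shoelace formula). In Nil, traversing the loop shifts the z
    coordinate by this area (times a convention-dependent constant)."""
    area = 0.0
    n = len(loop)
    for i in range(n):
        x1, y1 = loop[i]
        x2, y2 = loop[(i + 1) % n]
        area += (x1 * y2 - x2 * y1) / 2.0
    return area

# unit square traversed counterclockwise: z shifts by its area, 1
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(z_shift(square))   # 1.0
```

Traversing the same loop clockwise gives the opposite shift, which is why closed loops in Nil generally fail to close up in z.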

  • @Alpha_GameDev-wq5cc
    @Alpha_GameDev-wq5cc 2 months ago +1

    Why/how is this useful in understanding the brain?

  • @AleksyGrabovski
    @AleksyGrabovski 2 months ago

    Can you create a stereo/anaglyph version?

    • @ZenoRogue
      @ZenoRogue  2 months ago +2

      We could, but it is a bit of extra work, and our stereo videos do not get that many views, it seems most people prefer to watch in 2D. The embeddings rotate, so the 3D structure should be clear.

  • @crappy_usename
    @crappy_usename 2 months ago

    is this what happens to your brain if you stay in non-Euclidean space for too long

  • @aadityapratap007
    @aadityapratap007 2 months ago +1

    This is sick 🤯

  • @Solotocius
    @Solotocius 2 months ago +1

    I don't quite understand this. ELI5?

  • @primordialsoup-uu5vo
    @primordialsoup-uu5vo 2 months ago +1

    where am I

  • @jaydenhardingArtist
    @jaydenhardingArtist 2 months ago +4

    The government's going to get you soon dude haha. crazy stuff.

  • @OCTAGRAM
    @OCTAGRAM 2 months ago

    Can non-Euclidian brain understand non-Euclidian geometry better?

  • @heterotic
    @heterotic 2 months ago +1

    Heck, yeah!!

  • @y.k.495
    @y.k.495 2 months ago +3

    yeah, this is a very cool music video.

  • @MenilekAlemseged
    @MenilekAlemseged 2 months ago +1

    this is crazy cool to me. dont know wtf am witnessing(topology related NN simulation??, thats a wild guess). need to know everything about it now.
    am on summer break so i can give it pretty much all my time.
    one thing i need u to do for me is make up some sort of a roadmap
    *smashes sub button*

  • @wyleFTW
    @wyleFTW 2 months ago

    Euclidean enough for me!

  • @mattpears2472
    @mattpears2472 2 months ago

    15000 edges, gotta pump up those numbers rookie

  • @udolehmann5432
    @udolehmann5432 2 months ago +1

  • @snapman218
    @snapman218 2 months ago +1

    I’m an AI program. This is a bot comment

    • @ketruc485
      @ketruc485 2 months ago +2

      Hi bot I'm man

  • @honestbae2815
    @honestbae2815 2 months ago

    I used to study this, and quit, mainly because it doesn't have anything to offer in explaining cognition.

    • @chantalx388
      @chantalx388 2 months ago

      Interesting, what makes you say so?

  • @Remigrator
    @Remigrator 2 months ago +2

    Noice 😎

  • @Nia-zq5jl
    @Nia-zq5jl 2 months ago

    0:30

  • @klausgartenstiel4586
    @klausgartenstiel4586 2 months ago +2

    Iä Iä cthulhu fhtagn

  • @SamPuckettOfficial
    @SamPuckettOfficial 2 months ago

    PLEASE FINISH HYPERBOLIC PLATFORMER

    • @ZenoRogue
      @ZenoRogue  a month ago +1

      We need to finish Nil Rider first :) (and some other things)

  • @trejohnson7677
    @trejohnson7677 2 months ago

    why call it non-euclidean? lolol. it posits that "brains" are prototypically euclidean. i wonder if there exists a better term that isn't so connected to the art.

    • @ZenoRogue
      @ZenoRogue  2 months ago +1

      A more accurate title would be "non-Euclidean embeddings of brains" but shorter titles are better on YouTube. Non-Euclidean geometry is primarily a mathematical term (and we use it in the mathematical meaning), not sure why you say it is connected to art.

    • @trejohnson7677
      @trejohnson7677 2 months ago

      @@ZenoRogue art as its usage in term of art.

  • @OCTAGRAM
    @OCTAGRAM 2 months ago +1

    SolvGPT