Chapters (Powered by @danecjensen)
00:00 - Presenting work, including a unique Arpin-based system for scene recognition
01:49 - Neurons detect different types of faces
02:59 - Textbook knowledge leads to different neural network hypotheses
03:45 - Alyosha's convergence with scene recognition
04:50 - Common systems, convergence, limitations, implications
05:47 - Similarity in neural networks: Gabor-like filters
06:41 - Recurrent neural networks converge over time
08:31 - Increasing convergent trend in vector representations
09:51 - Kernels are fundamental for understanding representations
11:09 - Different vision networks improve over time
14:17 - Performance dominates contrastive vs. non-contrastive vision networks
15:19 - Super language specialists
16:11 - Cross-modal kernel alignment for neural network similarity evaluation
17:15 - Model similarity, language model alignment, and DINO results
21:30 - Plato's allegory for the world out there
22:15 - Multi-view, causal process convergence
23:21 - Model convergence between image embedding and language embedding
26:50 - Heterogeneous views converge in kernel
27:10 - Limitations and implications in image processing
29:32 - Increasing kernel alignment between rich captions and images
33:55 - Unpaired translation between modalities: success and implications
35:54 - Interesting work in eye surgery
39:07 - Data-driven models' bias against human-like representations
42:29 - Training on different data distributions, asymptotes, and kernels
44:07 - Come back at 3:45
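Several of the chapters above (09:51, 16:11, 29:32) refer to "kernel alignment": comparing two models not by their raw embeddings but by the similarity structure (kernel) those embeddings induce over the same set of inputs. Below is a minimal sketch using linear CKA, one common alignment metric; the talk may well use a different metric (mutual nearest-neighbor alignment is another common choice), and the helper names here are hypothetical.

```python
import numpy as np

def center_gram(K):
    """Double-center a Gram matrix: H @ K @ H with H = I - (1/n) * ones((n, n))."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(X, Y):
    """Linear CKA between embeddings X (n, d1) and Y (n, d2) of the same n inputs.

    Returns a value in [0, 1]; higher means the two models induce more
    similar kernels (similarity structure) over those inputs.
    """
    K = center_gram(X @ X.T)  # model A's kernel over the inputs
    L = center_gram(Y @ Y.T)  # model B's kernel over the inputs
    return float(np.sum(K * L) / (np.linalg.norm(K) * np.linalg.norm(L)))

# Toy check: CKA ignores rotations of the embedding space, so a "model B"
# that is just an orthogonal transform of model A aligns perfectly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                    # stand-in for model A's embeddings
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))    # random orthogonal matrix
print(linear_cka(X, X @ Q))                       # ~1.0: identical kernels
print(linear_cka(X, rng.normal(size=(200, 64))))  # much lower: unrelated embeddings
```

Working at the kernel level is what makes embeddings of different dimensionality, and even different modalities, directly comparable.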
I'm not sure how tight the Plato analogy is. As far as I understood it, Plato's realm is disconnected from humans and does not require human observation and experience to form. For example, the form of an apple is always there, out there, somewhere, whether I, or anyone, has ever seen even a single apple. The "appleness" still exists regardless. But these models require some observation, even if the observations differ, to converge on any representation at all. The representation is created through the process of the models observing and "learning"; it is not an eternal concept that already exists "somewhere out there". So what the models arrive at might share statistics, but these "forms" still only ever arise from the things the models have seen.
Additionally, I don't think the models could distinguish confounding traits. Say the models only ever see red apples. I do not believe they would create a separate notion of "redness" or "appleness", but only "red-appleness", while for Plato these would (as far as I understand) be two distinct, ideal forms.
We can call them Platonic forms, but in essence it is the world as modeled by an error function, which does align well with the idea of Platonic forms.
I was eight minutes in before I understood the title!
It makes perfect sense for different AIs to learn similar representations of the same reality. This is similar to how science works.
People of different cultures view the world in entirely different ways; it depends on culture, language, genetics, etc. For example, people who speak Navajo have an entirely different way of perceiving reality and breaking it down into components than Western English speakers do. A shaman would also see the world completely differently from a Western man.
What is the opposite of confirmation bias?
Confirmation variance?
This could be sparks of SSI.
This paper is not about AI. It is about ontology and epistemology.
Why should this be so profound, and how is it relevant to the real world?
The title explains it: "Platonic Representation". A Platonic object exists outside of reality, and reality is just a reflection of the perfect form. Think of a chair: it has four legs and a flat surface; it takes physical form and gets certain details, but it is never the Platonic chair. This hypothesis says that these models approach a Platonic-form representation, which is evidence for the existence of Platonic forms outside of our reality.
35:20
This hypothesis suggests that representations of the world are universal.