Many More is Different

  • Published Oct 5, 2024
  • Andrea Liu, University of Pennsylvania
    In 1972, Phil Anderson articulated the motto of condensed matter physics as "More is different." For most condensed matter systems, however, many more is much the same as more. There are, nonetheless, systems in which many more is different. For example, the capabilities of artificial neural networks grow with their size; unfortunately, so do the time and energy required to train them. The capabilities of brains also grow with their size, yet brains use relatively little energy. Brains are able to learn without an external computer because their analog constituent parts (neurons) update their connections using local rules, without knowing what all the other neurons are doing. We have developed an approach to learning that shares this property, in that analog constituent parts update their properties via a local rule, but does not otherwise emulate the brain. Instead, we exploit physics to learn in a far simpler way. Our collaborators have implemented this approach in the lab, developing physical systems that learn and perform machine-learning tasks on their own at little energy cost. These systems should open up the opportunity to study how many more is different within a new paradigm for scalable learning.
    Learn more, follow us on social media and check out our podcasts:
    linktr.ee/sfis...
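The abstract's key ingredient, a local learning rule carried out by the physics itself, can be sketched in miniature. The toy example below is my own illustrative reading in the spirit of the coupled-learning approach from Liu and collaborators, not their actual implementation; the single voltage divider, the nudge factor `eta`, and the learning rate `alpha` are all assumptions. It trains one divider to produce a target output voltage using only each edge's own voltage drops:

```python
# Toy sketch of a local, physics-style learning rule: each conductance
# updates from its own voltage drops in a "free" vs. "clamped" state.
# All parameter values and the single-divider setup are illustrative.

def free_output(v_in, k1, k2):
    # The circuit's physics does the "forward pass": for a divider built
    # from conductances k1 (source side) and k2 (ground side), Kirchhoff's
    # laws give the output node voltage directly.
    return v_in * k1 / (k1 + k2)

def train(v_in=1.0, v_target=0.7, eta=0.5, alpha=0.05, steps=2000):
    k1 = k2 = 1.0  # edge conductances: the learnable, analog parameters
    for _ in range(steps):
        v_free = free_output(v_in, k1, k2)
        # Clamped state: nudge the output a fraction eta toward the target.
        v_clamp = v_free + eta * (v_target - v_free)
        # Each edge updates from its OWN voltage drops only (a local rule):
        # delta_k is proportional to (free drop)^2 - (clamped drop)^2.
        k1 += (alpha / eta) * ((v_in - v_free) ** 2 - (v_in - v_clamp) ** 2)
        k2 += (alpha / eta) * (v_free ** 2 - v_clamp ** 2)
        k1, k2 = max(k1, 1e-6), max(k2, 1e-6)
    return k1, k2, free_output(v_in, k1, k2)

k1, k2, v_out = train()  # v_out converges toward the 0.7 target
```

No edge ever sees a global error gradient; the update depends only on voltages at that edge's own terminals, which is what lets the hardware do the learning.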

Comments • 3

  • @NicholasWilliams-uk9xu
    4 months ago

    This is cool. I have this idea that adaptive correction is a bilateral symmetry to resistance against equilibrium. I used this in vector math when coding a simple mechanism for thrust correction relative to the velocity direction, which is in turn relative to the direction of the desired position (a bilateral mirror of thrust about the velocity, proportional to the vector direction of the goal). When there are intermediate goals, the shift in the control algorithm is proportional to the proximity of the obstacle (which is another bilateral symmetry). It's like a nesting of equal-and-opposite obstacle->correction vector comparisons based on the goal of proximal importance.
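One minimal way to read the mirrored-thrust idea in 2-D vector math (my own illustrative interpretation, not the commenter's actual code; the function names and the reflection construction are assumptions): reflect the current velocity about the direction to the goal and thrust along the reflection, so the sideways drift component cancels.

```python
import math

def normalize(v):
    # Unit vector in the direction of v (zero vector maps to zero).
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n else (0.0, 0.0)

def corrective_thrust(pos, vel, goal, strength=1.0):
    # Direction from current position to the desired position.
    d = normalize((goal[0] - pos[0], goal[1] - pos[1]))
    # Reflect velocity about the goal direction: v' = 2(v.d)d - v.
    dot = vel[0] * d[0] + vel[1] * d[1]
    mirrored = (2 * dot * d[0] - vel[0], 2 * dot * d[1] - vel[1])
    # Thrusting along the mirror keeps forward motion and cancels drift.
    return (strength * mirrored[0], strength * mirrored[1])

# Drifting up-right while the goal lies straight right: the thrust's
# downward component is the equal-and-opposite of the vertical drift.
t = corrective_thrust(pos=(0, 0), vel=(1.0, 0.5), goal=(10, 0))
```

The reflection is the "bilateral mirror": thrust and velocity sit symmetrically about the goal direction, so their sum points at the goal.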

  • @markmnelson
    4 months ago

    This is so cool!

  • @NicholasWilliams-uk9xu
    4 months ago

    There is a problem with data loss in that neural-network example (if scale is the focus) for category narrowing. In small neural networks that perform a high degree of information processing in order to narrow to one single outcome, a lot of information is lost ((0,1) cat or (1,0) dog outcomes), and this is fine for the narrow goal. However, if the information processing has to navigate a larger set of goals, and multiple networks are losing vast amounts of information that would otherwise be integral to the larger ensemble, the network must scale to be much larger to handle the loss of data (a lack of leverage: backpropagation's loss of nuance in favor of exaggerated certainty) that would otherwise have been useful in an ensemble context where the goals are more interrelated, connected, and numerous. Backpropagation becomes a less viable adaptive mechanism for larger systems, and a more expensive strategy, when the parameters scale and the goals become interrelated and numerous.
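The category-narrowing point can be made concrete with a toy sketch (my own illustration, not the commenter's code): collapsing a network's soft output to a hard one-hot label discards the confidence information that an ensemble with interrelated goals might have used.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# "Cat" vs "dog" scores that are nearly tied:
probs = softmax([2.0, 1.8])
# Hard narrowing to a single (1, 0) outcome throws the near-tie away:
hard = [1 if p == max(probs) else 0 for p in probs]
# probs keeps the roughly 0.55-vs-0.45 uncertainty; hard does not.
```

Downstream consumers of `hard` see exaggerated certainty, which is the nuance loss the comment describes.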