Equivariant Models | Open Catalyst Intro Series | Ep. 6

  • Premiered May 22, 2024
  • Episode 6: In this episode, we explore ML models that have equivariant representations. These model representations are quite fascinating, since they change predictably given changes in the input. For instance, if the input atoms are rotated, the model’s internal representation will also “rotate”. We’ll discuss how a special set of basis functions called spherical harmonics are used in equivariant models to represent atom neighborhoods and what makes them so mathematically interesting.
    This video series is aimed at machine learning and AI researchers interested in gaining a better understanding of how to explore machine learning problems in chemistry and materials science.
    #opencatalyst #ai4science #climatechange
    Additional materials:
    A Gentle Introduction to Graph Neural Networks: distill.pub/2021/gnn-intro/
    A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems: arxiv.org/pdf/2312.07511.pdf
    Videos on Fourier Transforms:
    But what is the Fourier Transform? A visual introduction
    Fourier Analysis, Steve Brunton
    Some equivariant model papers:
    3D steerable CNNs: Learning rotationally equivariant features in volumetric data: arxiv.org/abs/1807.02547
    Tensor Field Networks: Rotation-and translation-equivariant neural networks for 3d point clouds: arxiv.org/abs/1802.08219
    Geometric and physical quantities improve E(3) equivariant message passing: arxiv.org/abs/2110.02905
    Equivariant message passing for the prediction of tensorial properties and molecular spectra: arxiv.org/abs/2102.03150
    E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials: arxiv.org/abs/2101.03164
    MACE: Higher order equivariant message passing neural networks for fast and accurate force fields: arxiv.org/abs/2206.07697
    Reducing SO(3) convolutions to SO(2) for efficient equivariant GNNs: arxiv.org/abs/2302.03655
    EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations: arxiv.org/abs/2306.12059
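The "internal representation rotates with the input" property from the description can be checked concretely for degree l = 1, where the real spherical harmonics are, up to a shared constant, just the Cartesian components of the input direction reordered as (y, z, x), and the Wigner D matrix is the ordinary 3x3 rotation matrix conjugated by that reordering. A minimal NumPy sketch, with function and variable names that are illustrative rather than from the video:

```python
import numpy as np

def sh_l1(v):
    # Real spherical harmonics of degree l=1, up to a common constant,
    # ordered (m=-1, 0, +1) -> proportional to (y, z, x).
    x, y, z = v / np.linalg.norm(v)
    return np.array([y, z, x])

def rot_z(theta):
    # Rotation by theta about the z axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Permutation taking Cartesian (x, y, z) to the (y, z, x) feature ordering.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

v = np.array([0.3, -1.2, 0.7])
R = rot_z(0.8)

lhs = sh_l1(R @ v)            # features of the rotated input
D = P @ R @ P.T               # Wigner D matrix for l=1 in this basis
rhs = D @ sh_l1(v)            # Wigner D applied to the original features
print(np.allclose(lhs, rhs))  # True: the features transform equivariantly
```

For higher degrees l the features have 2l+1 components and D becomes the corresponding (2l+1)x(2l+1) Wigner D matrix, but the pattern is the same: rotating the input is equivalent to multiplying the features by D.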

Comments • 4

  • @harshagrawal2336 11 days ago +1

    Too good!

  • @raguaviva 14 days ago

    I love these brave women! Thanks for standing up!

  • @landland4827 1 day ago

    Awesome video. May I ask how might I apply the Wigner D matrix?
    I have the summed wave function. Right now I am trying to shift it by A, which is just a scalar and not a matrix. That works for one wave function, but with two they don't move in unison; each seems to move independently (not an issue, as expected).
    But now, how do I go about getting the Wigner D and applying it? I figured I can use J=1/2, but it seems I need to configure it with alpha, beta, and gamma. What would those values be in this context?

    • @landland4827 17 hours ago

      Figured it out! The Wigner D should be initialized with J=0.5, alpha=0, gamma=0.5, and beta=k*a, where k is the frequency of the basis function. Again, thanks for an awesome video.
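For readers following this thread: one way to construct the matrix the commenter describes is SymPy's symbolic Wigner D function, `sympy.physics.quantum.spin.Rotation.D`, which supports half-integer J. The sketch below uses the commenter's parameters (J=1/2, alpha=0, gamma=0.5, beta=k*a) but keeps k and a symbolic; this is an illustration of the API, not the commenter's actual code:

```python
from sympy import Rational, symbols, Matrix
from sympy.physics.quantum.spin import Rotation

# Euler angles as described in the comment above: alpha = 0, gamma = 1/2,
# and beta = k*a, with k and a left symbolic (their values depend on the
# basis function being shifted).
k, a = symbols('k a', real=True)
alpha, gamma = 0, Rational(1, 2)
beta = k * a

# Assemble the full 2x2 Wigner D matrix for j = 1/2 from its elements
# D(j, m, mp, alpha, beta, gamma); .doit() evaluates each symbol to an
# explicit trigonometric/exponential expression.
j = Rational(1, 2)
ms = [Rational(1, 2), Rational(-1, 2)]
D = Matrix([[Rotation.D(j, m, mp, alpha, beta, gamma).doit()
             for mp in ms] for m in ms])
print(D)
```

The resulting matrix is unitary for any real angles, which is a quick sanity check after substituting numbers for k and a.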