Greg Yang | Large N Limits: Random Matrices & Neural Networks | The Cartesian Cafe w/ Timothy Nguyen

  • Published Dec 24, 2024

Comments • 53

  • @henrybigelow3570
    @henrybigelow3570 2 years ago +40

    This is a must-watch for anyone studying Gaussian processes, scaling laws, and parameter tuning. Greg's unconventional career path is the stuff of movies. Watch till the end - it beautifully ties together multiple lines of research.

  • @1997CWR
    @1997CWR 1 year ago +13

    I can't believe this has as few views as it does. This is really such quality content! Thank you for making these videos for us!

    • @ShaunYCheng
      @ShaunYCheng 1 year ago +2

      The viewers are not the regular 14-year-old Mr. Beast fans. I reckon many are working professionals in relevant fields.

    • @radscorpion8
      @radscorpion8 1 year ago

      uhh I think maybe you forgot that the vast majority of people are not able to understand or have an interest in random matrices and neural networks, but rather "LETS WATCH THIS TRAIN GO INTO A BIG HOLE WHOAAAA HAHAHHAHA"

  • @cademcmanus2865
    @cademcmanus2865 1 year ago +5

    First episode I've seen of the show and it was great. You managed technical complexity and audience understanding really well, so I managed to get something from it despite being out of my depth at some points. Thanks a lot for sharing!

  • @someperson9052
    @someperson9052 1 year ago +16

    I really enjoyed listening to him talking about his story. Seems so down to earth.

  • @eismscience
    @eismscience 2 years ago +8

    Great guest. I had to listen to the year of graduation again. Excellent way to begin with Greg's story - and hear about his early conversations with Freedman. I wish I had more time to listen to the whole lecture. In any case, thanks to both of you.

    • @TimothyNguyen
      @TimothyNguyen  1 year ago +1

      Yes, I love Greg's story! We need more like his.

  • @samuelleung9930
    @samuelleung9930 1 year ago

    THANK YOU, TWO AMAZING PERSONS, FOR SHARING SUCH A GREAT LESSON!

  • @oncedidactic
    @oncedidactic 1 year ago +2

    Really fascinating stuff! Thanks so much for giving it exposure here. The conversational format was fun; hard to follow at times, but enjoyable.

  • @fredolivier1431
    @fredolivier1431 1 year ago +1

    tears in my eyes, v inspirational

  • @ShaunYCheng
    @ShaunYCheng 1 year ago +1

    Your channel is so awesome. Please keep on making them.

  • @maximusideal
    @maximusideal 1 year ago +1

    Amazing interview! Really appreciate it!

  • @deenrqqwe6794
    @deenrqqwe6794 1 year ago

    I guess the algorithm really wanted me to see this, fascinating topic though 😊

  • @matthewdrury6443
    @matthewdrury6443 1 year ago +7

    I also fucking love math. Mathematics rules. I love hearing it so forcefully stated.

  • @johnbarnes-vi
    @johnbarnes-vi 1 year ago +4

    chatGPT below
    "Let's connect these three concepts: the Law of Large Numbers, the Central Limit Theorem, and Random Matrix Theory.
    Law of Large Numbers (LLN): This fundamental theorem in probability theory tells us that as the number of independent trials increases, the sample mean will converge to the population mean. It forms the backbone of statistics and probability, underpinning the concept of statistical convergence.
    Central Limit Theorem (CLT): This theorem builds upon the LLN by stating that the sum of a large number of independent and identically distributed random variables will approximate a normal distribution, regardless of the shape of the original distribution (given they have a defined mean and variance). This is why the normal distribution appears so frequently in nature and why many statistical methods are based on the assumption of normality.
    Random Matrix Theory (RMT): This field studies the properties of matrices whose entries are random variables. While it might initially seem like a separate domain, it connects to the LLN and CLT in the sense that it deals with high-dimensional analogues of these principles. For instance, the behavior of eigenvalues of large random matrices (as described by Wigner's semicircle law) is essentially a high-dimensional version of the CLT.
    Here's one way to think about the connections:
    The Law of Large Numbers is about the behavior of averages of random variables. It tells you what happens if you compute a certain quantity (the average) many times independently.
    The Central Limit Theorem is about the distribution of those averages. It tells you that the averages themselves have a specific distribution (normal), regardless of the distribution of the underlying random variables (as long as certain conditions are met).
    Random Matrix Theory can be seen as an extension of these concepts to the case where the quantities of interest are not just single numbers (like averages), but entire matrices. In this context, the analogue of an "average" might be something like an eigenvalue, and we are interested in the distribution of these eigenvalues."
    - chatGPT
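
A minimal NumPy sketch of the three concepts in the quoted summary (an editorial illustration, not from the video; the distributions, sample sizes, and matrix size are arbitrary choices):

```python
# Simulate the LLN, the CLT, and Wigner's semicircle law side by side.
import numpy as np

rng = np.random.default_rng(0)

# Law of Large Numbers: the sample mean of iid Uniform(0, 1) draws
# approaches the population mean 0.5.
x = rng.uniform(0.0, 1.0, size=1_000_000)
print("LLN: sample mean = %.4f (true mean 0.5)" % x.mean())

# Central Limit Theorem: sums of n iid uniforms, centered and scaled,
# look Gaussian even though each summand is far from Gaussian.
n, trials = 200, 100_000
sums = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)
z = (sums - n * 0.5) / np.sqrt(n / 12.0)  # Var(Uniform(0, 1)) = 1/12
print("CLT: normalized sums have mean %.3f, std %.3f (expect ~0, ~1)"
      % (z.mean(), z.std()))

# Random Matrix Theory: eigenvalues of a large symmetric matrix with iid
# N(0, 1) entries, scaled by sqrt(N), fill Wigner's semicircle on [-2, 2].
N = 2000
a = rng.standard_normal((N, N))
eigs = np.linalg.eigvalsh((a + a.T) / np.sqrt(2.0 * N))
print("RMT: eigenvalue range [%.2f, %.2f] (semicircle support is [-2, 2])"
      % (eigs.min(), eigs.max()))
```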

  • @jostpuur
    @jostpuur 11 months ago

    What do you have to assume about the probability measure of the variables for the Central Limit Theorem to work? Is it something like the probability densities needing to belong to Schwartz space? I'm under the impression that you need the Fourier transform in the proof. It's probably difficult to make the Fourier transform work rigorously with arbitrary probability measures?
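
An editorial note on this question (standard probability facts, not from the video): the classical proof does use a Fourier-type object, but only the characteristic function, which exists for every probability measure because the integrand is bounded; no density or Schwartz-space condition is needed, and the iid CLT requires only a finite variance.

```latex
% Characteristic function: defined for every probability law P_X of X,
% since |e^{itx}| = 1, so no density or Schwartz-space assumption is needed.
\[
  \varphi_X(t) \;=\; \mathbb{E}\!\left[e^{itX}\right]
               \;=\; \int_{\mathbb{R}} e^{itx}\, \mathbb{P}_X(\mathrm{d}x),
  \qquad |\varphi_X(t)| \le 1 .
\]
% Lindeberg-Levy CLT: for X_1, X_2, ... iid with mean \mu and variance \sigma^2 < \infty,
\[
  \frac{1}{\sigma\sqrt{n}} \sum_{k=1}^{n} \bigl(X_k - \mu\bigr)
  \;\xrightarrow{\;d\;}\; \mathcal{N}(0,\,1),
\]
% proved by taking the pointwise limit of the characteristic functions and
% applying Levy's continuity theorem.
```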

  • @anthonybelz7398
    @anthonybelz7398 1 year ago

    I was okay (barely) up until the kernel regime, exposing my inexperience with neural nets. Think I can go research some more NN fundamentals, then watch this video 10 more times. Probably need to exercise some math & code in that direction as well. Thanks GY+TN for trying to educate this old dog .. 🥝🐐

  • @vinothnandakumar1058
    @vinothnandakumar1058 2 years ago +1

    Thanks for the new video, your channel is great! Are you also planning to add some content about Rene Descartes' philosophical writings?

    • @TimothyNguyen
      @TimothyNguyen  1 year ago +1

      I plan on focusing on math and science for the foreseeable future. I'm not opposed to doing more philosophical stuff later.

    • @shoopinc
      @shoopinc 1 year ago

      @@TimothyNguyen Would also recommend Al-Ghazali in that discussion of Descartes' work, as they thought about the same things.

    • @asaaa978
      @asaaa978 1 year ago

      @@TimothyNguyen what do you mean by "foreseeable future"?

    • @TimothyNguyen
      @TimothyNguyen  1 year ago

      @@asaaa978 The future based on extrapolation from today's settings (which might be inaccurate because some revolutionary addition occurs later that wasn't factored in).

    • @TimothyNguyen
      @TimothyNguyen  1 year ago

      Oh, I didn't realize that was specifically with regards to my comment about content for CC. Foreseeable future means I have enough interest in math/physics to keep that topic going exclusively for now. But now that I've had Sean Carroll on, it's easier for me to get philosophical with topics without it being an outright philosophy topic.

  • @XavierCrandis
    @XavierCrandis 1 year ago +2

    Love the chat. Where can we hear some of Greg's 'beats'? :D

  • @ertanbaykal
    @ertanbaykal 1 year ago +1

    I once learned the central limit theorem as follows: no matter what the parents look like, the children will always be normal. Is this true?

  • @enterprisesoftwarearchitect
    @enterprisesoftwarearchitect 1 year ago +1

    Actual talk begins at 33:00

  • @1vootman
    @1vootman 1 year ago

    Ha! That's pretty great. I studied math at Cal, and made electronic music too. Cool and interesting guy.

  • @morgengabe1
    @morgengabe1 1 year ago

    fwiw, you can avoid using limits if you express continuity as the quotient of the number of continuous elements divided by the total number of elements. If the total number is equal to the sum of the number of preserved state components, those added and those removed, then a canonically smooth curve will always have continuity greater than zero. A filtration will always have continuity equal to 1, and a maximally coarse operator will always have continuity zero.
    If the total is not defined by content, it can also be parameterized by any structure which is at least 2nd-cryptomorphic to a presentation of a Taylor series.

  • @Yeriansen_Channel
    @Yeriansen_Channel 1 month ago

    Bro was the inventor of Grok!

  • @kemalware4912
    @kemalware4912 1 year ago

    Thank you

  • @napiergrass8717
    @napiergrass8717 1 year ago

    Came here after Timothy Nguyen's tweets for Greg Yang

  • @borispetrovchich3141
    @borispetrovchich3141 8 months ago +1

    Meandering in his story

  • @muluken8941
    @muluken8941 1 year ago

    Wow, he's a genius! His math skills surpass mine by far. At least let me listen to the music he produced. Can you tell me where I can find his music?

  • @SorryBeeinLate
    @SorryBeeinLate 1 year ago

    Just started watching it; I'm more of a fan of math but not really knowledgeable. In the beginning, in the definition of the law of large numbers, if we drop the identical distribution requirement - will the whole thing converge to the expectation of the individual expectations? Like averaging all expectations?

    • @TimothyNguyen
      @TimothyNguyen  1 year ago +1

      You need some control over the distributions if they are not iid. Imagine each distribution being a delta distribution (a single outcome) on either 0 or 1 (i.e. a coin flip with a definite outcome). By interleaving arbitrarily long sequences of delta distribution at 0 or delta distribution at 1, you can make the running average fluctuate arbitrarily.

    • @SorryBeeinLate
      @SorryBeeinLate 1 year ago

      @@TimothyNguyen Ah, right! I see, thank you. And thanks for the videos. This was the second one I've watched after the Grant Sanders one. I enjoy the format very much!
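
A small sketch of the counterexample described in @TimothyNguyen's reply above (an editorial illustration; the doubling block lengths are an arbitrary choice, not from the thread):

```python
# Non-iid failure of the LLN: each "random" variable is a delta distribution
# at 0 or 1, arranged in ever-longer alternating blocks, so the running
# average keeps swinging (roughly between 1/3 and 2/3) instead of converging.
import numpy as np

outcomes = []
value, block_len = 1, 1
while len(outcomes) < 100_000:
    outcomes.extend([value] * block_len)
    value = 1 - value   # alternate blocks of 1s and blocks of 0s
    block_len *= 2      # each block is long enough to drag the average back

running_avg = np.cumsum(outcomes) / np.arange(1, len(outcomes) + 1)
late = running_avg[len(outcomes) // 2:]
print("late-time swing of the running average: %.3f to %.3f"
      % (late.min(), late.max()))
```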

  • @aidanthompson5053
    @aidanthompson5053 1 year ago

    1:00:34

  • @Stopinvadingmyhardware
    @Stopinvadingmyhardware 1 year ago

    I see you’re interviewing one of those Raving radicals.

  • @jayantadas8259
    @jayantadas8259 1 year ago

    The video is good, but due to the handwriting, following the material becomes difficult.

  • @larhen39
    @larhen39 1 year ago

    Language and theory are a part of nature, not everything;
    you'd better always realize that strongly.

  • @user-ei2kk2oh5v
    @user-ei2kk2oh5v 1 year ago +1

    "I don't want to steal the show" - Timothy Nguyen, after he has interrupted 20 times to say obvious things about the topic, or things not really related, or things he just says so that everybody will understand how smart he is. Greg Yang has an interesting and non-standard approach to explaining the LLN and CLT; Nguyen over and over tries to steer it into the intuition and proofs that he is used to. It would be way better to let Yang talk, and maybe Nguyen could actually get some new-to-him intuitions about these theorems.

    • @TimothyNguyen
      @TimothyNguyen  1 year ago

      Thanks for the feedback. The Cartesian Cafe is a discussion between two scientific peers where questions and clarifications happen in real time (as is necessary to be on the same page). If you prefer to hear Greg talk solo and would find that more helpful, you can watch his other talks on YouTube, which my conversation is not trying to replicate.

  • @pineapplegang9972
    @pineapplegang9972 1 year ago +1

    Ha, now at xAI

  • @Stopinvadingmyhardware
    @Stopinvadingmyhardware 1 year ago

    All of your math could have been learned by doing computer graphics before setting foot on the Harvard campus. Literally all of it applies in high-end computer graphics. Everything from time and memory usage calculations to the distribution of vertices.

  • @martinhirsch94
    @martinhirsch94 1 year ago +1

    You may know math but you don't know how to teach.

  • @효정이는아가얌효정이
    @효정이는아가얌효정이 1 year ago

    This lecture is amazing for other people.
    Maybe we know this video have next day, and we could know how to supposed them.

  • @wk9195
    @wk9195 1 year ago +2

    he needs to say the word LIKE a bit more

    • @kongchan437
      @kongchan437 1 year ago

      Like, a hundred times more ?