2021's Biggest Breakthroughs in Math and Computer Science

  • Published on 5 Oct 2024

Comments • 824

  • @QuantaScienceChannel
    @QuantaScienceChannel 2 years ago +154

    Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/

    • @naturemc2
      @naturemc2 2 years ago +2

      Your last few videos on this channel are killing it. Needed it. Much needed ❤️

    • @zfyl
      @zfyl 2 years ago +4

      I think the opposite. All I see here is mathematicians coming up with new approaches to existing problems (posed by previous mathematicians) and publishing those approaches. These are not results, and I feel like they are practically useless. So sad to see the education system embrace pointless research in such overly sophisticated, yet never applied, fields of science!
      What a shame, as it happens against the background of a world on fire, looking for help... and what is given? Some over-engineered half-solution for made-up problems...

    • @antoniussugianto7973
      @antoniussugianto7973 2 years ago +4

      Please Riemann hypothesis progress updates...

    • @EmperorZelos
      @EmperorZelos 2 years ago +4

      Uh, yeah, no, I have to correct you.
      The continuum hypothesis is UNDECIDABLE in ZFC. Meaning there is no way to decide it.
      There is nothing to SOLVE there, there is nothing unanswered.
      It was resolved and understood many, many decades ago.
      We KNOW it is independent and we cannot say c = Aleph_1.
      We can assume it axiomatically if we so want, or assume its negation, and both are EQUALLY valid.
      What you're talking about here is adding an axiom to create a NEW axiomatic system where we CAN say it, but that does not mean it was "resolved" or anything, because we already knew the answer.

    • @eeemotion
      @eeemotion 2 years ago +1

      Thanks for sparing me the trouble of watching. As anything significant could be buried in such an annal. The only real breakthrough in lamestream science is how to get them to shield for a plasma environment while still thinking almost exclusively in terms of 'heat'. The almost being the novelty. Electricity still being a dirty word in space. Hence its smell at first described from the suits after a spacewalk as that of electric soldering was then peppered with burnt chicken and BBQ insinuations to make for the usual clumsy narrative reminiscent of the sticky tape on the supposed lunar landing module. Ah, who knows what's in the peel of an onion? It's a slow boil to get to the truth and for the cluttered cosmogony of the believers it seems all too much useless toil...

  • @ruchirkadam8510
    @ruchirkadam8510 2 years ago +2122

    Man, loving these 'breakthrough' videos! It feels fulfilling to see the progress being made! I mean, finally modelling quantum gravity? Jeez!

    • @Djfmdotcom
      @Djfmdotcom 2 years ago +88

      Same! I think in no small part it's because we have all these YouTube channels focusing on them! I'd much rather watch videos about science, exploration and learning than MSM garbage that divides us. Science brings us together!

    • @v2ike6udik
      @v2ike6udik 2 years ago +1

      BS. Gravity (as a separate force) is a hoax. It has been done for a reason.

    • @irs4486
      @irs4486 2 years ago +33

      cringe bruh, stop commenting, ratio + yb better

    • @sublimejourney3384
      @sublimejourney3384 2 years ago +7

      I love these videos too!!

    • @The.Golden.Door.
      @The.Golden.Door. 2 years ago +5

      Quantum gravity is far simpler to calculate than what modern-day physicists have known to be true.

  • @OneDayIMay91Bil
    @OneDayIMay91Bil 2 years ago +1241

    Glad to have been a contributing member of this field. Had my first peer-reviewed paper published in IEEE this year :)

    • @kf10147
      @kf10147 2 years ago +66

      Congratulations!

    • @thatkindcoder7510
      @thatkindcoder7510 2 years ago +28

      What's the paper?

    • @zfyl
      @zfyl 2 years ago +50

      Too bad IEEE is just an international conglomerate of science paper resellers. I, and everybody else on this planet, want to know why you are writing these papers, and what your contributed progress is. Sorry for the negative tone, and congrats on the publication 😉

    • @sampadmohanty8573
      @sampadmohanty8573 2 years ago +34

      @@zfyl Exactly. Why is everyone writing these papers? And if it is for the advancement of science, why is it not accessible to the general public? Is science a business? It is, but many intellectuals do not want to see it as such, because they want to believe that they do it for "a bigger cause", while in reality they do it selfishly, which accidentally sometimes might actually do good, without the original intent being so. Please do not point to arXiv.

    • @dougaltolan3017
      @dougaltolan3017 2 years ago +6

      @@sampadmohanty8573 don't you just have to pay for access?

  • @MargaretSpintz
    @MargaretSpintz 2 years ago +625

    Slight correction. The infinite limit of shallow neural networks as kernel machines (specifically Gaussian processes) was established in 1994 (Radford Neal). This was updated for 'ReLU' non-linearities in 2009 (Cho & Saul). In 2017 Lee & Bahri showed this result could be extended to deep neural networks. Not sure this counts as "2021's biggest breakthrough", though it is a cool result, so happy to have it publicised. 👍 (See the kernel sketch after this thread.)

    • @PythonPlusPlus
      @PythonPlusPlus 2 years ago +22

      I was thinking the same thing

    • @lexusmaxus
      @lexusmaxus 2 years ago +1

      Since there are no physical infinite machines, there must be mathematical operators that eliminate these infinities?

    • @hayeder
      @hayeder 2 years ago +17

      Was about to post something similar. The recent famous paper in this area is Jacot et al. with the NTK in 2018.
      It's also not clear to what extent this explains practice. E.g., see the work of Chizat and Bach on lazy training.

    • @ramkitty
      @ramkitty 2 years ago

      @@lexusmaxus or is infinity an inversion in some way

    • @Ef554rgcc
      @Ef554rgcc 2 years ago

      Obviously
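
The Neal / Cho & Saul result discussed in this thread has a compact closed form: the covariance (kernel) of an infinite-width, single-hidden-layer ReLU network is the order-1 arc-cosine kernel. A minimal sketch of it, with a Monte Carlo sanity check against a very wide random network (my own illustration, not code from the video or the papers):

```python
import numpy as np

def arccos_kernel(x, y):
    """Order-1 arc-cosine kernel (Cho & Saul, 2009): the covariance of an
    infinite-width, single-hidden-layer ReLU network with standard normal
    weights. k(x, y) = (1/pi) * |x||y| * (sin t + (pi - t) cos t),
    where t is the angle between x and y."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    # Clip to guard against floating-point values slightly outside [-1, 1].
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(t) + (np.pi - t) * np.cos(t))

# Monte Carlo check against a very wide random ReLU network.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
W = rng.normal(size=(1_000_000, 3))        # hidden weights ~ N(0, 1)
relu = lambda a: np.maximum(a, 0.0)
empirical = 2 * np.mean(relu(W @ x) * relu(W @ y))  # factor 2 matches the kernel's scaling
print(empirical, arccos_kernel(x, y))      # the two numbers should be close
```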

  • @MarcelBornancin
    @MarcelBornancin 2 years ago +127

    I appreciate the efforts in trying to make these heavily technical subjects understandable to the general public. Thank you all :)

  • @primenumberbuster404
    @primenumberbuster404 2 years ago +338

    Mathematics is like the wind your sailboat needs to move way ahead on your journey. This was so heartwarming to watch. There is really a thin line between maths and magic!
    Thanks a lot, Quanta Magazine, for this beautiful summary! Loved it!

    • @jackgallahan9669
      @jackgallahan9669 2 years ago +3

      wtf

    • @criscrix3
      @criscrix3 2 years ago +1

      Some bot stole your comment and slightly reworded it lmao

    • @michaelblankenau6598
      @michaelblankenau6598 10 months ago

      That's a funny-looking cat.

  • @williamzame3708
    @williamzame3708 2 years ago +175

    Also: Aleph 1 is *by definition* the smallest cardinal bigger than Aleph 0. The question is whether the size of the continuum is Aleph 1 or something bigger ... (The definitions are spelled out after this thread.)

    • @alexantone5532
      @alexantone5532 2 years ago +1

      The continuum of natural numbers?

    • @LeBartoshe
      @LeBartoshe 2 years ago +24

      @@alexantone5532 Continuum is just a nickname for cardinality of real numbers.

    • @whataboutthis10
      @whataboutthis10 2 years ago +3

      and the new result makes it seem less likely that the continuum is aleph_1, which was Cantor's guess and the one that seemed most plausible for many years

    • @EM-qr4kz
      @EM-qr4kz 2 years ago +1

      If you take an infinite number of line segments, one centimeter each... then you have an infinite line... this set of line segments is aleph_0 in size... the line is a one-dimensional object... but! If you take a square, one square centimeter in size, the parallel straight sections that make up this square are infinite... but the set of them is aleph_1 in size... and the square is a two-dimensional object... could that be the key to dimensions? Especially when we have fractal objects to describe?

    • @moerkx1304
      @moerkx1304 2 years ago

      @@EM-qr4kz I'm not sure if you have some typos or I'm not exactly understanding what you're trying to say.
      But your analogy of a straight line being the natural numbers and then extending it to a square seems to me like Cantor's proof that the rational numbers are countable and hence of the same cardinality as the natural numbers.
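
For reference, the standard definitions behind this thread, written out in LaTeX (a summary of textbook facts, not part of any comment above):

```latex
% Cantor: the reals are strictly bigger than the naturals.
\aleph_0 = |\mathbb{N}|, \qquad 2^{\aleph_0} = |\mathbb{R}| > \aleph_0 \quad \text{(Cantor's theorem)}
% Aleph_1 is, by definition, the least cardinal above aleph_0.
\aleph_1 = \min\{\kappa : \kappa > \aleph_0\}
% The continuum hypothesis asserts the two coincide:
\mathsf{CH}: \quad 2^{\aleph_0} = \aleph_1
```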

  • @midas2092
    @midas2092 2 years ago +21

    These videos last year introduced me to this channel, and yet I still have the same excitement when I see the new ones

  • @quentingallea166
    @quentingallea166 2 years ago +26

    You know the channel is pretty good when you watch the full-length video while understanding about half of the content

    • @szymonbaranowski8184
      @szymonbaranowski8184 2 years ago

      No. It means it still sucks half of the time. And in this case I bet it sucks much more than half. And it means it's useless to watch, since you end up in the same spot you started, but fooled and getting more arrogant, having the opposite feeling

    • @quentingallea166
      @quentingallea166 2 years ago +1

      @@szymonbaranowski8184 When I was a teenager, I was reading Hawking, Brian Greene etc. and understood maybe 10% the first time. I would read the pages and chapters again and again to understand more each time.
      The world is a complex place. As a scientific researcher, I face this complexity every day. Oversimplifying is possible and useful; Kurzgesagt is a pretty neat example. However, in some cases, in my opinion, if you still want to go far, you can't explain it in 10 minutes simply.
      But well, you are perfectly free to disagree.

  • @hansolo9892
    @hansolo9892 2 years ago +162

    I have been using these kernel vector spaces for QML recently and this is one of those mathemagics I honestly adore! (A kernel ridge regression sketch follows this thread.)

    • @WsciekleMleko
      @WsciekleMleko 2 years ago +14

      Hi, I could take 2 fists of shrooms and it would still make the same sense to me as it does right now. I'm glad you are happy tho.

    • @joshlewis575
      @joshlewis575 2 years ago +7

      @@WsciekleMleko Yeah, but just a few years ago you could've eaten 2 ounces in your example. That's some crazy advancement; only a matter of time

    • @RexGalilae
      @RexGalilae 2 years ago +6

      Yo, I worked on QML too back in college!
      I used to devour papers by Anatole von Lilienfeld and Matthias Rupp coz of how interesting they were. Gaussian and Laplacian kernels were the bread and butter of my Kernel Ridge Regression models, and I was pleasantly surprised to see kernel vector spaces here lol
      It's one of the dark horses of ML
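
A minimal kernel ridge regression sketch in the spirit of this thread, using the Gaussian (RBF) kernel; the data and hyperparameters are arbitrary illustrations:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy data: a noisy sine wave.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# Kernel ridge regression: alpha = (K + lam*I)^{-1} y, f(x) = k(x, X) @ alpha.
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha
print(np.c_[X_test, y_pred])  # predictions track sin(x)
```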

  • @Levi_Ackerman_7
    @Levi_Ackerman_7 2 years ago +81

    We really love watching breakthroughs in science and technology.

  • @markusheimerl8735
    @markusheimerl8735 2 years ago +66

    Love these videos. Gotta say, as much as I wowed at the bubbles around our supermassive black hole in the physics video, I just have an especially warm spot in my heart for mathematics :)

    • @zight123
      @zight123 2 years ago +3

      Same. I know jack about math, but it's so fascinating.

    • @szymonbaranowski8184
      @szymonbaranowski8184 2 years ago +1

      You believe in black holes? Seriously?

  • @Geosquare8128
    @Geosquare8128 2 years ago +43

    hadn't realized that SVMs were being applied to DNNs like that

    • @alany4004
      @alany4004 2 years ago +3

      Geosquare the GOAT

    • @marcelo55869
      @marcelo55869 2 years ago +6

      Support Vector Machines are somehow equivalent to neural networks?? Who knew!?!
      I would love to see the proof. I might lack the fundamentals to understand everything, but it might be interesting anyway...

    • @cyanimpostor6971
      @cyanimpostor6971 2 years ago +11

      This has actually been around for 3 decades now. Since the 1990s in fact

    • @nabeelhasan6593
      @nabeelhasan6593 2 years ago

      Thanks to the RBF kernel

    • @varunnayyar3138
      @varunnayyar3138 2 years ago

      yeah me too

  • @Epoch11
    @Epoch11 2 years ago +25

    These are really great and I hope you do more of these. Hopefully we don't have to wait till the end of the year to get more videos that talk about breakthroughs.

    • @whataboutthis10
      @whataboutthis10 2 years ago +4

      this lol, give us more breakthroughs!

  • @AdlerMow
    @AdlerMow 2 years ago +2

    Quanta Magazine is incredible! Their style makes everything accessible to the interested layman, and it grips you; you can start with any video or article and see for yourself! So thank you, all the Quanta team and writers!

  • @bolducfrancis
    @bolducfrancis 2 years ago +5

    The animation at 5:12 is the last piece I needed to finally understand the diagonal proof. Thank you so much for this!
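
For readers who want the diagonal proof in executable form, here is a small sketch (an illustration under the usual encoding of binary sequences as functions from indices to bits): given any claimed enumeration of infinite binary sequences, flipping the diagonal produces a sequence the list must have missed.

```python
# Cantor's diagonal argument, illustrated on a claimed enumeration of
# infinite binary sequences. Each sequence is modeled as a function
# from index n to a bit.

def claimed_enumeration(i):
    """The i-th sequence in some purported list of all binary sequences.
    (Any rule works here; this one is just an example.)"""
    return lambda n: (i >> (n % (i + 1))) & 1

def diagonal_complement(n):
    """Differs from the n-th listed sequence at position n."""
    return 1 - claimed_enumeration(n)(n)

# The constructed sequence disagrees with every listed sequence somewhere,
# so it cannot appear anywhere in the list.
for i in range(10):
    assert diagonal_complement(i) != claimed_enumeration(i)(i)
print("the diagonal sequence differs from sequence i at position i, for every i checked")
```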

  • @primorock8141
    @primorock8141 2 years ago +94

    It's crazy that we've been able to do so much with deep neural networks and we are only now starting to figure out how they work

    • @ajaykumar-ve5oq
      @ajaykumar-ve5oq 2 years ago +4

      We made machines but we don't know how they perform the task? Sounds counterintuitive

    • @jakomeister8159
      @jakomeister8159 2 years ago +9

      Ever done a task that just works, you don’t know how, it just works? Yeah this is it. It’s actually pretty cool

    • @balazsh2
      @balazsh2 2 years ago +9

      @@ajaykumar-ve5oq More like we can measure how well they perform tasks, so we don't care about the whys :) Transparent statistical methods exist and are widely used; it's just that for AI, black-box methods perform better most of the time

    • @jirrking3461
      @jirrking3461 2 years ago +2

      this video is idiotic, since we do know how they work and we have been visualizing them for ages now

    • @Elrog3
      @Elrog3 2 years ago +12

      Saying we don't know how neural networks work is a stretch of the same caliber as saying we don't know how cars work.

  • @yakuzzi35
    @yakuzzi35 2 years ago +3

    That's what I love about maths: lots of times, something that started out as a game or a fun curiosity turns out to be extremely applicable, and equivalent to something unpredictable, decades later

  • @Irrazzo
    @Irrazzo 2 years ago +22

    1:01 "What happens inside their billions of hidden layers". I think you confused layers with parameters, or weights, here. The largest GPT-3 version, for instance, has 96 layers and 175 billion parameters. (A parameter-counting sketch follows this thread.)

    • @shambhav9534
      @shambhav9534 2 years ago +2

      Parameters are whatever the starting nodes pick up and layers are layers, right? Or are parameters the starting nodes themselves?

    • @Irrazzo
      @Irrazzo 2 years ago +2

      @@shambhav9534 In a simple feed-forward neural network like a multilayer perceptron, you can represent a neuron / node by the equation y = h(w*x + b). x is what goes into the layer that neuron belongs to (if it's the first hidden layer, x is just the unchanged input feature vector), y is what goes out. w are the weights (the edges) connecting all the neurons in the previous layer to the neuron in the layer we're currently looking at, b is a bias. '*' is a dot product. h is a nonlinear activation function. The union of all weights and biases of all neurons across all the layers are the parameters which are learned during training.

    • @shambhav9534
      @shambhav9534 2 years ago +2

      @@Irrazzo Okay I get it now.

    • @Irrazzo
      @Irrazzo 2 years ago +1

      Just one more thing about layers: instead of thinking of layers in terms of the nodes of which they consist, you can also think of them in terms of the data that flows through your network (the x's and y's). Then layers are different, increasingly abstract representations of your data, connected via transformations, or functions. And the complexity, the 'billions', is due to the enormous size of the function space of the overall function (transformation) which the network approximates by a composition of functions which only slightly differ from one to the next.

    • @shambhav9534
      @shambhav9534 2 years ago +2

      @@Irrazzo I understood nothing, but I do think I understand layers. They're layers which modify the starting input, and at the end that input becomes the output. I tried (just tried) to make a neural network back in the day; I think I know the basics.
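
A tiny sketch of the neuron equation y = h(w*x + b) from this thread, also showing why parameter counts dwarf layer counts; the layer sizes are made up for illustration:

```python
import numpy as np

def layer(x, W, b):
    """One feed-forward layer: y = h(W x + b), with ReLU as h."""
    return np.maximum(W @ x + b, 0.0)

# A 3-layer MLP with made-up sizes: 784 -> 512 -> 512 -> 10.
sizes = [784, 512, 512, 10]
rng = np.random.default_rng(0)
params = [(rng.normal(size=(m, n)) / np.sqrt(n), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

x = rng.normal(size=784)
for W, b in params:
    x = layer(x, W, b)

n_params = sum(W.size + b.size for W, b in params)
print(len(params), "layers,", n_params, "parameters")  # 3 layers, ~670k parameters
```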

  • @AUniqueName
    @AUniqueName 2 years ago +7

    These videos are severely underrated. Thank you for the knowledge you share, and hopefully millions of people will be watching these per week. It's so good for people to know about these things

  • @MrMann163
    @MrMann163 2 years ago +49

    It's crazy how much stuff from uni started flowing back watching this. The fact that I'm actually able to understand all these complicated maths is crazy but exciting

    • @matthewtang1489
      @matthewtang1489 2 years ago +10

      I was like, damn... I knew all of these ideas while I was watching it. I guess I can finally taste the fruits of my university education.

    • @MrMann163
      @MrMann163 2 years ago +2

      @@matthewtang1489 They told me the quadratic formula would be important, but no one said I'd ever need to know set theory. Oh such ripe fruits .-.

  • @mathman274
    @mathman274 2 years ago +27

    Interesting. When I was in school, many decades ago, 'we' always had the idea that there's no reason something couldn't exist between aleph-0 (size of N) and aleph-1 (size of R); however, a finger was never put on it. There were wild speculations about fractal dimensions, but that was just a fashionable thing to look at at the time. Interesting where this is going.

    • @ferdinandkraft857
      @ferdinandkraft857 2 years ago +21

      This question was answered in 1964 by Paul Cohen and Kurt Gödel. The Continuum Hypothesis (CH) is _independent_ of the Zermelo-Fraenkel axioms (plus the axiom of choice). In other words, standard mathematics can prove neither it nor its negation. You can, however, extend standard mathematics to include CH or some other axioms. David Asperó et al.'s "breakthrough" doesn't use only standard math. They only proved an implication between two axioms that are known to imply one particular hypothesis that is incompatible with CH...
      The video is unfortunately very superficial and gives the false idea of an "answer" to a problem that, in my opinion, is already answered.

    • @mathman274
      @mathman274 2 years ago +1

      Well... the keyword being 'H' for hypothesis. Of course there's also the "incompleteness theorem", and extending the "axioms" might lead to inconsistency. Indeed "standard math" can't touch it; however, including CH might be a little too much. Maybe I was just too "classically" educated, but still... interesting, as was the video, I think.

    • @Noname-67
      @Noname-67 2 years ago

      @@ferdinandkraft857 Its being independent of ZFC doesn't mean that it's neither true nor false. The axiom of pairing, axiom of infinity, axiom of union, etc. are all independent from each other, and we all know they are true. If anything independent were just a convention, there wouldn't be ZFC as we know it, only ZF.
      Gödel himself believed that the Continuum hypothesis was wrong; without proving or disproving it rigorously, we can still use logical deduction and reasoning to get an agreeable answer.

    • @viliml2763
      @viliml2763 2 years ago +1

      @@Noname-67 "The axiom of pairing, axiom of infinity, axiom of union, etc. are all independent from each other, and we all know they are true."
      Define "true".
      None of them describe the physical universe; there's no reason someone can't say they're false and work with that

  • @kevinvanhorn2193
    @kevinvanhorn2193 2 years ago +37

    Radford Neal explored this same idea of expanding the width of a neural net to infinity over a quarter-century ago, in his 1995 dissertation, Bayesian Learning for Neural Networks. He found that what you get is a Gaussian process. (A quick numerical illustration follows this thread.)

    • @zfyl
      @zfyl 2 years ago +4

      Does this single-handedly reduce all this breakthrough to a simple revisiting of an existing conclusion?

    • @Luizfernando-dm2rf
      @Luizfernando-dm2rf 2 years ago +1

      the real MVP

    • @daviddodelson8870
      @daviddodelson8870 2 years ago +9

      @Gergely Kovács: no. Neal's work dealt with neural networks with a single hidden layer; this breakthrough studies the limit of width for deep neural networks, i.e., many hidden layers.

    • @kevinvanhorn2193
      @kevinvanhorn2193 2 years ago +2

      @@daviddodelson8870 Thanks for the clarification. Strange, though, that it took 25 years to take that next step.
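
A quick numerical illustration of Neal's observation (my own sketch; the widths and the tanh non-linearity are arbitrary choices): as a random one-hidden-layer network gets wider, its output at a fixed input, over random draws of the weights, looks more and more Gaussian.

```python
import numpy as np

def random_net_output(width, x, n_draws, rng):
    """Outputs of n_draws random one-hidden-layer tanh networks at input x.
    Weights are iid N(0, 1); the 1/sqrt(width) scaling keeps variance finite."""
    W = rng.normal(size=(n_draws, width, x.size))   # input -> hidden weights
    v = rng.normal(size=(n_draws, width))           # hidden -> output weights
    h = np.tanh(W @ x)                              # hidden activations
    return (v * h).sum(axis=1) / np.sqrt(width)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
for width in (1, 10, 500):
    out = random_net_output(width, x, n_draws=5000, rng=rng)
    # Excess kurtosis tends to 0 as width grows, as it should for a Gaussian.
    k = np.mean((out - out.mean()) ** 4) / out.var() ** 2 - 3
    print(f"width={width:4d}  excess kurtosis={k:+.3f}")
```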

  • @johnwick2018
    @johnwick2018 2 years ago +6

    I didn't understand a single thing but it is awesome.

  • @AnthonyBecker9
    @AnthonyBecker9 2 years ago +54

    Hmm, I'm not sure how the neural-net-to-kernel-machine result is a breakthrough. Maybe that was left out. But the idea that a neural net divides data points with hyperplanes in high-D space goes back decades.

    • @PedroContipelli2
      @PedroContipelli2 2 years ago +14

      Kernel machines are linear, whereas neural networks are, generally, non-linear. Showing that an infinite-width network can be reduced to a linear one essentially raises suspicion about whether finite neural networks can be simplified in some novel way as well. The consequences could be groundbreaking. (A small demo of where the non-linearity lives follows this thread.)

    • @satishkpradhan
      @satishkpradhan 2 years ago +6

      @@PedroContipelli2 Aren't all layers of a neural network just linear functions of the previous layer? So technically, isn't it possible that under some conditions a multi-layer neural network can be a linear function?

    • @PedroContipelli2
      @PedroContipelli2 2 years ago +17

      @@satishkpradhan The activation function of each layer (sigmoid, tanh, ReLU, etc.) is usually where the non-linearity is introduced.

    • @lolgamez9171
      @lolgamez9171 2 years ago

      @@PedroContipelli2 analog artificial intelligence

    • @joshuascholar3220
      @joshuascholar3220 2 years ago +13

      I stopped at the "nobody knows how neural networks work" and "billions of hidden layers" sentences. MY GOD, why did they have some moron who has no idea what he's talking about write this? And another one read it? MY GOD.
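
To make the exchange above concrete: without the nonlinear activation h, a stack of layers collapses to a single linear map; with it, it does not. A minimal check, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

# No activation: two layers are exactly one linear map with matrix W2 @ W1.
two_linear_layers = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(two_linear_layers, collapsed)

# With a ReLU between them, no single matrix reproduces the map:
relu = lambda a: np.maximum(a, 0.0)
nonlinear = W2 @ relu(W1 @ x)
print(two_linear_layers, nonlinear)  # generally different; the ReLU breaks linearity
```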

  • @gregparrott
    @gregparrott 2 years ago +2

    Just discovered 'Quanta Magazine'. Your articles on Physics, Math and Biology are all top notch!
    Subscribed

  • @saiparepally
    @saiparepally 2 years ago +1

    I really hope you guys continue to publish these every year

  • @KimTiger777
    @KimTiger777 2 years ago +15

    Math is art, as one needs creativity to arrive at new solutions. Big WOW!

    • @zfyl
      @zfyl 2 years ago

      Okay, this is actually a fair point.
      Totally agree

    • @Rotem_S
      @Rotem_S 2 years ago +1

      Also because it's (sometimes) beautiful and can engage deeply

    • @bobsanders2145
      @bobsanders2145 2 years ago

      That's everything though, not just math

  • @nichtrichtigrum
    @nichtrichtigrum 2 years ago +2

    With only a high-school maths background, I couldn't understand any of the concepts in the video. I'd be very happy if you could explain in more detail what a Liouville field actually is, what a Gaussian free field is, and so on

  • @srivatsavakasibhatla823
    @srivatsavakasibhatla823 2 years ago +2

    The last one reminded me of what David Hilbert implied: "Physics is too complicated to be left to physicists alone."

  • @aayankhan6734
    @aayankhan6734 2 years ago +1

    One of the few joys of the end of the year is watching these types of videos... Loved it!

  • @KeertiGautam
    @KeertiGautam 2 years ago +5

    I don't understand much, but I feel happy that good science is happening. It means there's still some sense and logic alive in this world 😄

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 2 years ago +3

    What what what what what? Finally, such a result on the continuum hypothesis! Unbelievable.

  • @miguelriesco466
    @miguelriesco466 2 years ago +14

    Hey, it was pretty nice! Just to clear things up: the continuum hypothesis asks whether aleph_1 is the cardinality, or size, of the real numbers. By definition, aleph_1 is the smallest infinity greater than aleph_0.

    • @IvanGrozev
      @IvanGrozev 2 years ago +3

      We don't know the size of the set of real numbers; we just know it's bigger than aleph_0. It can be aleph_1, aleph_2, ... it can even be monstrously big, like aleph_{omega_1}, etc. And in the current state of the most widely accepted axiomatization of mathematics, called ZFC, it is impossible to solve the continuum hypothesis.
      One watching this video gets the impression that the real numbers are aleph_1 in size, which is not true.

    • @sweetspiderling
      @sweetspiderling 2 years ago +1

      @@IvanGrozev yeah this video is all wrong.

  • @jman997700
    @jman997700 2 years ago +12

    This is the best news I've heard all year. People want to know about the good news too.

    • @zfyl
      @zfyl 2 years ago

      What good is in these things? Whom will this benefit?

    • @nullbeyondo
      @nullbeyondo 2 years ago +1

      @@zfyl If you want a really accurate answer, then it is "what" will this benefit, and the answer is mainly all of our technology. And only if these results are used right would they improve the quality of life overall; no guarantee on human behavior, though.

  • @Psychonaut165
    @Psychonaut165 2 years ago

    Out of all the science channels I understand nothing about, this is one of my favorites

  • @quicksilver0311
    @quicksilver0311 2 years ago +1

    Am I the only one who was totally clueless for all 11 minutes? This video literally gives me "What am I doing with my life?" vibes, and I love it. XD

  • @frankferdi1927
    @frankferdi1927 2 years ago +1

    What I dislike is that many videos, this one included at some points, reward before there is proof, stimulating excitement in the viewers.
    Generating publicity is important, I do know that.

  • @warpdrive9229
    @warpdrive9229 2 years ago +2

    I wait for this video eagerly every year! Much love from India :)

  • @chilling00000
    @chilling00000 2 years ago +31

    Isn't the equivalence of wide NNs and kernels known for a long time already…?

    • @satishkpradhan
      @satishkpradhan 2 years ago +3

      Even I thought so... but as I saw all the comments of people in amazement, I was confused. Thank God someone else also thinks so... else I'd have had to reread everything I had learned... or revisit my analytical thinking.

    • @StratosFair
      @StratosFair 2 years ago +6

      It is in fact (part of) what my Master's thesis was about, and I am quite confused, because indeed this has been known for some time already

    • @David-rb9lh
      @David-rb9lh 2 years ago

      It's about DNNs here

    • @StratosFair
      @StratosFair 2 years ago

      @@David-rb9lh I did a bit of digging, and it turns out that the paper which introduces the result (wide deep neural networks are equivalent to kernel machines) was in fact written in 2017. Now don't get me wrong, this is a very nice result, but by no means a 2021 breakthrough, unfortunately.

    • @David-rb9lh
      @David-rb9lh 2 years ago

      @@StratosFair I agree with you. I haven't dug too much into the details, to be honest.

  • @binman5753
    @binman5753 2 years ago +1

    Watching this and not understanding anything makes these videos all the more magical 💫

  • @thanhtunghoang3448
    @thanhtunghoang3448 2 years ago +8

    The first breakthrough is called Neural Tangent Kernels, first introduced in 2018 by Arthur Jacot at EPFL. He was, at that time, not a Google employee. Attributing this breakthrough to Google is unfair and misleading.

    • @WilliamParkerer
      @WilliamParkerer 2 years ago

      No one's attributing it to this Google employee

  • @caracasmihai01
    @caracasmihai01 2 years ago +1

    My brain had a meltdown when watching this video.

  • @andraspongracz5996
    @andraspongracz5996 2 years ago +10

    Got halfway through the video, and stopped. I wonder if the creators ever asked the scientists in the video (or any expert, really) to check the final version of the narration. It is full of inconsistencies, and in the case of the second segment (continuum hypothesis) just completely off. We have known for nearly 60 years that the continuum hypothesis is independent of ZFC (the standard system of axioms of set theory). It was famously Paul Cohen who proved this, and he was the one who developed the technique of forcing (in order to prove this result and others). He even got a Fields Medal for his work. I'm not sure about the relevance of the Asperó-Schindler theorem ("Martin's Maximum++ implies Woodin's axiom (∗)") as I'm not a set theorist, but it must be much more subtle than what the video suggests. It has been well understood for decades what the possible aleph indices of the continuum can be. In particular, it is not necessarily aleph_1, as suggested early on in this video, and contradicted later. The video has very nice graphics and catchy phrases, but the content is just wrong. It was quite cringey to listen to it, really.

    • @pingdingdongpong
      @pingdingdongpong 2 years ago +1

      Yea, I agree. I know enough set theory (and it ain't much) to know that this is a bunch of hogwash.

    • @Macieks300
      @Macieks300 2 years ago +3

      Yes, I agree. Set theory basics are easy enough for undergraduates to understand, so it's the most approachable subject among all of these videos, but hearing how wrong their explanation is, I now must wonder how wrong their explanations of the other discoveries are.

  • @Quwertyn007
    @Quwertyn007 2 years ago +7

    6:33
    Saying an axiom is "likely true" makes no sense, unless it were to follow from other axioms and thus be unnecessary. Axioms are what you start with; you can start with whatever assumptions you want, and the best they can do is not contradict each other and lead to interesting/useful mathematics. Math doesn't take the physical world into account; it is only based on axioms.
    Maybe you could make an argument about this axiom likely being related to the physical world in some way, which in some non-mathematical sense would make it "true", but that seems rather difficult.

    • @Quwertyn007
      @Quwertyn007 2 years ago

      @FriedIcecreamIsAReality I think you make a good point, but I don't think many people would understand "likely true" as "intuitively making sense". That's just not what "true" means.

    • @Quwertyn007
      @Quwertyn007 2 years ago

      @FriedIcecreamIsAReality I'm still just a mathematics student, so I'm not in the best position to judge whether it really is used this way, but this video isn't aimed at professors, so I think the phrasing is at least misleading

  • @mdoerkse
    @mdoerkse 2 years ago +47

    Interesting that all three breakthroughs have to do with connections between different theories, and two of them map something useful to something easy to compute.

    • @zfyl
      @zfyl 2 years ago

      what useful?

    • @mdoerkse
      @mdoerkse 2 years ago +3

      @@zfyl Deep neural nets and quantum physics/gravity.

    • @seenaman96
      @seenaman96 2 years ago

      I learned about kernels back in 2017 when using SVM... How are kernels breakthroughs? If you have inputs that are not activated in 1 dimension, exploding to a higher dimension will not include them... So it's fine to skip the work, DUH

    • @mdoerkse
      @mdoerkse 2 years ago +2

      @@seenaman96 I'm not a mathematician and I don't know anything about kernels, but the video wasn't saying that kernels are the breakthrough. It's saying they are the old, easily computable thing that neural nets can be mapped to. The mapping is the breakthrough.

  • @pvic6959
    @pvic6959 2 years ago +38

    I love how Google showed up in both the physics and math/comp-sci breakthrough videos. It shows how much they're doing and how much they're pushing humanity forward little by little. Love them or hate them, it's so cool to see science being done!

    • @martinschulze5399
      @martinschulze5399 2 years ago +13

      Google is not altruistic ;)

    • @LA-eq4mm
      @LA-eq4mm 2 years ago +5

      @@martinschulze5399 as long as someone is doing something

    • @willlowtree
      @willlowtree 2 years ago +20

      I have great respect for the scientists working at Google, but as a company it is inevitable that their goals are not always aligned with humanity's interests

    • @pvic6959
      @pvic6959 2 years ago +3

      @@willlowtree Yeah, my comment wasn't about goals or anything. Just that they're doing so much science and sharing a lot of it with the world

    • @baronvonbeandip
      @baronvonbeandip 2 years ago +5

      @@martinschulze5399 Water is wet. Nothing is altruistic.

  • @monad_tcp
    @monad_tcp 2 years ago +13

    So they proved the equivalence between convolution kernels and neural networks. As someone who does research in computer graphics, I always had this feeling that they were very close, as you could use them together and sometimes even replace one with the other.

    • @szymonbaranowski8184
      @szymonbaranowski8184 2 years ago +1

      Doesn't seem like any great or surprising breakthrough then.

  • @droro8197
    @droro8197 2 years ago +7

    Talking about the continuum hypothesis without mentioning the results of Cohen and Gödel is pretty much a crime. Basically, the continuum hypothesis is independent of the rest of the set theory axioms and can be assumed to be true or false. I guess the real problem here is talking about a very heavy math problem in a 10-minute video…

  • @Amir_404
    @Amir_404 2 years ago +3

    Bit of a nitpick, but "neural networks" in computer science (or at least the ones that people use to solve problems) are not comparable to the neural networks in the brain. The two fundamental differences are that the computer ones are "feed-forward" and synchronous. In English: every layer fires at the same time and there are no loops. It is not that we can't make a neural network more similar to a brain (there is a lot of interesting research going on), but nobody has found an effective way of training those types of networks.

  • @viniciush.6540
    @viniciush.6540 2 years ago +5

    "This enables us to compute things that physicists don't know how to compute." Oh man, how I love this phrase lol

  • @badalism
    @badalism 2 years ago +28

    We have known for a while that an infinite-width neural network + SGD is equivalent to a Gaussian process.

    • @zfyl
      @zfyl 2 years ago +6

      thanks for single-handedly eradicating the breakthrough level of that paper 😅

    • @Bruno-el1jl
      @Bruno-el1jl 2 years ago +2

      Not for DNNs though

  • @josueibarra4718
    @josueibarra4718 1 year ago

    Gotta love how Gauss still somehow manages to butt in to present-day, groundbreaking discoveries

  • @SolaceEasy
    @SolaceEasy 2 years ago +6

    Man, math's mysterious.

  • @user-ei8yd3tm9l
    @user-ei8yd3tm9l 2 years ago +5

    Towards the end of the video, I was like: this is pretty much why my naive thought of majoring in pure math got crushed after first year of university... math before university is nowhere close to real hard-core math, which is a different beast altogether.

  • @kamabokogonpachiro6797
    @kamabokogonpachiro6797 2 years ago +1

    "When you watch a video, you get the sensation of understanding, but you never actually learn anything" ~ Veritasium

  • @deleted-something
    @deleted-something 1 year ago +1

    I knew the moment they started speaking about the continuum hypothesis that this was gonna be interesting

  • @gettingdatasciencedone
    @gettingdatasciencedone 2 years ago

    I love these intro videos that try to convey the complexity of recent advances.
    One small problem with this video is that the opening line is not, strictly speaking, true. The 1950s neural networks did not use the same learning rules as the human brain. They were very simplified models based on a bunch of assumptions.

  • @NovaWarrior77
    @NovaWarrior77 2 years ago +12

    These are awesome! I'm glad we don't just have to look back to textbooks to see cutting-edge advances!

  • @charlesvanderhoog7056
    @charlesvanderhoog7056 2 years ago +1

    Kernel machines new? We used variance analysis in multiple dimensions as far back as the 1970s, and it was developed into what is called positioning in marketing. These techniques enable the researcher to extract immense amounts of data from small samples.

  • @dEntz88
    @dEntz88 2 years ago +22

    With regard to the continuum hypothesis: did I understand correctly that they are no longer operating in ZFC, but added more and stricter axioms? Wouldn't this imply that the continuum hypothesis is still undecidable in ZFC? (The axiom implications are summarized after this thread.)

    • @hunterdjohny4427
      @hunterdjohny4427 2 years ago +20

      Yes, the continuum hypothesis has been known to be undecidable in ZFC since Gödel and Cohen. It has also been known for a while that if you were to add either of the axioms MM++ or Woodin's axiom (*) to ZFC, then the continuum hypothesis would be false.
      Now, the paper by David Asperó and Ralf Schindler proves that (*) is weaker than MM++. This of course has no bearing on the continuum hypothesis at all unless you consider either of them an axiom. How the video chooses to present this is quite odd. I guess the point they are trying to make is that since they were always considered rival axioms, and we now know that one actually implies the other, we might just add MM++ as an axiom to ZFC. Woodin stated something along the lines that we shouldn't accept MM++ or (*) as an axiom because MM++ is incompatible with the natural strengthening of (*). Regardless of what that actually means, it at least should be clear that there are objections to simply accepting MM++ as an axiom.

    • @dEntz88
      @dEntz88 2 years ago +4

      @@hunterdjohny4427 Thank you. I also found it weird how they framed it in the video. At least to me it came across as implying that the results could also be used in ZFC alone. Hence my question.

    • @dEntz88
      @dEntz88 2 years ago +2

      @FriedIcecreamIsAReality But isn't that just creating new problems? If I remember Gödel correctly, every sufficiently powerful system of axioms will run into problems similar to the continuum hypothesis. My issue is that the video, at least as I perceived it, framed the issue in a way that implies the result is "more true". But the notion of truth depends solely on the axioms we choose and is subjective to a certain extent.

    • @hunterdjohny4427
      @hunterdjohny4427 2 years ago +3

      @@dEntz88 Adding an axiom to ZFC wouldn't create new problems. Every theorem that was previously provable (or refutable) is still provable (or refutable), and some that were previously undecidable may now be provable (or refutable). So by adding an axiom your theory gets more 'specific'. What Gödel showed is that this process of adding axioms can never lead to a system of mathematics in which every statement is provable (or refutable), unless you add many, many axioms in such a way that your set of axioms loses its recursiveness. This is hardly desirable, since the set of axioms being non-recursive means that if I write down a statement you have no way of telling whether it is an axiom or not, nor will you be able to tell whether a given proof is valid. Our only option is to accept that any decent theory of mathematics (decent as in powerful enough to express basic arithmetic) can't be complete.
      Your issue with the video is correct of course; they pretend statements have an absolute truth value regardless of the system of axioms worked in. What is said at 6:33 is especially bizarre: [MM++ and (*) are both likely true] makes no sense whatsoever, since both axioms are independent of ZFC.

    • @dEntz88
      @dEntz88 2 years ago +2

      @@hunterdjohny4427 Thank you for your explanation. I only have a somewhat superficial knowledge of that area of maths and was actually wondering about the issues you elaborated on.
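
The implication structure discussed in this thread, written out (standard facts about these axioms, stated independently of the video):

```latex
% Both candidate axioms settle the size of the continuum the same way:
\mathrm{MM}^{++} \implies 2^{\aleph_0} = \aleph_2, \qquad (*) \implies 2^{\aleph_0} = \aleph_2
% hence each refutes CH. The Asperó-Schindler theorem links the two:
\mathrm{ZFC} \vdash \big(\mathrm{MM}^{++} \implies (*)\big)
% while CH itself remains independent of ZFC alone (Gödel 1940, Cohen 1963).
```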

  • @warpdrive9229
    @warpdrive9229 2 years ago +1

    This was just awesome! See you guys next year again. Much love from India :)

  • @lebiquo8501
    @lebiquo8501 2 years ago +3

    God, I would love a "breakthroughs in chemistry" video

  • @richardfredlund3802
    @richardfredlund3802 2 years ago

    That equivalence between infinite-width NNs and kernel machines is really a very surprising and interesting result.

  • @MadScientyst
    @MadScientyst 2 years ago

    I'd sum this up with a reference to the title of an Eric Temple Bell book: 'Mathematics, Queen and Servant of Science'... a brilliant read and exposition, as per this Quanta snippet!!

  • @YouChube3
    @YouChube3 2 years ago

    Natural numbers, floating points, and that third set I couldn't bear even to try to explain. Thank you, narrator?

  • @jordanweir7187
    @jordanweir7187 2 years ago +1

    I love how you guys don't leave out the gory details, that's what we all wanna see hehe. Also great to have an update each year

  • @NickMorozov
    @NickMorozov 2 years ago +4

    So, do I understand correctly that the neural networks are hyperdimensional? Or use extra dimensions for calculations? I'm sure I don't understand the ramifications, but it sounds incredibly cool!

    • @sheriffoftiltover
      @sheriffoftiltover 2 years ago +1

      Dimension in this context just means an additional parameter, from my understanding. E.g., for a light, one dimension might be wavelength, one might be frequency and another might be luminosity

  • @JustNow42
    @JustNow42 1 year ago

    If you would like to crack anything, try group theory. Split observations into groups and then use groups of groups, etc.

  • @akshaysingh11990
    @akshaysingh11990 2 years ago

    I wish I could live a million years and watch all the content created, forever

  • @domdubz7037
    @domdubz7037 2 years ago

    2021 and Gauss is still with us

  • @bobSeigar
    @bobSeigar 2 years ago

    John Conway started my love for math. Rest in peace.

  • @IbadassI
    @IbadassI 2 years ago +1

    I didn't understand most of the last half of the video, but I watched it anyway 🤯

  • @Ashallmusica
    @Ashallmusica 2 years ago

    I'm the least educated person watching this (I only completed junior school), now 21 years old. I just get curious about different things, and clicking this video got me to learn a new word: Aleph. It's amazing to me; I still didn't understand much here, but I love this.

  • @piercevaughn7000
    @piercevaughn7000 2 years ago +1

    Excellent intro
    Edit: excellent everything. I'm pretty clueless on all of this, but this was awesome

  • @deantoth
    @deantoth 2 years ago +1

    I've watched several of these breakthrough videos, and although they're extremely interesting, you simplify each concept so much that rather than clarifying the topic, you make it more opaque. And just when I think you are about to provide some insight, you move on to the next segment. You could spend a few more minutes on each topic... OR make a full video per topic, please! Thank you for your hard work.

  • @f_r_e_d
    @f_r_e_d 2 years ago

    I have no idea what I just saw, but here I am nodding my head while sipping coffee, agreeing and mumbling "ah yes, of course". Anyway, new sub

  • @EM-qr4kz
    @EM-qr4kz 2 years ago

    You have a square with vertices A, B, C, D. Take all parallel straight segments from side AB to side CD. This set of line segments is aleph_1 in size... greater than the set of straight segments that make up an infinite line... This is my observation. I do not know if it is true, but it is interesting that we can say when a body is one-dimensional or two, not in terms of geometry but through set theory.

  • @movax20h
    @movax20h 2 years ago +4

    Deep neural networks don't have billions of layers. The deepest I have ever heard of was 140 layers, which is insanely deep, actually.

    • @willlowtree
      @willlowtree 2 years ago +1

      yeah, they must've meant trainable parameters

  • @mlguy8376
    @mlguy8376 2 years ago +3

    I would love to be in a world where people argue about these mathematical or scientific theories with as much vigour as they do depending on whether they are red or blue in terms of party alignment.

    • @adamedmour9704
      @adamedmour9704 2 years ago +1

      Careful what you wish for

    • @Ivan-hb3co
      @Ivan-hb3co 2 years ago

      All we need is natural numbers

  • @nateb3277
    @nateb3277 2 years ago

    I discovered Quanta only a few months ago but already love coming back to them for this kind of quality content on new developments in science and tech :) Like, it's well written, well animated, and easily understood *chef's kiss*

  • @ilanhalioua9807
    @ilanhalioua9807 2 years ago +2

    I'm pursuing a bachelor's degree in Applied Mathematics and Computing. Is this type of investigation more related to pure or to applied mathematics?

    • @nayjer2576
      @nayjer2576 2 years ago

      Where do you study?

    • @ilanhalioua9807
      @ilanhalioua9807 2 years ago

      @@nayjer2576 UC3M (Madrid, Spain)

    • @nayjer2576
      @nayjer2576 2 years ago

      @@ilanhalioua9807 Mh, ok. I live in Germany, and applied sciences and pure sciences are mostly split into two types of universities. And faculties of applied sciences are not so research-orientated; you can't do a PhD, for example, at an applied university.
      Is applied mathematics taught in the same institution as pure maths? If so, can you do a PhD after a Master's degree in applied mathematics?

  • @andreadws7151
    @andreadws7151 2 years ago

    Honestly, I lack the scientific knowledge to understand most of this stuff, but I found it fascinating.

  • @Pramerios
    @Pramerios 2 years ago +1

    Bravo!! This was SUCH an awesome video! Definitely saving and coming back!

  • @robertschlesinger1342
    @robertschlesinger1342 2 years ago +1

    Very interesting, informative and worthwhile video. Be sure to read the linked articles.

  • @TheMR-777
    @TheMR-777 2 years ago

    4:28 The first person who came to my mind: *PROFESSOR!*

  • @nicholasb1471
    @nicholasb1471 2 years ago

    This video makes me want to do my calculus 3 homework. If only it weren't winter break right now.

  • @TheDavidlloydjones
    @TheDavidlloydjones 2 years ago

    There's an olde story that Nixon asked Chairman Mao, "What do you think of the confrontation between Greece and Rome?" and Mao, of course, said "It's too early to say."
    2022 is fer shure not the time to be saying what the most important stuff last year was.
    Still, of course, props to these three groups of fine thinker-speculators -- and to the video-maker for such a good job of serving up the temptations.

  • @leGEEK84
    @leGEEK84 2 years ago +4

    I don't understand. I thought Cohen's forcing was used to prove the independence of the continuum hypothesis from the ZF axioms? Are they trying to find satisfying axioms that can prove or disprove the hypothesis, or what is the question they are trying to answer here?

    • @arnouth5260
      @arnouth5260 2 years ago +1

      I think that the continuum hypothesis is independent of ZFC, but if we assume some other axioms (which are likely true, though also independent) it is true. But I’m not sure.

    • @drdca8263
      @drdca8263 2 years ago

      @@arnouth5260 indeed, the idea is showing relationships between different axioms we could add, which could give us reason to choose to believe one over another

    • @kazedcat
      @kazedcat 2 years ago +3

      They have shown that two of the possible axioms we can add to ZFC are equivalent, which makes these axioms more favorable.

    • @williamzame3708
      @williamzame3708 2 years ago

      It makes no sense to ask whether axioms are "true".

    • @drdca8263
      @drdca8263 2 years ago

      @@williamzame3708 Debatable. Not saying that you're wrong in claiming that [the question of [whether an axiom is "true"] is an invalid/meaningless question], but I don't think the question as to whether you are right in that claim has been settled.
      In fact, halting-problem-related questions include some questions which I am strongly inclined to believe have definite answers, but which are not answered by any sound and computably enumerable axiom system.
      (Though, even if I'm right about this, that needn't imply that there is any fact of the matter regarding "whether the continuum hypothesis is true".)
      It seems to me that any question that can be equivalently expressed in the form "does this particular Turing machine (when started on an empty tape) halt?" ought to have a definite answer.
      Then, for whatever sound and computably enumerable axiom system you choose, there will be Turing machines which do not halt but for which your axiom system does not prove that they don't halt.
      Even further, though a fair bit less strongly, I feel inclined to believe that for any predicate over the natural numbers such that [for every natural number there is a fact of the matter as to whether the predicate is true of that number], and for any Turing machine equipped with an oracle for that predicate, there is a fact of the matter as to whether that Turing-machine-with-oracle, when started on an empty tape, halts.
      I think the question of "is there a correct answer to whether the continuum hypothesis is true?" is philosophically difficult.

  • @vasdgod
    @vasdgod 2 years ago

    The prince of mathematics, Carl Friedrich Gauss, is still a legend today. Man, he is the greatest mathematician I have ever seen.

  • @rabbitazteca23
    @rabbitazteca23 2 years ago +16

    As someone who is on the way to studying deep learning... I always wondered why my professors and other people tell me they don't know what's happening inside these deep neural networks lol. In my mind, if you programmed it, shouldn't you know what calculations it is going through and what it's doing? This video gave me a pretty good idea why lol

    • @SanjaySingh-oh7hv
      @SanjaySingh-oh7hv 2 years ago +21

      Except that deep neural networks are not "programmed" in a formal language. They are "trained" in a manner roughly comparable to how people are trained to do a job or learn a concept. You might as well ask a math teacher to specify the neurons and the connections between them that enable a student to do arithmetic.
      Furthermore, if you could know how the neural networks are doing it, why bother with a large, complex neural network when you could simply code it up as an algorithm? Neural networks are for those problems for which there is a lot of data, but no algorithm exists that specifies the computations required to solve a given task.
      So far as I know, the only people really interested in what a deep neural network is doing are those that are constructing cognitive models of human brain activity. Everyone else, Google included, is only concerned with whether it saves them money.

    • @zerotwo7319
      @zerotwo7319 2 years ago +1

      @@SanjaySingh-oh7hv 'Training' is just a vector that adjusts the weights and biases.

    • @SanjaySingh-oh7hv
      @SanjaySingh-oh7hv 2 years ago +6

      @@zerotwo7319 Well, setting aside the details of neural network learning algorithms (unless you want to discuss them), the point here is that once the training is done, it's a big mishmash of connections and weights that is so complex no one could understand it. So if, for example, the neural network is tripping up on some input and not getting the correct response, no one could go in there and say, "Aha! This is the weight or bias we need to adjust to this other value to fix the neural network." In contrast, it's possible to go to a specific line of code in a program and fix it, because what it's doing and what variables it shares with other parts of the program are clearly laid out in a formal language. The only way to fix a neural network that is not quite correct is with more training, rather than a neural network version of brain surgery.

    • @rabbitazteca23
      @rabbitazteca23 2 years ago

      @@SanjaySingh-oh7hv Thank you. I have taken Data Science 1, which is essentially just data visualization and presentation, and I am currently taking Data Science 2, which is more on machine learning. Next term I'll be taking deep learning. I find it to be such an exciting topic, though very confusing, if I am being honest. I appreciate your input, though! You are right that most deep learning I see being put into production is usually made for the purpose of money-making rather than for the sake of advancing science and discovery itself.

  • @nitingupta2738
    @nitingupta2738 2 years ago +1

    Now it's clear that mathematics is infinite... no matter how much you learn, it's never over...

    • @AaronMorrisTheSteamFox
      @AaronMorrisTheSteamFox 2 years ago

      "Our work is never over."

    • @Trucmuch
      @Trucmuch 2 years ago

      Maths is indeed infinite, but its infinity is merely a puny aleph_0 infinity 😉

  • @J3Compton
    @J3Compton 2 years ago

    Love this! It would be nice to have the URLs of the papers here if possible

  • @TheTrogg
    @TheTrogg 2 years ago +1

    The brain-structure breakthrough from the other video should influence the advancement of deep neural networks.

  • @bingeltube
    @bingeltube 2 years ago +1

    Quanta Magazine, please provide the citation for the paper by Yasaman Bahri

  • @VeridisQuo313
    @VeridisQuo313 2 years ago

    Like Dr. James Grime said on Numberphile, "Say Euler or Gauss, you're probably going to be right" hahahaha

  • @nathanlloyd774
    @nathanlloyd774 2 years ago +2

    Me: *develops a deep neural network*
    Friend: Damn, how's it work
    Me: tf am I supposed to know??!!

  • @ianbryant
    @ianbryant 2 years ago +5

    Idk about the whole kernel machine thing. Tbh the whole "neural networks are black boxes" thing isn't really true. Neural networks are trained via backpropagation, which is a calculus method for optimizing the parameters of a function so as to either minimize or maximize it over a set of inputs. In the case of minimization, I think it's enough to say: you take the derivative of the overall function with respect to each parameter, negate that derivative and multiply it by a small constant, add it to the parameter, and repeat that process for all training samples. Any other way of looking at it is just an abstraction. As for an explanation of how it works, I've just provided the best one possible. It works because of calculus, and because it must work.
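
The update rule described in the comment above, as a runnable sketch; the toy least-squares problem and learning rate are arbitrary choices:

```python
import numpy as np

# Fit y = a*x + b by gradient descent on the squared error:
# exactly the "negate the derivative, scale it, add it" loop above.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X - 0.5 + 0.05 * rng.normal(size=100)

a, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (a * X + b) - y          # residuals
    grad_a = 2 * np.mean(err * X)  # d/da of mean squared error
    grad_b = 2 * np.mean(err)      # d/db of mean squared error
    a -= lr * grad_a               # step against the gradient
    b -= lr * grad_b
print(a, b)  # approaches the true values 3.0 and -0.5
```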

  • @lilysceeliljeaniemoonlight
    @lilysceeliljeaniemoonlight 1 year ago +1

    I feel like my brain is a deep neural network!

  • @ryanrobinett9176
    @ryanrobinett9176 2 years ago +1

    "But there's a dark mystery here: We have no idea how deep neural networks work."
    I believe this is false. We have known that artificial neural networks are able to approximate arbitrary continuous functions thanks to Cybenko's universality theorem from 1989. While the theorem does not give constructive means for approximating these functions better or faster than the state of the art, it certainly gives a fundamental explanation as to why neural networks work.