0:00 Introduction by Prof. Lex
1:04 Fundamental nature of reality: Does God play dice? (Refers to Albert Einstein)
1:54 Philosophy of science: Instrumentalism and Realism
4:08 The unreasonable effectiveness of mathematics [1][2]
6:08 Math and simple underlying principles of reality
7:26 Human intuition and ingenuity
8:56 Role of imagination (Refers to Einstein's special relativity)
10:00 Do we (or will we) have tools to describe the process of learning mathematically? (Refers to Hooke's microscope) [3][4][5]
12:16 From a mathematical point of view: What is a great teacher?
13:48 Mechanism in learning and the essence of a duck (Bumper-sticker material. Quack quack!!)
16:58 How far are we from integrating the predicates? (Refer to the duck discussion to understand this question)
18:17 Admissible set of functions and predicates (Talks about VC theory [6])
23:01 What do you think about deep learning? (Mentions Churchill's book "The Second World War" [7], shallow learning [8])
27:57 AlphaGo and the effectiveness of neural networks [9]
30:46 Human Intelligence and Alan Turing
33:34 Big-O Complexity and Worst Case Analysis
38:49 Opinion on how AI is framed as coding to imitate a human being
39:44 Learning and intelligence
42:09 Interesting problems in statistical learning (Mentions the digit recognition problem and the importance of intelligence)
48:48 Poetry, Philosophy and Mathematics
50:40 Happiest Moment as a Researcher
References :
[1] Wigner, Eugene P. "The unreasonable effectiveness of mathematics in the natural sciences." In Mathematics and Science, pp. 291-306. 1990.
[2] www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableEffectiveness.pdf
[3] th-cam.com/video/2gtrkxtsQ2k/w-d-xo.html
[4] books.google.com/books?hl=en&lr=&id=ISP_gRwuz94C&oi=fnd&pg=PR1&dq=Micrographia+hook&ots=LF1VWdxjQg&sig=Qca7QzxkynZXc4AGy0YldNdQP_k
[5] Hooke, Robert. "Micrographia: Or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses with Observations and Inquiries Thereupon." Royal Society: London, UK, 1665.
[6] www.cs.cmu.edu/~bapoczos/Classes/ML10715_2015Fall/slides/VCdimension.pdf
[7] www.goodreads.com/book/show/25587.The_Second_World_War
[8] files.meetup.com/18405165/DLmeetup.pdf
[9] www.imdb.com/title/tt6700846/
Thanks
You're awesome!
Thanks.
Keep revisiting this and slowly understanding more. This may be the best podcast on the channel.
This is the most underrated interview I have ever come across. It deserves MILLIONS of Views. A genius who had a brilliant idea 30 years before he was appreciated
I appreciate you sharing this with us all Lex. Gratitude.
Hey Lex, Thanks for making this content free and accessible online! Very generous and much appreciated.
I can't remember the last time I've really enjoyed a great conversation like this one. These are good questions by Lex. And I am so excited and thrilled by the intelligence of Vladimir Vapnik.
I wish Lex would've asked the meaning of life question to Mr. Vapnik, that is always my favorite part of every pod. Lex, round 2 please! Glad to know Mr. Vapnik is still alive.
Oooh wow, I didn't realize how lucky we were, podcast #71 is the 2nd round. Awesome, let's gooo!
I have to express my gratitude for uploading stuff like this. Thanks so much, Lex, and thanks to Dr. Vapnik for taking the time to express some of the insights he has gained throughout his life.
Fascinating. Mr. Vapnik's pure mathematics arguments are a sharp contrast and a welcome viewpoint on learning.
Maybe a round 2 in many of your earlier pods, including Mr. Vapnik here?
This is fascinating. I had to pay more attention to appreciate the detail in Mr. Vapnik's arguments. I feel Lex was outmatched by just the pure mathematical arguments of Mr. Vapnik, which is fair.
It would be hard for anyone who isn't a pure mathematician to contest him and have a debate. It would be astonishing, though, to see a debate or discussion between mathematicians of this level. Maybe Lex can go into way more technical podcasts than the general, abstract, and cultural pods he is doing more of these days. Though I still love that he is still doing technical pods on various scientific topics.
Maybe a round 2 in many of your earlier pods, including Mr. Vapnik here?
Thank you, Lex. It was very interesting to listen to Professor Vladimir Vapnik.
Three years later
Hi Lex, I have enjoyed many of your podcasts and was very happy and very interested to see you did these interviews with Vladimir Vapnik. It would be extremely interesting if you would interview Hava Siegelmann. She was, among many other things, the co-inventor of Support Vector Clustering with Vapnik; she, in fact, improved his labeled clustering to an unlabeled clustering algorithm - becoming one of the most widely used in industry. She is the inventor of Super-Turing computation, the only functional alternative to Turing computation. She was the founder and director of DARPA’s Lifelong Learning program for the past four years. Lifelong Learning is the most advanced program for AI capable of learning in real time and applying learned experience to previously not experienced circumstances. I would love to see an interview! Thanks, Eric
Another great video. Thanks for that amazing content, Lex.
The legendary Vapnik !!! Thank you Lex !
This is so good, wish there were more guests like the one in this vid nowadays too
That was real good interview, thanks for sharing
Based on the number of views ... this podcast with Vapnik is greatly underrated
Wow! Incredible. What an interview. Like a series of Zen koans in mathematical form. I especially loved Dr. Vapnik's discussion of what a great teacher does. Two questions: 1) as physics drives deeper into the nature of reality will we find that math is not just a model but can fully represent, i.e. is, reality; and 2) if other universes exist do they have the same mathematics? Thanks!
Wow what an interesting conversation, thank you so much Lex for the video, really appreciate it and looking forward to more of such videos, cheers
Invariance - 43:53 When mathematicians first created deep learning, they immediately recognized that it uses way more training data than humans need. How do you decrease the training data by 100x and still have high enough success? That is the real question of learning and intelligence. - Vapnik
Just the other day I was thinking about "how come ideas are generated in different parts of the world within a definite time period simultaneously?". Glad to hear that a prominent mathematician thinks the same way (31:34).
It's Platonic and poetic. And I have heard many mathematicians say this sort of thing. Ramanujan is also a great example that makes this theory interesting.
Thank you for uploading such a beautiful interview! I enjoyed this video so much!
Absolutely amazing video!
The duck conversation was very intriguing and enjoyable.
This is a really good mental and hearing workout, his accent is really hard to listen to, but I like it
What a spectacularly intelligent person. A very different perspective than mainstream machine learning media.
Beauty and poetry! Again, thanks Lex!
exactly !!! this is like "... songs, paintings, writings, dance, drama, photography, carpentry, crafts, love, and love ..."
1. "I'm not sure that intelligence is just inside of us. It may also be outside of us."
2. "I know for sure that you must know something more than digits."
3. Invariance theory might be the hope of understanding intelligence?
Very insightful. Learned a lot about ducks
I think at 7:13 he says "residuals", not "details" (as in the subtitles). That's an important difference for the meaning of what he's saying.
This was incredible.
With each interview, I'm getting more interested in the subject. Thank you for the great content!
Thanks a lot for the podcast, it was very interesting to listen to.
I can't help but wonder if professor Vapnik could have expressed his thoughts a bit better if the interview was done in Russian.
I am pretty sure the answer is yes. He's got lots of knowledge, wisdom in him, unfortunately the communication bottleneck is language.
@@artlenski8115 If the answer is yes then it's a shame. I'm sure Lex could have done the interview in Russian and then translated it in the subtitles. Although that would be much more time-consuming to prepare the video. I guess you can't have the best of both worlds.
@@343clement The answer is absolutely yes. I think about this a lot. A lot of brilliant minds are lost to history due to this language bottleneck. Perhaps the best approach for Russian speakers, I think, is to mix Russian and English together as I feel based on topic and then later translate, but I haven't tried that yet. It would be tough on many levels. But you've inspired me to at least try.
@@lexfridman I cannot thank you enough for taking the time to edit and upload these videos, большое спасибо! By all means, please experiment with the format of the interviews. By the way, you playing the guitar while riding in Black Betty is the coolest thing ever :)
@@lexfridman That would be awesome, considering how many Russian speakers are here. :)
really interesting conversation, thank you!
great work , thanks both
Great conversation! But I beg to differ with Vladimir Vapnik on the role of imagination in discoveries. Imagination and human intuition play an active role in extending existing laws and axioms and in constructing theories to fit observations. What he worked on might not have required imagination and intuition, but when it comes to theorizing and extending existing laws, or the language of mathematics itself (or physics), human intuition and imagination will be essential.
Every sub-domain people specialize in will have its own unique demands.
Well, I believe it is clear that Vladimir is simply talking about his personal life experience; of course every person has a different life experience. Maybe in Einstein's discoveries imagination had a great role.
thanks Lex, that was great!
haha I liked his response to the AlphaGo question!
On the other hand, I think it's misleading. Just like in maths, a problem's difficulty should be gauged by how hard it seems before solving it, not how hard it is in hindsight.
What a brilliant mind !
Thanks Lex, this talk was amazeballs!
Anyone got what the MIT guy's name was @52:27?
Dudley? Or something like that
7:12 He said, "We are looking only residuals".
26:23 he is definitely right but we cannot wait 20 years for some brilliant mathematician to discover that. In the meantime I think it is good to use DL which is not perfect but gets the job done.
25:56 The representer theorem says that the optimal solution ... is on shallow networks, not on deep learning.
I cannot understand why this holds. Can somebody explain or give me a reference?
Thanks
The representer theorem says the optimal solution of a regularized learning problem over a reproducing-kernel Hilbert space can be written as a finite combination of kernel functions centered at the training points - effectively a single-layer (shallow) network. Deep learning, however, uses more than one layer.
@@3cheeseup this is a very interesting point. But if we consider that deep learning is (1) able to discover hidden structure in the data (feature learning) and (2) model a nested hierarchy of concepts, does this mean that you should manually translate points (1) and (2) into a shallow network? In other words, you can approximate a DL model using a finite single-layer network, BUT in doing so you need to manually introduce concepts (1) and (2) into the shallow network.
@@colouredlaundry1165 Note also that even if a shallow representation exists in principle, nothing says it is compact: matching an arbitrary target function with a single layer could take 1, 100, a googol or a googolplex of terms. As we only have limited resources, I don't think the theorem is of much practical importance to us.
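For anyone following this thread, here is the standard statement being discussed, written out as a minimal sketch in generic notation (not Vapnik's exact formulation): for a loss L, a regularization weight lambda > 0, and a reproducing-kernel Hilbert space H with kernel k, every minimizer of the regularized empirical risk lies in the span of kernel functions centered at the training points.

```latex
\min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda \lVert f \rVert_{\mathcal{H}}^{2}
\qquad \Longrightarrow \qquad
f^{*}(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i)
```

In that sense the optimum is a single layer of kernel units, which is presumably what Vapnik means by "shallow"; the theorem says nothing about how easy that expansion is to find or how well it generalizes in practice.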
Gold. This is gold. Very nice to hear others perspectives. This guy is stubborn lol.
For folks that are having a hard time understanding, turning on the captions should help.
very wise man!
Ground Truths guide us all
24:29 Lays the smackdown on the dilettante and mathematically deficient.
Wonderful. Thank you.
I understand some of what's being said here.
What does he say at 3:49?
"the GOD or GOAL of ML is to learn about conditional probability"?
I think it's "GOAL", but then the next sentence is about God playing dice.
I think he says GOAL first and then GOD in the following sentence, but they sound so similar and they are very close to each other in the dialogue.
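Whichever word he says there, the technical content is the same: in Vapnik's statistical learning framing, the learner is given i.i.d. pairs (x_i, y_i) drawn from an unknown joint distribution P(x, y) and tries to estimate the conditional probability of the label given the input, or at least a function with small expected risk. A minimal sketch in generic notation (not his exact formulas from the talk), with L a loss function:

```latex
P(y \mid x) \;=\; \frac{P(x, y)}{P(x)},
\qquad
R(f) \;=\; \int L\bigl(y, f(x)\bigr)\, dP(x, y)
```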
very interesting person
Another interesting interview, but I think all of the interviews would be better with fewer leading questions and professing by the interviewer.
Einstein discovered relativity from equations, btw; he saw that time was not constant from those derived equations.
Great stuff, though the editing somewhat breaks the flow. Why not put the whole conversation as is? I like the stutters and misunderstanding-of-questions type of conversation :-) There is something there as well.
True, and when interviewers interrupt and talk over the smart person, it only does damage.
This talk about "walks like a duck, swims like a duck, quacks like a duck", and particularly the part where Vladimir Vapnik brings up "playing chess like a duck", is interesting in the same sense as the phrase "vegan tomatoes", for it implies a meaningful distinction where there is none; however, in the context of machines, it is very relevant. A computer isn't really capable of just throwing out distinctions that are not meaningful, for if it does so, it has deemed them not meaningful in any context, not just the meaningful contexts.
I am not sure we can derive a theory of intelligence purely from math. In physics the problems are easier, because we can create meaningful equations which can guide us. Examples are Max Planck's quantization of energy, Albert Einstein's relativity theory, Dirac's antiparticles, or currently string theory.
On the other hand, in biology, chemistry, ... there is less insight from equations. For example, the effects of protein folding are very difficult to deduce from equations, and we have to use computation instead. The same could be true of intelligence: it may have a mathematical description, but one that is very messy and does not adhere to our sense of mathematical beauty. This could of course change as we find more connections and build a consistent theory, so initially messy ideas become more and more intuitive and beautiful, but the core would not change.
Using the beauty and elegance of math as a heuristic is a little bit dangerous. For example, the geocentric theory at the time had a nicer description than the heliocentric theory. The reason was that more correction terms had to be added to the heliocentric theory to match the precision of the geocentric one, because ellipses were not yet used to describe the motion; compositions of circular motions were used instead. Only after the empirical findings of Kepler did we switch to ellipses.
Another, more anecdotal, example would be the dynamo theory of Walter M. Elsässer, describing why planets have magnetic fields. He told his theory to Albert Einstein, but "he didn't
much believe it. He simply could not believe that something so beautiful could have such a complicated explanation", in the words of Einstein's assistant (Einstein preferred not to state his opinion). The theory was correct; Einstein's intuition was wrong. (Source: top of 3rd page of the PDF -> www.geosociety.org/documents/gsa/memorials/v24/Elsasser-WM.pdf)
Also, string theory is currently getting some backlash because of a lack of results despite decades of effort. The theory has some promising connections and seems to be a perfect fit for the missing element in our understanding of physics, but there are also some ugly parts, like the need for extra dimensions or too many possible universes.
So we have to be careful not to focus too much on mathematical beauty; nature can just be messy, or we might not have the mathematical tools to appreciate its beauty.
God Bless Vladmir Vapnik
Is he saying "setting" at 3:11?
His comment about music is similar to the ideas in GEB!
what is he saying at 1:35 ? "it is ???? described ", what is ???
Mathematical explication of implicit invariants can be at least partially done for some senses and particular problems, in the general sense by encoding homeomorphisms. But how do you discover invariants when even a human observer doesn't see them, or perceives incorrect invariants? ))
Dear Mr. Fridman, this is a good video. I am researching SVMs and have a paper to introduce to you and Dr. Vapnik. Could you please let me know Dr. Vapnik's contact point? Thank you.
Hi, what did he mean by "predicate", please? I googled it but found different definitions.
Think it's just kind of like a qualitative description or sentence.
podcasts.google.com/feed/aHR0cHM6Ly9saXN0ZW5ib3guYXBwL2Yvc3M2Y1NjQ3phSy0/episode/c3M2Y1NjQ3phSy06ZEdlaFp0YnJSc0g?ep=14.
from 59:28
But there are no simple invariants for any complicated real-world classification task. If there were, machine learning would not be necessary. We could just use straight computer code.
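To make that concrete, here is a toy sketch of what a hand-coded invariant (a "predicate", in the sense discussed in the talk) for digit images might look like; the function names and the threshold are hypothetical illustrations, not anything from the interview, and the point is precisely that such a simple rule breaks down on real handwritten data:

```python
import numpy as np

def vertical_symmetry(img: np.ndarray) -> float:
    """Toy predicate: how mirror-symmetric an image is about its vertical axis (1.0 = perfect)."""
    flipped = img[:, ::-1]                    # flip left-right
    spread = img.max() - img.min() + 1e-9     # normalize by intensity range
    return 1.0 - np.abs(img - flipped).mean() / spread

def looks_like_eight(img: np.ndarray, threshold: float = 0.8) -> bool:
    """Hypothetical hand-coded rule: call it an '8' if it is symmetric enough.
    Real digits are slanted, noisy, and off-center, so a single simple
    invariant like this misfires constantly - hence statistical learning."""
    return vertical_symmetry(img) > threshold

# Usage sketch on a fake 28x28 grayscale image:
img = np.random.rand(28, 28)
print(looks_like_eight(img))
```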
Brilliant
So in a way, the problem of intelligence or at least the basis regarding the concept of a good teacher hinges on metaphorical truth and linguistic precision.
I love a lot of blah blah blah!! Great podcast!!!!
24:28
Gg suggests the Moonlight Sonata when I play this video. Respect, sir.
Thnx
AGI should make games and enjoy music.
Everything out of ramanujans mind came out of his intuition
The secret character of the podcast - the Duck.
BACK UP IN THE INTRO HERE ALEXANDER
Those subtitles should probably say "weak and strong convergence", not "big and strong ..".
Fuck like a Duck - I Swear thats what i heard lol then he sheepish smiles love it.
lol. that's an interesting example he just quacked out. quack like a duck
Very sad that this only gets 455 views
But, 455 relevant viewers.
It only came out today. Give it another day or two.
💖
I understand one thing from this conversation: AI will not take over humans, simply because AI is missing intelligence.
He shot down neural networks even for a hypothetical scenario, lol
I strongly disagree with Vapnik on his opinion about intuition. He seems dogmatic in his dismissal of the idea, however, through history we have seen a number of human phenotypes that produce significant intellectual achievement. One such phenotype that appears to be convergent in many individuals who have made tremendous achievements and cracked open entire academic disciplines (e.g. Einstein) is that of the visionary. Someone who is able to intimately understand a problem so that they may sufficiently abstract it to allow for giant leaps of progress by using intuition or visualization rather than iterative logical steps. I feel like Vapnik may be more of the literal, autistic type of individual who is very good at specializing and using brute force logic to iterate from axioms to a model within his discipline.
I would not be too quick to discount the role of intuition particularly in the more demanding, technical fields such as pure mathematics and theoretical physics as opposed to machine learning and statistics.
*does God play dice?*
God is to our Universe what Gary Gygax is to _Dungeons & Dragons_.
God doesn't necessarily play with dice, but defines what kinds of dice (d6, d10, d20, etc.) should be the basis of his loose adventures that others play under a DM who follows the D&D rules, which had an Intelligent Designer in Gary Gygax. There are other games which use dice, such as _Monopoly_, and therefore you can logically infer the existence of exouniverses that support alien life.
Lex sounds a bit nervous while interviewing Vapnik, although it's hard not to be in the face of him!
Pure mathematical genius. Would love more intense pods on maths, probably the hardest subject in the universe (as it describes it, quite literally).
A vocational-school student talking to a scientist.
I can hypothesize that even though God knows all conditional probabilities... he still needs to consider all the outcomes without bias... which is impossible for any observer...
NO IMAGINATION!!! lol
@George Hatoutsidis Agreed. I think imagination is very important for finding or creating something valuable with math. Perhaps, he views imagination as working back from fantasy or thinking in terms of beauty. But imagination can be simply manipulating equations in a creative way to discover/uncover some valuable insight.
@George Hatoutsidis Also, I would say you do not need knowledge for imagination, rather you need knowledge to increase the chances that you will be able to manifest your imagination into reality.
Couldn't grasp this one..
What?
1.25x