The YT algorithm does know where to take me; never thought I'd sit through a lecture in my leisure time fully engaged. Very well done!
Knowing a lot about autoencoders already, it is useful to see how they start to dissipate into other research areas, like physics (my favorite!). Great to see a good explanation of ML as a tool for further discovery. Thanks for this video!
Can't believe I'm seeing you here! Your videos are helpful too, thank you a lot.
I might just have found my research topic for my master's. Fascinating, thanks. Besides that, the quality of the video deserves a mention: a dark background that is easy on the eyes, consistently high-quality graphics, and a narrator who does his best to create understanding with clear English.
Brought here by the YT algorithm while finishing my BS thesis on non-physics-informed autoencoders learning from the Shallow Water Equations. I will definitely dedicate further study to the lecture content. Thanks!
I'm lucky to have come across this work, positioned between the 3rd and 4th paradigms of science. As mentioned at the end of this video, I think the key to interpretability is to take advantage of inductive biases, expressed as existing models or algorithms for forward/inverse problems, in designing the encoder, decoder, and loss function.
Incredible work your team is doing. So much to think about, with incredibly wide-ranging applications.
Thank you for your videos, Steve! Also, your gesticulation eases the complexity of your talk significantly. Keep up the good work!
Nice but would love to see some demos of the results. For example, the equation of the pendulum, the reconstruction from the found dynamics and comparison between the two.
I tried to use an autoencoder to do anomaly detection for an anti-fraud task in social media. It's a good way to do information compression, but I never thought it could be used in model discovery for science! AI will change the game of science research today!
Awesome work! Thanks for sharing in such a digestible way! I feel we cannot even start to imagine in how many different fields this approach could be used.
Your channel is incredible Prof. Brunton, thank you for your work! There is so much value here
Awesome work. I can't believe I understood most of this topic. One of the best explanations I have seen so far.
Fantastic discussion! Love that you cover the complexities so in-depth.
How much I love these videos and the quality of the software they use.
I've been looking for some insights on how to leverage deep learning to optimize our MRI transmit coil. This has been extremely helpful
Kindly structured explanations like this can make everyone feel welcome and interested :) This is exactly why I subscribed to this channel almost 2 years ago; all the videos are very inviting and welcoming, and by the end they leave a calm sense of curiosity balanced with a pinch of reassurance, free of any unnecessary panic. In other places these subjects are often presented with a thick padding of jargon and dry math abstractions, but not here. Here the explanations are distilled into a sparse latent form without loss of generality, and with a clear reminder of the real-life value of these methods.
I've just been learning about how to use PCA to reduce dimensionality. Now I see one can go further and learn the meaning of the linear combination at the bottleneck. I don't really understand how one can use additional loss functions to find that meaning, but now I know it can be found. I'll need to think about it. Thank you.
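To make the "additional loss functions" idea concrete, here is a minimal PyTorch sketch (my own illustration, not the video's exact architecture; the data tensor, layer sizes, and the weight lam are placeholders). The total loss adds an L1 sparsity penalty on the latent z to the usual reconstruction error, biasing the bottleneck toward a few meaningful coordinates:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_input=128, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_input))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3                    # hypothetical weight of the extra loss term
x = torch.randn(256, 128)     # placeholder data batch

for step in range(1000):
    z, x_hat = model(x)
    loss_recon = ((x - x_hat) ** 2).mean()  # standard reconstruction loss
    loss_extra = z.abs().mean()             # extra term: L1 sparsity on z
    loss = loss_recon + lam * loss_extra
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping in other extra terms, for example a penalty tying the latent time derivative to a simple dynamics model, is how frameworks like the SINDy autoencoder attach "meaning" to the bottleneck.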
Hi Steve, very interesting video. One remark on the slides that you use: I tend to watch videos with closed captions despite having average hearing, because it helps me keep track of what you're saying. I can imagine that people with hearing impairments will also do this, but sometimes elements on your slides overlap with YouTube's space for subtitles, like the derivative at 1:45. Perhaps this is something you could take into account, particularly for slides that do not contain many different elements and allow for scaling. Thanks again.
Professor Brunton, thanks to you and your teammates for the amazing content. I think it would be desirable to correct the pendulum videos, because the images are affected by an affine transformation due to lens distortion; looking at the bottom line of the video you can see how distorted it is. There are libraries to identify the camera's transformation parameters using a chessboard, tracking the distortion of the corner coordinates.
You can easily factor the affine transformation into the encoder (and its inverse into the decoder). You don't always have access to distortion-correction settings, and as long as you've been using the same capturing equipment, you will be able to absorb such transformations during training.
@@alfcnz Professor Canziani, it's amazing to have your answer here; in a way I'm your virtual machine learning student on YouTube! Thanks a lot to you and your teammates for the amazing content.
I totally agree, especially when it comes to a linear transformation that would be easily absorbed by the network. My biggest concern is that this distortion could be wrongly treated as part of the problem's physics, when it is really an observational error, especially when linearity is enforced in the dynamics discovery.
@@alfcnz But if you trained it on distorted image data, wouldn't it make a false correction to undistorted image data?
@@iestynne It is not difficult to calibrate the camera!
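For anyone wanting to try it, here is a sketch of the standard OpenCV chessboard calibration the thread mentions (file names and the pattern size are placeholders); cv2.calibrateCamera returns the intrinsics and distortion coefficients needed to undistort the pendulum frames:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):   # hypothetical calibration shots
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Fit the camera matrix K and the distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort a pendulum frame before feeding it to the autoencoder.
frame = cv2.undistort(cv2.imread("frame.png"), K, dist)
```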
Such a gem of a video! Thank you!!
Thanks so much! This definitely helped me get into deep learning for dynamical systems. I am working on a problem where I want to classify the state of a viral particle near a membrane. I transformed a lot of simulation frames into structural descriptors, and I am at the point where I need to decide on an architecture and loss functions. I have begun naively with a dense neural network. This approach seems very interesting; not directly applicable, but it could provide another input for the DNN. The z could be describing certain constant dynamics surrounding the viral particle, which could help classify the state. Anyway, thanks a lot!
This is the most amazing stuff you guys have come up with so far!!! Awesome… great job.
Thank you for this vid. Really great content you are putting out for the community Steve.
Cool, nice lecture! 🤓🤓🤓
Thanks!
I wish I were able to press the like button more than once.
Just curious whether your usage of the term "lift" is related to the topological/categorical use of that term? Specifically whenever there is a morphism f: X -> Y and g: Z -> Y then a lift is a map h: X -> Z such that f = gh (i.e. the diagram commutes).
I think the analogy works: Let X be the original data space, Z the latent space, and Y = X. The composition gh is a map X -> Z -> X, if we set f = the identity on X, then h and g are the encoder and decoder, then f ≈ gh expresses the reconstruction objective.
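And in symbols (just restating the comment's analogy, nothing beyond it):

```latex
% Lift condition with f = id_X: the encoder h and decoder g should
% compose to (approximately) the identity on the data space X,
% which is exactly the autoencoder's reconstruction objective.
\[
  g \circ h \;\approx\; \mathrm{id}_X
  \quad\Longleftrightarrow\quad
  \min_{h,\,g}\; \mathbb{E}_{x}\,\bigl\lVert x - g(h(x)) \bigr\rVert^{2}.
\]
```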
This is a really good video. Really well explained and it let me see how your field was using this tech. Thanks for posting it. It sounds like you are doing a lot of interesting research. I'll keep an eye on your channel now that the algorithm recommended it to me.
I will always love that the simple solution was just returned as the simple solution. :D
Do you have your presentation available online? Or links to the arXiv pages for the papers referenced? I would love to read them.
Do you know of any Jupyter notebook examples, in say Keras or PyTorch, that show how to do this?
Thank you, professor, for the very inspiring video! At 12:05, can we say something about the uniqueness of the representation transforms phi and psi? Or may they not be unique at all, and depend on how we train the network?
The linear areas seem to be a maximization of the neighbourhoods implied by the implicit function theorem. I am probably wrong; it was 1987 when I studied this.
Very esoteric video.
I like. 👍
Thank you for the amazing video. Would you please give a few simple examples and explain step by step of how to use these machine learning algorithms?
This was a super interesting one. Thank you very much for another engaging whirlwind tour through recent advances in computer science! :)
Deep learning is revolutionizing engineering, along with Exascale supercomputing.
Is Steve quiet for everyone? I've been in conferences all week, so I might be set up wrong, but I had to reverse twice to get a clean vocal.
It's fine for me on mobile
I had to turn up volume quite high, but now hearing just fine.
Nice video! I am very new to this subject (In fact this is the first video I have seen about it), but it seems that essentially what you do is derive dynamics from an action principle (minimizing the generalized loss functional) and so any partially known physics I suppose would just be incorporated by Lagrange multipliers. About the two different approaches for linearisation (going to higher and lower dimension), I think that both are physically motivated. You can definitely expect dynamics to become more linear if you go to higher dimension too. Think about thermodynamics: You can either try to describe average degrees of freedom like entropy, heat, etc. which would follow easy laws, but at the same time you could try and describe the system by describing each individual particle. It wouldn't really be feasible, but it's not unlikely that the dynamics can be described from a simple possibly linear law (like a box full of free collisionless particles in a homogeneous gravitational field).
Amazing lecture!
Fantastic video!!
23:32 sounds interesting. So you're saying this is a way to learn the linearizing transform for the convective term of the Navier-Stokes equations? How do you even know whether, after training the network, we end up with a meaningful solution?
You might not. Sara Hooker has recently been arguing that properties like accuracy and interpretability (among others) may directly conflict, so the better one is, the worse the others are. You might have to sacrifice a 'meaningful' solution for an accurate one.
I saw the thumbnail and the title and I assumed this was a course on encoding audio (dynamics) for movie editing. :)
If I have a very large video feed, isn't doing singular value decomposition extremely computationally expensive?
You can always do a randomized SVD to make it faster
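A sketch with scikit-learn's randomized_svd (shapes are placeholders): for a tall matrix of video snapshots, computing only the leading k modes is far cheaper than a full SVD:

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

X = np.random.randn(100_000, 500)   # e.g. pixels x snapshots
U, S, Vt = randomized_svd(X, n_components=10, random_state=0)
print(U.shape, S.shape, Vt.shape)   # (100000, 10) (10,) (10, 500)
```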
This is very cool!
Amazing!!
@3:30 If Steve Brunton says something is "a difficult task", you can be sure it really is a difficult task! :D
so amazing
Many nonlinear systems exhibit chaos (divergence in the “original” coordinates if two systems have a tiny difference in their initial conditions). I would be interested to see whether the “recovered” x̂ also reproduces the chaotic behavior with the same Lyapunov exponent, and also what happens to the latent z's.
To the first question: they do. This was validated in the 1990s-2000s, when numerous engineers and mathematicians experimented with shallow neural networks. To the second, I don't have an answer.
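A rough numerical check of the chaos point is easy to set up: integrate the Lorenz system (a stand-in chaotic system, not anything from the video) from two nearby initial conditions and fit the slope of the log separation, which estimates the largest Lyapunov exponent (about 0.9 for the classic parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 20, 2000)
a = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t).y
b = solve_ivp(lorenz, (0, 20), [1.0 + 1e-8, 1.0, 1.0], t_eval=t).y

sep = np.linalg.norm(a - b, axis=0)
grow = slice(100, 900)   # early exponential phase, before saturation
lam = np.polyfit(t[grow], np.log(sep[grow]), 1)[0]
print(f"estimated largest Lyapunov exponent ~ {lam:.2f}")  # roughly 0.9
```

Running the same fit on a reconstructed x̂ (and on the latent z) would answer the question empirically.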
Thank you so much!
Thank you.
Great lecture . Thanks a lot 🙏
Fantastic!
Hey Steve, your videos are great! I want to ask how balanced model reduction can be used in the deep learning autoencoder. I'm asking because with balanced model reduction you are able to find the coordinate transformation that equalizes and diagonalizes the Gramians, but this transformation could turn out to be dense and non-interpretable, right? Could you please explain what the advantage of combining these two would be? Thanks, your big fan!
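For reference, a sketch of classical linear balanced truncation (a toy system of my own choosing; the deep-learning version would replace the linear maps with the encoder/decoder). SciPy solves the two Lyapunov equations for the Gramians, and the balancing transform T that equalizes and diagonalizes them is indeed generally dense:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # placeholder stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

Lc = cholesky(Wc, lower=True)
U, s, _ = svd(Lc.T @ Wo @ Lc)
sigma = np.sqrt(s)                 # Hankel singular values
T = (Lc @ U) * sigma**-0.5         # balancing transformation (dense!)

# In balanced coordinates both Gramians equal diag(sigma):
print(np.diag(T.T @ Wo @ T), sigma)
```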
How do you deal with an external control input u(t) for control problems and robots?
Maybe called exogenous inputs.
YES
Thanks! 👍🏼
Awesome!
you are the best
Thank you a lot. I hope you share how to apply it with code.
Awesome Thanks!
Singular Value Decomposition / Principal Component Analysis / Proper Orthogonal Decomposition
(field? / field? / field?)
Am I right that he implied that all those three are the same?
@@zeydabadi you’re right, they are. I was wondering if certain fields prefer one term over another.
How would we train a network like this though?
Tremendous video!
Amazing
Great!
Wow, cool but complex; not sure if it could be simplified a bit.
Imagine using this to represent a human brain in a low-dimensional space.
probably boils down to 2D ('amount of tasty pizza' x 'amount of tasty bacon') quite precisely. [If even one training example involves brain data in response to pineapple pizza, the gradient instantly explodes, coffee levitates onto keyboard and alien police come to remove pineapple away from pizza, just in time before a black hole forms turning milky-way into a Lorenz attractor.]
Could someone help me? I'm a student fresh out of high school; I've got an Australian HSC education in chemistry, physics, and Extension 2 maths. I intend on studying physics at university and possibly getting a minor in CS to give me marketable skills. I'm currently just doing simple things like a Codecademy course on Python, and likely the machine learning skill path. From where I am now, where do I go to understand this video?
To understand the video, coding is useless; it is not going to help. You need to understand linear algebra, dynamical systems or ODEs/PDEs, and also the math behind neural networks. Take courses in those subjects.
No this is good
How do I compute the x dot in the case where x is pixels?
Probably for each pixel separately in 1D, by a simple finite-difference dx/dt, because the joint underlying function over all pixels is unknown (the neural network needs to learn those correlations from examples).
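A minimal version of that suggestion (the array and dt are placeholders): apply a finite-difference time derivative to each pixel independently with np.gradient:

```python
import numpy as np

dt = 0.01                                # hypothetical frame spacing
frames = np.random.rand(500, 64, 64)     # stand-in video: (time, H, W)
x_dot = np.gradient(frames, dt, axis=0)  # central differences per pixel
print(x_dot.shape)                       # (500, 64, 64)
```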
Anybody have the same feeling as me, like learning math and science with Harrison Wells?
On Wikipedia, state variables are referred to as the variables that describe the mathematical state of the system, and the state as something that describes the system. But isn't the state the minimum set of variables that describes the system?
Wikipedia article link: en.wikipedia.org/wiki/State_variable
Also, I want to ask: is there any difference between the configuration of a system and the state of a system?
Yes, your understanding of state variables is correct. Sometimes it's useful to make a distinction between state variables and a "minimum set" of state variables. State variables are anything that gives you information about the state of the system; it doesn't always have to be a minimal set.
In my experience "configuration" and "state" are similar terms, but I could be wrong about that.
@@vg5028 Yes, but isn't the state referred to as the minimum set of variables that completely describes the system (that minimum set of variables being the state variables)? Yet on Wikipedia the state is referred to as something that describes the system, and state variables as something that describes the state. Wasn't the state defined here as the minimum set of variables, i.e., the state variables?
@@vg5028 Well, my question is: why is the definition of state different in this article by MIT (web.mit.edu/2.14/www/Handouts/StateSpace.pdf) and in this Wikipedia article (en.wikipedia.org/wiki/State_variable)?
You asked a GREAT question. Think about this: you have a system with 2 state variables, one always around 0.00001 and the other around -1 to 1. So you will tend to believe this system is approximately 1D. Mathematically your understanding is 100% right: it has 2 degrees of freedom and no fewer. But you can think of it as 1D, which makes your life a lot easier if you are in the business of modeling and control!
@@hfkssadfrew What I am asking is what 'state' is: whether it's referring to the condition of the system or to the mathematical description of the system.
😊 I don't know if computers are capable of deep learning like the type of learning I just explained. It doesn't come from all your function boards; the details that you place in it are your details. I can't live your life, my friend, and your computer will never know what I'm trying to say unless we were being straight, but you don't have a straight life, and I doubt you'd make a completely straight computer. ...😊 It's personal, to understand your construction modeling. You see, the thing about my life is that it is not orchestrated by your construction modeling, 😊 even if I had my own chance...

Sometimes the facts ain't even facts, if it ain't even there. What could be, what won't be, that's really not your prediction, 😊 unless it's within your case to understand. 😮 Most people don't have these matters and they only predict. 😊 Try to be the cause and effect of them before you predict in the middle of them... even if predictions are such outcasts. 😊 Even the teacher's pet taught us that... I won't even use the word persuasions.

You see, a computer has to modify itself to each and every individual case, and the life and standards they have to live by, to understand them. You will never help them by a parent's point of view; you've got to take strong consideration of their wrongs, their points of view, where they're aiming, what they can and what they can't. I don't need a computer that says, "Well, I can't do that, I won't learn that." 😊 That's what my professor at MIT told me: "If I can't do that, I won't work on that." 😊 I said, okay, you will give me a computer just the same...

😊 Logically I am correct, but like I said, that's a prediction, and I am careful about my predictions. Because what is important to you is the same as what is important to me; it's just not as important to you to give it to me as it was to keep it to yourself. 😊 I'm a man of discoveries and I can't help but run my mouth. 😮 But you're a man with a job and you've got nothing else to learn. ...😊 We did meet in the middle. 😮 I can't help it if you're going the other way. 😊 Maybe I'm stupid. Look, we met back in the middle. 😢 Call it even, damn it.
AI to learn how many black shirts Steve Brunton has
Wow! This is so fun! I think I made it to somewhere in this switchboard of bowties! I don't know whether to call this "AT&T, how can I help you?" or "land of confusion, in deep thought flows"? Ha ha, yes, my attempt at humor! Thanks so much for the lesson! Totally love this! Good luck!
You've got to be worried about the wrong point of view you feed a computer. 😊 As humans we don't make those mistakes; 😊 we necessarily know, or know what we need or what needs to be added. ....😊 Sometimes there are no potential strains there. 😊 Sometimes we don't have such qualifications as a qualification. 😮 Even if you are not qualified, a human will work you into being qualified. Leave it up to a computer, 😊 and you won't be qualified for s***.
AI has gone through a number of AI winters because people claimed things they couldn't deliver
Sounds really hard.
First 9 minutes can be summarized with this sentence: "There exists a neural network which can perform SVD."
Lol. You can say “there exists a polynomial which can approximately perform any operation”. If you think so, then you still don’t get the point.
@@hfkssadfrew I think the point is after minute 9.
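The claim in the first comment is easy to test numerically, by the way. A minimal sketch (all sizes are arbitrary): train a linear autoencoder with MSE and compare its learned subspace to the top SVD/PCA modes; the difference between the two projectors should shrink toward zero as training converges:

```python
import numpy as np
import torch
import torch.nn as nn

X = np.random.randn(1000, 20) @ np.random.randn(20, 50)  # low-rank data
Xt = torch.tensor(X, dtype=torch.float32)

enc = nn.Linear(50, 5, bias=False)   # linear encoder
dec = nn.Linear(5, 50, bias=False)   # linear decoder
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
for _ in range(2000):
    loss = ((Xt - dec(enc(Xt))) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

U, S, Vt = np.linalg.svd(X, full_matrices=False)
P_svd = Vt[:5].T @ Vt[:5]                         # top-5 SVD projector
Q, _ = np.linalg.qr(dec.weight.detach().numpy())  # (50, 5) decoder basis
P_ae = Q @ Q.T                                    # learned-subspace projector
print(np.linalg.norm(P_svd - P_ae))               # small when subspaces match
```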
Inventing my own math from the ground up, and I have no problem with physical systems and AI. You just have to make the metrics emergent from a sack of an infinite number of differential forms, and keep picking one until the metric of self-manifestation is no longer statistically correlated.
😊 Next thing you know, we've got crooked computers. 😊 Last time I checked, there's not a f****** game on this computer that doesn't f****** cheat, or that can play f****** digitally fair. 😊 Ever since they made one f****** computer program, you can never trust f****** poker cards ever again. 😊 I don't want to play with your computer. 😊 For one, it does not know how to f****** shuffle, 😊 and for two, it doesn't know how to stop looking at my f****** cards.
audio is sooo low WTF