Christoph Lippert
Joined Nov 1, 2011
Math4ML Bayesian Linear Regression
HPI Math4ML Lecture
Lecture 06 Bayesian Linear Regression
Views: 141
Videos
Math4ML Bayesian Inference in the Gaussian distribution
248 views · 4 months ago
HPI Math4ML Lecture Part 3 Probability Lecture 05 Bayesian Inference in the Gaussian distribution
Math4ML Covariance Matrix
56 views · 5 months ago
HPI Math4ML Lecture Part 3 Probability Lecture 03 Expectation and Variance - Part 3 Covariance Matrix Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) and mml-book.github.io/ (Deisenroth et al, 2020)
Math4ML Variance and Covariance
58 views · 5 months ago
HPI Math4ML Lecture Part 3 Probability Lecture 03 Expectation and Variance - Part 2 Variance and Covariance Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) and mml-book.github.io/ (Deisenroth et al, 2020)
Math4ML Expected Value
39 views · 5 months ago
HPI Math4ML Lecture Part 3 Probability Lecture 03 Expectation and Variance - Part 1 Expected Value Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) and mml-book.github.io/ (Deisenroth et al, 2020)
DL 7.2 Batch Normalization
54 views · 1 year ago
HPI Deep Learning Lecture Chapter 7. Modern Convolutional Neural Networks Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020).
DL 10.7 Transformers
113 views · 1 year ago
Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020)
DL 10.6 Self-Attention and Positional Encoding
184 views · 1 year ago
Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020)
DL 10.3 Attention Scoring Functions and Multihead Attention
267 views · 1 year ago
Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020)
DL 10.2 Attention Pooling by Similarity
779 views · 1 year ago
Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020)
DL 10.1 Attention Mechanism
200 views · 1 year ago
Lecture based on “Dive into Deep Learning” D2L.AI (Zhang et al., 2020)
Unconstrained Optimization
114 views · 1 year ago
HPI Math4ML Lecture Part 2 Calculus Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) and Convex Optimization ( web.stanford.edu/~boyd/cvxbook/ ) (Stephen Boyd and Lieven Vandenberghe, 2004)
Optima - Sufficient and Necessary Conditions
152 views · 1 year ago
HPI Math4ML Lecture Part 2 Calculus Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018)
math4ML 11 Taylor's Theorem
92 views · 1 year ago
HPI Math4ML Lecture Part 2 Calculus Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) mml-book.github.io/ (Deisenroth et al. 2020, Mathematics for Machine Learning)
math4ML 11 multivariate Taylor Expansion
241 views · 1 year ago
HPI Math4ML Lecture Part 2 Calculus Lecture based on github.com/gwthomas/math4ml (Garrett Thomas, 2018) mml-book.github.io/ (Deisenroth et al. 2020, Mathematics for Machine Learning)
DL 7.1 Modern Convolutional Neural Networks 1/2
154 views · 2 years ago
math4ML I 09 Fundamental Theorem Linear Algebra
292 views · 3 years ago
math4ml I 08 Singular Value Decomposition
190 views · 3 years ago
math4ML I 07 Positive (Semi-)Definite Matrices
284 views · 3 years ago
Great, quick-to-the-point explanation. Appreciate the referral to C. Bishop's book, I have been reading it for a while and it can sometimes be confusing with all the notations it introduces.
How do I give you a lot of my money so you keep making good videos?
The examples were great for understanding the concepts more clearly, and each topic was explained in detail.
Amazing lectures, thank you for making them available for free
Nice professor ❤
Thank you for the video. I am studying machine learning in France, and I am wondering what level of maths you need in order to fully master the multivariate distribution.
Thanks
Crystal clear and understandable
very clear and helpful, truly appreciated👍
Best explanations I've seen so far. Thanks
Good lecture - easier to understand than the D2L text, which is not very detailed.
Can you avoid writing on the slides? It defocuses my attention.
There's a small typo in the first formula on slide 6; I think the n in the denominator should be an m.
That was the most interesting lecture yet.
At 12:55 I believe it would be clearer to state that "for all x in E: x <= b".
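A minimal LaTeX sketch of the condition this comment refers to, assuming the passage in question is the definition of an upper bound of a set E (the supremum line is an added assumption about the surrounding material, not something stated in the comment):

% b is an upper bound of the set E:
\[
  \forall x \in E : \; x \le b .
\]
% Assuming the lecture then takes the least such bound, i.e. the supremum:
\[
  \sup E \;=\; \min \{\, b \in \mathbb{R} \mid \forall x \in E : x \le b \,\}.
\]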
Great, please do more on 'mathematics/stats/probability for machine learning', as many of us got rusty or forgot those and need a refresher on the very specific math for ML.
Keep it coming, great work and teaching