Advanced LAFF
Joined Jun 25, 2019
Authoring for Active Learning
Quick video to share our experiences with online courses. Intended audience: other instructors at UT.
This was a hastily created video, so don't judge our course by the quality of it!
For information on Advanced Linear Algebra: Foundations to Frontiers, see ulaff.net
A representative review by a learner:
"It was a very unique experience which I will never forget. I felt more engaged and involved than in any other online or even on-campus class I have ever attended. Thanks Robert, Maggie, and TAs for making it a memorable experience and providing an environment to learn and grow. The structure, the exercises, leaving room for thinking about a problem by pausing/cutting videos in two, and many other things pushed me to go one extra step. Robert and Maggie, maybe you guys should consider preparing a MOOC on 'How to do MOOCs'. Thanks once again."
Views: 747
Videos
High performance Implementation of Cholesky Factorization
1.7K views · 4 years ago
Advanced Linear Algebra: Foundations to Frontiers Robert van de Geijn and Maggie Myers For more information: ulaff.net
10.1.1 Subspace Iteration, implementation
1.5K views · 4 years ago
10.3.6 Putting it all together, Part 2
1.7K views · 4 years ago
10.1.1 Power Method, implementation
1.4K views · 4 years ago
10.3.5 Implicitly shifted QR algorithm
3.5K views · 4 years ago
10.1.1 Power Method to compute second eigenvalue, implementation Part 2
781 views · 4 years ago
10.1.1 Power Method to compute second eigenvalue, implementation Part 1
1.7K views · 4 years ago
9.2.1 Gershgorin Disk Theorem, Part 2
2K views · 4 years ago
9.2.1 Gershgorin Disk Theorem, Part 1
4K views · 4 years ago
11.2.3 Reduction to bidiagonal form
2.9K views · 4 years ago
11.3.3 One sided Jacobi's method
1.6K views · 4 years ago
11.3.2 Jacobi's method, Part 2
1.6K views · 4 years ago
11.3.2 Jacobi's method, Part 1
2.3K views · 4 years ago
11.3.1 Jacobi rotation
2.5K views · 4 years ago
11.2.4 Implicitly shifted bidiagonal QR algorithm
2.3K views · 4 years ago
11.2.2 A strategy for computing the SVD, Part 3
1.3K views · 4 years ago
11.2.2 A strategy for computing the SVD, Part 2
1.1K views · 4 years ago
11.2.2 A strategy for computing the SVD, Part 1
1.3K views · 4 years ago
11.2.1 Computing the SVD from the Spectral Decomposition, Part 2
944 views · 4 years ago
11.2.1 Computing the SVD from the Spectral Decomposition, Part 1
1.1K views · 4 years ago
11.1.1 Linking the SVD to the Spectral Decomposition
2.1K views · 4 years ago
10.2.2 Simple shifted QR algorithm, Part 1
3K views · 4 years ago
great
That was so clear thank you
Minor error in the second-to-last bullet point on the right side: it should be dim(N(\lambda I - A)) = 0. The null-space notation N(·) was left out.
Life saver!! Fantastic lecture, Professor. I have a graduate exam in numerical analysis and I couldn't quite grasp the intuition behind Householder QR factorization. Truly amazing! Best, from UWaterloo.
Very clear explanation! I'm implementing a tiny linear algebra library and this channel has been super helpful to me in the process 💗
Thanks, you cooked me up real good
Thanks sir <3
explains it very well!!!
How did he go from 'v', which was a row matrix, to a column matrix at 2:55?
how does the mirroring onto the standard basis relate to the upper triangular matrix?
If one knows the 2-norm of x and the standard basis vector e, then beta·e directly gives the required mirror vector along e, right? So why do we need to represent the "mirror" operation as a matrix in terms of u in the first place?
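One common answer to the question above: knowing ||x||_2 tells you where x lands, but QR needs the same reflection applied to every remaining column of the matrix, so the mirror is kept as an operator in terms of u. A minimal NumPy sketch of this idea (my own illustration, not the course's code):

```python
import numpy as np

def householder(x):
    """Build the reflector H = I - 2 u u^T / (u^T u) that maps x onto
    a multiple of e_1. The operator H, not just beta*e_1, is what QR
    reuses on the other columns."""
    beta = -np.sign(x[0]) * np.linalg.norm(x)  # sign choice avoids cancellation
    u = x.copy()
    u[0] -= beta                                # u = x - beta * e_1
    H = np.eye(len(x)) - 2.0 * np.outer(u, u) / (u @ u)
    return H, beta

x = np.array([3.0, 4.0])
H, beta = householder(x)
print(H @ x)   # ≈ [beta, 0]: x is mirrored onto the e_1 axis
```

In practice only u is stored; H is never formed explicitly, and `H @ y` is computed as `y - 2*u*(u @ y)/(u @ u)` for each remaining column y.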
Why is it not + and is * ?
why does it make no sense at all?
Which part do you not understand? I think the proof is quite straightforward. Although at 4:23 I think what Prof. Robert may have meant (I could be wrong) is that the general strategy is to show that if max f(x) <= alpha and max f(x) >= alpha, then the only way these two inequalities can be satisfied is if max f(x) = alpha (i.e. think of it as max f(x) has been "sandwiched" to alpha). That was how he derived that ||A||_2 = max(d_1, d_2), where A is a 2-by-2 diagonal matrix with d_1 and d_2 as entries.
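To make the sandwich argument's conclusion concrete, here is a small NumPy check (my own, with arbitrary diagonal entries) that the 2-norm of a diagonal matrix is the largest magnitude among its diagonal entries:

```python
import numpy as np

# For diagonal A = diag(d_1, d_2), the largest singular value
# (i.e. the 2-norm) is max(|d_1|, |d_2|).
d1, d2 = 3.0, -7.0
A = np.diag([d1, d2])
two_norm = np.linalg.norm(A, 2)          # largest singular value
print(two_norm, max(abs(d1), abs(d2)))   # both are 7.0
```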
Very good explanation.
Amazing, I was struggling with this one for the last two days.
Thank you
amazing
Frobeanius is making me lose all my -marbles- beans.
2-Norm of a matrix is Frobenius Norm?
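In general, no: the 2-norm is the largest singular value, while the Frobenius norm is the square root of the sum of all squared singular values; they coincide only when the rank is at most one. A quick NumPy counterexample (my own):

```python
import numpy as np

A = np.eye(2)                          # 2x2 identity, singular values 1, 1
two_norm  = np.linalg.norm(A, 2)       # largest singular value = 1
frobenius = np.linalg.norm(A, 'fro')   # sqrt(1^2 + 1^2) = sqrt(2)
print(two_norm, frobenius)             # 1.0 vs 1.414...
```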
Well, I didn't get it from the first problem, so should I learn LU decomposition first to understand it? If it were in the form of a normal matrix I could have solved it. Can anyone suggest something, please?
what do you mean by LU decomposition?
great explainer
Thank you for this very clear and comprehensive explanation!
Splendid! I wish I could like the video more than once!
Thanks! You are a saviour for tomorrow's Matrix Computation quiz.
It's not good; why don't you write something on the board?!
Nice class, thanks!
I love how simple, straight to the point, and short his explanation is. Thank you, sir!
Hi Professor van de Geijn, is it possible to generalize this inequality when A is rank-deficient?
This video was actually amazing! Wow, needs more exposure. Thanks professor!
did not help santa
Nice, very well explained!
Thank you teacher!
This is what I wanted. Amazing sir.
Thank you!!!
The best explanation of Householder QR algorithm
Best explanation I saw until now. Thanks!!
helpful👍
excellent explanation. Thanks
You are an absolute fooookin legend. Whoever this guy Robert de Geijn is, I want to say thanks lol
Great explanation Sir 👏
There's an unknown way to visualize subspaces, or vector spaces. You can stretch the width of the x axis, for example, in the right line of a 3D stereo image, and also get depth, as shown below. L: |____| R: |______| This is because the z axis uses x to get depth, which means that you can get double depth in the image... 4D depth??? :O P.S. You're a good teacher!
What a brilliant video!! Thank you so much. I had been looking everywhere to extend my 3x3 imagination and here you give it beautifully. Thank you so much!!
Thank you very much! Very good explanation! 👍
Different intro tune. Liked it!
Is it easy to code such an algorithm? Maybe you have any references I could look at? I am interested in the SVD of the covariance matrix in order to approximate a rigid body’s basis to calculate rotations with respect to the XYZ axes. I have been going round in circles with no success, the main problem being that the three eigenvectors constantly change direction as the object is rotated so it’s impossible to compute smooth rotations. Tried QR, Power Method, Euler angles and Quaternions with similar results, hence no success. I am hoping that the Polar Decompositon addresses the issue but not so sure. Any help will be appreciated.
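One common way around the sign-flipping of individual eigenvectors described in the comment above is to avoid choosing eigenvector signs at all and instead take the orthogonal polar factor of the whole matrix at once via the SVD (the Kabsch-style construction). A sketch, assuming a nonsingular 3×3 input; `nearest_rotation` is my own illustrative name, not course code:

```python
import numpy as np

def nearest_rotation(M):
    """Orthogonal polar factor of M via the SVD: the rotation matrix
    closest to M in the Frobenius norm (assumes M is square and
    nonsingular, e.g. a 3x3 cross-covariance matrix)."""
    U, s, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # fix an improper reflection
        U[:, -1] = -U[:, -1]
        R = U @ Vt
    return R

M = np.array([[0.9, -0.4, 0.0],
              [0.4,  0.9, 0.0],
              [0.0,  0.0, 1.0]])
R = nearest_rotation(M)
print(R @ R.T)   # ≈ identity: R is orthogonal with det +1
```

Because the polar factor U·Vᵀ is unique for a nonsingular M, it varies continuously with M, which may give the smooth rotations the commenter is after.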
In short, the reduced form just means that we leave out the irrelevant 0s from the Sigma matrix, and the corresponding parts of the other two factors? Sounds simple enough.
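That reading matches what numerical libraries do: NumPy's `full_matrices=False` flag returns exactly this reduced form. A quick check (my own example, not from the course):

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((6, 3))
U, s, Vt = np.linalg.svd(A)                          # full: U is 6x6
Ur, sr, Vtr = np.linalg.svd(A, full_matrices=False)  # reduced: Ur is 6x3
print(U.shape, Ur.shape)                             # (6, 6) (6, 3)
# The reduced factors still reproduce A; only the parts that would
# multiply the zero block of Sigma have been dropped.
print(np.allclose(Ur @ np.diag(sr) @ Vtr, A))        # True
```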
Amazing! Thanks for this course!
It has been some years ago that I have enjoyed for the first time this video. Come back now, just to remember some stuff :)
Thank you for sharing your knowledge with the world. I'm just a small-time programmer who hasn't learned error analysis, suffering from numerical error in a time-complexity-optimized NURBS algorithm. As a beginner in error analysis outside the university, your lectures really save me a lot of trouble. If I understood well, are the following true? Examples of (relatively) ill-conditioned problems: find the reciprocal of a given real number x (small changes near zero cause catastrophic error); numerically find any solution of the equation y' = x^2 * (y + 0.000001 * sin(x)) with initial values x(0) = 1, y(0) = 0.001. Examples of numerically unstable algorithms: solve a linear system using Cramer's rule; compute f(x) = (x - 1.414e+9) / 3.141e-13 using the expression (x * (1 / 3.141e-13)) - 1.414e+9 / 3.141e-13. (I don't know why, but at least I know the latter is horrible.)
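The commenter's last example can be demonstrated directly: both expressions are algebraically equal, but the second subtracts two huge, nearly equal intermediates, so the low-order digits of x are wiped out. A small demonstration (my own choice of x, not from the video):

```python
# f(x) = (x - 1.414e9) / 3.141e-13, evaluated two algebraically
# equivalent ways. With x = 1.414e9 + 1, the exact answer is
# 1 / 3.141e-13.
x = 1.414e9 + 1.0
exact = 1.0 / 3.141e-13

good = (x - 1.414e9) / 3.141e-13                       # subtract first: small, exact difference
bad = x * (1.0 / 3.141e-13) - 1.414e9 / 3.141e-13      # two ~4.5e21 intermediates, then cancel

print(abs(good - exact) / exact)   # essentially zero
print(abs(bad - exact) / exact)    # many orders of magnitude larger
```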
If the proof is so simple, why not show it?