Blitz Kim
South Korea
Joined 27 Sep 2012
Started coding in 2016.
Interested in Programming, Games, Finance, Artificial intelligence, Machine learning, Big data analytics.
Ex MMORPG developer & Game designer.
Lecture 08-02 Dimensionality Reduction
Machine Learning by Andrew Ng [Coursera]
0806 Motivation I: Data Compression
0807 Motivation II: Data Visualization
0808 Principal Component Analysis problem formulation
0809 Principal Component Analysis algorithm
0810 Reconstruction from compressed representation
0811 Choosing the number of principal components
0812 Advice for applying PCA
Views: 1,073
Videos
Lecture 08-01 Clustering
2K views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 0801 Unsupervised learning introduction 0802 K-means algorithm 0803 Optimization objective 0804 Random initialization 0805 Choosing the number of clusters
Lecture 0812 Advice for applying PCA
432 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0811 Choosing the number of principal components
441 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0810 Reconstruction from compressed representation
227 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0809 Principal Component Analysis algorithm
1.1K views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0808 Principal Component Analysis problem formulation
330 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0807 Motivation II: Data Visualization
154 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0806 Motivation I: Data Compression
255 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-02 Dimensionality Reduction
Lecture 0805 Choosing the number of clusters
202 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-01 Clustering
Lecture 0804 Random initialization
235 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-01 Clustering
Lecture 0803 Optimization objective
283 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-01 Clustering
Lecture 0802 K-means algorithm
1.5K views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-01 Clustering
Lecture 0801 Unsupervised learning introduction
248 views · 7 years ago
Machine Learning by Andrew Ng [Coursera] 08-01 Clustering
Lecture 0703 The mathematics behind large margin classification (optional)
354 views · 7 years ago
Lecture 06-02 Machine learning system design
868 views · 7 years ago
Lecture 06-01 Advice for applying machine learning
1.3K views · 7 years ago
Lecture 0611 Trading off precision and recall
1.9K views · 7 years ago
Lecture 0610 Error metrics for skewed classes
556 views · 7 years ago
Lecture 0608 Prioritizing what to work on: Spam classification example
884 views · 7 years ago
Lecture 0607 Deciding what to try next (revisited)
295 views · 7 years ago
8:11 Hypothesis Representation
25:20 Math behind large margin classification 45:04 Kernels I 1:00:49 Kernels II
0:15 Model Representation 11:15 Cost Function 16:25 Cost Function Intuition
Very helpful
Nice class
Here after the 2024 Nobel Prize in Physics announcement
Nobel Prize
This is the greatest introduction to ML
Deep respect! Thank you for this beautiful lecture!
very much helpful ❤
I remember walking into the Spice Rack and seeing the (iirc) PERQ machines running Boltzmann machines. Or maybe they were Symbolics machines?
37:09 Theta_0=0 does not mean the decision boundary goes through the origin... Theta as drawn on the left has positive first entry...
Theta_0 is the intercept term of the linear decision boundary (the x2-intercept is -Theta_0/Theta_2), so YES, if it is 0, the boundary goes through the origin.
props to you :v
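A quick numeric check of the point above (a minimal sketch with made-up numbers, not from the lecture): the decision boundary is Theta_0 + Theta_1*x1 + Theta_2*x2 = 0, and with Theta_0 = 0 the point (0, 0) always satisfies it, so the boundary passes through the origin.

import numpy as np

# Logistic regression decision boundary: theta_0 + theta_1*x1 + theta_2*x2 = 0
theta = np.array([0.0, 2.0, -1.0])           # theta_0 = 0; the other entries are arbitrary toy values

def on_boundary(x1, x2, theta):
    # True when the point (x1, x2) lies exactly on the decision boundary
    return np.isclose(theta[0] + theta[1] * x1 + theta[2] * x2, 0.0)

print(on_boundary(0.0, 0.0, theta))          # True -> the boundary passes through the origin
print(on_boundary(1.0, 2.0, theta))          # True, since 2*1 - 1*2 = 0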
I feel like Prof. Hinton is trying to avoid math to fit the understanding level of Coursera users. However, instead of conveying the common sense and insights behind the functions, he simply rephrases the functions into everyday language. Since everyday language is extremely vague and unclear, he creates difficulty even for math lovers trying to grasp the ideas behind them. I highly value the ideas Prof. Hinton tries to convey, but really understanding those ideas is hard. Thanks for sharing anyway.
Hi Blitz Kim, could you please assist me with how I can add loads to a portal frame in Octave?
The instructor is Andrew Ng
Thanks a lot
Thanks a lot
00:00 1A Why do we need machine learning? 13:15 1B What are neural networks? 21:45 1C Some simple models of neurons 30:09 1D A simple example of learning 35:47 1E Three types of learning
This channel is sooo educational. Thank you!!
I look up to you sir
wow!
Hi. Does anyone understand what we mean by a training case? I do understand what an input vector is, but not a training case. Help would be appreciated.
The perceptron is a linear device. The input vector is supposed to be dot-multiplied with the weights vector, and the decision depends on whether the result is positive or negative. Therefore, in weight space, each input vector can be represented as a hyperplane. A "training case" is an input vector PLUS the right answer, i.e. the indication of "which side" of that hyperplane you're supposed to end up on. The perceptron gives the right answer for ALL training cases if and only if its weights vector is simultaneously on the "right side" of ALL those hyperplanes. Such a weights vector might not exist, as mentioned in the video, but if it exists, that means the problem is solvable by a perceptron.
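To make the terminology concrete, here is a minimal sketch (toy numbers of my own, not from the lecture): a training case is simply an input vector paired with its correct answer, and a weights vector "handles" the case when the sign of the dot product matches that answer.

import numpy as np

# A training case = (input vector, correct answer). Toy values for illustration.
training_cases = [
    (np.array([1.0,  2.0]), +1),
    (np.array([-1.0, 0.5]), -1),
]

def handles(weights, x, label):
    # The perceptron outputs sign(w . x); the case is handled when that sign matches the label.
    return np.sign(np.dot(weights, x)) == label

w = np.array([2.0, -0.5])
print([handles(w, x, y) for x, y in training_cases])    # [True, True]
# A weights vector that handles ALL cases lies on the "right side" of every
# training-case hyperplane {w : w . x = 0} in weight space.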
Can anyone explain how the network learns the weights in the hand-written digits example? I don't get the increment-and-decrement-weights idea.
Inspirational story about importance of Being a balanced controller th-cam.com/video/4BiKhka7APc/w-d-xo.html
This is the best video on hopfield and boltzmann machines on youtube.
I am surprised that I am the first to like the video. This is so nicely explained.
You are awesome
What a brilliant explanation! Thanks a bunch.
Four effective ways to learn an RNN: 35:24
Very insightful
I just wish he would define his terminology a bit. E.g. “greedy”, etc. But otherwise, brilliant. 31:05 probably how the brain works.
Dense and informative lecture, but starts to make sense after repeated viewing.
Learning from the master. Unfortunately, he doesn't explain the "convolution" part of CNNs at all.
This guy knows what he's talking about.
What is meant by the input vector? Why is the input vector perpendicular to the hyperplane? How do I differentiate between the input vector and the training-case hyperplane?
(With the bias folded into the weights, a training case's hyperplane in weight space is) W.X = 0. Every weight vector W lying on that plane has zero dot product with the input vector X, and a zero dot product means the two vectors are perpendicular, so X is the normal to that hyperplane. In other words, the input vector is perpendicular to the training-case hyperplane because the hyperplane is, by definition, the set of weight vectors orthogonal to it. Hope this is understandable
@@nayanvats3424 Thank you so much
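A small numeric check of the perpendicularity claim in the reply above (a sketch with made-up vectors): any two weight vectors lying on the training-case hyperplane w . x = 0 differ by a direction inside that plane, and that direction has zero dot product with the input vector x.

import numpy as np

x = np.array([3.0, 4.0])            # toy input vector

# Two weight vectors on the training-case hyperplane {w : w . x = 0}
w1 = np.array([4.0, -3.0])          # 3*4    + 4*(-3) = 0
w2 = np.array([-8.0, 6.0])          # 3*(-8) + 4*6    = 0

direction_in_plane = w1 - w2
print(np.dot(direction_in_plane, x))    # 0.0 -> x is perpendicular to the hyperplane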
This man is the Godfather of AI... truly a genius
30:10, 'the backward pass is completely linear' hahha oh man that was hilarious!
"This was really amazing in the year 16 BG" XD XD XD XD
The GOD of Neural Networks ( Geoffrey Hinton)
truly awesome! So many insights.
He is a brilliant explainer
Great!! Thank you so much
What about fitting the model (hyper-parameters) to the training set, rather than the validation or test set? In other words, taking your trained models and seeing which one performs best on the training data. Is that an acceptable practice? Does using the validation set reduce over-fitting to the rest of the training data?
If you decide on your hyperparameter values based on your training dataset, you run into the same problem you get when testing your model on the training set: the values are tuned specifically to data the model has already seen, so over-fitting remains a problem. You want your parameter and hyperparameter decisions to generalize. Using separate validation and test sets is essentially making two passes with distinct held-out sets to find two distinct categories of parameters.
@@JohnEusebioToronto What are distinct categories of parameters?
The first set of parameters is learned by the model to fit the training data; the second set is the hyperparameters. To evaluate both together you want a completely unseen test set.
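A minimal sketch of the split described in the two replies above (the data, candidate values, and the ridge-regression model are my own toy choices, not from the course): ordinary parameters are fit on the training split, the hyperparameter is chosen on the validation split, and the test split is touched only once at the end.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)      # made-up data

# 60/20/20 split: train -> fit parameters, validation -> pick hyperparameters,
# test -> a single final estimate of generalization error.
idx = rng.permutation(len(X))
train, val, test = idx[:60], idx[60:80], idx[80:]

best_lam, best_val_err = None, np.inf
for lam in [0.01, 0.1, 1.0, 10.0]:                           # candidate hyperparameter values
    # Ridge regression solved in closed form on the TRAINING split only.
    theta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(3),
                            X[train].T @ y[train])
    val_err = np.mean((X[val] @ theta - y[val]) ** 2)
    if val_err < best_val_err:
        best_lam, best_val_err = lam, val_err

print("chosen lambda:", best_lam)    # evaluate on X[test], y[test] exactly once afterwards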
SVM math is a challenge for me, but I am determined to master it.
So it all comes down to the most frequently occurring "n-words"
many thanks for uploading this
19:00 what the hell
Possibly one of the more concise and complete explanations of how to visualize the process... Side note, it looks like `Raghavan` of `Medium.com` agrees, because they totally took a screen-cap medium.com/datadriveninvestor/feature-scaling-why-what-where-how-683f61812f4c though without citing a source. ... though, much like +Nurhan Serin (however, not for the same reasons), I too think a bit of time spent on how reversing things can be done would be super helpful for those just getting into the subject.
I want to ask how to denormalize the h function we found, because I think it is necessary to transform the normalized h and y values back.
You use normalization just to speed up gradient descent and find the global minimum faster while designing your model; your features and labels always stay untouched in your database. (BTW, we do not modify the training set stored in your database.)
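Here is a minimal sketch of the point above (toy numbers and variable names of my own; mean/std normalization as used in the course): the features are scaled only to help gradient descent, and if y was also scaled during training, the hypothesis output is mapped back to the original units by inverting that same transformation.

import numpy as np

X = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])   # toy house features
y = np.array([400.0, 330.0, 369.0])                            # toy prices

# Mean/std (z-score) normalization, remembering the statistics for later
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sigma
y_mu, y_sigma = y.mean(), y.std()
y_norm = (y - y_mu) / y_sigma

# ... train on (X_norm, y_norm); suppose h_norm is a prediction from the fitted model ...
h_norm = y_norm[0]                     # placeholder standing in for a model output

# Denormalize: invert the transformation that was applied to y
h = h_norm * y_sigma + y_mu
print(h)                               # back in the original price units (400.0 here)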
Does anybody have the link to where I can get my password?
i thought u were smarter than this euler