Message from the creator:
I hope you've all enjoyed this series of videos. It was fun to collaborate with freeCodeCamp!
If you're interested in more content from me feel free to check out calmcode. Also, I'd like to give a shoutout to my employer, Rasa! We're using scikit-learn (and a whole bunch of other tools) to build open-source chatbot technology for python. If that sounds interesting, definitely check out rasa.com/docs/rasa/.
This is by far the most beginner friendly introduction to sk-learn I've seen
This is the way everything should be taught!
I love that you present concepts in a structured and systematic way, speaking slowly and clearly, using as few words as possible...
- starting with the concept and talking through drawing a logical diagram (which is so important for developing abstract thinking in terms of high level concepts, which is how we think when we are experienced in something).
- then writing clean, concise code to implement each part of the concept
- showing plots that directly demonstrate the effects of the entire iteration
Too many tutorials make the mistake of talking too much. A lot of videos also either assume too much or too little about the viewer's knowledge.
This seems to confidently strike the nail on the head!
Thanks!
Amazing review!
Exactly 👍
Are you serious???
Instructor didn't even show the dataset. How would anyone understand what's going on like this?
The way each dataset complements the associated pitfall you want to bring up at a given moment... wow. What an amazing intro -- it must have taken a lot of forethought and behind the scenes organization to make the flow of this video series seem so effortless. THANK YOU!!
please bro can you tell me where to find the appendix for the plot answer?
This video saved me from a 5K course! Thanks! Loads of Love!
I must agree with others: this is a great lecture. I mean... REALLY good. Vincent, do you have any more of these? This stuff is not only informative, but also pleasant to watch and listen to. Good, correct, and clear English is rather rare these days. Sadly. This lecture is good because it does not shy away from details. It also goes beyond just showing the API. It tries to build something new from the available "Lego" pieces. Which is great as it shows creativity and also how to dig deeper to understand the data. Very, very good exposition. Many thanks.
I feel you about clear and well enunciated English. I HATE having to 'interpret' what I'm hearing....too much extraneous Cognitive Load for an already high Intrinsic Load topic.
OMG! I love all the content that Vincent makes! I must watch this video!
Send me a link to his channel
Just completed the first part of the lecture. I have been using scikit for a couple of months! Dudeee! This is an eye opener!
Wow - I need to share this with the rest of the class! Thanks for making this video so understandable.
16:00 pipe
23:45 grid search
37:00 standard scaler
42:00 quantiles better
46:55 …
55:00 fraud ex
comeback dude. don't give up.
Thank you very much, much needed for beginners like me❤️,
I hope one day when I'll become expert, I will make free courses for others too❤️
I was rewatching the course to make my basics better , there were actually a lot of details man!!!
This is an excellent tutorial. I'm doing the Coursera IBM machine learning cert and supplementing it with this video. This overall is a much more palatable and easier-to-understand tutorial of scikit-learn, and really of machine learning models in general. Awesome work!
great video series, thanks! In this video @56:56 I think you meant to say that "there are way more cases without fraud than with fraud"
exactly why i came to the comments
Awesome Tutorial,
I have some suggestions regarding your content:
1. Tutorial on RUST
2. Tutorial on JULIA
3. Tutorial on AWK & SED
(Especially AWK)
4. Tutorial on LUA
What do you guys think????
Great video ! At 1:49:40 you could use ".values" at the end instead of np.array in the beginning.
the explanations are well detailed, this really helps with understanding the library and knowing exactly what to use and where to use it. You have helped a great community of beginners. 🙏🏾🙏🏾🙏🏾🙏🏾🙏🏾
I loved the end chapter that joined machine learning with expert systems I've used 30 years ago...
Does Vincent have his own channel? I just love his teaching style!!
google calmcode
you're welcome
Just Amazing once again, u guys rock as always...
Great video. Helped me with multiple sections that I had been fumbling my way through. No hard going over some things I already knew as well.
Thanks for this..👍
So amazing. Either this video is especially approachable or I've been exposed to these concepts enough now that they're finally starting to click. Probably both, but the former is definitely a significant factor. Well done
By the way, I'm working through the eCornell Python for Machine Learning and certificate in Machine Learning courses and this video is a perfect supplement. This is so helpful. Thank you!
thank you so much! I am slowly digesting this stuff and most likely will have to review it 2 or more times.
Well explained and high quality video and audio. Unlike some other videos out there.
thank you. your video makes me clear about scikit-learn and machine learning. you're my saint
is this tutorial still worth watching this year? it's 3 years old!!?
Im busy for the next 2h.
Me too
Way to go!
+=1
Thanks for this great material about scikit-learn, it is really helpful, and understanding is more comfortable with the educator's beautiful explanations. Huge thanks and keep going...
This video is awesome! Your narration style is fantastic.
excellent explanation for a beginner in ML .Thanks for the course.
Time series needed these polynomial parameters, I think. Cool tutorial though!
Do you guys like..read minds or something?
I was working on a django project yesterday, and you released one. I was stuck on ML today, and here's the video. Wicked!
Boston House Price Dataset is available on Kaggle for those who are saying scikit learn has removed it.
what a great course! thank you for openning the gates..
Hello, I just wanted to say for those who plan to do the videos. The data set 'Boston house prices' has been removed by scikit, therefore this tutorial is not really working anymore unless you change the dataset
35:56 as a non-American, it is so satisfying hearing z read as 'zed' not 'zi'. lol
i feel i learned so much, great job sir. Thank you :)
Just started learning scikit! thank you for the material
great series of demo videos. well explained for a beginner to learn from zero.
it's insane how good this video is
Wow such an awesome course, cant believe this is free
Very clear and helpful, thank you!
To anyone who can't find this dataset. It's been removed. You will understand the reason at around 31:00
50:00 CountVectorizer is a really good preprocessor for that too in my opinion
I am trying to learn from this course but it says that the Boston dataset has been removed from scikit-learn. What should I do?
You can still downgrade your scikit-learn version to 1.0.2 and it should be fine, also if you don't want to, you can use the fetch_california_housing instead
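For anyone hitting this: `load_boston` was removed in scikit-learn 1.2, and the maintainers point to `fetch_california_housing` as one replacement. A minimal sketch (the dataset is downloaded and cached on first use):

```python
from sklearn.datasets import fetch_california_housing

# California housing: a drop-in regression dataset now that
# load_boston is gone (scikit-learn 1.2+); downloaded on first use.
X, y = fetch_california_housing(return_X_y=True)
print(X.shape)  # (20640, 8)
```

The feature names differ from the Boston set, so any column-specific code from the video needs adjusting too.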
Great introduction to ML, educational and well explained to the core... 🙂
Very good teacher. Thanks for the content I learned a lot.
Kudos! Excellent training.
Thank you for uploading this video!
Wow thank u this really clarified my doubts :)
thanks my co-name --- Vincent, you inspire me to do machine learning
Awesome! Thank you for sharing!
awesome! continue at 46:05
Could you please explain why the min of recall and precision is lower than both? Could not find appendix.
+1, anyone knows where to find the appendix?
hint: min_both is calculated separately at every train/test split in the cross-validation
+1, same, could not find appendix
Very interesting, Thank you very much
Amazing presentation !!
How was the presenter able to hand annotate on top of the screen? Sometimes as strokes that are temporary, and sometimes as a whiteboard?
Could you please do "Python for Raspberry Pi 4". I cannot find a proper guide which properly introduces and explains from the very beginning. I would like to experiment with robotics (e.g. robot arm, etc.), but have no idea how to start programming it. All available guides use irrelevant projects to start with Raspberry.
Note: Thank you for the tutorial!
I could help with a little info if you are still interested,
very nice tutorial watched the whole thing
How did you watch a 2 hr video in 27 minutes?
Hi, what do you guys suggest me to watch if I'm totally new to ML?
I find this course a little bit beyond my knowledge. I thought that because I've got a foundation in DS I could jump into this course, but I think I'll need some intro-to-ML videos.
StatQuest
@@Caradaoutradimensao Awesome looks good!
Thanks a lot!
@@Caradaoutradimensao thanks bro
You are the ONE
Thank you Sir
PERFECT TIMING!!!
Is it still worth watching this video? How much has changed in 2 years? Thank you
Excited!!!
so well explained thank you
Fantastic. Thank you very much.
The section on Metrics gets confusing for me. Any easy to understand books I can read for understanding metrics?
The metrics section was overwhelming for me as well. There has to be some prerequisite groundwork before going for this.
very useful... I ran the code in IDLE but it didn't work well; there are some things that need revising, like a library being imported after the variable that uses it.
this has an awesome didactics
The Boston housing prices dataset has an ethical problem: as
investigated in [1], the authors of this dataset engineered a
non-invertible variable "B" assuming that racial self-segregation had a
positive impact on house prices [2]. Furthermore the goal of the
research that led to the creation of this dataset was to study the
impact of air quality but it did not give adequate demonstration of the
validity of this assumption.
The scikit-learn maintainers therefore strongly discourage the use of
this dataset unless the purpose of the code is to study and educate
about ethical issues in data science and machine learning.
I did not succeed in reproducing the figure @ 1:16:56. I'm always getting the same figure as the one just before, even though I did the log transformation of the "Amount" column. Has anyone had the same problem?
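One quick sanity check for a situation like the above is to verify that the transformed values really differ from the originals before re-plotting. A sketch with made-up amounts (the video uses the credit-card "Amount" field; `log1p` is used here because it handles zero amounts without producing `-inf`):

```python
import numpy as np
import pandas as pd

# Hypothetical amounts; np.log1p(x) = log(1 + x) is safe at zero.
df = pd.DataFrame({"Amount": [0.0, 1.0, 10.0, 1000.0]})
df["Amount_log"] = np.log1p(df["Amount"])

# If these columns were identical, the plot would not change either.
print(df["Amount"].equals(df["Amount_log"]))  # False
```

If the columns do differ, the unchanged figure usually means the plot is still reading the original column rather than the transformed one.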
Did anybody figure out why the mean of the min(recall, precision) was below the actual mean of both recall & precision? 1:10:57
The mean is always measured over all 10 splits, for precision, for recall AND for the minimum separately. In other words, FIRST the minimum is calculated, THEN the mean over all these minimums is calculated. If you had only one split, there would be no problem. But starting with two splits, we have: test_precision 1.0 and 0.46 = mean 0.73; test_recall 0.37 and 1.0 = mean 0.685. However, the minimums are 0.37 and 0.46, and if you calculate the mean of these two, it's 0.415, which is below 0.73 and below 0.685. So it's reasonable that the minimum is always a bit lower than each of the two lines. In fact, I never found the "appendix" Vincent was talking about. I just took the grid results as a dataframe, exported them to Excel and played around a bit.
@@meisterpianist Thanks for the explanation!
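The arithmetic in that explanation can be checked in a few lines. The two folds below are the hypothetical numbers from the comment, not values from the video:

```python
import numpy as np

# Hypothetical per-fold scores from two cross-validation splits.
precision = np.array([1.00, 0.46])
recall = np.array([0.37, 1.00])

# The minimum is taken per split FIRST, then averaged across splits,
# so its mean can sit below BOTH the mean precision and mean recall.
min_both = np.minimum(precision, recall)  # [0.37, 0.46]

print(round(precision.mean(), 3))  # 0.73
print(round(recall.mean(), 3))     # 0.685
print(round(min_both.mean(), 3))   # 0.415
```

The mean of per-split minimums is never above either individual mean, which matches the gap seen in the video's plot.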
truly a great tutorial!
For the Titanic example: 76% of the women survived, whereas just 16% of the men survived; that would have been a really good classifier to start with.
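That baseline is easy to compute with a groupby. A sketch on made-up rows (the real Titanic data gives roughly the 76% / 16% quoted above):

```python
import pandas as pd

# Made-up Titanic-like rows: 3 of 4 women survive, 1 of 6 men.
df = pd.DataFrame({
    "sex":      ["female"] * 4 + ["male"] * 6,
    "survived": [1, 1, 1, 0] + [1, 0, 0, 0, 0, 0],
})

# Survival rate per group; "predict survived iff female" is the
# corresponding one-rule baseline classifier.
print(df.groupby("sex")["survived"].mean())
```

Comparing a fitted model against such a one-rule baseline is a cheap way to check it has learned anything beyond the obvious.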
Data leakage? In the introductory section (like at 28:41) we have a grid search that contains a pipeline with the numeric features transformer. I guess this is a recipe for data leakage, because in our pipeline we first transform all the numeric features of the entire dataset and right after that start model training through the cross-validation process on the entirely transformed dataset. Our training sets, created during CV, contain previously standardized data, so the model "knows" something about the examples that are not in the training set and can predict better when it processes them in the prediction step. Thus we should exclude any numeric feature transformation from our grid search, am I right? If I'm not, please explain the mechanism.
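On the mechanism the comment asks about: when the scaler is a step inside the Pipeline object handed to GridSearchCV, scikit-learn clones and re-fits the whole pipeline on each training fold, so the scaling statistics never see the validation fold and there is no leakage. Leakage only occurs if you fit the scaler on the full dataset before cross-validating. A sketch on synthetic data with a hypothetical parameter grid:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# The scaler is INSIDE the pipeline, so each CV split re-fits it
# on that split's training portion only -- no leakage.
pipe = Pipeline([("scale", StandardScaler()),
                 ("model", LogisticRegression())])
grid = GridSearchCV(pipe, {"model__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

The anti-pattern would be `StandardScaler().fit_transform(X)` on everything first and then cross-validating only the model on the result.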
Beautiful lecture!
Can I ask you how you are able to draw on the screen? I understand you are probably using a Stylus pen over some touch screen surface, which mirrors your display, but what software are you using for that?
for better learning you could also provide links to the data used in this course, sir, if you can
What do you mean watch all these videos? Are there different videos series?
Very nice, thank you.
Really it is amazing course
@43:00 where you perform the QuantileTransformer step and plot it...shouldn't the scatter plot fn take X (non transformed) and X_new (transformed) data as params? Little confused why we passed X_new[:, 0] X_new[:, 1]. It seems like we plotted 2 different features (indexed by 0, 1) after transformation step?
No, that is actually NumPy indexing syntax:
X_new[rows, cols] => select those rows and those columns at once.
So X_new[:, 0] chooses all rows of column 0, and X_new[:, 1] chooses all rows of column 1, i.e. the two transformed features plotted against each other.
Hope this helps
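A tiny demo of that slicing on a made-up array:

```python
import numpy as np

X_new = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])

# ":" selects every row; the second index picks one column,
# so each slice is one feature across all samples.
print(X_new[:, 0])  # [1. 2. 3.]
print(X_new[:, 1])  # [10. 20. 30.]
```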
Sorry, I have a question:
Which versions of Python and OpenCV are compatible?
I have followed a lot of tutorials, but I have been unable to find compatible versions of Python and OpenCV.
Please help me find a solution for my own project. Thank you so much.
It is a delicate subject, but I think the question of the algorithm being racist is an ill-advised one. The real question under it is whether the % of black population parameter affects the house price or not. Is the aim of a data scientist to make the actual prediction, or to make the data fit a point of view (which, btw, I totally endorse in principle)?
Very good tutorial.
So far into the video, I don't see the data split into train and test samples. Does that mean the model is testing on seen data? If yes, how reliable are these metrics?
Someone shed some light, please.
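For reference: `cross_val_score` and `GridSearchCV` do split the data internally, so each fold is scored on rows the model did not train on. An explicit held-out set looks like this (synthetic data for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100, random_state=0)

# Hold out 25% of the rows; the model never sees them during fit.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on unseen rows
```

So metrics reported from cross-validation are already out-of-sample; a separate test split is still useful as a final check after tuning.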
Great video!
thanks for this great video.
amazing content, thanks a ton!
what will be the prerequisite for scikit learn ??
Great crash course.
I have one question on the running time of the GridSearchCV pipeline: how can I minimize it? My model's mean fit time was at least 9 min. My processor is an AMD Ryzen 5 5500U with Radeon Graphics, 2.10 GHz and 6 cores. Thank you in advance!
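One knob that often helps on a 6-core machine is `n_jobs=-1`, which runs the cross-validation fits in parallel on all cores; shrinking the grid or the number of folds also cuts time multiplicatively. A sketch with a hypothetical small grid:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=5, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("model", KNeighborsRegressor())])

# n_jobs=-1: fits run on all cores; total fits = grid size * cv folds.
grid = GridSearchCV(pipe, {"model__n_neighbors": [1, 3, 5]},
                    cv=3, n_jobs=-1)
grid.fit(X, y)
print(grid.best_params_)
```

Subsampling the data during the search and refitting the winner on the full set is another common time-saver.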
Where are the datasets for the sklearn metric tutorial (credit card dataset, etc)? Thank you!
"Building dependencies failed"
error: subprocess-exited-with-error
Cannot import boston housing price dataset.
At the metrics part, when you plot mean recall and mean precision, how is it that i got the same results for the train and test sets?
25:50 using space instead of tab .... stops watching :) (joke) great video
1:11:00 what’s the answer though?
00:19 I did not understand why the prediction diagram changed after changing the k value from 5 to 1? KNN is a classification algorithm, but here it was used like a regression.
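On that question: scikit-learn ships KNN in a regression flavour too (`KNeighborsRegressor`), which predicts the mean of the k nearest targets. With k=1 it reproduces every training point exactly, which is why the diagram changes so sharply. A sketch on a toy curve:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Noise-free sine curve as a toy regression target.
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = np.sin(X).ravel()

k1 = KNeighborsRegressor(n_neighbors=1).fit(X, y)
k5 = KNeighborsRegressor(n_neighbors=5).fit(X, y)

# k=1 memorises the training data; k=5 averages 5 neighbours,
# smoothing the curve away from the exact training values.
print(np.allclose(k1.predict(X), y))  # True
print(np.allclose(k5.predict(X), y))  # False
```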
Thanks!
this is one of the best videos I have seen covering sklearn so well. Thanks a lot! would love to learn sklearn in more depth for different scenarios ..
Hi Vignesh, could you suggest a book which covers the metrics section?
great tutorial! one question: how do you make the plots at 1:29? the 'make_plots' function
He imported matplotlib.pyplot and used a scatter plot, I think.