Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
I have watched only 4 minutes so far and I had to pause and write this comment. I will say this is one of the best tutorials I have seen in data science. Sir, you need to take this to another level. What a great teacher you are!
Thanks for the feedback my friend 😊👍
100% aligned... I am doing an external course but have to refer to your sessions to understand the topics in it. Amazing effort!
For anyone stuck with the categorical features error:
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("town", OneHotEncoder(), [0])], remainder='passthrough')
X = ct.fit_transform(X)
X
Then you should be able to continue the tutorial without further issues.
thanks bro
thanks a lot! it helps
Thank you brother.
Hey, thanks for the code.
I tried using your code, but despite converting X to an array, it still gives me this error:
"TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array."
@@Ran_dommmm I know you said "despite converting X to an array", but double-check that you have used the .toarray() method correctly; the error message seems pretty clear on this one. This function may help confirm that a dense numpy array is being passed:
import numpy as np
def is_dense(matrix):
    return isinstance(matrix, np.ndarray)
Pass X in as matrix and it should return True. Good luck fixing this.
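Building on the reply above: one way to avoid the sparse-matrix TypeError entirely is to densify the ColumnTransformer output only when it is actually sparse. A small sketch, where the town/area values are made up to mirror the tutorial's data:

```python
import numpy as np
import scipy.sparse
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# hypothetical rows shaped like the tutorial's data: [town, area]
X = np.array([["monroe", 2600],
              ["west windsor", 2800],
              ["robinsville", 3300]], dtype=object)

ct = ColumnTransformer([("town", OneHotEncoder(), [0])], remainder="passthrough")
X_t = ct.fit_transform(X)

# ColumnTransformer can return a scipy sparse matrix depending on density;
# convert to a dense ndarray only when needed
if scipy.sparse.issparse(X_t):
    X_t = X_t.toarray()
```

The `issparse` check makes the snippet safe in both cases, since ColumnTransformer chooses sparse or dense output based on its sparse_threshold.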
Exercise solution: github.com/codebasics/py/blob/master/ML/5_one_hot_encoding/Exercise/exercise_one_hot_encoding.ipynb
Everyone, the error with categorical_features is fixed. Check the new notebook on my GitHub (link in the video description). Thanks Kush Verma for the pull request with the fix.
Thank you for the wonderful explanation, sir. However, I am getting the error __init__() got an unexpected keyword argument 'catergorical_features' for my line onehotencoder = OneHotEncoder(catergorical_features=[0]). Is it because of a change of versions?
what is the solution to this?
Sir, I get this error when I specify categorical features: __init__() got an unexpected keyword argument 'categorical_features'
@@urveshdave1861 Have you got any answer for this? I am having the same error
@@urveshdave1861 okay .. i will do that. thanks
@@urveshdave1861 Hey I am also getting the same error. how did you resolve it?
Hi,
Your explanation is very simple and effective.
Answers for the practice session:
A) Price of Mercedes Benz, 4 yr old, mileage 45000 = 36991.31721061
B) Price of BMW X5, 7 yr old, mileage 86000 = 11080.74313219
C) Accuracy = 0.9417050937281082 (94 percent)
Same bro
same bro.... thx for replying so that i can check my results
Sir, please continue your machine learning tutorials; yours are among the best I have seen so far.
sure Gaurav, I just started deep learning series. check it out
@@codebasics
Kindly explain the concept of dummies in deep learning as well
Anyone can be a teacher , but real teacher eliminates the fear from students .. you did the same !! Excellent knowledge and skills
Sreenivasulu, your comment means a lot to me, thanks 😊
How to download these attached files from github
Code in tutorial: github.com/codebasics/py/tree...
Exercise csv file: github.com/codebasics/py/blob...
Check video description and find the paragraph that starts with *to download CSV and code...* That section explains how to download those files
@@codebasics thanks
15:50 write your code like this:
ct = ColumnTransformer(
[('one_hot_encoder', OneHotEncoder(categories='auto'), [0])],
remainder='passthrough'
)
X = ct.fit_transform(X)
X
This way it will work fine; otherwise it will give an error.
What is the use of (categories='auto') and the 'one_hot_encoder' name here?
Thank you, you're a lifesaver! I was trying multiple ways since categorical_features has now been depreciated.
@@jollycolours Correct, the categorical_features parameter is deprecated; the following steps should be used instead:
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([('one_hot_encoder', OneHotEncoder(), [0])], remainder='passthrough')
X = np.array(ct.fit_transform(X), dtype=float)
This guy is AMAZING! I have spent 2 days trying dozens of other methods, and this is the only one that worked for my data and didn't raise an error. This guy totally saved my sanity; I was growing desperate, as in DESPERATE! Thank you, thank you, thank you!
I am glad it was helpful to you 🙂👍
Wonderful Video.
This is so far the easiest explanation I have seen for one hot encoding. I have been struggling for a long time to find a proper video on this topic, and my quest ended today.
Thanks a lot, sir.
I was confused about where to start studying ML, and then my friend suggested this series... It's great :-)
Are you following any other courses or sources? And have you begun any development?
I want to know how helpful this playlist is. Kindly reply.
@@sauravmaurya6097 It's quite helpful if you are a beginner, in the sense of not coming from an engineering or programming background. You can accompany this with Coursera's Andrew Ng course.
@@sauravmaurya6097 If you already know calculus and Python programming at an intermediate level, ML will feel easy. After doing this, go to the deep learning series, because that's what is used in industry.
This ML tutorial is by far the best one I have seen. It is so easy to learn and understand, and your exercises also help me apply what I have learned so far. Thank you.
Glad it helped!
Your ability to simplify things is amazing, thank you so much. You are a natural teacher.
You really made it very easy to understand such new concepts, thanks a lot.
Starting from minute 12:30 about OneHotEncoder: some updates in sklearn prevent using categorical_features=[0]. Here is the code update as of April 2020 (with dtype=float, since np.str is deprecated in recent numpy):
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
columnTransformer = ColumnTransformer([('encoder', OneHotEncoder(), [0])], remainder='passthrough')
X = np.array(columnTransformer.fit_transform(x), dtype=float)
X = X[:, 1:]
model.fit(X, y)
model.predict([[1, 0, 2800]])
model.predict([[0, 1, 3400]])
The code is working but gives a different prediction compared to dummies.
Plus, my X is showing 5 columns instead of 4.
I was entering the 0 and 1 wrongly. I am getting the same answer now; thank you for the code.
thanks buddy
the god of data science......Amazing explanation sir..kudos to your patience in explanation
Glad it was helpful!
Even in '23 your video is such a relief. Kudos to your teaching.
I achieved the same result using a different method that doesn't require dropping columns or concatenating dataframes. This alternative approach can lead to cleaner and more efficient code:
df = pd.get_dummies(df, columns=['CarModel'], drop_first=True)
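A runnable sketch of that one-liner, with a hypothetical CarModel/Mileage frame standing in for the exercise data:

```python
import pandas as pd

# hypothetical frame mirroring the exercise's car data
df = pd.DataFrame({
    "CarModel": ["BMW X5", "Audi A5", "Mercedez Benz C class", "BMW X5"],
    "Mileage": [69000, 35000, 57000, 22500],
})

# drop_first=True removes the first dummy column (here the Audi A5 one),
# which avoids the dummy variable trap without manual column dropping
df = pd.get_dummies(df, columns=["CarModel"], drop_first=True)
```

After this, df has the Mileage column plus one dummy column per remaining category, ready for model.fit.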
First of all, 1000*Thanks for sharing such content on youtube..
I got an accuracy of 94.17% on training data.
Bandham, I am glad you liked it buddy 👍
I was shocked after the first 5 minutes of the video; I never thought it would be so easy and fast! Thanks A LOT!
Miyuki... I am glad you liked it
You are the best teacher on YouTube; I have never seen anyone better.
Merc: 36991.317
BMW: 11080.743
Score: 94.17%
Your answer is perfect Ankit. Good job, here is my answer sheet for comparison: github.com/codebasics/py/blob/master/ML/5_one_hot_encoding/Exercise/exercise_one_hot_encoding.ipynb
thanks for posting the answer bro
Could we upvote this comment to the top? Been looking for this for quite some time now. This is important, and this comment matters.
@@codebasics I used pandas dummy variable instead of using onehotencoding, because it is too confusing.
Got the same answer using OneHotEncoder after correcting tons of errors and watching videos over and over.
I must say this is the best course I've come across so far.
This was really well done! Kudos to you! It's hard to find clear and concise free tutorials nowadays. Subscribed and hope to see more awesome stuff!
You have gift of explaining things even to the layman. Big Up to you
Thanks a ton Wangs for your kind words of appreciation.
I am getting 84% accuracy without encoding the variable, but after encoding I am getting 94% accuracy on the model. Thank you for your teaching. Doing a great job!
I will say this is one of the best tutorial i have seen in ML
I'm reading a textbook that has an exercise to study this same dataset to predict survived. I just finished the exercise from the book - I can't seem to go past 81% score.
Thanks for your awesome explanation
This is the best machine learning playlist i have came across on youtube😃👍, Hats off to you sir.
One of the best explanation for Encoding 👌👍
Glad it was helpful!
To understand the difference between LabelEncoder and OneHotEncoder: medium.com/@contactsunny/label-encoder-vs-one-hot-encoder-in-machine-learning-3fc273365621
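For a quick hands-on feel for the difference the linked article describes, here is a small sketch (town names taken from the tutorial's dataset; the exact label integers depend on alphabetical order of the categories):

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

towns = [["monroe township"], ["west windsor"], ["robinsville"], ["monroe township"]]

# LabelEncoder: one integer per category, which implies an ordering
# that does not really exist between town names
le = LabelEncoder()
labels = le.fit_transform([row[0] for row in towns])

# OneHotEncoder: one binary column per category, no ordering implied;
# .toarray() densifies the sparse result and works across sklearn versions
onehot = OneHotEncoder().fit_transform(towns).toarray()
```

LabelEncoder sorts its classes alphabetically, so monroe township becomes 0, robinsville 1, and west windsor 2, while the one-hot matrix has one column per town.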
My model scores 94% accuracy. Thank you, sir, for the amazing video.
superb and precisely explained
Thank you 🙂
Awesome, you're explaining concepts in very simple manner.
Vishwa I am happy to help 👍
How can I like this video more than 100 times!
I am happy this was helpful to you.
This is really the best series to get started with ML
How are u starting?
Glad it was helpful!
@@shinosukenohara.123 I am watching this channel, Krish Naik and Andrew NG course on Coursera
For Mercedes Benz I got 51981.26, for BMW I got 39728.19, and the score is 94.17%. Thank you very much for making ML easy.
I wish I could give this videos 2 thumbs up! Great explanation of all the steps in one-hot encoding! Thank you!!
The Data Science GOAT! One day I will send you a nice donation for all that you have contributed to my journey sir!
A PLACE TO RUN TO WHEN ONE IS STUCK, THANK YOU SO MUCH SIR
If anyone got stuck at OneHotEncoder at 16:26, then type this command and execute it: pip install -U scikit-learn==0.20
Thanks 😃
Still stuck; it did not execute even with your solution.
the best video series on ML sir ....Thank you very much sir....
Thank you very much... wonderful work. Special thanks from Morocco, in the north of Africa.
You teach with passion! thank you for the series!
The label encoding done for the independent variable column, 'town' in the second half of the video, I think, isn't needed. Instead just doing One Hot Encoding is enough. Wonderful contribution anyway. Thanks!!
I agree
Your videos are awesome
Glad you like them!
model.predict([[45000,4,0,0]])=array([[36991.31721061]]),
model.predict([[86000,7,0,1]])=array([[11080.74313219]]),
model.score(X,Y)=0.9417050937281082.
Thanks sir for these exercise
15:50 write this code
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([('town', OneHotEncoder(), [0])], remainder = 'passthrough')
x = ct.fit_transform(x)
x
thanks sir nice lecture
sir you are really a great teacher
you teach everything so nicely
even tough things become easy when you teach
thanks a lot
I am here in 2024, after 6 years, and I want to say that this playlist is wonderful!
I hope you update it, because there are many changes in the syntax of sklearn now.
Hey next week I am launching an ML course on codebasics.io which will address this issue. It has the latest API, in depth math and end to end projects.
Highly Qualitative.
That image on one hot encoding 🤣🔥
Thank you sir🎉. You made my ML Journey Better.. 🤩
This is an amazing tutorial! saved me so much time and brought so much clarity!!! Thank you!
I also got them correct. Sir, this course is amazing. You have made it so easy to understand.
Glad to hear that
difficult topics are easily understood, Thank you so much for the content sir
Hi Dhaval, your explanation on all the topics is crystal clear.
Can you please make videos on NLP also
You make it easy with your explanation !! Thank you !!
I learned a lot from the exercise that you gave at the end of the video, thank you so much sir!
Simply excellent explanation with very simple examples!
@13:20 we need to do :
dfle = df.copy() ?
because otherwise changes in dfle will reflect back to df
Thanks :)
Yes u r right
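A minimal sketch of why the .copy() in the thread above matters (toy values, not the tutorial's full dataset):

```python
import pandas as pd

df = pd.DataFrame({"town": ["monroe", "west windsor"], "price": [550000, 565000]})

alias = df        # just a second name for the SAME DataFrame
dfle = df.copy()  # an independent copy

dfle["town"] = [0, 1]  # label-encode only the copy
```

Because dfle is a real copy, df still holds the original town strings; had we written dfle = df, the encoding would have overwritten df as well.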
definitely one of the best videos to learn from!
Excellent video - thank you!
Mercedes = array([[36991.31721061]])
BMW = array([[11450.86522658]])
Accuracy = 0.9417050937281082
Thanks for your time and knowledge once again!
thank you, this helped me so much with multivariate regression with many categorical features!
Your tutorial video is helping so much for knowing more about ML.
I am happy this was helpful to you.
You are doing a wonderful job, people like you inspire me to learn and share the knowledge i gain. It is very useful for me. All the best.
Wait wait... I don't see the point 😕
The first half of the video does the same thing as one hot encoding (the second half of the video), but the second half is more tedious and takes more steps.
Then why not use pd.get_dummies instead of OneHotEncoder???
What's the advantage of using one hot?
I personally like pd.get_dummies as it is convenient to use. I wanted to show two different ways of doing the same thing, and there are some subtle differences between the two. Check this: stackoverflow.com/questions/36631163/pandas-get-dummies-vs-sklearns-onehotencoder-what-is-more-efficient
@@codebasics thank you :]... btw you make grt videos
They're basically the same; however, pd.get_dummies is easier to use.
Thank u, sir.
yes I agree
Please make regression video using preprocessing library with standaridization and normalization variables
Thanks for the excellent video, but due to recent changes, ColumnTransformer from sklearn.compose must now be used for one hot encoding.
Preeti, can you give me a pull request.
Very nice explanation, appreciated
Sir, very nice explained
Glad it was helpful!
nice teaching, really outstanding thanks a lot
Excellent video.., thank you so much.
That's a great tutorial of one-hot encoding. I was unable to find a complete example anywhere. Thanks for sharing.
Thanks Adnan for your valuable feedback
Hi sir!! You teach ML in the easiest way. Thanks a lot! I am going through your videos and assignments. I got the answers: Mercedes 36991.31, BMW 11080.74, and model score 0.9417, i.e. 94.17%. My question is: how do I improve the model score? Is there any way to improve it using the features?
Someone please help!! At 15:14 I am getting an error for { y = df.price }.
It shows "AttributeError: 'DataFrame' object has no attribute 'price'"
That means there is no column labelled price. Redo it; you might have lost the column by executing a drop command multiple times.
The import linear regression statement lol. Amazing tutorial. :D
Thanks for updating the exercise code for one hot encoding.
4:08
Why is mine showing True/False, not 0/1?
yeah it is showing the same for me, however you can try converting your dummies into int as:
dummies = dummies.astype(int)
This will convert true and false to 1 and 0 respectively
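A self-contained sketch of that fix, with toy town names:

```python
import pandas as pd

df = pd.DataFrame({"town": ["monroe", "west windsor", "robinsville"]})

# newer pandas versions return boolean True/False dummies by default
dummies = pd.get_dummies(df["town"])

# cast back to the 0/1 integers shown in the video
dummies = dummies.astype(int)
```

The dummy columns come out in alphabetical order (monroe, robinsville, west windsor), and after the cast each row is a plain 0/1 indicator vector.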
This helped me a lot in my assignment, thank you so much code basics
Glad it helped!
I was learning through a paid course, and then I had to come here to understand this concept of dummy variable.
@14:01, please explain how come you applied label encoding to nominal categories; moreover, LabelEncoder should be applicable to the target column only.
Beautiful explanation, very helpful
nicely explained👌
Thank you so much for the detailed step by step explanation.
Glad it was helpful!
Bro, at 16:43 onwards, why did you drop the first column? And why did you assign the entire thing to X?
This module makes my code hot!
This use of OneHotEncoder now appears to be deprecated. You may wish to make a note about this in the video.
Check my pinned comment at the top (the first comment)
@@codebasics Sure, on the official docpage here (scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) the following is written regarding categorical_features: "Deprecated since version 0.20: The categorical_features keyword was deprecated in version 0.20 and will be removed in 0.22. You can use the ColumnTransformer instead."
Excellent as usual!
Great videos! Unfortunately it becomes harder and harder to code along with the video, because there are more and more changes in the libraries you use. For example, the sklearn library removed the categorical_features parameter of the OneHotEncoder class. It was also the case for other videos in the playlist. It would be great to have the same playlist in 2022 :)
Point noted. I will redo this playlist when I get some free time from tons of priorities that are in my plate at the moment
@@codebasics Thank you for the reply and again : Great job for all the quality tutorials!
Many Thanks ! Great Explanation :)
First of all, thank you for making life easier for people who want to learn machine learning. You explain really well. Big fan. When I tried to execute categorical_features=[0], it gave an error; it seems this parameter has been deprecated in the latest version of scikit-learn, and they recommend using ColumnTransformer instead. I was able to get the same accuracy, 0.9417050937281082. Another thing I wanted to know: when you initially used the label encoder and converted categorical values to numbers, why did we specify the first column as categorical when it was already an integer value?
in order not to remove the column by hand, you can use drop_first=True while using get_dummies.
Good point
Thank you for the very well explained tutorial. I have one question though: you are training on all of your data here, and yet the model score is only 0.95. Why is that? Shouldn't it be 1? If you split your data and then trained, it would make sense, but in your case it doesn't. What am I missing?
Alper, it is not true that the score is always one if you use all your training data. Ultimately, for a regression problem like this, you are trying to find a best-fit line using gradient descent. This is still an *approximation* technique, hence it will never be perfect. I am not saying you can never get a score of 1, but a score less than 1 is normal and accepted.
Thank you for this series. Such great help
Glad it was helpful!
You won my heart; this is exactly what I was searching for.
If anyone is interested, we can also skip the label encoder when using the column transformer, by using the below:
x = df[['town','area']].values
y = df['price'].values
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
ct = make_column_transformer(
    (OneHotEncoder(categories='auto'), [0]),
    remainder="passthrough"
)
X = ct.fit_transform(x)
X = X[:, 1:]
model.fit(X, y)
Thanks neenu for the tip. The notebook in video description is actually updated to make use of column transformer.
@@codebasics I am sorry I did not check that. Thank u sir for your videos, words are not enough to convey my gratitude for sharing your expertise to all.
12:45 What is the need for the label encoder? Why can't we use OneHotEncoder directly?
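For what it's worth: the LabelEncoder step was mainly a workaround for very old scikit-learn versions, where OneHotEncoder required numeric input; since around version 0.20 it accepts string columns directly. A small sketch with made-up town names:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# string categories, no LabelEncoder step needed
towns = np.array([["monroe township"], ["west windsor"], ["robinsville"]])

# modern scikit-learn one-hot encodes the strings directly;
# .toarray() densifies the sparse output
encoded = OneHotEncoder().fit_transform(towns).toarray()
```

Each row ends up with exactly one 1 in the column for its town, which is all the label-encode-then-one-hot pipeline in the video was producing anyway.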