A small summary for those who are going to start: he preprocessed the dataset a bit (removing NaN values, adding features, and splitting the categorical column into binary columns), then scaled, split, trained, and tested with linear regression and random forest, finding the best estimator at the end (there is no explanation of what estimators are, so read up on random forests before doing this).
how did he change ocean proximity from object to int?
@@mbulelondlovu9427 he took one feature like
AT APPROX 31:00 - If ISLAND is not showing, I just increased test_size from 0.2 to 0.25, or until the split became large enough to include ISLAND. Not sure of a real fix, but this worked to get past the hurdle. Take care.
Mate you explain everything so concisely and keep it so interesting! Really enjoyed this video
I agree with you
11:47 train_data.corr(numeric_only=True)
Thanks
this was really helpful
thanks
This saved me, thanks
bruh
Could you briefly explain what the linear regression did? Were all the variables taken into account to fit a slope that predicts the value from the existing data? What would happen if we removed some of the negatively correlated features, and how would the result respond? I fail to understand what we did apart from producing cool images. If you could make brief lectures on regression, random forests/decision trees, and clustering, with some situation analysis, it would help us. Thanks.
Just found your channel! I'm on a journey to become a data scientist and really build a solid understanding. This is a great first project to get under my belt. Having you by my side while going through the steps is awesome. I will also try doing projects all by myself, but following along first is a great way to get more comfortable and see the steps involved and how you tackle them! Greetings from Sweden!
Hi. What I would recommend in the hyperparameter tuning phase on the RFR model is to use np.arange() instead of a hard-coded list that limits the model to two or three options.
Yes, this might take a lot of time to run, but using RandomizedSearchCV would be okay as a starter; then, if you see the model improving, you can switch to GridSearchCV.
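On that note, the function is presumably np.arange (NumPy has no np.range). A rough sketch of what that could look like for the RFR - the parameter names and ranges here are illustrative assumptions, not the video's exact grid:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical ranges instead of a two- or three-value hard-coded list
param_distributions = {
    "n_estimators": np.arange(20, 101, 20),
    "max_depth": np.arange(2, 11),
    "min_samples_split": np.arange(2, 11),
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions=param_distributions,
    n_iter=10,        # samples 10 random combinations instead of the full grid
    cv=3,
    random_state=42,
)
# search.fit(X_train, y_train) on your data, then inspect search.best_params_
```

Once RandomizedSearchCV narrows the region down, a GridSearchCV over a tighter range around search.best_params_ is the usual follow-up.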
One of the best machine learning tutorials on YouTube, thanks a lot for the lucid and well-detailed explanation.
hi, do you have this code, can you give it to me ?
@@thinhtruong9405 I would highly recommend you to watch the video until end, search for the concepts and try to write the code yourself. That's how you can fully take benefit of this content.
@@softwareengineer8923 I see, but I have a problem, so if you have this code, please give it to me :((. I'm from Vietnam and my English is not good.
Great video, thanks a lot. But I'm missing the most interesting part: how can I use the model to get the house value for a property that isn't part of the data used?
did u discover that?
You can create a function that takes the model and X as arguments, and then you can predict any value you want.
@@techsnail8581 dattebayo
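Along those lines, here is one possible shape for such a helper (hypothetical names throughout; it assumes the model and scaler were fitted on the one-hot-encoded training frame and that feature_columns is X_train.columns):

```python
import pandas as pd

def predict_price(model, scaler, feature_columns, house: dict) -> float:
    """Predict the value of a single new house (hypothetical helper)."""
    # Build a one-row frame and align it to the training columns,
    # filling any dummy columns the new house doesn't mention with 0
    row = pd.DataFrame([house]).reindex(columns=feature_columns, fill_value=0)
    return float(model.predict(scaler.transform(row))[0])
```

The key point is that a new house has to go through exactly the same preprocessing (feature engineering, one-hot columns, scaling) as the training data before model.predict sees it.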
The good: feature engineering, I liked the one hot encoding explanation, and how easy you made it look.
The bad: extremely superficial explanations. E.g., at min 29, "we get a score of 66, which is not too bad, but also not too good" - great, thanks for the in-depth explanation of what 66 means and how to interpret it. Most of these "tutorials" are just people recording themselves writing code, like it's a big deal. The really important piece is understanding the business problem and interpreting results in terms everyone can understand; I can copy/paste code from a hundred different websites. Also, linear regression is not about getting a 66 or whatever score; it's about predicting a value, in this case house prices. How is "66" relevant to that goal?
The ugly: you speak way too fast for no reason at all. You're making a tutorial, not speed racing.
Thanks anyway.
I'm impressed, your explanation is so smooth, and I can keep track and understand every step and line of code you input 💯
Thank you so much for the detailed video; everything was explained very well. I would suggest this could be the best video to start with for machine learning projects as a beginner. Personally, this video helped me a lot as I am taking on my first ML project.
I am stuck at "reg.score". Please help me resolve my error.
The random forest algorithm samples features at random, so even if we literally change nothing and fit the model again and again, we can see the scores changing (±2%).
Also, only one variable, median income, was strongly related to the target (because it had a correlation > 0.5).
If many variables had been above 0.5, we might have seen drastic changes in max_features during the grid search.
I have two questions
1. Why didn't you log-transform all of the skewed features in train_data (many columns were skewed)?
2. I didn't see any change in the histograms before and after. How did you decide that the data had been converted to a normal distribution?
The bars should fit a normal distribution curve, with most of the mass in the middle.
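One way to check this numerically rather than by eyeballing histograms is to compare the skewness before and after the transform (0 means symmetric). A sketch on synthetic right-skewed data, standing in for columns like total_rooms or population:

```python
import numpy as np
import pandas as pd

# Synthetic right-skewed column (lognormal), mimicking total_rooms / population
s = pd.Series(np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=1000))

print(s.skew())            # large positive value: strongly right-skewed
print(np.log1p(s).skew())  # much closer to 0 after the log transform
```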
your tutorials are the best thing i found on the internet
Great content, but as a new developer interested in ML, I do wish you had gone into a bit more detail on the key features being leveraged in the walkthrough. I would not mind spending an hour or so more to fully understand the methods and functions you're leveraging in this demo.
All in all, thank you for your hard work and dedication in sharing what I believe to be humanity's biggest development since the Industrial Revolution.
Keep on techin', sir.
exactly@patricks2595
This was a great video. Just discovered your channel today. Definitely going to subscribe!
ya think?
I should have cut my losses when you made the test/train split that early; at around 28:00 the instructions became too confusing to be useful. Until then, thanks for the instructions.
Exactly lmao, I for the life of me could not understand why he would not completely preprocess the data first and then split it.
For those in the comments section, never do inplace=True.
why?
What should we do to substitute that?
True
@@skripandthes
You are making changes to the dataframe that you can't reverse unless you restart the whole runtime of your workspace, like a Jupyter notebook.
@@olanrewajuatanda533
Just define a new dataframe.
Instead of doing this:
df.dropna(subset=[col], inplace=True)
Do this:
df = df.dropna(subset=[col])
This way you don't hard-code changes into the dataframe, and you can just edit the cell and run it again to correct any mistakes.
Guys, please, how was he able to copy and paste so fast at 26:01, where he was changing the train data to test data?
Oh my!! Just amazing!! Make more such videos. Thank you so much.
You don't need to normalize the data when dealing with linear regression; that's the main advantage of the method. It is based on coefficients, and those coefficients adjust to the order of magnitude of each variable!
How does this channel not have 1M subscribers yet!!
Keep it up bro! Pls do more videos with predictions
Continuity issue apparently: did you drop the ocean_proximity column before you ran the correlation matrix? My train_data.corr() fails due to values like '
plt.figure(figsize=(15,8))
sns.heatmap(train_data.loc[:, train_data.columns!='ocean_proximity'].corr(), annot=True, cmap="YlGnBu")
I used this code to ignore the column. Hopefully this will help you get through it.
@@MatthewXiong-gk8nz thanks so much buddy
Amazing work man
explained better than my instructor xD thanks man
11:50 I got an error using corr() because of the non-numeric column 'ocean_proximity'. How did you do it? Did you change the code of pandas?
Edit: I found it myself. Go to the python installation path/libraries/pandas/core/frame.py,
find the corr function definition, and set numeric_only: bool = True.
Thanks bro
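Glad it works, but editing pandas' own source is risky - the change is global and silently undone by any upgrade. The same effect is available per call via the keyword argument, e.g.:

```python
import pandas as pd

df = pd.DataFrame({
    "median_income": [1.5, 3.0, 4.5],
    "median_house_value": [100.0, 200.0, 300.0],
    "ocean_proximity": ["INLAND", "NEAR OCEAN", "ISLAND"],
})

# numeric_only=True makes corr() skip the string column instead of raising
corr = df.corr(numeric_only=True)
print(corr)
```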
Hello, what should I do if my X_test doesn't have any value in ISLAND? I can't perform reg.score.
thanks for your help
The heatmap cannot be rendered while there are non-numerical values (ocean_proximity) in the train data.
I have experienced the same issue - how did the author manage to render a heatmap without dropping this column?
Try sns.heatmap(train_data.corr(numeric_only = True), annot=True, cmap= "YlGnBu")
I had the same issue, and I resolved it by dropping the column:
# visualize a correlation matrix with the target variable
# dropping "ocean_proximity" because it's not numerical
data_without_OP = train_data.drop(['ocean_proximity'], axis=1)
plt.figure(figsize=(15, 8))  # adjust the figure size if necessary
sns.heatmap(data_without_OP.corr(), annot=True, cmap="YlGnBu")
plt.show()
-------
After that, you may face a problem where the heatmap doesn't show all the numbers; it's a problem with the matplotlib version you are using.
Save your notebook and close it, then create a new blank notebook and run this code:
!pip install matplotlib==3.7.3
If you run it inside your project, it won't be allowed and your notebook will freeze, because you are using it.
Excellent tutorial...
Thanks for the vid! First day on your channel, really happy I found you!
And it seems you use some sort of autocomplete when typing in the terminal? Or is your typing just that fast?
Everything was great, but the fact that I had to debug my entire code because we split earlier and had to preprocess the test data again was so painful, especially in JupyterLab.
Saw this as a how-to-build project; this is my first one. Let's see where this will take me.
boss so appreciated I can't even express it
Best tutorial I've seen.
Man! Your computer runs effortlessly😅 It's soo smooth...
What are the specs? 😅
I need to get one like that.😂
How did you get the .corr() method to ignore the ocean_proximity column even though it had non-numeric values in the beginning??
train_data.corr(numeric_only=True) will do
@@gongxunliu5237 I didn't even know that was a parameter, tysm
@@gongxunliu5237 wow I rewatched the video 10 times to understand how he was able to get past that error and am still lost... I ended up converting the ocean proximity column into an id column prior to running the model... did corr() used to automatically filter out the string columns or something in the past?
@@jonathanitty5701 i think it was either that, or the default value changed from True to False, not sure which
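For the record, that matches my understanding: in older pandas, corr() silently dropped non-numeric columns, and pandas 2.0 flipped the numeric_only default to False, so it now raises instead. A version-proof alternative is to select the numeric columns explicitly first:

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [2.0, 4.0, 6.0], "label": ["a", "b", "c"]})

# Works the same on every pandas version: keep numeric columns, then correlate
corr = df.select_dtypes(include="number").corr()
print(corr)
```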
my ISLAND column gets deleted when creating test_data - any way to fix this?
sameeee
🤯 Great video.
Thank you for nice explanation. Keep this good work. I want to know what is the outcome of this model. What insight I got after run the model.
Sir, I am getting a -1.25 score!
What do I do now?
thank you !!!
it was really helpful
Great video. Apart from Linear Regression and Random Forest, are there any other algorithms that might be suitable for this type of problem?
KNN Regressor
KNN, decision trees (a random forest is a collection of decision trees), gradient boosting, and XGBoost (naive Bayes and Gaussian naive Bayes are classifiers, so they don't apply directly to a regression target).
Try each of them with different parameters and select the best one with the best set of parameters.
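A quick way to line several of these up on the same split (a sketch on synthetic data, not the housing set; XGBoost is left out since it needs a separate install):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Fit each model on the same split and compare R^2 scores
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
print(scores)
```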
Hey bro, where is the dataset of California house prices? I didn't find it here or on your GitHub.
Or have you not shared it with us, although you said the link to the dataset is in the description?
What's the interpretation of the "score"? Is it R-squared for regression? How about for random forests? Do they compare from one model to another?
I can't get over you sir
You are a legend
Explained everything perfectly, Your channel is going to be my go to channel, to learn data science!!!
So how do you find the working details of the model? It's great to know the 'score' is 0.8 or whatever but what parameters are used to get that 0.8? In other words, I train a model with a score of 0.8 then get some new data points (lat, long, #bedrooms, total_bedrooms, etc (all except house price)) What's the equation I use to generate an expected house value and where do I get it?
Great video though.
The model/function is produced by the fitting algorithm and, for a random forest, can't easily be written out as an equation. All we can do is pass in the feature values and get the prediction.
@@Ailearning879 but can you please help me where to test the model which is trained? since we only got the model's accuracy or score. And I'm a beginner in ML
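One caveat to the reply above: for the linear regression specifically, the fitted equation is inspectable - coef_ and intercept_ are exactly the slope and offset, so you can write the prediction out by hand (a toy example, not the housing features):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])  # constructed so y = 1*x1 + 2*x2 exactly

reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.intercept_)  # the "equation" of the fitted model

# Manual prediction: intercept + sum of (coefficient * feature)
manual = reg.intercept_ + X @ reg.coef_
print(np.allclose(manual, reg.predict(X)))  # → True
```

Random forests, by contrast, really have no compact equation; there you are limited to predictions and feature importances.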
Hi. Very well explained! thank you.
Sorry to say, but in my code "ocean_proximity" is not shown.
Informative video, quick question why would you not want the values to be zero when taking the log of the values?
Because log(0) is undefined. That is, there is no power you can raise a number to that gives 0.
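Right - and numerically, NumPy returns -inf (with a warning) rather than an error. This is also why np.log1p, i.e. log(1 + x), is a common alternative when a column can legitimately contain zeros:

```python
import numpy as np

values = np.array([0.0, 1.0, 10.0, 100.0])

with np.errstate(divide="ignore"):   # silence the log-of-zero warning
    print(np.log(values))            # first entry is -inf: log(0) is undefined

print(np.log1p(values))              # log(1 + x) maps 0 -> 0 safely
```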
BTW, how do you copy and paste so quickly around minute 14 when you were doing the 'log' adjustment on the train_data? Which shortcut are you using?
alt + shift + down arrow key.
When I ran x_test_s, I got: could not convert string to float: 'INLAND'. How do I solve it?
same here
I wouldn't waste your time. This code doesn't work and he races through everything. Much better tutorials out there.
Bro preprocess the data properly
@@sumankumarsahu9711 I followed the exact way he showed here.
@@PulakKabir .corr(numeric_only=True)
Fixed the correlation portion at least
Where can I get the notebook? I tried searching your GitHub repository but don't see any related to house price prediction. Can you please share the notebook?
In X_test I am getting 14 columns, while in X_train I am getting 15 columns. What should I do?
Add one more blank column/variable to the test set for the missing category.
@@parth1211 how to do that?
Hey, have you solved this error?
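One fix that works here (a sketch; it assumes X_train and X_test are the frames after pd.get_dummies) is to reindex the test columns against the training columns, so a dummy column missing from the test split - typically ISLAND - is added back filled with zeros:

```python
import pandas as pd

# Toy stand-ins for the post-get_dummies frames
X_train = pd.DataFrame({"median_income": [1.0, 2.0], "INLAND": [1, 0], "ISLAND": [0, 1]})
X_test = pd.DataFrame({"median_income": [3.0], "INLAND": [1]})  # ISLAND dummy missing

# Align test columns to train columns, filling absent dummies with 0
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)
print(list(X_test.columns))  # now identical to X_train's columns
```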
How did you get a 0.66 score? I made similar data transformations and got only a 0.25 score and 0.78 MSE.
That's a great video, but how do I get the predicted values now? I mean, I built the model, but how would I get predictions?
Thanks for the vid
At 13:00 why didn't you apply np.log to 'median_income' and 'median_house_value'? They seem pretty skewed as well
what if im missing a column ISLAND?
I found that I could increase the test_size from 0.2 to 0.25, or until it became large enough that it included the island by chance. Not a real fix, but it works for this. Take care.
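A less luck-dependent option might be to stratify the split on ocean_proximity, so every category - including the rare ISLAND - lands on both sides (a sketch with toy data):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "median_income": range(20),
    "ocean_proximity": ["INLAND"] * 16 + ["ISLAND"] * 4,  # rare category
})

# stratify keeps the category proportions equal across both splits
train, test = train_test_split(
    data, test_size=0.25, stratify=data["ocean_proximity"], random_state=42
)
print(sorted(test["ocean_proximity"].unique()))  # → ['INLAND', 'ISLAND']
```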
Hi NeuralNine. I am having a doubt about executing the corr() function. How can I move forward?
Nice, ty
Great tutorial! One correction at 12:45 - longitude is inversely correlated with latitude rather than with the median house income.
How did you fix it
guys while training the data always remember to write train and then test the data, like x_train,x_test,y_train,y_test like that otherwise target variable in this case will give NaN values
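To spell that out: train_test_split always returns the pieces in a fixed order (train then test, for each array passed in), so the unpacking names just have to follow it:

```python
from sklearn.model_selection import train_test_split

X = [[i] for i in range(10)]
y = list(range(10))

# Fixed return order: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print(len(X_train), len(X_test), len(y_train), len(y_test))  # → 8 2 8 2
```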
Is it just me who's getting the error "Input contains NaN, infinity or a value too large for dtype('float64')"? For both linear as well as random forest
11:45 use test_data.corr(numeric_only=True) instead, as it will return an error otherwise. I do not understand how you did not get an error.
I got this and had to apply the function above to solve it: "ValueError: could not convert string to float: 'NEAR OCEAN'"
16:57 Second problem I ran into, if anybody can help: pd.get_dummies(train_data.ocean_proximity) returns True & False instead of 1s & 0s.
@@marawanmyoussefsame here 😢
This problem can be solved by chatgpt but later it creates a problem 🥲
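If I recall correctly, recent pandas versions default get_dummies to a boolean dtype; casting gets the 1s and 0s back:

```python
import pandas as pd

s = pd.Series(["INLAND", "ISLAND", "INLAND"])

# Either cast afterwards or pass dtype=int directly to get_dummies
dummies = pd.get_dummies(s).astype(int)
print(dummies)
```

Since booleans already behave as 0/1 in arithmetic, most models don't actually care, but the cast keeps the frames looking like the ones in the video.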
I guess you mean train_data.corr(numeric_only=True), because test isn't defined yet. Correct me if I'm wrong.
thank you so much
Why did you say this is classification at 39:39 when it is a regression problem?
it was great thank you a lot bro.
thankkk youuu !!!!
love this
Can you add custom code so that the model predicts the sale price when a new input is given?
I don't know why, but my code generates errors even though I write exactly the same thing as you do. And I have no idea what to do. 😅
Great video. And oh my, what is the intro music? I'm a music artist and would love to hear the full thing.
Hey bro! Can you please guide me in number prediction in a specific position by reading existing excel data!? I wanted to generate 6 numbers with this logic
Where do you define X_test_s? When I want to do the scaling, I should use X_test_s as in your code, but I get an error because I don't have X_test_s.
Where can I get the full code?
7:27 Wouldn't you rather use data.isna().sum()? If a row has a missing value somewhere, you might not catch it otherwise.
isnull().sum()?
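Either works - isnull is literally an alias of isna in pandas, so the two calls return identical results:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, np.nan, 1.0]})

print(df.isna().sum())    # per-column count of missing values
print(df.isnull().sum())  # identical: isnull is an alias of isna
```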
May I ask why encoding is not applied to the longitude and latitude?
respect -= 100
I got a ValueError when I used .corr() on my train data, something along the lines of not being able to convert a str into a float, so I am unable to make a heat map. I am an absolute beginner, so can someone please help me out? Anything will be appreciated.
No matter what I do, I can't get the join method to work.
same here
Why do we need a normal distribution in total_rooms, population, etc.?
As of this writing, I am not able to find the exact dataset (.csv file) for California house prices. If someone could provide me with the link to the one used in this video, it would be greatly appreciated!
Timestamp : 20:00
Just Nice
x_test_s = scaler.transform(x_test) is not working. Can anyone help me resolve this?
Hello there, can I ask for your help doing data preprocessing for a specific dataset? It has 53,884 rows and 8 columns.
Is there a link to the Python notebook?
Hi! How did you get those Vim bindings in jupyter?
Hey, how come your channel is so much more interesting yet you have fewer followers? I think you should make more series on different languages, mainly C#.
At minute 28:40, line 31, I typed the same "reg.score(X_test, y_test)" but it doesn't work. The ValueError is "Input X contains NaN."
What did I do wrong? Can anyone help me? I would like to complete this project. Thank you.
run all cells again
@@samarthamera doesn't work
@@imansaid2321 did you figure it out? It's not working for me.
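"Input X contains NaN" at reg.score usually means the test split still contains missing values (total_bedrooms has NaNs in this dataset) because only the training half was cleaned. The test features need the same treatment; a minimal sketch (toy stand-in data) that drops the offending rows while keeping the targets aligned:

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the test split after the earlier preprocessing
X_test = pd.DataFrame({"total_bedrooms": [2.0, np.nan, 5.0], "median_income": [1.0, 2.0, 3.0]})
y_test = pd.Series([100.0, 200.0, 300.0])

# Keep only rows with no missing features, and the matching targets
mask = X_test.notna().all(axis=1)
X_test, y_test = X_test[mask], y_test[mask]
print(len(X_test), len(y_test))  # → 2 2
```

Filling with the training median (X_test.fillna(...) using a value computed on the train split) is the other common option and avoids shrinking the test set.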
Where is the source code of this project? I get some errors.
Thanks, it's hard to find good content in my language, so I watch it here, even though I have to use subtitles. Thanks so much!
Can you upload the data path over here
tf, my linear regression model score is coming out as -433.35
How do I get the same dataset? Where?
What to do if I get notified error
I am done with the project and understood what it does, but a main question still arises in my mind: what predictions did we make?
Where are the prices that were predicted? Can someone please explain? I am new to data science.
if you find out the answer to your question, please let me know
Let's say, for example, that you have made a web application with a form in which you can input data about the house you own. The user enters the data about the house, the data is passed to the model, and it estimates the price of the house from the data provided, based on the data it was trained on. A real-life use case would be a website used to sell properties: such a model could encourage a person to sell their property if the estimated price satisfies them, and it could also help people who don't know the real-estate market well to estimate their property's price. Hope that helps ;)