Every time I watch this video, I gain a new appreciation for it.
The best lecturer ever, thank you sir Sreeni. God Bless you!
Dear Sreenivas, I really want to thank you for your lessons, your work, and the time you spend. You can't imagine how much you have helped me. Your lessons are very useful and informative; I especially like the lessons on time series forecasting and regression using different models. It is very cool. Thank you so much!
You are amazing. You explain some very intricate concepts so simply that everyone can understand them. Thanks a ton!!
You are my hero!!!!
I am a beginner, but your videos have raised a lot of questions for me.
thank you so much
I hope to be able to realize my idea soon
Best of luck!
Awesome video man, this was by far the most helpful one out there for me
Thank you very much for this. I'm so glad to find your channel. It's very well explained.
You're very welcome!
Wonderful ! Thank you for the video Sreeni.
This was actually awesome. Really enjoyed it
Nice content, man. I'm doing my master's degree and all your tutorials are very helpful. Keep up the great work!
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(128, input_dim=13, activation='relu'))  # first hidden layer: 128 neurons, 13 input features
model.add(Dense(64, activation='relu'))                 # second hidden layer: 64 neurons
I'm a little new to this. How do we select the number of neurons here as 128 and 64?
It's somewhat arbitrary; you can choose any values.
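For anyone wondering how to compare a few choices in practice, here is a minimal sketch, assuming the same 13-feature setup from the video and that X_train and y_train already exist; the layer sizes tried are purely illustrative:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_model(n1, n2):
    model = Sequential()
    model.add(Dense(n1, input_dim=13, activation='relu'))
    model.add(Dense(n2, activation='relu'))
    model.add(Dense(1))  # single linear output for regression
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
    return model

for n1, n2 in [(32, 16), (64, 32), (128, 64)]:
    history = build_model(n1, n2).fit(X_train, y_train, validation_split=0.2,
                                      epochs=100, verbose=0)
    print(n1, n2, 'best val MAE:', min(history.history['val_mae']))
Whichever combination gives the lowest validation error without overfitting is a reasonable choice.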
Dear Sreeni, thanks a million for sharing such a helpful tutorial. I have a question: how can I define a multivariate regression neural network with a weighted mean squared error loss function?
Your channel is very useful! Thanks!
Thank you.
Amazing Sir!
Keep watching
Is deep learning the best model for data that has both linear and non-linear relationships? Also, what do the Dense 128 and 64 values mean? Thank you! Love this video, straight to the point.
@rajeswari reddy patil Sir, your videos are very knowledgeable. Thanks for your contribution. Please provide more videos on object detection with bounding boxes and its variations.
Hi, I want to ask: I set random_state=42 in both the random forest regressor and the neural network regressor. If I copy the code and run it again, the random forest gives the same results for the MAE and MSE metrics, while the neural network produces different results. Why is that? I was told it is because each run of a neural network initializes new weights and biases, but I already set random_state as a seed so that the weights and biases should stay the same. So I'm a bit confused.
Hi Sreeni, thank you for the video. At the end you said that the random forest can give you the contribution list of features; is it the same for the PCA method? Since PCA also gives you a bunch of eigenvalues.
Random forest allows you to rank your features based on their contribution towards the decision making. PCA is actually remapping your features into a new set of features (components). In other words, PCA creates completely new features using your existing features.
@@DigitalSreeni Thank you for the explanation!
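To make the difference concrete, a quick sketch, assuming X is a DataFrame of the 13 features and y the prices as in the video; the parameter values are arbitrary examples:
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA

rf = RandomForestRegressor(n_estimators=30, random_state=42)
rf.fit(X, y)
print(dict(zip(X.columns, rf.feature_importances_)))  # ranks the existing features

pca = PCA(n_components=5)
X_components = pca.fit_transform(X)           # brand-new features (components)
print(pca.explained_variance_ratio_)          # variance captured per component, not feature importance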
Love this tutorial... ❤ Sir. Thanks
Hi Sreeni, thank you for the excellent video. I have a question on scaling features that are collected after training the model and will be used for prediction with the existing model. If I scale these newly collected features, their means and standard deviations will most likely differ from the means and standard deviations of the original features. In this case, how do I appropriately preprocess the newly collected features?
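One common way to handle this, sketched below, is to fit the scaler on the training data only and reuse that fitted scaler (its stored means and standard deviations) on any newly collected data; X_train and X_new are placeholder names:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std from training data only
# later, for newly collected features:
X_new_scaled = scaler.transform(X_new)          # reuse the training mean/std; do NOT refit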
Well done, you were great!
Thank you.
What if we have labeled columns? I have 21 columns total, and the 21st column is the prediction target. All are float values. How can I proceed? 😔
Thank you for creating this amazing knowledge base; the videos are very helpful and easy to absorb.
How do you set a seed for the Sequential model so it generates the same output every time?
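A sketch of one way to do this in TensorFlow 2.x is below; note that even with seeds set, full run-to-run determinism can still depend on GPU operations and library versions:
import os, random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'
random.seed(42)          # Python's own RNG
np.random.seed(42)       # NumPy RNG (data shuffling etc.)
tf.random.set_seed(42)   # TensorFlow/Keras RNG (weight initialization)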
Thanks for sharing the knowledge in such detail.
Thank you very much.
Please, I would like to know why you chose 128 (Dense(128...)) and 64; are there any criteria?
In my case, I have 3 features and one label; what are the best numbers for both values? Thank you.
Start with your best guess. I don't think anyone can tell you what a good number of neurons is for your specific problem. You can build a few different models and compare their accuracies.
@@DigitalSreeni OK, thank you, bro.
@@DigitalSreeni Another question, please: how do I get R squared for the neural network model?
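One option, sketched here, is to compute R squared from the network's predictions with scikit-learn; model, X_test_scaled and y_test are assumed to come from the video's workflow:
from sklearn.metrics import r2_score

y_pred = model.predict(X_test_scaled)
print('R2:', r2_score(y_test, y_pred))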
Thank you so much. I would like to see one video on Bayesian Regularization methods applied to neural networks
Dropout is an approximation of Bayesian regularization for neural networks. I'm not sure adding separate Bayesian regularization makes sense if you already introduce dropout. It would be a cool exercise to compare the effects of dropout, L2, L1, and Bayesian regularization techniques. I know for sure L2 and L1 are available in Keras as regularizers. All you do is: from keras import regularizers, then add them to your Dense layers. Also, do not forget that data augmentation also helps the model generalize.
@@DigitalSreeni Thank you so much for this advice. Let me try them out.
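For reference, a minimal sketch of dropout plus L1/L2 regularization on the same toy architecture; the regularization strengths are arbitrary examples, not tuned values:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras import regularizers

model = Sequential()
model.add(Dense(128, input_dim=13, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))   # L2 penalty on this layer's weights
model.add(Dropout(0.2))                                      # randomly drop 20% of activations
model.add(Dense(64, activation='relu',
                kernel_regularizer=regularizers.l1(0.001)))  # L1 penalty on this layer's weights
model.add(Dense(1))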
Do we not need to scale y (the target output)?
Y is what you are trying to predict so no need for scaling. Scaling is needed if you have multiple parameters that affect the outcome/output and if these parameters vary a lot in range.
@@DigitalSreeni I have six inputs that vary from 0.0003 to 688956.
Love the video man!
Thank you, sir. If I want the model to give me the most expensive/cheapest apartment, is there any way to do that? Or is the model just a prediction model for input parameters?
Hi Sreeni, a quick question. When using linear regression with 'Scaled' regressors, did you exclude the intercept term? I assume since the regressors are standardized, the intercept term no longer exists in lr
Thank you for this!!!! I have a question, please: can I use a multilayer perceptron for a regression problem with one output but without using an activation function? Is this more efficient? If yes, can you reply with the line of code for the output layer without an activation function? THANKS
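Yes. For a single-output regression the output layer is usually left linear, which in Keras simply means omitting the activation argument; it is about producing unbounded real-valued outputs rather than efficiency. A minimal sketch, with the hidden-layer size chosen arbitrarily:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(64, input_dim=13, activation='relu'))
model.add(Dense(1))   # no activation argument = linear output, standard for regression
model.compile(optimizer='adam', loss='mean_squared_error')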
Thank you. Please add multi-output regression using Keras and TensorFlow
Hi there. I developed a model based on your video. But I get a negative R2. What is the problem?
Hello sir,
Currently I am working on an artificial neural network using the Keras library on Google Colab. When using the feature importance code there, it shows that 'Sequential' object has no attribute 'feature_importances_'. Could you please help me solve this?
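Keras models do not expose feature_importances_ (that is an attribute of tree-based models such as random forest). One alternative is a simple permutation-importance loop like the sketch below, which assumes a trained Keras model plus X_test_scaled and y_test as NumPy arrays:
import numpy as np
from sklearn.metrics import mean_squared_error

baseline = mean_squared_error(y_test, model.predict(X_test_scaled, verbose=0))
for i in range(X_test_scaled.shape[1]):
    X_perm = X_test_scaled.copy()
    X_perm[:, i] = np.random.permutation(X_perm[:, i])   # scramble one feature
    score = mean_squared_error(y_test, model.predict(X_perm, verbose=0))
    print(f'feature {i}: MSE increases by {score - baseline:.4f}')
Features whose scrambling hurts the error the most are the ones the network relies on most.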
Sir, how do I find the R2 score (model accuracy) for a NN? We can easily find the R2 score with other machine learning algorithms.
Hello.
Thank you for this tutorial, it is very useful.
I have a question.
What type of neural network was built in this video?
Is it a Feed Forward Neural Network?
Thank you :)
Sir, do we have a tutorial on Gaussian process regression (GPR) for non-parametric data?
fantastic
Hi sir, I need some advice. The train/test split worked fine, but the problem comes when I try the regression. My dataset has dtypes of object, int, and float, and my X is used to predict what kind of cell it is (whether G, GM, or M). But an error is raised: cannot convert string to float. What should I do?
Not sure what the exact problem is but if you are trying to train using strings (e.g. G, GM, M) then it will give an error. You need to encode them first into numbers, for example 1 for G, 2 for GM and 3 for M.
I have done this in one of my recent videos. Video number 149.
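A sketch of that encoding step with scikit-learn; df['cell_type'] is a placeholder name for wherever the G/GM/M labels live:
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
df['cell_type_encoded'] = le.fit_transform(df['cell_type'])    # e.g. G -> 0, GM -> 1, M -> 2
print(dict(zip(le.classes_, le.transform(le.classes_))))       # show the mapping it chose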
How do I plot the graph of predicted vs. real values, along with the R-squared value?
You are doing a great job!!!!
I am a beginner, but your videos improved me a lot.
can you do a tutorial on how to use LSTM for spatial prediction?
In line 81, val_loss is extracted from history, but where is val_loss defined?
Please have a look at what is stored in the 'history' variable and you will understand what's going on. In summary, the history variable stores the information about loss values and any tracking metrics for each epoch. If the training involves any validation data, it also stores validation loss in addition to the training loss.
@@DigitalSreeni okay sir. Understood now. Thanks
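For anyone else wondering: val_loss only shows up if validation data is provided to model.fit. A quick sketch for inspecting what the history object holds (names assumed from the video's workflow; the exact keys depend on your compiled metrics):
history = model.fit(X_train_scaled, y_train,
                    validation_split=0.2, epochs=100, verbose=0)
print(history.history.keys())    # e.g. dict_keys(['loss', 'mae', 'val_loss', 'val_mae'])
loss = history.history['loss']
val_loss = history.history['val_loss']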
Amazing!!! Thank you!!
Which architecture has been used here?
Dear Sreeni. Could you please provide a tutorial on multi-target/ objective regression problems using ML?
Hi, I am looking for the same. If you get any info, please share it here. Thanks.
Do I need to scale the price too? Or does it make no difference?
While doing NNs.
You need to scale all inputs that will be used in training the neural network. You do not need to scale the outputs.
Sir, which file number in the GitHub repository corresponds to this code?
Nice tutorial sir
Thanks and welcome
Ty
Hi sir! I've written you an email asking about some problems I had while running the code... I'd really appreciate it if you could help me, Mr. DigitalSreeni!!
I am getting 100+ emails a day asking for help. I wish I had that kind of time and bandwidth to help everyone. Unfortunately, I cannot help with individual projects. I structure my lectures in such a way that they are easily digestible by anyone with some basics in Python. I do understand that some issues need help, which is why I created the Discord server, so we can all help each other as a community. Here is the link to my Discord server: discord.gg/QFe9dsEn8p
@@DigitalSreeni thank you very much and sorry for the inconvenience
I can't find the code on GitHub; there are many files. Can you help me, please?
Code is organized based on video number, so for video number 141 please look at the file name starting with 141.
@@DigitalSreeni thank you sir
LOVE U
Hello! You did not have a 'W' column for Whites, or 'A' for Asians, or 'I' for Indians for that matter. Is the 'B' column there to ensure racism is baked into the future?
Some people are surprised when Google engineers and other FAANG employees come forward to talk about implicit racism. At least here you have explicit racism.
Good day!
This is the original source of the data used in this Python tutorial, a dataset derived from information collected by the U.S. Census Service concerning housing in the area of Boston, Mass.:
Harrison, D. and Rubinfeld, D.L., 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol. 5, pp. 81-102, 1978.
Looks like the study was done back in 1978. I am sure there are recent studies that include larger demographics.
Wow...
Loss is coming as loss: nan.
Epoch 1/100
11/11 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - loss: nan - mae: nan - val_loss: nan - val_mae: nan
Epoch 2/100
11/11 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: nan - mae: nan - val_loss: nan - val_mae: nan
Epoch 3/100
11/11 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: nan - mae: nan - val_loss: nan - val_mae: nan
Epoch 4/100
11/11 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: nan - mae: nan - val_loss: nan - val_mae: nan
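A NaN loss usually points to bad input values (NaN or inf in the data), unscaled features with huge ranges, or a learning rate that makes the optimization blow up. A quick sanity check, assuming X_train and y_train are numeric NumPy arrays:
import numpy as np

print(np.isnan(X_train).any(), np.isnan(y_train).any())   # any missing values?
print(np.isinf(X_train).any(), np.isinf(y_train).any())   # any infinities?
# If the data is clean, try scaling the features and/or lowering the learning rate.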
acc = history.history['mean_absolute_error']
val_acc = history.history['val_mean_absolute_error']
plt.plot(epochs, acc, 'y', label='Training MAE')
plt.plot(epochs, val_acc, 'r', label='Validation MAE')
plt.title('Training and validation MAE')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
This code is not working
I think you need to add 'mean_absolute_error' to the metrics list on line 73
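In recent Keras versions the history key matches the metric name you pass to compile, so a compile call like the sketch below should make the plotting code above work (the rest of the model definition is assumed from the video):
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['mean_absolute_error'])
# history.history then contains 'mean_absolute_error' and, when validation
# data is used, 'val_mean_absolute_error', matching the keys plotted above.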