Harsh Kumar
India
Joined 25 Sep 2007
I have been working in the field of Data Science, Analytics and Technology for 9 years. This channel features videos on coding, machine learning and technology.
Weight Initialisation techniques in Pytorch | Quick Walkthrough | Tutorial for Beginners
We look at how to use different Weight Initialisation techniques in Pytorch. The idea is to provide a practical code walkthrough to help beginners start working with Pytorch.
List of Weight Initialisation techniques discussed in the video:
1. Default Weight Initialisation
2. Zero Initialisation
3. Random Initialisation
4. Xavier Initialisation
5. He Initialisation
Notebook on Github: https://github.com/harsh1kumar/learning/blob/master/machine_learning/pytorch/05_weight_initialisation.ipynb
Kaggle Competition/Data Source: www.kaggle.com/c/santander-customer-transaction-prediction
Timestamps:
00:00 - Intro
00:22 - Get data and preprocess
00:37 - Default Weight Initialisation
02:09 - Zero Initialisation
04:01 - Random Initialisation
05:21 - Xavier Initialisation
06:24 - He Initialisation
07:10 - Outro
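The five techniques listed above can be sketched in a few lines of PyTorch. This is a minimal illustration on a single `nn.Linear` layer (layer sizes here are arbitrary, not the ones from the notebook):

```python
import torch
import torch.nn as nn

# 1. Default: PyTorch applies its own (Kaiming-uniform-based) initialisation
#    when the layer is constructed.
layer = nn.Linear(in_features=8, out_features=4)

# 2. Zero initialisation: every weight starts at 0 (generally a bad idea,
#    since all neurons then receive identical gradients).
nn.init.zeros_(layer.weight)
assert torch.all(layer.weight == 0)

# 3. Random (normal) initialisation
nn.init.normal_(layer.weight, mean=0.0, std=0.01)

# 4. Xavier (Glorot) initialisation: variance scaled by fan-in and fan-out
nn.init.xavier_uniform_(layer.weight)

# 5. He (Kaiming) initialisation: variance scaled for ReLU activations
nn.init.kaiming_uniform_(layer.weight, nonlinearity="relu")

print(layer.weight.shape)  # torch.Size([4, 8])
```

Each `nn.init.*_` function modifies the tensor in place, so the same layer can be re-initialised between experiments.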
Views: 204
Videos
Regularization techniques in Pytorch | Quick Walkthrough | Tutorial for Beginners
438 views • 1 year ago
We look at how to use 5 different regularization techniques in Pytorch. The idea is to provide a practical code walkthrough to help beginners start working with Pytorch. List of regularization techniques discussed in the video are: 1. Batch Normalisation 2. Dropout layer 3. L2 regularisation (weight_decay) 4. Early Stopping 5. Data Augmentation Notebook on Github: github.com/harsh...
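Three of the techniques from that list can be shown in a short PyTorch sketch (the network and hyperparameters are illustrative, not the ones from the notebook):

```python
import torch
import torch.nn as nn

# A tiny network combining batch normalisation and a dropout layer.
model = nn.Sequential(
    nn.Linear(20, 16),
    nn.BatchNorm1d(16),   # 1. batch normalisation
    nn.ReLU(),
    nn.Dropout(p=0.3),    # 2. dropout layer
    nn.Linear(16, 1),
)

# 3. L2 regularisation via the optimizer's weight_decay argument
optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-4)

# Dropout and batch norm behave differently in train vs eval mode,
# so switch to eval() before inference.
model.eval()
x = torch.randn(4, 20)
out = model(x)
print(out.shape)  # torch.Size([4, 1])
```

Early stopping and data augmentation are training-loop concerns rather than layers, so they don't appear in the model definition itself.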
Optimizers in Pytorch | Quick Walkthrough | Tutorial for Beginners
230 views • 1 year ago
We look at how to use some important optimizers in Pytorch, without going into too much detail about the theory. The idea is to provide a practical code walkthrough to help beginners start working with Pytorch. List of Optimizers discussed in the video are: 1. SGD 2. Mini-Batch SGD 3. SGD with Momentum 4. Adagrad 5. RMSprop 6. Adam Notebook on Github: github.com/harsh1kumar/learning/...
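The optimizers in that list share the same PyTorch interface, so swapping one for another is a one-line change. A minimal sketch with placeholder data (learning rates here are just common defaults):

```python
import torch
import torch.nn as nn

# Placeholder regression data: 32 samples, 10 features.
x, y = torch.randn(32, 10), torch.randn(32, 1)

for opt_cls, kwargs in [
    (torch.optim.SGD, {"lr": 0.01}),
    (torch.optim.SGD, {"lr": 0.01, "momentum": 0.9}),  # SGD with momentum
    (torch.optim.Adagrad, {"lr": 0.01}),
    (torch.optim.RMSprop, {"lr": 0.01}),
    (torch.optim.Adam, {"lr": 0.001}),
]:
    model = nn.Linear(10, 1)
    optimizer = opt_cls(model.parameters(), **kwargs)

    # One training step: forward, backward, update.
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # parameters updated in place
```

Mini-batch SGD is the same `torch.optim.SGD` applied to batches drawn from a `DataLoader` rather than the full dataset.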
Activation functions in Pytorch | Quick Walkthrough | Tutorial for Beginners
113 views • 1 year ago
We look at how to use some important activation functions in Pytorch, without going into too much detail about the theory. The idea is to provide a practical code walkthrough to help beginners start working with Pytorch Notebook on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/pytorch/02_activation_functions.ipynb Kaggle Competition/Data Source: www.kaggle.com/c/santander-cu...
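A quick way to get a feel for the common activation functions is to apply them to the same small tensor and compare outputs (a minimal sketch, not from the notebook):

```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])

# ReLU clips negatives to zero; sigmoid squashes to (0, 1);
# tanh squashes to (-1, 1).
print(torch.relu(x))     # tensor([0., 0., 2.])
print(torch.sigmoid(x))  # ≈ tensor([0.1192, 0.5000, 0.8808])
print(torch.tanh(x))     # ≈ tensor([-0.9640, 0.0000, 0.9640])
```

Inside a model these are typically used as layers, e.g. `nn.ReLU()` in an `nn.Sequential`.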
Simple Neural Network using Pytorch | Quick Walkthrough | Tutorial for Beginners
434 views • 1 year ago
We look at how to implement a simple NN using Pytorch, without going into too much detail about the theory. The idea is to provide a practical code walkthrough to help beginners start working with Pytorch Notebook on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/pytorch/01_simple_nn.ipynb Kaggle Competition/Data Source: www.kaggle.com/c/santander-customer-transaction-predict...
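A skeleton of such a network might look like the following. The 200-feature input width matches the Santander dataset's `var_0`..`var_199` columns, but the layer sizes are otherwise illustrative, not taken from the notebook:

```python
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    """A minimal feed-forward binary classifier."""

    def __init__(self, n_features=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # output is a probability of the positive class
        )

    def forward(self, x):
        return self.net(x)

model = SimpleNN()
probs = model(torch.randn(5, 200))  # 5 dummy samples
print(probs.shape)  # torch.Size([5, 1])
```

Training would pair this with `nn.BCELoss()` and an optimizer such as `torch.optim.Adam`.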
MLFlow Tutorial | Hands-on | ML Tracking and Serving
8K views • 1 year ago
MLFlow hands-on tutorial which provides a detailed walkthrough of the process to log model details using MLFlow and demonstrates how to use the same model for online serving. The example used in this video is an image classification model built using Keras to classify cats and dogs. Notebook on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/keras/02_cat_dog_classifier.ipynb Tim...
pytest Tutorial: How to write tests in Python | Data Science
236 views • 1 year ago
This video provides a brief beginner's tutorial for pytest, a popular testing framework for Python that makes it easy to write and run tests. Code used for this tutorial: github.com/harsh1kumar/learning/tree/master/python/pytest Timestamps: 00:00 - Introduction 00:42 - Setup 02:06 - Write Tests 05:09 - Test Fixtures 08:09 - Parametrize 09:36 - Skip
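The "Write Tests" and "Parametrize" sections boil down to a file like this (the function under test is a made-up example, saved as e.g. `test_math_utils.py` so pytest discovers it):

```python
import pytest

def add(a, b):
    """Trivial function under test."""
    return a + b

def test_add():
    # Plain test: any function named test_* with bare asserts.
    assert add(2, 3) == 5

@pytest.mark.parametrize("a,b,expected", [(1, 1, 2), (0, 5, 5), (-1, 1, 0)])
def test_add_many(a, b, expected):
    # One test function, run once per parameter tuple.
    assert add(a, b) == expected
```

Running `pytest` in that directory collects and executes all four test cases.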
Simple Neural Network using Tensorflow and Keras for Image Recognition | Beginners Tutorial
289 views • 3 years ago
How to create a simple neural network for Image Recognition using Tensorflow and Keras? The tutorial will provide a step-by-step guide for this. Code on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/keras/01_ann_for_image_recognition.ipynb Timestamp: 00:00 - Introduction 00:29 - Data for modelling 03:21 - Model structure 06:14 - Model training 06:43 - Evaluating model 11:...
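The model-structure step can be sketched as a minimal dense network for 28x28 grayscale images (layer sizes are illustrative, and this assumes a MNIST-style input, not necessarily the exact architecture from the notebook):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                          # 28*28 -> 784 vector
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),    # 10 class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on one dummy image: returns a (1, 10) probability vector.
out = model.predict(np.zeros((1, 28, 28)), verbose=0)
```

Training is then `model.fit(x_train, y_train, epochs=...)` followed by `model.evaluate(...)`.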
Ranking Functions in SQL (Rank vs Dense_Rank vs Row_Number) | Comparison with example
471 views • 3 years ago
This video explains the difference between 3 SQL functions - Rank(), Dense_Rank() and Row_Number(). Example shown in the video is for PostgreSQL, but explanation will hold true for all variants of SQL. Timestamp: 00:00 - Introduction 00:55 - Rank() Function 02:17 - Dense_Rank() Function 03:08 - Row_Number() Function
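The difference between the three functions is easiest to see on tied values. This runnable sketch uses Python's built-in SQLite (window functions need SQLite 3.25+), but the query behaves the same in PostgreSQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (name TEXT, score INT)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 90), ("b", 90), ("c", 80)])  # a and b are tied

rows = con.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY score DESC) AS rnk,   -- ties share rank, gaps follow
           DENSE_RANK() OVER (ORDER BY score DESC) AS drnk,  -- ties share rank, no gaps
           ROW_NUMBER() OVER (ORDER BY score DESC) AS rn     -- always unique
    FROM scores
""").fetchall()
print(rows)
```

For the tied rows, RANK gives 1, 1, 3; DENSE_RANK gives 1, 1, 2; ROW_NUMBER gives 1, 2, 3 (the order within a tie is arbitrary).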
XGBoost Model in Python | Tutorial | Machine Learning
33K views • 3 years ago
How to create a classification model using XGBoost in Python? The tutorial will provide a step-by-step guide for this. Problem Statement from Kaggle: www.kaggle.com/c/santander-customer-transaction-prediction/ Code on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/santander_trxn_prediction/07_trxn_pred_xgboost.ipynb Code on Kaggle: www.kaggle.com/harsh1kumar/santander-trxn...
LightGBM Model in Python | Tutorial | Machine Learning
26K views • 3 years ago
How to create a LightGBM classification model in Python? The tutorial will provide a step-by-step guide for this. Problem Statement from Kaggle: www.kaggle.com/c/santander-customer-transaction-prediction/ Code on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/santander_trxn_prediction/06_trxn_pred_lightgbm.ipynb Code on Kaggle: www.kaggle.com/harsh1kumar/santander-trxn-pre...
Gradient Boosting (GBM) in Python using Scikit-Learn | Tutorial | Machine Learning
10K views • 3 years ago
How to create a Gradient Boosting (GBM) classification model in Python using Scikit Learn? The tutorial will provide a step-by-step guide for this. Problem Statement from Kaggle: www.kaggle.com/c/santander-customer-transaction-prediction/ Code on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/santander_trxn_prediction/05_trxn_pred_gbm.ipynb Code on Kaggle: www.kaggle.com/h...
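A minimal Scikit-Learn GBM fit looks like this (synthetic data and default-ish hyperparameters, not the notebook's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Easy synthetic problem: the label is the sign of feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=100,
                                 learning_rate=0.1,
                                 max_depth=3)
gbm.fit(X_tr, y_tr)
accuracy = gbm.score(X_te, y_te)  # mean accuracy on the held-out split
```

`n_estimators`, `learning_rate` and `max_depth` are the usual first knobs to tune.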
Variable Importance, Partial Dependence for Random Forest | Tutorial (Part 3/3)
1.8K views • 3 years ago
This python tutorial will provide a step-by-step guide for getting variable importance, partial dependence and other details of a Random Forest Model. Problem Statement from Kaggle: www.kaggle.com/c/santander-customer-transaction-prediction/ Code on Github: github.com/harsh1kumar/learning/blob/master/machine_learning/santander_trxn_prediction/04_trxn_pred_rf_details.ipynb Code on Kaggle: www.ka...
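Variable importance and partial dependence both come straight from scikit-learn. A sketch on synthetic data where only feature 2 is informative (so its importance should dominate):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 2] > 0).astype(int)  # label depends only on feature 2

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

importances = rf.feature_importances_  # impurity-based, sums to 1.0
print(importances.argmax())            # expect 2, the informative feature

# Partial dependence of the prediction on feature 2
pd_result = partial_dependence(rf, X, features=[2])
```

`PartialDependenceDisplay.from_estimator` produces the corresponding plot.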
Hyper Parameter Tuning for Random Forest in Python | Tutorial (Part 2/3)
5K views • 3 years ago
Hyper Parameter Tuning for Random Forest in Python | Tutorial (Part 2/3)
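The standard scikit-learn approach to this is `GridSearchCV`: cross-validate every combination in a small parameter grid and keep the best. A sketch with an illustrative grid, not the one from the video:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)  # fits 2 * 2 * 3 = 12 models

print(search.best_params_)   # best combination found
best_rf = search.best_estimator_
```

For larger grids, `RandomizedSearchCV` trades exhaustiveness for speed.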
Random Forest in Python using Scikit-Learn | Tutorial (Part 1/3)
2.2K views • 3 years ago
Random Forest in Python using Scikit-Learn | Tutorial (Part 1/3)
Decision Tree in Python using Scikit-Learn | Tutorial | Machine Learning
11K views • 3 years ago
Decision Tree in Python using Scikit-Learn | Tutorial | Machine Learning
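The scikit-learn decision tree workflow fits in a few lines. A sketch using the built-in iris dataset rather than the video's data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# max_depth limits tree growth and keeps the model interpretable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(clf))   # human-readable view of the learned splits
accuracy = clf.score(X, y)
```

`plot_tree` from the same module draws the tree graphically.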
Fast AI without GPU (CPU only) Setup Walk-through
3.1K views • 4 years ago
Fast AI without GPU (CPU only) Setup Walk-through
so clear explanation- thanks harsh - pray you do great in life :))
This guy is a real deal. Thanks bro
Thanks boss
Nice Tutorial to start with ML Flow
Keep it up brother, your videos are clean. I had lost hope in Indian youtubers after all the clickbait. But yours are real content.
To much fast bro
Lovely walkthrough! I really wanted to see the interface before getting started with MLFlow
the shortest yet clearest model walk thru ever. kudos bruh
Thank you sir🙏, this video is very helpful 😊
Hi, how to host the mlflow server in say a redhat vm and then use that url as our set_tracking_uri lets say from a windows vm??
eval_metric throws error, can anyone suggest me the reason?
train() got an unexpected keyword argument 'early_stopping_rounds' i get this error
This parameter seems to have been deprecated in the newer version of lightgbm. You can try using an older version
Hello, been long time no videos, i hope to see more videos from you.
Thank you for your kind words. The next video will be uploaded in 2 to 3 days.
unable to install 'imghdr' it says "Could not find a version that satisfies the requirement imghdr (from versions: none)"
It seems like imghdr is being deprecated. You should look for its alternatives.
This helped!! Thank you so much.
Hii..actually I trained tflite model and apply mlflow on google colab but autolog is not working and also(mlflow.tensorflow.log_model) is not working...how can I do that??
Nice One.
I think im the stupid one ... the video is in detail but i fail to do ... head scratching moment in spyder :(
Ruff over flake
Great demo and walkthrough!
Really nice video and explanation Harsh
Thank you
More videos [like this] that teach optimization of all the parameters in the model, please
Thanks
I cannot overstate the fact that this video is really clear and terrific. Absolutely fantastic effort on your part. Thank you very much for doing this
can you make video for light gbm for prediction?
Why is the prediction outputs not binary [0,1], but continuous values? Does it represent a confidence in something being a 1?
Yes, it can be interpreted as confidence in being 1
Thank you. Well explained! I'm wondering why don't you use a regressor model instead of the classifier model, because what I see are numerical datasets. I wish you can provide/ explain plots that illustrate observations vs. predictions, and the statistical performance indicators such as RMSE, residuals (errors), R2, etc. Lastly, can I use transfer learning method in the random forest method or it is just limited for deep neural networks (freezing weights)? Thank you so much!
Great video thank you!
Just wanted to know whether EDA, feature selection is not needed for XGboost ?
EDA should be done irrespective of the model. Feature selection can also help remove unnecessary complexity in the model. But the benefit of techniques like XGBoost is that they can take in a large number of features and give importance to the relevant ones. I would advise doing a first iteration with all possible features and then removing the features with lower importance, while monitoring model performance metrics.
how to get class predictions instead of probability prediction ?
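For a binary classifier this is just thresholding the predicted probability, usually at 0.5. A minimal sketch with made-up probabilities:

```python
import numpy as np

probs = np.array([0.12, 0.55, 0.93, 0.40])  # model's predicted P(class = 1)
preds = (probs >= 0.5).astype(int)          # threshold at 0.5
print(preds)  # [0 1 1 0]
```

Most scikit-learn-style classifiers (including XGBoost's and LightGBM's wrappers) also do this for you: `predict()` returns hard labels, `predict_proba()` returns the probabilities.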
Thank's for your video, I have questions about "min_samples_split" and "min_samples_leaf" you have divided a thousand (/1000), where did you get it from the number 1000?
Disliking this video because it’s too good and I don’t want others to know abt it 😂😂
in my project only i get 45% in training and 44 in testing. What do you think i can do to get better accuracy please.
This video covers a lot of thing in short time
Sir, your videos or so good and clear. Can you please highlight how evaluation metrics could be applied to test the outcomes of validation from Random forest. Thank you
Nice video! Thank you so much! One pair of doubts, is there a way to download the notebook with outputs from Kaggle? Is it possible to train models like XGBoost with GPU? because the last time I tried there, the debugger suggested that it was only possible with sequential models like neural networks.
The program is too too much time to run 😵 But Thanks to you Sir, for explaining the program and arguments very well.
You can try LightGBM. It may be faster depending on your context. I have a video for it on my channel.
extremely clear, short, and useful !! tks
This is a a very well explained video !
ohh i see thanks man
I am getting the following error NameError Traceback (most recent call last) Input In [6], in <cell line: 11>() 7 print(os.path.join(dirname, filename)) 9 # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" 10 # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session ---> 11 kaggle/input/santander-customer-transaction-prediction/sample_submission.csv() 12 kaggle/input/santander-customer-transaction-prediction/train.csv() 13 kaggle/input/santander-customer-transaction-prediction/test.csv() NameError: name 'kaggle' is not defined
Sir, seems like you have not changed the path to the csv file. You will have to provide the location to your own file
ok...then after that how to create confusin matrix,classification report
why you have not fit the model ?
Thanks man, great explanation ^^
Thank you, Mr. Kumar.
Thank you so much for this work. How could you plot the chart between R-squared and number of estimators from 20-200?
please do not stop making videos.you are too good
Your justification for learning rate is not right.
How do you do it for Multiclass classification?