Actual session starts @11:30
Thanks
What teaching, sir... I swear I enjoyed it... I have become a huge fan of you. If this lecture were a movie, you would be the real KRISH, Naik sir.
Thanks
Thank you for your content. You are changing people's lives!!!
Krish, your detailed explanation of these techniques is making machine learning easy.
Thank you, sir, for this session's great explanation, although it is a complex topic.
really well explained.
Great Explanation sir❤️
Thank you Krish..
Excellent session
Thank you very much for this course.
Fabulous explanation, sir!! Thank you very much!! All my doubts are cleared.
finished watching
Could you please cover a practical case of ensemble learning? How an ensemble model is created and trained.
so helpful god bless you sir
Hey, can you do a lesson on forecasting time series when the datetime includes hours as well? For energy consumption, for example.
Good Session
Kudos Krish 🙏
Sir, requesting you to disclose which topic you will be covering in the live session, so that we can join live.
I really appreciate your videos; I find them very easy to understand. I want to buy the lifetime membership, especially to learn statistics and data science.
Do you teach all the courses in statistics and data science yourself, or is it your teammates?
Sir, kindly also make videos on Deep Learning and NLP, please.
Also, please make Deep Learning and NLP projects; it would be a big help.
The output of AdaBoost classification is obtained by summing the weights of the stumps that predict 1 and those that predict 0, and the class with the larger total is selected; it's not done by majority vote. For regression we may also have to consider weights, but I have no idea about that.
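To make the weighted-vote point concrete, here is a tiny sketch with made-up stump outputs and "amount of say" (alpha) values; all the numbers below are hypothetical:

```python
import numpy as np

# Hypothetical predictions of 3 trained stumps for one sample (class 0 or 1),
# and each stump's "amount of say" (alpha) learned during training.
stump_predictions = np.array([1, 0, 0])
stump_alphas = np.array([1.5, 0.4, 0.3])

# Sum the alphas of the stumps voting for each class, not the raw counts.
weight_for_1 = stump_alphas[stump_predictions == 1].sum()  # 1.5
weight_for_0 = stump_alphas[stump_predictions == 0].sum()  # 0.4 + 0.3 = 0.7

final_class = 1 if weight_for_1 > weight_for_0 else 0
print(final_class)  # 1
```

Note that a plain majority vote here would pick class 0 (two stumps vs one), but the weighted sum picks class 1, which is exactly the distinction the comment makes.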
thanks sir
Is the 2nd stump generated from feature 2 in the AdaBoost example [1:01:00]?
super sir
What can go wrong if we do normalization in the case of Random Forest? (36:20 suggests that we shouldn't be doing normalization in Random Forest.)
The values might look too small, and it would probably be hard to interpret the decision tree after visualization.
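Nothing should actually go wrong numerically: tree splits only compare a feature against a threshold, so monotonic scaling shifts the thresholds without changing which samples land on which side. A quick sketch to check this (assuming scikit-learn is available; the dataset choice is just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_scaled = MinMaxScaler().fit_transform(X)  # min-max normalization

# Fit the same tree on raw and on normalized features.
raw = DecisionTreeClassifier(random_state=0).fit(X, y)
scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

# Predictions agree; only the split thresholds differ, which (as noted
# above) makes the visualized tree harder to read in original units.
print((raw.predict(X) == scaled.predict(X_scaled)).all())
```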
Hey Krish, really appreciable and informative doubt-clearing videos. Thanks for them; loved the sessions. Please let me know the current cost of the lifetime access subscription offer.
40:00
@Krish What is the headphone model you're using ???
Sir, please make a video on the YOLO algorithm.
Hey Krish, great session. You were thorough with the topics. Btw, I was interested in the current lifetime subscription offer (7080). Is it for all the different domains on oneneuron? Thank you in advance.
Like, is that a subscription-based price, or is it lifetime access with a one-time payment?
Currently, the lifetime access with a one-time payment offer is going on.
That's really great, thank you once again for such amazing sessions, I'll be enrolling soon :)
I have a doubt: during cross-validation, where we choose which model to use, I get some accuracy, but after hyperparameter tuning the accuracy jumps by 2%.
Is this normal?
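That is normal: if the tuning grid contains the settings you started with and uses the same CV splits, the tuned score can only match or beat the baseline. A small sketch assuming scikit-learn (the model, dataset, and grid are just illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_iris(return_X_y=True)
base = RandomForestClassifier(random_state=0)

# Step 1: plain cross-validation score used for model selection.
baseline = cross_val_score(base, X, y, cv=5).mean()

# Step 2: hyperparameter tuning over the same 5-fold splits. The grid
# includes the default max_depth=None, so best_score_ >= baseline.
grid = GridSearchCV(base, {"max_depth": [None, 2, 4]}, cv=5).fit(X, y)
print(baseline, grid.best_score_)
```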
An agriculture engineer can have a good career in data science.
I really like the way you explain such complex problems in simple terms. Also, in previous videos I learned the calculations for Gini and entropy, which helped with further understanding. For the last couple of days I have been trying to understand AdaBoost, Gradient Boost, and XGBoost. I understand the first two, but understanding XGBoost was a bit of a challenge. Is it possible to explain why XGBoost is so successful through some exercise using all kinds of evaluation metrics?
What if the outputs of the decision trees are balanced (a tie)!?
Hi everyone,
I am unable to open the community session link, as it shows as invalid. Can anyone send the notes?
👍👍👍👍👍👍
💕❤️❤️💕
I would like to point out 2 corrections:
1. I read about AdaBoost, and it looks like the buckets are actually not created; only the updated weights are passed to the next step, along with all the samples.
2. A majority vote doesn't happen in AdaBoost; it works on a weighted sum. (I got rejected in an interview because I said majority vote.)
Please verify this.
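For point 1, the usual weight-update form of one AdaBoost round can be sketched like this: no buckets or resampled dataset, every sample goes on to the next stump with an updated weight (toy values, not from the video):

```python
import numpy as np

w = np.full(5, 1 / 5)  # initial uniform sample weights
misclassified = np.array([False, False, True, False, False])

total_error = w[misclassified].sum()                   # 0.2
alpha = 0.5 * np.log((1 - total_error) / total_error)  # stump's "say"

# Increase weights of misclassified samples, decrease the rest,
# then renormalize so the weights sum to 1 for the next stump.
w = np.where(misclassified, w * np.exp(alpha), w * np.exp(-alpha))
w /= w.sum()
print(w)  # the misclassified sample's weight grows; the rest shrink
```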
Should the random forest error be less than the bagging error?
I would like to join, but the payment link is not working for me, as I only own a RuPay card and it is not supported.
"Everybody clear till here?"
Krish is that lifetime access still there?
Hi Krish, can you please explain the difference between Machine Learning and Deep Learning?
He already explained that in Day 1 live session…
I am still confused about low bias and high variance.
Sir, when a decision tree is overfitted, it means that the tree is more biased towards the training dataset, right?
So shouldn't that mean high bias?
Can you please explain this?
High bias means: your model is highly biased towards one or two features and gives predictions based on those, neglecting the other features. It means your model didn't learn all the features well, so you can say it's an underfitted model.
High variance means: your model learns all the features too well, even the outliers and noise; this may lead to overfitting.
High bias and low variance: importance is given to one or two features while the others are ignored (high bias), and nothing else gets any importance (low variance). This is the underfitted condition.
Low bias and high variance: importance is given to every feature and even the noise (high variance), and the model is not biased towards one or two features (low bias). Everything is used for prediction, so it's the overfitting condition.
Hope you got it 😊... that's why we always look for low bias and low variance.
In short, to keep in mind:
when you see high bias, it's underfitting;
when you see high variance, it's overfitting.
Let me explain with errors.
When your model predicts well on the training data (say you get no error, or 100% accuracy, on the training set), it may be because your model learned everything too well (high variance); that is overfitting.
When your model doesn't give much accuracy even on the training data, it means it didn't learn the training data well, has low variance, and may have become biased towards some features; that's underfitting.
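The two error cases described above can be made concrete with decision trees (a sketch assuming scikit-learn; the dataset is just for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# A depth-1 stump underfits (high bias): limited even on training data.
shallow = DecisionTreeClassifier(max_depth=1, random_state=0).fit(Xtr, ytr)
# An unlimited tree memorizes the training data (high variance).
deep = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)

print("high bias :", shallow.score(Xtr, ytr), shallow.score(Xte, yte))
print("high var  :", deep.score(Xtr, ytr), deep.score(Xte, yte))
# The deep tree hits 100% training accuracy but drops on the test set;
# the stump's low training accuracy is the underfitting signal.
```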
You must be thinking about high bias and high variance.
See, this is practically impossible (in my opinion 😉), because you can't be biased towards one thing and also give importance to everything.
You can't say you love your girlfriend and also all the girls of this world 😂😂😂
Basically, you are highly biased towards your girlfriend: you are underfitted, you have low variance, you don't know other tastes 😂, so your prediction of girls' behaviour will be based on the one girl you know (your girlfriend).
Let's assume you are predicting with training data (meaning you are supposed to predict your own girlfriend's behaviour): if you fail to guess correctly, it means you don't even know your own girlfriend; you are too underfitted 😂
If you can correctly guess your girlfriend's behaviour and any other random girl's behaviour, then you are a true Sigma male (the best model that we are looking for) 😎.
Still confused about being highly biased?
Then let me know, I will give you another good example.
I am unable to access the NOTES of this on iNeuron. Please help 🙏🙏
Click on "Enroll Now" (it is free of cost) and then proceed; you will get them there.
@@259_parthpatidar9 I am not getting it; can you please help?
how to download notes?
Krish Naik ji, you are copying and pasting the AdaBoost; there is no logic in your AdaBoost explanation, please check it.
He is just a bullshit guy, making money from teaching people just an overview, no fundamentals in maths.