Finally, we're into the model. Your explanation is clearer to me than even Andrew Ng's. I am not kidding. Thank you Nitish. :)
Yes, I too feel the same...
real
@@iamsomeone54 I came here precisely because I was not feeling comfortable with Andrew Ng's course
Can anyone differentiate between the loss function and the cost function?
@@SumanSadhukhan-md4dq The loss function is the error expression for a single data point. When we sum up the errors over all the data points, we get the cost function. So, basically, the cost function is the sum of the loss functions over all the data points. Hope this makes sense to you.
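A minimal sketch of the distinction in Python, assuming the squared-error loss from the video (the numbers are made up for illustration):

import numpy as np

y = np.array([3.0, 5.0, 7.0])      # actual values
y_hat = np.array([2.5, 5.5, 6.0])  # model predictions

loss = (y - y_hat) ** 2            # loss: one error value per data point
cost = loss.sum()                  # cost: losses aggregated over all points
print(loss, cost)                  # [0.25 0.25 1.  ] 1.5

Dividing by the number of points instead of summing gives the mean squared error, which is the form many libraries report.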
You deserve more respect, bro... I really admire you... No one gives interpretations and inferences like you.
And please make videos on Neural Networks and NLP.
You are the best mentor/teacher I have ever come across for data science...
It takes patience, endurance and talent to be a teacher, but you make it look so easy day after day. I hope you know just how much we all appreciate you.
Still halfway through the video, but I feel it necessary to comment here. This is by far the best explanation of the loss function and the intuition leading up to it. I have enrolled in paid courses to learn this, and let me tell you, the quality of explanation in this 53-minute video is better than any other resource out there. Kudos to you... keep shining Nitish!!
Awesome, man... India is moving towards AI/ML, and in this journey you are a blessing... Anybody can learn AI/ML from your videos...
Glad you liked them
Dear Nitish, you are the best DS mentor. Thanks for creating this channel for those who seek jobs in DS.
15:54 Actually it should be yi^, because that line has been drawn by the model, so it will be a predicted value, right? And the point above should be yi. I think you have said it the other way around. Could you please reply to this? All your sessions are awesome. Thank you so much for such a great explanation.
Hi, I also observed the same thing; I was checking for any existing comments and found this.
I think he just accidentally marked y as y hat on the graph, but everything else is correct, because he referred to y hat as the value predicted by the model and y as the actual value. So in the formula it makes sense that we're subtracting the predicted value from the actual value.
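A tiny sketch of the convention being debated, with made-up numbers: yi is the observed point, yi^ lies on the fitted line, and the residual subtracts predicted from actual.

import numpy as np

m, b = 2.0, 1.0                  # assumed slope and intercept
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.5, 4.5, 7.5])    # actual (observed) values
y_hat = m * x + b                # predicted values, lying on the line
d = y - y_hat                    # residual d_i = y_i - y_hat_i
print(d)                         # [ 0.5 -0.5  0.5]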
I have never seen a teacher who teaches maths so easily. I mean, hats off to you sir. Every student deserves the kind of teaching you give.
As of today this video has 8k views, but I am sure that within the next year it will cross 1 lakh views. This is the best explanation of the cost function derivation.
You are amazing!! Just the way we want lectures to be!! When you take care of uppercase and lowercase characters while writing terms like X_train and y_train, it reflects your in-depth understanding of the subject and the notation one should follow. You will reach great heights.
Thank you sir for all this beautiful content. There is a small mistake at 16:00: the predicted and actual values are labelled incorrectly.
Best tutor for ML on YouTube. No one teaches with this much depth.
I tried to learn from many places but was not able to learn it well, even when I tried Krish Naik. I love the way you teach, bro; it's awesome to learn from you.
You are a boon for us. As a token of gratitude, I don't skip the ads in your videos.
Hats off to you sir. Apart from immense knowledge, your patience and ability to simplify abstract topics is outstanding. You are a gifted teacher. Thank you is not enough.
I am from a Mathematics & Computer background, but this is the first time I have loved mathematics so much. The explanation can't get better than this!!
Nitish Sir is explaining linear regression so brilliantly! The clarity and depth of his explanation are absolutely top-notch. It's amazing how he makes such a complex topic feel so simple and intuitive. Thank you, Sir, for sharing your knowledge in such an impactful way.
Finally I found the best teacher, who will help me in my ML journey.
Sir, you have written (y hat) the wrong way around: (y hat) is marked as the actual value and (yi) as what the model predicted. You are saying it correctly but perhaps writing it the other way: (yi) should be above and (y^) below, because the point below is what the model predicts, so the lower one should be (y^).
And di = (yi - y^), with yi above and y^ below. 15:30
Please reply.
Thank god someone noticed; I was so confused T_T
@@niketasengar9191 Yeah, I still haven't found a resolution..! Did you?
At 26:48, shouldn't xi also be 0, since the value of m is 0?
Awesome video. I couldn't get such a detailed explanation even from the great YouTube data science teachers.
Loved your explanation sir; really got a taste of calculus after so long.
Best explanation that exists, hands down!
Your explanations are awesome. I have never learned anything as simply as you explained it; you made complex things very simple for me. You are awesome.
@ 16:00 there is a small mistake. If the predicted value is represented with the hat symbol and the actual value without it, then y hat must be below and y must be above on the y-axis.
The most underrated data science channel on YouTube
I love the way you explain difficult topics like they're not a big deal. We can't get an explanation this deep even in paid courses, but you give this beautiful content for free.
i = 1
while i > 0:
    print("Thank you very much Nitish Sir!")
I rarely subscribe to any channel on here, but I subscribed to yours. Keep up the good work through the advanced level and beyond.
Very good explanation of the linear regression algorithm. You covered the math behind it. I never thought that I could learn how the algorithm works under the hood. Thanks for the explanation.
31:02 If the first derivative is zero, it can be either a maximum or a minimum. So when we solve our equation, how do we confirm whether it is a maximum or a minimum?
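One standard check, as a sketch (not shown in the video): for the squared-error cost E(b) = sum((yi - m*xi - b)^2), the second derivative d2E/db2 = 2n > 0, so the stationary point must be a minimum. A quick numeric illustration with made-up values:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 5.0])
m = 1.5                          # hold m fixed and vary b
bs = np.linspace(-2.0, 4.0, 7)
E = [((y - m * x - b) ** 2).sum() for b in bs]
# E traces a parabola in b that opens upward, so the point where
# the first derivative vanishes is a minimum, not a maximum.
print(E)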
Machine learning is fun right now, though time-consuming too. All thanks to you, sir.
The way you have cleared up my concepts has given me more confidence to talk about ML models and their fine-tuning. A really intuitive understanding.
At 16:20, isn't yi hat the prediction that lies on the line, not the actual y?
true
16:00 I feel like he said it the opposite way. Am I wrong, or did he get confused?
Really great service to society. God bless you; may you get everything in life.
16:00 In my view the predicted value should be on the line, and the actual value is the one we plotted ourselves. So the dot drawn on the line at (x, y) should be y^, and the actual point above the line point should be yi, because that is the actual data while the prediction lies on the line. Only then does yi - y^ (actual value - predicted value) give the equation. I think at 15:55 you mistakenly wrote y^ for the actual point.
Yup, I also think the same
Sir, you are really... super!
Respect ++ from NIT Raipur
Update: at 16:22 the predicted-value and actual-value notation for y should be the opposite.
Since the difference gets squared it won't make any impact, i.e. (2-5)² = 9 and (5-2)² = 9.
@12:35 Can anyone explain what "penalize" means here?
Great explanation. Concepts clear. The mathematical intuition was very good.
Such an amazing teacher, with such incredible content and explanation.
Very grateful to you Nitish!
A very big thank you to you.
After getting the value of m at 39:27, I think (xi - x̄) can be cancelled?
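For what it's worth, a sketch assuming the standard least-squares formula derived around that timestamp: the (xi - x̄) factors sit inside two different sums, so they cannot be cancelled term by term.

import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([2.0, 3.0, 7.0])
x_bar, y_bar = x.mean(), y.mean()

# closed-form slope and intercept from the least-squares derivation
m = ((x - x_bar) * (y - y_bar)).sum() / ((x - x_bar) ** 2).sum()
b = y_bar - m * x_bar

# "Cancelling" (x - x_bar) would leave sum(y - y_bar) / sum(x - x_bar),
# which is always 0/0 -- the sums do not factor that way.
print(m, b)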
Sir, you are such a genius; I have never seen anyone like you... 🙇
Woww...! Satisfied, with full clarity. Thank you sir.
I think you deserve at least 50M subscribers!
This playlist is a blessing. Thank you Sir🙏
Awesome videos. You are the best teacher for data science.
SPEECHLESS!!!! Watching your videos is the best decision I've ever made.
Clear and concise explanation!!
Would it be possible to create a lecture series on in-depth Python? I have checked the one you uploaded, but it has some missing lectures, and in some of them the board is not visible. Thanks in advance!
You're helping so many lives; God bless you!!
One of the best explanations I have seen.
Thanks a lot sir😀
Loved it!!! No one can explain like you sir! ❤❤❤❤
No one can explain it more simply than this.
I took one course which has over 1M students, and still I find that course rubbish; they did not go deep at all. And here is this guy with just 220k subscribers (at the time of commenting) who has god-level teaching skills and goes very deep into the what and the how. You deserve more than what you are getting. Hopefully people will find this channel and you will get more recognition for what you are doing.
You are just amazing 🔥
You deserve a million views.
Wow...
I got full clarity now😊
You deserve a lot of respect. Thank you for the effort!!!!!!
At 19:11 you have taken y - y^, and y^ is mx + b; so why have you written (y - mx - b)? Why is there a minus sign between mx and b?
The minus sign distributes over (mx + b): y - (mx + b) = y - mx - b.
No words to say after watching your videos; just love you, brother...
Brother, what a superb teaching style.
Hidden gem on YouTube
You are the best ever, Nitish bhai; hats off to you.
What a great explanation 👏🔥
Hats off 🫡🙇♂️
Thanks sir for the video🔥🔥🔥🔥🔥🔥🔥🔥🔥 Hats off to you
Your course is really great; can you cover time series as well?
That was exceptionally good... Thank you for this amazing explainer
Great learning, bro; I really enjoyed it. Love you, brother.
It's January 2025 and your playlist is still a blessing for many of us. Thanks, Sir 🙇
Amazing Sir !!!!
Love your videos
That was a really lovely video, sir. Thank you.
You are great, bro; what a way of teaching, incredible.
No doubt, knowledge has no boundaries; lots of love from Pakistan ❣
Brother, you nailed the explanation; best tutorial.
Where is your Patreon? You deserve lots of love and respect. Thank you for everything :) GBU
Selfless service. Once I get a placement, I will surely do something to grow this channel.
Clean and crisp Explanation
you have created a gold mine...
Best content in ML space
Best Teacher Ever!
Thank You Sir.
the way you teach is awesome
That's what we call learning algorithms from scratch. Sir, can you tell us which book you preferred for learning this? Just love your content.
Will soon be uploading a video on this topic.
lovely explanation
The best Explanation ever!
Sir, your videos are amazing.
You are the best... really amazing.
Sir ji, you are great!! :)
Will you be working on the SGD regressor in upcoming videos?
Great video, clear explanation.
Awesome sir ji, you are great!
Man, you won my heart ❣️🔥
Thanks sir for this explanation on linear regression
15:50 to 16:00... is it correct? Actually, y pred and yi are interchanged on the y-axis line.
Next-level explanation... thanks for sharing 🫡
Nice session
Wish I could like the video twice!
Thank you so much sir 🙏🙏🙏
Extremely good✌✌
SUCH A GOOD VIDEO
All doubts cleared, thanks bro.