Cost Function and Loss Function in Data Science | Cost function machine learning | Regression Cost

  • Published on Jan 27, 2025

Comments • 103

  • @kavyanagesh8304 · 10 months ago · +1

    Thank you so much! You're a great teacher, Mr. Aman! 🙏

  • @amalaj4988 · 3 years ago · +13

    Regression losses: L1 loss, L2 loss, Huber loss
    Classification losses: Hinge loss (SVM), Cross-Entropy (Binary CE (sigmoid), Categorical CE (softmax), Sparse Categorical CE)

    • @raghavverma120 · 2 years ago

      Tell me, bro, where do we use softmax?
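
      For illustration only (my own sketch, not from the video, with placeholder variable names): softmax is used for multi-class outputs, and the classification losses listed above can be computed directly in NumPy.

          import numpy as np

          # Hinge loss (SVM): labels in {-1, +1}, scores are raw model outputs
          def hinge_loss(y_true, scores):
              return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

          # Binary cross-entropy on sigmoid probabilities
          def binary_cross_entropy(y_true, p):
              eps = 1e-12
              p = np.clip(p, eps, 1 - eps)
              return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

          # Softmax turns a matrix of multi-class logits into probabilities ...
          def softmax(z):
              z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
              e = np.exp(z)
              return e / e.sum(axis=1, keepdims=True)

          # ... and categorical cross-entropy compares them with one-hot labels
          def categorical_cross_entropy(y_onehot, logits):
              p = softmax(logits)
              return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))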

  • @ajiteshthawait4771 · 2 years ago · +3

    Notes: a cost/loss function is a function that associates a cost with a decision; decisions are made using costs, for example in Google Maps. Loss = difference between the actual and predicted values; cost = sum of all losses. MAE is a cost function for linear regression: MAE is the L1 loss, MSE is the L2 loss. Loss is per observation and cost is for the whole dataset. Our goal is to minimize the cost function. These are the loss functions used in regression; any algorithm which uses optimization uses a loss function.
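
    A quick sketch of the loss-vs-cost distinction described in these notes (placeholder data, not from the video):

        import numpy as np

        y_actual = np.array([3.0, 5.0, 7.0])
        y_pred   = np.array([2.5, 5.5, 9.0])

        losses_l1 = np.abs(y_actual - y_pred)   # per-observation L1 losses
        losses_l2 = (y_actual - y_pred) ** 2    # per-observation L2 losses

        mae = losses_l1.mean()                  # cost over the whole dataset (MAE)
        mse = losses_l2.mean()                  # cost over the whole dataset (MSE)
        print(losses_l1, mae)
        print(losses_l2, mse)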

  • @divyankverma · 2 years ago · +4

    In 10 minutes you taught me more than my lecturer did in 4 months

    • @micahmainala9357 · 1 year ago

      🤣🤣😂😂😂

    • @Srivastd06 · 1 year ago

      😂😂🤣🤣🤣🤣

    • @ug1880 · 3 months ago · +1

      Dimwit 😂

  • @TehreemAyesha · 3 years ago · +1

    Please do not stop making the videos. You are doing great explaining these topics in simple terms. Thank you!
    Would love to learn entire ML from you.

  • @memescompilation6477 · 3 years ago · +9

    Perfect. This is how I want someone to teach.
    Please cover every aspect of machine learning.

  • @digvijaydesai5642 · 9 months ago

    Thank you Aman sir!! You are the best Data Science teacher.

  • @usmansaeed678 · 2 years ago

    Keep up the good work.

  • @sheikhshah2593 · 3 years ago

    Great 👍 Finished the video.

  • @findingvaluetoday · 1 year ago

    Your explanations are very intuitive and easy to understand. Thank you very much for running this channel - it is helping me a ton in learning data science!

    • @UnfoldDataScience · 1 year ago

      You're very welcome! Please share the channel with friends as well. Thanks again.

  • @vikrantchouhan9908 · 3 years ago · +1

    Thanks for the gentle and uncomplicated explanation.

  • @rds9815 · 1 year ago

    Very good one. I was watching another video, provided by some institute, and was unable to understand it, but after watching your video the doubts are cleared.
    Please keep making videos.

  • @denzelomondi6421 · 2 years ago

    Man you are too good at this thing

  • @AnkitKumar-ss7sx · 3 years ago · +1

    Sir, in some explanations I've seen they use 1/(2n) for the mean, and the formula is written as (1/2n) Σ_{i=1}^{n} (h(x_i) − y_i)². Please explain.
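
    For context (my own note, not from the video, assuming a linear hypothesis h): the 1/2 is just a convention that cancels the 2 produced by differentiating the square, so it does not change where the minimum is:

        J(\theta) = \frac{1}{2n}\sum_{i=1}^{n}\bigl(h_\theta(x_i) - y_i\bigr)^2
        \qquad\Rightarrow\qquad
        \frac{\partial J}{\partial \theta_j} = \frac{1}{n}\sum_{i=1}^{n}\bigl(h_\theta(x_i) - y_i\bigr)\,x_{ij}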

  • @bhartichambyal6554 · 3 years ago · +1

    Why do we use gradient descent? Sklearn can automatically find the best-fit line for our data, so what is the purpose of gradient descent?

    • @UnfoldDataScience · 3 years ago · +1

      Gradient descent is a generic method for optimizing parameters and is used in many ways. It is not tied to only one algorithm as such.

    • @bhartichambyal6554 · 3 years ago · +1

      @UnfoldDataScience When we use simple linear regression or multiple linear regression, does it use OLS by default or gradient descent to find the best-fit line? Please answer my question.

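      For illustration (my own sketch, not from the video): in scikit-learn, LinearRegression solves the least-squares problem in closed form (OLS-style), while SGDRegressor fits the same kind of model with a gradient-descent-based optimizer. The data below is made up for the example.

          import numpy as np
          from sklearn.linear_model import LinearRegression, SGDRegressor

          # Synthetic data: y = 150 + 0.5 * x plus a little noise
          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 1))
          y = 150 + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=200)

          ols = LinearRegression().fit(X, y)            # closed-form least squares
          sgd = SGDRegressor(max_iter=2000).fit(X, y)   # iterative, gradient-descent-based

          print(ols.intercept_, ols.coef_)              # both should recover roughly the same line
          print(sgd.intercept_, sgd.coef_)
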
  • @himiee · 3 years ago · +1

    Perfectly explained... Got it in one view 🙌🙌

  • @rosemarydara1025 · 2 years ago

    Superb explanation. Even a layman can understand.

  • @vidyasagarpasala5218 · 3 years ago

    Very nice explanation

  • @nabanitapaul2331 · 1 year ago

    Nice explanation

  • @paramesh1629 · 3 years ago

    Top-class explanation

  • @askpioneer · 1 year ago

    Hello Aman! Great video; in simple language it's clear. Can you please make a video for the classification losses, since as you said they will be different, like cross entropy or something else?

  • @dataflex4440 · 3 years ago · +1

    Brilliant, dude! Superb explanation. This channel needs to grow.

  • @appagi · 1 year ago

    Very nicely explained Sir!!

  • @carlmemes9763 · 3 years ago · +1

    Thanks for this video sir👍❤️

  • @ArpitYadav-ws5xe · 3 years ago

    Excellent

  • @Shivakumar-ph6gk · 2 years ago

    Well explained. Thank you.

  • @osho2810 · 2 years ago

    Great Sir...

  • @fazilhabeeth · 2 years ago

    Your teaching is excellent, bro.
    Others use lots of concepts to explain one concept, which confuses us.
    But your method is very simple and can be understood by all.
    👌👌👌

    • @UnfoldDataScience · 2 years ago · +1

      Thanks Fazil. Please share with others as well. Happy learning 😊

  • @archanamohapatra7589 · 3 years ago · +1

    Big Thanks, Nicely explained 👍

  • @fishx1580 · 2 years ago

    You're the best.. ;)

  • @JP-fi1bz · 3 years ago · +2

    Hello sir, when are you going to have the next Q&A session, and at what time?

    • @UnfoldDataScience · 3 years ago

      We'll have one this weekend; I'll announce it in the community section of my channel.

  • @megalaramu · 3 years ago

    Hi Aman, I do have a question. We use MAE/MSE/RMSE for regression problems even if it's decision trees. But when it comes to classification we use log loss for logistic regression, hinge loss for SVM, etc. For decision trees is there anything separate, based on entropy or Gini impurity? Also, NB just acts like a lookup table, right? How about there?

  • @bangarrajumuppidu8354 · 3 years ago · +1

    great explanation sir !!

  • @muhammadabuzar7910 · 3 years ago

    This is such a perfect way of explaining it.
    Thank you so very much.

  • @coreyhartman4510 · 3 years ago

    great explanation.

  • @akhileshgandhe5934 · 3 years ago

    Do more videos👍

  • @krishnab6444 · 2 years ago

    thank u sir!

  • @Maryam_Qureshi · 3 years ago

    Thank you. It was helpful

  • @RamanKumar-ss2ro · 3 years ago · +1

    Thank you.

  • @hardikvegad3508 · 3 years ago · +1

    Sir, can we relate this |w|² (L2) and |w| (L1) to ridge and lasso?

    • @UnfoldDataScience · 3 years ago · +2

      We will cover lasso and ridge in detail in another video.
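
      For reference (my own note, not covered in this video): yes, that is exactly the connection. Ridge adds an L2 penalty on the weights to the regression cost and lasso adds an L1 penalty:

          J_{\text{ridge}}(\beta) = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2 + \lambda\lVert\beta\rVert_2^2
          \qquad
          J_{\text{lasso}}(\beta) = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2 + \lambda\lVert\beta\rVert_1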

  • @lemonbitter7641 · 3 years ago

    Sir, I am confused about what the cross-validation score tells us versus the loss function in a classification model.

    • @UnfoldDataScience · 3 years ago

      Cross-validation is a different concept; understand it here:
      th-cam.com/video/rPlBijVFw7k/w-d-xo.html

    • @lemonbitter7641 · 3 years ago

      @UnfoldDataScience Thanks for helping me out, sir.

  • @_shikh4r_ · 3 years ago

    Nicely explained, Subscribed 👍

  • @balapranav5364 · 3 years ago · +1

    I have one doubt, sir: when to use MAE, when to use MSE, and when to use RMSE, please?

    • @UnfoldDataScience · 3 years ago · +2

      If your data has outliers, use MAE, not the squared ones (RMSE, MSE).

    • @abrahammathew8698 · 3 years ago

      @UnfoldDataScience Sir, is it mean absolute deviation or median absolute deviation, since the mean can be impacted by an outlier? Thanks for the great video.

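      A quick NumPy illustration of the point above (my own sketch, with made-up numbers): a single outlier inflates MSE far more than MAE, because the error is squared.

          import numpy as np

          y_true = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
          y_pred = np.array([10.5, 11.5, 11.0, 12.5, 12.0])

          # Same predictions, but the last observation becomes an outlier
          y_true_outlier = y_true.copy()
          y_true_outlier[-1] = 50.0

          mae = lambda a, b: np.mean(np.abs(a - b))
          mse = lambda a, b: np.mean((a - b) ** 2)

          print(mae(y_true, y_pred), mse(y_true, y_pred))                  # both small
          print(mae(y_true_outlier, y_pred), mse(y_true_outlier, y_pred))  # MSE explodes, MAE grows modestly
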
  • @sandipansarkar9211 · 3 years ago

    finished watching

  • @mosama22 · 3 years ago

    Thank you Aman, this was really beautiful :-)

  • @faiemveg7350 · 3 years ago

    How can you take half of 68 here?
    I think in the equation y = mx + c we can use beta0 = 150 and beta1 = 0.5; then the equation should be y = 150 + 0.5*176??

    • @janakiraam1 · 3 years ago

      @faiemveg7350 Here the independent variable is height and the dependent variable is weight, so the equation is y = 150 + 0.5*68.
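
      Working out the numbers in that reply: y = 150 + 0.5 × 68 = 150 + 34 = 184.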

  • @Ankitsharma-vo6sh · 3 years ago · +1

    thanks

    • @UnfoldDataScience · 3 years ago

      Thank you.

    • @Ankitsharma-vo6sh · 3 years ago

      @UnfoldDataScience Bro, to be honest there is nothing to thank me for, but I have a request: try some end-to-end projects. I really want to see what your approach to problems is.

  • @hanselcreado3570 · 3 years ago

    How do you pick the values for beta0 and beta1?

  • @letslearndatasciencetogeth479 · 3 years ago · +1

    Sir, please explain cross entropy.

  • @bintu8962 · 2 years ago

    Sir, how are you applying the beta values of 150 and 0.5?

    • @UnfoldDataScience · 2 years ago

      There are recommendations from research to be followed.

  • @azrflourish9032 · 3 years ago · +1

    So,
    loss function: y_actual - y_pred
    cost function:
    L1 = MAE: (1/n) Σ |y_actual - y_pred|
    L2 = MSE: (1/n) Σ (y_actual - y_pred)²
    Did I get it correctly???

    • @raghavverma120 · 2 years ago

      Confusing... first he called MAE a cost function and later he called the same thing a loss function for linear regression.

    • @raghavverma120 · 2 years ago

      In short: loss function = cost function, and the terms can be used interchangeably.

    • @raghavverma120 · 2 years ago

      The cost function is calculated over the entire data set, and the loss for one training instance.

  • @arslanjutt4282 · 1 year ago

    I don't understand the difference between the cost and the loss function, because MSE is a cost function as well as a loss.

  • @CRICKETLOVER_10 · 3 years ago

    I am a B.Com pass-out; can I become a data scientist?

  • @neekhilsingh2114 · 2 years ago

    Sir, please share your mail ID for my CV review…. The Unfold Data Science mail ID does not work. Please share.

  • @sandipansarkar9211 · 3 years ago · +1

    finished watching

  • @pranjalgupta9427 · 3 years ago · +1

    Thanks 🙏