Linear Regression

  • Published Sep 24, 2024

Comments • 102

  • @Nikhil-lq1kb
    @Nikhil-lq1kb 2 years ago +30

    Error at 15:32
    Multiple linear regression: Y = B0 + B1*x1 + B2*x2 + ... + Bp*xp + E
    Polynomial regression: Y = B0 + B1*x + B2*x^2 + ... + Bp*x^p + E
    (A minimal code sketch contrasting the two forms follows this thread.)

    • @kasyapvelampalli2811
      @kasyapvelampalli2811 1 year ago +3

      Right... I was confused here too! Linear regression must always have degree 1, as opposed to what was taught in the lecture; the equation cannot have a polynomial degree of 'p'.

    • @ram-pc4wk
      @ram-pc4wk 1 year ago

      No, linearity refers to the coefficients on the x terms in this case, not to the x terms themselves.

    • @narmadaa2106
      @narmadaa2106 8 months ago

      Yes, you're right.
      It's polynomial regression.

    • @narmadaa2106
      @narmadaa2106 8 months ago +1

      If the degree of x is more than 1, it represents non-linearity.
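
    A minimal numpy sketch of the contrast in the correction above, on made-up data (the variable names and numbers are illustrative only). Both designs are linear in the coefficients, so the same least-squares solve fits either one:

        import numpy as np

        # Made-up data: n samples, two predictors x1 and x2.
        rng = np.random.default_rng(0)
        n = 100
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(scale=0.1, size=n)

        # Multiple linear regression, Y = B0 + B1*x1 + B2*x2 + E:
        # one column per distinct predictor, plus an intercept column.
        X_multi = np.column_stack([np.ones(n), x1, x2])
        b, *_ = np.linalg.lstsq(X_multi, y, rcond=None)
        print(b)  # approximately [1, 2, -3]

        # Polynomial regression, Y = B0 + B1*x + B2*x^2 + E:
        # the columns are powers of ONE variable; the solver is unchanged.
        X_poly = np.column_stack([np.ones(n), x1, x1 ** 2])
        b_poly, *_ = np.linalg.lstsq(X_poly, y, rcond=None)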

  • @Creative_arts_center
    @Creative_arts_center 2 years ago +12

    The best professor in machine learning. I like her teaching; I have followed her since 2010 and collected her lectures on CDs from 2010 onward. I like her very much.

  • @mayanksj
    @mayanksj 6 years ago +13

    Machine Learning by Prof. Sudeshna Sarkar
    Basics
    1. Foundations of Machine Learning (th-cam.com/video/BRMS3T11Cdw/w-d-xo.html)
    2. Different Types of Learning (th-cam.com/video/EWmCkVfPnJ8/w-d-xo.html)
    3. Hypothesis Space and Inductive Bias (th-cam.com/video/dYMCwxgl3vk/w-d-xo.html)
    4. Evaluation and Cross-Validation (th-cam.com/video/nYCAH8b5AQ0/w-d-xo.html)
    5. Linear Regression (th-cam.com/video/8PJ24SrQqy8/w-d-xo.html)
    6. Introduction to Decision Trees (th-cam.com/video/FuJVLsZYkuE/w-d-xo.html)
    7. Learning Decision Trees (th-cam.com/video/7SSAA1CE8Ng/w-d-xo.html)
    8. Overfitting (th-cam.com/video/y6SpA2Wuyt8/w-d-xo.html)
    9. Python Exercise on Decision Tree and Linear Regression (th-cam.com/video/lIBPIhB02_8/w-d-xo.html)
    Recommendations and Similarity
    10. k-Nearest Neighbours (th-cam.com/video/PNglugooJUQ/w-d-xo.html)
    11. Feature Selection (th-cam.com/video/KTzXVnRlnw4/w-d-xo.html)
    12. Feature Extraction (th-cam.com/video/FwbXHY8KCUw/w-d-xo.html)
    13. Collaborative Filtering (th-cam.com/video/RVJV8VGa1ZY/w-d-xo.html)
    14. Python Exercise on kNN and PCA (th-cam.com/video/40B8D9OWUf0/w-d-xo.html)
    Bayes
    16. Bayesian Learning (th-cam.com/video/E3l26bTdtxI/w-d-xo.html)
    17. Naive Bayes (th-cam.com/video/5WCkrDI7VCs/w-d-xo.html)
    18. Bayesian Network (th-cam.com/video/480a_2jRdK0/w-d-xo.html)
    19. Python Exercise on Naive Bayes (th-cam.com/video/XkU09vE56Sg/w-d-xo.html)
    Logistic Regression and SVM
    20. Logistic Regression (th-cam.com/video/CE03E80wbRE/w-d-xo.html)
    21. Introduction to Support Vector Machine (th-cam.com/video/gidJbK1gXmA/w-d-xo.html)
    22. The Dual Formulation (th-cam.com/video/YOsrYl1JRrc/w-d-xo.html)
    23. SVM Maximum Margin with Noise (th-cam.com/video/WLhvjpoCPiY/w-d-xo.html)
    24. Nonlinear SVM and Kernel Function (th-cam.com/video/GcCG0PPV6cg/w-d-xo.html)
    25. SVM Solution to the Dual Problem (th-cam.com/video/Z0CtYBPR5sA/w-d-xo.html)
    26. Python Exercise on SVM (th-cam.com/video/w781X47Esj8/w-d-xo.html)
    Neural Networks
    27. Introduction to Neural Networks (th-cam.com/video/zGQjh_JQZ7A/w-d-xo.html)
    28. Multilayer Neural Network (th-cam.com/video/hxpGzAb-pyc/w-d-xo.html)
    29. Neural Network and Backpropagation Algorithm (th-cam.com/video/T6WLIbOnkvQ/w-d-xo.html)
    30. Deep Neural Network (th-cam.com/video/pLPr4nJad4A/w-d-xo.html)
    31. Python Exercise on Neural Networks (th-cam.com/video/kTbY20xlrbA/w-d-xo.html)
    Computational Learning Theory
    32. Introduction to Computational Learning Theory (th-cam.com/video/8hJ9V9-f2J8/w-d-xo.html)
    33. Sample Complexity: Finite Hypothesis Space (th-cam.com/video/nm4dYYP-SJs/w-d-xo.html)
    34. VC Dimension (th-cam.com/video/PVhhLKodQ7c/w-d-xo.html)
    35. Introduction to Ensembles (th-cam.com/video/nelJ3svz0_o/w-d-xo.html)
    36. Bagging and Boosting (th-cam.com/video/MRD67WgWonA/w-d-xo.html)
    Clustering
    37. Introduction to Clustering (th-cam.com/video/CwjLMV52tzI/w-d-xo.html)
    38. Kmeans Clustering (th-cam.com/video/qg_M37WGKG8/w-d-xo.html)
    39. Agglomerative Clustering (th-cam.com/video/NCsHRMkDRE4/w-d-xo.html)
    40. Python Exercise on k-means Clustering (th-cam.com/video/qs7vES46Rq8/w-d-xo.html)
    Tutorial I (th-cam.com/video/uFydF-g-AJs/w-d-xo.html)
    Tutorial II (th-cam.com/video/M6HdKRu6Mrc/w-d-xo.html)
    Tutorial III (th-cam.com/video/Ui3h7xoE-AQ/w-d-xo.html)
    Tutorial IV (th-cam.com/video/3m7UJKxU-T8/w-d-xo.html)
    Tutorial VI (th-cam.com/video/b3Vm4zpGcJ4/w-d-xo.html)
    Solution to Assignment 1 (th-cam.com/video/qqlAeim0rKY/w-d-xo.html)

  • @HA-bj5ck
    @HA-bj5ck 1 year ago +3

    Very well explained... This is GOLD ❤

  • @shudeshna66
    @shudeshna66 7 years ago +4

    Having 1/2 as a multiplicative factor does not change the solution as what minimizes z also minimizes 1/2 z. 1/2 is usually added so that the derivative formula has a constant coefficient of 1.
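
    In symbols (a standard one-example derivation, added for clarity; h_theta is the linear hypothesis):

        J(\theta) = \tfrac{1}{2}\,\bigl(h_\theta(x) - y\bigr)^2
        \frac{\partial J}{\partial \theta_j}
            = \tfrac{1}{2} \cdot 2\,\bigl(h_\theta(x) - y\bigr)\,x_j
            = \bigl(h_\theta(x) - y\bigr)\,x_j

    And since arg min_theta of c*J(theta) equals arg min_theta of J(theta) for any constant c > 0, a factor of 1/2 (or 1/n) never changes the fitted parameters.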

  • @lampfall7915
    @lampfall7915 2 years ago +2

    She is a wonderful teacher; respect to you.

  • @itsdurgeshray
    @itsdurgeshray 9 days ago

    CORRECTION at 16:24
    The equation should not have exponents as powers; that form is polynomial regression.

  • @RohitKumar-jh1km
    @RohitKumar-jh1km 7 years ago +64

    You people explain in detail the things that don't require explanation, and the things that do require explanation you skip, as if they don't need explanation at all.

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 6 years ago +3

      They copied from Andrew Ng's lectures.

    • @akhandbha
      @akhandbha 6 years ago

      How do you know?

    • @shivaniamehta9851
      @shivaniamehta9851 4 years ago

      You can see this for clarification.
      medium.com/@nicolabernini_63880/ml-what-is-the-difference-between-gradient-descent-and-stochastic-gradient-descent-be79ab450ef0

    • @subashchandrapakhrin3537
      @subashchandrapakhrin3537 4 years ago +1

      @@dipanjanbiswas4924 Is Andrew Ng the father of ML, or the people who write the papers?

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 4 years ago

      @@subashchandrapakhrin3537 You can say that.

  • @roseb2105
    @roseb2105 5 years ago +7

    Maybe I am missing something here, but are these lessons meant to be a review or just an overview of what will be taught? Because it's hard to understand this on a first encounter without more examples.

  • @avinashdwivedi2015
    @avinashdwivedi2015 2 years ago +6

    I was good at linear regression, and after watching this lecture I forgot everything about regression. Lol, ironic.

  • @siddharthGupta632
    @siddharthGupta632 6 years ago +13

    Why have you written the polynomial regression equation in place of multiple linear regression? This seems like a bad lecture; not what is expected from IIT.

    • @sunny10528
      @sunny10528 4 years ago

      Yes, I too got stuck at this point in the lecture and started doubting my own knowledge.

  • @onataghoghoatikpe5989
    @onataghoghoatikpe5989 4 years ago +3

    I am enjoying your courses. Thanks!

  • @Tchknow879
    @Tchknow879 1 year ago

    Ma'am, you teach wonderfully, but one suggestion: could you improve your blackboard and your camera so we can see clearly?

  • @rajasekharareddy6246
    @rajasekharareddy6246 6 years ago +5

    To understand this video, I think people must know linear algebra; only then can they understand this concept.

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      The name is *Linear* Regression, my man.

  • @theperson66
    @theperson66 7 months ago

    The best professor!! I love your classes, thank you for your hard work.

  • @sunderrajan6172
    @sunderrajan6172 7 years ago +8

    A somewhat confusing lecture: it switches from the single-variable regression example to multi-variable, and all the explanation is rushed. I was hoping the examples would be well explained. The 1/2 in the equation, is it for half theta? I heard this is not important. Compared with Stanford or MIT online lectures, a lot of improvement is needed.

    • @shirshak6738
      @shirshak6738 5 years ago +1

      For examples, see the tutorials.

  • @jamesmathew8291
    @jamesmathew8291 1 year ago

    Excellently covered topic. Which textbook is the reference, ma'am?

  • @tolifeandlearning3919
    @tolifeandlearning3919 2 years ago +2

    Great lecture

  • @anushamathur2019
    @anushamathur2019 3 years ago +5

    Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth-degree polynomial in x:
    y = b0 + b1x + b2x^2 + ...
    and you are calling that multiple independent variables, which corresponds to multiple linear regression, not polynomial.

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      This is indeed a mistake, I presume. It should be y = b0 + b1x1 + b2x2 + ... + bpxp instead of raising to powers.

    • @shreyxsh5054
      @shreyxsh5054 3 years ago +2

      @@ashwinprasad5180 Yes! It took me a day. I thought IIT KGP teachers must be right... then I found the Andrew Ng lecture, and now it's all sorted. Thanks!

  • @wreckedinsect5069
    @wreckedinsect5069 3 years ago

    My professor lectured for three hours and I understood nothing except that linear is straight. Here, in half an hour, I am really ready for the exam. Thanks!

  • @santoshkumargoutam4791
    @santoshkumargoutam4791 3 years ago

    Ma'am: excellent concept clarification.

  • @chandureddim4327
    @chandureddim4327 a month ago

    Can anybody help me understand why we need to assume that the errors are independent of each other, have zero mean and some fixed standard deviation, and are normally distributed? Please.
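
    One standard justification for readers with the same question (a sketch, not from the lecture): if the errors are assumed i.i.d. Gaussian with zero mean, maximizing the likelihood of the data is exactly minimizing the sum of squared errors.

        y_i = \theta^\top x_i + \varepsilon_i,
            \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \ \text{i.i.d.}
        \log L(\theta)
            = \sum_{i=1}^{n} \log\!\left[ \frac{1}{\sqrt{2\pi}\,\sigma}
              \exp\!\left( -\frac{(y_i - \theta^\top x_i)^2}{2\sigma^2} \right) \right]
            = \text{const} - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \theta^\top x_i)^2

    Zero mean keeps the fit unbiased, independence lets the log-likelihood split into a sum, equal variance makes an unweighted sum the right objective, and normality is what makes least squares the maximum-likelihood answer.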

  • @debarpitosinha1162
    @debarpitosinha1162 1 year ago

    Error in the multiple linear regression formula: it should be Y = b1 + b2x2 + b3x3 + ... + bpxp.

  • @jivanmainali1742
    @jivanmainali1742 3 years ago +2

    Why is the objective function 1/2 of the sum of squared errors? If we have n data points it should be an average, so I would guess 1/n of the sum of squared errors.

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      It's to make the mathematics easier, since you have to differentiate the function later. (The 1/2 cancels the 2 that comes from differentiation, making the equations a lot cleaner.)

  • @sujitfulse8846
    @sujitfulse8846 7 years ago

    Please explain the concepts completely; do not leave them unfinished in between.

  • @getfitwithakhil
    @getfitwithakhil 6 years ago +2

    Ma'am, you rushed towards the end of the lecture. The theory is more important, as we have computers to do most of the calculations.

  • @abhyunnati8589
    @abhyunnati8589 1 year ago

    Superb

  • @JMD_coding
    @JMD_coding 3 years ago +1

    Sir/ma'am, after completing this course can I get a certificate?
    Please reply.

  • @ankursaxena4942
    @ankursaxena4942 4 years ago

    Nice video on how to use #Linear_Regression in #Machine_Learning.

    • @harisankar6104
      @harisankar6104 4 years ago

      Bro, please help me: at 16:09 of this video she gives the equation for Y with power terms for p independent variables, as in polynomial regression, but in the last section she says that for multiple variables the equation is a linear function, just like multivariable regression. Are these p independent variables not multiple variables?

  • @viral_baba
    @viral_baba 5 years ago +2

    Hello Prof,
    The equations written on the blackboard are for polynomial regression, but the slides contain the equations of multivariate regression. Is this a mistake? If it is, please mention it in an annotation. If anyone knows the answer to my query, please respond.
    Thanks

    • @shivangidubey7062
      @shivangidubey7062 3 years ago

      It was... she wrote the polynomial equation.

  • @SHIVAMGUPTA-wb5mw
    @SHIVAMGUPTA-wb5mw 4 years ago

    We started with the question of how to find the parameters but never discussed that...

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      That is what the gradient descent algorithm she wrote at the end does: it finds the parameters that reduce the loss function (a minimal sketch follows this reply).
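
    A minimal batch gradient descent sketch for linear regression, assuming the squared-error objective; the data, learning rate, and iteration count are made up for illustration:

        import numpy as np

        # Made-up data: y is roughly 4 + 3x plus noise.
        rng = np.random.default_rng(1)
        x = rng.uniform(0, 2, size=100)
        y = 4.0 + 3.0 * x + rng.normal(scale=0.5, size=100)

        X = np.column_stack([np.ones_like(x), x])  # intercept column + x
        theta = np.zeros(2)                        # initial parameters
        alpha = 0.1                                # learning rate (assumed)

        for _ in range(2000):
            residual = X @ theta - y          # h_theta(x_i) - y_i for all i
            grad = X.T @ residual / len(y)    # gradient of (1/2n) * sum of squares
            theta -= alpha * grad             # step along the negative gradient

        print(theta)  # approaches [4, 3]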

  • @madsudan9227
    @madsudan9227 6 years ago +1

    Gives a brief overview. Thanks for your efforts!

  • @roseb2105
    @roseb2105 5 years ago +3

    I'm very confused and lost with these lectures.

  • @rohitranjan5218
    @rohitranjan5218 3 years ago

    How is she explaining a non-linear equation as a linear one? The equation should be linear, but she has ended up with a non-linear one: the subscript notation has been written as powers. Minutes 16 to 19 of the video.

  • @sauravprasad1996
    @sauravprasad1996 7 years ago +4

    Skipped directly to the LMS algorithm without clearly explaining how to learn the parameters. Poor explanations!

  • @hiraksenroy691
    @hiraksenroy691 6 years ago

    Easy to interpret for those with a statistics background.

  • @SwaroopSinghDeval
    @SwaroopSinghDeval 7 years ago +7

    The equation for multi-variable linear regression is wrong.

    • @solarstryker
      @solarstryker 7 years ago +1

      Swaroop Singh Deval, yeah, I think she misinterpreted the subscripts as powers.

    • @premshankar5967
      @premshankar5967 6 years ago

      Yes, exactly.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago +2

      You are confusing multivariate linear regression with polynomial regression. The equation given here is not wrong; it is just a special case of the multivariate form, where every feature is a function of the first variable. This is called polynomial regression, and it is still linear in the coefficients (see the sketch after this reply).
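
    A minimal sketch of that point, on made-up data: expand one variable into polynomial features, then fit with an ordinary linear least-squares solve (the numpy usage is illustrative, not from the lecture):

        import numpy as np

        # Made-up data from a cubic: y = 1 - 2x + 0.5x^3 plus noise.
        rng = np.random.default_rng(2)
        x = rng.uniform(-2, 2, size=200)
        y = 1.0 - 2.0 * x + 0.5 * x ** 3 + rng.normal(scale=0.1, size=200)

        # Feature map [1, x, x^2, x^3]: nonlinear in x,
        # but linear in the coefficients, so plain least squares works.
        X = np.vander(x, N=4, increasing=True)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coef)  # approximately [1, -2, 0, 0.5]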

  • @rishabhpansari9963
    @rishabhpansari9963 5 years ago +4

    I think LMS stands for least mean square (see the sketch after this thread).

    • @mrm371
      @mrm371 2 months ago

      Scope
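
    For reference, a minimal sketch of the LMS (Widrow-Hoff) rule, which updates the parameters after each sample rather than after a full batch; the data and learning rate are made up for illustration:

        import numpy as np

        # Made-up data: y is roughly 4 + 3x plus noise.
        rng = np.random.default_rng(3)
        x = rng.uniform(0, 2, size=500)
        y = 4.0 + 3.0 * x + rng.normal(scale=0.5, size=500)
        X = np.column_stack([np.ones_like(x), x])

        theta = np.zeros(2)
        alpha = 0.05  # learning rate (assumed)

        # LMS / Widrow-Hoff: one update per sample.
        for i in range(len(y)):
            error = y[i] - X[i] @ theta    # y_i - h_theta(x_i)
            theta += alpha * error * X[i]  # theta := theta + alpha * error * x_i

        print(theta)  # roughly [4, 3] after a single pass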

  • @pankajkumarbarman765
    @pankajkumarbarman765 1 year ago

    Great lecture, ma'am. Thank you so much, and happy Teachers' Day. Please accept my pranam.

  • @s_sasmal
    @s_sasmal 5 years ago +4

    Can't imagine that kids prepare from their 8th standard to get into IIT,
    and after getting into IIT they receive this kind of lecture.

  • @manyamittal6767
    @manyamittal6767 6 years ago

    Maybe split this lecture into two. It got really rushed at the end.

  • @regretsonly44
    @regretsonly44 1 year ago

    Queen 👑
    Amazing explanation

  • @JMD_coding
    @JMD_coding 3 years ago

    Shall we get a certificate after completing all the videos?

  • @harisankar6104
    @harisankar6104 4 years ago

    Please help me: at 16:09 of this video she gives the equation for Y with power terms for p independent variables, as in polynomial regression, but in the last section she says that for multiple variables the equation is a linear function, just like multivariable regression. Are these p independent variables not multiple variables?

  • @ritikraushan7392
    @ritikraushan7392 2 years ago +1

    I didn't understand anything.

  • @Man0fSteell
    @Man0fSteell 7 years ago +11

    There are TH-cam channels that provide better lectures and simpler explanations than these IIT professors.
    Too bad our Indian quality of education/teaching (or whatever you want to call it) needs to improve a lot!! :(

  • @shashu1999
    @shashu1999 6 years ago +1

    Copied the J(theta) formula from Andrew Ng's module and didn't update the variables.

  • @wolfisraging
    @wolfisraging 6 years ago +6

    Worst explanation of gradient descent in the world

  • @harshitsingh480
    @harshitsingh480 5 years ago +1

    LMS stands for least mean square, not least minimum slope.

    • @harshitsingh480
      @harshitsingh480 5 years ago

      Sorry for the comma in my comment; it should read 'mean', not 'm,ean'.

  • @saurabhchoudhary4572
    @saurabhchoudhary4572 6 years ago +4

    Ma'am, please review your lectures before publishing: poor explanation and an incorrect equation for multiple linear regression.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago

      You are confusing multivariate linear regression with polynomial regression. The equation given here is not wrong; it is just a special case of the multivariate form, where every feature is a function of the first variable. This is called polynomial regression.

  • @ishankulkarni3542
    @ishankulkarni3542 5 years ago

    Not understanding it... madam is just explaining again what is already in the PPT.

  • @sachinsd4663
    @sachinsd4663 6 years ago +4

    28:12 wtf was that? It sounded alien-like and hilarious 😂

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Bro, I was searching for this comment, lmao!!!!

    • @sachinsd4663
      @sachinsd4663 6 years ago

      @@mitrabb4812 Dude, I am glad someone noticed that. It is insane.

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Yeah man, big LOL.

  • @saptarshisanyal4869
    @saptarshisanyal4869 3 years ago

    Sorry to say this, ma'am, but this is a wrong explanation of the gradient descent algorithm and the cost function. The tutorial was good until the 25-minute mark; after that it was total confusion.

  • @tararawat2955
    @tararawat2955 7 years ago +1

    Things are not being clearly explained; it's really unclear and confusing... at the least, that example should be worked through completely so we can understand the concept.

  • @TheUnblameable22
    @TheUnblameable22 4 years ago

    Surprised to see it: writing from a chit and still getting the basic equation wrong. The multiple linear regression is wrongly written, and the assumptions are just copied, not explained.

  • @mahipalmahato7648
    @mahipalmahato7648 11 months ago

    7:25

  • @JMD_coding
    @JMD_coding 3 years ago

    Ma'am, can I get a certificate?

  • @sandeepkushwaha9790
    @sandeepkushwaha9790 6 years ago

    Now I'm more confused; the explanation is not good. Can anyone share good videos on linear regression with gradient descent?

    • @a.yashwanth
      @a.yashwanth 6 years ago

      Coursera's Machine Learning by Stanford is good.

  • @SubhamCreative.613kviews
    @SubhamCreative.613kviews 6 years ago

    NPTEL teaches us very badly...

  • @navedahmad5851
    @navedahmad5851 7 years ago

    A proper explanation should be provided; the teacher is just rushing without explaining the concepts. This is not good.

  • @koppuprasanthkumar9211
    @koppuprasanthkumar9211 4 years ago

    Is this what is called NPTEL?... A waste of time. Whatever concepts need extra time, you just skip. I won't watch NPTEL from now on. Why you are doing these NPTEL certifications, I don't know. Overall, the title is not at all justified: we still don't know how to learn the straight line using linear regression. Don't watch and waste your time; find other resources.

  • @suddhasheel
    @suddhasheel 7 years ago

    Sorry to say this, but these are poor explanations by IIT standards. LMS, batch gradient descent, and stochastic gradient descent would require more explanation.

  • @SandeepSharmaRhythmNGroove
    @SandeepSharmaRhythmNGroove 6 years ago

    Not a good explanation at all.