Locally Weighted & Logistic Regression | Stanford CS229: Machine Learning - Lecture 3 (Autumn 2018)

  • Published Feb 8, 2025

Comments • 147

  • @raccoonious4038
    @raccoonious4038 10 months ago +59

    The simplification of log likelihood function log(L(theta)) to give you back the cost function J(theta) has to be one of the most beautiful transformations I've seen in a while!

    • @MosesMakuei-b5z
      @MosesMakuei-b5z 5 months ago +1

      Hehe, I'm certain that the first derivation of least squares as a cost function did not come from a probabilistic interpretation. That goes to prove that if you are right from one angle, you will also be right from all angles. It was interesting to see that too.

  • @SaidurRahman-c8w
    @SaidurRahman-c8w 6 months ago +76

    Notice how the views drop by half with each new lecture. Congratulations on making it this far; keep going, fellas, we got this.

  • @manudasmd
    @manudasmd 2 years ago +113

    Damn, this guy just explains concepts so clearly. Love this course.

    • @morespinach9832
      @morespinach9832 1 year ago +2

      Where else have you seen these techniques explained? It’s not that hard.

    • @hannukoistinen5329
      @hannukoistinen5329 1 year ago

      Damn, chinese communist teaching in Stanford!!!

    • @manudasmd
      @manudasmd 1 year ago +16

      @@hannukoistinen5329 Damn, cool joke bro!! You must really be a funny guy.

    • @ujjolchakrabarty9285
      @ujjolchakrabarty9285 8 months ago +1

      Do you know where we can get the practice sets for the course?

    • @shaksham.22
      @shaksham.22 7 months ago

      @@ujjolchakrabarty9285 Trying to find the same thing. Can't access it from the website.

  • @_desouvik
    @_desouvik 7 months ago +36

    They changed the voices of the students 👩‍🎓; at first I was puzzled why they all talk the same way 😮, but now it makes sense.

  • @FrankDong-o8b
    @FrankDong-o8b 1 month ago +1

    This lecture reinforces my understanding of the relationship between OLS and MLE in a much, much better way.

  • @atalantinopieva
    @atalantinopieva 7 months ago +8

    For anyone struggling with the concept... likelihood indicates how likely a particular population is to produce an observed sample, given a particular distribution.
    For example, if we have data that should follow a Gaussian distribution with mean = 5 and variance = 0.1, but the ACTUAL data in my dataset are all 0.5, well... the likelihood that my data actually follow this distribution is very low!
    If each ACTUAL data point has a high probability density, the overall likelihood will be high!
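
    A quick numerical sketch of the idea above (my own toy example, assuming NumPy/SciPy; the mean-5, variance-0.1 numbers mirror the comment):

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothesized model from the comment above: N(mean=5, variance=0.1).
    model = norm(loc=5, scale=np.sqrt(0.1))

    plausible = np.array([4.9, 5.1, 5.0, 4.8])  # looks like it came from the model
    implausible = np.full(4, 0.5)               # the "all 0.5" dataset

    # Log-likelihood of i.i.d. data = sum of per-point log-densities.
    print(model.logpdf(plausible).sum())    # modest value: data fit the model
    print(model.logpdf(implausible).sum())  # hugely negative: model is implausible
    ```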

  • @carvalhoribeiro
    @carvalhoribeiro 1 year ago +7

    Your clear explanation of these concepts is greatly appreciated. Thank you so much for sharing

  • @SalihBekri
    @SalihBekri 1 year ago +5

    I'm an EE student and we don't have anything to do with ML except a simple course in the final year, and I'm still taking this course. Wish me luck guys, because it's hard, really hard.

    • @AadityaSaraf69
      @AadityaSaraf69 1 year ago

      good luck! and yes, it is very hard

  • @adhammazen2547
    @adhammazen2547 2 months ago +11

    Hello, I don't know if the comment section here is still being replied to/looked at by staff, but I want to point out that the lecture notes PDFs on the website of CS229 lead to an error and don't actually load. I'm certain many others like myself would appreciate being able to look at these notes. Thank you!

    • @TonyDaExpert
      @TonyDaExpert 2 months ago

      I recommend looking for the GitHub repo that has them.

    • @bhargavtripathi907
      @bhargavtripathi907 2 months ago

      You can search Google for the notes under the name "Andrew Ng CS229 notes".

    • @T-r5t
      @T-r5t 1 month ago

      I have the same issue, did you get any help?

    • @nikhils1182
      @nikhils1182 25 days ago

      @@T-r5t Just check out the webpage of the CS229 2020 class; main_notes has all the course notes by Andrew Ng himself...

    • @Twi_543
      @Twi_543 7 days ago

      @@T-r5t Perhaps try the Wayback Machine archive.

  • @elonmusk4267
    @elonmusk4267 1 year ago +7

    What a phenomenal lecture! So beautiful, so elegant, just looking like a wow

  • @tomzhangg
    @tomzhangg 2 years ago +11

    A classic tradeoff in locally weighted models between training cost and accuracy, though it seems like the cost really comes from refitting for each x input during testing.

    • @adityachauhan7269
      @adityachauhan7269 1 year ago +2

      Ohhh, so that's how it does it. Wouldn't this overfit? It's like the start of thinking towards "forest-like" methods, amazing.
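
      To make the refit-per-query point concrete, here is a minimal NumPy sketch of locally weighted linear regression (my own toy code, not course material). Theta is re-solved for every query x, which is exactly where the test-time cost, and the overfitting risk for a small bandwidth tau, comes from:

      ```python
      import numpy as np

      def lwr_predict(X, y, x_query, tau=0.8):
          """Locally weighted linear regression: theta is re-fit per query point.

          A toy sketch (names are mine). X is (m, n) with an intercept
          column already added; x_query is (n,).
          """
          # Gaussian kernel weights from the lecture: nearby points count more.
          w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
          # Weighted normal equations: (X^T W X) theta = X^T W y.
          XtW = X.T * w
          theta = np.linalg.solve(XtW @ X, XtW @ y)
          return x_query @ theta

      # Every prediction re-solves for theta, so the whole training set must
      # be kept around at test time (the "non-parametric" memory cost).
      X = np.c_[np.ones(5), np.array([0., 1., 2., 3., 4.])]
      y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
      print(lwr_predict(X, y, np.array([1., 2.])))
      ```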

  • @glitchAI
    @glitchAI 8 months ago +1

    He speaks with so much bass that I have to ramp up my volume.

  • @Emanuel-oz1kw
    @Emanuel-oz1kw 6 months ago +8

    Motivation: only 10% will make it to the last video

  • @dartng5029
    @dartng5029 1 year ago +17

    0:28: 📚 The video discusses supervised learning, specifically linear regression, locally weighted regression, and logistic regression.
    5:38: 📚 Locally weighted regression is a non-parametric learning algorithm that requires keeping data in computer memory.
    13:05: 📊 Locally weighted regression is a method that assigns different weights to data points based on their distance from the prediction point.
    19:01: 📚 Locally linear regression is a learning algorithm that may not have good results and is not great at extrapolation.
    24:46: 🔍 The video discusses Gaussian density and its application in determining housing prices.
    31:31: 💡 The likelihood of the parameters is the probability of the data given the parameters, assuming independent and identically distributed errors.
    36:55: 📊 Maximum Likelihood Estimation (MLE) is a commonly used method in statistics to estimate parameters by maximizing the likelihood or log-likelihood of the data.
    43:44: 📊 Applying linear regression to a binary classification problem is not a good idea.
    49:22: 🎯 The video discusses the choice of hypothesis function in learning algorithms and why logistic regression is chosen as a special case of generalized linear models.
    54:45: 📚 The video explains how to compress two equations into one line using a notational trick.
    1:01:31: ✏ Batch gradient ascent is used to update the parameters in logistic regression.
    1:07:52: 📚 The video explains how to use Newton's method to find the maximum or minimum of a function.
    1:13:55: 💡 Newton's method is a fast algorithm for finding the place where the first derivative of a function is 0, using the first and second derivatives.
    Recap by Tammy AI

  • @НиколайТодоров-и9т
    @НиколайТодоров-и9т 1 year ago +24

    I love the videos and Mr. Ng explains things clearly, but gosh, the markers he uses are so pale and hard to read.

  • @CLL-mr3kz
    @CLL-mr3kz 1 month ago

    Let's do it guys!

  • @moussadiallo6430
    @moussadiallo6430 1 year ago +1

    Great lecture. ML is fun with you 😀

    • @stanfordonline
      @stanfordonline  1 year ago +1

      Thanks for your comment and for watching!

  • @AyushGupta-zc4lh
    @AyushGupta-zc4lh 1 year ago +1

    Awesome lecture

  • @AditiSalunkhe-f4q
    @AditiSalunkhe-f4q 3 months ago

    At timestamp 52:11, how can P(Y=1 | x; theta) = h(x), since h(x) should only take the two values 0 and 1? This gives a value of h(x) which lies in [0, 1].

    • @AyanKhan-ek1iy
      @AyanKhan-ek1iy 3 months ago +1

      Basically, h(x) can take values between 0 and 1, since it is a probability. What you're talking about here is the actual class, or "y". It can either be 0 or 1. So P(Y=1|x;theta) actually means that we are finding the probability that our class would be 1, given a feature x and parameters theta. Hope that makes it clear.
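
      A tiny sketch of that distinction (hypothetical numbers; h(x) here is the logistic hypothesis from the lecture):

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      theta = np.array([-1.0, 2.0])  # made-up parameters
      x = np.array([1.0, 0.8])       # x_0 = 1 intercept term, plus one feature

      h = sigmoid(theta @ x)  # h(x) = P(y = 1 | x; theta), a number in (0, 1)
      y_hat = int(h >= 0.5)   # the hard class label y is a separate, discrete step
      print(h, y_hat)         # ~0.65, 1
      ```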

  • @ramankr0022
    @ramankr0022 1 year ago +1

    Very helpful.

  • @All_Kraft
    @All_Kraft 11 months ago +1

    Thank you for the explanation. I don't know why, but it's so annoying when the lecturer constantly erases and rewrites the same symbols((

  • @liketheblue5082
    @liketheblue5082 2 years ago +8

    32:18 I have a question about this likelihood function. Can somebody help me with it?
    According to the IID assumption, the probability of all the observations is equal to the product of each probability. However, isn't the expression a density rather than a probability of a normal distribution? I am really confused. I think the probability should be the integral of the density function. If it's a density, what's the meaning of the product of densities?

    • @HamzaAsgharKhan
      @HamzaAsgharKhan 2 years ago +10

      For i.i.d. variables, P(AB) = P(A)P(B). Your observation that it is the probability density of the Gaussian is correct. When we maximize it, we are trying to find the point which has the highest probability, and a point that has the highest density will have the highest probability. So using the probability density function is correct in that regard. (I think you are confused by the fact that the density may take values that are not between 0 and 1, but with a little thought about what I said, hopefully you will see why normalizing the values to be between 0 and 1 does not really matter.) I don't know how much help this answer will be to you; I'm simply having a hard time articulating what I'm trying to say.

    • @liketheblue5082
      @liketheblue5082 2 years ago +4

      @@HamzaAsgharKhan Thank you very much! I didn't expect someone would give me such a detailed answer! That's exactly what I thought. The product of densities might not really have a meaning in statistics, and the density can be greater than 1, but it is enough to find the maximum point. I appreciate it!!

    • @HamzaAsgharKhan
      @HamzaAsgharKhan 2 years ago +3

      @@liketheblue5082 I'm glad it helped! 😊
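
      A small sanity check of this thread's conclusion (a sketch assuming SciPy; the data and candidate grid are made up): individual densities can exceed 1, so the likelihood is not a probability, yet its argmax still recovers the right parameter.

      ```python
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      data = rng.normal(loc=2.0, scale=0.1, size=50)  # true mean 2.0

      # With scale=0.1 the Gaussian pdf peaks near 4, i.e. densities > 1.
      candidate_means = np.linspace(1.0, 3.0, 201)
      log_lik = [norm(mu, 0.1).logpdf(data).sum() for mu in candidate_means]
      print(candidate_means[np.argmax(log_lik)])  # close to 2.0
      ```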

  • @xinli7836
    @xinli7836 6 months ago

    Awesome totally

  • @karthikeyapervela3230
    @karthikeyapervela3230 9 months ago +1

    26:37 How is it being implied? Like, we are assuming the error term to be Gaussian, and from there we jumped to the conditional distribution of y given x parameterized by theta. I did not understand this implication.

    • @suvamsivam9658
      @suvamsivam9658 8 months ago

      It's assumed that the error term is normally distributed.

  • @aksharpedada7037
    @aksharpedada7037 2 days ago

    Can anyone suggest material to go through alongside the course that's relevant today?

  • @surajyadav1033
    @surajyadav1033 1 year ago +2

    At 1:16:59, shouldn't the formula have a negative sign before the Hessian inverse?

    • @neelabhsomani5129
      @neelabhsomani5129 7 months ago +1

      We are trying to *maximize* the likelihood function. Hence the formula has a +ve sign instead of a negative sign.

  • @henryyy8625
    @henryyy8625 7 months ago +1

    How can we access the homework? The link in the syllabus leads to Piazza, but it doesn't get into the classroom.

  • @גבריאלחדאד
    @גבריאלחדאד 16 days ago

    I have a question, if anybody can help:
    In the locally weighted linear regression algorithm, do we need to recalculate the theta vector for each prediction? Because the weight function depends on the example we are trying to make the prediction for... or maybe in the training stage we pick several points, calculate the theta vector for each of them, and use the closest option when making predictions? How exactly does this work?
    And by the way, can anyone provide a link to the course website?

  • @haoranlee8649
    @haoranlee8649 1 year ago

    I like this guy's video, it's amazing.

  • @طالبالعلم-ج1ث
    @طالبالعلم-ج1ث 1 year ago

    Thank You Very Much

  • @RiyaSharma-lq1ok
    @RiyaSharma-lq1ok 21 days ago

    Where can I find the lecture notes and the problem sets?

  • @pavel.pavlov
    @pavel.pavlov 1 year ago +1

    He needs to get the IBM guy's blackboard.

  • @nanunsaram
    @nanunsaram 2 years ago +1

    Thank you!

  • @ShubhamKumar-it2uy
    @ShubhamKumar-it2uy 10 months ago +1

    Can anyone explain what happened to Andrew's voice at 19:32?

    • @mshoshan9698
      @mshoshan9698 10 months ago +2

      It seems like they applied some audio distortion effect whenever a student asks a question (to preserve anonymity) that makes their voice sound very deep.

  • @elching.8924
    @elching.8924 1 month ago

    Andrew is an applied professional; he has a surface-level understanding of the math behind these concepts, and the lack of depth in his knowledge affects the quality of his arguments. There is no way students can understand Newton's method without knowing the proofs of the mean value theorems.

  • @morespinach9832
    @morespinach9832 1 year ago +2

    Since when did these basic statistics techniques become “machine learning”??

    • @closingtheloop2593
      @closingtheloop2593 11 months ago

      It's all marketing. To be fair, many of these results fall out of linear systems theory, which does not require any statistics. So the branding is somewhat subjective.

  • @jaymistry689
    @jaymistry689 4 months ago +1

    Can someone help me find the partial derivative of l(theta) at 1:01:10?
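
    For anyone else stuck here, this is the standard derivation from the lecture notes, for a single example with h = sigma(theta^T x), using sigma'(z) = sigma(z)(1 - sigma(z)):

    ```latex
    \ell(\theta) = y \log h + (1 - y)\log(1 - h), \qquad h = \sigma(\theta^{\top} x)

    \frac{\partial \ell}{\partial \theta_j}
      = \left(\frac{y}{h} - \frac{1 - y}{1 - h}\right) h (1 - h)\, x_j
      = \bigl(y(1 - h) - (1 - y)h\bigr)\, x_j
      = \bigl(y - h_{\theta}(x)\bigr)\, x_j
    ```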

  • @bwmartin24
    @bwmartin24 2 years ago +21

    A lot of the links on the syllabus linked in the description aren't working. Is there an updated version with the class notes PDFs, etc.?

    • @aphievel
      @aphievel 2 years ago +3

      You can refer to the notes of the summer 2019 class. Though the topics were covered in a different order, the content is the same.

    • @ikrammaizi8678
      @ikrammaizi8678 1 year ago +2

      @@aphievel where?

    • @shakeelahmad3162
      @shakeelahmad3162 1 year ago

      docs.google.com/spreadsheets/d/18pHRegyB0XawIdbZbvkr8-jMfi_2ltHVYPjBEOim-6w/edit#gid=0

  • @aysukeskin3749
    @aysukeskin3749 13 days ago

    How can I get access to the homework and projects?

  • @logeshwaran1537
    @logeshwaran1537 1 year ago +1

    Does anybody know how to get familiar with these concepts... like where to apply and practice them??

    • @YisneySoto
      @YisneySoto 11 months ago

      I'm thinking about ChatGPT. Ask it for exercises and to evaluate your responses.

  • @KipIngram
    @KipIngram 2 years ago

    10:24 - How is this just not a form of interpolation using shape functions? That doesn't really seem like "learning" to me.

  • @sophiafunworldatthepark6740
    @sophiafunworldatthepark6740 1 year ago

    I'm trying to find a way to use this to teach kids.

  • @stephendiopter2289
    @stephendiopter2289 1 year ago +1

    The course page has some problem sets and class notes provided by the prof, but they are inaccessible. Is there any way to get those?
    P.S. I just need the problem sets.

    • @stephendiopter2289
      @stephendiopter2289 1 year ago

      Never mind, got them.

    • @prienee
      @prienee 11 months ago

      @@stephendiopter2289 How did you get them?

    • @sowaszpieg7528
      @sowaszpieg7528 11 months ago

      @@stephendiopter2289 Where did you find them?

    • @durai5213
      @durai5213 10 months ago

      @@stephendiopter2289 Can you help me find the lecture notes?

  • @viharivemuri7202
    @viharivemuri7202 11 months ago

    While deriving the maximum likelihood for linear regression, the professor modelled a Gaussian error term. However, for logistic regression he did not use an error term. Does anyone know why that is?

    • @shaksham.22
      @shaksham.22 7 months ago +1

      You don't need an error term for logistic regression, because in linear regression you are trying to predict h(x), which can vary based on some real-world phenomenon. However, in the case of classification (for which logistic regression is used), you are more or less trying to fit h(x) into a few defined classes of output, for instance the true or false of an occurrence. Hence the presence of an error term does not have any effect on the outcome. In other words, the output h(x) is discrete in classification, so there's no need for an error term.
      I may be wrong about this, though.
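
      Another way to see it (a toy sketch, all names and numbers made up): in the linear model the randomness is an additive Gaussian term, while in the logistic model the randomness is the Bernoulli draw itself, so there is no separate error term to add.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      theta = np.array([-1.0, 2.0])                 # made-up parameters
      X = np.c_[np.ones(100), rng.normal(size=100)] # intercept + one feature

      # Linear regression story: y = theta^T x + eps, with eps ~ N(0, sigma^2).
      y_linear = X @ theta + rng.normal(scale=0.5, size=100)

      # Logistic regression story: y ~ Bernoulli(h(x)); the "noise" is the draw.
      y_logistic = rng.binomial(1, sigmoid(X @ theta))
      ```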

  • @neelabhsomani5129
    @neelabhsomani5129 1 year ago

    Check point 44:16

    • @malfuriosstormrage5218
      @malfuriosstormrage5218 1 year ago

      Thanks. Can you explain what he meant about h(x) being different when using the logistic function? Is it because it's bounded to [0,1]?

    • @neelabhsomani5129
      @neelabhsomani5129 7 months ago

      @@malfuriosstormrage5218 h(x) is nothing but our hypothesis function. So depending on the task (classification or regression), our hypothesis function will look different. For example, for linear regression our h(x) was w0 + w1x1 + ... + wnxn. (Here w is the same as theta, the parameters.) But h(x) looks different for the logistic function.
      Our hypothesis also depends on the preferred outcome. Like you mentioned, h(x) looks different because we want to bound the output to [0,1]. Hope this helps.

  • @kaipingli-mh3mw
    @kaipingli-mh3mw 1 year ago

    thx

  • @LOGENDIRANVD
    @LOGENDIRANVD 1 year ago

    soooooo goood

  • @albertlei9249
    @albertlei9249 2 years ago +2

    Looks like in gradient ascent, if we replace the scalar learning rate alpha by the inverse Hessian H^{-1}, we get Newton's method.

    • @tomzhangg
      @tomzhangg 2 years ago +1

      Also remember that the partial derivative is replaced with the gradient vector, allowing for matrix multiplication.

    • @shubhamkumar-nw1ui
      @shubhamkumar-nw1ui 2 years ago

      Can you guys help me out? I can't get my head around the likelihood-of-theta thing... why is it equal to the product of the probabilities of y?

    • @ZeroManifold
      @ZeroManifold 2 years ago

      Newton's method uses a 2nd-order approximation whereas gradient descent uses a 1st-order approximation; the rationale is quite similar.
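
      A compact sketch of that comparison for logistic regression (my own code, using the usual CS229 quantities: gradient X^T(y - h), Hessian -X^T S X with S = diag(h(1 - h))):

      ```python
      import numpy as np

      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      def newton_step(theta, X, y):
          """One Newton update for the logistic log-likelihood (a sketch)."""
          h = sigmoid(X @ theta)
          grad = X.T @ (y - h)       # gradient of l(theta)
          S = h * (1 - h)
          H = -(X.T * S) @ X         # Hessian of l(theta), negative definite
          # theta := theta - H^{-1} grad; since H is negative definite here,
          # -H^{-1} grad is an ascent direction, so this maximizes l(theta).
          return theta - np.linalg.solve(H, grad)

      def gradient_ascent_step(theta, X, y, alpha=0.1):
          """Gradient ascent: the scalar alpha plays the role of -H^{-1}."""
          return theta + alpha * (X.T @ (y - sigmoid(X @ theta)))
      ```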

  • @Adnan_19946
    @Adnan_19946 4 months ago

    Where can we get the fabled lecture notes?

  • @rayugamax183
    @rayugamax183 1 year ago +1

    The links given in the description don't have the class notes he keeps mentioning and telling us to read from. Can anyone help? I mean, how do I get those?

    • @kinetic_kane9033
      @kinetic_kane9033 1 year ago

      Same question. I think the notes are only available to Stanford students because they're on the intranet.

    • @prathmeshmishra4357
      @prathmeshmishra4357 9 months ago

      @@kinetic_kane9033 cs229.stanford.edu/lectures-spring2022/main_notes.pdf

  • @sanatani_0228
    @sanatani_0228 1 year ago

    Is it better than his course on Coursera, or is it the same?

    • @stanfordonline
      @stanfordonline  1 year ago +6

      Hi there, thanks for your comment! The material on coursera is more introductory level and this lecture is from the graduate course CS229 and covers more advanced topics.

    • @anuragsahu4527
      @anuragsahu4527 7 months ago

      @@stanfordonline So which should I prefer?? This course or the one on Coursera??

  • @krishnaaa___03
    @krishnaaa___03 6 months ago

    Can anyone tell me how h_theta(x), which is equal to the sum over j = 0 to n of theta_j * x_j, can be written as theta(transpose) times x? Where does the transpose come from? If it were theta times x it would make sense somewhat... how is this transpose imposed on theta?
    (Obviously in linear regression.)

    • @ippilisaisugandhasri9952
      @ippilisaisugandhasri9952 2 months ago

      h_theta(x) gives a value which is a scalar, right? And a scalar can be obtained only if you multiply theta(transpose) by x, or x(transpose) by theta.
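
      A one-line check of this reply with made-up numbers: the sum and the transpose product are the same inner product, hence the same scalar.

      ```python
      import numpy as np

      theta = np.array([0.5, 2.0, -1.0])
      x = np.array([1.0, 3.0, 2.0])  # x_0 = 1 is the usual intercept trick

      print(sum(t * xj for t, xj in zip(theta, x)))  # 4.5
      print(theta @ x)                               # 4.5, i.e. theta^T x
      ```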

  • @codehere142
    @codehere142 10 months ago

    Can't find the derivation of the MLE

  • @mekuzeeyo
    @mekuzeeyo 2 years ago +1

    Then what is the difference between locally weighted regression and polynomial regression, in application?

    • @closingtheloop2593
      @closingtheloop2593 11 months ago

      I have the same question. I'm sure with more exposure it will become clear. Polynomial regression and locally weighted regression echo a similar concept to gain-scheduled control design for nonlinear systems. Same trick, different pony. I.e., how can we apply linear theory to nonlinear systems?

  • @studybuddy8307
    @studybuddy8307 7 months ago

    Please help me get the problem sets.

  • @ayushipanda7228
    @ayushipanda7228 6 months ago

    My boyfriend does this course very diligently 😊

  • @meeqvin
    @meeqvin 11 months ago

    In our case, is the parameter of the learning algorithm (theta) the cost of our house?

    • @shaksham.22
      @shaksham.22 7 months ago +1

      Nope, x is the cost of the house; the parameters are the weights of every feature at a given point x that help you identify the corresponding h(x).

  • @b14ckb0y9
    @b14ckb0y9 5 months ago

    I couldn't find Newton's method in the lecture notes. Can anybody tell me which page it's on?

    • @niayeshshafieian3290
      @niayeshshafieian3290 4 months ago

      Can I ask where you found the lecture notes?

  • @dr.owl_the_great
    @dr.owl_the_great 1 year ago +1

    Where do we get the problem sets for this course?

  • @haitematik5832
    @haitematik5832 11 months ago

    ML for Goa'ulds

  • @The-Thinking-Room
    @The-Thinking-Room 2 months ago

    If anyone has notes on it, please feel free to share the link here!

    • @k4gaurav
      @k4gaurav 1 month ago

      Do you still need this?

  • @ilpreterosso
    @ilpreterosso 2 years ago +2

    What happened at 17:45?

    • @sanspapyrus683
      @sanspapyrus683 2 years ago +1

      Probably a mic failure. Not sure though.

  • @fahimesokhangou3646
    @fahimesokhangou3646 2 years ago +4

    I have a question about locally weighted regression. Imagine we want to calculate studentized residuals. We have a different hat matrix (projection matrix) for each observation, and each hat matrix is k by k, where k is the number of observations in the span. Now I would like to calculate the leverage. How do I determine the leverage for each observation?

  • @ras4884
    @ras4884 1 year ago +5

    Someone, buy this guy better markers!

  • @rushinshah4344
    @rushinshah4344 1 year ago +1

    Where can I access the problem sets?

    • @iamnotsure237
      @iamnotsure237 2 months ago

      The CS229 website or GitHub.

  • @bouazizzied5086
    @bouazizzied5086 1 year ago

    Can someone tell me: after we derive the maximum likelihood estimate of theta, how do we use it to update all our parameters theta?

    • @gautamgirotra3572
      @gautamgirotra3572 1 year ago

      From the MLE setup we have the function that should be maximized, i.e. l(theta).
      Now use any optimization algorithm (like gradient descent or Newton's method) to optimize it.
      For example, using GD:
      theta(new) = theta(old) + alpha * (partial derivative of l(theta))
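
      A minimal sketch of that update loop for logistic regression (toy synthetic data; alpha and the iteration count are arbitrary choices of mine):

      ```python
      import numpy as np

      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      rng = np.random.default_rng(0)
      X = np.c_[np.ones(200), rng.normal(size=200)]
      true_theta = np.array([-0.5, 1.5])
      y = rng.binomial(1, sigmoid(X @ true_theta))  # synthetic labels

      theta, alpha = np.zeros(2), 0.5
      for _ in range(1000):
          grad = X.T @ (y - sigmoid(X @ theta))  # partial derivatives of l(theta)
          theta = theta + alpha * grad / len(y)  # "+" because we ascend to maximize l
      print(theta)  # roughly recovers true_theta
      ```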

  • @youssera6352
    @youssera6352 1 year ago

    Hi, I'm trying to follow this course in order to start reading papers for my PhD research/preparation, but I don't seem to understand most of the mathematical equations. Do I really need to understand them to achieve my goal, or do I just need to understand the concepts and memorize the formulas?

    • @Nett6799
      @Nett6799 1 year ago

      I have the same problem as you; what's your PhD research topic?

    • @jaimehernandezbascur8619
      @jaimehernandezbascur8619 1 year ago

      Hi, it's highly recommended to have a background in probability and statistics, and in linear algebra, before studying machine learning. Personally, I think a little knowledge of optimization is sufficient but not necessary.

  • @DagmawiAbate
    @DagmawiAbate 2 years ago

    Okay.

  • @namphan9281
    @namphan9281 1 year ago

    Now I know why my university teaches optimization techniques in the CS program 💀

  • @shwetatiwari7910
    @shwetatiwari7910 2 months ago +1

    Why do the students sound like Optimus Prime 😂

  • @aramuradyan2138
    @aramuradyan2138 1 year ago

    Where are the lecture notes?

    • @lohitaksha244
      @lohitaksha244 1 year ago +1

      Look up "cs229 autumn 2018" on Google; you should find the repository maxim5/cs229-2018-autumn.

  • @browndonkey
    @browndonkey 2 years ago +1

    Are the class notes he mentions throughout the course available anywhere for download?

    • @agustinsalazar9351
      @agustinsalazar9351 2 years ago +1

      In the link to the syllabus in the description there are some lecture notes available, although many are dead links.

    • @patrickt.4121
      @patrickt.4121 2 years ago +4

      Google it and you'll find them. First hit.

    • @shashankrana977
      @shashankrana977 1 year ago +2

      What I'm doing is following the current year's course page for the assignments, since those are mostly working links. The lecture notes can be found on the course page given in the Lecture 1 description.

    • @stephendiopter2289
      @stephendiopter2289 1 year ago

      Can you share the link to the current year's course page? @@shashankrana977

  • @McAwesomeReaper
    @McAwesomeReaper 1 year ago

    You know he's thought about just getting slightly shorter sleeves tailored, right?

  • @vientios_talisman
    @vientios_talisman 1 year ago

    1:02:02

  • @malfuriosstormrage5218
    @malfuriosstormrage5218 1 year ago

    What a concept. I just "wow"d when MLE was shown. Anyone here familiar with Power System State Estimation?

  • @vasudevrv7417
    @vasudevrv7417 1 year ago

    Can anyone explain to me where that x came from in the final equation of gradient ascent?

  • @Lalala_1701
    @Lalala_1701 8 months ago +1

    Why does every student sound like a giant 😂

    • @KevenDuan-cn
      @KevenDuan-cn 5 months ago +2

      It may be to protect the students' privacy, so the audio gets special treatment.

  • @Lionsboy86
    @Lionsboy86 2 years ago

    Otu yo

  • @notsodope7227
    @notsodope7227 1 year ago +1

    The way it started and the way it is going forward :/ So much math

  • @김연우-i6v
    @김연우-i6v 7 months ago

    These gambling site bosses really make all kinds of things lol