Computer Science Engineering
  • 59
  • 218,625
5.1.3 Backpropagation Intuition by Andrew Ng
Neural Networks: Learning
Machine Learning - Stanford University | Coursera
by Andrew Ng
Please visit Coursera site:
www.coursera.org
Views: 2,370

Videos

5.1.2 Backpropagation Algorithm by Andrew Ng
9K views · 5 years ago
Neural Networks: Learning. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org
5.1.1 Cost Function by Andrew Ng
1.8K views · 5 years ago
Neural Networks: Learning. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org
Reactions from HIM IT-BPM Stakeholders and Industry Players: HIMSCon Philippines 2017
37 views · 7 years ago
Reactions from HIM IT-BPM Stakeholders and Industry Players: HIMSCon Philippines 2017. Healthcare is embracing the change that comes with mobility and intelligence, such as IoT, deep learning, and other emerging technologies, to provide higher-quality healthcare to people.
4.3.3 Neural Networks Multiclass Classification by Andrew Ng
1.6K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.3.2 Neural Networks Examples and Intuitions II by Andrew Ng
1K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.3.1 Neural Networks Examples and Intuitions I by Andrew Ng
2.2K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.2.2 Model Representation II by Andrew Ng
993 views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.2.1 Model Representation I by Andrew Ng
1.3K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.1.2 Neurons and the Brain by Andrew Ng
1.2K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
4.1.1 Non-linear Hypotheses by Andrew Ng
1.3K views · 7 years ago
Neural Networks: Representation. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
3.4.4 Regularized Logistic Regression by Andrew Ng
2K views · 7 years ago
Solving the Problem of Overfitting. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
3.4.3 Regularized Linear Regression by Andrew Ng
2.3K views · 7 years ago
Solving the Problem of Overfitting. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
3.4.2 Cost Function (Regularization) by Andrew Ng
2.4K views · 7 years ago
Solving the Problem of Overfitting. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
3.4.1 The Problem of Overfitting by Andrew Ng
8K views · 7 years ago
Solving the Problem of Overfitting. Machine Learning - Stanford University | Coursera, by Andrew Ng. Please visit the Coursera site: www.coursera.org. Learn Machine Learning for free.
3.3.1 Multiclass Classification One vs all by Andrew Ng
53K views · 7 years ago
3.2.3 Advanced Optimization by Andrew Ng
2.1K views · 7 years ago
3.2.2 Simplified Cost Function and Gradient Descent by Andrew Ng
2.6K views · 7 years ago
3.2.1 Cost Function by Andrew Ng
4.8K views · 7 years ago
3.1.3 Decision Boundary by Andrew Ng
8K views · 7 years ago
3.1.2 Hypothesis Representation by Andrew Ng
2.6K views · 7 years ago
3.1.1 Classification by Andrew Ng
9K views · 7 years ago
2.2.2 Normal Equation Noninvertability by Andrew Ng
1.1K views · 7 years ago
2.2.1 Normal Equation by Andrew Ng
8K views · 7 years ago
2.1.5 Features and Polynomial Regression by Andrew Ng
4.6K views · 7 years ago
2.1.4 Gradient Descent in Practice II Learning Rate by Andrew Ng
8K views · 7 years ago
2.1.3 Gradient Descent in Practice I Feature Scaling by Andrew Ng
5K views · 7 years ago
2.1.2 Gradient Descent for Multiple Variables by Andrew Ng
7K views · 7 years ago
2.1.1 Multiple Features by Andrew Ng
965 views · 7 years ago
1.2.5 Non-Photorealism by John Hart
37 views · 7 years ago

Comments

  • @sayanroy8095 · 25 days ago

    It helped me, thanks

  • @jbperez808 · 3 months ago

    Octave is even more high-level and mathematically oriented than Python, meaning such ideas can be expressed much more concisely. It's a bit like a free Matlab or Wolfram.

  • @Phi_AI · 6 months ago

    This is an implementation of linear regression from scratch in NumPy only, with an in-depth explanation of key concepts like the cost function and gradient descent: th-cam.com/video/wxCQxZKo4hU/w-d-xo.html

  • @ShahramDerakhshandeh-sf7ld · 6 months ago

    That's great. ❤

  • @DeltaJes-co8yu · 7 months ago

    I cannot follow the accent, unfortunately, and even the CC is not working.

  • @patriots7400 · 7 months ago

    Why did you shorten your last name? I want to cite you!

  • @aihsdiaushfiuhidnva · 9 months ago

    This is very good! But where did you get Andrew's presentation?

  • @ontarioinctransport8912 · 10 months ago

    First comment enjoy

  • @adityavardhanjain · 10 months ago

    I wonder how the complexity of the model might affect the overfitting (or underfitting?)

    • @utkarshdubey4983 · 16 days ago

      The higher the order/degree of the model equation, the higher the chances of overfitting.
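
The reply above can be checked with a short sketch (hypothetical data, NumPy only): for nested least-squares fits, training error can only go down as the polynomial degree grows, which is exactly why high-degree hypotheses are prone to overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from an underlying quadratic (hypothetical data).
x = np.linspace(-1, 1, 20)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, size=x.shape)

# Training error shrinks as the polynomial degree grows: a higher-degree
# hypothesis can bend to fit the noise itself.
errors = {}
for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)
    errors[degree] = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, round(errors[degree], 4))
```

On a held-out set the degree-9 error would typically rise again, which is the overfitting the thread describes.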

  • @khoaphamquocanh4906 · 1 year ago

    Where can I watch this old course? Thanks

  • @betafishie · 1 year ago

    first

  • @sharedhardware · 1 year ago

  • @ryanwang9699 · 1 year ago

    Great video!

  • @abdelrahmane657 · 2 years ago

    Thank you so much. It’s been very useful. 🙏👏

  • @helenareveillere338 · 3 years ago

    Hello, do you know if I could listen to the sound of the MANIAC somewhere on the internet? I'm a sound editor working on an audio documentary about mathematics and literature, and I need to recreate the sound of the MANIAC. Thanks for your answer. Helena

  • @datarubik · 3 years ago

    Great video, thank you so much, professor.

  • @shivani404sheth4 · 3 years ago

    So nicely explained. Thank you!

  • @reachDeepNeuron · 4 years ago

    Instead of using superscript and subscript terms, had it been explained by starting with the gist of what this algorithm does and then using the math with superscripts, it would help hold the audience and motivate them to continue watching.

  • @shahadp3868 · 4 years ago

    Nicely done, sir... what about one vs one?

  • @akashprabhakar6353 · 4 years ago

    I did not get one thing... Suppose for a classification we get the max probability; then we would be classifying only one class separately and the rest two as another... but how are we classifying all three separately?
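
A sketch of how one-vs-all separates all K classes (toy, hypothetical data): each binary classifier is trained on "class k vs everything else", and at prediction time every example is scored by all K classifiers and assigned to the one with the highest probability, so all three classes are recovered, not just one.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, y, lr=0.5, steps=2000):
    """Plain gradient descent on the logistic cost for ONE 'class vs rest' problem."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
        theta -= lr * grad
    return theta

# Tiny hypothetical 2-D data set with 3 classes (first column is the bias term).
X = np.array([[1.0, 0.0, 0.0], [1.0, 0.2, 0.1],   # class 0
              [1.0, 5.0, 0.0], [1.0, 4.8, 0.3],   # class 1
              [1.0, 0.0, 5.0], [1.0, 0.3, 4.9]])  # class 2
y = np.array([0, 0, 1, 1, 2, 2])

# One-vs-all: train K separate binary classifiers...
thetas = [train_binary(X, (y == k).astype(float)) for k in range(3)]

# ...then classify EVERY example by the classifier with the highest probability.
probs = np.column_stack([sigmoid(X @ t) for t in thetas])
pred = probs.argmax(axis=1)
print(pred)
```

The argmax over the K probabilities is what assigns each example its own class.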

  • @samueldushimimana3831 · 4 years ago

    well done Andrew

  • @nawabengineering4388 · 4 years ago

    Well explained, but why is it called a cost function? And taking 1/2 is not clear. Why, and why not take the square root?

    • @ditdit-dahdah-ditdit-dah · 9 months ago

      A cost function is also called a loss function; the two are synonyms. Division by m or 2m is interchangeable: what we really care about is a model that produces the least error, not the value of the loss function itself. Regression cost functions come in three common types: mean error, mean squared error, and mean absolute error. Why so many? Because a data set may have negative and positive errors; taking the mean directly may let them cancel out, and squaring can be troublesome if you have outliers. In these videos Andrew can be seen using all three for regression. Note: it's not the only parameter needed to conclude a model isn't good.

    • @sjorsvanderstelt2024 · 5 months ago

      A quadratic polynomial is much easier to work with than a square root, and since it has its minimum at the same point, we can work with the quadratic polynomial. The 1/2 is there so that the derivative of the cost function is weighted with 1.

  • @nawabengineering4388 · 4 years ago

    Everybody in this ML field points you to Python; you are the first one who referred to Octave. Why is this so?

  • @elbrenantonio5256 · 5 years ago

    Any video for multiclass entropy and entropy? Please show a sample calculation. Thanks.

  • @bismeetsingh352 · 5 years ago

    Don't you have legal issues for copying content from Coursera?

    • @thesteve0345 · 4 years ago

      I am pretty sure coursera copied from his content.

    • @GelsYT · 4 years ago

      he is coursera

    • @jaideepsingh7955 · 4 years ago

      @@GelsYT hahaha true..

  • @randomcowgoesmoo3546 · 5 years ago

    Thanks Andrew Yang, I'll definitely vote for you.

    • @LouisDuran · 7 months ago

      wrong dude, the other guy wants to give you UBI. This guy wants to give you OVA

  • @ZombieLincoln666 · 5 years ago

    audio quality is shit

  • @swathys7818 · 5 years ago

    Thank you for the great explanation, Sir!

  • @sanketneema286 · 5 years ago

    Thank you, sir

  • @truettbloxsom8484 · 5 years ago

    Just wanted to say these videos are amazing! thank you!

  • @dream191919 · 5 years ago

    There is an error in the example Andrew used here to demonstrate the normal equation. X is a 4-by-5 matrix, which makes the system underdetermined and results in X-transpose times X having no inverse. So the normal equation cannot be computed directly.

    • @bonipomei · 3 years ago

      X is 4x5 and X(transpose) is 5x4. Therefore, X(transpose)*X = 5x4 * 4x5 which results in a 5x5 matrix, which has an inverse.

    • @brownishcookie · 2 months ago

      @@bonipomei Very late reply, but being a square matrix doesn't mean it has an inverse.

    • @MahmoudYosri · 1 month ago

      @@brownishcookie yes especially if its determinant is 0
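
The thread above can be checked directly in NumPy (hypothetical random matrix): with 4 examples and 5 parameters, X-transpose times X is square (5×5) but rank-deficient, so it has no inverse; a pseudoinverse, which is what Octave's pinv computes, still produces a usable theta.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))   # 4 examples, 5 parameters: underdetermined
y = rng.normal(size=4)

G = X.T @ X                      # 5x5, but rank(G) <= rank(X) <= 4
print(np.linalg.matrix_rank(G))  # square yet singular: no inverse exists

# The pseudoinverse still yields a solution that satisfies all 4 equations.
theta = np.linalg.pinv(G) @ X.T @ y
print(np.allclose(X @ theta, y))  # True
```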

  • @IamPdub · 5 years ago

    Great video, can you make a video on Stemming with Multiclass Classification?

  • @heller4196 · 5 years ago

    Get this man a good Camera and Mic.

    • @namangupta8609 · 3 years ago

      Sending you the bill...

    • @abdelrahmane657 · 2 years ago

      @@namangupta8609 Did you receive the bill? Or will you be the only youtuber watching this video?

  • @Anveshana837 · 5 years ago

    Great video

  • @punkntded · 6 years ago

    What does theta represent?

    • @ofathy1981 · 6 years ago

      learning rate

    • @ByteSizedBusiness · 6 years ago

      @@ofathy1981 Alpha is the learning rate in gradient descent... theta is a parameter, like the weights in a NN.

    • @MelvinKoopmans · 5 years ago

      @@ofathy1981 Theta does not represent the learning rate, instead it represents the parameters of the model (e.g. the weights). So P(y | x; θ) translates to English as "The probability of *y* given *x* , parameterized by *θ* ".

    • @amirdaneshmand9743 · 3 years ago

      Those are the parameters of the logistic classifier, which is trained separately for each class.
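
Summing up the thread: theta is the model's parameter vector, used by the hypothesis h(x) = sigmoid(theta^T x), while alpha is the learning rate that only scales the update step. A minimal sketch with hypothetical data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    """theta: the PARAMETERS being learned (like NN weights).
    alpha: the LEARNING RATE, a fixed step size controlling the update."""
    grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
    return theta - alpha * grad

# Hypothetical toy data: one step changes theta; alpha never changes.
X = np.array([[1.0, 0.5], [1.0, -1.5], [1.0, 2.0]])
y = np.array([1.0, 0.0, 1.0])
theta = np.zeros(2)
alpha = 0.1
theta = gradient_step(theta, X, y, alpha)
print(theta.shape)  # (2,) - one parameter per feature, including the bias
```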