Lecture 3 | Learning, Empirical Risk Minimization, and Optimization

  • Published on 24 Nov 2024

Comments • 19

  • @chovaus • 5 months ago

    Best course on deep learning. It's now 2024 and I'm happy I found it again. Well done!

  • @ian-haggerty • 7 months ago

    Thank you again to Carnegie Mellon University & Bhiksha Raj. I find these lectures fascinating.

  • @ErolAspromatis • 3 years ago +5

    The professor with the sword is the Conan of Machine Learning!

  • @devops8729 • 4 years ago +7

    Thanks for sharing knowledge. Amazing content and Professor.

  • @ahnafsamin3777 • 2 years ago +3

    The teacher seems to be so mean to his students! Quite surprised to see this at CMU!

  • @sansin-dev • 3 years ago +1

    What is a good textbook / reference book to follow along with this lecture?

  • @ZapOKill • 3 years ago +1

    Three minutes into the lecture, and by now I would have left twice... and watched it on YouTube, where I can use my phone.

  • @jijie133 • 5 years ago +2

    Great!

  • @anuraglahon8572 • 5 years ago +5

    I want to attend the class

  • @mastercraft117 • 1 year ago

    Does someone know where I can get the assignments for this class?

  • @CharlesVanNoland • 2 years ago +1

    240 students didn't even show up?
    These are the people developing our operating systems, our web-stack platforms, our applications and software. They're all lazy bums who aren't even passionate about their field the way people were 20 years ago. Software used to be written by people who wanted to code whether they were rich or poor. It was in their blood. Now 90% of the industry is flooded with people who want the Sillyclown Valley lifestyle but don't care for the work. The industry only exists because of people who loved the work; the lifestyle was just a bonus.

  • @insoucyant • 3 years ago

    Thank You.

  • @pratoshraj3679 • 4 years ago +3

    Wish I was his student

  • @bhargavram3480 • 4 years ago +1

    Dear Professor,
    At around timestamp 57:25, we go from the integral to an average sum. On what basis are we substituting P(X) = 1/N? What justifies the assumption that the PDF of X is uniform?

    • @paulhowrang • 4 years ago +1

      There is no basis for that, but do you want to assume a distribution over the data? It is a somewhat parsimonious approach: when we do not know the distribution, take the least "informative" one, i.e., the uniform distribution. This way we assume no prior information about the data. But if you have a prior, feel free to use it!

    • @lusvd • 2 years ago

      We are not substituting P(X) = 1/N.
      On slide 109, in the last equation (in red), the law of large numbers (LLN) states that the RHS converges to the LHS as N -> infinity.
      In other words, we do not know P(X) and we don't need to, because we can estimate the expected value using the LLN.
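
    • Spelled out in LaTeX, the step being discussed (slide 109) is the standard empirical-risk approximation. A minimal sketch, assuming the lecture's usual notation (f(X; W) for the network's output, g(X) for the target function, div(·, ·) for the divergence; the exact symbols on the slide may differ):

          \mathbb{E}_{X \sim P}\big[\mathrm{div}(f(X;W),\, g(X))\big]
            = \int \mathrm{div}(f(X;W),\, g(X))\, P(X)\, dX
            \approx \frac{1}{N} \sum_{i=1}^{N} \mathrm{div}\big(f(x_i;W),\, g(x_i)\big),
          \qquad x_1, \dots, x_N \overset{\text{i.i.d.}}{\sim} P(X)

      The 1/N is not a substitution P(X) = 1/N; it is the averaging weight of a Monte Carlo estimate over samples already drawn from P(X). By the law of large numbers, the average converges to the expectation as N -> infinity, and the right-hand side is exactly the empirical risk of the lecture's title.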

  • @pranjalgupta2072 • 5 years ago +3

    Hey man, satisfying your personal ego over a YouTube dislike is not cool.