Lecture 2: Image Classification

  • Published 17 Dec 2024

Comments • 28

  • @conradwiebe7919
    @conradwiebe7919 4 years ago +106

    If you are reading this you are the ten percent (as of the time of writing this) that didn't up and leave after the intro. I hope to see you all at lecture 22.

    • @m-aun
      @m-aun 4 years ago +2

      you want to do this course together?

    • @conradwiebe7919
      @conradwiebe7919 4 years ago +1

      I'm really just skimming these to better form intuition. I'm not sure what you mean by do the course together, I'd be happy to discuss anything in the lectures but I'm not going on to do any projects with computer vision out of this.

    • @m-aun
      @m-aun 4 years ago

      @@conradwiebe7919 I was planning to do all the HWs/ Assignments given on the course website along with the lectures

    • @conradwiebe7919
      @conradwiebe7919 4 years ago +2

      Didn't even see they had those lol, Imma still stick with my original plan tho. I'm trying a more organic entrance to ml. I made some really rudimentary search algos like queue, stack, greed, and astar and have now started generating mazes. I want to try and train something that looks like astar search. It's a long way from deep learning but I don't think I can make that leap and still know everything that's going on. Maybe I'll join you a month from now, I'd still be happy to discuss the topics with you.

    • @m-aun
      @m-aun 4 years ago

      @@conradwiebe7919 then you should start with the ml course taught by Andrew Ng

  • @guavacupcake
    @guavacupcake 4 years ago +17

    Much better audio, thanks!

  • @xanderlewis
    @xanderlewis 1 year ago +1

    25:22 He just described a well-known exam technique beloved of students everywhere!

  • @terryliu3635
    @terryliu3635 7 months ago +3

    Great lectures!! Pls keep posting the latest series! Thank you!!

  • @huesOfEverything
    @huesOfEverything 3 years ago +2

    I like how he says "This is WRONG... so bad... you should not do this!" It cracks me up for some reason

  • @raphaelmourad3983
    @raphaelmourad3983 4 years ago +7

    Very good teaching of computer vision! Thanks Justin Johnson for these very nice lectures.

  • @zhaobryan4441
    @zhaobryan4441 10 months ago

    He taught the essentials in a great way

  • @andrewstang8590
    @andrewstang8590 9 months ago +1

    Hi
    I thought the MNIST dataset had 60k training images. Or?

  • @DariaShcherbak
    @DariaShcherbak 5 months ago

    Thank you for the lecture! Greetings from Ukraine)

  • @veggeata1201
    @veggeata1201 4 years ago +6

    For the nearest neighbor classifier isn't training time going to be O(n)? If we are going to store pointers for each training example, we still have to iterate over the number of training examples, which is n.

    • @bhavin_ch
      @bhavin_ch 4 years ago +11

      If you have to iterate over the elements, yes. If you just copy the list reference, it's probably a single pointer assignment
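
The thread above is distinguishing O(1) training (storing a reference to the data) from O(n) prediction (comparing a query against every stored example). A minimal NumPy sketch of that idea, with illustrative names, not the course's actual code:

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # "Training" just stores references to the arrays: O(1),
        # no per-example work, as noted in the comments above.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # All the work happens at test time: each query is compared
        # against every stored training example, O(n) per query.
        preds = np.empty(len(X), dtype=self.y_train.dtype)
        for i, x in enumerate(X):
            dists = np.abs(self.X_train - x).sum(axis=1)  # L1 distance
            preds[i] = self.y_train[np.argmin(dists)]
        return preds
```

If `train` instead copied the data element by element, that step would indeed be O(n), which is the caveat raised in the question.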

  • @훼에워어-u1n
    @훼에워어-u1n 1 year ago

    thanks! such an informative video

  • @adarshtiwari6374
    @adarshtiwari6374 4 years ago +2

    14:06

  • @randomsht-cy7we
    @randomsht-cy7we 4 months ago

    That "hot dog / not hot dog" bit was from Silicon Valley. The professor watches the show :)

  • @mahmoudatiaead7347
    @mahmoudatiaead7347 1 year ago

    Does anyone know how I can get the homework?

    • @sampathkovvali6255
      @sampathkovvali6255 1 year ago

      Assignments? Check out the course page linked in the description

  • @ДаниилГусев-с9л
    @ДаниилГусев-с9л 2 years ago

    Well, maybe I'm missing something, but I totally disagree with the train-valid-test idea as Justin described it. We train a model on the training data and evaluate on the validation set to change the model's behavior. That's correct; however, it does not mean we should look at the test set only once at the very end of our research. We should evaluate our model on the test set at least several times, and if the model's performance on the test set is very different from its performance on the validation set, it means something was done very wrong, e.g. the splitting strategy. Of course, using the test set influences our decisions, but how much? Can you say that evaluating the finished model on the test set really spoils everything? I doubt that.

    • @sampathkovvali6255
      @sampathkovvali6255 1 year ago +1

      Nope, your model is not allowed even a peek at the test set during tuning. You, as the model designer, will also overfit. 😂
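
The protocol under debate can be sketched as a simple split helper: tune hyperparameters against the validation split, and hold the test split out until tuning is finished. This is a hypothetical illustration, not the course's assignment code:

```python
import numpy as np

def split_data(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    # Shuffle once, then carve off disjoint validation and test sets.
    # The test split is meant to be touched only after all tuning is done,
    # since every look at it leaks information into your design choices.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return ((X[train_idx], y[train_idx]),
            (X[val_idx], y[val_idx]),
            (X[test_idx], y[test_idx]))
```

A large gap between validation and test accuracy can indicate a bad split, as the comment argues, but it can also simply mean the hyperparameters were overfit to the validation set, which is the failure mode the lecture warns about.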

  • @이루다학부졸업전기전
    @이루다학부졸업전기전 4 years ago

    27:02