EfficientNet Explained!

  • Published on Jan 19, 2025

Comments • 39

  • @mtmotoki2
    @mtmotoki2 4 years ago +1

    I have a hard time reading papers because my English isn't very good, but you've been very helpful in explaining it in your videos. Thank you.

  • @daesoolee1083
    @daesoolee1083 4 years ago

    Interesting. Building a CNN model always depended on my intuition from existing CNN models. I never questioned the significance of each scale-up method. The analysis by disentangling is very helpful to the community. Excellent.

    • @roymarley5178
      @roymarley5178 3 years ago

      I know it is quite off topic but do anyone know of a good place to stream new movies online?

    • @kogmawgaming
      @kogmawgaming 2 months ago

      @@roymarley5178 fed

  • @anonyme103
    @anonyme103 4 years ago +2

    Clean, simple, and great explanation! Thanks

  • @schneeekind
    @schneeekind 4 years ago +2

    1:07 Is the image right? Shouldn't b) and d) swap their figures? How does higher resolution result in deeper blocks?

  • @easonlyon2217
    @easonlyon2217 3 years ago

    Looking forward to EfficientNet-V2 paper!

  • @MaryamSadeghi-AI
    @MaryamSadeghi-AI 2 years ago

    Great explanations, thank you!

  • @konataizumi5829
    @konataizumi5829 3 years ago +1

    I feel like the grid search to find alpha, beta and gamma was not elaborated on enough in the paper. Does anyone understand this more deeply, or how one could reproduce it?
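The paper only sketches the search: fix the compound coefficient at phi = 1, then do a small grid search over alpha, beta, gamma subject to alpha * beta^2 * gamma^2 ≈ 2. A minimal Python sketch of what that could look like (the grid step, the tolerance, and the `train_and_eval` callback are assumptions for illustration, not details from the paper):

```python
import itertools

def grid_search(train_and_eval, step=0.05, tol=0.1):
    """Toy version of the paper's small grid search at phi = 1.

    train_and_eval(alpha, beta, gamma) is a placeholder that trains the
    scaled baseline and returns validation accuracy.
    """
    alphas = [1.0 + i * step for i in range(1, 21)]  # depth multipliers > 1
    betas  = [1.0 + i * step for i in range(1, 21)]  # width multipliers > 1
    gammas = [1.0 + i * step for i in range(1, 21)]  # resolution multipliers > 1
    best = None
    for a, b, g in itertools.product(alphas, betas, gammas):
        # Keep only combinations that roughly double FLOPS.
        if abs(a * b**2 * g**2 - 2.0) > tol:
            continue
        acc = train_and_eval(a, b, g)
        if best is None or acc > best[0]:
            best = (acc, a, b, g)
    return best
```

With a real `train_and_eval` this is very expensive, which is presumably why the paper only runs it once on the small baseline and then scales the result with phi.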

  • @selfdrivingcars3605
    @selfdrivingcars3605 4 years ago

    Great intuitive explanation! Thank you!!

  • @shreymishra646
    @shreymishra646 1 year ago

    Maybe I'm a bit slow on this, but there is a mistake at 4:43, and I checked the paper too: w should be equal to -0.07, not 0.07. If, as per the video, the exponent is 0.07, then say the FLOPS of the resulting model are half of the target FLOPS: the objective becomes ACC * (0.5)^0.07 ≈ ACC * 0.95, which is less than ACC, penalizing the model (since we need to maximize this, right?). That's wrong; it should actually reward such a model. If instead we keep w equal to -0.07, the objective becomes ACC * (0.5)^-0.07 ≈ ACC * 1.05.
    I was a bit confused at the beginning of the video since I hadn't read the paper first, but now I'm quite certain of it.
    I'm quite surprised no one else noticed it!
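The sign question above is easy to check numerically. A quick sketch of the MNAS-style objective from the paper (the accuracy and FLOPS figures here are made up for illustration):

```python
def objective(acc, flops, target_flops, w=-0.07):
    # ACC(m) * (FLOPS(m) / TARGET)^w, with w = -0.07 as in the paper.
    return acc * (flops / target_flops) ** w

# A model at half the target FLOPS is rewarded (factor ~1.05) ...
light = objective(0.80, flops=0.5, target_flops=1.0)
# ... while one at double the target FLOPS is penalized (factor ~0.95).
heavy = objective(0.80, flops=2.0, target_flops=1.0)
```

With w = +0.07 the two factors would swap, penalizing the cheaper model, which is exactly the inconsistency the comment points out.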

  • @guruprasadsomasundaram9273
    @guruprasadsomasundaram9273 4 years ago

    What a lovely summary, thanks!

  • @faridalijani1578
    @faridalijani1578 4 years ago +1

    224*224 image resolution (r=1.0) --> 560*560 image resolution (r=2.5)

  • @omkardhekane4404
    @omkardhekane4404 3 years ago

    Well explained. Thank you!

  • @maoztamir1980
    @maoztamir1980 3 years ago

    great explanation! Thanks

  • @xxRAP13Rxx
    @xxRAP13Rxx 3 years ago

    At 2:37, shouldn't 2^n more computational resources imply a beta^(2n) and a gamma^(2n) increase, given the constraint alpha * beta^2 * gamma^2 = 2?

    • @xxRAP13Rxx
      @xxRAP13Rxx 3 years ago

      Also, I'm looking over the actual paper. The chart at 5:32 is a bit different from what I'm seeing. Everything's about the same but every BOLDED Top1 Acc. entry (recorded from their own architecture) has been boosted up a few percentage points to outshine their rival counterparts. I wonder if they updated the paper since you posted this video, or maybe they figure it best to fudge the numbers since this chart is located on the front page of the paper.
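On the 2^n question above: the paper raises each base coefficient to the compound coefficient phi (depth alpha^phi, width beta^phi, resolution gamma^phi), so total FLOPS grow by (alpha * beta^2 * gamma^2)^phi ≈ 2^phi. A small numeric check, using the alpha, beta, gamma values reported in the paper:

```python
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # searched values reported in the paper

def flops_factor(phi):
    depth = ALPHA ** phi   # depth multiplies FLOPS linearly
    width = BETA ** phi    # width multiplies FLOPS quadratically
    res = GAMMA ** phi     # resolution multiplies FLOPS quadratically
    return depth * width ** 2 * res ** 2

# flops_factor(1) is about 1.92, roughly a doubling, and flops_factor(n)
# grows like flops_factor(1)**n, so 2^n more resources correspond to
# raising the same base coefficients to the power n.
```

So the exponents beta^(2n) and gamma^(2n) do appear, but as part of the single compound coefficient phi = n rather than as separately chosen multipliers.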

  • @IvanGoncharovAI
    @IvanGoncharovAI 5 years ago +2

    Great explanation!

  • @AbdullahKhan-if8fn
    @AbdullahKhan-if8fn 5 years ago +5

    Very well explained. Thanks!

  • @svm_user
    @svm_user 4 years ago +1

    Thanks, great explanation.

  • @Ftur-57-fetr
    @Ftur-57-fetr 4 years ago

    Superb explanation!!!!

  • @nuralifahsalsabila9057
    @nuralifahsalsabila9057 23 days ago

    Hi, can you make a video explaining EfficientNet-Lite?

  • @BlakeEdwards333
    @BlakeEdwards333 5 years ago +2

    Thanks Henry!

  • @anticlementous
    @anticlementous 4 years ago

    Really great video! One thing I don't understand, though, is how the scaling works exactly. Are the network dimensions scaled during training while keeping the weights from the smaller scale, or is the entire network retrained from scratch at each scaling? Also, if I do transfer learning with a model pre-trained on EfficientNet, would I get the benefits of the reduced network size without having to run through the same scaling process?

  • @gvlokeshkumar
    @gvlokeshkumar 4 years ago +2

    Thank you so much!!!

  • @masoudparpanchi505
    @masoudparpanchi505 4 years ago

    A question about the equation you stated: alpha * beta^2 * gamma^2 = 2.
    When I increase alpha, should I decrease the other two variables?
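Yes: with the product pinned at 2, raising alpha forces the other two down. A tiny illustration, assuming for simplicity that beta and gamma move together (an assumption of this sketch, not something the paper imposes):

```python
def beta_for_alpha(alpha):
    # With alpha * beta**2 * gamma**2 = 2 and beta == gamma,
    # beta**2 * gamma**2 = 2 / alpha, so beta = (2 / alpha) ** 0.25.
    return (2.0 / alpha) ** 0.25
```

For example, `beta_for_alpha(1.1)` is larger than `beta_for_alpha(1.2)`: spending more of the FLOPS budget on depth leaves less for width and resolution.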

  • @divyamgoel8038
    @divyamgoel8038 4 years ago +3

    Hi! I think you got the resolution scaling wrong. They don't change the input dimensions (from say 224 to 360) but rather increase the number of convolution filters in every convolution, effectively increasing the number of feature maps of the low-level representation of the input at any given point in the model.

  • @tashin8312
    @tashin8312 4 years ago

    I have a question. For my custom dataset I used EfficientNet B0-B5, and the results got worse each time I used a more complex model, i.e. B0 gave the best outcome while B5 gave the worst. Image sizes were 2000x1500. What could be the reason for that?

    • @tushartiwari7929
      @tushartiwari7929 2 years ago

      Did you find the reason for that?

    • @l.perceval9460
      @l.perceval9460 5 months ago

      It depends on the scale of your data!

  • @ibropwns
    @ibropwns 5 years ago +2

    Thanks a lot!

  • @masoudparpanchi505
    @masoudparpanchi505 4 years ago

    good explanation

  • @anefuoche1053
    @anefuoche1053 2 years ago

    thank you

  • @rahuldeora5815
    @rahuldeora5815 4 years ago +1

    MobileNetV2 and EfficientNet video

  • @miremax0
    @miremax0 4 years ago

    Thank you very much!