Autoencoders - EXPLAINED

  • Published Sep 11, 2024
  • Data around us, like images and documents, is very high-dimensional. Autoencoders can learn a simpler representation of this data, which can be used in many ways:
    - fast data transfers across a network
    - Self driving cars (Semantic Segmentation)
    - Neural Inpainting: Completing sections of an image, or removing watermarks
    - Latent Semantic Hashing: Clustering similar documents together.
    And the list of applications goes on.
    Clearly, autoencoders can be useful. In this video, we are going to understand their types and functions.
    For more content, hit that SUBSCRIBE button, ring that bell.
    Subscribe now for more awesome content: www.youtube.com...
    patreon: / codeemporium
    REFERENCES
    [1] Autoencoders: www.deeplearni...
    [2] Sparse autoencoder (last part): web.stanford.e...
    [3] Why are sparse encoders sparse?: www.quora.com/...
    [4] KL Divergence: en.wikipedia.o...
    [5] Semantic Hashing: www.cs.utoront...
    [6] Variational Autoencoders: jaan.io/what-i...
    [7] Xander’s video on Variational AutoEncoders (Arxiv Insights): • Variational Autoencoders
    CLIPS
    [1] Karol Majek’s Self driving car with RCNN: • Mask RCNN - COCO - in...
    [2] Auto encoder images: www.jeremyjord...
    [3] Semantic Segmentation with Autoencoders: github.com/ara...
    [4] Neural Inpainting paper: arxiv.org/pdf/...
    [5] GAN results: • Progressive Growing of...
    #machinelearning #deeplearning #neuralnetwork #ai #datascience
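The description's core claim, that an autoencoder squeezes high-dimensional data through a low-dimensional bottleneck and learns to reconstruct it, can be sketched in a few lines. The toy below is not from the video; the data, layer sizes, and learning rate are illustrative assumptions, and the encoder/decoder are purely linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "high-dimensional" data: 200 points in 8-D that secretly live
# on a 2-D plane, so a 2-unit bottleneck can represent them well.
codes = rng.normal(size=(200, 2))
X = codes @ rng.normal(size=(2, 8))    # shape (200, 8)

# Encoder W_e (8 -> 2) and decoder W_d (2 -> 8), small random init.
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

def loss(X, W_e, W_d):
    """Mean squared reconstruction error."""
    return np.mean((X @ W_e @ W_d - X) ** 2)

initial = loss(X, W_e, W_d)
lr, n = 0.01, X.size
for _ in range(2000):
    Z = X @ W_e                        # 2-D latent codes (the bottleneck)
    R = Z @ W_d - X                    # reconstruction residual
    grad_d = (2.0 / n) * Z.T @ R       # gradient of the mean squared error
    grad_e = (2.0 / n) * X.T @ R @ W_d.T
    W_d -= lr * grad_d
    W_e -= lr * grad_e
final = loss(X, W_e, W_d)
print(initial, final)                  # reconstruction error shrinks
```

The same loop with nonlinear activations and more layers gives the deep autoencoders discussed in the video.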

Comments • 28

  • @atifadib
    @atifadib 3 years ago +11

    This is the most underrated channel on YouTube.

  • @ispeakforthebeans
    @ispeakforthebeans 5 years ago +22

    Why does this guy not have a million subscribers?

    • @CodeEmporium
      @CodeEmporium  5 years ago +13

      I ask myself the same question every day

    • @UgurkanAtes
      @UgurkanAtes 3 years ago +2

      @@CodeEmporium you don't need them though

    • @senx8758
      @senx8758 3 years ago +1

      @@CodeEmporium there are not too many ML engineers :)

    • @SudhanvaDixit
      @SudhanvaDixit 3 months ago +2

      Quality of the followers over quantity ;)

  • @hamzanaeem4838
    @hamzanaeem4838 3 years ago +3

    You have a great understanding of this particular domain, but the way to get millions of subscribers is to explain in depth why there is only 1 bottleneck and not 2, how the encoder compresses data, and the workings behind it, each and every thing, at a student level so that it is very easy to understand. But I appreciate your stuff. Keep it up, man!

  • @mohammedfareedh
    @mohammedfareedh 4 years ago +2

    Man, your voice is so clean and pleasant.

    • @CodeEmporium
      @CodeEmporium  4 years ago +2

      Thanks for loving my voice

  • @neuodev
    @neuodev 2 years ago +2

    This is an awesome explanation 🙌.
    One request: please don't show the subscribe button every 2 minutes; it is very confusing. Everything except this looks good 👍. Thanks!

  • @seyha3447
    @seyha3447 5 years ago +4

    What great work! Thanks for the videos. Btw, can you make a video about conv-deconv networks? How are they different from autoencoders?

  • @cesarfaustoperez6372
    @cesarfaustoperez6372 3 years ago +1

    Are autoencoders and encoder-decoders the same thing?

  • @ashutoshshinde5267
    @ashutoshshinde5267 3 years ago +1

    Great explanation!! Thank you!

  • @landoftheunknown116
    @landoftheunknown116 1 year ago

    Is n_h at 6:43 the number of hidden layers or the number of neurons in the hidden layer? Also, are we considering only 1 hidden layer?
    A very nice explanation though! I am preparing for an interview and this is like a gold mine for me!

  • @wlxxiii
    @wlxxiii 5 years ago +3

    would love videos on NLP! :)

    • @CodeEmporium
      @CodeEmporium  5 years ago +2

      Coming soon! ;)

    • @vinayreddy8683
      @vinayreddy8683 4 years ago

      @@CodeEmporium still waiting for NLP videos

  • @amortalbeing
    @amortalbeing 4 years ago +1

    6:42 Why did you say we are considering a sigmoid activation? What would be different if I used another activation function such as ReLU?
    Would the KL term change? Do we apply this to all layers or only to the last layer of the encoder?
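For context on the question above: the sparse-autoencoder penalty in reference [2] is a KL divergence between a target sparsity rho and each hidden unit's mean activation, which treats activations as Bernoulli probabilities in (0, 1). A sigmoid guarantees that range; ReLU activations can exceed 1, making the logs below undefined, so ReLU variants typically use a different penalty (e.g. L1 on activations). A minimal sketch with assumed batch and layer sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.05                            # target sparsity (assumed value)

# Assumed batch of pre-activations: 32 examples, 10 hidden units.
H = rng.normal(size=(32, 10))
A = 1.0 / (1.0 + np.exp(-H))          # sigmoid keeps activations in (0, 1)

rho_hat = A.mean(axis=0)              # mean activation per hidden unit

# Bernoulli KL divergence KL(rho || rho_hat), summed over hidden units;
# only well-defined because 0 < rho_hat < 1.
kl_penalty = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
print(kl_penalty)
```

This penalty is usually applied only to the encoder's hidden (bottleneck) layer and added, scaled by a weight, to the reconstruction loss.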

  • @krishj8011
    @krishj8011 4 years ago

    Great video...

  • @nagendran7781
    @nagendran7781 8 months ago

    Can you make a video for beginners? This stuff with formulas and equations gets too complicated.

  • @est9949
    @est9949 4 years ago +1

    9:37 Can't a CNN do the same job? What's the difference between using a CNN and an autoencoder with convolutional layers? Or are they actually the same thing? I'm new to both types of networks, so I would appreciate any elaboration. Thanks.

    • @generichuman_
      @generichuman_ 2 years ago

      With a CNN you need labelled data

  • @NozaOz
    @NozaOz 1 year ago

    Very cool

  • @starlord7548
    @starlord7548 4 years ago +1

    Please change the Comic Sans font.

  • @spyzvarun5478
    @spyzvarun5478 1 year ago

    Why do we need the encoder and decoder to be a shallow network?

    • @LolLol-rr6eb
      @LolLol-rr6eb 9 months ago

      We would still want the encoder and decoder to learn as much from the input as possible. A shallow network allows many different features to be learned, thus increasing the odds that the network has learned a good amount from the input image.

  • @jodumagpi
    @jodumagpi 5 years ago

    That intro though!!!

  • @harshraj7014
    @harshraj7014 5 years ago +1

    01:22 .. data around us like images and donkeynets