Training A Diffusion Model - Stable Diffusion Masterclass

  • Published 23 Oct 2023
  • This is a preview lesson from the deeplizard Stable Diffusion Masterclass!
    Welcome to this deeplizard course, Stable Diffusion Masterclass - Theory, Code, & Application! In this course, we learn the theory behind Stable Diffusion and get hands-on experience with code and applications.
    🔥 EARLY ACCESS SALE 🔥
    Be the first to enroll in the course, and get the limited-time early access sale price!
    ✨ See the course page to get an overview of the course and enroll:
    🔗 deeplizard.com/course/dicpailzrd
    ✨ Check out more course details and other resources in the course intro:
    🔗 deeplizard.com/lesson/dia1aidrzl
    💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥
    👋 Hey, we're Chris and Mandy, the creators of deeplizard!
    👀 CHECK OUT OUR VLOG:
    🔗 / deeplizardvlog
    💪 CHECK OUT OUR FITNESS CHANNEL:
    🔗 / @deeplizardlifts
    🧠 Use code DEEPLIZARD at checkout to receive 15% off your first Neurohacker order:
    🔗 neurohacker.com/shop?rfsn=648...
    ❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
    Mano Prime
    👀 Follow deeplizard:
    Facebook: / deeplizard
    Instagram: / deeplizard
    Twitter: / deeplizard
    Patreon: / deeplizard
    YouTube: / deeplizard
    🎓 Deep Learning with deeplizard:
    AI Art for Beginners - deeplizard.com/course/sdcpailzrd
    Deep Learning Dictionary - deeplizard.com/course/ddcpailzrd
    Deep Learning Fundamentals - deeplizard.com/course/dlcpailzrd
    Learn TensorFlow - deeplizard.com/course/tfcpailzrd
    Learn PyTorch - deeplizard.com/course/ptcpailzrd
    Natural Language Processing - deeplizard.com/course/txtcpai...
    Reinforcement Learning - deeplizard.com/course/rlcpailzrd
    Generative Adversarial Networks - deeplizard.com/course/gacpailzrd
    Stable Diffusion Masterclass - deeplizard.com/course/dicpailzrd
    🎓 Other Courses:
    DL Fundamentals Classic - deeplizard.com/learn/video/gZ...
    Deep Learning Deployment - deeplizard.com/learn/video/SI...
    Data Science - deeplizard.com/learn/video/d1...
    Trading - deeplizard.com/learn/video/Zp...
    🛒 Check out products deeplizard recommends on Amazon:
    🔗 amazon.com/shop/deeplizard
    📕 Get a FREE 30-day Audible trial and 2 FREE audio books using deeplizard's link:
    🔗 amzn.to/2yoqWRn
    🎵 deeplizard uses music by Kevin MacLeod
    🔗 / @incompetech_kmac
    ❤️ Please use the knowledge gained from deeplizard content for good, not evil.

Comments • 7

  • @marufahmed3416
    @marufahmed3416 2 months ago +2

    Really good explanation, please continue to produce these videos.

  • @deeplizard
    @deeplizard  9 months ago +1

    ✨ See the course page to get an overview of the course and enroll:
    🔗 deeplizard.com/course/dicpailzrd
    ✨ Check out more course details and other resources in the course intro:
    🔗 deeplizard.com/lesson/dia1aidrzl

  • @leanwolf6667
    @leanwolf6667 9 months ago +1

    Thank you!

  • @stefanobraghettocatoni1464
    @stefanobraghettocatoni1464 7 months ago

    I have a question. Is it fair to use the same noise scheduler for inference as for training? I am asking because when the model is predicting noise during training, it only sees the noise corresponding to that timestep. But at inference, the model sees two noises: the timestep noise (same as in training) plus the noise it wasn't able to remove, since the model isn't perfect and will always make tiny mistakes. For a diffusion process with a lot of steps, I feel it is unfair to treat both the same.
    Thank you for your clarification!
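    For readers puzzling over the same question, here is a minimal numeric sketch of the asymmetry it points at. All schedule values and names here are illustrative, not the course's code: during training, x_t is built in one jump from the clean sample, so the model sees exactly the scheduled noise level; at inference, any prediction error survives into the next step's input on top of that scheduled noise.

    ```python
    import math

    # Hypothetical linear beta schedule (made-up numbers for illustration).
    T = 10
    betas = [0.01 + 0.02 * t for t in range(T)]
    alphas = [1.0 - b for b in betas]
    alpha_bars = []
    prod = 1.0
    for a in alphas:
        prod *= a
        alpha_bars.append(prod)  # cumulative product alpha_bar_t

    def training_sample(x0, t, eps):
        """Training: x_t comes in ONE jump from the clean x0, so the model
        sees exactly the scheduled amount of noise for timestep t."""
        ab = alpha_bars[t]
        return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

    def recover_x0(x_t, t, eps_hat):
        """Invert the forward formula using the PREDICTED noise eps_hat."""
        ab = alpha_bars[t]
        return (x_t - math.sqrt(1.0 - ab) * eps_hat) / math.sqrt(ab)

    x0, eps, t = 1.0, 0.5, 5
    x_t = training_sample(x0, t, eps)
    exact = recover_x0(x_t, t, eps)        # perfect prediction: recovers x0
    drift = recover_x0(x_t, t, eps + 0.1)  # imperfect prediction: residual
    ```

    With a perfect noise prediction the clean sample is recovered exactly; with a slightly wrong prediction the recovered sample carries leftover noise, which is precisely the extra, unscheduled noise the next denoising step then receives as input.
    
    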

  • @alikadhm1903
    @alikadhm1903 9 months ago +1

    We need a bigger discount for this course; it's expensive compared with Udemy.

  • @shanhuang5770
    @shanhuang5770 4 months ago +2

    This is really explained poorly in that:
    1. It adds unnecessary complexity: rather than saying the model subtracts the predicted noise and adds it back, you could have just said it subtracts 10% of the predicted noise.
    2. The explanation goes on and on through the computation steps without explaining the intuition.
    3. It is simply incomplete. You did not explain what we are doing with the noise scheduler across multiple "t"s if, during training, the model is already learning to denoise in incremental steps.

    • @Ezghoul1
      @Ezghoul1 3 months ago

      Of course... they are trying to sell a course 😅
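    Point 1 of the critique above can be made concrete with a DDPM-style reverse-step mean. This is a hedged sketch with made-up schedule numbers, not the lesson's code: each step removes only a scheduled fraction of the predicted noise, then rescales.

    ```python
    import math

    # Hypothetical schedule values at some timestep t (for illustration only).
    beta_t = 0.02
    alpha_t = 1.0 - beta_t
    alpha_bar_t = 0.5  # cumulative product of alphas up to t (made up)

    def reverse_step_mean(x_t, eps_pred):
        """One DDPM-style reverse-step mean: subtract a scheduled FRACTION
        of the predicted noise, then rescale by 1/sqrt(alpha_t)."""
        coef = beta_t / math.sqrt(1.0 - alpha_bar_t)  # ~0.028 here, not 100%
        return (x_t - coef * eps_pred) / math.sqrt(alpha_t)
    ```

    With these numbers roughly 3% of the predicted noise is removed per step, which is the "subtracts a small fraction of the predicted noise" framing the comment asks for.
    
    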