Get Better Images: Random Noise in Stable Diffusion

  • Published on 29 Jun 2024
  • Are your Stable Diffusion generations not as great as MidJourney's? Discover how a tiny bit of random noise can make a big difference in image quality!
    In this episode of Stable Diffusion for Professional Creatives, we'll show you how to improve your images using random noise, whether through ControlNet or latent manipulation (a standalone code sketch of the idea is included at the end of this description).
    Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
    Workflow: openart.ai/workflows/rGKT2dp6...
    Install the missing nodes via Manager by importing the workflow.
    I'm using epicRealism as a checkpoint, but you can use any other 1.5 or SDXL model.
    epicRealism: civitai.com/models/25694/epic...
    Timestamps:
    00:00 - Intro
    00:59 - Visualizing Noise
    01:57 - Noise inside comfyUI
    02:56 - Thinking about Rules and Sandboxes
    03:48 - Noise Driven ControlNet: Perlin Noise
    04:32 - Noise Driven ControlNet: Perlin and Gradient
    05:06 - Noise Driven ControlNet: Perlin, Gradient and Sketch
    05:45 - Noise Driven Latents
    06:16 - Noise Driven ControlNet and Latents
    06:35 - Workflow Design Philosophy: Rules
    08:36 - Workflow Breakdown
    13:10 - Final Considerations
    14:26 - Outro
    #stablediffusion #stablediffusiontutorial #randomnoise #noise #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni
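    For anyone who wants to poke at the noise idea outside of comfyUI first, here is a minimal standalone Python sketch of the kind of Perlin-style noise map a ControlNet input could consume. Everything in it (the value_noise helper, its parameters, the output file name) is illustrative, not the workflow's actual nodes:

```python
import numpy as np
from PIL import Image

def value_noise(size=512, grid=8, octaves=4, seed=0):
    # Cheap fractal value noise: random values on a coarse grid,
    # bilinearly upsampled and summed over octaves. Not true Perlin
    # noise, but a close stand-in for a structure hint.
    # Illustrative helper, not part of the video's workflow.
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amp, total = 1.0, 0.0
    for octave in range(octaves):
        cells = grid * 2 ** octave
        coarse = rng.random((cells + 1, cells + 1))
        layer = Image.fromarray((coarse * 255).astype(np.uint8))
        layer = layer.resize((size, size), Image.BILINEAR)
        out += amp * (np.asarray(layer) / 255.0)
        total += amp
        amp *= 0.5
    return out / total  # values in [0, 1]

noise_map = value_noise()
Image.fromarray((noise_map * 255).astype(np.uint8)).save("noise_hint.png")
```

    A seeded generator keeps the map reproducible, so you can iterate on prompts while holding the structure hint fixed.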
  • Science & Technology

Comments • 31

  • @OriBengal
    @OriBengal 10 days ago +1

    I hope people really listen... really *grasp* ... what you're saying about discovery through experimentation... to figure out what these things do... and why this is important for real studio work. Too many people just "cut/paste" (grab a workflow, put in a new prompt). Great job!

    • @risunobushi_ai
      @risunobushi_ai 9 days ago

      Thank you for the super kind words!

  • @pixelcounter506
    @pixelcounter506 11 days ago +1

    Always very interesting to listen to your findings, Andrea! I like your idea of taming noise injections and giving them some structure. One could even extend your concept to implement colors, too. From my point of view it is always more interesting to play around with (even copied) ideas, workflows, and new nodes than to go only with mainstream concepts. Keep up your professional work! 🙂

    • @risunobushi_ai
      @risunobushi_ai 10 days ago

      Thank you! Yes, depth doesn't care too much about colors; that's why I used the latent noise injection, but you can experiment with colors even more!
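      For readers wondering what latent noise injection boils down to at the tensor level, here is a rough sketch in plain torch with assumed names (the video does this with comfyUI latent nodes, not raw code):

```python
import torch

def inject_noise(latent, strength=0.3, seed=42):
    # Sketch of the idea only; the video uses comfyUI latent nodes.
    # Blend seeded Gaussian noise into a latent before sampling;
    # strength controls how strongly the noise steers the image.
    g = torch.Generator().manual_seed(seed)
    noise = torch.randn(latent.shape, generator=g)
    return latent + strength * noise

# e.g. a [1, 4, 64, 64] SD 1.5 latent for a 512x512 image
latent = torch.zeros(1, 4, 64, 64)  # empty latent, as in txt2img
noisy_latent = inject_noise(latent)
```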

  • @AbsolutelyForward
    @AbsolutelyForward 13 days ago +2

    A very good example of how you can design "differently" with generative AI ... and must, if you really want to utilise its full potential - bravo :)
    Your tutorial reminds me of an experiment in which I used the Lumetri scopes (Luma waveform) from Adobe Premiere as an image prompt. It would certainly be interesting to capture the "moving" live Luma waveform via a screen capture node and link it to AnimateDiff or a real-time generation workflow (a rough screen-grab sketch follows this thread).

    • @risunobushi_ai
      @risunobushi_ai 12 days ago

      Exactly! The most interesting stuff in new tech is very rarely found while playing it safe
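      If anyone wants to try the screen-capture experiment described above, here is a hedged sketch using the mss library; the region coordinates are placeholders, and wiring the frames into a generation loop is left to the reader:

```python
import mss
from PIL import Image

# Grab the screen region where Premiere draws the Luma waveform and
# save it as a frame that an img2img / ControlNet input could consume.
region = {"left": 100, "top": 100, "width": 512, "height": 512}  # placeholder coords

with mss.mss() as sct:
    shot = sct.grab(region)
    frame = Image.frombytes("RGB", shot.size, shot.rgb)
    frame.save("luma_waveform_frame.png")
```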

  • @Mranshumansinghr
    @Mranshumansinghr 13 days ago +3

    I never miss any of your videos. Best ComfyUI knowledge on YouTube.

  • @dtamez6148
    @dtamez6148 13 days ago +1

    Your closing statement is not only brilliant, but spot on! "Stable Diffusion for professionals" indeed! 👏

  • @maxehrlich
    @maxehrlich 13 days ago +2

    Maybe your best video yet! While not as technical as relighting, the philosophical aspect of why we are doing what we are doing is even more important than technique to make compelling images.

    • @risunobushi_ai
      @risunobushi_ai 13 days ago +2

      Thanks! I've wanted to make a video like this one for a while, because I think workflow design is the same as any other design work. Having a philosophy and a course of action behind what you do is one of the most important things, imo.

  • @Al-KT
    @Al-KT 12 days ago +1

    For visualizing noise in Blender (1:32), use the Color output of the noise texture instead of Fac. That way the offset on each axis gets a slightly different value. As it is now, the offset is the same on every axis, in the direction of the vector (1, 1, 1).

    • @risunobushi_ai
      @risunobushi_ai 12 days ago +1

      It's been a hot minute since I worked with geo nodes in Blender. I plugged it in, debated looking up a guide as soon as I saw it wasn't displacing along the normals, and said "eh, it's just to visualize stuff, that's fine". But yeah, absolutely, affecting the offset along each axis individually would be the correct way of doing it!
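      To see the Fac vs. Color difference numerically, here is a tiny numpy illustration (random values as a stand-in for Blender's noise texture; this is not bpy code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 100

# Fac is one scalar per point: every axis gets the same value, so the
# displacement is always parallel to the (1, 1, 1) direction.
fac = rng.random(n_points)
offset_from_fac = np.stack([fac, fac, fac], axis=-1)  # shape (100, 3)

# Color is a 3-component output: each axis gets its own noise value,
# so every point can be displaced in a different direction.
offset_from_color = rng.random((n_points, 3))

# All Fac-based offsets point along (1, 1, 1); Color-based ones don't.
print(np.allclose(offset_from_fac, offset_from_fac[:, :1]))      # True
print(np.allclose(offset_from_color, offset_from_color[:, :1]))  # False
```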

  • @nhatthibui6491
    @nhatthibui6491 12 days ago +1

    I have never missed any of your videos because what you do is very practical and highly applicable to my work.

  • @melodyhour
    @melodyhour 2 days ago +1

    love your videos!

  • @hakandurgut
    @hakandurgut 12 days ago +1

    Great tutorial... just like the way noise is used in TouchDesigner.

    • @risunobushi_ai
      @risunobushi_ai 12 days ago

      random noise is truly the gift that keeps on giving across all software

    • @AB-wf8ek
      @AB-wf8ek 12 days ago +1

      I come from 3D animation, and noise is a common tool there as well

  • @risunobushi_ai
    @risunobushi_ai 13 days ago +6

    Why no SD3 video? Well, because it's not interesting to me production-wise, or even as a base for experimenting with production-related work. Apart from anything that can be said - and has been said - about SD3, I think it's too early both to take it into consideration for production-related tasks and to jump to conclusions about how good or bad a model it ends up being.
    I'll probably talk about it when - and if - we get a complete set of controlnets, finetunes, and accessory modules like IPAdapter or IC-Light.
    In the meantime there's so much left to explore with 1.5 and XL, and there are so many great channels and videos that will cover SD3, that I don't think the lack of my voice on the matter will be missed.

    • @fernandopain4824
      @fernandopain4824 13 days ago +1

      That's why I follow your channel. Thanks!

    • @dtamez6148
      @dtamez6148 13 days ago +1

      I agree. And, apparently, there are some questionable issues with its license agreement as well 😒

    • @risunobushi_ai
      @risunobushi_ai 13 days ago +4

      The funny thing is I have a law degree (albeit from an Italian uni), so even if I'm in no position to give counsel on it, I'd be able to break down the license agreement. But either way SAI should just release a simple statement disclosing to laypeople what they expect out of finetunes. Finetuners, coders, and community members in general are not corporations; they shouldn't need a legal team to understand what they can and can't do.

    • @risunobushi_ai
      @risunobushi_ai 13 days ago +1

      @fernandopain4824 thank you, I really appreciate the sentiment

  • @stephenmurphy8349
    @stephenmurphy8349 13 days ago +1

    Nice approach!

  • @BrunoMartinho
    @BrunoMartinho 10 days ago

    Sadly I'm getting an error:
    Error occurred when executing ColorPreprocessor:
    No module named 'controlnet_aux.color'

    • @risunobushi_ai
      @risunobushi_ai 10 days ago

      It might be because you're missing the auxiliary preprocessors. You can find them here: github.com/Fannovel16/comfyui_controlnet_aux

  • @ronilevarez901
    @ronilevarez901 12 days ago

    Definitely not using ComfyUI any time soon. Not for me at all. Why waste so much effort making something with all those intricate and confusing entangled lines when I can get an almost identical result in seconds using Automatic1111?
    Yes, ComfyUI gives a lot of control, apparently, but that control is not necessary to achieve great results with good prompting and other techniques. All the super innovative methods developed for ComfyUI that I've seen are easy to imitate with other tools, even on the command line, so I'll pass. I wish people could see that too, so they would focus on improving other tools instead of ComfyUI.
    (Although the idea of using custom noise to influence the generation is great.)

    • @risunobushi_ai
      @risunobushi_ai 12 days ago +2

      Well, that's easily said: I personally like node-based interfaces much more than standard web UIs and CLIs. I spent a lot of time learning Houdini and Blender Geo Nodes, so it comes naturally to me, as it does to many others. I'm all for having different interfaces for different users, so I prefer having the option of choosing which one to use depending on the task and the kind of use I want to make of it.
      Also, with comfyUI I can spend time building the "perfect" environment for automating generations; that's something I could do to a degree with other UIs, but it's much easier in comfyUI for me.
      It all comes down to personal preference, I think!

    • @AB-wf8ek
      @AB-wf8ek 12 days ago

      It's one thing not to use ComfyUI because it's not useful to you, but to claim it's a waste of time for everyone else is ridiculous.
      I use it for animation, and there are a million things it's better at than Auto1111.
      For example, access to experimental nodes, performing transformations on image maps, muting processes with boolean switches, generating proper looping frame interpolation, propagating single images to multiple inputs at the same time, doing multi-step upscaling, animateDiff outpainting & upscaling, quickly swapping inputs for multiple inputs, isolating parameters to a single location, organizing complex workflows, etc.