ComfyUI - Live Stream! Let's make some amazing art with Stable Diffusion!

  • Published Sep 6, 2024
  • Let's make something together! Come along and let's talk ComfyUI as well as other things related to generative AI. It's always fun! Note that the playback of this video will be available to channel patrons after the show is over. Catch it live! #comfy #stablediffusion
    Become a member to get exclusive access to perks!
    / @sedetweiler
    Gigabyte 17X Laptop is doing the inference today! Grab one here:
    amzn.to/3thtfpR
    For painting, I prefer Rebelle to Photoshop. Grab that here: tinyurl.com/2b...
    Join us on Discord! / discord (we have quiet rooms for MidJourney work)
    Print and sell your own artwork! www.printful.c...
    Music by share.epidemic...
    Backup your images! All you need is here: a.co/1uwXqv2
    Enjoy this video? Consider buying me a coffee! ko-fi.com/sede...

Comments • 37

  • @kpr2
    @kpr2 8 months ago +10

    Re: Grouping nodes - Hiya Scott! Just a quick tip: If you highlight the nodes you want to group together, then right click in an empty space and choose Add Selected Nodes To Group, it automatically sizes it to encompass the group. Might save ya some time (I also recommend double clicking the canvas & using the search to get the desired nodes rather than going through the long list like you do, but hey, you do you). Awesome info as always! Thanks much!
    Additionally, Re: Inpainting w/ a mask - I haven't had a chance to tinker much myself, but I believe that if you soften up your mask image (maybe apply a Gaussian blur, at least to the edges?) it will blend better & not leave you those harsh seams; see the sketch after this comment. Still learning here, but that might help ya in the future.
    Final addition: OMG! Those butterfly dress designs are amazing! Quick, call a seamstress!
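
    A minimal sketch of the mask-softening idea above, using Pillow; the file names and blur radius are placeholder assumptions to tune by eye:

    from PIL import Image, ImageFilter

    # Grayscale inpainting mask: white = area to repaint
    mask = Image.open("inpaint_mask.png").convert("L")
    # Feather the hard edges so the inpainted region blends instead of leaving seams
    soft_mask = mask.filter(ImageFilter.GaussianBlur(radius=8))
    soft_mask.save("inpaint_mask_soft.png")  # load this into ComfyUI instead of the hard-edged mask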

    • @sedetweiler
      @sedetweiler  8 months ago +2

      Oh nice! I will give that a try. So many undocumented features to find!

    • @kpr2
      @kpr2 8 months ago

      @@sedetweiler Yeah, still very much at the tip of the iceberg here myself haha :)

  • @MPRX87
    @MPRX87 8 months ago +3

    Hi Scott, I noticed about halfway through that you were having some issues making the image pop from being prompt-based into being controlled by the ControlNet when you set its start to 0.3. That 0.3 lines up with the step count (20 in your case), so it would switch to the ControlNet at step 6; see the quick arithmetic sketch after this comment. The problem is that the latent noise space doesn't resolve linearly for most of the schedulers I've seen; they behave pretty exponentially. Within the first 5 of 20 steps, something like 60% of the image has probably already resolved in latent space, so the primary and some of the secondary forms of the scene are already etched in stone, which is why you couldn't get it to pop from one to the other. To pull that off, you'd need the transition to happen earlier, around step 2-4 or so, but again that depends on how many steps you're using. That's also why there are advanced versions of the samplers that let you choose whether to return the leftover noise when you do multiple sample passes (the prepass hack is kind of nice: do 1-2 steps with random noise, then feed that into another sampler without passing the leftover noise along).
    Feel free to correct me if I'm wrong, but great video, I've been digging your content!
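
    A quick arithmetic sketch of the start-fraction point above; the 20 steps and the 0.3/0.15 start values are just illustrative numbers:

    # ControlNet "start" is a fraction of the total sampling steps.
    total_steps = 20
    for start_fraction in (0.3, 0.15):
        start_step = round(start_fraction * total_steps)
        print(f"start={start_fraction} -> ControlNet takes over at step {start_step} of {total_steps}")
    # 0.3 -> step 6, by which point much of the composition is already settled;
    # 0.15 -> step 3, early enough for the ControlNet to still reshape the image.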

    • @sedetweiler
      @sedetweiler  8 months ago +2

      That is correct, but I was trying to keep this to one sampler so people can understand it. I do use multiple samplers in many of the instructional videos for exactly the reason you mentioned.

  • @GamingDaveUK
    @GamingDaveUK 8 months ago +2

    Thank you for leaving it up long enough for those of us who work anti-social hours to be able to watch it in full! It is appreciated beyond measure. Very interesting video (and informative, as always).

    • @sedetweiler
      @sedetweiler  8 months ago +1

      Glad you enjoyed it!

    • @GamingDaveUK
      @GamingDaveUK 8 months ago

      @@sedetweiler I noticed you did not get to the local AI portion, but knowing that it exists meant I looked in the Manager for it. I tried a couple, but the only one I got to work is the one in CrasHUtils (found it by searching "LLM" in the Manager). Had a bit of fun with that earlier, though I'm keen to see ways to have it assist rather than fully replace your prompt (the model I use is more geared towards storytelling, as I use it for creating funny tales of our gaming guild to amuse the members; that may have factored into the AI returning massive prompts and using all my tokens).
      It's a shame the SEGS mask resulted in such low res. I remember seeing an Auto1111 tutorial where every item in the room was colour-coded in the image so the user could change specific parts, and I was hoping SEGS was the answer to that when I saw you could replace "all" with written text... not that that worked well, lol. It's a shame, as it would be nice to take a photo of a cityscape and be able to tag individual parts to be redone in different styles. I am sure such things will come out though; the tech is moving so fast, and people are creating new nodes even faster than that!

    • @sedetweiler
      @sedetweiler  8 months ago +1

      I am sure this can be done, but it was not the result we needed for sure.

  • @titan_dev
    @titan_dev 3 months ago

    A question: how do you control the resolution impact of ControlNet images (mostly non-square) on a 1024x1024 generation? It causes a crop effect, which usually crops the control image to a square. I tried converting the image to a square by extending it with blur or a solid color, but that affects the maps.
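
    One possible workaround (not from the video, just a hedged Pillow sketch; the file names and the 1024x1024 target are assumptions): letterbox the control image to a square by padding instead of cropping, then crop the padded band back off the generated result. Note the pad color will show up in edge/depth maps, so a neutral color or a final crop is usually needed:

    from PIL import Image, ImageOps

    ctrl = Image.open("control_image.png").convert("RGB")
    # Resize to fit inside 1024x1024 while keeping the aspect ratio, then center on a square canvas
    padded = ImageOps.pad(ctrl, (1024, 1024), color=(0, 0, 0))
    padded.save("control_image_square.png")  # use this as the ControlNet input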

  • @Darkwing8707
    @Darkwing8707 8 months ago

    One of the problems with the inpainting might have been that you used the VAE inpainting node instead of the 'set latent noise mask' node; I find that one works quite well. Also, for the MiDaS depth map, you can use the node from the auxiliary preprocessors, like you did for LeReS, instead of the MiDaS one from WAS.
    An interesting thing to try for changing the dress could be using the Unsampler node. Latent Vision covers it in his Infinite Variations video.

  • @squirrelhallowino29
    @squirrelhallowino29 8 months ago

    Really cool stuff, Scott. The end result reached a very usable quality.

    • @sedetweiler
      @sedetweiler  8 months ago

      Thank you, and thanks for joining in! Was a great time today!

  • @HexagonalColumbus
    @HexagonalColumbus 8 months ago

    Great stream! Congrats! And happy New Year everyone!

    • @sedetweiler
      @sedetweiler  8 months ago +1

      Thanks! Happy new year to you as well!

  • @Designing-hc5pz
    @Designing-hc5pz 7 months ago +1

    How can I get this image of the pink dress?

    • @sedetweiler
      @sedetweiler  7 months ago

      It is in the assets in the Community section here on YouTube for channel Sponsors. Enjoy!

  • @f4ust85
    @f4ust85 6 months ago

    I wonder if you wouldn't be able to create the exact same result, in much higher image quality and with much more control, in 90 minutes in Photoshop. Strangely, that's the case with most "practical" uses of SD when you really need something specific, controlled and printable.

  • @_carsonjones
    @_carsonjones 7 months ago

    Just started watching your channel as I dig into ComfyUI. I'm about 55 minutes into the video and am immediately wondering if there's a way to use an image channel (i.e. an alpha created in PS) as part of the process. It would have a similar effect to the depth map, but if options to blur, control black/white levels, invert, etc. are added, this could potentially afford a great deal of control.
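
    A small hedged sketch of the idea above using Pillow (the file names and numbers are placeholders): pull the alpha channel out of a PNG exported from Photoshop and prep it as a mask with blur, invert, and a rough level adjustment before loading it into ComfyUI:

    from PIL import Image, ImageFilter, ImageOps

    rgba = Image.open("layered_export.png").convert("RGBA")
    alpha = rgba.getchannel("A")                              # the alpha channel as a grayscale image
    alpha = alpha.filter(ImageFilter.GaussianBlur(radius=6))  # soften the edges
    alpha = ImageOps.invert(alpha)                            # flip black/white if needed
    alpha = ImageOps.autocontrast(alpha, cutoff=2)            # crude black/white level control
    alpha.save("alpha_mask.png")                              # load as a mask in ComfyUI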

    • @sedetweiler
      @sedetweiler  7 months ago

      Yup! The WAS suite has color channel nodes.

  • @sirmeon1231
    @sirmeon1231 6 months ago

    My problem with the Inspire node for seeds is that it doesn't really give me random seeds - every third or fourth image is just gonna be the same 🙈

  • @TheDocPixel
    @TheDocPixel 8 months ago

    Great stream! Just catching it now before "it" begins... so wishing all a Happy New Year!
    This seems as good a place as any to ask whether you've heard of anyone making a UI enhancement to turn nodes within suites on or off. Like a checkbox before each node, so that you only see the relevant nodes you want when searching, instead of the 2 you always want jumbled in with 12 other duplicate-function nodes. It sometimes becomes almost unbearable and confusing, even for intermediate users, when you install the "must have" suites as you suggest.

    • @sedetweiler
      @sedetweiler  8 months ago

      You can use Ctrl+B to bypass a node or selected nodes, or Ctrl+M to mute them entirely. Does that help?

    • @TheDocPixel
      @TheDocPixel 8 months ago

      I’m sorry I wasn’t clear. I mean something that controls which nodes get loaded from a mega suite: instead of all 50 nodes, just the ones I want to use. Maybe something in Manager when you install the node suites, with a checkbox next to the individual nodes to make available, rather than all or nothing. @@sedetweiler

  • @korilifs
    @korilifs 5 months ago

    Hi, great content. I paid for the membership. I think you said I can get the workflows for the videos. Where can I get them? Thank you!

    • @sedetweiler
      @sedetweiler  5 months ago

      They are in the community area here on YouTube. Thank you for supporting the channel!

    • @sedetweiler
      @sedetweiler  5 months ago

      www.youtube.com/@sedetweiler/community

  • @techzuhaib99
    @techzuhaib99 8 months ago

    💯

  • @MustafaAAli-uv2sx
    @MustafaAAli-uv2sx 8 months ago

    Hello, my image loader saves all the old images I have loaded. How do I empty/reset it? Thank you.

    • @sedetweiler
      @sedetweiler  8 months ago +1

      It is the contents of the Input folder in the comfy directory. You can just delete the ones you no longer need.
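
      A tiny housekeeping sketch based on the reply above; the path assumes a default ComfyUI folder layout, so adjust it to your install:

      from pathlib import Path

      input_dir = Path("ComfyUI/input")   # where Load Image copies everything you load
      for f in sorted(input_dir.iterdir()):
          if f.is_file():
              print(f)                    # review the stale files first...
              # f.unlink()                # ...then uncomment to actually delete them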

  • @GamingDaveUK
    @GamingDaveUK 8 months ago

    Managed to watch 46 minutes in my 40-minute break; I am hoping it will still be available after my shift. I realise you often reply to me when I say this, but I can't read that reply once it's behind the paywall. Who knows, come April I may start having enough money that I can sub; right now I can't even use Comfy unless I am in a cheap electricity period (the wind we had here in the UK last week was a godsend).
    Can you do a video on local LLM use? I didn't get that far in your video, and a standalone tutorial on that would be handy, especially if the prompt from the AI can be added to the prompt we have asked for.

  • @ufukk54
    @ufukk54 8 months ago

    Hello sir. Will you share the workflow?

  • @MultiMam12345
    @MultiMam12345 5 months ago

    You do realize it’s the AI that is doing the chemistry so that the AI can get more chip power and get rid of humans 800 years sooner.

  • @richarddecosta
    @richarddecosta 8 months ago

    Do you know of a Colab notebook for this?

    • @sedetweiler
      @sedetweiler  8 months ago

      Yes, it is on the comfy git repository.

  • @Bakobiibizo
    @Bakobiibizo 8 months ago

    I'll set you up with a text-to-speech model if you want.

    • @Bakobiibizo
      @Bakobiibizo 8 months ago

      Though I wouldn't do the whole stream in one shot; I'd have a model review the chat and pick out the best questions.