SDXL ComfyUI img2img - A simple workflow for image 2 image (img2img) with the SDXL diffusion model

  • Published Jan 27, 2025

Comments • 160

  • @TailspinMedia · 10 months ago +18

    I love that you build your workflows from scratch and explain each node!

    • @sedetweiler · 10 months ago +2

      Glad to hear that! I think it is important to know how it all fits together.

    • @B4zing4 · 10 months ago

      @@sedetweiler Did you upload the workflow JSON somewhere?

    • @glydstudios5632 · 8 months ago

      Exactly this.

  • @nick4uuk · 1 year ago +53

    I just want to say something else that maybe others have missed regarding your excellent tutorials. It's not JUST that they're technically good, it's also because your voice is SO easy to listen to. You could teach all day and it wouldn't be exhausting for the student I think.

  • @14MTH3M00N · 1 year ago +4

    Thankful that you are repeating the first steps in these videos. Makes it really a lot easier to remember

  • @Sim00n · 1 year ago +1

    You are SIMPLY THE BEST !!! fluent, effortless, snappy, concise, to the point, crystal clear,... you name it, man you are a Godsend !!!!! 😘😘

  • @ThoughtFission · 1 year ago +10

    Your tutorials are really great. They convey so much useful info in an easy to digest way. Thanks!

  • @Vitruvian2086 · 1 year ago

    I think this is still one of the best and easiest workflows that gives quick results, Thank you!

  • @RufusTheRuse · 1 year ago +1

    Thanks again for your video series here - I think it's very important for those picking up Comfy UI - it has helped me understand workflows I've loaded up off of the example workflow site. And I really appreciate it when you zoom into a node you're doing something interesting with - too far out and it becomes a compressed blur.

  • @spiralofhope · 1 year ago +1

    This worked great!
    Now I think I know enough to tie multiple previous tutorials together, to get several advantages. I don't know how to disable and reroute certain things easily, so it looks like I'll be maintaining multiple workflows.

  • @kiretan8599 · 10 months ago

    I just got into image generation, and thank you for your vids. I truly appreciate you sharing your knowledge 👍

  • @spiralofhope · 1 year ago +3

    omg I've been working for hours to try to figure out how to integrate this into the refinement workflow from earlier in the playlist.

    • @sedetweiler · 1 year ago

      Did you get it? Just checking that you are all good now.

    • @spiralofhope · 1 year ago +1

      @@sedetweiler
      Edit: I think I figured it out! I had things in the wrong order and some values were a bit wonky.
      5 img2img steps, denoised to 0.80
      ->
      3 refiner initialization
      ->
      20 base sampling steps
      ->
      12 final refining steps (not sure about this one)
      --
      I don't think so.
      I started with the refiner workflow, ending with a final image. This is an exact copy of a previous tutorial and it works 100%
      At the end of that workflow is a KSampler. I take that and split it off its latent.
      I take that latent and pass it through an image scale to side, with the side length of the longest of height or width I've been working with.
      I pass that latent to a LatentAdd and bring it into an exact copy of the img2img tutorial.
      I take that LatentAdd and insert it just before the img2img KSampler.
      It.. "does stuff" but I intuit it isn't correct.
      I already cleaned it up with groups (which was new to me), and I'm working on cleaning it up with reroutes so I can show it. I don't like that reroutes are someone else's extension because it feels like something that can break on updates, but I hope that reroutes are popular enough to demand updates.

  • @vVinchi · 1 year ago +1

    Amazing to see you are back to regular uploads😊

    • @sedetweiler · 1 year ago

      Yes! Thank you! I just had to work that life balance thing, but I have it sorted now. Cheers!

  • @monbritt · 1 year ago +7

    Can you help with using ControlNet in ComfyUI?

  • @hleet · 1 year ago

    Wow! I love your tutorials; you are the best resource on the ComfyUI topic. I'll grab that "Manager" thing to get custom nodes.

  • @royjones5790 · 1 year ago +4

    I would love a tutorial explaining how to create 2 independent prompts and run them together for composition. Method (1): using position & area numbers (such as starting from x0-x300, y0-y200 for prompt 1, etc.). Method (2): allowing a mouse-drawn mask area to be fed with the prompt. Please include previews of the masks being made and a preview of the mask combination. I'd really appreciate it. ComfyUI allows for greater composition control, and understanding how to do it will really lean into its strength.

  • @leogamer4835 · 1 year ago +1

    I would like to see a second part of this video, adding a KSampler refiner to the img2img result to improve the composition. Thanks for the video and good work.

    • @untitled795 · 1 year ago

      It's very easy to do and gives good results!

  • @seancollett3760 · 1 year ago

    Thank you! Really love working this way with SD. Can really fine tune the process to the graphics card and squeeze every drop out of the hardware.

  • @jjgravelle · 1 year ago +1

    It was nice spending the weekend with you, boss. Well, with your videos, anyway. You've got me psyched on ComfyUI.
    Two things I hope to see you cover, eventually:
    1.) You mentioned in another video that ComfyUI might be used for model training? Yes, please; and
    2.) RE: the .yaml tweak for sharing a model folder touches on a larger issue. Play around with AI for a few days, and you end up with Oobabooga, TavernAI, Docker, Jupyter, the WSL, half a terabyte's worth of LLMs, et al., AND competing versions of Python (for X you need 3.10, for Y you need 3.11)... then there are the Condas and CUDAs and... you get the picture. As methodical and organized as you are, I'm guessing your hard drive is set up logically and efficiently. A peek into the nuts and bolts of your setup would be, as the kids say, amaze-balls...
    Your pal,
    -jjg

    • @sedetweiler · 1 year ago

      Awesome to hear buddy! 🥂

  • @nimoleying2037 · 1 year ago

    You're awesome, Scott. Thanks for all these tutorials.

  • @pedroavex · 1 year ago +8

    Great video! Can you share the JSON file for this workflow?

  • @n3bie · 8 months ago

    When I followed this tutorial, after I created the primitive node and went to connect it to the CLIPTextEncodeSDXL node as you did at 0:54, I had a text_l link but not the text_g link you have in this video. What am I doing wrong? I'm building this workflow underneath (and disconnected from) the default workflow. Could that be causing it?

    • @divye.ruhela · 7 months ago

      Right-click and convert text_g to an input?

  • @jibcot8541 · 1 year ago +2

    I have the latest Derfuu nodes, but I only have Width or Height options on "Image scale to side", not "Longest". Any ideas why?

    • @sedetweiler · 1 year ago

      I would just do a pull off the latest. Might be a new addition.
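For anyone reproducing the node's behavior elsewhere: "scale to longest side" is just aspect-preserving resizing where the longer edge is pinned to a target length. A minimal sketch (the Derfuu node's own rounding and resampling may differ):

```python
def scale_to_longest(width: int, height: int, target: int) -> tuple[int, int]:
    """Scale (width, height) so the longer edge equals `target`, keeping aspect ratio."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

# e.g. a 1920x1080 frame scaled so its longest side is 1024
print(scale_to_longest(1920, 1080, 1024))  # (1024, 576)
```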

  • @Troyificus · 1 year ago +6

    Your tutorials are amazing and incredibly informative. Really hoping you cover SDXL Lora training and maybe even animation creation using ComfyUI at some point?

    • @sedetweiler · 1 year ago +2

      Those are coming soon!

    • @Troyificus · 1 year ago

      @@sedetweiler Awesome! Can't wait!

  • @LLCinema22 · 1 year ago

    Excellent tutorial, I can't wait to see how far you can go with ComfyUI.

    • @sedetweiler · 1 year ago

      You and me both! :-)

    • @FirstLast-tx3yj · 1 year ago

      @@sedetweiler Can I batch-modify 10 frames like we used to do in the old Stable Diffusion?
      The 10 frames are about the same, extracted from a video, and I want to batch-modify them in a similar manner to normal Stable Diffusion.
      Can I do that with SDXL?

  • @Enchanted.Explorations · 5 months ago

    GOLD! :) Thank you so much! :)

  • @Lucas-uk6fj · 1 year ago +2

    What's the shortcut key for the search box in the video? I'd like to know it. Thanks, bro!

  • @Lacher-Prise · 1 year ago

    Hello, great video, so interesting. At 0:51 you add a "primitive"; where do you find "logic"? In my ComfyUI I have neither "logic" nor "_classifier". What custom node did you add?
    On the other hand, my own "utils > primitive" doesn't look like yours (STRING + text box); mine just says "connect to widget input" with no text box.
    Thank you for sharing; your playlist on ComfyUI and Stable Diffusion is a master class, congratulations.
    Team Shibuntu

  • @Fern-pe-fl · 1 year ago +2

    What about inpainting using a mask? My current workflow relies a lot on a loop that looks like:
    prompting > inpaint > img2img > inpaint > img2img > inpaint... etc.

  • @roseydeep4896 · 1 year ago +1

    idk if it's a stupid question to ask but I'm a newbie soooo.... Why does the SDXL CLIP encoder node have two text inputs? At first I thought it would be for both the positive and negative prompts but apparently it isn't. I don't get why you need to connect the primitive node twice lol.

    • @sedetweiler · 1 year ago +1

      There are 2 different CLIP models you can leverage. I will go into this deeper in another video but for now, they are both either positive or negative.
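For the curious: the SDXL encoder node takes two texts because SDXL conditions on two text encoders, CLIP ViT-L and OpenCLIP ViT-bigG, whose per-token features are concatenated into one conditioning stream. A rough numpy illustration of the shapes involved (random arrays stand in for real encoder outputs):

```python
import numpy as np

# text_l feeds CLIP ViT-L (768-dim token features);
# text_g feeds OpenCLIP ViT-bigG (1280-dim token features).
seq_len = 77                            # CLIP's token window
clip_l = np.random.randn(seq_len, 768)  # stand-in for text_l encoder output
clip_g = np.random.randn(seq_len, 1280) # stand-in for text_g encoder output

# SDXL concatenates the two along the feature axis: 768 + 1280 = 2048
cond = np.concatenate([clip_l, clip_g], axis=-1)
print(cond.shape)  # (77, 2048)
```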

  • @BlackDim100 · 1 year ago +1

    I guess you can also use the scaler to upscale the final image output?

    • @sedetweiler · 1 year ago +1

      I put out a video today on that, since you can do that, but there are much better ways to get great results.

  • @sonic55193 · 6 months ago

    Hi, is there an image-to-image workflow that can render an image from multiple angles while the character remains consistent?

  • @lioncrud9096 · 1 year ago

    Is there a way to do a batch number higher than 1 at a time? I use an Efficient Loader that combines model, prompts, VAE, and batch steps in one node. When I increase the batch number to, say, 3, Comfy only does 1 image.

    • @sedetweiler · 1 year ago

      Perhaps the batch number is being bugged when part of the group? That is how I would normally do more than one per batch.

  • @ramondiaz5796 · 11 months ago

    Just at the step where you add the "Derfuu nodes": it doesn't appear, because when I install it I get "import failed". What can I do?

  • @光影哲學 · 10 months ago

    Hi, thanks for the video. I want to add ControlNet to this workflow and use img2img to do a line-art anime style; is that possible? I tried to combine it with the ControlNet setup you shared, but without success. Thank you.

  • @KallutoZoldyck · 1 year ago

    This helps with text-to-video, but how do I do image-to-video locally? Text-to-video is kind of useless with LoRA involvement if we actually want good results.

  • @BenleGentil · 1 year ago +2

    This is great! How can we generate (not process) a batch of images from img2img?

  • @dr.gurn420 · 11 months ago

    How do you make your projects run and compile so quickly? I tried using a VM on RunDiffusion and it still takes 2-5 minutes. Do you have any other VMs you would recommend that are affordable?
    Thank you for the content.

  • @JRis44 · 1 year ago

    lol, didn't realize how short today's tutorial was going to be. Noice!
    On to the next!

    • @sedetweiler · 1 year ago

      Woot! Speed is the key, brother!

  • @samwalker4442 · 10 months ago

    Perfect Scott, thanks again

  • @aZiDtrip · 1 year ago

    That Derfuu node, "Image scale to side": what's the difference from using "Latent scale to side"?

  • @MaisnerProductions · 1 year ago +1

    How is it that all your width and height inputs say 1024, yet your output image appears to be a portrait aspect ratio?

  • @김기선-j5t · 1 year ago

    I wonder whether there is a workflow for img2img in batch, because I want to create video.

  • @hobolobo117 · 1 year ago

    Started your tutorials. Awesome work, great teacher! Excited to learn more!

  • @DiGiovanniDesign · 1 year ago

    Scott: I am really enjoying your videos. You mention in your videos if we have a question to drop it in the comments. I have a more general question about Stable Diffusion, ComfyUI, Automatic111, and MidJourney. Is it best to post here or somewhere else?

  • @jpgamestudio · 1 year ago

    Hello brother, what is the function of the "clip space" on the menu on the right?

  • @CaptainKokomoGaming · 1 year ago

    Is there a way to add a trigger or something similar, so that if you want to keep the image you can trigger an upscale of it?

    • @sedetweiler · 1 year ago

      Yes, you can use the Image Chooser node (search for choose).

    • @CaptainKokomoGaming · 1 year ago

      @@sedetweiler Thank you for getting back to me. I am just starting to use Comfy, and your tutorials are the first thing that has let me see the potential instead of the confusion and the overwhelming screen.

  • @maximebraem4115 · 1 year ago +1

    Does GFPGAN work in ComfyUI? I keep getting an error ('tuple' object has no attribute 'cpu'). Can you make a video about the best roop implementations?

    • @justinwhite2725 · 1 year ago

      Are you running on a CPU or GPU?
      I'm using a CPU and haven't been able to get roop to work in either Comfy or A1111.

    • @maximebraem4115 · 1 year ago +1

      @@justinwhite2725 GPU (AMD), and roop works in both Comfy and A1111; the result is just really pixelated if it's a close-up image, so I thought GFPGAN would make it better, but it doesn't work.

  • @haljordan1575 · 1 year ago

    Thanks a lot for this. Could you do a tutorial on how to identify and solve common errors? Like when Load Image refuses to load an image.

  • @carlosrios5162 · 1 year ago

    What if there's a refiner? Do I use the VAE of the refiner or of the base model? And which one goes to the VAE decode?

    • @sedetweiler · 1 year ago

      I would try them both, as each has a personality. You choose the one you feel is best for your situation.

  • @schumanncombo · 1 year ago

    Could someone explain how to do this as a batch, to process a folder of images?
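One common approach (not shown in the video) is to export the workflow in API format via "Save (API Format)" and queue it once per file against ComfyUI's HTTP endpoint (POST /prompt). A hedged sketch; the node id "10" and the exact field layout of your exported graph are assumptions:

```python
import copy

def build_batch(workflow: dict, load_image_id: str, filenames: list[str]) -> list[dict]:
    """Return one patched copy of an API-format workflow per input image."""
    payloads = []
    for name in filenames:
        wf = copy.deepcopy(workflow)
        # repoint the LoadImage node at this file (field name assumed from API export)
        wf[load_image_id]["inputs"]["image"] = name
        payloads.append({"prompt": wf})
    return payloads

# usage sketch (assumes a local ComfyUI server and the `requests` package):
#   files = sorted(f for f in os.listdir("input") if f.endswith(".png"))
#   for payload in build_batch(wf, "10", files):
#       requests.post("http://127.0.0.1:8188/prompt", json=payload)
```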

  • @BartoszBielecki · 4 months ago

    I was wondering: isn't this (to a degree) what InpaintingConditioning does? The difference is that it can also use masking.

  • @JRis44 · 7 months ago

    Had to come back to this video. I'm having issues with making a lower-quality image better. It's not so bad either; it just needs to be high definition. Originally AI-generated too. I'll have to search for other methods if I can't use this setup; right now I'm trying to reconfigure the sampler, trying all the different sampler_name options I have.

  • @Deathshot_Official · 4 months ago

    F hell, you just blinded me with your light theme, watching at night bruh XD

  • @32_dotrungkien71 · 5 months ago

    I can't find "Image scale to side".

  • @RokSlana · 7 months ago

    Great video, thank you !

  • @KnightZexen · 1 year ago

    I can't find your "Image scale to side". I googled and checked the resources you provided; none of them, not even the Manager, gives me that node.

  • @marklouisehargreaves · 1 year ago

    In ComfyUI, is it possible to change the folder the saved images go to?

  • @thecolonelpridereview · 1 year ago

    Nice tutorial. What's the advantage or difference in using Primitive plus CLIPTextEncodeSDXL compared to using CLIPTextEncode(PROMPT) for your prompts?

    • @sedetweiler · 1 year ago +1

      You can use both of the CLIP models if you use the SDXL one, otherwise you are only using one.

  • @fredmcveigh9877 · 1 year ago

    Hi Scott,
    When I used Stable Diffusion 1.5, pre-ComfyUI, there was an option to render with tiling next to the hi-res fix. Is there any way to replicate that in ComfyUI, as I do a lot of seamless patterns?

  • @anujpartihar · 1 year ago

    Sir, is it possible to use SDXL online, without using the GPU? My laptop is not very powerful, but I would love to dive into this stuff. I am very new to all this, so forgive me for asking the obvious.

  • @GamingDaveUK · 1 year ago +1

    Can you add a link to the Manager? I can't see it on Civitai.

    • @sedetweiler · 1 year ago +1

      civitai.com/models/71980/comfyui-manager

  • @joneschunghk · 1 year ago

    Would a "base" or "refiner" model be better for img2img?

  • @joebreaker11 · 1 year ago

    Guys, I cannot find the installed custom nodes in the search field, derfuu among them :( When I install custom nodes in the Manager, via simple installation or via GitHub URL, it shows "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." But when I click on the RESTART button, it shows only RECONNECTING..., which lasts forever. If I click CLOSE and refresh the browser, a blank page appears instead of the Comfy interface, and I have to restart SDXL. But that won't change anything; the node still won't appear :(((( Can anyone help me?

  • @goactivemedia · 9 months ago

    Hi, do you have a video on this, but using a second image as a reference for the look you want the first one to have? So image-to-image style transfer, I guess? Thanks.

  • @tcb133 · 1 year ago

    If I use ComfyUI in Google Colab, how can I get that Manager to work?

    • @sedetweiler · 1 year ago +1

      I don't think so. In the colab you can specify the mods you want to install, but I don't think the manager will work since a restart is required to make the new nodes show up.

  • @renanarchviz · 1 year ago

    The Utils node folder does not appear.

  • @randymonteith1660 · 1 year ago

    Just started to learn ComfyUI. When I select a primitive, it does not have the 'string' output on it; mine only says "connect to widget input". Next time, could you include a JSON file or PNG image that contains your workflow? Thanks for the tutorial.

    • @s.anonyme6855 · 1 year ago +1

      The primitive will adapt to whatever you link it to.

    • @randymonteith1660 · 1 year ago

      @@s.anonyme6855 thanks for the tip

  • @hhaaasshh · 1 year ago +1

    many thanks mister

  • @Callitister · 1 year ago

    Can you please tell me how to save the image generated here?

    • @sedetweiler · 1 year ago

      Use the Save Image node at the end instead of the Image Preview.

    • @Callitister · 1 year ago

      @@sedetweiler How do I switch to the Save Image node? And where will the image be saved?

  • @chgian77 · 1 year ago

    Awesome! Finally I can create my first breathtaking images on my desktop. Could you please create a tutorial explaining how we can design the character and the background separately, and then combine them?

  • @erdbeerbus · 1 year ago

    Great! Is there an option to use this for an image sequence automatically? Thanks in advance!

    • @sedetweiler · 1 year ago +1

      You can just drop it into the middle of any workflow.

    • @erdbeerbus · 1 year ago

      @@sedetweiler thank you ...

  • @MAVrikrrr · 1 year ago

    What about img2img with SDXL base + SDXL refiner?

    • @sedetweiler · 1 year ago

      Sure! That works great!

    • @MAVrikrrr · 1 year ago

      @@sedetweiler It's confusing to put it all together properly. For base + refiner I use the advanced KSampler, so I guess I have to pass the image latent to the base KSampler and then set start step > 0. This will be equivalent to the denoise value.

    • @sedetweiler · 1 year ago +1

      That is correct, but I am trying to keep things a bit simpler as it is still early on in the series. You are ahead of the curve my friend!
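The conversion the thread describes can be written down directly: with KSamplerAdvanced, skipping the first part of the schedule plays the role of KSampler's denoise value. A small helper under that reading (an approximation, not an official ComfyUI formula):

```python
def start_step_for_denoise(steps: int, denoise: float) -> int:
    """Approximate KSamplerAdvanced start_at_step equivalent to a KSampler denoise."""
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    # denoise 0.8 over 20 steps ~= skip the first 20% of the schedule
    return round(steps * (1.0 - denoise))

print(start_step_for_denoise(20, 0.8))  # 4  -> run steps 4..20
print(start_step_for_denoise(30, 1.0))  # 0  -> full denoise, start at step 0
```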

  • @gmfPimp · 1 year ago

    Is this really image-to-image, or is it how to use an image already generated with ComfyUI in a workflow? For example, with Stable Diffusion you can use an image as the input and then create from that image.

  • @heikohesse4666 · 1 year ago

    super awesome, thank you

  • @Maxime_motion · 1 year ago

    I would like to see how we can make a portrait of yourself (or another person) in ComfyUI.

  • @froztbytes · 1 year ago

    I take it the next one would be a ComfyUI+SDXL inpainting tutorial?

    • @sedetweiler · 1 year ago

      Yes, that is coming soon!

  • @mikebert · 1 year ago

    Stupid question: how can I remove a slot from a node?

    • @sedetweiler · 1 year ago

      Depends. Which slot?

    • @mikebert · 1 year ago

      @@sedetweiler I made a "target height" slot by mistake in the SDXL text node.

    • @sedetweiler · 1 year ago

      @@mikebert so is it a nested node or did you code one?

    • @mikebert · 1 year ago

      @@sedetweiler It's the one from your tutorial, the second node after the text primitive (I don't remember the exact name). I solved it by making a new node.

  • @RongmeiEntertainment · 1 year ago

    I got lost when you clicked the Manager button, which I don't have.

    • @sedetweiler · 1 year ago

      There is a link in the description to go download that from civit.

  • @purposefully.verbose · 1 year ago

    is there a good workflow for controlnet in CUI?

    • @purposefully.verbose · 1 year ago

      Oh, I see someone already asked... so, that tutorial next? :)

    • @purposefully.verbose · 1 year ago

      Also, are any of the custom nodes on Civitai "questionable"? Lots of flavors, but some look kinda sketchy.

  • @spiralofhope · 1 year ago +1

    It took me a while to figure out / learn that SDXL is different stuff with different models. Whoops!

  • @divye.ruhela · 7 months ago

    I really wanted to get rid of that math node and had no intention of coming to look for an alternative here! Haha

  • @marjolein_pas · 1 year ago

    Thanks a lot! again.

  • @alexandrucosma6982 · 10 months ago

    Love it!

  • @hitmanehsan · 1 year ago

    What's your GPU?

  • @krystiankrysti1396 · 1 year ago

    That's nice, but you skipped the refiner, which is a bummer; without it the results are meh. Why did you do that? How can I connect the refiner now?

    • @sedetweiler · 1 year ago

      It just made it easier for this video. You can always add in the refiner.

  • @Макс-к3ъ · 1 year ago

    Thank you so much, from a Ukrainian soldier.

    • @sedetweiler · 1 year ago

      Wishing you the best!

    • @Макс-к3ъ · 1 year ago

      @@sedetweiler Thank you, friend, your videos are very informative and powerful. I have already served 16 months at the front in the infantry; now I am trying to grasp new technologies, although I suffer from digital cretinism :)

  • @ImAlecPonce · 1 year ago

    Thanks a ton :)

  • @petermclennan6781 · 1 year ago +1

    I don't understand this compulsion to speak and point and click so rapidly. Not everyone is as familiar with this system as you are. For those of us brand spanking new to this interface, this tutorial borders on unintelligible. PLEASE, slow down and explain a little more about what you're doing and WHY. You speak and click WAY too fast. And I'm a native English speaker with six months experience with Stable Diffusion on another UI. Subtitles aren't available here for some reason.
    In other words, follow the basic tenet of teaching: 1) tell them what you're going to tell them, 2) tell them, 3) tell them what you told them. It works. I taught hi tech subjects to adults for years.
    Then, the tutorials will be of lasting benefit to everyone, rather than an exercise in noob frustration.

  • @Q8CRAZY8Q · 1 year ago +1

    ty vm

  • @jeffg4686 · 1 year ago

    How come nobody takes ComfyUI to the web, rather than needing a kickass GPU?
    A service like Midjourney, but ComfyUI. Or does it exist already?

  • @LouisGedo · 1 year ago

    👋

  • @christianholl7924 · 1 year ago

    Why are people, faces, and hands still so f.cked up in SDXL? Even good 1.5 checkpoints generate much better results. I really can't get into SD > 1.5.

    • @MultiOmega1911 · 1 year ago

      Seems like the VAE is struggling a bit with small faces; close-ups look insanely good.

  • @Overtone-1 · 11 months ago

    Amazing, Thank you.

    • @sedetweiler · 11 months ago

      Glad you liked it!