SDXL ComfyUI img2img - A simple workflow for image 2 image (img2img) with the SDXL diffusion model

  • Published Jul 28, 2023
  • In this quick episode we build a simple workflow where we upload an image into our SDXL graph inside of ComfyUI and add noise to produce an altered image. I show you how to drop it into a standard workflow, as well as how to adjust it to get as much difference as we would like. This is a simple graph and can be used as a point of departure for your AI art or other Stable Diffusion projects. Note that I don't go overboard with the noise here; you can really push this into another realm if you add noise over 50%, so feel free to experiment!
    If you are confused, check out the SDXL graph basics here: • SDXL ComfyUI Stability...
    #stablediffusion #sdxl #comfyui #img2img
    Grab some of the custom nodes from civit.ai: civitai.com/tag/comfyui
    Here is the manager: civitai.com/models/71980/comf...
    Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)
    huggingface.co/stabilityai/st...
    The refiner is also available here (OFFICIAL):
    huggingface.co/stabilityai/st...
    Additional VAE (only needed if you plan to not use the built-in version)
    huggingface.co/stabilityai/sd...
  • Film & Animation
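The img2img idea in the description boils down to: encode the uploaded image to a latent, add noise in proportion to the denoise setting, then sample. A toy numpy sketch of that intuition (a simplified linear blend, not ComfyUI's actual scheduler math; the function name is my own):

```python
import numpy as np

def noised_start_latent(latent: np.ndarray, denoise: float, seed: int = 0) -> np.ndarray:
    """Toy sketch: blend an encoded-image latent with Gaussian noise.

    denoise controls how much of the original survives: 0.0 returns the
    input unchanged, 1.0 is pure noise (plain text-to-image territory).
    Real samplers scale noise per timestep; this linear mix only shows
    why pushing denoise past ~0.5 changes the image so much.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(latent.shape)
    return (1.0 - denoise) * latent + denoise * noise
```

At a denoise around 0.2 the result stays close to the source image; past 0.5 the noise dominates and the sampler is free to invent a very different picture, which matches the "another realm" behavior the description mentions.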

Comments • 151

  • @TailspinMedia
    @TailspinMedia 2 months ago +7

    I love that you build your workflows from scratch and explain each node!

    • @sedetweiler
      @sedetweiler 2 months ago

      Glad to hear that! I think it is important to know how it all fits together.

    • @B4zing4
      @B4zing4 2 months ago

      @@sedetweiler Did you upload the workflow JSON somewhere?

    • @glydstudios5632
      @glydstudios5632 19 days ago

      Exactly this.

  • @ThoughtFission
    @ThoughtFission 10 months ago +9

    Your tutorials are really great. They convey so much useful info in an easy to digest way. Thanks!

    • @sedetweiler
      @sedetweiler 10 months ago

      Glad you like them!

  • @seancollett3760
    @seancollett3760 10 months ago

    Thank you! Really love working this way with SD. You can really fine-tune the process to the graphics card and squeeze every drop out of the hardware.

  • @spiralofhope
    @spiralofhope 7 months ago +1

    This worked great!
    Now I think I know enough to tie multiple previous tutorials together, to get several advantages. I don't know how to disable and reroute certain things easily, so it looks like I'll be maintaining multiple workflows.

  • @nick4uuk
    @nick4uuk 10 months ago +45

    I just want to say something else that maybe others have missed regarding your excellent tutorials. It's not JUST that they're technically good, it's also because your voice is SO easy to listen to. You could teach all day and it wouldn't be exhausting for the student I think.

    • @sedetweiler
      @sedetweiler 10 months ago +3

      Wow, thank you!

    • @serpaolo7413
      @serpaolo7413 10 months ago +2

      @@sedetweiler Yes, he has a voice for radio.

    • @SachavaV
      @SachavaV 9 months ago +1

      1000% agree

    • @GlassHexagonalColumbus
      @GlassHexagonalColumbus 8 months ago +2

      you just fell in love with Scott, admit it

    • @AbstraktKardman
      @AbstraktKardman 8 months ago

      yep

  • @14MTH3M00N
    @14MTH3M00N 5 months ago +2

    Thankful that you repeat the first steps in these videos. It makes them a lot easier to remember.

    • @sedetweiler
      @sedetweiler 5 months ago

      Glad it is helpful!

  • @RufusTheRuse
    @RufusTheRuse 10 months ago +1

    Thanks again for your video series here - I think it's very important for those picking up ComfyUI - it has helped me understand workflows I've loaded from the example workflow site. And I really appreciate it when you zoom into a node you're doing something interesting with - too far out and it becomes a compressed blur.

    • @sedetweiler
      @sedetweiler 10 months ago

      Glad it was helpful!

  • @hleet
    @hleet 10 months ago

    Wow! I love your tutorials; you are the best resource on the ComfyUI topic. I'll grab that "manager" thing to get custom nodes.

  • @Sim00n
    @Sim00n 9 months ago +1

    You are SIMPLY THE BEST !!! fluent, effortless, snappy, concise, to the point, crystal clear,... you name it, man you are a Godsend !!!!! 😘😘

    • @sedetweiler
      @sedetweiler 9 months ago +1

      I appreciate it!

  • @vVinchi
    @vVinchi 10 months ago +1

    Amazing to see you are back to regular uploads😊

    • @sedetweiler
      @sedetweiler 10 months ago

      Yes! Thank you! I just had to work that life balance thing, but I have it sorted now. Cheers!

  • @AlekseyMarin12
    @AlekseyMarin12 5 months ago

    I think this is still one of the best and easiest workflows that gives quick results. Thank you!

    • @sedetweiler
      @sedetweiler 5 months ago +1

      Glad you think so!

  • @royjones5790
    @royjones5790 10 months ago +3

    I would love a tutorial explaining how to create 2 independent prompts and run them together for composition. Method 1: using position & area numbers (such as x0-x300, y0-y200 for prompt 1, etc.). Method 2: allowing a mouse-drawn mask area to be fed with the prompt - please include previews of the masks being drawn and a preview of the mask combination. I'd really appreciate it. ComfyUI allows for greater composition control, and understanding how to do it will really lean into its strength.

  • @kiretan8599
    @kiretan8599 2 months ago

    I just got into image generation, and thank you for your vids. I truly appreciate you sharing your knowledge 👍

  • @spiralofhope
    @spiralofhope 7 months ago +3

    omg I've been working for hours to try to figure out how to integrate this into the refinement workflow from earlier in the playlist.

    • @sedetweiler
      @sedetweiler 7 months ago

      Did you get it? Just checking that you are all good now.

    • @spiralofhope
      @spiralofhope 7 months ago +1

      @@sedetweiler
      Edit: I think I figured it out! I had things in the wrong order and some values were a bit wonky.
      5 img2img steps, denoised to 0.80
      ->
      3 refiner initialization
      ->
      20 base sampling steps
      ->
      12 final refining steps (not sure about this one)
      --
      I don't think so.
      I started with the refiner workflow, ending with a final image. This is an exact copy of a previous tutorial and it works 100%
      At the end of that workflow is a KSampler. I take that and split it off its latent.
      I take that latent and pass it through an image scale to side, with the side length of the longest of height or width I've been working with.
      I pass that latent to a LatentAdd and bring it into an exact copy of the img2img tutorial.
      I take that LatentAdd and insert it just before the img2img KSampler.
      It... "does stuff", but I intuit it isn't correct.
      I already cleaned it up with groups (which was new to me), and I'm working on cleaning it up with reroutes so I can show it. I don't like that reroutes are someone else's extension because it feels like something that can break on updates, but I hope that reroutes are popular enough to demand updates.

  • @nimoleying2037
    @nimoleying2037 7 months ago

    You're awesome, Scott. Thanks for all these tutorials!

    • @sedetweiler
      @sedetweiler 7 months ago

      My pleasure!

  • @hobolobo117
    @hobolobo117 10 months ago

    Started your tutorials - awesome work, great teacher! Excited to learn more!

    • @sedetweiler
      @sedetweiler 10 months ago

      Awesome, thank you!

  • @samwalker4442
    @samwalker4442 2 months ago

    Perfect Scott, thanks again

  • @lakislambrianides7619
    @lakislambrianides7619 10 months ago

    Excellent tutorial, I can't wait to see how far you can go with ComfyUI.

    • @sedetweiler
      @sedetweiler 10 months ago

      You and me both! :-)

    • @FirstLast-tx3yj
      @FirstLast-tx3yj 10 months ago

      @@sedetweiler Can I batch-modify 10 frames like we used to do in old Stable Diffusion?
      The 10 frames are about the same, extracted from a video, and I want to batch-modify them in a similar manner to the normal Stable Diffusion workflow.
      Can I do that with SDXL?

  • @monbritt
    @monbritt 10 months ago +7

    Can you help us use ControlNet in ComfyUI?

  • @Troyificus
    @Troyificus 10 months ago +6

    Your tutorials are amazing and incredibly informative. Really hoping you cover SDXL LoRA training and maybe even animation creation using ComfyUI at some point?

    • @sedetweiler
      @sedetweiler 10 months ago +2

      Those are coming soon!

    • @Troyificus
      @Troyificus 10 months ago

      @@sedetweiler Awesome! Can't wait!

  • @i.venture
    @i.venture 3 months ago

    Amazing, Thank you.

    • @sedetweiler
      @sedetweiler 3 months ago

      Glad you liked it!

  • @heikohesse4666
    @heikohesse4666 10 months ago

    super awesome, thank you

    • @sedetweiler
      @sedetweiler 10 months ago

      Welcome 😊

  • @leogamer4835
    @leogamer4835 10 months ago +1

    I would like to see a second part of this video, adding a KSampler refiner to the img2img output to improve the composition. Thanks for the video, and good work!

    • @untitled795
      @untitled795 8 months ago

      It's very easy to do and gives good results!

  • @alexandrucosma6982
    @alexandrucosma6982 2 months ago

    Love it!

  • @pedroavex
    @pedroavex 10 months ago +7

    Great video! Can you share the JSON file for this workflow?

  • @BenleGentil
    @BenleGentil 10 months ago +2

    This is great! How can we generate (not process) a batch of images from img2img?

  • @marjolein_pas
    @marjolein_pas 10 months ago

    Thanks a lot, again!

  • @Hash_Boy
    @Hash_Boy 10 months ago +1

    many thanks mister

    • @sedetweiler
      @sedetweiler 10 months ago

      Yup yup!

  • @jjgravelle
    @jjgravelle 10 months ago +1

    It was nice spending the weekend with you, boss. Well, with your videos, anyway. You've got me psyched on ComfyUI.
    Two things I hope to see you cover, eventually:
    1.) You mentioned in another video that ComfyUI might be used for model training? Yes, please; and
    2.) RE: the .yaml tweak for sharing a model folder touches on a larger issue. Play around with AI for a few days, and you end up with Oobabooga, TavernAI, Docker, Jupyter, the WSL, half a terabyte's worth of LLMs, et al AND competing versions of Python (for X you need 3.10, for Y you need 3.11)... then there are the Condas and Cudas and-- you get the picture. As methodical and organized as you are, I'm guessing your hard drive is set up logically and efficiently. A peek into the nuts and bolts of your setup would be, as the kids say, amaze-balls...
    Your pal,
    -jjg

    • @sedetweiler
      @sedetweiler 10 months ago

      Awesome to hear buddy! 🥂

  • @digiovannidesign910
    @digiovannidesign910 10 months ago

    Scott: I am really enjoying your videos. You mention in your videos that if we have a question we should drop it in the comments. I have a more general question about Stable Diffusion, ComfyUI, Automatic1111, and Midjourney. Is it best to post here or somewhere else?

  • @fredmcveigh9877
    @fredmcveigh9877 10 months ago

    Hi Scott,
    When I used Stable Diffusion 1.5, pre-ComfyUI, there was an option to render next to the hires fix for tiling. Is there any way to replicate that in ComfyUI, as I do a lot of seamless patterns?

  • @Fern-pe-fl
    @Fern-pe-fl 10 months ago +2

    What about inpainting using a mask? my current workflow relies a lot on a loop that looks like:
    prompting>inpaint>img2img>inpaint>img2img>inpaint... etc.

  • @ImAlecPonce
    @ImAlecPonce 10 months ago

    Thanks a ton :)

    • @sedetweiler
      @sedetweiler 10 months ago

      You're welcome!

  • @jibcot8541
    @jibcot8541 10 months ago +2

    I have the latest Derfuu nodes, but I only have Width or Height options on "Image scale to side", not "Longest". Any ideas why?

    • @sedetweiler
      @sedetweiler 10 months ago

      I would just pull the latest. It might be a new addition.

  • @JRis44
    @JRis44 9 months ago

    lol, didn't realize how short today's tutorial was going to be. Noice!
    On to the next!

    • @sedetweiler
      @sedetweiler 9 months ago

      Woot! Speed is the key, brother!

  • @chgian77
    @chgian77 9 months ago

    Awesome! Finally I can create my first breathtaking images on my desktop! Could you please create a tutorial explaining how we can design the character and the background separately and then combine them?

    • @sedetweiler
      @sedetweiler 9 months ago +1

      Yes, soon

  • @n3bie
    @n3bie 21 days ago

    When I followed this tutorial, after I created the primitive node and was going to connect it to the CLIPTextEncodeSDXL node as you did @0:54, I had a text_l input but not the text_g input you have in this video. What am I doing wrong? I'm building this workflow underneath (and disconnected from) the default workflow. Could that be causing it?

  • @thecolonelpridereview
    @thecolonelpridereview 7 months ago

    Nice tutorial. What's the advantage or difference in using Primitive plus CLIPTextEncodeSDXL compared to using CLIPTextEncode(PROMPT) for your prompts?

    • @sedetweiler
      @sedetweiler 7 months ago +1

      You can use both of the CLIP models if you use the SDXL one, otherwise you are only using one.

  • @Lucas-uk6fj
    @Lucas-uk6fj 9 months ago +2

    What's the shortcut key for the search box in the video? I want to know it. Thanks, bro!

  • @roseydeep4896
    @roseydeep4896 10 months ago +1

    idk if it's a stupid question to ask but I'm a newbie soooo.... Why does the SDXL CLIP encoder node have two text inputs? At first I thought it would be for both the positive and negative prompts but apparently it isn't. I don't get why you need to connect the primitive node twice lol.

    • @sedetweiler
      @sedetweiler 10 months ago +1

      There are 2 different CLIP models you can leverage. I will go into this deeper in another video but for now, they are both either positive or negative.

  • @joneschunghk
    @joneschunghk 10 months ago

    Would a "base" or "refiner" model be better for img2img?

  • @BlackDim100
    @BlackDim100 10 months ago +1

    I guess you can also use the scaler to upscale the final image output?

    • @sedetweiler
      @sedetweiler 9 months ago +1

      I put out a video today on that, since you can do that, but there are much better ways to get great results.

  • @haljordan1575
    @haljordan1575 5 months ago

    Thanks a lot for this. Could you do a tutorial on how to identify and solve common errors? Like when Load Image refuses to load an image.

  • @anujpartihar
    @anujpartihar 10 months ago

    Sir, is it possible to use SDXL online, i.e. without using the GPU, since my laptop is not very powerful? I would love to dive into this stuff. I am very new to all this, so forgive me for asking the obvious.

  • @jpgamestudio
    @jpgamestudio 10 months ago

    Hello brother, what is the function of the "clip space" on the menu on the right?

  • @dr.gurn420
    @dr.gurn420 3 months ago

    How do you make your projects run and compile so quickly? I tried using a VM on RunDiffusion and it still takes 2-5 minutes. Do you have any other VMs you would recommend that are affordable?
    Thank you for the content.

  • @user-fu5sz4su8u
    @user-fu5sz4su8u 9 months ago

    I wonder whether there is a workflow for img2img in batch, because I want to create video.

  • @marklouisehargreaves
    @marklouisehargreaves 10 months ago

    In ComfyUI is it possible to change the folder the saved images go to?

  • @MaisnerProductions
    @MaisnerProductions 9 months ago +1

    How is it that all your width and height inputs say 1024, yet your output image appears to be a portrait aspect ratio?

  • @user-bu8jc1eb1z
    @user-bu8jc1eb1z 2 months ago

    Hi, thanks for the video. I want to add ControlNet to this graph and use img2img to do some line-art anime style; is that possible? I tried to combine it with the ControlNet node setup you shared, but without success. Thank you.

  • @randymonteith1660
    @randymonteith1660 10 months ago

    Just started to learn ComfyUI. When I select a primitive, it does not have the 'string' output on it? Mine only says "connect to widget input". Next time could you include a JSON file or PNG image that contains your workflow? Thanks for the tutorial.

    • @s.anonyme6855
      @s.anonyme6855 10 months ago +1

      The primitive will adapt to whatever you link it to.

    • @randymonteith1660
      @randymonteith1660 10 months ago

      @@s.anonyme6855 thanks for the tip

  • @maximebraem4115
    @maximebraem4115 10 months ago +1

    Does GFPGAN work in ComfyUI? I keep getting an error ('tuple' object has no attribute 'cpu'). Can you make a video about the best Roop implementations?

    • @justinwhite2725
      @justinwhite2725 10 months ago

      Are you running from a CPU or GPU?
      I'm using a CPU and haven't been able to get Roop to work in either Comfy or A1111.

    • @maximebraem4115
      @maximebraem4115 10 months ago +1

      @@justinwhite2725 GPU (AMD), and Roop works on both Comfy and A1111; the result is just really pixelated if it's a close-up image, so I thought GFPGAN would make it better, but it doesn't work.

  • @erdbeerbus
    @erdbeerbus 9 months ago

    Great! Is there an option to use that for an image sequence in an automatic way? Thanks in advance!

    • @sedetweiler
      @sedetweiler 9 months ago +1

      You can just drop it into the middle of any workflow.

    • @erdbeerbus
      @erdbeerbus 9 months ago

      @@sedetweiler thank you ...

  • @lioncrud9096
    @lioncrud9096 8 months ago

    Is there a way to do a higher batch number than 1 at a time? I use an Efficient Loader that combines Model, Prompts, VAE, and Batch Steps in one node. When I increase the batch number to, say, 3, Comfy only does 1 image.

    • @sedetweiler
      @sedetweiler 8 months ago

      Perhaps the batch number is bugged when it's part of the group? That is how I would normally do more than one per batch.

  • @carlosrios5162
    @carlosrios5162 6 months ago

    What if there's a refiner? Do I use the VAE of the refiner or the base model? And which one goes to the VAE decode?

    • @sedetweiler
      @sedetweiler 6 months ago

      I would try them both, as each has a personality. You choose the one you feel is best for your situation.

  • @CaptainKokomoGaming
    @CaptainKokomoGaming 5 months ago

    Is there a way to add a trigger or something similar, so that if you want to keep the image you can trigger an upscale of it?

    • @sedetweiler
      @sedetweiler 5 months ago

      Yes, you can use the Image Chooser node (search for choose).

    • @CaptainKokomoGaming
      @CaptainKokomoGaming 5 months ago

      @@sedetweiler Thank you for getting back to me. I am just starting to use Comfy, and your tutorials are the first thing that has got me to see the potential instead of the confusion and the overwhelming screen.

  • @spiralofhope
    @spiralofhope 6 months ago +1

    It took me a while to figure out / learn that SDXL is different stuff with different models. Whoops!

  • @ramondiaz5796
    @ramondiaz5796 3 months ago

    Just at the step where you add the "Derfuu nodes": they don't appear, because when I install them I get "import failed". What could I do?

  • @aZiDtrip
    @aZiDtrip 4 months ago

    That Derfuu node, "image scale to side" - what's the difference from using "latent scale to side"?

  • @KallutoZoldyck
    @KallutoZoldyck 6 months ago

    This helps with text-to-video, but how do I do image-to-video locally? Text-to-video is kind of useless with LoRA involvement if we actually want good results.

  • @froztbytesyoutubealt3201
    @froztbytesyoutubealt3201 10 months ago

    I take it the next one would be a ComfyUI+SDXL inpainting tutorial?

    • @sedetweiler
      @sedetweiler 10 months ago

      Yes, that is coming soon!

  • @Maxime_motion
    @Maxime_motion 9 months ago

    I would like to see how we can make a portrait of yourself (or another person) in ComfyUI.

  • @GamingDaveUK
    @GamingDaveUK 10 months ago +1

    Can you add a link to the manager? I can't see it on Civitai.

    • @sedetweiler
      @sedetweiler 10 months ago +1

      civitai.com/models/71980/comfyui-manager

  • @schumanncombo
    @schumanncombo 6 months ago

    could someone explain how to do this as a batch, to process a folder of images?

  • @tcb133
    @tcb133 9 months ago

    If I use ComfyUI in Google Colab, how can I get that Manager to work?

    • @sedetweiler
      @sedetweiler 9 months ago +1

      I don't think so. In the colab you can specify the mods you want to install, but I don't think the manager will work since a restart is required to make the new nodes show up.

  • @KnightZexen
    @KnightZexen 10 months ago

    I can't find your "image scale to side". I googled and checked the resources you provided; none of them, not even the manager, gives me that node.

  • @joebreaker11
    @joebreaker11 4 months ago

    Guys, I cannot find the installed custom nodes in the search field - Derfuu among them :( When I install custom nodes in the Manager - via simple installation or via GitHub URL - it shows "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." But when I click on the RESTART button, it shows only RECONNECTING... which lasts forever. If I click CLOSE and refresh the browser, a blank page appears instead of the Comfy interface - and I have to restart SDXL. But it won't change anything - the node still won't appear :(((( Can anyone help me?

  • @purposefully.verbose
    @purposefully.verbose 10 months ago

    Is there a good workflow for ControlNet in ComfyUI?

    • @purposefully.verbose
      @purposefully.verbose 10 months ago

      oh i see someone already asked... so - that tut next? :)

    • @purposefully.verbose
      @purposefully.verbose 10 months ago

      also also, are any of the custom nodes on CivitAi "questionable"? lots of flavors, but some look kinda sketchy.

  • @gmfPimp
    @gmfPimp 6 months ago

    Is this really image-to-image, or is it how to use an image already generated with ComfyUI in a workflow? For example, with Stable Diffusion you can use an image as the input and then create from that image.

  • @Q8CRAZY8Q
    @Q8CRAZY8Q 9 months ago +1

    ty vm

    • @sedetweiler
      @sedetweiler 8 months ago

      You are welcome!

  • @Callitister
    @Callitister 9 months ago

    Can you please tell me how to save the image generated here?

    • @sedetweiler
      @sedetweiler 9 months ago

      Use the Save Image node at the end instead of the Image Preview.

    • @Callitister
      @Callitister 9 months ago

      @@sedetweiler How do I switch to the Save Image node? And where will it be saved?

  • @renanarchviz
    @renanarchviz 4 months ago

    The Utils node folder does not appear.

  • @MAVrikrrr
    @MAVrikrrr 9 months ago

    What about img2img with SDXL base + SDXL refiner?

    • @sedetweiler
      @sedetweiler 9 months ago

      Sure! That works great!

    • @MAVrikrrr
      @MAVrikrrr 9 months ago

      @@sedetweiler It's confusing to put it all together properly. For base+refiner I use the advanced KSampler, so I guess that I have to pass the image latent to the base KSampler and then set start step > 0. This will be equivalent to the denoise value.
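      The equivalence this comment describes - skipping early steps on the advanced KSampler instead of lowering denoise on the plain one - amounts to simple arithmetic. A hypothetical helper (the name and rounding choice are mine, not a ComfyUI API):

      ```python
      def start_step_for_denoise(total_steps: int, denoise: float) -> int:
          """Map a plain-KSampler denoise value to an advanced-KSampler start step.

          denoise is the fraction of the schedule actually run, so the first
          (1 - denoise) portion of the steps is skipped.
          """
          if not 0.0 <= denoise <= 1.0:
              raise ValueError("denoise must be in [0, 1]")
          return round(total_steps * (1.0 - denoise))
      ```

      For example, with 20 total steps, a denoise of 0.8 corresponds to starting at step 4.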

    • @sedetweiler
      @sedetweiler 9 months ago +1

      That is correct, but I am trying to keep things a bit simpler as it is still early on in the series. You are ahead of the curve my friend!

  • @user-fz4xr3xg4j
    @user-fz4xr3xg4j 9 months ago

    Thank you so much, from a Ukrainian soldier.

    • @sedetweiler
      @sedetweiler 9 months ago

      Wishing you the best!

    • @user-fz4xr3xg4j
      @user-fz4xr3xg4j 9 months ago

      @@sedetweiler Thank you, friend. Your videos are very informative and powerful. I have already served for 16 months at the front in the infantry... now I am trying to grasp new technologies, although I suffer from digital cretinism :)

  • @mikebert
    @mikebert 10 months ago

    Stupid question: how can I remove a slot from a node?

    • @sedetweiler
      @sedetweiler 10 months ago

      Depends. Which slot?

    • @mikebert
      @mikebert 10 months ago

      @@sedetweiler I made a "target height" slot wrongly in the SDXL text node.

    • @sedetweiler
      @sedetweiler 10 months ago

      @@mikebert so is it a nested node or did you code one?

    • @mikebert
      @mikebert 10 months ago

      @@sedetweiler It's the one from your tutorial, the second after the text primitive (I don't remember the exact name). I solved it by making a new node.

  • @krystiankrysti1396
    @krystiankrysti1396 10 months ago

    That's nice, but you skipped the refiner, which is a bummer; without it the results are meh. Why did you do that? How can I connect the refiner now?

    • @sedetweiler
      @sedetweiler 10 months ago

      It just made it easier for this video. You can always add in the refiner.

  • @ehsankholghi
    @ehsankholghi 4 months ago

    What's your GPU?

  • @RongmeiEntertainmentGroup
    @RongmeiEntertainmentGroup 9 months ago

    I got lost when you clicked the Manager button, which I don't have.

    • @sedetweiler
      @sedetweiler 9 months ago

      There is a link in the description to download that from Civitai.

  • @LouisGedo
    @LouisGedo 10 months ago

    👋

  • @jeffg4686
    @jeffg4686 4 months ago

    How come nobody takes ComfyUI to the web, rather than needing a kickass GPU?
    A service like Midjourney, but ComfyUI - or does it exist already?

  • @petermclennan6781
    @petermclennan6781 5 months ago +1

    I don't understand this compulsion to speak and point and click so rapidly. Not everyone is as familiar with this system as you are. For those of us brand spanking new to this interface, this tutorial borders on unintelligible. PLEASE, slow down and explain a little more about what you're doing and WHY. You speak and click WAY too fast. And I'm a native English speaker with six months experience with Stable Diffusion on another UI. Subtitles aren't available here for some reason.
    In other words, follow the basic tenet of teaching: 1) tell them what you're going to tell them, 2) tell them, 3) tell them what you told them. It works. I taught hi tech subjects to adults for years.
    Then, the tutorials will be of lasting benefit to everyone, rather than an exercise in noob frustration.

  • @christianholl7924
    @christianholl7924 10 months ago

    Why are people, faces, and hands still so f.cked up in SDXL? Even good 1.5 checkpoints generate much better results... I really can't get into SD > 1.5.

    • @MultiOmega1911
      @MultiOmega1911 10 months ago

      Seems like the VAE is struggling a bit with small faces; closeups look insanely good.

  • @goactivemedia
    @goactivemedia a month ago

    Hi, do you have a video on this, but using a second image as what you want the first one to look like, for that kind of look? So image-to-image style transfer, I guess? Thanks.

  • @Lacher-Prise
    @Lacher-Prise 9 months ago

    Hello, great video, so interesting! At @0:51 you add a "primitive" - where do you find "logic"? In my ComfyUI I have neither "logic" nor "_classifier"; what custom node did you add?
    Also, my own "utils + primitive" doesn't look the same as yours (STRING + text box) - mine says "connect to widget input" without a text box.
    Thank you for sharing; your playlist about ComfyUI and Stable Diffusion is a master class. Congratulations!
    Team Shibuntu