ComfyUI Infinite Upscale - Add details as you upscale your images using the iterative upscale node

  • Published 23 Nov 2024

Comments • 258

  • @JosephKuligowski · 1 year ago · +9

    How do you disable use_tiled_vae?
    EDIT: You have to install BlenderNeko: Tiled Sampling for ComfyUI; that missing dependency is the reason behind this issue. (A manual-install sketch follows this thread.)

    • @sedetweiler · 1 year ago · +1

      Oh, good find! I will pin this comment.

    • @kaziahmed · 1 year ago · +1

      @sedetweiler Where do I put BlenderNeko: Tiled Sampling in my ComfyUI directory?

    • @sedetweiler · 1 year ago · +3

      All of those go under custom_nodes. You can use the Manager to install it, and that makes life easier.

    • @kaziahmed · 1 year ago · +1

      Thank you! @sedetweiler

    • @kaziahmed · 1 year ago · +1

      Btw, great video! Thank you for such an informative tutorial. @sedetweiler
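
For anyone who prefers a manual install over the Manager, a minimal sketch; the repository URL and folder layout here are assumptions to verify, but ComfyUI custom nodes normally live under custom_nodes:

```python
# Minimal sketch of a manual custom-node install. Assumes git is on PATH and
# that BlenderNeko's tiled sampling repo lives at the URL below (verify first).
import subprocess
from pathlib import Path

custom_nodes = Path("ComfyUI") / "custom_nodes"  # adjust to your install path
repo = "https://github.com/BlenderNeko/ComfyUI_TiledKSampler"  # assumed URL
subprocess.run(["git", "clone", repo], cwd=custom_nodes, check=True)
# Restart ComfyUI afterwards so the new nodes get registered.
```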

  • @justinwhite2725 · 1 year ago · +1

    I like that the pipe takes inputs rather than just loading the model. I've gotten great results using a different CLIP from a different model.

  • @Sim00n · 1 year ago · +5

    You are SIMPLY THE BEST !!! fluent, effortless, snappy, concise, to the point, crystal clear,... you name it, man you are a Godsend !!!!! ⭐🌟 and love the recap at the end of the video, excellent !!!🤩🌟

    • @sedetweiler · 1 year ago

      Wow, thank you! Glad you enjoyed it!

  • @margotpaon · 1 year ago · +2

    Amazing tutorial, Scott. Thank you very much! I'm learning about Stable Diffusion and ComfyUI, and this class helped me a lot with upscalers. I hope everyone realized that, in addition to adding the purple hair, we can also remove some detail with a negative prompt.

    • @sedetweiler · 1 year ago · +1

      Glad you enjoyed it!

  • @AIMusicExperiment · 1 year ago · +3

    You are a hero! Every time I watch one of your videos, I learn things I would never have guessed. The Impact Pack is a huge go-to for me; I also LOVE the Efficiency Nodes.

    • @sedetweiler · 1 year ago · +1

      Thanks for watching!

  • @AbstaartKardman · 1 year ago · +4

    Great tutorial! Thanks for taking the time to clear these things up. I have to mention that I happened to be watching your tutorial right before my daily workout routine, which added a whole new unexpected layer of entertainment, mixing academia with athleticism. Thank you again for sharing your knowledge!

    • @sedetweiler · 1 year ago · +2

      Great to hear! I have a few more coming that will be mind blowing as well.

  • @CyberthonTV · 1 year ago · +2

    I really like how you step through your tutes, step by step and clear as a bell!

    • @sedetweiler · 1 year ago · +1

      Thank you!

    • @nepobedivititanik · 1 year ago · +1

      @sedetweiler Can you please upload the workflow file?

  • @DurzoBlunts · 1 year ago · +6

    For those with low VRAM: this node eats up VRAM! A usable alternative is the Ultimate SD Upscale custom node, which is not as VRAM-hungry. This iterative node limits me to about 7 steps and a 1.5x upscale, whereas I can do 2.25x or even 2.5x with Ultimate SD Upscale. (See the scaling arithmetic after this thread.)

    • @sedetweiler · 1 year ago · +1

      That is actually the other method I use for upscaling, but I wanted to cover this one as well since there are other strategies I show in here that are not exactly related to the upscaler but are helpful to know overall. Cheers! That video is also coming soon!

    • @DurzoBlunts · 1 year ago

      @sedetweiler I completely agree with your documenting everything and making sure the viewer knows how it works. You're doing a great job for the newcomers to SD node-based generation.

    • @sedetweiler · 1 year ago

      Thank you!
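
To see why the iterative approach gets expensive, note that memory pressure grows roughly with pixel count, i.e. with the square of the running scale factor; a quick sketch of how per-step factors compound (the sizes below are illustrative only):

```python
# Quick sketch: per-step scale factors compound, and pixel count (a rough
# proxy for VRAM pressure) grows with the square of the running scale factor.
base_w, base_h = 1024, 1024
total_scale, steps = 2.5, 3
per_step = total_scale ** (1 / steps)  # factor applied at each iteration

w, h = base_w, base_h
for i in range(1, steps + 1):
    w, h = w * per_step, h * per_step
    pixels = (w * h) / (base_w * base_h)
    print(f"step {i}: {w:.0f}x{h:.0f}  (~{pixels:.1f}x the base pixel count)")
```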

  • @ChielScape · 1 year ago · +23

    Darn young'uns 'n' their wayfoo models

    • @sedetweiler · 1 year ago · +3

      Kids these days. Geez. ;-)

  • @Marcus-si7su · 11 months ago · +1

    Very nice and slow, showing how everything fits together. I really liked watching this.

    • @sedetweiler · 11 months ago · +1

      Awesome, thank you!

  • @Puckerization · 1 year ago · +27

    Excellent tutorial Scott, thank you. I've had the Impact Nodes installed for a week or so but it's really hard to find tutorials on their various functions. I've learned a lot from this video. Please add more Impact Nodes tutorials when you get the chance.

    • @sedetweiler · 1 year ago · +5

      Thank you! Yes, there are going to be a lot more coming as I think this is a wonderful pack of custom nodes.

    • @DurzoBlunts · 1 year ago · +3

      The Impact Pack creator has a YouTube channel where they're uploading examples and eventually tutorials.
      The channel is 'Dr Lt Data', I believe.

    • @Puckerization · 1 year ago · +2

      @@DurzoBlunts Yes, I've seen them. They are all silent movies of someone who knows what they are doing but can't communicate it very well to the rest of us.

    • @sedetweiler · 1 year ago · +3

      Yes, I found him a few days ago and it helped with some of the new stuff. I have been using the pack for a few weeks now; hopefully my videos will get it noticed more.

  • @tsutsen1412 · 1 year ago · +5

    The best videos on Comfy! Love it, thank you very much!

  • @piersyfy4148 · 1 year ago · +3

    The best ComfyUI tutorial I've come across. Thank you so much mate!

  • @hakandurgut · 1 year ago · +6

    This channel is the only one I have all notifications on for. It's also the only channel I don't fast-forward :) I enjoy every moment of the videos.

    • @sedetweiler · 1 year ago · +1

      Wow, thanks! You made my day! Cheers!

    • @hakandurgut · 1 year ago · +1

      Hope you get to find more time for more videos.

    • @sedetweiler · 1 year ago · +1

      Yup! More are on the way soon!

  • @TailspinMedia · 10 months ago

    This is awesome. I love that you walk through the workflow nodes to explain what is happening.

  • @brandonflores4 · 11 months ago · +1

    Things certainly escalated in this video. Thank you so much; I could not have understood this without you.

  • @Enricii · 1 year ago · +3

    Thanks for sharing; there is a huge need for explanations of custom nodes! Sometimes they don't even have the "automatic" input node to choose from, so it's quite difficult to understand their usage (not speaking about the Impact Pack here).
    Regarding the topic of the video, I've been experimenting with different upscale methods and nodes, this one included. My conclusion is that Ultimate SD Upscale with ControlNet Tile is the best method (as it is in A1111) :D

    • @sedetweiler · 1 year ago

      Yup! I agree, and that video is probably next. However, I wanted to cover some of the concepts in here that might be useful when dealing with the node driven process. It isn't the best upscaler by far, but it does give us another tool in our pocket. Cheers!

  • @musicandhappinessbyjo795 · 1 year ago · +2

    This was such an awesome video; it really shows the power of ComfyUI. Please bring more videos like these. They are very rare on YouTube; only a few people are actually uploading ComfyUI videos.

    • @sedetweiler · 1 year ago · +2

      I will keep them coming!

  • @reapicus557 · 7 months ago

    Excellent video! I need to get in the habit of using the pipes more often. Also, I had no clue about the iterative upscalers, nor have I really been able to figure out hooks before now. This has helped me a bunch. :)

  • @AIAngelGallery · 1 year ago · +2

    Just wow! Thanks for introducing these cool nodes!

  • @mikerhinos · 1 year ago · +2

    Didn't know this technique, thanks!
    It changes the base image quite a lot, though, compared to a traditional tiled upscale.

    • @sedetweiler · 1 year ago · +1

      I did have my noise pretty high, so I could have controlled that. However, I always like to see what details it adds, so I sort of enjoy this process of exploration.

  • @ekot0419 · 5 months ago

    Thank you so much for this tutorial. Now I am learning how to use it. I could have just downloaded the workflow and been done with it, but then I wouldn't have learned anything beyond how to use it.

  • @MugiwaraRuffy · 5 months ago

    First of all, I learned a cool new approach here. Furthermore, I picked up some minor but handy tricks, like Shift+clone for keeping connections, or that you can set the line direction on reroute nodes.

  • @ChielScape · 1 year ago · +4

    I've been using an iterative upscale method where I basically do what A1111 does with img2img, and I'm getting good results. Rather than upscaling the latent, I upscale the image and then re-encode to latent between every step. As you mentioned, upscaling latent images does weird things. The first x2 upscale step uses 0.40 denoise, while the second uses 0.20. The Impact nodes do seem useful; I've been looking for a way to concat prompts. (A sketch of this loop follows this thread.)

    • @sedetweiler · 1 year ago · +3

      I think there are so many ways to bend this, and I love that you are coming at it from another angle but finding some of the same things. Keep the ideas coming!

    • @nuppanuppa · 1 year ago

      How do you do it at every step?
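
A pseudocode sketch of the image-space loop described above; the helpers are hypothetical stand-ins for the VAEDecode, image-upscale, VAEEncode, and KSampler nodes of an actual graph:

```python
# Pseudocode sketch of the image-space iterative upscale described above.
# decode/resize/encode/sample are hypothetical stand-ins for the VAEDecode,
# image-upscale, VAEEncode, and KSampler nodes in a real ComfyUI graph.
def iterative_upscale(latent, decode, resize, encode, sample,
                      denoises=(0.40, 0.20), factor=2.0):
    for denoise in denoises:           # e.g. 0.40 on the first pass, 0.20 on the second
        image = decode(latent)         # leave latent space before resizing
        image = resize(image, factor)  # pixel-space resize avoids latent-upscale artifacts
        latent = encode(image)         # back to latent for the next sampling pass
        latent = sample(latent, denoise)  # low denoise keeps composition, adds detail
    return latent
```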

  • @PaulFidika · 1 year ago

    Holy shit this man is a master of ComfyUI. I feel like 'master of ComfyUI' could be a full college course.

  • @sunshineo23 · 6 months ago

    I'm just shocked that after you corrected the starting denoise to 0.3, the change to the image is almost like editing the image by prompt. This is going to change the world for a lot of people.

  • @xxraveapexx2750 · several months ago

    I learned so many things in this video that I didn't know before :D Not just upscaling, but also the copy-a-node and Ctrl + ↑ trick. Do you have a video for all these little QOL combinations?

  • @abdelhakkhalil7684 · 7 months ago

    Nice workflow. So basically, this is the Hires Fix from Automatic1111, but more advanced and customizable.

  • @JamesPound · 1 year ago · +1

    Thanks for sharing your process. It's great to see innovation. I'm not sure ComfyUI is the best choice for this workflow, though; each step is getting less detailed overall. It might be better to have more control over what happens between each step (à la Auto1111).

    • @sedetweiler · 1 year ago · +1

      You can do that here as well; I just wanted to show one method. Another method video is coming soon that you might prefer, or you can even mix them together!

  • @Padybu · 1 year ago · +1

    Just what I needed, Thank you!

  • @FusionDeveloper · 11 months ago

    That's cool, I didn't know you could do stuff like this to have it choose a random one (a sketch of how this resolves follows this thread):
    {sunrise|sunset|raining|morning|night|foggy|snowing}

    • @sedetweiler · 11 months ago

      Yup! Lots of other prompt tricks in there as well.
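
A minimal sketch of how that {a|b|c} syntax can resolve to one random option; this handles only the simple non-nested case, and real wildcard nodes do more:

```python
# Minimal sketch: resolve each {a|b|c} group in a prompt to one random option.
import random
import re

def resolve(prompt: str) -> str:
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: random.choice(m.group(1).split("|")),
        prompt,
    )

print(resolve("a city street at {sunrise|sunset|night}, {raining|foggy|snowing}"))
```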

  • @geoffphillips5293 · 2 months ago

    It's great fun. Using it with an image works pretty well when it's an AI image, not so good on real photos.

  • @justinwhite2725 · 1 year ago · +1

    Average is pretty straightforward (it literally just takes the matrices and averages all the numbers), but what is the difference between concat and combine? And how do they interact with ControlNets before them?
    I've gotten strange results when using either, depending on which connection I add things to.
    I have yet to see any documentation that really clarifies the difference.
    My understanding is that combine is basically the :: operator in Midjourney, which makes me wonder what concat does.
    It can't be adding words to the end of the prompt, because it's post-encode. It probably appends the matrix to the end of the previous one, but what does that actually do in terms of how it's processed? (A sketch of the three operations follows this comment.)
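
For intuition, a toy sketch of the three merges on prompt-embedding tensors; the shapes and the combine behavior are assumptions based on how these nodes are commonly described, not a reading of the ComfyUI source:

```python
# Toy sketch: three ways two prompt embeddings of shape [tokens, dim] might merge.
import torch

a = torch.randn(77, 768)  # embedding of prompt A (toy shapes)
b = torch.randn(77, 768)  # embedding of prompt B

avg = (a + b) / 2               # "average": elementwise mean, still one 77-token cond
cat = torch.cat([a, b], dim=0)  # "concat": one longer 154-token cond, so cross-
                                # attention sees both prompts in a single pass
combined = [a, b]               # "combine": both conds kept separate; the sampler
                                # evaluates each and merges the noise predictions
print(avg.shape, cat.shape, len(combined))
```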

  • @RickHenderson · 1 year ago · +1

    Great work Scott. Do you have a workflow for taking just an image that's already generated and then upscaling it? Thanks.

    • @sedetweiler · 1 year ago · +1

      Yes, I have done that often (even in the live stream today). I am not sure I have a video on that specifically, but I do it all the time.

  • @Potts2k8 · 5 months ago · +1

    Sorry, I'm a noob, but is there a way to use this to upscale already existing images?
    Everything I've tried either gives me errors or takes ages with no change to the original image; hell, it even makes it worse most of the time.

  • @JeanDupont-x6j · 1 year ago · +1

    Very well explained; I like it.
    Question: do you think it's possible to preserve the pink t-shirt when you change the hair color?
    I wonder if there is a way to preserve an element's color (I tried cutoff, but the result wasn't perfect).

    • @sedetweiler · 1 year ago · +1

      There are a lot of ways, but the graph would get complicated. However, I think we are going there soon as a lot of the basics are covered now.

  • @hippotizer · 10 months ago

    Extremely useful things to learn from this video!

    • @sedetweiler · 10 months ago

      Glad to hear that!

  • @michaelbayes802 · 1 year ago · +1

    Hi Scott, thanks for your great videos! Keep 'em coming. One question, though: what is the main advantage of using this upscale process? Is it quality, or is it quicker? I'm not sure I understood, after watching the video, why I should use this. Thanks.

    • @sedetweiler · 1 year ago

      This was just an example of an upscaler workflow, and there are many. I did this one first to show some of the more interesting aspects you can use, like late prompt injection, provider nodes, and other little things. It probably isn't the best upscaler, but it is much better than the default node. Another one is coming soon that is my favorite, but it isn't anywhere near as interesting to set up.

  • @pn4960 · 1 year ago · +5

    I can use SDXL with my 6GB graphics card in ComfyUI! Isn't it amazing?

    • @sedetweiler · 1 year ago · +3

      I have a 4GB laptop that can also run it... slow for sure, but the fact that it works is pretty amazing! Cheers!

  • @johnmcaleer6917 · 1 year ago · +1

    I've downloaded some 'monster workflows' from some very clever users but can't see much value in them compared to your lovely simple workflows. I'm not sure Comfy needs to be as complicated as some graphs make it. Your vids are so accessible; keep 'em coming. A nice simple inpainting one would be good if you are in need of suggestions... 😉

    • @sedetweiler · 1 year ago · +1

      Glad you like them! Inpainting is coming soon! I am actually doing that live on Discord today at the official Stability.ai Thursday broadcast.

  • @Lorentz_Factor · 11 months ago · +1

    With some models in SDXL I had this working pretty well; with other models, however, it seems to fade to a gray and hazy look. Do you have any idea why this might be happening? I've tried adjusting the CFG, but that doesn't seem to have much effect other than looking faded and fried.
    Also, did you mean to title the video iterative or infinite?

  • @TR-707 · 11 months ago

    After I hooked up my own upscale models... WHEWWWW, this is insane.

    • @sedetweiler · 11 months ago · +1

      Woot!

    • @TR-707 · 11 months ago

      @sedetweiler It's very funky to edit the image with sharpening and higher contrast to crisp it up before the upscaling, which usually blands images out.

  • @AILifeHacks · 1 year ago

    Great video; very concise explanation and easy to follow.

  • @andresz1606 · 11 months ago

    Could you explain why you have set such a high CFG in the HookProvider and a low CFG in the UpscalerProvider? The default values are the other way round. I can't believe how those values worked fine in your case because they failed miserably in my workflow.

  • @samwalker4442 · 8 months ago

    It does make me laugh when your OCD is triggered... you set mine off as well!!

  • @mohanedkhater · several months ago

    I love your content, but I can't really find this model anywhere. Any chance you could provide a link to the checkpoint?
    Thank you.

  • @patagonia4kvideodrone91 · 1 year ago · +1

    It would be very useful if you could share some of those images with us, so we can obtain the blueprint and simplify putting this into practice; with the Manager, which lets us install the missing nodes, it is much simpler. The video is very good and the process is very clear. I had been testing several upscalers (x4, x8) and was able to generate photos of up to 16000x16000 at 500MB each. The good thing about this technique is that better details can be applied as the image is enlarged.

  • @EranMahalu · 11 months ago

    Amazing tutorial, thanks! Question: do you think it can work off of an input image rather than a prompt?

  • @nerdaxic · 7 months ago

    Thank you, this was a super helpful tutorial ✌🏻

  • @ThedjAwesome · 10 months ago

    Your videos are really helpful; thanks for making them. After I ran the process, I received a third image without purple hair. The upscaler I fed into PixelKSampleUpscalerProvider, 4x_NMKD-Superscale-SP_178000_G.pth, gave me a result without purple hair that is blurry. I may try to track down the upscaler you used. Anyway, how do I re-run the whole process? If I click Queue Prompt in an attempt to redo it all, nothing happens.

  • @cclarkk · 1 year ago

    Thanks for these fantastic videos! They've been incredibly helpful.
    Can I use this workflow/ComfyUI to generate videos too? Be it from a single frame of an existing video or from latent noise.

  • @JonnieMo · 1 year ago

    Fantastic demo, thank you!

  • @monkeymediapl · 11 months ago · +1

    Hi. Awesome tutorial, but in my case something goes wrong. If in PixelKSampleUpscalerProviderPipe I put denoise under 1.0 (e.g., 0.3 like you), I get low-quality output. When I leave it at 1.0, everything is super crisp but a lot different from the first generated image. Do you have any clue how to make this work?

    • @sedetweiler · 11 months ago

      Are you sure it was the denoise and not another setting? I did that in the video and caught myself later.

    • @monkeymediapl · 11 months ago

      @sedetweiler I'm afraid it is the denoise...

  • @PatrickIsbendjian · 1 year ago

    @sedetweiler Thanks for a great tutorial!
    I tried your workflow step by step and had no issues. However, I found that the quality of the result with the latent upscale was not up to my expectations, with some jaggy lines. I decided to experiment a bit and tried an Iterative Upscale (Image). It turns out that it needs the same Provider and produces basically the same results as the other upscaler. It seems that the only difference is that it takes the VAE as input and outputs an Image, thus saving the VAEDecode node.
    Now the interesting part: if I plug 4x_UltraSharp into the upscaler_model input, I get much better results (but it slows down generation). As far as I know, the model is supposed to work only on images, not on latents, yet everything goes smoothly whether the Iterative Upscale is Latent or Image. It seems that the Provider does Decode/Encode as necessary. Am I correct, or am I missing something? (A sketch of that round trip follows this comment.)
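
A hedged sketch of the round trip the Provider appears to do when a pixels-only upscale model sits inside a latent workflow; the helper names are hypothetical stand-ins:

```python
# Hedged sketch: how a pixels-only upscale model can serve a latent workflow.
# decode/encode stand in for VAEDecode/VAEEncode; upscale_model for an
# ESRGAN-style model such as 4x_UltraSharp (hypothetical call forms).
def upscale_latent_with_pixel_model(latent, decode, encode, upscale_model):
    image = decode(latent)        # latent -> pixels
    image = upscale_model(image)  # the pixel model only ever sees pixels
    return encode(image)          # pixels -> latent for the next sampler pass
```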

  • @MrMorvar · 9 months ago

    Is this workflow still valid? Compared to Automatic1111's img2img-tab upscaling with just Euler a and a denoise of 0.2-0.3, I'm clearly losing detailed lines when working on anime pictures, even if I go up really slowly in 5 steps.

  • @LatentNoise-m9v · 1 year ago

    Thanks for the clear explanations.
    I tried to insert a ControlNet (the latest Canny SDXL 256) into the conditioning pipeline, but image generation fails after the first sampler. It seems that this workflow is not compatible with ControlNet. Is there a solution to avoid this?

    • @sedetweiler · 1 year ago

      Hmmm, it should work. I will have to give it a try.

  • @otakufra · 10 months ago

    Hello, thanks so much for your tutorial. I'm just wondering if I can add a 4th upscaler, or even a 5th? I can't figure out how to get further than the 3rd. Do you have any tips, please? Thanks again, Scott.

  • @--signald · 1 year ago · +2

    Hey Scott, another good tutorial. I did a test at the mid-point of this tut and my first upscale from 704 x 448 to 1880 x 1200 (close enough to 1920 x 1080 to work with) took 19 minutes! Apples to oranges, but using Controlnet Tile in A1111 took 2.5 minutes. I'm working on a series of Deforum animations that mean upscaling of over 40,000 frames. I turned to this tut in the hope that Comfy would come to the 2.5 min rescue. Any chance you've got a trick in your pocket for us animators? Because this won't cut it. (Oh, and I haven't seen a Load Batch node. Is there one?)

    • @sedetweiler · 1 year ago

      I am sure once we have ControlNet we can get the times closer together.

  • @Skettalee · 1 year ago · +1

    I was hoping you would get to the point where you change the starting image to an upload (or whatever you call that dialog). I have my own pictures I took as a kid and would love to see how it would upscale them. Could you show us how to add your own image at the beginning to upscale?

    • @sedetweiler · 1 year ago · +1

      Yes, I can do this in a video. I have done things like that in live-streams but not in an official video yet.

    • @Skettalee · 1 year ago

      @sedetweiler I would love to see it and learn it. ComfyUI is still so confusing to me; I feel like I'm learning it through trial and error and a little search-and-find. The thing is, with all this fresh generative-AI technology, it changes so fast, and some of the tutorials I'm finding are out of date. I'll subscribe and hope to see you live soon! You know what I'm going to ask you!

  • @Nyarlatha · 1 year ago

    Thank you so much! You guys are amazing.

  • @___x__x_r___xa__x_____f______ · 1 year ago

    Hi Scott, I've got a use-case question. I need to upscale an image that I generated in SDXL using my own trained LoRA, for a highly detailed photorealistic portrait of a person. I am noticing that the skin, which is the part I mostly want latent-generated, is very good with my LoRA. Is there a way to inject those LoRA weights through the model in the UpscalerProvider pipe? The only problem is that I generated the image in Auto1111, so the weight interpretation is a little different. But in principle, do you think there is a workflow to enhance via iterative upscaling while piping a LoRA into it, maybe using weight blocks? Any thoughts? I would love to get this right.

    • @sedetweiler · 1 year ago · +1

      I might have to give you an example for this, but you can easily do it. Don't get into the mindset that you can only use one checkpoint. You can always load others and use them in different places in the workflows as long as they are compatible.

    • @___x__x_r___xa__x_____f______ · 1 year ago

      OK, I will pursue this further. I managed to finish my job at very high resolution, but I was not able to easily control that LoRA skin I wanted; I got stuck testing without seeing anything really effective come from it. Anyway, a small use case, nothing to make a big fuss about. Thanks.

  • @CBDuRietz · 1 year ago

    Pretty new to ComfyUI and working through the tutorials right now.
    One question I have: in the KSampler (pipe), is the VAE output channel the same VAE that is also passed out on the BASIC_PIPE output channel, or is it a VAE modified by the KSampler node?

    • @sedetweiler · 1 year ago · +1

      It's the same all the way across the graph. Typically we don't mess with the VAE; sometimes we will use another one, but we would specify that and it would be obvious. No steps should be hidden. (A toy sketch of the pass-through follows this thread.)

    • @CBDuRietz · 1 year ago

      Thanks. I kind of suspected that, but was a little confused by the uppercase/lowercase naming convention and was trying to understand it.
      Do you know the rationale? The input side is mostly lowercase, while the output side is a little more mixed, usually uppercase but sometimes lowercase.
      Perhaps I'm just overthinking it, being a software developer by trade. 🙂
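
Conceptually a basic pipe is just a bundle of the five connections; a toy sketch of the pass-through behavior described above, where the tuple layout is an assumption for illustration:

```python
# Toy sketch: a basic_pipe as a plain bundle. A pipe-aware sampler consumes the
# latent but re-emits the very same VAE it was handed, unmodified.
from collections import namedtuple

BasicPipe = namedtuple("BasicPipe", "model clip vae positive negative")

def ksampler_pipe(pipe: BasicPipe, latent):
    new_latent = latent                # ...sampling would happen here...
    return pipe, new_latent, pipe.vae  # same VAE passed straight through
```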

  • @taakefyrste · 1 year ago

    I highly appreciate these Comfy walkthrough videos, Scott. Great content! I wonder if the Midjourney engine (model?) will ever be accessible in Stable Diffusion. I find it better than SDXL for the time being! Keep up the great work!

    • @sedetweiler · 1 year ago

      No, unfortunately their business model means they cannot release their model like Stability.ai can. However, there are some models that are close, and you can also use Midjourney images you have downloaded in your pipelines to do alterations.

  • @StargateMax · 11 months ago

    Is this outdated by now? The workflow setup does not work. I installed all the required stuff, but it keeps throwing this error:
    When loading the graph, the following node types were not found: MilehighStyler
    And there doesn't seem to be any fix for this as of 12/9/2023.

    • @ThedjAwesome · 10 months ago

      Worked fine for me today.

  • @Shirakawa2007 · 1 year ago

    Great video! I'm slowly learning ComfyUI, coming from Automatic1111 (for the easy use of SDXL with my 6GB GPU). One thing I'd like to ask: what would be the equivalent of the upscaling you get in the "Extras" tab in Automatic1111? Whenever I try to upscale to something bigger than 2048x2048 I get VRAM issues (while in A1111 I can go to 4x that value in the Extras tab). Any help will be appreciated!

    • @sedetweiler · 1 year ago · +1

      Yes, there are methods for that and I will be covering another upscale method soon.

  • @artplaenan445 · 1 year ago

    Hello, and thanks for this share!
    For some reason, at each Iterative Upscale node my generation becomes brighter and brighter. Do you have an idea, please?

  • @TransformXRED · 1 year ago

    Hey Scott,
    do you have a video about best practices for managing all the workflow config files?
    I always find myself having a proper node workflow, but I test so many new things that it gets messy; then I end up with 10 versions of something and always start over in the end, lol. I'm sure there are ways to stay organized.

    • @sedetweiler · 1 year ago · +1

      I tend to keep my favorite ones in a folder on my desktop. I also put the one from today in the Posts area of the channel for sponsors, and I would probably rename that one to something like "Upscaler Base" and remove a few of the testing nodes. I do have another video coming soon that might help you a ton in this area, so perhaps I will push it up near the top. Cheers!

    • @TransformXRED · 1 year ago

      @sedetweiler Thanks for your reply!
      I just recreated what you did in the video and tested with the upscale model Siax. It's pretty interesting: since the upscale model is super sharp, the 3x added some type of grain to the final image :D
      Thanks for these videos, btw; they are great.

    • @sedetweiler · 1 year ago · +1

      I would keep playing with the noise, sampler, scheduler, and all that until you get something you love. It can change a ton by just tweaking values.

  • @GlassHexagonalColumbus · 1 year ago

    Whenever I paste with the Shift key, it actually doubles the pasted object. Edit: checked on my second device with a different OS; same problem.

  • @ownimage · 1 year ago

    Thanks for these, just what I was looking for... could you share the JSON for the final flow?

    • @sedetweiler · 1 year ago

      It is in the Posts area for the channel and is visible to channel sponsors.

  • @uk3dcom · 1 year ago · +1

    So many nuggets here. 🙂

  • @uk3dcom · 1 year ago

    Hi Scott, I'm following along with your tutorial, but the PixelKSampleUpscalerProviderPipe node is asking for a use_tiled_vae, which doesn't show on your version of the node. What to do?

    • @sedetweiler · 1 year ago

      See the pinned comment. You are probably missing a component. Cheers!

  • @erdbeerbus · 1 year ago

    Crazy!
    Is it possible to load a stack of images, like a task, into a ComfyUI workflow to change a sequence of images this way? Thanks in advance.

  • @ferniclestix · 1 year ago

    Great tutorial, thanks!

  • @ysy69 · 1 year ago

    Hi Scott, would you say that the iterative upscaling possible in ComfyUI is now part of "best practices" for upscaling (SD1.5 and SDXL)?

    • @sedetweiler · 1 year ago · +1

      I sure think so. It takes any image and adds those details everyone seems to want.

  • @leonardhinkelmann5629 · 1 year ago

    I tried this, but for some reason the PixelKSampleUpscalerProviderPipe has a tile size even when use_tiled_vae is disabled, and it returns nothing useful. Did they make a mistake in an update of the custom node, or what am I missing?

    • @sedetweiler · 1 year ago

      This is a bit dated, so things might have changed. All of these nodes get updated several times a day, so be 100% sure both Comfy and all of the custom nodes are updated.

  • @rolarocka · 1 year ago

    Wow I'ma try this soon 🎉😍, thx 🙏

  • @adisatrio3871 · 1 year ago

    How do you deal with the color bleeding? That purple ends up not just on the hair but on a lot of other things too.

  • @dominikstolfa4579 · 7 months ago

    I would like to use this method to add details to an already existing non-AI picture. Is that possible?

  • @adrient104 · 11 months ago

    "Pretty simple graph"… I’m like 😵🍝

  • @GuitarWithMe100 · 1 year ago · +1

    On my PixelKSampleUpscalerProviderPipe there is a boolean option, use_tiled_vae; how do I check this?

    • @sedetweiler · 1 year ago

      Just click it and it will enable.

    • @drltdata · 1 year ago · +2

      Update your ComfyUI to latest version.

    • @sedetweiler · 1 year ago

      See the pinned comment. You are probably missing the tiling node like I was.

  • @teodosiytanev5762 · 1 year ago · +1

    Nice tutorial, but how do I upscale a pre-existing image that isn't AI generated?

    • @sedetweiler · 1 year ago

      Just use the image loader and VAEEncode it to a latent, then keep the workflow the same. There is nothing special about an AI image compared to any other; getting it into the workflow using the loader is the only extra step. Cheers! (A sketch of that chain follows this thread.)

    • @teodosiytanev5762 · 1 year ago

      @sedetweiler Thanks, but I'm getting
      "Error occurred when executing PixelTiledKSampleUpscalerProviderPipe:
      object of type 'NoneType' has no len()"
      regardless of which upscaler provider I'm using.
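
The chain Scott describes, as a pseudocode sketch; load_image and encode mirror the LoadImage and VAEEncode nodes, and the Python call forms are hypothetical:

```python
# Pseudocode sketch: start the same upscale workflow from an existing picture.
# load_image/encode are hypothetical stand-ins for the LoadImage and VAEEncode
# nodes; the resulting latent replaces the empty-latent + first-sampler stage.
def latent_from_file(path, load_image, encode):
    image = load_image(path)  # LoadImage: any photo, AI-generated or not
    return encode(image)      # VAEEncode: feed this latent to the iterative upscaler
```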

  • @Artem-ch5bh · 1 year ago

    For me, under use_tiled_vae there is a tile_size; I don't know what to put in it, and as a result the second image is completely zoomed in and you can't see the actual character. Can someone help?

  • @explorer945 · 1 year ago

    Awesome video. Can you do a video where we do image-to-image masking and inpainting with just prompts and nodes (no manual masking)? Is that possible? Similar to the Stability AI API.

    • @sedetweiler · 1 year ago

      Yes I can. I like workflows that don't tend to make assumptions on locations of things.

    • @explorer945 · 1 year ago

      @sedetweiler Can you do a video on it, please? 🙏

  • @jamesclow108 · 1 year ago · +1

    Darn, my PixelKSampleUpscalerProviderPipe has another pin called use_tiled_vae above the 'basic pipe' pin; not sure where I went wrong there. Anyone know where I should plug this in? Just saw the pinned comment about BlenderNeko; I'll give that a go. Updated Comfy, updated Impact, restarted Comfy, removed the node, added the node, same issue. Hmm.
    I found the issue to be the SDXL VAE that I had fed in at the beginning. I just connected the VAE from Load Checkpoint instead, and the problem was gone!

    • @sedetweiler · 1 year ago

      See the pinned comment. There is a component it needed but it wasn't documented.

  • @chrisfreilich · 1 year ago

    Great, if a bit overwhelming, tutorial! One thing is different for me, in that the PixelKSampleUpscalerProviderPipe has an input called 'use_tiled_vae' that's required in order to work. I couldn't find a simple BOOLEAN node, so I had to kludge together a few other nodes to create a FALSE for that input. Any idea why the difference, and maybe an easy way to input a BOOL?

    • @sedetweiler · 1 year ago

      I think you might have the wrong node, as some are quite similar in name.

    • @drltdata · 1 year ago

      Update your ComfyUI to latest version.

    • @sedetweiler · 1 year ago

      You might also use the Manager and install BlenderNeko: Tiled Sampling.

    • @dcpuzzles2990 · 11 months ago

      I know it's late, but this may be useful if someone else has the same issue: if you right-click on the node, you should get the option to convert any input to a widget, which puts it into the properties list. In this case it would add the input as a switch that is disabled by default, but you can enable it in the properties section.

  • @hdkr4ik · 9 months ago

    Could you share the .json settings for this case?

  • @kryless7775 · 1 year ago

    It does not work for me. Even with BlenderNeko installed, there is this "use_tiled_vae" option and I don't know what to do with it...

  • @luiswebdev8292 · 11 months ago

    great tutorial!!

    • @sedetweiler · 11 months ago

      Thank you!

  • @craiggrella · 9 months ago

    How do you show the steps going through the upscaler? Is that a setting in the Manager, or something else?

    • @sedetweiler · 9 months ago

      Yes, you can enable TAESD slow previews and they will show up.

  • @DealingWithAB · 1 year ago

    It seems like way too many steps for something that should be simple. Is there any way to use img2img like in 1111 to make this easier/faster? I've stayed away from ComfyUI since, to me, it complicates everything instead of making things easier on the person.

    • @sedetweiler · 1 year ago

      The goal of Comfy is to really let you modify and understand the process. It won't be for everyone. Some people just want to drive a car, while others like to get in there, understand how it works, and change it to perhaps make something better. It's not going to be easier, but it will actually teach you how it works. So if you want to understand the process, stick with it; if you just want to make pretty pictures fast, this probably isn't going to be your thing. Either one is a good choice.

  • @8klofi · 1 year ago

    It would be a great help if you could provide links to the models, as I think many of us here try to duplicate what you have, and at least for me it's a bit difficult to read the model names in the nodes, as the text is quite small.

    • @sedetweiler · 1 year ago

      This should work with any model; that part really isn't that important.

  • @R0209C · 5 months ago

    Thank you so much ❤❤

  • @vilainm99 · 1 year ago

    A bit late to the show, but... the Impact nodes do seem to install, yet when I do Add Node they are not in the dropdown list, even after several restarts of ComfyUI (I successfully installed the custom nodes from the other video tutorials). Anybody have any idea what's going on? How can I debug the installation?

    • @vilainm99 · 1 year ago

      This happens via the Manager and via git clone...

    • @PatrickIsbendjian · 1 year ago

      I suggest you look at what is displayed in the console when ComfyUI starts up. It prints a message for each of the custom_nodes packages and will certainly throw an error message if something is wrong.

  • @random11 · 1 year ago

    Is there a similar workflow for Automatic1111?

  • @greypsyche5255 · 7 months ago

    You can use a pixel upscale model in latent space? How is that possible?

  • @nirsarkar · 9 months ago · +1

    You would have been a great professor, Scott. Thank god you are not! :) Thanks for this series.

  • @blacktilebluewall · 8 months ago

    Hey! Can you provide the soapmix model if you still have it? I can't find it anywhere on Civitai.

    • @sedetweiler · 8 months ago

      It isn't that great. I don't even have it any longer. Sorry.

  • @othoapproto9603 · 11 months ago

    WARNING: I've learned the hard way to review all the nodes necessary to make a tutorial work before building, only to be left at a dead end due to the author's assumption that you have all the tools needed, or because a node is out of date or simply not available. If authors would provide a .png with the metadata from the tutorial, it would help. (A sketch for reading that metadata follows this thread.)

    • @sedetweiler · 11 months ago

      I do, but for channel sponsors. However, it is the same graph that appears in the video.
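
ComfyUI does embed the workflow JSON in the text chunks of the PNGs it saves; a minimal sketch for checking what a given image carries, where 'workflow' and 'prompt' are the keys ComfyUI typically writes:

```python
# Minimal sketch: read the workflow JSON that ComfyUI embeds in its output PNGs.
import json
from PIL import Image

info = Image.open("output.png").info  # PNG text chunks surface in .info
for key in ("workflow", "prompt"):    # keys ComfyUI typically writes
    if key in info:
        data = json.loads(info[key])
        print(f"{key}: {len(data)} top-level entries")
```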

  • @linuxsever5727 · 1 year ago

    Sometimes you make a mess of information; too much info at once. Thanks for the series.

    • @sedetweiler · 1 year ago · +1

      Sorry, it is all moving a bit fast.

    • @linuxsever5727 · 1 year ago · +1

      @sedetweiler I understand that you want to show different techniques and approaches, but it's confusing 🤯. You might consider dividing it into videos that focus on one thing at a time 😁. Thanks for caring about my opinion 🙏.

  • @0A01amir · 1 year ago

    Thank you, but it's too much for my PC. Can you teach inpainting like you did img2img in ComfyUI? Changing the final image's face, clothes, background, etc.

    • @sedetweiler · 1 year ago · +1

      It works if you have 3GB of video RAM. That isn't much!

    • @0A01amir · 1 year ago · +1

      @sedetweiler Oh yeah, I tried it. The first phase was 25 sec, the second one took 40 sec with the CPU fan going berserk :D, and the third one was faster. I use HitPaw for fixing faces and upscaling; it's not worth it in Comfy, and it's not doable in the WebUI at all with my GTX 970 (it gives a memory error in the middle of upscaling).