Why ComfyUI is The BEST UI for Stable Diffusion!
- Published Oct 10, 2023
- This is why ComfyUI is the BEST UI for Stable Diffusion
#### Links from the Video ####
Olivio ComfyUI Workflows: drive.google.com/file/d/1iUPt...
Akatsuzi SD 1.5 Workflows: civitai.com/models/59806/sd15...
Akatsuzi SDXL Workflows: civitai.com/models/118005/sdx...
UltimateSDUpscaler for A1111 • ULTIMATE Upscale for S...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas
This is the first time I've seen ComfyUI. I work with nodes on a daily basis, and this can make my workflow a lot easier now! Thanks
Hello Olivio.
It is nice to see you are excited about ComfyUI again. Please keep these videos coming. Thanks!
As someone who comes from the VFX industry and works with software like Houdini and Nuke day in, day out, THIS is what I was looking for! Projects like ComfyUI will bring AI to the pros! This is great! Please more tutorials in this direction 😊
Having worked with Blender and Unreal Engine... I'm with you bro!
yeah but the pros are just going to generate vulgar anime like everyone else tho. at least it will be professional. lmfao. (just kidding of course)
Yup, and now even non-pros with absolutely no background in art can generate VFX and art content. Eventually, VFX artists or regular artists won't be a thing anymore; basically everyone can easily do what an artist does. And why would people pay artists to create content when there are billions of pieces of content out there created by basically anyone?
Ohhh yeah! Great presentation. Yes please! More info and workflows for ComfyUI please!
Another benefit of using ComfyUI (I don't think you mentioned it): when you relaunch a workflow after adjusting some parameters, it only recomputes the nodes that need to be updated.
For example, if you change your sharpening node and relaunch the computation, all the nodes that come before it won't be recomputed (unless you have a random seed or something like that).
This is super smart and useful!
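The partial re-execution described above can be sketched as caching node outputs by a hash of their inputs. This is a hypothetical illustration of the idea, not ComfyUI's actual implementation:

```python
import hashlib
import json

# Sketch of input-hash caching: a node's output is reused when its
# inputs are unchanged, so after an edit only downstream nodes recompute.
_cache = {}

def run_node(name, func, **inputs):
    # Hash the node name plus its inputs to detect changes.
    key = hashlib.sha256(
        (name + json.dumps(inputs, sort_keys=True, default=str)).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = func(**inputs)  # recompute only when inputs changed
    return _cache[key]

calls = []

def sharpen(image, amount):
    calls.append("sharpen")  # track how often real work happens
    return f"{image}+sharpen({amount})"

img = run_node("sharpen", sharpen, image="base", amount=0.5)
img = run_node("sharpen", sharpen, image="base", amount=0.5)  # cache hit, no recompute
img = run_node("sharpen", sharpen, image="base", amount=0.8)  # input changed, recomputes
```

Despite three launches, `sharpen` only ran twice, which is exactly why changing one late node in a Comfy graph feels so fast.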
That is really good to know.
He did actually mention that.
Yes, I'd love to see more complex workflows. Just getting started with my own rig. Been watching and downloading software for over 3 months now and I am only now beginning. Thank you for the ongoing videos, they are truly helpful.
Thanks Olivio, great stuff. Would love to see more ComfyUI workflows
Perfect timing. I'm setting up my new computer and am about to install ComfyUI. I tested it a bit on my old computer, but want to use it much more now.
Excellent guide! Simple and easy to understand. ComfyUI doesn't have to be complicated at all, and you can do so much more with it if you want to.
but it can be if you want it to be lol
You sold me O! Thanks for all the help! Love your voice
🎉 Woohoo! Comfy is exactly what I was planning on learning next.
My main argument for comfyUI is that at this moment it is much more VRAM efficient than A1111.
You can save some more VRAM and gain speed by integrating the 'Tome Patch Model' node into the model pipeline.
I've been using Comfy and I love how efficient it is. Also I just learned that I can just throw the image in and it loads the workflow!!! OMG, thanks!
Crystal clear, thnx!
Comfy is really the best, I love it! Thanks Olivio! ♥
Cool! I will try it tonight. I'm a week old on a1111 + deforum. Thank you for sharing!
Excellent video! Extremely comprehensive for an introduction to the usefulness of ComfyUI! Maybe we can speed up the community's workflows!
I'm keen to try it!
Thank you Olivio Sarikas, you always seem to release a new video just when I'm about to try something out lol!
Ok, ok. I will give it a try. Thanks!
Yeah, dear friend, give us more detailed and complicated workflows. By the way, thank you so much for bringing such a flexible UI to our attention. Cheers😊
Wow, the coincidence of waking up and a new video at the same minute
What a brilliant explanation of Comfy UI. I had such a negative impression of it, due to all the videos about Automatic 1111, but after using A1111, I found myself somewhat underwhelmed. Your explanation has opened up my mind to what is possible when you can design your own workflow. Really looking forward to more content from you, and I would love to hear what you have to say about ControlNet and detailer workflows.
Just installed it ;) Finally a GUI that is actually an upgrade from A1111 :) And it seems to be a huge advantage to have learned the basics of how things work in A1111 first, because I could jump right into this and set it up with the same workflow as I had in A1111 immediately, no problem.
This was your first ComfyUI video I didn't skip away from.. 😆 Normally I always avoid it, but now it's my favorite system, because it implements the newest technology as fast as possible.. 😀
Thanks to this video I finally burst through all the barriers and got ComfyUI running on my Ryzen 7 mini computer, cpu only. To figure out the extensions I had to watch a couple other intro videos, one of which started with a blank canvas and stepped through it one node at at time to make it clear what was going on. Wish I had gotten this running months ago. So very powerful. You can automate every step of the most complex workflow and the setup is saved in the image. Genius. You can run a half dozen tabs with a different workflow in each tab. Genius. Very impressed. To be fair you can run both this and automatic1111, thanks to the config file that tells comfyui where the models are. Will keep on doing inpainting on a1111 for now.
Thanks! I'll give it a try! It would be nice to see a tutorial with controlnet. Cheers!
Thanks!
Thanks for the video and YES i'm very interested in learning how to use controlnets in ComfyUI
All Spanish Stable Diffusion Channels are talking about your models for ComfyUI. I've had problems with Automatic 1111 due to my medium GPU so I'm pretty excited to try this new WebUI.
One comment I would make is that SDXL models require a higher tile padding size and mask blur when using Ultimate SD Upscaler: between 2-4x higher values to compensate for SDXL's extra pixel requirements.
I´m "converted". Been playing around with it for a couple of days and I wholeheartedly agree with you, man, It is truly the best UI out there. Very powerful and flexible, easy on lower-grade machines like mine, very stable. Some folks hate it, but I personally find the whole connecting "cables" between the nodes thing strangely satisfying (it reminds me of the physical equipment I used as a VJ years ago)
Haha! ComfyUI is starting to get more and more traction :) I'm very happy to see it :)
Yes please do some videos on more advanced ComfyUI topics! Controlnet, T2I-Adapter, IPAdapter, ADetailer etc.
Thank you. I will look into that
This type of thing always amazes me. The comments compliment the poster, yet there is so much info missing.
I Love ComfyUI ❤
Yes, you do need a controlnet video. Controlnet is one of the best things about a1111 so if you want to convert people, you need to have a video showing how to use some of the most loved extensions in a1111. Mainly controlnet, but also consider going through a list of the top 20 or so extensions and let people know what will work and not work in comfyui or how to use an alternate method that accomplishes the same thing. Probably a custom node. Consider a 3rd video in the series about video manip/batching, assuming comfyui can do that.
"show us how to use [...] consider going through a list of the top 20 or so extensions and let people know [...] Consider a 3rd video"
Ever considered using your brain ? You should try, it's actually useful.
Bruh... try keeping up with literally everything in the AI space. People's time is limited. More importantly, he's asking what would be most useful/what we'd like to see. I gave him an answer I thought I would find useful, but also one that I felt his audience might as well. Since most of these videos tend to rehash already known things, because new people pursuing the same knowledge I guess are always being added, I thought it prudent to suggest something that would be densely useful, instead of the first 20% being the same line of "here's how to install a1111" in a video about how to use a single ControlNet model.
Use my brain? You clearly don't want to put in the time to make an actual comment. So maybe what should be used here is your time, and have it be spent somewhere else.
@@HanSolocambo
@@HanSolocambo that's a pretty rude statement considering Olivio asked the question and this guy just answered it. This is a channel on how to use Stable Diffusion after all. Maybe use your brain?
I gave it a shot, because A1111 didn't install and I spent a lot of time in cmd fixing the errors it had, but then I just gave up on it and tried ComfyUI. And it's great, it worked the first time, and the fact that everything is saved directly in the image is just great. Also the interface is totally nice. You need to take like an afternoon to learn the nodes and all, but then it's more flexible.
It sounds like you didn't have a standalone version of A1111. Was it some time ago when you tried to install A1111? Like ComfyUI, A1111 is now a standalone so there's no need to install Python or anything and there shouldn't be any install issues. Of course, if Comfy is working for you then stick with it.
@@Elwaves2925 oh, ok, I'll try again with the standalone. This was when SDXL came out. But yes, it might well be some problems because I use Python a lot and already have a ton of libraries and other things for Python, which might have interfered with this.
ComfyUI is way worse with its errors. Spent the last day fixing all the errors, just to get even more errors. Got it working, it's great, but the installation is much worse than A1111. Needs to be optimized.
I think its great that AIs have made their own YT channels now, nice job!
So I'm not the only one who really thinks this looks like an AI-generated avatar and background presenting here.
I suppose he does livestreams too, which seems difficult to do if the channel really is presented by an AI-presenter.
The need for far more complex workflows with multiple controlnets, multiple loras, adetailer, regional prompter is the reason I'm still on A1111... it's just so much easier to start adding extensions as I see the need for them while experimenting with how I want to generate the image.
I don't understand the obsession with that noodle soup monstrosity!
You don't NEED all that, but it's there if you want to finetune. You can get great results with just a few nodes and a community model in comfyUI
Yes but, for people with 8GB of VRAM in the mood for AnimateDiff prompt-travel or SDXL, ComfyUI is a must I bet... anyway, I love A1111, I do not want that spaghetti node interface :(((
A1111 just at first glance is soooo much more simple to use, ComfyUI reminds me of Blender with all the nodes and it's just very confusing to figure out. I just want to figure out how to set it up to work just like A1111 does and be done with it 🤣
@@CoqueTornado my 8GB VRAM card runs fine with SDXL on SD.Next.
Thanks for the video.🤝 Yes, I would like to see a ControlNet with two preprocessors for SDXL models.
The Google search "ComfyUI examples" should lead you to the projects examples page. - Multiple ControlNets and LoRAs are pretty easy to do, once you know.
I would love to see more complex videos of Comfy UI.
I would enjoy a follow up video with controlnet, maybe roop, and other advanced features. Thanks for the work you do!
Thank you. I would be happy to see ComfyUI with ControlNet workflow
The Google search "ComfyUI examples" should lead you to the projects examples page. - Multiple ControlNets and LoRAs are pretty easy to do, once you know.
I like the concept of a workbench, since Stable Diffusion doesn't have anything similar, but how do you use ControlNet? Another thing I love about Comfy is that it doesn't lose what you're doing, because the previous information is maintained, which saves you a lot of time. It is true about the cables, but it's a matter of getting used to the mixing console, as this type of interface is known. 😄
AGREE 100%
I'll definitely give it a go. Any tutorials on how to install it?
ComfyUI is the best for sure :)
ComfyUI reminds me of N.I.Reaktor from my sound production days in college.
I am a big fan of ComfyUI. Steep learning curve for sure, but absolutely worth it. Especially for automated and flexible workflows with external inputs and outputs. - By the way, to create or enhance prompts via generative AI during runtime, it is also possible to integrate a local LLM or GPT via the OpenAI API into the flow via custom nodes.
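A prompt-enhancing custom node like the one described here could be sketched as below. The class layout (`INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION` / `NODE_CLASS_MAPPINGS`) follows ComfyUI's custom-node convention; `enhance_with_llm` is a hypothetical stub standing in for a real OpenAI-API or local-LLM call:

```python
# Sketch of a ComfyUI-style custom node that rewrites a prompt.
# enhance_with_llm is a placeholder: a real node would send the
# prompt to an LLM endpoint and return the rewritten text.

def enhance_with_llm(prompt: str) -> str:
    return prompt + ", highly detailed, dramatic lighting"

class PromptEnhancer:
    @classmethod
    def INPUT_TYPES(cls):
        # One required multiline string input, shown as a text box.
        return {"required": {"prompt": ("STRING", {"multiline": True})}}

    RETURN_TYPES = ("STRING",)   # node outputs one STRING socket
    FUNCTION = "enhance"         # method ComfyUI calls on execution
    CATEGORY = "text"

    def enhance(self, prompt):
        return (enhance_with_llm(prompt),)

# Dropped into a custom_nodes/ package, the node is registered via:
NODE_CLASS_MAPPINGS = {"PromptEnhancer": PromptEnhancer}

out = PromptEnhancer().enhance("a castle at dusk")
```

The string output could then be wired straight into a CLIP Text Encode node in the graph.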
Oh Man, need this ! Is there a tutorial on this ? Thks
@@AdvancExplorer there are multiple A1111 extensions that take OpenAI API keys to generate prompts. However I couldn't get them to actually generate prompts, even though my API key shows it has been used.
Have you tried this, and how successful has it been for a specific workflow? Sounds like a great concept, just trying to see how you would use it and what you might accomplish.
I think I would recommend learning A1111 first. I did that and could jump right into ComfyUI and in no time set it up with a workflow I wanted, because I understood how the basics work and what is needed.
Sold
Thanks Olivio. I'm curious to find out how much more efficient this method of AI generation is. :)
I want to see more complex workflows. This reminds me of Quartz Composer. We used that a ton at Facebook Engineering back in 2012-2015
My system isn't working properly with Automatic1111 anymore, so I switched to ComfyUI, and it's a whole new beast. I still need to learn so much.
Fantastic review. Really helps me understand the power of this tool. With that said, I'm good with A1111 since I'm just a hobbyist atm. If I took it commercial, I'd definitely have a second look at Comfy
More complex workflow demos please good sir
Us macOS users need a video on getting this set up properly, please. I've been able to use the portable PC version with no problems, but it's not working properly on my (waaaaay better) Mac.
And I do like this UI a lot. Thanks for sharing your info. 😎
ComfyUI gives us the control we need when we work professionally on images. There will be tools for everyone, but this is quickly becoming the de facto local install for pros.
Hi,
Love you videos! thank you for the education.
Can you please make a video on FaceDetailer combined with ADetailer?
After the live stream on Sunday I've been learning ComfyUI.. It's fast, and can handle way higher resolution than A1111. I can generate 3K on a 4090.
Thank you! I'd like to see previews between steps of the creation process, and to experiment with ControlNets. Can ComfyUI provide that with its nodes? I'd like a workflow for that..
One of the things I really did not like when I tried it was the image viewer. Looking at your results in A1111 and vladmandic/automatic is a lot more enjoyable.
You know that there are custom nodes for that as well, right?
Fooocus ftw!
I love ComfyUI, I can run dozens of tasks all day with the upscale feature without fear of Out Of Memory with my 8GB VRAM card 😁😁😁
Cool stuff Olivio! I love Comfy, it's like having many custom A1111s at your fingertips! One question on your workflows: at the start, the first Efficiency node on the left side has a space for the LoRA stack. I can see that you just have to plug one of those in there and that's it. My question is: if I plug in a LoRA there that is for SD 1.5, but then switch the checkpoint to an SDXL model, how do the LoRAs react to that, do they error out?
Hello my friend! Thanks for your work, it's always interesting to watch your tips and tricks!
ComfyUI is still the darkest horse for me, because I don't really enjoy playing with nodes in an AI art workflow. I mean, nodes are cool, I like to play with nodes in Blender to create interesting materials, but in art generation it looks too confusing for me; I want to do this much simpler, without configuration at every single step. I like A1111 because it does most things automatically. I like to test models, then analyze large table sheets of my prompts made by several models, look at how every community SD model does stuff, compare it, and then choose some interesting results and make hires and upscaled versions. I tried node workflows in InvokeAI, and have mixed impressions. In my opinion A1111 is still the most powerful tool. ComfyUI looks a little too complicated for me. But I think I can give it a shot if it is really so easy to install.
Just go ahead. After a week it's not complicated at all anymore, it's enjoyable and fun to play. You can do xy plots easily if you want to compare. There's a portable version of Comfy and you can set the folders with all your models (from A1111) in the configuration file which you can find in the main ComfyUI folder.
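The configuration file mentioned here is the `extra_model_paths.yaml.example` that ships in the main ComfyUI folder: rename it to `extra_model_paths.yaml` and point it at your A1111 install so both UIs share one model library. The paths below are placeholders; check the example file for the exact keys your version supports:

```yaml
# Sketch of extra_model_paths.yaml pointing ComfyUI at an existing
# A1111 model folder (base_path is a placeholder for your install).
a111:
    base_path: C:/path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
```

After a restart, ComfyUI's loader nodes should list the checkpoints from the A1111 folders without copying anything.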
Follow-up videos on Comfy UI with ControlNet, Adetailer etc would be great. Thanks.
After trying comfyUI I couldn't go back to A1111. Better flexibility, better performance, easy to use once you get over the learning curve.
Thanks for the video. In A1111 you can use ControlNet and have a range where, for example, it will try every sampler and run a model on them, or try out every model for a prompt. Trying every sampler against every scheduler, plus a huge number of other variables, to see which result is best is an impossible task by hand. Can you do a range of settings against another setting or settings in ComfyUI?
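The sampler-against-scheduler sweep asked about here is just a cartesian product over settings. A minimal sketch, where `generate` is a hypothetical stand-in for queueing one generation job with those settings:

```python
import itertools

# Try every sampler against every scheduler; the value names mirror
# common ComfyUI options but the lists here are just examples.
samplers = ["euler", "euler_ancestral", "dpmpp_2m"]
schedulers = ["normal", "karras"]

def generate(sampler, scheduler):
    # Placeholder for running one job with these settings.
    return f"image[{sampler}/{scheduler}]"

# itertools.product yields all 3 x 2 = 6 combinations for the grid.
results = {
    (s, sc): generate(s, sc)
    for s, sc in itertools.product(samplers, schedulers)
}
```

In practice the XY Plot features of node packs such as the Efficiency nodes do this inside the graph itself, arranging the results into a comparison grid.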
Great video - I might be sold - we'll see if it can render a bunch of different Loras (characters) and combine them into one setting
There is a workflow for that indeed.
@@tripleheadedmonkey6613 Haven't found it yet - know where to look?
Can you make a video about organization of files and metadata with tools for ComfyUI? I'm using XY plot, but I would like to use Excel or some other program to check differences and have some graphs instead of using folders. I would love to compare different outputs across 100s of images.
A video about SDXL G and L negative/positive prompts would be cool if you haven't done that already. Also Base vs. Target sizes in SDXL generating would be appreciated. Thank you for all of your videos on SD!
My first time looking at ComfyUI, and I've got to say that I feel overwhelmed. I used Bing Images to create a Roman Legionnaire. I really like the look of one of the images, but it has no way to save an image and use it as a base for future images. I asked around at some of the AI groups on Facebook and they recommended this program. I guess my question is: can I use the same face and body to create other images with a different background, clothing and such?
would you recommend only using one lora at a time? or is more than one not a thing?
I do a lot of image to image using my photography. Suppose I want to re-enter a workflow sometime later, does ComfyUI keep a reference to my original image so that I can resume work?
Yes, please, with ControlNet. That is the main feature distinguishing SD (plus LoRA).
This is the best tutorial on Comfy UI workflows with samples so far, thanks!
Guess I'll try it. Looks needlessly complicated, but with custom nodes maybe I can solve that. But mainly .. I can have the result image in the middle of the screen, right ? Because A1111 is killing my neck :-D
I don't think you mentioned it in the video but where is the default location for putting new/downloaded workflows?
I still think I'm going to stick with A1111 for right now.. it just seems a lot easier for us beginners!
You are right, for beginners A1111 is much easier. But when you have used A1111 a while and understand how all this stuff works, you have the knowledge to begin with ComfyUI.
Great, now this worked for your first simple upscale workflow.. I had to add one node manually, but then it worked... Only thing: how do I get a preview of the created tiles? This is very helpful in A1111 for me, to get an impression of whether everything is going fine, without having to wait too long on my potato.
ComfyUI won't let me use the normal VRAM mode; it always sets itself to lowvram mode every time I want to generate an image, even though I've set it to normal VRAM mode. idk if that's a bug or not. A1111 on the other hand lets me use my whole VRAM.
What about "Workflow Builder" of InvokeAI.
Is it the same thing ?
I can't find the Efficient and Ultimate nodes. Where can I download them?
How do you use a LoRA in your workflow, sir??
Simple stupid question: what do you use the "queue front" button for?
Thanks for the tutorial. Sadly, the SDXL workflows lead to the error "missing MileHighStyler" which cannot be found anywhere. I left her a note at Civitai.
I keep getting warnings in the terminal that say something along the lines of "not on PATH", and the custom workflow I loaded won't work. I have no idea what's going on. What is PATH?
How do I get seed from picture ?
My brain is too smooth for ComfyUI.
Don't worry, smooth-brained human, some practice with ComfyUI will put some wrinkles on that cerebral cortex in no time!
It is all good and ok, but what about things like inpainting, outpainting, and other Automatic1111 extension features like wildcards, ADetailer, Regional Prompter etc.?
Can you guide us on the choice of Windows computer configurations to use these AIs?
How do you create tables showing the effect of parameters changing?
Do embeddings now work just by specifying the name, without "embedding:" ?
I just can't get Stable Diffusion installed, and I do pay for Midjourney. Midjourney doesn't really produce what I'm telling it to; I get weird things from simple prompts. Is Stable Diffusion any different? Will it make a picture you upload to it look like what you uploaded?
Can I just use SDXL as the upscaler to make my workflow faster?
Do you know if this can fix the "burning" issue with the last steps in A1111? And would you know what causing that in the first place, by chance?
Not sure what you mean by the "burning" issue. If the image looks strange, this might be the wrong VAE, or the CFG or step count. More CFG usually needs more steps.
B will toggle the bypass a lot faster.
Can you use anything like Dynamic Prompts extension in ComfyUI? I really like using Dynamic Prompts to setup a long batch generation of many images in similar styles then pick my favorites when it is done.
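Wildcard expansion of the Dynamic Prompts kind can be sketched in a few lines; this is a stand-in for the extension itself (ComfyUI has custom node packs offering the same idea), using the common `{a|b|c}` syntax where one option is picked at random per generated prompt:

```python
import random
import re

# Expand {a|b|c} groups by picking one option at random, so one
# template can drive a long batch of varied-but-similar prompts.
def expand(template: str, rng: random.Random) -> str:
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: rng.choice(m.group(1).split("|")),
        template,
    )

rng = random.Random(0)  # seeded for repeatable batches
prompts = [expand("a {red|blue|green} house, {oil painting|photo}", rng)
           for _ in range(4)]
```

Feeding each expanded string into the queue one after another reproduces the "generate many variations overnight, pick favorites later" workflow described above.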
I got these errors out of both of the UIs, any idea?
Prompt outputs failed validation:
KSampler (Efficient):
- Failed to convert an input value to a INT value: steps, None, int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
- Value not in list: sampler_name: '6' not in (list of length 22)
- Value not in list: scheduler: 'euler_ancestral' not in ['ddim_uniform', 'simple', 'sgm_uniform', 'exponential', 'karras', 'normal']
- Failed to convert an input value to a FLOAT value: denoise, normal, could not convert string to float: 'normal'
- Value not in list: preview_method: '1' not in ['auto', 'latent2rgb', 'taesd', 'vae_decoded_only', 'none']
- Value not in list: vae_decode: 'auto' not in ['true', 'true (tiled)', 'false']