Microphone & sound settings are on point in this one.
I've been following you since I first discovered Comfy. Your videos did so much to help me realize "what is possible?"
"Yeah... pretty much anything you can dream of, if you are smart enough to figure it out, or to learn from those that are."
"Let's turn this into a rodent, because why wouldn't you?" (Such a great way to demonstrate that one can always turn a workflow on its head on a whim.)
After a few months of tinkering with Comfy (staying up until sunrise trying to figure out why CLIPVision and IPAdapter always failed on me), I finally feel like everything is coming together and I can now imagine and build my own workflows.
It absolutely would not have been possible without your videos, and especially without you sharing your workflow files so readily (amazing to see you on OpenArt!!).
I am gushing... Long story short: thank you so much, you magnificent nerdy rodent bastard!
Thank you, Mr. Rodent
And thank you for watching 😄
Thank you, Nerdy Rodent! 🙌
Thank you too!
I've been in the Comfy space for a bit, but I'm glad I watched this. I picked up a few tips, and realized where I picked up the colouring-nodes habit from. Seriously though, thank you for posting your workflows. By taking my time and taking Reposer XL apart, I was able to realize a random idea I had one morning: what happens if I shove the conditioning of an IPAdapter directly into a FaceDetailer node? After some hacksmithery, and rewatching the Reposer video a few times so I could figure out which nodes to grab, I learned that you get a face swap.
That’s the cool thing about Comfy!
Thanks! If I were a ComfyUI newbie, this would be a great reference to start with.
In a few weeks when AGI is released I want the official Nerdy Rodent ASI Agent.
Nerdy Rodent has been achieved (internally)
I want a little AI rodent robot that controls me by pulling my hair, maybe I can hide it under a hat or something.
your channel is awesome, deserve more views :)
Glad you think so!
Amazing tutorial and workflow!
Glad you liked it!
Very informative and useful video. Thanks bro ❤❤❤
Thanks for all your hard work.
My pleasure!
Great video as always, I'd chapter it based on topics.
Thank you so so so much for this. I have such a good understanding and I learned so much.
Thanks for this. How do you add models so that I can select one in the "Load Checkpoint" node? I can't select any in there...
Can you make an img2img workflow using reference-only with SDXL Turbo? I'll definitely be waiting for this.
The Zoe Depth Map and Realistic Line Art nodes don't show the resolution option for me. How do I fix this? 😃😀
Great tutorial!!!!!!!!🙏🏻🙏🏻🙏🏻🙏🏻😊😊
Thank you 😁
Hello Roddy, you have a great channel. Don't forget your ROOTS, I mean the early videos. :D
cd / 😉
Very nice video!
Thank you very much!
Sir, I've been waiting for a long time. I would appreciate it if you could post it on the support channel. Thank you
Super useful. Thank you !
Very helpful!
Great crash course my furry friend!
What is the workflow for creating the best quality images with SD?
At last, someone starts at the beginning. Most channels start by showing how to script Python and send rockets to the moon; as a beginner... thank you.
Hi! Thanks for your work :) At the 11-minute mark you have yellow-coloured conflicting nodes. How do you solve that challenge?
ComfyUI manager does that automatically
Thanks as always. I have a question: if I have jpg files of specific models, can I change their outfits to be identical? I tried to change them, but they seem to be slightly different.
Sweet video ❤
I enjoy using the middle mouse button to move the canvas around.
Thank you!
Thank you, very good instructions. Can you do a video about how to add a prompt styler to any workflow? I looked on YouTube and no one has made a proper tutorial. In my experience with the Fooocus WebUI, styles make a huge difference in image quality. I want to add styles to ComfyUI. I downloaded a prompt styler for ComfyUI but I could not manage to connect the noodles. Thank you.
It’s super easy! Add the node using any of the 3 methods shown, then connect the noodles as shown here too! 😀
very helpful information, thank you for sharing this
Are consistent characters possible where we have a story plot and we want our created, specifically styled characters to be in it, all following the same style with no loss of any details? I need at least 10+ characters for my plot, and in most cases all the characters will be in one scene.
Hey, is there a ComfyUI node that allows me to render an output video of the denoising stage? You know the preview you see on the KSamplers during denoising if you have the preview method set to Auto or TAESD. Is there a way to output that as a video?
Thanks so much
Thank you so much! 🤩 😘✌👍
No problem 👍
I have two GPUs. How do I make sure ComfyUI is using the more powerful of the two Nvidia cards?
I have a few questions about Linux and conda. I used Ubuntu a few years ago, but it can easily break after installing different dependencies. You are always using conda. So can I create a conda env for every GitHub project and install apps without breaking the others, at the cost of only extra disk space? Conda downloads requirements as shared archive packages, but it extracts them into each separate env, right?
Yup. Conda manages your environments with ease, meaning you can run one app which depends on cuda 11.7 and python 3.8 right alongside another needing cuda 12.3 and python 3.11!
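To make the isolation concrete, here is a rough sketch of that per-project workflow. The environment names and version numbers are examples only, not tied to any particular project; each env gets its own directory, so breaking one never touches another.

```shell
# One env per project, each with its own Python (and its own packages)
conda create -n comfyui python=3.11
conda create -n old-project python=3.8

# Work inside one env; pip installs land only in the active env
conda activate comfyui
pip install -r requirements.txt
conda deactivate

# Housekeeping: list all envs, or delete one cleanly
conda env list
conda env remove -n old-project
```

Downloaded package archives are cached once and shared, but each env keeps its own extracted copies, which is where the extra disk space goes.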
Hello my nerdy friend
🥰✌️💕🤘
Hello 😊👋
@@NerdyRodent I hope you’re staying warm in your nest. I hear you’re getting some really cold weather over the pond
@@kariannecrysler640 it’s a bit chilly, but I have an electric blanket 👍 Hope it’s not too cold over that side!
@@NerdyRodent had 5 inches snow that melted last couple days. The cold is back today though lol. Wood heat helps
What do you mean by a positive and a negative? I've heard this quite a bit (I haven't tried Comfy yet), but I was wondering what a "negative and positive" is.
positive = what you want, negative = what you don't want
@@NerdyRodent OK, get ya. Thank you. 👍
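For the curious, here is a minimal NumPy sketch of the mechanism behind that answer: Stable Diffusion samplers combine the two prompts via classifier-free guidance, predicting noise once for the positive prompt and once for the negative (which stands in for the usual "empty" conditioning), then steering away from the negative. The function and variable names below are illustrative, not ComfyUI's actual API.

```python
import numpy as np

def cfg_combine(noise_pos, noise_neg, cfg_scale=7.5):
    """Classifier-free guidance step: push the prediction away from the
    negative-prompt result and toward the positive-prompt result."""
    return noise_neg + cfg_scale * (noise_pos - noise_neg)

# Toy tensors standing in for the model's two noise predictions
noise_pos = np.array([1.0, 2.0])
noise_neg = np.array([0.0, 1.0])
print(cfg_combine(noise_pos, noise_neg, cfg_scale=7.5))  # prints [7.5 8.5]
```

A higher `cfg_scale` (the KSampler's cfg value) leans harder into the positive prompt; at a scale of 1.0 the negative prompt has no effect at all.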
Thank you so much for the foresight! Just to clarify, why use upscaling instead of generating it in the desired resolution from the first latent?
A high res fix or refiner pass can improve the image
Love this setup, but when downloading workflows I have yet to get one to work. They all give errors, and not only for me (read the comments). So can we say that workflow downloads just don't work?
This video is more for creating your own workflows than downloading ones from other people. Once you are happy with creating your own, getting others to work is a piece of cake!
thank you!
Hello Rodent, I've been struggling here and hope you can help me. My ComfyUI is not generating images at all; nothing happens after creating a workflow, custom or default. 😵
For the default workflow, try using the default sd1.5 model
Thank you
Can this be used with video diffusion models?
Yup!
Thank you! :)
Ctrl+Z undoes my prompt instead of the workflow.
That’s right. If you’re typing a prompt, you can also undo the text… though that was in there from the start, whereas the node undo is fairly new.
What does the error "TypeError: Failed to fetch" mean? Please help.
Can’t say I’ve seen that error, but as a guess you could be trying to load a model of the wrong type - such as a vision model instead of an sd checkpoint
My Python crashed, I guess because I'm using a 1660 Ti. Any way to make things work? Because I want to learn ComfyUI.
People have said they can use it with just 4GB VRAM, but it’s a much better experience with 8+
@NerdyRodent That's tough, but is there a way to just stop it from crashing? As in, can you tell me a safe way to use it and what to avoid? If it's possible, I would really appreciate the help.
@@keepitshort4208 The best option is an Nvidia card with 8+ GB VRAM, Linux (for stability, security, performance, reliability and ease of use), and at least 32 GB RAM.
I'm a big fan of Nerdy Rodent, I was about to say he is a cool cat but no...
😆
🎉 I clicked on your video because it is the only search result that shows a real person rather than a busty girl in the thumbnail. I don't know why (I say this as a rhetorical question, but we all know why) everyone has to put girls all over the place and no boys. It is just annoying 😮
Among many channels, your videos actually don't assume anything.
I have a criticism, but it is not meant to belittle; I value your channel a lot. Those "anywhere" nodes are a problem. If you are going to make an example, I would suggest omitting them. I haven't seen you explain anywhere how to obtain them; I searched and couldn't find them, not even in the manager. They are useful in practice once everything is assembled, but they are difficult to install or even to find. Looking at other examples, I only found a couple of similar things, like an "anywhere" for images and latents (I don't remember exactly now), and they didn't work because they relied on things the one I found didn't have; the rest did not exist anywhere.
The other problem is that when someone uploads a photo of a workflow, you can't see where things are going or where they come from, and many things don't work unless they are connected to the correct place. It is like saying "I explain where things come from" while not actually showing where they come from, so you go crazy trying to discover it, which makes the example useless.
All of this is constructive criticism; I'm not trying to be a troll or anything. But the truth is, the more complex the workflow, the less you can see the connections and the less you learn where things come from. A workflow, as its name says, should let you see the flow of information; otherwise, what is the point of using ComfyUI? The advantage of a workflow is seeing clearly how things flow, not having them connect "somewhere". You know how it works, so for you it is practical, but for a normal user, or anyone who doesn't know the node, it is a problem. I think the tool is poorly designed in this respect: at minimum it should have an option to visualize where each connection goes and where it comes from. To share a workflow, you should be able to show that invisible thread (perhaps you have to mention that to whoever generates them), but the day before yesterday it was impossible for me to find that.
Now, that search for how to connect things led me to other examples, and I finally managed to make what I wanted work, so I know it is also complicated to explain. I know it takes time, etc.; that is also why I appreciate the videos.
As a suggestion, take a look (if you haven't already) at SDXL Turbo, which generates images on the fly, in one second! That has advantages for instantly seeing the prompt, though disadvantages for other reasons if one wants to manually select from a variety. I'm using it to generate some things and it is fascinating, for example: genre >>> mood >>> create video. I'm still not sure how to implement ControlNet with it, but I'm getting things with very nice quality, perhaps usable for film shorts or video games. I am also looking at how to use Turbo SD to make videos, since if that took a few minutes before, it should now be doable in much less time, depending on the scale.
Like I show in the video, to install custom nodes you can use ComfyUI Manager's search feature. In your specific example, you could type "everywhere" in the search box to find and install the node. Also as shown in the video, you can hide or unhide the connections. Check my GitHub for SDXL Turbo workflows! Obviously I can't do videos on the new models, as they have a non-commercial use license, which I detail at the end of this video. Hope that helps, and good luck with your workflow creations!