Dude, you're awesome. Somehow, I learn about all the most exciting stuff from your channel first.
Glad you enjoy the stuffs 😀
The Corridor guys used the i2i Alternative test to make their first RPS animation. It's a really powerful tool. Glad it made it to comfy in a better implementation.
They used EbSynth as well.
@@mich_elle_x Not for the first one, IIRC. They did use the DaVinci deflicker, though.
Give the Iterative Mixing Sampler a go too. It’s a more faithful unsampler, using the actual LDM algorithm to generate the noised sequence (see the Batch Unsampler node).
Sweet!! I completely forgot that there had been something like that already a while ago...this one seems to be a lot more powerful! 😊
Hi, thanks for this video. I will test this with my new realism SDXL models
This channel is always fantastically entertaining😌 And thanks for putting together the cool stuff on git
Glad you enjoy it!
Okay, got this, thanks. It works pretty well, though the model impacts the style.
Love it! Can this be applied to AnimDiff/ip adapter workflows?
Worked with LCM at 4 steps, 8 steps in the final one. Good stuff.
Nice! Was going to test that too 😀
Excellent, thanks! Finally someone who uses notes to document information inside the workflow - very helpful! I didn't know about these notes, although I was already looking around for something like that - perfect! I am using SDXL models in a similar workflow, and this can help to "melt" a cut-out figure into a new environment: unsampling the rough composite made by blending two images, then resampling, with "collage of a..." as the initial prompt and "photo of a..." when resampling. If you have any alternative ideas for my "melting a figure into a new background" process, I am always interested, as I'm trying to optimize it. The idea is to change the figure while maintaining the same environment, letting the AI integrate the figure seamlessly into the background.
Why do I have a smaller image at the output? How can I increase it?
As shit as I am w/ Comfy, and as resistant as I've been to using it, your vids are the only ones I actively go looking for if I need to figure something out with it. So even though I'll likely never find myself comfy w/ Comfy, I figure you've earned my $5 a month (joining your Patreon after commenting). You're always positive in the comments and break these things down pretty well w/o making unnecessarily long videos. Keep at it. -J
Thanks! I was resistant to comfy to start with as well, but now it does seem comfy 😆
cool. love unconventional things like this! thanks
I don't understand why I'm getting weird colors and the image looks incomplete. I'm not seeing any errors either. Can someone share the original workflow on GitHub?
Wow! Since when does it exist? Is it something new? It seems so effective! Great video!
It’s been out a while, but I had the pack installed for a different node and only just started playing with this specific one, as I’ve been playing with noise a lot recently.
Thanks a lot for the reminder; it's hard to remember all the SD possibilities!
I know right… so many things to test and try!
Another masterpiece dropped boys 💥
It would be interesting to see a similar workflow for SDXL models
You can change the models to SDXL ones and you'll be good to go :)
@@NerdyRodent Weird, when I run this with SDXL it generates total garbage, but it works fine with 1.5... I wonder why?
Very cool. Will use this for the enhancing workflow I'm developing... does it work for XL?
Just awesome, thanks!
Also - where do the CN CLIPs get their inputs from? I'm trying to recreate this without the Everywhere nodes, which cause conflicts on my setup :) I'm getting an error at the KSampler stage when the ControlNet modules are turned on, and I have both pos/neg prompts receiving from the model node's CLIP output. Is that wrong? I'm not sure what else could be erroring. If I connect the first prompts to the KSampler conditioning instead, it does work. Something about the ControlNet prompts...
same
I don't think you posted the workflow. The last one in the list is from 3 days ago: "SDXL_Reposer_Basic.png".
It's there. ctrl+f and search Unsampler and you'll find it.
It's the Renoiser.png
How does the 1st step work, where you are not using the Output Control section? What are the pos & neg inputs for the KSampler if the Output Control section has been bypassed? It still works in your video even though it seems like it shouldn't.
I can't find the workflow on your GitHub - has it not been uploaded yet?
Just to save people time: I couldn't get any decent results with SDXL. I'd always get completely different images out from what went in. But that's with the Unsampler; with other methods that are out there, things aren't so bad, and the ControlNet stuff is helpful.
Impressive. Does this replace the IPAdapter nodes? I'm not fond of IPAdapter; way too many nodes to use it, you never know which model to load, and it happens to crash a lot :/
Hello, could you make a video about DreamCraft3D, an image-to-3D method that came out a few days ago?
@NerdyRodent Is this specific workflow on your Github? I can't identify it by name...
Yup, the unsampler one is there
Amazing stuff. Have you by any chance developed this :P? It seems to have flown under the radar of most channels.
Not related, but since you're extremely knowledgeable: I'm not sure if you have done any video showing the "CFG Rescale" node, but do you know how it works?
Dynamic thresholding I covered a while back for A1111 in - AMAZING A1111 Stable Diffusion Extensions You Might Have Missed!
th-cam.com/video/tP5yy6A4GJw/w-d-xo.html - it’s basically that
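For anyone wondering what CFG Rescale actually does, here's a rough sketch of the commonly described formulation. This is only an illustration, not the node's real code, and the `scale`/`phi` defaults are just example values: plain CFG inflates the guided prediction's statistics, and the rescale step shrinks its standard deviation back toward the conditional prediction's, which tames the burned, over-contrasted look at high cfg.

```python
import numpy as np

def cfg_rescale(cond, uncond, scale=7.5, phi=0.7):
    """Hypothetical helper sketching the CFG Rescale idea."""
    guided = uncond + scale * (cond - uncond)        # standard CFG
    rescaled = guided * (cond.std() / guided.std())  # match cond's std
    return phi * rescaled + (1.0 - phi) * guided     # blend by phi

# stand-in noise predictions (a real model would produce these per step)
rng = np.random.default_rng(1)
cond = rng.standard_normal(64)
uncond = rng.standard_normal(64)
out = cfg_rescale(cond, uncond, scale=7.5, phi=1.0)
```

With `phi=1.0` the output's standard deviation matches the conditional prediction exactly; lowering `phi` blends back toward plain CFG.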
What's the difference from regular image-to-image?
Did you watch the full video?
@AvizStudio From their Github:
"This node does the reverse of a sampler. It calculates the noise that would generate the image given the model and the prompt."
Img2img would just diffuse the original image directly, while here you recover the noise that would generate that image. The point is to be able to make variations of that image.
@@vintagegenious
Hmm OK interesting
@vintagegenious
Is that equivalent to "guessing the seed number of a given picture"? Pretending the picture was generated?
@@AvizStudio The seed decides the noise you add to the input image latent (with 0.0 denoise you have only the input image, and with 1.0 you have only noise, so txt2img). Here it gives you the latent, not the seed, so you can think of it as finding the best combination of input image latent, denoise, and seed. I'm not sure whether for every latent there exists a seed that would produce it; if that's the case, then you are right.
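The round trip being discussed (image -> noise -> image) can be sketched as a toy DDIM inversion. This is only an illustration of the idea, not ComfyUI's actual Unsampler code: the noise predictor here is a constant stand-in rather than a real diffusion model, which is what makes the reconstruction exact.

```python
import numpy as np

# Toy "unsampler": run the deterministic DDIM update in reverse to recover
# the noisy latent that would denoise back to a given latent.
rng = np.random.default_rng(0)
EPS = rng.standard_normal(4)                 # fixed stand-in "predicted noise"

def eps_pred(x, a):
    # hypothetical predictor; a real model depends on x, the step and the prompt
    return EPS

alphas = np.linspace(0.999, 0.5, 8)          # toy alpha-bar schedule, clean -> noisy

def ddim_move(x, a_from, a_to):
    # one deterministic DDIM update between noise levels a_from and a_to
    eps = eps_pred(x, a_from)
    x0_hat = (x - np.sqrt(1.0 - a_from) * eps) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0_hat + np.sqrt(1.0 - a_to) * eps

def unsample(x0):
    # clean latent -> noise: walk the schedule toward lower alpha-bar
    x = x0
    for a_from, a_to in zip(alphas[:-1], alphas[1:]):
        x = ddim_move(x, a_from, a_to)
    return x

def resample(xT):
    # noise -> clean latent: walk the schedule back (what a sampler does)
    x = xT
    rev = alphas[::-1]
    for a_from, a_to in zip(rev[:-1], rev[1:]):
        x = ddim_move(x, a_from, a_to)
    return x

image = rng.standard_normal(4)               # stand-in for an image latent
noise = unsample(image)
recon = resample(noise)                      # reconstructs the input latent
```

In the real workflow the model's noise prediction changes at every step, so the inversion is only approximate - which is why the resampled image comes out as a close variation rather than a pixel-perfect copy.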
Zoe-DepthMapPreprocessor and LineArtPreprocessor fail to load and fail to import when using the Manager to install missing custom nodes. Is there an alternative for these nodes, and if so, how do I download it? Thanks for any help on this.
You can drop me a dm on www.patreon.com/NerdyRodent 😀
I laughed uncontrollably after the last image generation at the end of the video. Was that still using epic realism, or one of your custom models? That smile had some Grinch vibes too
That’s my girlfriend! Also yes, epic realism there 😉
I was working on exactly this. This is why this rodent... is the man. Now, about the work: a lot of contrast needs to be removed.
What's the advantage of adding noise with the KSampler and giving it another prompt?
Try using the "reference" model + "Canny" for SDXL models. This will give you much more interesting results than the older models (with a high "cfg"). Try taking a reference image from the street and writing "snow" in the prompt text...
Thanks for the tip!
@@NerdyRodent In principle, you don't need to redo anything in your workflow (for SDXL); just replace the old models with the new T2I SDXL Line & Depth ones (I checked - everything works well)
I have a question:
Was it a mistake that you connected the input of the Unsampler from the CLIPTextEncode instead of from the ControlNet output?
I tried using your workflow to make a face look angry, and the output always had high contrast that made it look burned, until I changed the positive and negative inputs of the Unsampler.
You had them connected from the CLIPTextEncode directly to the Unsampler.
Once I connected the Apply ControlNet output to the Unsampler instead, the result became normal, without any contrast issues.
Good spot 😉
@@NerdyRodent Thanks for your reply. Can I ask you another question? I want to learn more techniques like the Unsampler, for changing face details or making edits in latent space, instead of things like inpainting.
Do you know what I should learn?
Sounds as if you’d probably like refacer then! Refacer - Painting to Realistic (and Vice-Versa) in ComfyUI
th-cam.com/video/r7Iz8Ps7R2s/w-d-xo.html
@@NerdyRodent you are great 😃 thanks 🙏
Thanks for the video. Where do I get the ControlNet LoRAs from?
The resources section has links to the StabilityAI control LoRAs, SD models and more!
@@NerdyRodent I found them. Thanks.
I get a huge error at the ksampler Advanced node that starts off "mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)"
You're mixing SDXL and SD 1.5 - the 77x2048 conditioning comes from an SDXL text encoder, while the model expects SD 1.5's 768-dimensional conditioning.
Thanks
And thank you too!
Another great workflow for free. Amazing! I am getting an error on Zoe Depth Map; just bypassing that node, it works. Maybe it's not installed?
Am I dumb, or is the only .json workflow in your link the one for the QR Monster, with the rest being .png files? I can't figure out where to download the ComfyUI workflows.
You can scroll down to find the workflows 😀
I can't find this workflow on the link you provided?
Thanks !
Welcome!
Do you have a download address for the control LoRA models?
My images with ControlNet are coming out extremely polarized and overexposed, even with low strength and negative prompts on a standard 1.5 model. Any advice on how to fix it?
Prompts certainly help for me! Things like “dark” or “high contrast” in +ve, or like I show in the video with -ve prompting
@@NerdyRodent will try, thanks for reply!
Can you do the Animorph book cover transformation?
Is there any way to get ComfyUI running on Linux using ROCm 4.0? I believe that's the latest version supported by my RX 580.
I’ve not got an AMD card, but my guess is that should work just fine! How to Install ComfyUI in 2023 - Ideal for SDXL!
th-cam.com/video/2r3uM_b3zA8/w-d-xo.html
Dude I learn so much from your videos, I have already created 2 music videos with the stuff i learn from you. thank you so much for doing this!
I gotta know - is this really your voice and accent?
No, I’m not actually British but am in fact a space rodent from Alpha Centauri!
@ if this is AI, I have to know which model you used for this voice/accent
Also, is there a way to commission you? (Budget is there.) Let me know if you do consulting; I would love to chat.
Does this work with SDXL?
It's strange: I don't have the resolution input in the Zoe node, so I get an error! I've downloaded the model and still have the error.
Check the troubleshooting section for info on how to fix your local installation. 90% of the time you’ll need to update all 😊
Is there a way to use the Unsampler in Automatic1111?
I think "Noise Inversion" in "Tiled Diffusion" does something similar. Look it up. It's in the img2img tab.
My Unsampler only generates a black image
Changed the sampler. Now it works
It's a great video, but my result image was 512 x 768 without any error, and it wasn't upscaled to a higher resolution when I input a 512 x 768 image using your workflow. I don't know how you can input a lower resolution and then output a higher resolution. You said your image was automatically upscaled to 1136 x 1440; I don't know why I can't do that. 😅 Thanks
Pro tip: you can create this node yourself using the SamplerCustom node that is native to Comfy.
It also allows for more customization.
👋
👋
It works, but the "mtp notes" thing downloaded the missing node and then I could no longer use ComfyUI; it was unresponsive until I deleted it.
I think you're a great teacher... sort of. I like to build these myself, so the .json or a better explanation of the nodes is necessary. It's frustrating getting the abridged version when I would like more in-depth instructions. Please find time to break these down like other ComfyUI TH-camrs do.
You can indeed save the workflow image provided as a .json file if you like! What is it specifically about the Unsampler node that you'd like to know? It basically does just what I show in the video (and as its name suggests!). As for building your own, check out my ComfyUI Essentials video - th-cam.com/video/VM9snsuoqBc/w-d-xo.html
I think you're an amazing teacher. Please keep doing them exactly like you are now, as those long-winded ones are frustrating. Keeping them focused and clear like you do is much better. Thank you for the workflow!
@@MrSporf Some people need additional support. Luckily, I crafted a better workflow after I realized there were too many nodes on screen. It can do everything and requires fewer connections between nodes, using "efficient" nodes instead of the typical ones.
@@CoreyJohnson193 Good for you, well done. I guess you didn't need that extra support after all. Also, the workflow images are much better than the .json files because you can actually see what is going on in the workflow.
@@MrSporf Why not just have both?
Honestly she looks completely different, it's not even the same face at all. Everything is very disproportionate from her mouth to her nose to her eyes and basically everything else. It doesn't look like the same person except at a distance if you squint.
You gotta be blind mate... I've got the workflow and it does an excellent job.
@@tonikunec Does it do a better job than on the girl shown in the video thumbnail? Because if not, then I can safely say you, sir, are the blind one.