It's really useful, man, looking forward to more, and thanks again for your contributions to our ComfyUI community!
More to come!
I've been searching for this type of inpainting video for a long time. It's very helpful. Thanks a lot.
Please do a vid on workflows for fixing feet and hands. This imho seems like the most challenging aspect of AI image gen at this point in time.
You say it's possible to download the UNet SDXL inpaint model, but I'm unsure where. Do you have a link to that file, please?
Cheers
at 8:02 just a feedback: at this timestamp it would be helpful to announce the new node you added and what it is, it almost seems like you had a hard cut there from prior video and skipped over where you introduced that new node in your workflow because it just jumps into talking about previewing output. but i enjoyed the video and thanks for making it.
Hi dear friend, thanks for mentioning it. At that point I added the Inpaint Conditioning node.
It's very important to check the outputs and compare them.
@@ArchAi3D Indeed, and thank you for showing the comparisons. my comment was just that in future videos when you cut to a new node, it may be good to say what it is and any brief elaboration about it and then proceed to demo. but i have subscribed and will see your other videos as well. thank you!
@@YouCanDoItTootorials :)
I will try to do that, thank you for your comment
hello, great videos. just subbed to your patreon. Here, though, I didn't see your introduction of "perturbed guidance"...
I'm noticing that Perturbed Attention Guidance increases contrast and therefore introduces some edge artefacts.
thanks! Why did the first example work without an inpainting model while the second didn't initially? And why did you turn off Differential Diffusion?
At which minute of the video?
By the way, you can now use any model for inpainting, because you have Differential Diffusion :)
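To illustrate why Differential Diffusion lets ordinary (non-inpainting) models inpaint, here is a minimal toy sketch of the core idea: instead of a binary mask, a continuous change map is thresholded against the current denoising progress, so each pixel is only edited for as long as its map value allows. All names, shapes, and values below are illustrative, not ComfyUI's actual API.

```python
import numpy as np

def differential_step(noised_original, denoised, change_map, t):
    """One toy blending step in the spirit of Differential Diffusion.

    change_map in [0, 1]: how strongly each pixel may be edited.
    t in [0, 1]: current denoising progress (1.0 = start, noisiest).
    A pixel keeps being edited while change_map >= t; otherwise it is
    reset to the (appropriately noised) original at this step.
    """
    editing = change_map >= t
    return np.where(editing, denoised, noised_original)

# Illustrative 2x2 change map (values made up for the example).
change = np.array([[0.1, 0.5],
                   [0.9, 1.0]])

early_mask = change >= 0.95  # early step: only the strongest-edit pixel is active
late_mask = change >= 0.05   # late step: almost every pixel is being edited
```

The soft per-pixel schedule is what replaces the dedicated inpainting UNet: the base model just denoises as usual, and the blending decides how much of the original survives at each pixel.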
@@ArchAi3D I'll watch it again and apply on the run. The faces seemed to work perfect and the car struggled a lot, I was wondering what the actual differences were, but let me check again. Thanks
@@ArchAi3D Applied everything again. Thanks a lot! 10/10 information.
I'm considering getting a Patreon membership. What do you think, can I follow along with a GTX 1080 Ti? Does it make sense?
For running ComfyUI it's better to use an RTX card; with your GPU it will work, but slowly.
This is such awesome instruction but please please can you try to STOP ZOOMING AND PANNING CONSTANTLY it is incredibly distracting, and makes the instruction hard to follow.
I will try to make it better
Thank you for your comment