I hate watching tutorials to learn stuff, as I learn by imitation and trial, but your way of explaining is very calm and clear. Thanks a lot.
I've watched dozens of ComfyUI tutorials. You're the best at explaining every step without rambling or pretending we already know the basics. Gonna watch everything you've got on this channel :)
Thank you for the nice feedback - happy to see the tutorial was useful for ya!
Straight to the point and easy to understand~ Thanks for that.
That works, but what if I want to control the size? The "Empty Latent Image" node is now replaced by "VAE Encode", which connects to the latent_image input inside the KSampler node.
How can I fix that?
I copied your whole workflow by following this video and got it. I guess it's way better than the Stable Diffusion rendering process and it saves time. Thank you, sir ❤
What node should we add if we want our own custom output image size? I have the "Empty Latent Image" node, but there's seemingly nowhere to attach it once the "Load Image" node is attached.
I did not manage to do what I wanted, but thanks, I understood the editor. What I'm trying to do is add tattoos to a face without changing it, so I'm stuck with the denoise setting. So I'm back in Photoshop.
Is it possible to do this? Add a layer of detail to an image without changing the original?
Newbie question, but what's your shortcut to search across installed nodes when you create a new node?
Double-click on the workspace and the search box should show up.
You took me off guard. You are a wonderful teacher. Clear, concise and YOU just make sense! :) Liked and subscribed!
Quick question, please: what node do you add for width and height, and where should it be placed? Thank you.
Quick and simple, I love it. Thank you so much for this!
Can you use img2img like Midjourney, where an image is used as "inspiration" for the output image but the whole image changes, instead of being the same image/background/position?
Very good, simple, straight to the point. Thank you, sir. I have a doubt: yes, I want to use a reference image, but I also want to change the aspect ratio, size, and resolution of the output. As we cannot connect the Empty Latent node directly to the KSampler now, how do I do it?
Thank you for the helpful video.
BTW, do you know of any node that works like "sketch" in A1111?
I searched for a long time but haven't found any.
Hi, I have a question: can I change the width and height in image-to-image?
Can you try the image-generation process with an unclear photo? Often the AI cannot perfectly reproduce the details and materials of the original image, such as the style and fabric of the clothes worn by a clothing model.
Is there a way to encode an image but scale/crop it down to a desired aspect ratio in ComfyUI?
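For the recurring size questions above: since "VAE Encode" takes over the job of "Empty Latent Image" in img2img, the usual fix is to resize (and optionally crop) the pixels before encoding, e.g. with the stock Upscale Image (ImageScale) node dropped between Load Image and VAE Encode. A minimal sketch of that fragment in ComfyUI's API (JSON) format, written as a Python dict - the node ids, sizes, and filenames here are placeholders, not anything from the video:

```python
# Sketch only - node ids and the checkpoint loader at id "4" are placeholders.
fragment = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    # Resizing the pixels here is what sets the latent (and output) size,
    # replacing the width/height that Empty Latent Image provided in txt2img.
    "2": {"class_type": "ImageScale",
          "inputs": {"image": ["1", 0],
                     "upscale_method": "lanczos",
                     "width": 768, "height": 1152,
                     "crop": "center"}},  # "center" crops to the new aspect ratio
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0],
                     "vae": ["4", 2]}},  # VAE output of a CheckpointLoaderSimple
}
# The LATENT output of node "3" then plugs into KSampler's latent_image input.
```

In the UI, this is just one extra node wired between Load Image and VAE Encode; no change to the rest of the workflow.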
Thanks for your amazing tutorials! Very straightforward and easy to follow! I've spent days looking for a tutorial that was easy to understand and actually worked! You're definitely the best! I'm a newbie - do you have more tutorials on what each node represents and how to download/set up models, LoRAs, ControlNet, etc.? Also, can we run ComfyUI in Google Colab Pro? Thanks again!
Thanks so much!! This was a clear explanation of img2img. What if I have a pic of myself? I wouldn't have a prompt to start...
Sure you would! 1man, smiling, glasses, etc. - just describe what you see in the picture and that is your prompt :-)
Oh wow... will definitely be testing that. @PromptingPixels
Hi! Great tutorial. Definitely the best on YouTube. Quick question: at what point in this workflow could a LoRA be added?
Great tutorial, thanks buddy. Please do more...
Absolutely - anything in particular you'd like to learn more about?
Yep: text writing or text animation on AI videos. Could you please make a video about it? @PromptingPixels
Thank you for the suggestion - i will look into this and see what I can wrangle. Text is always pretty tough with diffusion models, so there might be some limitations here. Stay tuned.
Thanks @PromptingPixels
Hey - just remembered this comment. Stable Cascade does pretty good text writing in its outputs, though not reliably. May be worth checking out for your project.
Can I add a mask and make changes to only a specific part, say, add headphones?
Yeah, shouldn't be a problem - check out the change-styles video I did - it uses Grounding DINO to apply styles or make changes to specific items like glasses, shirts, etc.
How to Change Clothes with Precision in ComfyUI (IPAdapter & Grounding Dino)
www.youtube.com/watch?v=81IajUc76pY
Perfect and simple explanation ❤🎉 thank you.
WOW~~~~~~ you made me wanna learn this one
What if I want to change a real-life photo to a stylized one while keeping the model's face and pose the same? Example: changing the model's clothes to be futuristic, or changing the background into a forest.
For pose, you'll want to use ControlNet. For the face, either a specially trained LoRA, a face swap, or IP-Adapter (it won't retain likeness but can help with general features). The rest would be prompting.
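On the earlier question of where a LoRA fits into this workflow: it loads between the checkpoint loader and everything that consumes the MODEL/CLIP outputs - the same spot as in txt2img, unaffected by the img2img changes. A rough sketch in the same API format; the ids, filename, and strengths are placeholder assumptions:

```python
# Sketch only - ids and lora_name are placeholders; 0.8 is a typical strength.
fragment = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned.safetensors"}},
    "10": {"class_type": "LoraLoader",
           "inputs": {"model": ["4", 0],   # MODEL from the checkpoint
                      "clip": ["4", 1],    # CLIP from the checkpoint
                      "lora_name": "my_style_lora.safetensors",
                      "strength_model": 0.8,
                      "strength_clip": 0.8}},
}
# Downstream nodes (CLIPTextEncode, KSampler) should then take MODEL/CLIP
# from node "10" instead of directly from node "4".
```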
Thanks a lot for the hint. Earlier I was getting an image that had nothing to do with the sample; it turned out I had forgotten to reduce the denoise from 1 to 0.5. Now everything works as it should.
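If you drive ComfyUI from a script instead of the UI, the same fix is a one-line change to a workflow exported with Save (API Format). A small sketch, assuming the export is named workflow_api.json, the KSampler sits at node id "3" (check your own export), and a default local server:

```python
import json
import requests  # assumes the requests package is installed

# Load a workflow exported with "Save (API Format)".
with open("workflow_api.json") as f:
    wf = json.load(f)

# Denoise below 1.0 keeps part of the source image; at 1.0 the
# input image is effectively ignored, as in the comment above.
wf["3"]["inputs"]["denoise"] = 0.5

# Queue the edited workflow on a default local ComfyUI server.
requests.post("http://127.0.0.1:8188/prompt", json={"prompt": wf})
```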
Any way to use this with Flux? My workflow doesn't use KSampler but rather SamplerCustomAdvanced. I tried to route based on the node names, but it seems to just ignore my input image and simply use the text prompt.
I've been away for a bit but just started taking a look at Flux this evening - super cool stuff!
Still need to learn the basics before I'm ready to share anything here on the channel, but suspect some new videos in the next couple of weeks or so. In the meantime, here's a thread that uses the same `Sampler Custom Advanced` node in an img2img flow that might help:
www.reddit.com/r/StableDiffusion/comments/1eigdbk/img_2_img_with_flux/
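For anyone hitting the same "input image gets ignored" symptom with SamplerCustomAdvanced: two things usually have to change - the latent_image input must come from a VAE Encode instead of an empty latent, and the denoise lives on the BasicScheduler, so at 1.0 the encoded image is wiped no matter how it's wired. A hedged sketch of just those two pieces in API format; every id here is a placeholder for your existing nodes:

```python
# Sketch only - ids are placeholders; noise/guider/sampler stay wired as before.
fragment = {
    "20": {"class_type": "LoadImage",
           "inputs": {"image": "input.png"}},
    "21": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["20", 0],
                      "vae": ["11", 0]}},  # VAE from your existing loader
    # 1) Feed the encoded image in instead of an empty latent:
    "13": {"class_type": "SamplerCustomAdvanced",
           "inputs": {"latent_image": ["21", 0]}},  # other inputs unchanged
    # 2) Lower denoise on the scheduler so the input actually survives:
    "17": {"class_type": "BasicScheduler",
           "inputs": {"model": ["12", 0],
                      "scheduler": "simple",
                      "steps": 20,
                      "denoise": 0.75}},
}
```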
@PromptingPixels thank you!
Can img2img turn an image into Pop Art or Abstract Art?
Yeah, definitely. Easy to do with landscapes, scenes, etc., as you can raise the denoise (probably around 0.8-0.9) and change your prompt accordingly. However, if doing this to characters, people, etc., you'll lose some of their likeness in the process. To counteract this, I think using an IP-Adapter or LoRA may help.
Very nice explanation.
Well done, bro - I liked you!
Perfect video
thank you so much
96% of all ComfyUI videos don't work as shown.
Hopefully this one did? 🤞