I understand English is not your first language; despite that, your videos were the best ones for explaining how to install and use ComfyUI nodes, etc. Showing and using examples and going through the steps is a great help. Thank you very much; you have helped me understand with less pain in the process.
Thx a lot
AGREED!!!! This bloke is good!!!
Thank you for showing the several methods of masking and inpainting!
Thanks for showing off some of the extra nodes and how they work. Automatic masking is so cool.
This is a very good clip, going from easy exercises to progressively harder ones. I practiced it over and over until I memorized it. Thank you.
Thank you, finally a step by step comfyui that is easy to understand without a lot of nodes tangling together, Liked and subscribed
I set up the nodes exactly like you did, but mine also runs the image generation at the top. How do I make it run only the inpainting part? In your video it runs only the inpaint.
Thanks a bunch for sharing this! Hope you'll make more videos about ComfyUI! Can't wait to see more from you.
Thanks, I will definitely create many more videos about ComfyUI!
Thank you ever so much for all the clips na kub. I hardly ever subscribe to a channel nowadays, but I subscribed to yours kub...
Thanks a lot! Are you the same person as @MunkTVDOMUNK? I'm also a fan of that channel!
Thank you so much. I’m munk kub 🙏🏼🙏🏼
Want to learn more about upscaling kub.
@@munkmegtube Oh, I'm also planning for the next episode to be about upscaling. That matches your request perfectly!
Well explained, thanks!
Thank you so much for the step-by-step of how it works. Keep up the great work!
Thank you very much for the video, it's exactly what I was looking for.
very well explained and super clear
Great tutorial, thanks!
Do you know how to inpaint a specific image onto the t-shirt?
Awesome work. Keep it up. I hope to see a future video on how to create different poses using the same face and body
Thx for your request
Thank you for the tutorial, easy to understand 👍👍👍👍
Thank you, lots of great ways to inpaint in ComfyUI !
Very informative and helpful for a comfy UI beginner. Keep up your good work. 😄
Thanks, I will keep creating good ComfyUI clips!
Thanks
Good stuff, keep it up! We love it!
Great vid, but what if I want to create masks for a batch of images?
You can do that with the Load Image Batch node in the WAS Node Suite: github.com/WASasquatch/was-node-suite-comfyui
Thank you a lot for your explanation and comparison it helps a lot to understand.
Do you think CLIPSeg can be used on batches of images?
It really helps! Thanks.
Thanks !
Thanks for the video..hope you will make many more comfyui videos
Sure!
can you also use this with incremental_images or is this mainly for just a single image use case?
Subscribed from Cambodia
I've watched every episode; they're excellent and very interesting. May I ask a couple of questions?
1. For ordinary users, is ComfyUI easier to use than Automatic1111? (I was just about to install 1111 when I found this clip.)
3. Can it train LoRA?
I already answered in the other comment.
Thank you, I learnt so many things.
Glad it was helpful!
Thanks! Before your excellent tutorial I had tried "VAE Encode (for Inpainting)" a lot but got little out of it.
Thank you for sharing
Great job krub. 👍👍👍
Thanks 👍
There are a couple of alternative inpainting nodes that I've found, like ComfyI2I and some custom nodes in the Impact Pack. Do you think they're worth trying out, or are they about the same?
I'm looking to do automatic face and hand fixing. There are FaceDetailer nodes specifically for that in other node packs, but it looks like CLIPSeg as described here kind of just works already. Do you think it makes sense to use CLIPSeg for that, or should I use a more specialized node?
The Impact Pack detailer nodes are super powerful. I will cover them in a future episode because they are very complex.
Excellent!
clipseg is so awesome
Very well explained. Keep it up man!
Good stuff! Thanks!
Thank you for your content. How can I put a logo (or any image) on her shirt?
How do I make a model with a wet t-shirt? Prompts don't seem to work very well. Thanks.
You can use this LoRA (watch EP07 to see how to use it):
civitai.com/models/17391/wet-t-shirt-lora
Manual masking is the way to go for personal images, since most of them are JPEGs and don't have enough data in them; auto/smart masking will make a mess of them. Use CLIPSeg/SAM for your own generations. Like someone else in the comments, I'd like to see a face-swap tutorial in ComfyUI. Thank you.
I understand that CLIPSeg and SAM can look at any image regardless of format, so the JPG thing should not be relevant? (Except that JPG quality is bad.)
@@AIAngelGallery Well, to be honest, they can't figure out depth, or much of anything in medium-quality images. You select/type "a tree in the background" and they select half of the right arm of a person far away from the tree.
Wow, very good tutorial.
Glad that you like it
Great video, Thanks!
You're welcome!
thx man
Could you tell me the full name of the checkpoint used in this clip? The names overlap, so I can't read it. (If you can't say, that's fine.)
It's Epicrealism. The details are in EP01, the very first clip on the channel.
Ah, thank you 🙏@@AIAngelGallery
This guy is living the future, on the 13th of august 2566 to be exact 😆
How do you even manage to set your system time that far into the future?
BTW: Great video, thanks 🙂
Oh, it's the Buddhist Era year: 2023 + 543 = 2566.
oh lol kk
@@AIAngelGallery why is it like that?
@@saltyseadog4719 Because not all countries in the world use the Gregorian calendar. There are many kinds of calendar systems, and in Buddhist countries they use the Buddhist Era (BE) calendar.
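The calendar conversion discussed above is simple arithmetic: the Buddhist Era year is the Gregorian (CE) year plus 543, so 2023 CE is 2566 BE. A minimal sketch in Python:

```python
# Convert between Gregorian (CE) and Buddhist Era (BE) years.
# BE = CE + 543, so 13 August 2023 CE is 13 August 2566 BE.

def ce_to_be(ce_year: int) -> int:
    """Gregorian year to Buddhist Era year."""
    return ce_year + 543

def be_to_ce(be_year: int) -> int:
    """Buddhist Era year to Gregorian year."""
    return be_year - 543

print(ce_to_be(2023))  # → 2566
print(be_to_ce(2566))  # → 2023
```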
Here I am, teacher.
Feel free to ask if you have any questions.
Thank you for all your videos; they are very helpful. I have a problem: when I use Set Latent Noise Mask, nothing happens even with the highest noise values. Do you know why that could be? I can only get results when I use VAE Encode.
fantastic thanks!
Glad you like it!
Hello my Indian friend.
I have been working with ComfyUI for a week and learning a lot about how to do things with it, but I didn't understand why the UI does what it does... Now I found your video, and you teach it so well! Thanks, greetings from Germany 😊
New person question: When you have all of this hooked up to your main workflow and you hit "Queue Prompt" it runs through the whole thing instead of going to the mask subroutine, screwing everything up. I didn't see you bypass anything so how do you avoid that?
Excellent. 👍
Thx ^^
Nice tutorial. How do you inpaint better hands?
Thank you!
You're welcome
Excellent video, thanks! When I inpaint an image generated by the default workflow using copy/paste via clipspace, inpainting is fast, but when I load a new image from disk it is slower, because every time I queue the prompt it "requests to load base model" again. Is there a way to avoid loading the model every time when using an image from disk as the source?
Does this mean that if I use IPAdapter, I will be able to put a logo directly on the t-shirt?
Kind gentleman, can you show how to edit parts of the body using inpainting? The shape of arms, legs, fingers, or even the face. Is this possible?
You're amazing. Thank you very much!
There's a lot of even more amazing stuff than this; I'll teach it bit by bit.
@@AIAngelGallery Looking forward to it ❤
Sometimes inpainting through SetLatentNoiseMask generates much worse results than VAE Encode (for Inpainting). By "worse" I mean that although it stays within the mask, what it generates is not logically good or desired, whereas VAE Encode (for Inpainting) seems to "understand" better what is intended. Have you seen this, or do you have any advice? In your example at 6:51 SetLatentNoiseMask works great, so I don't know why I sometimes get poor results, unless it is the model I am using. Do some models work poorly with inpainting, and can you share your experience on that? Thank you for your channel.
Can LoRA be used with ComfyUI?
Yes, there is a node for loading LoRA too.
@@AIAngelGallery Got it working now, thank you!
@@AIAngelGallery In ComfyUI, what do I do if I want to use more than one model? 🙏🏼🥹 Also still waiting for the upscale episode; I've tried many methods and none of them came out well.
@@munkmegtube Please wait a bit; I'm busy with my main job at the moment (I'm an instructor teaching Excel and Power BI).
How do I press Queue Prompt so that it runs only one KSampler? When I press it, both of them run.
Press Ctrl+M to mute or Ctrl+B to bypass the nodes you don't want.
Thanks, bro.
Could you make a tutorial on how to swap faces with ComfyUI? What I mean is changing the face in a photo to a face taken from another photo. Thanks in advance!
Thx for your request
Hi, please help. You added graphics to the t-shirt using a prompt: you typed "a flower" and it changed into a flower. But what if I have an existing graphic image? How can I add my existing graphic to the t-shirt?
The "Manager" button is not present in my UI. Do you need to enable it somewhere, or have I downloaded the wrong version?
Does this work for SDXL?
Can I have the workflows for all the methods mentioned here, please?
What if I want to put an external image on the shirt?
Hello, is it possible to turn a 2D character into a PSD file, with each body part as a separate image layer/file? How can we do that?
Please teach how to use LoRA. I'm following your channel; it's really great!
I'll teach it in an upcoming episode.
How do you use an image as a mask? ComfyUI masking is not perfect. Let's say you use Photoshop to create a perfect mask, then import the alpha into ComfyUI to mask the image and regenerate that part of it. How do you do that? Can someone help?
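One way to do what the comment above asks: ComfyUI's built-in Load Image node exposes the image's alpha channel as its MASK output, so an external mask can be baked into a PNG's alpha before loading it. Below is a hedged sketch with Pillow; the file names and the `bake_mask_into_alpha` helper are hypothetical, and the mask polarity may need flipping for your workflow (ComfyUI typically treats transparent pixels as the region to regenerate):

```python
# Hypothetical sketch: bake an externally made grayscale mask into a
# PNG's alpha channel so ComfyUI's Load Image node can read it as a MASK.
# Assumes white in mask.png marks the area you want regenerated.
from PIL import Image

def bake_mask_into_alpha(image_path: str, mask_path: str, out_path: str) -> None:
    img = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L").resize(img.size)
    # Invert so white (selected) areas become transparent, i.e. masked.
    alpha = mask.point(lambda v: 255 - v)
    img.putalpha(alpha)          # attach alpha band -> RGBA
    img.save(out_path)           # load this PNG in ComfyUI's Load Image
```

Then load the resulting PNG with Load Image and wire its MASK output into the inpainting nodes as shown in the video.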
Can I use inpainting to change the shirt the model is wearing into a shirt I already have photos of?
You should do that in a photo editor first, then use the result to inpaint again with a fairly low denoise value.
Why does the KSampler node in your video display a realtime preview image? Mine does not 😐 How can I enable this? *EDIT:* Found the solution! Edit the "run_nvidia_gpu.bat" file and add "--preview-method auto" at the end of the launch command. It will show the steps in the KSampler panel, at the bottom ☺
You can also set it in the manager (easier)
@@AIAngelGallery Nice, thx! Now all I wish for is some kind of "switch" or "toggle" node to choose which workflow gets processed when I click "Queue Prompt". Right now it always processes BOTH workflows (inpainting the chosen image and creating a completely new one).
@@MikevomMars If the result would be the same (same fixed seed and inputs), ComfyUI will not regenerate that part.
@@MikevomMars The Comfyroll custom node pack has a switch node; take a look.
Do any Thai people use this?
Hi, are these nodes SDXL compatible?
Nevermind. It all worked alright. Great tutorial man. Very much appreciated! Clipseg and SAM are great!
I have a problem: when I inpaint, it somehow makes the entire image more contrasty and removes detail, and if I keep copying the result and inpainting again, it keeps getting darker and the quality worsens. How can I fix this, and why does it happen?
Does it get darker only in the inpainted area, or the whole picture?
@@AIAngelGallery The whole picture. Only the mask area gets regenerated, but the whole picture gets this contrast increase. I set it up exactly like you did.
@@AIAngelGallery I think it was the VAE I was using.
How do you outpaint in ComfyUI?
A great tutorial. I'm having trouble loading the Impact Pack. The note in Manager says: "please edit the impact-pack.ini file in the ComfyUI-Impact-Pack directory and change 'mmdet_skip = True' to 'mmdet_skip = False'." I can't find the impact-pack.ini file. Can you help? Thank you.
It's in the "ComfyUI-Impact-Pack" folder inside custom_nodes. However, I never needed to edit that file.
To use Manager and the Impact Pack properly, you should update the main ComfyUI and Manager using the "git pull" command.
Does the value of 1234 in the seed have a function? What happens if I change the seed number?
The result will change
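To expand on the seed question above: the seed initializes the random noise that the sampler starts from, so the same seed (with the same settings) reproduces the same image, while any other number gives a different result. A minimal illustration of the principle using Python's `random` module (not ComfyUI's actual sampler, just the same idea of seeded randomness):

```python
import random

def noise(seed: int, n: int = 4) -> list[float]:
    # Same seed -> identical "noise"; different seed -> different noise.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert noise(1234) == noise(1234)   # reproducible with a fixed seed
assert noise(1234) != noise(4321)   # changing the seed changes the result
```

This is also why ComfyUI can skip re-running a KSampler whose fixed seed and inputs are unchanged: the output would be identical.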