segment-anything-2
ai.meta.com/blog/segment-anything-2/
github.com/kijai/ComfyUI-segment-anything-2
Model: huggingface.co/Kijai/sam2-safetensors/tree/main
Save to ComfyUI/models/sam2
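If you prefer scripting the download, a minimal sketch using huggingface_hub (the filename is an assumed example; check the repo listing for the variant you want):

    # Minimal sketch: fetch one SAM2 checkpoint into ComfyUI/models/sam2.
    # Assumes huggingface_hub is installed; sam2_hiera_large.safetensors is
    # an example filename, pick whichever variant you want from the repo.
    from huggingface_hub import hf_hub_download

    hf_hub_download(
        repo_id="Kijai/sam2-safetensors",
        filename="sam2_hiera_large.safetensors",  # assumed variant name
        local_dir="ComfyUI/models/sam2",
    )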
It's fun and interesting seeing the progress Kijai made implementing this model in Comfy. Great explanation @benji!
Yes, he is very quick to implement; whenever a new model releases, he gets a custom node done. 😊
Who?
Good video buddy, you have me opening Comfy and updating workflows... seems like a real upgrade over Impact's SAM 1.
Edit: I needed to change the security level of my Manager to weak to install this.
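For anyone else hitting that, a minimal sketch of the setting, assuming ComfyUI-Manager reads its security option from a config.ini (the path below is an assumed default and may differ by install):

    ; ComfyUI/user/default/ComfyUI-Manager/config.ini  (assumed path)
    [default]
    security_level = weak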
How can we have IPAdapter ignore the background and only change the style of the subjects?
Hey, can I use MimicMotion with this?
Yes, we are using this for MimicMotion.
@TheFutureThinker Can I use image-to-image with MimicMotion?
Is it possible to use the model to segment Stable Diffusion / Midjourney deformed pictures? (multiple fingers, blurry faces, etc.)
Thanks, since you mentioned Segment Anything last time, I like using it more than other SEG methods.
Nice! ☺️
Can you post a video on inpainting with SAM2 and an SD 1.5 model, please?
Thanks for the tutorial, one question though: which is better, AnimateDiff or MimicMotion?
Awesome videos on Florence, thanks for your time creating these.
A quick question: when I use Florence for captions in an AnimateDiff and IPAdapter workflow, I get 2 results:
1- the final animation
2- the animation with the Florence captions.
For some reason the Florence-captioned animation plays much faster even though it is set to the same frame rate (24 fps) as the Video Combine for the plain animation (without the captions).
Any idea why this is happening or how to fix it?
Thanks in advance 🙏
bert-base-uncased does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack. It shows this error; how do I fix it? Thanks.
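One common fix, as a minimal sketch: assuming the node pulls bert-base-uncased from the Hugging Face cache and an earlier download was interrupted, re-fetching the full repo usually clears the missing-weights error.

    # Minimal sketch: re-download bert-base-uncased into the local
    # Hugging Face cache; a partial earlier download is a common cause
    # of the "does not appear to have a file named ..." error.
    from huggingface_hub import snapshot_download

    snapshot_download(repo_id="bert-base-uncased")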
Thanks, I will update my workflow to try SAM2.
Have fun 😉
I was thinking this exact flow when SAM 2 was released.
The combination of both is dynamite. This could also be used with PaliGemma or a finetuned version of Florence 2.
Awesome job. 🎉
Florence 2 FT works well with this; you should give it a try.
Hi, is there a way to output/save only the orange, instead of the mask of the orange?
Yeah, use Cut By Mask.
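For anyone curious what that does under the hood, a minimal sketch, assuming an RGBA frame and a binary mask saved as image files (hypothetical filenames; inside ComfyUI the Cut By Mask node handles this for you):

    import numpy as np
    from PIL import Image

    # Load the frame and the SAM2 mask (hypothetical example files).
    img = np.array(Image.open("frame.png").convert("RGBA"))
    mask = np.array(Image.open("mask.png").convert("L")) > 127

    # Keep pixels only where the mask is on; everything else becomes
    # transparent, so you save the orange itself rather than its mask.
    img[..., 3] = np.where(mask, img[..., 3], 0)
    Image.fromarray(img).save("orange_only.png")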
At 4:00 you say "then you can load up segment anything 2", but the video doesn't show how you loaded the nodes. Could you please explain how you went from the blank screen to the full node setup? I'm stumped on this step. Thank you!
Your system config?
It doesn't work well. Sometimes it doesn't segment well if there are several things in the scene.
Nice explanation.
It's a very complex task.
I tried this with several videos. Some worked great: Florence tracked the dancer fine and SAM2 masked it well. On others Florence again tracked well, but SAM2 only masked part of the dancer, like their shorts. I'm not sure what causes this.
It's better to use SAM2 Large: more parameters to identify objects within the bbox. With SAM2 Small or Plus, I have experienced that problem in some videos and images too.
I noticed it happened when an object was moving at a different angle.
Had the same issue; Large worked better but it still was not perfect.
Edit - yeah, something is wrong with this node set at the moment. I can put "person" in the text box and get only pants; if I put "face", it gives me a person. It doesn't seem to be working as it should.
@aivideos322 I wish there were a node created for both SAM 1 and 2. We could use a drop-down to select which version we want and simplify the node connections; it would need a textbox for the SEG prompt, keeping that idea from the SAM 1 custom node.
@aivideos322 Toggle the individual objects selector.
Is that ComfyUI? Yes, I see it now...
It's Saito-san!