ComfyUI With Meta Segment Anything Model 2 For Image And AI Animation Editing

  • Published on 21 Dec 2024

Comments • 36

  • @TheFutureThinker  4 months ago +7

    segment-anything-2
    ai.meta.com/blog/segment-anything-2/
    github.com/kijai/ComfyUI-segment-anything-2
    Model : huggingface.co/Kijai/sam2-safetensors/tree/main
    Save to ComfyUI/models/sam2
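The links above can be fetched in one step with the `huggingface_hub` Python package (a minimal sketch, assuming that package is installed; the target folder follows the comment's `ComfyUI/models/sam2` path, and exact checkpoint filenames should be checked against the repo listing):

```python
# Sketch: download the SAM2 safetensors repo into ComfyUI's model folder.
# Assumes huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Kijai/sam2-safetensors",
    local_dir="ComfyUI/models/sam2",  # path from the comment above
)
```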

  • @goodie2shoes  4 months ago +4

    It's fun and interesting seeing the progress Kijai has made implementing this model in Comfy. Great explanation @benji!

    • @TheFutureThinker  4 months ago +1

      Yes, he is very quick to implement; whenever a new model releases, he has a custom node done. 😊

    • @AgustinCaniglia1992  4 months ago

      Who?

  • @aivideos322  4 months ago +1

    Good video, buddy; you have me opening Comfy and updating workflows... seems like a real upgrade over Impact's SAM 1.
    Edit: I needed to change my Manager's security level to "weak" to install this.

  • @thegtlab  4 months ago

    How can we have IPAdapter ignore the background and only change the style of the subjects?

  • @MrZuhaib69  3 months ago +1

    Hey, can I use MimicMotion with this?

    • @TheFutureThinker  3 months ago +1

      Yes, we are using this for MimicMotion.

    • @MrZuhaib69  3 months ago +1

      @TheFutureThinker Can I use image-to-image with MimicMotion?

  • @antoniojoaocastrocostajuni8558  4 months ago

    Is it possible to use the model to segment deformed Stable Diffusion / Midjourney pictures? (multiple fingers, blurry faces, etc.)

  • @crazyleafdesignweb  4 months ago

    Thanks; since you mentioned Segment Anything last time, I've liked using it more than other SEG methods.

  • @DP-zw8sb  4 months ago

    Can you post a video on inpainting with SAM2 and an SD 1.5 model, please?

  • @thibaudherbert3144  3 months ago

    Thanks for the tutorial. One question though: which is better, AnimateDiff or MimicMotion?

  • @suzanazzz  3 months ago

    Awesome videos on Florence, thanks for your time creating these.
    A quick question: when I use Florence for captions in an AnimateDiff and IPAdapter workflow, I get two results:
    1. the final animation
    2. the animation with the Florence captions.
    For some reason the Florence-captioned animation plays much faster even though it is set at the same frame rate (24 fps) as the Video Combine for the plain animation (without the captions).
    Any idea why this is happening or how to fix it?
    Thanks in advance 🙏

  • @hefuture  1 month ago

    "bert-base-uncased does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack." It shows this error; how do I fix it? Thanks.

  • @kalakala4803  4 months ago

    Thanks, I will update my workflow to try SAM2.

  • @santicomp  4 months ago

    I was thinking of this exact flow when SAM 2 was released.
    The combination of both is dynamite. This could also be used with PaliGemma or a fine-tuned version of Florence 2.
    Awesome job. 🎉

    • @TheFutureThinker  4 months ago +1

      Florence 2 FT works well with this; you should give it a try.

  • @lionhearto6238  4 months ago

    Hi, is there a way to output/save only the orange, instead of the mask of the orange?

    • @alecubudulecu  4 months ago

      Yeah, cut by mask.
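The "cut by mask" step suggested in this reply can be sketched outside ComfyUI with plain NumPy, assuming the image and its mask are already arrays (the function and variable names here are illustrative, not any node's actual API):

```python
import numpy as np

def cut_by_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the masked region (e.g. the orange); zero out the rest.

    image: H x W x 3 uint8 array; mask: H x W array where nonzero = keep.
    """
    keep = (mask > 0)[..., None]           # broadcast mask over channels
    return np.where(keep, image, 0).astype(image.dtype)

# Tiny demo: a 2x2 "image" where only the top-left pixel is masked in.
img = np.array([[[255, 128, 0], [10, 10, 10]],
                [[20, 20, 20], [30, 30, 30]]], dtype=np.uint8)
msk = np.array([[1, 0], [0, 0]], dtype=np.uint8)
out = cut_by_mask(img, msk)
```

In practice you would also want the background transparent rather than black, which just means stacking the mask on as an alpha channel instead of zeroing pixels.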

  • @adrivgsgt628  4 months ago +1

    At 4:00 you say "then you can load up Segment Anything 2", but it doesn't show how you loaded the nodes. Could you please explain how you went from the blank screen to the full node setup? I'm stumped on this step. Thank you!

  • @__________________________6910  4 months ago

    What's your system config?

  • @Kikoking-y9b  4 months ago +2

    It doesn't work well. Sometimes it doesn't segment well if there are several things in the scene.

  • @OnlySong007  4 months ago

    nice explanation

  • @__________________________6910  4 months ago +1

    It's a very complex task.

  • @weirdscix  4 months ago

    I tried this with several videos. On some it worked great: Florence tracked the dancer fine and SAM2 masked it well. On others Florence again tracked well, but SAM2 only masked part of the dancer, like their shorts. I'm not sure what causes this.

    • @TheFutureThinker  4 months ago +1

      It's better to use SAM2 Large: more parameters to identify objects within the bbox. With SAM2 Small or Plus, I have experienced that problem in some videos and images too.
      I noticed it happened when an object moved to a different angle.

    • @aivideos322  4 months ago +1

      I had the same issue; Large worked better but it still was not perfect.
      Edit: yeah, something is wrong with this node set at the moment. I can put "person" in the text box and get only pants; if I put "face", it gives me a person. It doesn't seem to be working as it should.

    • @TheFutureThinker  4 months ago +1

      @aivideos322 I wish there were a node created for both SAM 1 and 2. We could use a drop-down to select which version we want and simplify the node connections; it would need a textbox for the SEG prompt, keeping that idea from the SAM1 custom node.

    • @authorkevin  4 months ago

      @aivideos322 Toggle the individual objects selector.

  • @Username56291  4 months ago

    Is that ComfyUI? Yes, I see now...

  • @machanmobile4216  4 months ago

    It's Saito-san!