ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

  • Published Dec 27, 2024

Comments • 151

  • @David_Fernandez • 3 days ago +1

    Must say: It's such a pleasure to listen to your calm voice tone. Plus very much appreciated all the info. Thanks!

  • @MindsMystery24 • 1 month ago

    At first I didn't understand why you made the 06:35 Supercharge the Workflow part, but after getting a MemoryError I now know what to do. We need more thinkers like you

  • @Hebrideanphotography • 6 months ago +8

    People like you are so important. Too many gatekeepers out there. ❤

  • @AI.Studios.4U • 2 months ago +1

    Thanks to you I have created my first video using ComfyUI! Your video is priceless!

  • @ZergRadio • 1 month ago

    I really thought this was just gonna be junk like so many other "Video/animation" ones I already tried.
    But I am very impressed by it, simply because it worked.
    And my video came out really nice.
    Subscribed!

  • @1010mrsB • 2 months ago

    You're amazing!! I was lost for so long and when I found this video I was found

  • @jdsguam • 5 months ago +1

    I've been having fun with this workflow for a few days already. It is amazing what can be done on a laptop in 2024.

  • @RokSlana • 3 months ago

    This looks awesome. I gotta give it a try asap. Thanks for sharing.

  • @ted328 • 8 months ago +2

    Literally the answer to my prayers, have been looking for exactly this for MONTHS

  • @stinoway • 3 months ago

    Great video!! Hope you'll drop more knowledge in the future!

  • @SAMEGAMAN • 2 months ago

    Thank you for this video❤❤

  • @alessandrogiusti1949 • 8 months ago +1

    After following many tutorials, you are the only one getting me to the results in a very clear way. Thank you so much!

  • @EternalAI-v9b • 1 month ago

    Hello, how did you make that effect with your eyes at 0:20 please?

  • @gorkemtekdal • 8 months ago +1

    Great video!
    I want to ask: can we use an init image for this workflow like we do in Deforum?
    I need the video to start with a specific image on the first frame, then change through the prompts.
    Do you know how that's possible in ComfyUI / AnimateDiff?
    Thank you!

    • @abeatech • 8 months ago +1

      I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
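
      For anyone wondering how 4 guide images cover 96 frames: below is a minimal plain-Python sketch of the crossfade idea. The keyframe positions and the linear ramp are illustrative assumptions, not the exact math of the workflow's fade-mask nodes.

      import numpy as np

      TOTAL_FRAMES = 96
      KEYFRAMES = [0, 32, 64, 95]  # assumed frames where each guide image dominates

      def image_weights(total_frames, keyframes):
          """Per-frame blend weight for each guide image (each row sums to 1)."""
          weights = np.zeros((total_frames, len(keyframes)))
          for f in range(total_frames):
              # find the two keyframes bracketing frame f and interpolate linearly
              for i in range(len(keyframes) - 1):
                  a, b = keyframes[i], keyframes[i + 1]
                  if a <= f <= b:
                      t = (f - a) / (b - a)
                      weights[f, i] = 1.0 - t
                      weights[f, i + 1] = t
                      break
          return weights

      w = image_weights(TOTAL_FRAMES, KEYFRAMES)
      print(w[0], w[16], w[32])  # guide image 1 fades out as image 2 fades in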

  • @TechWithHabbz • 8 months ago +1

    You about to blow up bro. Keep it going. Btw, I was subscriber #48 😁

    • @abeatech • 8 months ago

      Thanks for the sub!

  • @Ai_mayyit • 7 months ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'
    your video timestamp: 04:20
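
    A note for anyone hitting this: "module 'cv2' has no attribute 'VideoCapture'" usually points to a broken or shadowed OpenCV install rather than the workflow itself. A minimal diagnostic sketch, assuming a standard opencv-python package in ComfyUI's Python environment:

    import cv2

    print(cv2.__version__)               # e.g. 4.x.x
    print(cv2.__file__)                  # should be in site-packages, not a stray local cv2.py
    print(hasattr(cv2, "VideoCapture"))  # False usually means opencv-python needs reinstalling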

  • @CoqueTornado • 8 months ago +1

    great tutorial, I am wondering... how much VRAM does this setup need?

    • @abeatech • 8 months ago +1

      I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi

    • @CoqueTornado • 8 months ago

      @@abeatech thank you!! will try the two suggestions! congrats for the channel!

  • @paluruba • 8 months ago +2

    Thank you for this video! Any idea what to do when the videos are blurry?

    • @jesseybijl2104 • 7 months ago

      Same here, any answer?

  • @andrruta868 • 6 months ago

    The transitions between images are too fast for me, and I couldn't find where to adjust the transition time. I'd be grateful for advice.

  • @EmoteNation • 4 months ago

    Bro, you're doing a really good job. I have only one question:
    in this video you did image-to-video morphing, so can you do video-to-video morphing?
    Or can you make a morphing video using only text / a prompt?

  • @SylvainSangla • 8 months ago

    Thanks a lot for sharing this, very precise and complete guide! 🥰
    Cheers from France !

  • @dmitrykonovalov9366 • 1 day ago

    nice! why did you stop making more tutorials?

  • @hoptoad • 6 months ago

    This is great!
    Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and does a new animation with different source images multiple times?

  • @SF8008 • 8 months ago +1

    Amazing! Thanks a lot for this!!!
    btw - which nodes do I need to disable in order to get back to the original flow? (the one that is based only on input images and not on prompts)

  • @ComfyCott • 7 months ago

    Dude I loved this video! You explain things very well and I love how you explain in detail as you build out strings of nodes! subbed!

  • @juliensylvestreeee • 3 months ago

    Nice tutorial, even if it was very hard for me to set up. Which SD 1.5 model do you recommend installing? I just want to morph input images with a very realistic render. If someone could help :3

  • @user-yo8pw8wd3z • 7 months ago

    Good video. Where can I find the link to the additional video masks? I don't see it in the description.

  • @AlderoshActual-z3k • 5 months ago

    Awesome tutorial! I've been getting used to the ComfyUI workflow...love the batch image generation!! However, do you have any tips on how to make LONGER text to video animations? I've seen several YT channels that have very long format morphing videos...well over an hour. I'd like to create videos that average around 1 minute, but can't sort out how to do it!

  • @MariusBLid • 8 months ago +1

    Great stuff man! Thank you 😀 What are your specs btw? I only have 8GB of VRAM

  • @pedrobrandao7664 • 5 months ago

    Great tutorial

  • @aslgg8114 • 8 months ago +1

    What should I do to make the reference image persistent?

  • @GNOM_ • 4 months ago +1

    Hello! Big thanks to you, bro. I learned how to make different animations from your video. I watched many other tutorials, but they didn't work for me. You explained everything very clearly. Tell me, can I insert motion masks myself, or do I have to insert link addresses only? Are there any other websites with different masks? Greetings from UKRAINE!!!

    • @tadaizm • 4 months ago

      Did you figure it out?

    • @GNOM_ • 3 months ago

      @@tadaizm Yes, I figured it out. Just copy your mask as a path and paste it. Unfortunately there are few masks, and downloading other masks is a problem too; they're hard to find.

  • @chinyewcomics • 7 months ago +1

    Hi, does anybody know how to add more images to create a longer video?

  • @petertucker455 • 6 months ago

    Hi Abe, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining overall style? Also have you got this workflow to work with SDXL?

  • @retrotiker • 4 months ago

    Great tutorial! Your content is super helpful. Just wondering, where are you these days? We'd love to see more Comfy UI tutorials from you!

  • @lucagenovese7207 • 6 months ago

    Insane!!!!! Ty so much!

  • @Injaznito1 • 7 months ago

    NICE! I tried it and it works great. Thanx for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanx!

  • @goran-mp-kamenovic6293 • 6 months ago

    At 5:30, what do you do to see the duration? :)

  • @yannickweineck4302 • 1 month ago

    In my case it doesn't really use the images I feed it. I've tried to find the settings that would result in almost no morph, with all 4 original images basically standing still, but I can't seem to find them.

  • @mcqx4 • 8 months ago +1

    Nice tutorial, thanks!

    • @abeatech • 8 months ago +1

      Glad it was helpful!

  • @Murdalizer_studios • 5 months ago

    nice bro. Thank you🖖

  • @Danaeprojectful • 2 months ago

    Hi, I would like the first and last frames to exactly match the images I uploaded, without being reinterpreted. Is this possible? If so, how should I do it? Thanks

  • @GiancarloBombardieri • 6 months ago

    It worked fine, but now it throws an error at the Load Video Path node. Is there any update?

  • @cabb_ • 8 months ago

    ipiv did an incredible job with this workflow! Thanks for the tutorial.

  • @evgenika2013 • 6 months ago

    Everything is great, but I get a blurry result on my horizontal artwork. Any suggestion on what to check?

  • @0x0abb • 1 day ago

    I have the workflow working but my videos look very uninteresting - too abstract and the matte animation is very obvious

  • @ollyevans636 • 5 months ago

    I don't have an ipadapter folder in my models folder; should I just make one?

  • @axxslr8862 • 8 months ago +1

    In my ComfyUI there is no Manager option... help please

    • @ESLCSDivyasagar • 7 months ago

      Search on YouTube for how to install it.

  • @ywueeee • 8 months ago

    Could one add some kind of IPAdapter to add your own face to the transform?

  • @Caret-ws1wo • 7 months ago +2

    Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown lol. Is there a reason for this?

    • @DanielMatotek • 12 days ago

      Same. Did you ever figure it out?

    • @Caret-ws1wo • 11 days ago

      @@DanielMatotek This was a while ago, but I believe I changed models.

  • @ImTheMan725 • 7 months ago +1

    Why can't you morph 20/50 pictures?

  • @juginnnn • 4 months ago

    How can I fix "Motion module 'AnimateLCM_sd15_t2v.ckpt' is intended for SD1.5 models, but the provided model is type SD3."?

  • @CarCrashesBeamngDrive • 7 months ago

    cool, how long did it take you?

  • @efastcruex • 7 months ago

    Why is my generated animation very different from the reference images?

  • @人海-h5b • 8 months ago +2

    Help! I encountered this error while running it
    Error occurred when executing IPAdapterUnifiedLoader:
    module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech • 8 months ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
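
      If you want to check which family a checkpoint belongs to before loading it, here is a minimal sketch. The key-prefix heuristic reflects the usual SD1.5 vs SDXL state-dict layouts, and the file path is hypothetical:

      from safetensors import safe_open

      def looks_like_sdxl(path):
          # SDXL checkpoints carry 'conditioner.embedders.*' keys;
          # SD1.5 keeps its single CLIP text encoder under 'cond_stage_model.*'
          with safe_open(path, framework="pt") as f:
              return any(k.startswith("conditioner.embedders") for k in f.keys())

      print(looks_like_sdxl("models/checkpoints/my_model.safetensors"))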

  • @MichaelL-mq4uw • 8 months ago

    Why do you need ControlNet at all? Can it be skipped so it morphs without any mask?

  • @Halfgawd_Halfdevil • 8 months ago

    Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? Also, I have noticed a Shutterstock overlay near the bottom of the clip; it is translucent but noticeable, and it kind of ruins everything. Any way to eliminate that artifact?

  • @damird9635 • 6 months ago

    Working, but when I select "plus high strength", I get a CLIP Vision error. What am I missing? I downloaded everything... is ViT-G the problem for some reason?

  • @velvetjones8634 • 8 months ago

    Very helpful, thanks!

    • @abeatech • 8 months ago

      Glad it was helpful!

  • @Cats_Lo_Ve • 6 months ago

    How can I get a progress bar at the top of the screen like yours? I had to reinstall ComfyUI completely for this workflow. I installed Crystools, but the progress bar doesn't appear on top :/ Thank you for your video, you are a god!

  • @rayzerfantasy • 3 months ago

    How much GPU VRAM is needed?

  • @rowanwhile • 8 months ago

    Brilliant video. thanks so much for sharing your knowledge.

  • @produccionesvoid • 7 months ago

    When I click Install Missing Nodes in the Manager, it doesn't work and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser."... what can I do about that?

  • @CS.-ph2fr • 6 months ago

    How do I add more than 4 images?

  • @yomi0ne • 1 month ago

    Copying the video address of the animation doesn't work; it copies a .webm link. Please help :(

  • @frankiematassa1689 • 7 months ago

    Error occurred when executing IPAdapterBatch:
    Error(s) in loading state_dict for ImageProjModel:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
    I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.

  • @TinyLLMDemos • 7 months ago

    Where do I get your input images?

  • @MSigh • 8 months ago

    Excellent! 👍👍👍

  • @kwondiddy • 7 months ago

    I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name", and "value not in list: vae_name:".
    I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?

  • @saundersnp • 8 months ago

    I've encountered this error: Error occurred when executing RIFE VFI:
    Tensor type unknown to einops

  • @tetianaf5172 • 7 months ago

    Hi! I have this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use a 1.5 checkpoint. Please help

  • @WalkerW2O • 7 months ago

    Hi Abe aTech, very informative, and I like your work very much.

  • @randomprocess7876 • 3 months ago

    Anybody know how to scale this to more than 4 images? I've tried, but the masks from the cloned nodes are messing up the animation.

  • @TinyLLMDemos • 7 months ago

    How do I kick it off?

  • @SapiensVirtus • 6 months ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance

  • @ellopropello • 4 months ago

    How awesome is that!
    But what needs to be done to get rid of these errors:
    When loading the graph, the following node types were not found:
    ADE_ApplyAnimateDiffModelSimple
    VHS_SplitImages
    SimpleMath+
    ControlNetLoaderAdvanced
    ADE_MultivalDynamic
    VHS_VideoCombine
    BatchCount+
    ADE_UseEvolvedSampling
    FILM VFI
    RIFE VFI
    Color Correct (mtb)
    VHS_LoadVideoPath
    IPAdapterUnifiedLoader
    ACN_AdvancedControlNetApply
    ADE_LoadAnimateDiffModel
    ADE_LoopedUniformContextOptions
    IPAdapterAdvanced
    CreateFadeMaskAdvanced

  • @AlexDisciple • 7 months ago

    Thanks for this. Do you know what could be causing this error: Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead

    • @AlexDisciple • 7 months ago

      I figured out the problem: I was using the wrong ControlNet. I am having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?

    • @AlexDisciple • 7 months ago

      OK, found the solution here too: I was using a photorealistic model, which the workflow somehow doesn't seem to like. Switching to Juggernaut fixed it.

  • @MACH_SDQ • 7 months ago

    Goooooood

  • @Blaqk_Frozste • 4 months ago

    I copied pretty much everything you did, and my animation outputs look super low quality?

  • @DanielMatotek • 12 days ago

    Tried for ages but couldn't make it work; every image is very pixelated and crazy. Cannot work it out.

  • @cohlsendk • 8 months ago

    Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 is messing up the FadeMask -.-''

    • @cohlsendk • 8 months ago

      Got it :D

  • @devoiddesign • 8 months ago

    Hi! Any suggestion for a missing IPAdapter? I'm confused because I didn't get an error to install or update, and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node.
    !!! Exception during processing!!! IPAdapter model not found.
    Traceback (most recent call last):
    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
    Exception: IPAdapter model not found.

    • @tilkitilkitam • 8 months ago

      same problem

    • @tilkitilkitam • 8 months ago +1

      ip-adapter_sd15_vit-G.safetensors - install this from the manager

    • @devoiddesign • 8 months ago

      @@tilkitilkitam Thank you for responding.
      I already had the model installed, but it was not being seen. I ended up restarting Comfy completely after updating everything from the Manager, instead of only doing a hard refresh, and that fixed it.

  • @zarone9270 • 8 months ago

    thx Abe!

  • @yakiryyy • 8 months ago

    Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images.
    The results I get are pretty different from the reference images.
    Am I wrong in my assumption?

    • @abeatech • 8 months ago

      You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.

    • @efastcruex • 7 months ago

      @@abeatech Is there any way to make the result more like the reference images?

  • @creed4788 • 8 months ago

    VRAM required?

    • @Adrianvideoedits • 7 months ago

      16GB for upscaled output

    • @creed4788 • 7 months ago

      @@Adrianvideoedits Could you make the videos first and then close and load the upscaler to improve the quality, or does it have to be all together so it can't be done in 2 different workflows?

    • @Adrianvideoedits • 7 months ago

      @@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.

  • @balibike9024 • 4 months ago

    I've got an error message
    Error occurred when executing IPAdapterUnifiedLoader:
    IPAdapter model not found.
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
    raise Exception("IPAdapter model not found.")
    What should I do?

    • @balibike9024 • 4 months ago

      Success now!
      I re-installed ip-adapter_sd15_vit-G.safetensors from the manager.

  • @0x0abb • 11 days ago

    I may be missing something, but the workflow is different, so it's not working.

    • @0x0abb • 8 days ago

      I finally realized I had the wrong workflow file... it's working now.

  • @Adrianvideoedits • 7 months ago +1

    You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.

    • @7xIkm • 6 months ago

      idk maybe a seed? efficiency nodes?

    • @rudyNok • 4 months ago

      Hey man, not sure, but looks like there's this node in the workflow called Seed (rgthree) and it seems clicking the bottom button on this node called Use last queued seed does the trick. Try it.
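
      For context on why re-using the last queued seed reproduces a batch: the initial latent noise is fully determined by the seed, so fixing the seed fixes the frames. A minimal PyTorch sketch of that property (the seed value and latent shape are arbitrary):

      import torch

      def initial_noise(seed, shape=(1, 4, 64, 64)):
          gen = torch.Generator().manual_seed(seed)
          return torch.randn(shape, generator=gen)

      a = initial_noise(1234)
      b = initial_noise(1234)
      print(torch.equal(a, b))  # True: same seed -> same noise -> same frames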

  • @pro_rock1910 • 8 months ago

    ❤‍🔥❤‍🔥❤‍🔥

  • @artificiallyinspired • 6 months ago

    "it's nothing too intimidating" then continues to show a workflow that takes up the entire screen. Lol! thanks for this tutorial, i've been looking for something like this days now. I'm switching from A1111 to comfy UI and the changes are a little more intimidating to get a handle on things than I originally expected. Thanks for this.

    • @artificiallyinspired • 6 months ago

      I get this weird error when it gets to the ControlNet, not sure if you know what's wrong? 'ControlNet' object has no attribute 'latent_format'. I have the QR Code ControlNet loaded.

    • @eyoo369 • 5 months ago +1

      @@artificiallyinspired Make sure it's the same name. A good habit whenever I load a new workflow is to go through all the nodes where a model or LoRA is selected and make sure the one I have locally is the one checked. Not everyone follows the same naming conventions: sometimes you might download a workflow where someone's IPAdapter is named "ip-adapter_plus.safetensors" while yours is "ip-adapter-plus.safetensors". Always good to re-select.
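
      A minimal sketch of that habit as a script: it lists every model filename referenced by a saved workflow JSON so you can compare them against your local files. It assumes ComfyUI's standard saved-workflow layout ("nodes" entries with "widgets_values"); the workflow filename is hypothetical.

      import json
      from pathlib import Path

      MODEL_EXTS = (".safetensors", ".ckpt", ".pt", ".pth")

      def referenced_models(workflow_path):
          graph = json.loads(Path(workflow_path).read_text())
          names = set()
          for node in graph.get("nodes", []):
              values = node.get("widgets_values") or []
              if isinstance(values, list):
                  names.update(v for v in values
                               if isinstance(v, str) and v.lower().endswith(MODEL_EXTS))
          return names

      for name in sorted(referenced_models("morph_workflow.json")):
          print(name)  # re-select any of these that don't match your local filenames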

  • @rooqueen6259 • 7 months ago

    Has anyone else come across this? The loading of 2 new models stops at 0%; in another case for me, the loading of 3 new models reached 9% and no longer continued. What is the problem? :c

  • @vivektyagi6848 • 3 months ago

    Awesome, but could you slow it down, please?

  • @人海-h5b • 8 months ago +1

    Help! I encountered this error while running it

    • @人海-h5b • 8 months ago +1

      Error occurred when executing IPAdapterUnifiedLoader:
      module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech • 8 months ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).

    • @Halfgawd_Halfdevil • 8 months ago

      @@abeatech It says in the note to install it in the clip vision folder, but that is not it: none of the preloaded models are there, and the new one installed there does not appear in the dropdown selector. So if it is not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just have the IPAdapter Plus node?

  • @ErysonRodriguez • 8 months ago

    Noob question: why are my results so different from my inputs?

    • @ErysonRodriguez • 8 months ago

      I mean, the images I loaded produce a different output instead of transitioning.

    • @abeatech • 8 months ago

      The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and the LCM LoRA selected in the settings module.

  • @3djramiclone • 8 months ago

    This is not for beginners; put that in the description, mate.

    • @kaikaikikit • 7 months ago

      What are you crying about... go find a beginner class if this is too hard to understand...

  • @nonprofit7163 • 5 months ago

    Did anyone else run into some errors while following this video?

  • @suetologPlay • 6 months ago

    It's completely unclear what you did there! You just quickly clicked through everything and said, "Look what I got." You didn't show where, what, or how.

  • @anthonydelange4128 • 6 months ago

    its morbing time...

  • @goran-mp-kamenovic6293 • 6 months ago

    Error occurred when executing CheckpointLoaderSimple:
    'model.diffusion_model.input_blocks.0.0.weight'
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI
    odes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
    model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
    unet_config = detect_unet_config(state_dict, unet_key_prefix)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
    model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ :P

  • @financialjourney4u • 24 days ago

    Thanks for this. I've followed the steps shown, but I'm seeing this error message; what am I doing wrong here?
    Failed to validate prompt for output 53:
    * CheckpointLoaderSimple 564:
    - Value not in list: ckpt_name: 'SD1.5\juggernaut_reborn.safetensors' not in ['dreamshaper_8.safetensors', 'flux1-schnell-bnb-nf4.safetensors', 'juggernaut_reborn.safetensors', 'realvisxlV50_v50LightningBakedvae.safetensors', 'revAnimated_v2Rebirth.safetensors']
    * LoraLoaderModelOnly 563:
    - Value not in list: lora_name: 'SD1.5\Hyper-SD15-8steps-lora.safetensors' not in ['AnimateLCM_sd15_t2v_lora.safetensors', 'Hyper-SD15-8steps-lora.safetensors', 'flux1-redux-dev.safetensors', 'v3_sd15_adapter.ckpt', 'vae-ft-mse-840000-ema-pruned.ckpt']
    Output will be ignored