IOJ
Animate-X versus UniAnimate for Image Animation & Image to Image Pose Transfer
This video introduces Animate-X, an image animation project, and compares it with UniAnimate, which was introduced in an earlier video. The ComfyUI nodes for Animate-X are included in the ComfyUI-UniAnimate-W GitHub repository, so you can get them by installing or updating that repository with the ComfyUI Manager.
Here's the link to the github repository:
github.com/Isi-dev/ComfyUI-UniAnimate-W
Here's the link to download the checkpoint (animate-x_ckpt.pth):
huggingface.co/Shuaishuai0219/Animate-X/tree/main
If you have already installed UniAnimate nodes, then all you need to do to get the Animate-X nodes is to update ComfyUI-UniAnimate-W via the ComfyUI Manager. Search using:
UniAnimate Nodes for ComfyUI
You can watch this video to know more about the installation:
th-cam.com/video/NFnhELV4bG0/w-d-xo.html
You can read a written installation guide by clicking any of the following links:
medium.com/@isinsemail/windows-installation-guide-for-unianimate-and-animate-x-in-comfyui-d69cc2e6b5d1
penioj.blogspot.com/2024/12/windows-installation-guide-for.html
Here are the links to all the image animation projects mentioned in the video:
Animate-X: lucaria-academy.github.io/Animate-X/
UniAnimate: unianimate.github.io/
MimicMotion: tencent.github.io/MimicMotion/
MusePose: github.com/TMElyralab/MusePose
If you’d like to support my work, you can buy me a coffee here: buymeacoffee.com/isiomo
Views: 2,633

Videos

Blazing Fast Image Transformation with ComfyUI - My Ultimate image to image Workflows
612 views · 21 days ago
I now present my ultimate image-to-image ComfyUI workflows. These are three workflows that specialize in different aspects of image style transformation. Below are the links to the GitHub repositories, models, LoRAs, and ControlNets used in the workflows. Main GitHub repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows GitHub repository for Image Painting Assistant node...
Is X-Portrait2 really that good? How does it compare with LivePortrait & others?
447 views · 1 month ago
This is a comparison between the recently released X-Portrait2 and other face animation tools like LivePortrait. The code and models for X-Portrait2 have not yet been released; I plan to implement a ComfyUI node when they are, if no one else does so in time.
Easily create 512x512 Driving Video for LivePortrait (Face Animation) in ComfyUI
1.3K views · 1 month ago
This is a brief overview of a new ComfyUI node called 'Video for LivePortrait', which creates a driving video from a source video so you get the best results in LivePortrait. GitHub repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows If you’d like to support my work, you can buy me a coffee here: buymeacoffee.com/isiomo
Easily Create Video Compilations in ComfyUI (videos to video)
1.5K views · 2 months ago
This video introduces a new ComfyUI custom node called 'Join Videos' and shows how it can facilitate the compilation of videos to form a new video. Github repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows If you’d like to support my work, you can buy me a coffee here: buymeacoffee.com/isiomo
ComfyUI Inspyrenet Rembg Assistant node
722 views · 3 months ago
Quickly Create Paintings & Cartoons from Images & Videos in ComfyUI
383 views · 3 months ago
New ComfyUI UniAnimate Nodes for Image to Video and Image to Image Pose Transfer
8K views · 3 months ago
Comparing current image to sketch or line art nodes in ComfyUI
404 views · 3 months ago
Introducing ComfyUI Image to Drawing Assistants
769 views · 3 months ago
Installation Guide for ComfyUI UniAnimate Nodes
1.9K views · 4 months ago
Basic ComfyUI workflow for UniAnimate (An image animation ai project)
4.1K views · 4 months ago
From dusk to dawn (stable diffusion animation) #ai #blender #stablediffusion #film #animation
52 views · 9 months ago

Comments

  • @Falkonar
    @Falkonar 3 days ago

    What is the workflow for creating a 1-minute-long video? Should I render 32-frame animations and stick them together? Do I need to use the last frame for the next sequence? I figured out that 32 frames is the maximum for consistency. EchoMimic has the same issue, but at 97 frames.

    • @Isi-uT
      @Isi-uT 3 days ago

      Using first frame conditioning is an option: set useFirstFrame to True and use the last frame of your generated video as your image in the next generation. You can easily use the last frame as an image by uploading the generated video and, assuming it's 32 frames, skipping the first 31 frames in the video upload node. For resolution_x=512, I would advise that the initial image be 512x768. 32 frames seems to be the ideal for most of these video generation tools, but I have gotten good results with 48 frames using Animate-X and 120 frames with UniAnimate-Long, and I could probably go higher. A lot depends on the seed, image and video quality & resolution.
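      A minimal sketch of the last-frame hand-off described above (my own illustration, not part of the node pack; it assumes OpenCV is installed and the file names are just placeholders):

      import cv2

      def save_last_frame(video_path, out_path):
          # Decode the clip to the end and keep the final frame as the next segment's input image.
          cap = cv2.VideoCapture(video_path)
          last = None
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              last = frame
          cap.release()
          if last is None:
              raise ValueError(f"No frames decoded from {video_path}")
          cv2.imwrite(out_path, last)

      save_last_frame("segment_001.mp4", "segment_001_last_frame.png")

      (Alternatively, as described above, the video upload node can simply skip the first 31 frames of a 32-frame clip.)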

    • @Falkonar
      @Falkonar 2 days ago

      @@Isi-uT I agree, but the last frame could be blurry, and each step would degrade the video quality further. If I upscaled every 32nd frame, the video would blink a little at each upscaled image. Maybe Framer could help somehow. I keep thinking there may be more options, but I can't imagine what they are.

    • @Isi-uT
      @Isi-uT 1 day ago

      Considering the issues you raised, the alternative is to run as many frames as possible, probably skipping some frames and using video interpolation to later make up, and also using faceswap or restoration to ensure consistency in the face. I sometimes have to manually edit some of the frames in my generated video.

    • @Falkonar
      @Falkonar 23 hours ago

      @@Isi-uT Thank you for your advice. I will try restoration and interpolation. Maybe a resampling technique.

    • @Isi-uT
      @Isi-uT 12 hours ago

      You're welcome.

  • @Falkonar
    @Falkonar 4 days ago

    Does Animate-X work only with 512? I get no motion for some reason at 768.

    • @Isi-uT
      @Isi-uT 4 days ago

      Yes, all the tests I have done also suggest that to be the case. I think the main Animate-X repo only showed inferences at 512. I included 768 in the ComfyUI version because I assumed it would also work, since it was built on UniAnimate and others, but it seems the Animate-X model may have been trained only on 512 resolution.

  • @sembarangsembarang2582
    @sembarangsembarang2582 5 days ago

    Hey, could you make a tutorial for installing Animate-X on your computer? I have tried it, but ran into many errors.

    • @Isi-uT
      @Isi-uT 5 days ago

      I have already made a tutorial for installing unianimate which can also be followed to install animate-x as they were both implemented in the same github repository. You can watch it here: th-cam.com/video/NFnhELV4bG0/w-d-xo.html You can also read the written installation guide from any of the following sites: penioj.blogspot.com/2024/12/windows-installation-guide-for.html medium.com/@isinsemail/windows-installation-guide-for-unianimate-and-animate-x-in-comfyui-d69cc2e6b5d1

    • @sembarangsembarang2582
      @sembarangsembarang2582 5 days ago

      @Isi-uT thanks

    • @Isi-uT
      @Isi-uT 5 days ago

      You're welcome.

  • @AndroKarpo
    @AndroKarpo 5 days ago

    Hi, where can I download these workflows? I didn't find them in your github profile. And where can I download the prompt style encoder node?

    • @Isi-uT
      @Isi-uT 5 days ago

      You can install the nodes by searching for ComfyUI-Animation_Nodes_and_Workflows with the comfyui manager. After installation, you will find the workflows in Comfyui/custom_nodes/Comfyui-Animation_Nodes_and_Workflows/animationWorkflows/img2img.

  • @kayTran-y1v
    @kayTran-y1v 6 days ago

    Can you make a comparison video between MimicMotion, StableAnimator, UniAnimate and Animate-X? I feel really confused by all these models.

    • @Isi-uT
      @Isi-uT 5 days ago

      I've not tried MimicMotion and StableAnimator because their memory requirements look too high for my system. I will probably make a comparison video when I get a new one.

    • @imoon3d
      @imoon3d 1 day ago

      They are all pretty much the same and still often have errors in the hands, feet, face, and clothing.

  • @王方涛
    @王方涛 7 days ago

    Can you give me some guidance on how to solve this? Animate_X_Image_Long: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (1, 1).

    • @Isi-uT
      @Isi-uT 6 days ago

      You can see a possible solution here: github.com/Isi-dev/ComfyUI-UniAnimate-W/issues/23

    • @王方涛
      @王方涛 6 days ago

      @@Isi-uT thank you for your reply

    • @Isi-uT
      @Isi-uT 6 days ago

      You're welcome

  • @王方涛
    @王方涛 7 days ago

    May I ask: in which folder should I put 'animate-x_ckpt.pth'?

    • @王方涛
      @王方涛 7 days ago

      ComfyUI\custom_nodes\ComfyUI-UniAnimate-W-main\checkpoints

    • @王方涛
      @王方涛 7 days ago

      Found it

  • @KwaiBun
    @KwaiBun 7 days ago

    Thanks a lot for sharing, it's amazing! I found that Animate-X doesn't move the character's legs much, while UniAnimate does well. I tested both the repose and long video workflows and they show the same problem. My character is humanoid but with a big head and kid-like body proportions. Did you run into a similar problem?

    • @Isi-uT
      @Isi-uT 7 days ago

      Thanks. I had similar observations. Animate-X sometimes has issues following the pose but tends to maintain the character's appearance better than UniAnimate. There were other pose-related functions in the main Animate-X repo which I have not yet implemented in the ComfyUI nodes, because they would lead to much higher RAM & VRAM usage and I don't yet know if they would make a significant difference in the pose. I will decide whether to implement them after I test the main repo.

    • @KwaiBun
      @KwaiBun 7 days ago

      @@Isi-uT Thanks, looking forward to it!! I used the same pig monster image from the Animate-X page example, then put it into my ComfyUI Animate-X. The generated video/repose doesn't follow the pose as well as shown on their demo page, so I suspect more is needed to reproduce the original Animate-X results.

    • @Isi-uT
      @Isi-uT 7 days ago

      Thanks for the info. I have not yet made comparisons with the videos on the demo page. I will do so soon and modify the code where necessary.

  • @upsidedownhorror
    @upsidedownhorror 7 days ago

    I hope a company like Leonardo implements this on their webpages soon. I don't have an RTX card, so I cannot try this ☹

    • @Isi-uT
      @Isi-uT 7 days ago

      It would be amazing if platforms like Leonardo integrated something similar for wider accessibility. Hopefully, more cloud-based options become available.

  • @CrustyHero
    @CrustyHero 8 days ago

    How much time does it take?

    • @Isi-uT
      @Isi-uT 8 days ago

      For a 512x768 output resolution, 32 frames take around 8 minutes.

  • @PhantasyAI0
    @PhantasyAI0 10 days ago

    It'd be cool if you could do Hunyuan AI video. I'm trying to figure out all the settings for it with the ComfyUI-HunyuanVideoWrapper by Kijai. Please check it for us, for i2v and v2v (image to video/video to video).

    • @Isi-uT
      @Isi-uT 8 days ago

      I would've loved to, but my system does not meet the memory requirements.

  • @mrboofy
    @mrboofy 10 days ago

    hey! what is the maximum number of seconds an animation can last? thanks!

    • @Isi-uT
      @Isi-uT 10 days ago

      I think it depends on your VRAM.

    • @mrboofy
      @mrboofy 10 days ago

      @@Isi-uT Are you in Banodoco? I have a 4090; I'm going to try it. I want to try that output with other workflows. At the company where I work we tried Muse a while ago, but it failed a lot.

    • @Isi-uT
      @Isi-uT 8 days ago

      @@mrboofy No I'm just hearing of banodoco. Your system is perfect for all the workflows.

  • @clunatic
    @clunatic 11 days ago

    Thanks for all your tutorials ❤ Still getting this error "Name conflict with AIFSH/ComfyUI-UniAnimate. Cannot install simultaneously". Any suggestions? Thanks!

    • @Isi-uT
      @Isi-uT 11 days ago

      Thanks for your comment. AIFSH/ComfyUI-UniAnimate is another comfyui repository on Unianimate and it's advisable that you install only one, that's why the warning comes up when you want to install or update the nodes via the comfyui Manager. If you haven't installed AIFSH/ComfyUI-UniAnimate, then you can go ahead and install or update the 'unianimate nodes for comfyUI.'

    • @clunatic
      @clunatic 11 days ago

      @@Isi-uT Thanks for your swift reply. I don't have AIFSH/ComfyUI-UniAnimate installed but the ComfyUI Manager still shows a conflict. Should I ignore it? Also, v2-1_512-ema-pruned.ckpt marked as a "pickle" not safe?

    • @Isi-uT
      @Isi-uT 11 days ago

      Yes you can ignore it. I see it every time I want to update the uniAnimate nodes. You are right about the v2-1_512-ema-pruned.ckpt not being safe. I took the risk of using it that way, but I guess there should be a way to turn it into a safetensors format before using it.
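      For anyone who wants to try that conversion, here is a rough sketch (untested, and it assumes the safetensors package is installed; note the .ckpt still has to be loaded once with torch.load to do the conversion):

      import torch
      from safetensors.torch import save_file

      # Stable Diffusion checkpoints usually nest the weights under "state_dict".
      ckpt = torch.load("v2-1_512-ema-pruned.ckpt", map_location="cpu")
      state = ckpt.get("state_dict", ckpt)

      # safetensors stores tensors only, so drop non-tensor entries and make the rest contiguous.
      tensors = {k: v.contiguous() for k, v in state.items() if isinstance(v, torch.Tensor)}
      save_file(tensors, "v2-1_512-ema-pruned.safetensors")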

  • @Manchuwook
    @Manchuwook 11 days ago

    How would this do for half-body portraits? I'd like to see something that does idle animations (neutral poses with breathing motions).

    • @Isi-uT
      @Isi-uT 11 days ago

      I will have to do more tests to give an accurate response. I'm currently away from my system. I'll respond within 4 days.

    • @Manchuwook
      @Manchuwook 11 days ago

      @@Isi-uT I'm not in a huge rush, please take your time. This is currently a hobby for me, so I won't demand anything unreasonable.

    • @Isi-uT
      @Isi-uT 11 days ago

      No problem. Cheers!

    • @Isi-uT
      @Isi-uT 7 days ago

      It works great for half-body portraits when there’s noticeable movement in the limbs or changes in body position in the driving video. It’s less effective for idle animations with minimal motions, such as subtle breathing.

    • @Falkonar
      @Falkonar 5 days ago

      Echo-Mimic v2 + sapience. All that magic should be in one place.

  • @williamsunstrider3364
    @williamsunstrider3364 21 days ago

    Hi. Great work! Do you have documentation for your nodes that explains the function of each variable and setting of each node? Maybe I am just not finding it. If you could point me in the correct direction, it would be appreciated.

    • @Isi-uT
      @Isi-uT 21 days ago

      Thank you so much for your comment. I don't currently have comprehensive documentation, but I'll be working on proper documentation in the near future. Here is a brief description of the nodes:
      The 'Align & Generate poses for UniAnimate' node takes in an image and a reference video, and outputs the dwpose (a skeletal representation) of the person in the image plus frames of dwpose images representing the pose sequence of the person in the reference video.
      The 'Animate image with UniAnimate' node receives three inputs (the image and the two outputs from the 'Align & Generate poses for UniAnimate' node) and outputs image frames which show an animation of the input image when connected to a 'Video Combine' node (the documentation for the 'Load Video' and 'Video Combine' nodes can be found in this github repository: github.com/Kosinkadink/ComfyUI-VideoHelperSuite ).
      -> The 'steps' variable determines the number of iterations used during the diffusion process to generate an image from noise. It influences the quality, detail, and speed of the generation. Higher steps result in more detailed image frames but at the cost of longer generation time. Several tests have shown that 30 steps is ideal, but I have gotten good results with 20 steps, depending on the input image and reference video.
      -> The 'useFirstFrame' variable decides whether the first frame of the target pose sequence (from the video) is used to calculate the scale coefficient for aligning the pose sequence with the reference image (True), or whether a random frame from the target pose sequence is used instead (False). Set it to False in cases where the first frame of the reference video does not include the entire face and full-body pose (hands and feet), which are required for more accurate estimation and better video generation results.
      -> The 'frame_interval' variable determines how many frames to skip or use from the input pose sequence. If there are 30 frames and it is set to 1, every frame is used; if set to 2, every second frame is selected, so only 15 frames are used. Higher frame intervals result in faster video generation but less smooth animation.
      -> The 'max_frames' variable sets the total number of frames that will be generated. If the video has 120 frames and max_frames is set to 32, only 32 frames are generated.
      -> The 'resolution_x' variable determines the width and height of each output frame. There are only two options (512 & 768). Selecting 512 results in a 512x768 video, while 768 results in a 768x1216 video, which takes more time to generate and requires more GPU memory, depending on the number of frames.
      If you only want to change the pose of an image using another image as reference, then 'useFirstFrame' should be False, 'frame_interval' should be 1, and 'max_frames' should be 1.
      I later added two more nodes which I introduced in this video: th-cam.com/video/Ne-DSBhfg8A/w-d-xo.html These nodes work directly with the input images and videos and therefore do not need the 'Align & Generate poses for UniAnimate' node.
      The 'Repose image with UniAnimate' node is for changing the pose of an image to that of a reference image, while the 'Animate image with UniAnimate_Long' node is for generating longer videos that maintain the properties of the input image much better than the 'Animate image with UniAnimate' node. The significant parameters in these two nodes that are absent from the previous nodes are:
      -> The 'dontAlignPose' variable determines whether any frame of the target pose sequence (from the video or reference image) is used to calculate the scale coefficient for aligning the pose sequence or reference image with the main image. If set to True, no alignment calculation is done, and the main image (the image you want to repose or animate) has to be manually aligned with the image in the video or reference image before uploading them. Some poses that are difficult to align via the nodes' calculation can be transferred better by adjusting the main image so that the height of the person in it matches the height of the person in the reference video or image. You can always try a generation twice with each setting of this variable to see which result is more desirable.
      -> The 'context_size' variable controls how many frames are processed together, i.e. the number of frames in a context window (the set of frames being processed at any given time). Larger context sizes allow the model to consider more temporal information, improving coherence and consistency between frames.
      -> The 'context_stride' variable determines the spacing between context windows, i.e. the number of frames between the starting points of consecutive context windows. A stride equal to the context size means no frames are reused between windows; a smaller stride produces overlapping windows, which can enhance temporal consistency but increases computational cost.
      -> The 'context_overlap' variable determines how much of a context window overlaps with the next window. Higher values enhance temporal consistency but increase computational cost. context_overlap = context_size - context_stride, so if context_size = 16 and context_stride = 11, the overlap is 5 frames (see the sketch below). context_overlap might look unnecessary since it can be derived from the previous two variables, but I think there was a criterion in the code for deciding which one to use; I can't remember it.
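      A rough sketch for intuition (my own illustration, not the actual node code) of how a frame sequence is carved into overlapping context windows by context_size and context_stride:

      def context_windows(total_frames, context_size, context_stride):
          # List the frame indices covered by each context window.
          windows = []
          start = 0
          while start < total_frames:
              end = min(start + context_size, total_frames)
              windows.append(list(range(start, end)))
              start += context_stride
          return windows

      # context_size=16, context_stride=11 -> consecutive windows share 5 frames (the context_overlap)
      for w in context_windows(32, 16, 11):
          print(f"frames {w[0]}-{w[-1]} ({len(w)} frames)")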

    • @williamsunstrider3364
      @williamsunstrider3364 20 days ago

      @@Isi-uT That was a huge help. Thank you for the information. Great work!

    • @Isi-uT
      @Isi-uT 20 days ago

      Thank you.

  • @strawberryhaze8836
    @strawberryhaze8836 23 days ago

    If only we knew what this is; we don't know what this is.

    • @Isi-uT
      @Isi-uT 22 days ago

      I don't understand your comment. Could you please clarify?

  • @williamsunstrider3364
    @williamsunstrider3364 27 days ago

    I have been having issues getting any of the UniAnimate nodes to work at all. Any help would be appreciated. One of my errors reads as follows: ReposeImage Failed to init class <class 'ComfyUI-UniAnimate-W.tools.modules.clip_embedder.FrozenOpenCLIPTextVisualEmbedder'>, with PytorchStreamReader failed reading zip archive: failed finding central directory

    • @Isi-uT
      @Isi-uT 27 days ago

      This looks either like a corrupted file issue or the models were not properly extracted from the downloaded zip file into the ComfyUI-UniAnimate-W/checkpoints folder. Check your ComfyUI-UniAnimate-W/checkpoints folder and see if you can find the 'open_clip_pytorch_model.bin' model. It is around 3.85GB in size; if it is smaller, you will have to download it again. Also ensure the other models are present and fully downloaded:
      unianimate_16f_32f_non_ema_223000.pth ~ 5.5GB
      v2-1_512-ema-pruned.ckpt ~ 5.1GB
      yolox_l.onnx ~ 211.6MB
      dw-ll_ucoco_384.onnx ~ 131.2MB
      If you downloaded all the models in a zip file, make sure they are extracted before placing them in the ComfyUI-UniAnimate-W/checkpoints folder. Please let me know the outcome.
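      If it helps, here is a small sanity-check sketch (adjust the folder path to your own install; the 5% tolerance is arbitrary) that flags missing or partially downloaded checkpoints using the sizes listed above:

      import os

      checkpoint_dir = r"ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints"  # adjust to your install
      expected_gb = {
          "open_clip_pytorch_model.bin": 3.85,
          "unianimate_16f_32f_non_ema_223000.pth": 5.5,
          "v2-1_512-ema-pruned.ckpt": 5.1,
          "yolox_l.onnx": 0.21,
          "dw-ll_ucoco_384.onnx": 0.13,
      }

      for name, expected in expected_gb.items():
          path = os.path.join(checkpoint_dir, name)
          if not os.path.isfile(path):
              print(f"MISSING   : {name}")
              continue
          size_gb = os.path.getsize(path) / (1024 ** 3)
          status = "OK" if size_gb >= expected * 0.95 else "TOO SMALL (re-download)"
          print(f"{status:<10}: {name} ({size_gb:.2f} GB)")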

    • @williamsunstrider3364
      @williamsunstrider3364 27 days ago

      @@Isi-uT Yes. I suspected that as well. I have downloaded them from the link you provided several times now. Could the source files be corrupted?

    • @williamsunstrider3364
      @williamsunstrider3364 27 days ago

      @@Isi-uT You were correct. It was the 'open_clip_pytorch_model.bin' file. I downloaded it manually and it got past that step. Working on the next error now.

    • @Isi-uT
      @Isi-uT 26 days ago

      Thanks for the feedback.

  • @michalgonda7301
    @michalgonda7301 29 days ago

    Heyo, thanks for your video :) I would like to ask you something... I believe you know how to build ComfyUI nodes, right? There is one thing called OCIO (OpenColorIO) which can change color spaces and is used in Nuke etc... I would like to ask if there is a way to make it work in a ComfyUI node. In ComfyUI you are mostly passing PIL images in sRGB, so no HDR is preserved. There are nodes that support the EXR format, which can be passed. I wonder if it's possible to import OCIO into a node and change the color space from, let's say, sRGB to ACEScg. I tried to make a node and maybe I'm halfway there :D but I'm kind of a noob at setting up the environment, code, etc.

    • @Isi-uT
      @Isi-uT 28 days ago

      Thank you :) I am not familiar with OCIO OpenColorIO. I will look into it and see if it can be implemented in comfyui. I will give you a reply as soon as I fully understand the requirements.

    • @Isi-uT
      @Isi-uT 27 days ago

      I think it's possible to import OCIO in a node and change from one color space to another in ComfyUI. Since there are nodes to upload and save in EXR format, all that is needed is to create a node that receives the image tensor from the Load EXR node, transforms the image to another color space depending on user input, and outputs the resulting image as a tensor to the Save EXR node. You can try implementing code similar to what is shown below. I generated it using ChatGPT and I have not tested it, but it looks reasonable. First install OCIO in your environment (e.g. pip install opencolorio), then:

      import torch
      import OpenColorIO as ocio

      class TransformColorSpace:
          @classmethod
          def INPUT_TYPES(s):
              return {
                  "required": {
                      "image": ("IMAGE",),
                      "config_filepath": ("STRING", {"default": "/path/to/config.ocio"}),
                      "input_space": ("STRING", {"default": "sRGB"}),
                      "output_space": ("STRING", {"default": "ACEScg"}),
                  }
              }

          CATEGORY = "color"
          RETURN_TYPES = ("IMAGE",)
          RETURN_NAMES = ("transformed_image",)
          FUNCTION = "transform"

          def transform(self, image, config_filepath, input_space, output_space):
              # Load the specified OCIO configuration
              config = ocio.Config.CreateFromFile(config_filepath)
              ocio.SetCurrentConfig(config)
              # Create a processor for the input and output spaces
              processor = config.getProcessor(input_space, output_space)
              # Convert image to NumPy for processing
              numpy_image = image.squeeze().numpy()
              transformed_image = processor.applyRGB(numpy_image)
              # Convert back to a Torch tensor
              transformed_image_tensor = torch.unsqueeze(torch.from_numpy(transformed_image), 0)
              return (transformed_image_tensor,)

      You can modify the code to provide selection options for the input and output spaces: input color space (e.g., sRGB, ACES2065-1, Rec.709) and output color space (e.g., ACEScg, DCI-P3, Linear). You can refer to the OCIO documentation for how to get the required config.ocio file. All the best.

    • @michalgonda7301
      @michalgonda7301 25 days ago

      ​@@Isi-uT thank you! :) Im working on fix now but I did get further :) ...I at least manage to implement Ocio into comfy :)

    • @Isi-uT
      @Isi-uT 24 days ago

      @@michalgonda7301 Thanks. Glad to hear you made progress.

  • @Nibot2023
    @Nibot2023 1 month ago

    I got this error. "Failed to init class <class 'ComfyUI-UniAnimate-W.tools.modules.autoencoder.AutoencoderKL'>, with Could not find module 'F:\ComfyUI_Ai\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torchaudio\lib\libtorchaudio.pyd' (or one of its dependencies). Try using the full path with constructor syntax."

    • @Isi-uT
      @Isi-uT 1 month ago

      You can check if 'libtorchaudio.pyd' exists in this folder:
      F:\ComfyUI_Ai\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torchaudio\lib\
      If not, then uninstall and reinstall torchaudio in your python_embeded environment using your CLI as shown below:
      F:\ComfyUI_Ai\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded>python.exe -m pip install torchaudio --force-reinstall
      If you do find 'libtorchaudio.pyd', then it might be a compatibility issue. Check your torch, torchaudio, and CUDA versions and use the following link to check if they are compatible: pytorch.org/get-started/previous-versions/ If they are not, install a torchaudio version that is compatible with your torch and CUDA versions based on the information in the above link. Please let me know the outcome.
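      A quick way to print the versions in question before checking them against pytorch.org/get-started/previous-versions/ (plain Python, nothing project-specific):

      import torch
      import torchaudio

      print("torch      :", torch.__version__)
      print("torchaudio :", torchaudio.__version__)
      print("CUDA build :", torch.version.cuda)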

    • @Nibot2023
      @Nibot2023 1 month ago

      @@Isi-uT Just seeing your message now. Thank you for taking the time to answer. I checked the path and it is there. I will check if they are compatible. I will let you know if it works. Thanks!

    • @Isi-uT
      @Isi-uT 1 month ago

      You're welcome. Looking forward to the outcome.

  • @sope5488
    @sope5488 1 month ago

    If you made this it’s impressive

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you. I used a comfyUI workflow to convert a video to this comic-style animation.

  • @sudabadri7051
    @sudabadri7051 1 month ago

    Very cool

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you.

  • @ChikadorangFrog
    @ChikadorangFrog 1 month ago

    liveportrait is dead once this is released

    • @Isi-uT
      @Isi-uT 1 month ago

      Totally agree.

  • @CyberMonk_36999
    @CyberMonk_36999 1 month ago

    cool!

    • @Isi-uT
      @Isi-uT 1 month ago

      Thank you.

  • @tanishqchahal9166
    @tanishqchahal9166 1 month ago

    Heyy, got this error while reposing an image: "Failed to init class <class 'ComfyUI-UniAnimate-W.tools.modules.autoencoder.AutoencoderKL'>, with /usr/local/lib/python3.10/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZNK3c105Error4whatEv". Please help!!

    • @Isi-uT
      @Isi-uT 1 month ago

      It seems the torch & torchaudio in your comfyui environment are not compatible. You can check the versions of both libraries and confirm if they are compatible by visiting this pytorch site: pytorch.org/get-started/previous-versions/ Also note that this project was tested successfully with pytorch versions 2.0.1 & 2.3.1 with compatible torchvision and torchaudio libraries. I don't know if other versions work well.

  • @stoutimonstimulations
    @stoutimonstimulations 1 month ago

    how do I get these nodes?

    • @Isi-uT
      @Isi-uT 1 month ago

      You can get them from this github repository: github.com/Isi-dev/ComfyUI-Animation_Nodes_and_Workflows

  • @alfredfarr275
    @alfredfarr275 2 months ago

    You did it correctly

    • @Isi-uT
      @Isi-uT 2 months ago

      Thanks!

  • @IcebergLokaz
    @IcebergLokaz 2 months ago

    Try it without the bg

    • @Isi-uT
      @Isi-uT 2 months ago

      I did, but it looked better with the bg.

  • @animatedstoriesandpoems
    @animatedstoriesandpoems 2 months ago

    👍

    • @Isi-uT
      @Isi-uT 2 months ago

      Thank you.

    • @animatedstoriesandpoems
      @animatedstoriesandpoems 2 months ago

      @@Isi-uT Buddy... nothing appears at the Video Combine node... any suggestions??

    • @Isi-uT
      @Isi-uT 2 months ago

      You can check your CLI for any error.

  • @LoopnMix
    @LoopnMix 2 months ago

    Awesome tutorial, got both up and running. Thanks!

    • @Isi-uT
      @Isi-uT 2 months ago

      Thank you.

  • @toptipss99
    @toptipss99 2 months ago

    When running I get this error: use_libuv was requested but PyTorch was built without libuv support. I hope the author can help me figure out how to fix it. Thank you

    • @Isi-uT
      @Isi-uT 2 months ago

      I thought I had updated the code to handle this error. Did you install the nodes within the last 2 days?

    • @Isi-uT
      @Isi-uT 2 months ago

      If you did not install the nodes within the last 2 days, you can try updating the nodes.

  • @OverWheelsRJ
    @OverWheelsRJ 2 months ago

    Hi! What is the shortcut to "search" to add nodes?

    • @Isi-uT
      @Isi-uT 2 months ago

      UniAnimate Nodes for ComfyUI

  • @petEdit-h9l
    @petEdit-h9l 2 months ago

    Can it do more moves, like bending or turning to the back?

    • @Isi-uT
      @Isi-uT 2 months ago

      Yes it can. I have done a 360 before, which was great for guys, but long-haired ladies tend to lose part of their back hair. As for bending, it's great for male characters, but it doesn't handle boobs well. I will upload my tests soon.

  • @maikelkat1726
    @maikelkat1726 2 months ago

    Hey there, nice and simple setup, but the img2lineart assistant does not work? It hangs all the time without any output. The other nodes run fine and fast... What could be the issue? No logging info is available... Hope to hear from you.

    • @maikelkat1726
      @maikelkat1726 2 months ago

      Somehow it works when creating a new workflow and not using the one provided... both video and image... and fast! Great.

    • @Isi-uT
      @Isi-uT 2 months ago

      Good to know it now works. The img2lineart makes more computational demands than the other nodes when the deep_clean_up option is greater than zero. But setting it to zero results in lots of noise in the output. The alternative is to reduce the value of the details which, depending on the input image, could make most of the lines disappear.

  • @petEdit-h9l
    @petEdit-h9l 3 months ago

    Hi, is this better than MimicMotion?

    • @Isi-uT
      @Isi-uT 3 months ago

      I really can't say if it is better in terms of output quality because I haven't tested mimic motion. I actually came across the paper on mimic motion before hearing of unianimate, but the VRAM requirements of mimic motion put me off. I don't know if any improvement has been made in that area since then. Based on what I have read, unianimate can animate images faster and can handle far longer video frames than mimic motion.

  • @TheOneWithFriend
    @TheOneWithFriend 3 months ago

    Guys, if someone needed to make manga with ComfyUI, what workflow should they use? To capture the characters separately?

    • @Isi-uT
      @Isi-uT 3 months ago

      I suggest you watch the following video by Mickmumpitz since I haven't done much related to your questions. He shared some workflows which might help you get started: th-cam.com/video/mEn3CYU7s_A/w-d-xo.html

  • @벤치마킹-f1z
    @벤치마킹-f1z 3 months ago

    hi. Where is the workflow?

    • @Isi-uT
      @Isi-uT 3 months ago

      You can get the workflows from this github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W You will find two json files at the root of the repository: UniAnimateImg2Vid.json & uniAnimateReposeImg.json You can also find the workflows for the new nodes in the newWorkflows folder.

  • @DuCaffeine
    @DuCaffeine 3 months ago

    My friend, is Comfyui running on the cloud the same as running on my local PC, meaning it has the same features such as uploading any model, using any model, adding, removing, modifying, creating models, and many more features? Is this true??

    • @Isi-uT
      @Isi-uT 3 months ago

      I have not yet used comfyui on the cloud, but based on what I have read & heard, it is generally true that you can do the things you mentioned. You can do as you like with models in colab since it uses your Google drive for storage, although I don't think you can run comfyui on colab with a free account. Other cloud services like comfy.icu & runComfy.com provide lots of models, workflows, and technical assistance that appear to make it easier to run comfyui than doing so locally. I don't know if there's provision to upload and modify models with these other platforms.

  • @darshannilankar586
    @darshannilankar586 3 months ago

    Can you provide me with settings for rendering a 20-second video in HD quality?

    • @Isi-uT
      @Isi-uT 3 months ago

      You should be able to render a video up to 20 sec if you have a high VRAM. The highest I have done is 4 sec. Someone mentioned rendering up to 370 frames which is a little above 12 sec for a 30fps video. The video quality depends on the inputs and the seed. The team behind the original project suggested using a seed of 7 or 11 in their project page. You have to keep experimenting with different seeds, and upscaling vids and images to find out what works best.

    • @darshannilankar586
      @darshannilankar586 3 months ago

      Thanks, it was helpful. I'll try it and respond when I get the desired result.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 3 months ago

    Can I use a different checkpoint like DreamShaper SD1.5 in place of v2-1_512-ema-pruned.ckpt? Can this help with low VRAM? I have 4 GB of VRAM.

    • @Isi-uT
      @Isi-uT 3 months ago

      I really don't know because I haven't tried it. All I did was create a comfyui wrapper for the original project, so I don't know much about the unified model.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 3 months ago

    I installed it; poses are detected and aligned, BUT I have only 4 GB of VRAM, so it shows an out-of-memory error. Is there a setting I can change in the config file? At least reposing a single frame should work.

  • @uljanafil
    @uljanafil 3 months ago

    @Isi-uT, I have this error (((( UniAnimateImageLong: Unknown error

    • @Isi-uT
      @Isi-uT 3 months ago

      Please can you post the full error message so we can see if we can resolve it.

  • @buike9306
    @buike9306 3 months ago

    "Can I add a webcam node instead of load a video node?

    • @Isi-uT
      @Isi-uT 3 months ago

      Interesting! I never considered that. I think you should be able to use a Webcam node, although I'm not familiar with any.

    • @buike9306
      @buike9306 3 months ago

      @@Isi-uT I will give it a try and see if it works.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 3 months ago

    When I installed the custom node, IT MESSED UP MY PYTORCH VERSION... Can you please help me get it working on my existing ComfyUI? This is my system now: Total VRAM 4096 MB, total RAM 23903 MB, pytorch version: 2.3.1+cu121, xformers version: 0.0.27

    • @Isi-uT
      @Isi-uT 3 months ago

      The xformers requirement makes the installation quite difficult, and it took me some time to get it working. Can you check for any error in your CLI?

    • @ParthKakarwar-b7j
      @ParthKakarwar-b7j 3 months ago

      @@Isi-uT Actually, when I installed the UniAnimate custom node it reinstalled PyTorch to another version and my ComfyUI stopped working. I then deleted the UniAnimate custom node and reinstalled pytorch version 2.3.1+cu121, and ComfyUI started working again. BUT now I am not sure about installing the custom node again, as I fear it will mess up my PyTorch again. Can you help, please? Thanks for replying.

    • @uljanafil
      @uljanafil 3 months ago

      @@ParthKakarwar-b7j it's my problem too

    • @Isi-uT
      @Isi-uT 3 months ago

      The pytorch version in the requirements.txt file in the Unianimate custom node is 2.3.1 which is the same as what you currently have, so I am quite surprised that it would install another pytorch version. The only other thing is to ensure that the xformers version is 0.0.27.

    • @Isi-uT
      @Isi-uT 3 months ago

      An alternative is to have another comfyUI installed for unianimate to avoid dependency conflicts with other custom nodes. That's what I usually do for new custom nodes with requirements that conflict with the ones I already have.

  • @ParthKakarwar-b7j
    @ParthKakarwar-b7j 3 months ago

    When I installed the custom node, IT MESSED UP MY PYTORCH VERSION... Can you please help me get it working on my existing ComfyUI? This is my system now: Total VRAM 4096 MB, total RAM 23903 MB, pytorch version: 2.3.1+cu121, xformers version: 0.0.27

    • @Isi-uT
      @Isi-uT 3 months ago

      Your setup looks okay except the VRAM. I have only tested it with 12GB VRAM and sometimes my system struggles depending on the input.

    • @ParthKakarwar-b7j
      @ParthKakarwar-b7j 3 months ago

      @@Isi-uT Can't I at least use the reposer with 4 GB of VRAM...? And when I install the UniAnimate custom node, it reinstalls PyTorch to a new version and my ComfyUI doesn't work. Can you help me with this? Thanks for your reply.

    • @Isi-uT
      @Isi-uT 3 months ago

      The repose workflow might work. Considering your high RAM, I guess your shared VRAM is quite high. If setting up the node is interfering with the normal operation of your current Comfyui, then I suggest you install a separate Comfyui for unianimate so that you can focus on troubleshooting a specific installation. Sometimes I spend a whole day trying to make a workflow that worked in comfyui a month ago work again. The dependency conflicts among custom nodes and those introduced by updates is a serious issue.

  • @hawa11sfinest
    @hawa11sfinest 3 months ago

    God is good keep seeking the truth with an open heart and you will find it in Jesus

    • @Isi-uT
      @Isi-uT 3 months ago

      Thank you. I will keep seeking.

  • @Isi-uT
    @Isi-uT 3 months ago

    Please join the challenge; let's learn from each other and create a better world for all.

  • @DuCaffeine
    @DuCaffeine 3 months ago

    Give me the name of the form or workflow please.

    • @Isi-uT
      @Isi-uT 3 months ago

      The new workflows are : image2VidLong.json & reposeImgNew.json You can find both workflows in the newWorkflows folder in this github repository: github.com/Isi-dev/ComfyUI-UniAnimate-W

  • @DuCaffeine
    @DuCaffeine 3 months ago

    Brother, can it be run on Google Colab?

    • @Isi-uT
      @Isi-uT 3 months ago

      There's an implementation by someone else on Google Colab, although I haven't tried it. You can see this github repository: github.com/camenduru/UniAnimate-jupyter

  • @williamlocke6811
    @williamlocke6811 3 months ago

    "The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4" So at first I was getting the above error but after some tinkering I found that if I set resolution_x to 768 the video will then render. But at 512 I get the above error. Is that something you can easily fix with an update to your node? Or maybe there is something I can do? The problem now is that when I was using your older node, I could get about 120 frames at 512. This was too short for my project. I become very excited to see you make a node that could render longer videos. But now at 768, that takes up a lot more VRAM so I can only get 100 frames (23.4GB of VRAM). Can't go much higher without OOM errors. So, at least for me, your new LONG node is producing shorter videos than your older node ;) I really hope there is an easy fix :) Anything you can think of to reduce the amount of VRAM needed would be VERY helpful.

    • @Isi-uT
      @Isi-uT 3 months ago

      I haven't come across this error. Please can you show the full error so I can see which part of the code threw it, and also the sizes of the picture and video, so that I can try to reproduce it.

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT Sure. I've used a variety of resolution videos as the input, all portrait orientation. I started with 720x1080 and resized it down to 660x512 to see if that would help, but it didn't. I also tried different sized images, so that's not it. Here's what the CLI says...

      2024-09-14 19:33:07,823 - dw pose extraction - INFO - All frames have been processed.
      32 Ready for inference.
      Running UniAnimate inference on gpu (0)
      Loaded ViT-H-14 model config.
      Loading pretrained ViT-H-14 weights (X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/open_clip_pytorch_model.bin).
      X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py:5504: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
        attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
      Restored from X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/v2-1_512-ema-pruned.ckpt
      Load model from (X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\checkpoints/unianimate_16f_32f_non_ema_223000.pth) with status (<All keys matched successfully>)
      Avoiding DistributedDataParallel to reduce memory usage
      Seed: 30
      end_frame is (32)
      Number of frames to denoise: 32
      0%| | 0/25 [00:00<?, ?it/s]
      !!! Exception during processing !!! The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4
      Traceback (most recent call last):
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
          output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
          return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
          process_inputs(input_dict, i)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
          results.append(getattr(obj, func)(**inputs))
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\uniAnimate_Inference.py", line 128, in process
          frames = inference_unianimate_long_entrance(seed, steps, useFirstFrame, image, refPose, pose_sequence, frame_interval, context_size, context_stride, context_overlap, max_frames, resolution, cfg_update=cfg_update.cfg_dict)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\inferences\inference_unianimate_long_entrance.py", line 76, in inference_unianimate_long_entrance
          return worker(0, seed, steps, useFirstFrame, reference_image, refPose, pose_sequence, frame_interval, context_size, context_stride, context_overlap, max_frames, resolution, cfg, cfg_update)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\inferences\inference_unianimate_long_entrance.py", line 467, in worker
          video_data = diffusion.ddim_sample_loop(
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
          return func(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 825, in ddim_sample_loop
          xt, _ = self.ddim_sample(xt, t, model, model_kwargs, clamp, percentile, condition_fn, guide_scale, ddim_timesteps, eta, context_size=context_size, context_stride=context_stride, context_overlap=context_overlap, context_batch_size=context_batch_size)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
          return func(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 787, in ddim_sample
          _, _, _, x0 = self.p_mean_variance(xt, t, model, model_kwargs, clamp, percentile, guide_scale, context_size, context_stride, context_overlap, context_batch_size)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\diffusions\diffusion_ddim.py", line 719, in p_mean_variance
          y_out = model(latent_model_input, self._scale_timesteps(t).repeat(bs_context), **model_kwargs_new[0])
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
          return self._call_impl(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
          return forward_call(*args, **kwargs)
        File "X:\AI\ComfyUI3\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W\tools\modules\unet\unet_unianimate.py", line 522, in forward
          concat = concat + misc_dropout(dwpose)
      RuntimeError: The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 4
      Prompt executed in 41.08 seconds

    • @Isi-uT
      @Isi-uT 3 months ago

      I see, thanks. I will look into it.

    • @Isi-uT
      @Isi-uT 3 months ago

      The UNET model was getting the default resolution from the config file, which could sometimes be different from the resolution used by the noise. I have updated the code to prevent the error. All you need to do is add:
      cfg.resolution = resolution
      at line 240 in tools/inferences/inference_unianimate_long_entrance.py and at line 234 in tools/inferences/inference_unianimate_entrance.py. Please let me know how it goes.
      As for the VRAM requirement, I can't think of any other way to reduce it. The inference initially required at least 22GB of VRAM to run, but that was reduced to around 10GB by moving the clip_embedder and autoencoder computations to the CPU. The only advantage I know of for using the long version is that it maintains the appearance of the output more consistently.
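      For intuition (my note, not code from the repo): SD-style VAEs downsample by a factor of 8, so the 64 and 96 in the error message are just the latent widths of the 512 and 768 settings, which is why a disagreement between the config default and the selected resolution shows up as this mismatch:

      # 512 -> 64 and 768 -> 96 in latent space (downsample factor of 8)
      for width in (512, 768):
          print(width, "->", width // 8)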

    • @williamlocke6811
      @williamlocke6811 3 months ago

      @@Isi-uT YES! That worked perfectly and the results are awesome lol! Am now able to create MUCH longer videos. Just made one with 370 frames, and maybe I can do longer! Thanks so much for your nodes, your help and your hard work :)

  • @DuCaffeine
    @DuCaffeine 3 months ago

    Brother, I watched your channel and I have two questions. First, can I upload any video of mine that has a certain movement and have it applied to a picture in a very professional manner? Second, please give me a way to install it, brother. By the way, I am a new subscriber. Please, brother, take care of me 💯

    • @Isi-uT
      @Isi-uT 3 months ago

      Yes, you can upload any video and the movement will be transferred to the picture, but I cannot guarantee that it will be very professional. Sometimes, extra editing might be needed. Please note that this implementation is for the Windows OS. You can watch a video on the installation here: th-cam.com/video/NFnhELV4bG0/w-d-xo.html Or you can install the custom nodes with the ComfyUI Manager by searching for: ComfyUI-UniAnimate-W You can download the required models (about 14GB) from huggingface.co/camenduru/unianimate/tree/main and place them in '\custom_nodes\ComfyUI-UniAnimate-W-main\checkpoints' folder. In case you haven't done so, you can download comfyUI from this link: www.comfy.org/