AnimateDiff ControlNet Animation v2.1 [ComfyUI]

  • Published on Dec 31, 2023
  • Convert any video into any other style using ComfyUI and AnimateDiff.
    This video is for version 2.1 of the AnimateDiff ControlNet Animation workflow.
    Rendered Video Link: • You and I - Katy Perry...
    Workflow Download Links:
    1) Documented Tutorial + Workflows : / update-v2-1-lcm-95056616
    2) Google Drive Link : drive.google.com/drive/folder...
    My Discord Server : / discord
    Links Shown During Video :
    COMFY MANAGER:
    github.com/ltdrdata/ComfyUI-M...
    EFFICIENCY NODES v1.92:
    civitai.com/models/32342
    CONTROLNET MODELS:
    huggingface.co/lllyasviel/Con...
    CHECKPOINT MODELS, LORAS and VAE
    civitai.com/models
    LCM LORA
    huggingface.co/latent-consist...
    Animate Diff Motion Module
    1) civitai.com/models/139237
    2) huggingface.co/CiaraRowles/Te...
    -------------------------------------------------------------------------------------------------
    Music Used :
    N3X - Tell Me (freetouse.com)
    Damtaro - Far Away (freetouse.com)
    Markvard - Falling for You (freetouse.com)
    Stream Robin Hustin X Tobimorrow - Light It Up (feat. Jex) (SoundCloud)
    Wiguez, Rico 56 - Gone [NCS]
    -----------------------------------------------------------------
    SEO:
    Animatediff control net
    Animatediff animation
    Stable Diffusion animation
    comfyui animation
    animatediff webui
    animatediff controlnet
    animatediff github
    animatediff stable diffusion
    Controlnet animation
    how to use animatediff
    animation with animate diff comfyui
    how to animate in comfy ui
    animatediff prompt travel
    animate diff prompt travel cli
    prompt travel stable diffusion
    animatediff comfyui video to video
    animatediff comfyui google colab
    animatediff comfyui tutorial
    animatediff comfyui install
    animatediff comfyui img2img
    animatediff vid2vid comfyui
    comfyui-animatediff-evolved
    animatediff controlnet animation in comfyui
    katy perry stylization katy perry you and I katy fan art
    flicker free, non flicker, comfy animation, ai animation
    stable diffusion video
    stable diffusion animation
    warp fusion
    -------------------------------------------------------------------
  • Entertainment

Comments • 297

  • @bolbolzaboon
    @bolbolzaboon 6 หลายเดือนก่อน +8

    This is by far the best tutorial I have seen on AnimateDiff. Awesome job!

  • @grafik_elefant
    @grafik_elefant 4 หลายเดือนก่อน +3

    That was the most unproblematic install from a YouTube tutorial I've ever had in ComfyUI. Thank you very much! 👏👏👏

  • @GuyXotic
    @GuyXotic 2 หลายเดือนก่อน +1

    This is the best workflow I've ever seen because its consistency is maxed out. Thank you so much for the tutorial, because it's very complex.

  • @Bicyclesidewalk
    @Bicyclesidewalk 4 หลายเดือนก่อน +2

    Superb. Thank you for making this. Thank you for not locking it behind some paywall. You are the man, thanks!

  • @paopaoAnime
    @paopaoAnime หลายเดือนก่อน

    Even though I'm just a beginner, the workflow runs smoothly. Thank you very much for your tutorial.

  • @kixa5543
    @kixa5543 6 หลายเดือนก่อน

    This tutorial is very helpful, thank you very much Jerry!!

  • @pinguz
    @pinguz 2 หลายเดือนก่อน

    You're really awesome, thank you for sharing!
    You are amazing! Thanks for your workflows!

  • @variouslightning
    @variouslightning 5 หลายเดือนก่อน

    Mate, this is genius 👑

  • @hhkl3bhhksm466
    @hhkl3bhhksm466 5 หลายเดือนก่อน

    Excellent tutorial my friend, subscribed

  • @jinsen69
    @jinsen69 5 หลายเดือนก่อน +2

    Best tutorial and best consistency I've been using this. Thank you for sharing your tutorials and workflows.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      You're very welcome!

  • @aitornado-pu6xh
    @aitornado-pu6xh 5 หลายเดือนก่อน +1

    Thank you so much. Thanks to you, I was able to get one step closer to the kind of video I'm aiming for.
    I naturally ended up subscribing and turning on notifications.
    Occasionally various error messages popped up and things didn't work, but after rewatching the video and understanding it, I was able to solve the problems.
    Thank you.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Thank you

  • @marcelo_coutinho
    @marcelo_coutinho 3 หลายเดือนก่อน

    This is pure gold! 👑

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      ❤️

  • @Vengar8
    @Vengar8 6 หลายเดือนก่อน

    amazing tutorial, thank you for sharing!

  • @liuvision5109
    @liuvision5109 5 หลายเดือนก่อน

    Great job!

  • @hemanthdonga1757
    @hemanthdonga1757 5 หลายเดือนก่อน

    Thanks for making this tut man
    🤗🤗

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      Welcome 😊

  • @pinksoy_
    @pinksoy_ 4 หลายเดือนก่อน +1

    Thank you.

  • @androidgg5854
    @androidgg5854 5 หลายเดือนก่อน +1

    Bro, I really want to thank you for your work, it helps a lot. By far the best tutorial that I've found!

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      thank you

    • @grafik_elefant
      @grafik_elefant 4 หลายเดือนก่อน

      i agree with this statement 100% 👍

  • @CosmicFoundry
    @CosmicFoundry 6 หลายเดือนก่อน

    great stuff!

  • @useful2021
    @useful2021 5 หลายเดือนก่อน

    This is wonderful, thank you very much.

  • @JJCAFE_RAinly
    @JJCAFE_RAinly 5 หลายเดือนก่อน

    wonderful work!~~~

  • @shiccup
    @shiccup 5 หลายเดือนก่อน

    This is amazing

  • @bipinpeter7820
    @bipinpeter7820 หลายเดือนก่อน

    super cool!

  • @estebanmoraga3126
    @estebanmoraga3126 19 วันที่ผ่านมา +1

    Great tutorial, thanks for this! Question though: is there a way to feed it an image to be animated like the source video? Like, say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing and have Comfy replace that person with the character? Or does AnimateDiff only work through prompts at the moment?

  • @johnpope1473
    @johnpope1473 3 หลายเดือนก่อน

    amazing - nice effort.

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      Thanks a lot

  • @coffee80119
    @coffee80119 4 หลายเดือนก่อน

    Great video, i just started last week with AI and image generation so this is way out of my league for now, but i like watching it

  • @euehckxoehxiwk3
    @euehckxoehxiwk3 3 หลายเดือนก่อน

    thank youuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu , It helped me a lot in studying. I look forward to other videos, too

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      You're welcome

  • @kunhuang2189
    @kunhuang2189 3 หลายเดือนก่อน

    Thanks!

  • @carsoncarr578
    @carsoncarr578 หลายเดือนก่อน

    dope!

  • @chaos_artifical_intelligence
    @chaos_artifical_intelligence 5 หลายเดือนก่อน

    You are amazing🥰

  • @ohheyvoid
    @ohheyvoid 5 หลายเดือนก่อน

    very cool

  • @user-xz6zo7nn1t
    @user-xz6zo7nn1t 4 หลายเดือนก่อน

    Thank you very, very much for your videos. I have learned a lot and successfully created a video like yours. Thank you for your selfless sharing. You are a great promoter of AI.

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      Glad it helped!

    • @user-xz6zo7nn1t
      @user-xz6zo7nn1t 4 หลายเดือนก่อน

      @@jerrydavos If I add you as a member, will I be able to view member videos on TH-cam?

  • @user-fw8kd7zr1y
    @user-fw8kd7zr1y 6 หลายเดือนก่อน

    Great JoB!!!

  • @AIcecreamsoda
    @AIcecreamsoda 5 หลายเดือนก่อน

    Thank you very much ❤❤🙏🙏🔥🔥

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      You're welcome 😊

  • @kinozadelo
    @kinozadelo 6 หลายเดือนก่อน +1

    Thanks for the tutorial! I am curious what GPU do you have? Very fast rendering, even at places where you didn't speed up the source screen recording 😮

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      RTX 3070 TI 8GB Laptop GPU and 32 GB CPU Ram

  • @RKosmik
    @RKosmik 25 วันที่ผ่านมา

    I've used many other workflows and this one is by far one of the best one it comes to consistency. I do have a question. Most of my renders colors are crazy especially the background. Anyway to make the colors stay more consistent? Is it a model issue? Prompt? Sampler or CFG? Thanks! Amazing work!

    • @jerrydavos
      @jerrydavos  23 วันที่ผ่านมา

      I found out LCM is a downgrade... version 4 gives the best results for the renders. You can try v4 with the default settings and improvise from there.
      Use models trained in mid-2023 or later, which are the most compatible with AnimateDiff.
      Otherwise, Euler_A with the normal scheduler should give the best output.
      Use a CFG between 5 and 7.
      For consistency, use smaller batches and character LoRAs with proper prompts.
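      For reference, here is a minimal sketch of those suggested settings expressed as the inputs of a ComfyUI KSampler node; the seed and step count are placeholder assumptions, not values from the author's workflow.

```python
# Hypothetical KSampler inputs reflecting the advice above (not the author's
# exact workflow values): Euler Ancestral + "normal" scheduler, CFG within 5-7.
ksampler_inputs = {
    "seed": 0,                          # any fixed seed; placeholder
    "steps": 25,                        # assumption, not specified above
    "cfg": 6.0,                         # within the suggested 5-7 range
    "sampler_name": "euler_ancestral",  # "Euler_A"
    "scheduler": "normal",
    "denoise": 1.0,
}
```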

  • @mtyt9551
    @mtyt9551 6 หลายเดือนก่อน +1

    Very nice tutorial!
    Do you think there is any way to decrease the "randomness" between the frames, and make it seem like a more continuous video?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      it might be possible in the near future... I am also experimenting on consistency of frames....

    • @chillsoft
      @chillsoft 6 หลายเดือนก่อน +1

      You seem new to the game :P What you are describing is the literal problem with diffusion techniques! :D

  • @edsonjr-dev
    @edsonjr-dev 6 หลายเดือนก่อน +1

    💜💜💜

  • @NikitinaYulia
    @NikitinaYulia 3 หลายเดือนก่อน

    Thank you so much 👏🏼 One question. Is it possible to make a 15-minute video like this? Or is it only suitable for short videos of a few seconds? Thank you in advance

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      Yes, you can render any video length with the batch workflow: you render small segments of the video at a time, so you can use multiple small batches to render a long video.

  • @alexsuzuki4027
    @alexsuzuki4027 3 หลายเดือนก่อน

    awesome video. This is the most promising workflow I've seen, but I'm running into some interesting issues. Any ideas why changing the batch range alters my outputs so significantly? When i render a batch of 10 I can get some awesome vibrant results, but when I render 100 with no settings changed all of the frames are simplified and dull.

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน +1

      Hey, make sure you use the latest version 3 workflows: drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ
      You can also try changing the AnimateDiff motion module to something other than TemporalDiff, like the new AnimateDiff motion module, or this one: civitai.com/models/139237?modelVersionId=154097
      TemporalDiff gives faded, yellow-tinted results.
      Also avoid using LCM; it gives faded results in many cases.

  • @edsonjr-dev
    @edsonjr-dev 5 หลายเดือนก่อน +1

    Man this is really cool, is it possible to change the character's clothes and the background for example? without the character having the same characteristics as the reference?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +2

      Yes, I am testing stuff out with masking, imgur.com/a/oFGAR33, clothing change might be also possible.

    • @edsonjr-dev
      @edsonjr-dev 5 หลายเดือนก่อน +1

      ​@@jerrydavos This was amazing and very satisfying result 💜💜💜

  • @AnotherComment-rl6fv
    @AnotherComment-rl6fv 5 หลายเดือนก่อน +5

    On the 1_0 auto and manual JSON I get "When loading the graph, the following node types were not found: CeilNode". Do you know how to fix that?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +4

      Install it from here : github.com/aria1th/ComfyUI-LogicUtils
      Manager skips this one :/

  • @GomezBro
    @GomezBro 6 หลายเดือนก่อน

    This is insane 😨

  • @vedanthbora3106
    @vedanthbora3106 3 หลายเดือนก่อน

    Hey, thanks for the video. I have a question
    1. In the first step we input the video and generate the frames and ControlNet outputs.
    2. We then use all frames + the ControlNet outputs from step 1 to generate the images. Now in the 2nd step, will the batch size be the total number of frames, or frames + the images of both ControlNets?

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      only original frame's count.

  • @Chinese_animation_
    @Chinese_animation_ 6 หลายเดือนก่อน

    good!

  • @realisteachno
    @realisteachno หลายเดือนก่อน +2

    Thank you greatly for your tutorial!
    I get the following error when I queue the prompt in ControlNet Passes, with no quotes in the Input Video Path. Please help!
    "Prompt outputs failed validation: Failed to convert an input value to an INT value: quality, false, invalid literal for int() with base 10: 'false'"

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      Hey, update your nodes and also update ComfyUI, and use the latest CN v4 export version.
      The error should go away.

    • @ryantengco4606
      @ryantengco4606 หลายเดือนก่อน

      @@jerrydavos Tried your suggestions, however still running into the same error.

  • @Racife
    @Racife 2 หลายเดือนก่อน +1

    Thank you for your video, subbed!
    I'm having trouble at 7:15 i don't have the pop up list of controlnet models like you do, am i missing a node?
    edit - fixed my own problem, googled the control net model file i was missing and downloaded all the pth files from that github link

  • @fabianmosele2321
    @fabianmosele2321 4 หลายเดือนก่อน +1

    this is giving me so 2008 vibes

  • @s0202512
    @s0202512 5 หลายเดือนก่อน

    Thanks for this amazing video, it helps me a lot. May I ask why my images are not continuous between different generation batches? Let's say I have a video with 400 frames and I separate it into 2 batches of 200 frames; my 2nd batch is not continuous with the 1st batch. May I know how to fix it?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Yeah, it's one of the cons of AnimateDiff, it cannot do long videos in one go... you'd need an expensive PC... So instead, in the workflow, you can work in batches and try the overlapping technique. Here: th-cam.com/video/aysg2vFFO9g/w-d-xo.htmlsi=jUGNyx1PxJiFLlzA&t=192

  • @aiart21
    @aiart21 4 หลายเดือนก่อน +1

    Can I ask how much time your ComfyUI process would take with a 4090 for a 30 fps, 30-second 720p video like yours? I am spending 4 hours with the 1.7 SD WebUI. How much time can I save with ComfyUI compared to 1.7 SD mov2mov + ControlNet? Thank you so much for your great video.

    • @aiart21
      @aiart21 4 หลายเดือนก่อน

      Can I ask for a prompt to create a simple background? Or is there a process or extension that changes the background itself? When I use canny, even the background is captured, so the background is drawn as is with i2i. Please teach me how to create different backgrounds. For example, a setting in outer space or Mars

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      Hey, it takes me around 6-7 hours on my 8GB RTX 3080 TI laptop GPU for 30 secs in 720p in ComfyUI (combined time of all steps: Raw + Refiner + Face Fix).
      And if you want to change the background, I have a separate raw workflow for that, here: www.patreon.com/posts/v3-0-bg-changer-97728634
      It's still a work in progress.

    • @aiart21
      @aiart21 4 หลายเดือนก่อน

      @@jerrydavos Wow, Thank you for your pioneering steps. I am also making a video with Webui 1.7. Ultimately, I will have to use comfyui. thank you I will definitely try it out within this week. thank you.

  • @basitqureshi88
    @basitqureshi88 3 หลายเดือนก่อน

    Awesome work mate, can you tell me what terminal you are using that lets you start and stop Comfy at around the 1:44 mark?

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      Stability Matrix

  • @obzerv9570
    @obzerv9570 3 หลายเดือนก่อน

    Thanks for your workflows, they are great! I've just been having one problem while running the "Animation Raw" and "AnimateDiff Refiner" workflows (haven't tried the AnimateDiff Face Fix yet). While processing those workflows, more times than not it crashes and reboots my workstation. I'm running an RTX 3090 with an i9 and 160GB DDR5 RAM and only processing 30 frames. The common crash/reboot point according to the ComfyUI log file is this:
    [AnimateDiffEvo] - INFO - Using motion module motionModel_v01.ckpt:v1.
    It does seem to vary when it crashes though.
    Any idea or insight on why this is happening would be great.
    Thank you

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      1) If your PC crashes, that means something is choking the CPU RAM (I had this crash while working with HD video and longer durations),
      but I have 32GB, and in your case it's 160GB, which is more than enough.
      Monitor your Task Manager > Performance tab while running to see which is choking (memory or GPU).
      2) See how much VRAM your GPU card has; mine is 8GB, which is sufficient for 100 frames at 1280x720 px.
      If you use a higher resolution, this might also cap out your GPU VRAM and cause a crash if your system is using the graphics card for display.
      3) And for "[AnimateDiffEvo] - INFO - Using motion module motionModel_v01.ckpt:v1.":
      I don't think this model should be the cause of the crash... but you can change it to any other for testing to see if that changes anything.
      Test with 10 frames at 856x480 px for a safe limit.

  • @ermexgameryt7298
    @ermexgameryt7298 6 หลายเดือนก่อน

    Amazing job as always! This will work with 6gb vram gpus?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Probably, out of ram errors frequently....

  • @cflvince4638
    @cflvince4638 6 หลายเดือนก่อน +1

    3) AnimateDiff Refiner - LCM v2.1.json: "When loading the graph, the following node types were not found:
    Evaluate Integers
    Nodes that have failed to load will show as red on the graph". But ComfyUI Manager doesn't find the missing node.

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      The Evaluate node is not maintained by its author anymore, so it gave an error. I've updated the refiner here: drive.google.com/drive/folders/15hJM8zXeM9uZFGEjJjZaggrkNQVGp3ly?usp=drive_link
      It won't give the Evaluate error now.

    • @cflvince4638
      @cflvince4638 6 หลายเดือนก่อน

      Thank you @@jerrydavos

  • @Sounds_magical
    @Sounds_magical 5 หลายเดือนก่อน

    @Jerry Davos AI, can I ask you please? I tried different Checkpoints and settings, but the noise and style on the image is almost everywhere the same((( Earlier I used these ckpts with webui A1111 (SD 1.5) and it worked correctly, but here with "ComfyUI_portable" I have some "Image noise issues". Thank you so much for the answer. P.S. Your work is so amazing. Thank you!

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +2

      Hey, if you use ComfyUI inside A1111 (the ComfyUI A1111 extension), the rendered images will be noisy and ugly.
      Please use a standalone ComfyUI, like the one from Stability Matrix, which is compatible with this workflow.
      Hope this answers your question!

    • @Sounds_magical
      @Sounds_magical 5 หลายเดือนก่อน

      @@jerrydavos Jerry, Thank you so much for the answer. I'm using ComfyUI_portable, not a A1111. Also, I've noticed this error in command line using run_nvidia_gpu.bat: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly. - Also, I have this ERROR: Cannot import D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui module for custom nodes: cannot import name 'CompVisVDenoiser' from 'comfy.samplers' (D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py).......... - Maybe the problem is here? Because all the images after your step by step tutorial in my "RAW Animation" look very ugly. It looks like "too much lora" or some glitches or artifacts in my face image. P.S. other thing: Are these embedding important? Because it says that I don't have them. Sorry, that disturbing you.

    • @Sounds_magical
      @Sounds_magical 5 หลายเดือนก่อน

      @@jerrydavos UPD: Now It works! It works perfect with a Batch Range 2 or more. With 1: it has this "noise'. Thank you so much for the tutorial!

  • @dragongaiden1992
    @dragongaiden1992 หลายเดือนก่อน

    Friend, your videos are impressive, my question is how much vram do I need to make an animation in 720p with your method?

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      I had 8GB Vram for these render in the video.

  • @jiaquanzhang9978
    @jiaquanzhang9978 5 หลายเดือนก่อน +1

    Thanks mate for the tutorial. I tried to install the controlnet models in the ckpts folder. However, I cannot find this folder under the comfyui_controlnet_aux, and even when I create one and put the file in the folder, the ComfyUI doesn't recognize it and won't let me choose the file. Any idea how to fix it?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Try these locations:
      1) ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
      2) ComfyUI\models\controlnet
      If using Stability Matrix, put them here:
      3) Stability Matrix\Models\ControlNet
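      As an illustration, a small sketch of copying downloaded ControlNet models into the second location above; the Downloads folder and the control_v11*.pth filename pattern are assumptions based on the lllyasviel repository linked in the description, so adjust them to your setup.

```python
# Copy downloaded ControlNet .pth files into ComfyUI's controlnet models folder.
# Paths and the filename pattern are assumptions; adjust them to your install.
import shutil
from pathlib import Path

downloads = Path.home() / "Downloads"
target = Path("ComfyUI") / "models" / "controlnet"
target.mkdir(parents=True, exist_ok=True)

for pth in downloads.glob("control_v11*.pth"):
    shutil.copy2(pth, target / pth.name)
    print(f"copied {pth.name} -> {target}")
```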

    • @jiaquanzhang9978
      @jiaquanzhang9978 5 หลายเดือนก่อน

      @@jerrydavosIt works, thanks!

    • @gonzaliders1
      @gonzaliders1 5 หลายเดือนก่อน

      please pin this as a top comment!

  • @leoshawn
    @leoshawn 6 หลายเดือนก่อน

    It is a good job! what is the software you used in the Final Sequence Stage? thank you! 13:15

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      After effects

  • @CauedeMattos
    @CauedeMattos 5 หลายเดือนก่อน

    Great tutorial! I am a total noob in it, I still don't understand much but I'll dig in to it. Sorry for this stupid question but: Can I install all of it through Pinokio? It is just more beginner friendly for someone just starting like me, hehe. Thanks!

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      If it can run comfy then surely you can!

    • @CauedeMattos
      @CauedeMattos 5 หลายเดือนก่อน

      Thats great! Thanks! @@jerrydavos

  • @hypersouza2406
    @hypersouza2406 3 หลายเดือนก่อน

    First of all, thank you very much for your education. I have a question for you. 1) ControlNet_Passes_Export_v2.1.json only processes 10 frames when I export with this file. What is the reason for this? It only exported the 1) Frames, 2) Softedge, and 4) Lineart folders. You also had the openpose folder.

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      Hey, you need to change the batch range from 10.
      You can have a look here for the latest version of the ControlNet exporter: www.patreon.com/posts/v4-0-controlnet-98846295

  • @princeperatta8572
    @princeperatta8572 6 หลายเดือนก่อน

    Hello, thanks for the video! Do you know how to use more of my GPU's power? It seems ComfyUI only uses 30-40% of it.

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Try using these arguments "--cuda-device 0 --highvram"
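      For reference, a small sketch of passing those arguments when launching ComfyUI from a source checkout; the ComfyUI directory path is an assumption (portable builds take the same flags via their run script).

```python
# Launch ComfyUI with the suggested flags; assumes a source checkout in ./ComfyUI.
import subprocess

subprocess.run(
    ["python", "main.py", "--cuda-device", "0", "--highvram"],
    cwd="ComfyUI",
    check=True,
)
```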

  • @Ai_Vs_Original
    @Ai_Vs_Original 3 หลายเดือนก่อน

    What are the bbox and SAM models for in the face fix workflow?

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      They are used for face detection and then cropping out the faces for fixing them.

    • @Ai_Vs_Original
      @Ai_Vs_Original หลายเดือนก่อน

      You the best bro❤

  • @_B3ater
    @_B3ater 5 หลายเดือนก่อน

    Bro did you created all this yourself? You are a genius

    • @_B3ater
      @_B3ater 5 หลายเดือนก่อน

      And also I keep getting these glitchy images. I read your note and disabled the LoRA, but that doesn't work either; is there any other reason for this problem that you are aware of?

    • @_B3ater
      @_B3ater 5 หลายเดือนก่อน

      Got it. For anyone having the same issue, if you are dumb like me: some checkpoints are not compatible. Try a different one 😵😳

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Yes, some models are not compatible, see this video for clarity : th-cam.com/video/aysg2vFFO9g/w-d-xo.htmlsi=v2Z4pnpDt2U-DtNq&t=147

    • @_B3ater
      @_B3ater 5 หลายเดือนก่อน +1

      Another update: I used motionmodelv01 instead of TemporalDiff, and used lineart and softedge instead of openpose. I changed the width and height to the same W and H as the output images, and it solved the problem. Finally!

  • @technicalhariom1291
    @technicalhariom1291 4 หลายเดือนก่อน

    Hey bro, I have a 6 GB VRAM GPU (RTX 3060 laptop). Will I be able to do this, and how much time would it take to render the same video as yours (same length)? Can you give me an approximate idea?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      On 6GB, if you made this same video, then approx. 9-10 hours for Raw + 7-8 hours for the Refiner, if it's not shifting to the CPU due to low VRAM... maybe longer due to overloading.
      It can be decreased by rendering at a lower resolution and batch size.

  • @kiminaro224
    @kiminaro224 3 หลายเดือนก่อน

    hey man, reaaally cool stuff
    Do you happen have any image to video content on your patreon?

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      I had somthing for SVD... here : www.patreon.com/posts/ai-svd-with-more-93812677
      It take an image and outputs a video.

    • @kiminaro224
      @kiminaro224 3 หลายเดือนก่อน

      @@jerrydavos ty

  • @gonzaliders1
    @gonzaliders1 5 หลายเดือนก่อน +1

    Where do I download the ControlNets? The ckpts folder does not appear, so I created one. I don't know if I did it correctly, but we'll see in a bit.

  • @Tryoutaccount
    @Tryoutaccount หลายเดือนก่อน

    Hi, thanks for sharing this great workflow. I seem to be getting an error at the input video path: "Failed to convert an input value to an INT value: quality, false, invalid literal for int() with base 10: 'false'". Do you have any idea what causes this issue or how to solve it? Thanks in advance.

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      Hey, please update your ComfyUI and all the custom nodes... it's a mismatch error due to different versions.

    • @Tryoutaccount
      @Tryoutaccount หลายเดือนก่อน

      @@jerrydavos Thanks for the quick reply! It didn't do the trick. Eventually, I got it working by adding numeric values in the quality fields for all image save nodes. Thanks again!

    • @kiwii806
      @kiwii806 หลายเดือนก่อน

      @@Tryoutaccount How did you solve it? I'm getting the same error

    • @Tryoutaccount
      @Tryoutaccount หลายเดือนก่อน +1

      @@kiwii806 Updated ComfyUI and the nodes. Then I replaced the nan (= not a number) values in the quality fields with a 1.

  • @david_ce
    @david_ce 4 หลายเดือนก่อน

    I saw your new workflows for this year with IPAdapter.
    When will you be making a video on this?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +2

      My v3 Series is almost complete... 2 workflows are remaining... then I'll focus more on making tutorials for them

    • @david_ce
      @david_ce 4 หลายเดือนก่อน

      @@jerrydavos thank you for all you’ve done already

  • @pokewayamv9950
    @pokewayamv9950 5 หลายเดือนก่อน

    I have 8GB VRAM; what batch size and resolution do you think I should use?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      I too have 8GB VRAM:
      1) In Raw, I put around 480 x 856 (vertical/portrait) in the dimensions at most.
      2) The batch range is 150-200 for the Raw file and 100 for the Refiner file.
      In the Refiner, upscale is set to 1.2; above that it takes a very long time.

  • @duplicatemate7843
    @duplicatemate7843 5 หลายเดือนก่อน +1

    What was the last song you used at 12:00? You didn't link it haha. Ty

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Nice nice, 😂
      Wiguez, Rico 56 - Gone [NCS] by Best No Copyright Music

  • @user-dk4qm9kg7k
    @user-dk4qm9kg7k 5 หลายเดือนก่อน

    when i ran the part4, i got
    Error occurred when executing SEGSDetailerForAnimateDiff:
    "can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first."
    How to fix it??? Thank you .

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Update the node, it's out of date, or re-install it manually from here if that doesn't fix it: github.com/ltdrdata/ComfyUI-Impact-Pack

  • @trippy6158
    @trippy6158 5 หลายเดือนก่อน

    Hey, I'm wondering how you would do prompt travelling instead of just the normal prompt for the whole video?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      There is a feature in the raw workflows to enable the prompt travel in the version 3 folder here: drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ
      unmute and enable the prompt traveler node and use it like normal.

    • @trippy6158
      @trippy6158 5 หลายเดือนก่อน

      Thank you so much, you're a godsend@@jerrydavos

  • @qingfuqin9657
    @qingfuqin9657 3 หลายเดือนก่อน

    Hi, can you help me with a problem? Error occurred when executing DWPreprocessor

  • @self5738
    @self5738 หลายเดือนก่อน

    For some reason it won't save the renders at the end. The image save node has a red bar around it instead of a green one, and it loads the images but won't save them, and I'm not sure what to do.

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      You are running out of memory. Use a lower batch range, like half of what you are using now.
      If the problem doesn't go away, update your ComfyUI and the other nodes.
      Also send the error logs if it's not solved.

  • @hatuey6326
    @hatuey6326 3 หลายเดือนก่อน

    Great tuto, but at the refiner step I've got an out-of-memory error with a 12 GB RTX 3060. I'll try 50 by 50.

  • @technicalhariom1291
    @technicalhariom1291 5 หลายเดือนก่อน

    Bro, will it take too much time on a 6GB RTX 3060 laptop GPU?
    Can you tell me how much time it will take?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      ... about 20-30 mins for 10-20 frames in 480x856 px

  • @heysuvajit
    @heysuvajit 4 หลายเดือนก่อน

    Is ok to tell us what models are you using?
    As I can see the texts were very prominent

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      civitai.com/models/56680/imp
      This One is used for this video
      Also This one's my favorite : civitai.com/models/144249?modelVersionId=294575

    • @heysuvajit
      @heysuvajit 4 หลายเดือนก่อน

      @@jerrydavos thanks much 😊

    • @heysuvajit
      @heysuvajit 4 หลายเดือนก่อน

      Learning so many things from you in just one video, keep the good work going ❤

  • @hidalgoserra
    @hidalgoserra 5 หลายเดือนก่อน

    Hey there, thank you for your workflow, everything is installed, but as soon as i input the video in mp4 and the output folder i receive this error:
    Prompt outputs failed validation
    GetImageSize:
    - Required input is missing: images
    and in the console:
    ERROR:root:Failed to validate prompt for output 197:
    ERROR:root:* GetImageSize 200:
    ERROR:root: - Required input is missing: images
    ERROR:root:Output will be ignored
    i have put the correct path of my video without quotation marks.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Hey @hidalgoserra, you can try these:
      1) When you load the workflow for the first time, after installing all the nodes, sometimes the workflow gets corrupted and connections get broken. Try dragging and dropping the workflow again.
      2) If your Comfy is not running as admin, it can fail to fetch the video file if it is on the main C drive, so running as admin is also advisable.
      3) Please check that the path has no spaces at the front or the back.
      Hope it gets resolved; you can contact me on Discord (jerrydavos) if you still face problems.

  • @SirKauron
    @SirKauron 5 หลายเดือนก่อน +1

    How can I achieve an anime-style result? What models or tools do you recommend?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      1) IMP - civitai.com/models/56680/imp
      2) Meinamix - civitai.com/models/7240/meinamix
      3) Hellokid2d - civitai.com/models/101254?modelVersionId=192071
      4) Mistoon - Anime - civitai.com/models/24149/mistoonanime
      These Models can give pretty good results.

    • @SirKauron
      @SirKauron 5 หลายเดือนก่อน

      @@jerrydavos Thanks but I tried, but I always get the same faces in the video. Does it have to do with some ControlNet, or do I need to adjust something else in Workflow?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      In Face Fix, using model + loras + proper prompts can improve the results
      @@SirKauron

  • @holerisen
    @holerisen 5 หลายเดือนก่อน

    Is it possible to move this workflow to stable diffusion? Would love a tutorial for that!

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Automatic1111 can't load batch images in parallel like ComfyUI, so it's not possible there yet.

    • @carstenli
      @carstenli 5 หลายเดือนก่อน

      @holerisen it IS stable diffusion btw. Just a more modular and advanced UI (ComfyUI).

  • @usk0602
    @usk0602 5 หลายเดือนก่อน

    When I follow the video and press Queue Prompt, the work stops after a certain time and I am forced to restart ComfyUI. i have RTX4070ti, i9-13900K and 32GB RAM, is this still not enough specs?

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      Make sure your video is not more than 20 seconds and also not in HD or 4K.
      HD videos and longer videos overfill the RAM, hence hanging ComfyUI.

  • @murphy6672
    @murphy6672 4 หลายเดือนก่อน

    Is it normal/intended for DWPose-Estimator to use the CPU instead of the GPU?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      Yes, the GPU path is a little buggy... so I'm avoiding it currently.

  • @ahmedzanklony8858
    @ahmedzanklony8858 4 หลายเดือนก่อน

    I would like to learn how to use ComfyUI and I don't know if my laptop can run it; my GPU is a GTX 1650 Ti. What do you recommend I start learning from?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      Get familiar with the basics of ComfyUI first... I watched these when I first started using Comfy; the channels have a lot of useful ComfyUI tutorials which helped me a lot in learning it:
      1) th-cam.com/video/AbB33AxrcZo/w-d-xo.html&ab_channel=ScottDetweiler
      2) th-cam.com/video/LNOlk8oz1nY/w-d-xo.html&ab_channel=OlivioSarikas
      Then you should watch AnimateDiff tutorials... and you should be good to go once you have a good PC or a cloud one.

    • @ahmedzanklony8858
      @ahmedzanklony8858 4 หลายเดือนก่อน

      @@jerrydavos thanks for responding ❤

  • @giorgio.nmazza
    @giorgio.nmazza 4 หลายเดือนก่อน

    Hi, I'm having an issue with the WAS Node Suite. The error when I queue a prompt is "WAS_Boolean.Return_boolean() got an unexpected keyword argument 'boolean_number'". I have tried reinstalling WAS, updating every node, and changing the boolean value in the node. The JSON in question is 1_0) ControlNet_Passes_Export_v3.0_Automatic and the node is 'Save Sources Frames'.

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      The latest commit of the WAS node suite caused this error.
      Use this version of the node to fix the issue: github.com/WASasquatch/was-node-suite-comfyui/tree/33534f2e48682ddcf580436ea39cffc7027cbb89
      Manually delete the WAS suite custom node folder and replace it with the version from the link above.
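      For reference, a sketch of what that manual replacement could look like; it assumes git is available, that ComfyUI/custom_nodes is the custom-node directory, and pins the commit hash from the link above.

```python
# Re-clone was-node-suite-comfyui and pin it to the commit linked above.
# Assumes git is installed and ComfyUI/custom_nodes is the custom-node folder.
import subprocess

node_dir = "ComfyUI/custom_nodes/was-node-suite-comfyui"
subprocess.run(
    ["git", "clone", "https://github.com/WASasquatch/was-node-suite-comfyui", node_dir],
    check=True,
)
subprocess.run(
    ["git", "checkout", "33534f2e48682ddcf580436ea39cffc7027cbb89"],
    cwd=node_dir,
    check=True,
)
```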

    • @giorgio.nmazza
      @giorgio.nmazza 4 หลายเดือนก่อน

      Thank you sir, it worked@@jerrydavos

    • @giorgio.nmazza
      @giorgio.nmazza 4 หลายเดือนก่อน

      @@jerrydavos have you ever encountered an issue where the control nets would only load 5 images even if I change the batch number?

  • @trippy6158
    @trippy6158 5 หลายเดือนก่อน

    So is there no way to automate the rendering of batches? Seems kinda tedious to have to render 100 frames at a time manually

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      Currently, you can add all the batches to the render queue, which takes only a minute... but if even one fails from an out-of-memory error, that batch is skipped and the final sequence naming gets disturbed...
      I have something in mind to overcome this and automate the batches without failures... I will experiment with it soon.
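      Until that automation exists, one possible way to queue the batches is to post the workflow to ComfyUI's HTTP API once per segment. This is only a rough sketch: the API-format workflow filename, the node id "12", and the input names are assumptions about a typical Load Images node, not the author's actual workflow.

```python
# Queue several batch ranges against a running ComfyUI instance via its /prompt
# endpoint. Node id "12" and the input names are hypothetical placeholders.
import copy
import json
import urllib.request

with open("animation_raw_api.json") as f:    # workflow saved in API format (assumption)
    base = json.load(f)

for start in range(0, 400, 100):             # four 100-frame segments: 0, 100, 200, 300
    wf = copy.deepcopy(base)
    wf["12"]["inputs"]["skip_first_images"] = start
    wf["12"]["inputs"]["image_load_cap"] = 100
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(start, resp.status)
```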

    • @trippy6158
      @trippy6158 5 หลายเดือนก่อน

      So you're referring to this workflow (ControlNet_Passes_Export_v3.0_Automatic) right? This is very helpful, just wondering if there's a way to do that with the Animation Raw workflows?@@jerrydavos

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      @@trippy6158 It can be done, but it will skip frames automatically on errors, which disturbs the sequence.

  • @user-td5ix7zu3b
    @user-td5ix7zu3b 3 หลายเดือนก่อน

    Where's the material that makes all images into videos? I need it so badly.

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน

      This node may help you: github.com/Kosinkadink/ComfyUI-VideoHelperSuite
      Or you may use After Effects.
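      As an alternative outside ComfyUI, a short sketch of assembling the rendered frames into an MP4 with ffmpeg; the frame-naming pattern, frame rate, and output name are assumptions, so match them to your export settings.

```python
# Build an MP4 from a numbered frame sequence using ffmpeg (must be installed).
# The "Refined/frame_%05d.png" pattern and 30 fps are assumptions.
import subprocess

subprocess.run(
    ["ffmpeg", "-framerate", "30", "-i", "Refined/frame_%05d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "final_sequence.mp4"],
    check=True,
)
```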

  • @user-oc2rc9pz2p
    @user-oc2rc9pz2p 5 หลายเดือนก่อน

    Does this software require a video card? Hope you reply

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      An 8GB RTX card.

    • @user-oc2rc9pz2p
      @user-oc2rc9pz2p 5 หลายเดือนก่อน

      @@jerrydavos Thanks

  • @factbeast2775
    @factbeast2775 5 หลายเดือนก่อน +2

    Please make a video for v3.0

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      As soon as I get time, I will make the v3 tutorial series.

  • @renwar_G
    @renwar_G 3 หลายเดือนก่อน

    You're a G bruv

  • @Stormthedude
    @Stormthedude 2 หลายเดือนก่อน +1

    Error occurred when executing DWPreprocessor:
    OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:270: error: (-5:Bad argument) Can't read ONNX file
    \ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\yzd-v/DWPose\dw-ll_ucoco_384.onnx in function 'cv::dnn::dnn4_v20221220::ONNXImporter::ONNXImporter'

    • @jerrydavos
      @jerrydavos  2 หลายเดือนก่อน

      Right-click on the DWPreprocessor node > Fix node, and relink the connections like before; that should fix it... Also update all nodes and ComfyUI.

  • @user-rx5cy2em5q
    @user-rx5cy2em5q 5 หลายเดือนก่อน

    When loading the graph, the following node types were not found:
    KJNodes for ComfyUI 🔗, But I have indeed downloaded this node. Please tell me how to solve it.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      1) Delete the KJNodes folder from the custom nodes directory.
      2) Manually download the latest version from here: github.com/kijai/ComfyUI-KJNodes
      3) Paste it into the custom nodes folder.
      4) Run Comfy as admin.
      Check the console log for errors.
      Let me know if you need more help. 😊
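      For reference, a sketch of steps 1-3 as a script; the ComfyUI/custom_nodes path and the KJNodes folder name are assumptions about a standard install, and git must be available.

```python
# Remove the existing KJNodes folder and re-clone the latest version into
# custom_nodes. The folder name and paths are assumptions; match your install.
import shutil
import subprocess
from pathlib import Path

node_dir = Path("ComfyUI/custom_nodes/ComfyUI-KJNodes")
if node_dir.exists():
    shutil.rmtree(node_dir)              # step 1: delete the old folder
subprocess.run(
    ["git", "clone", "https://github.com/kijai/ComfyUI-KJNodes", str(node_dir)],
    check=True,
)                                        # steps 2-3: fetch the latest version
```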

    • @user-rx5cy2em5q
      @user-rx5cy2em5q 5 หลายเดือนก่อน

      [Impact Pack] Wildcards loading done.
      Traceback (most recent call last):
        File "G:\ComfyUI\nodes.py", line 1810, in load_custom_node
          module_spec.loader.exec_module(module)
        File "<frozen importlib._bootstrap>", line 883, in exec_module
        File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
        File "G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main\__init__.py", line 1, in <module>
          from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
        File "G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main\nodes.py", line 1140, in <module>
          from color_matcher import ColorMatcher
      ModuleNotFoundError: No module named 'color_matcher'
      Cannot import G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main module for custom nodes: No module named 'color_matcher' @@jerrydavos

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Come to my Discord (jerrydavos) and I'll try to help via TeamViewer @@user-rx5cy2em5q

  • @walidflux
    @walidflux 5 หลายเดือนก่อน +9

    Why is it so complicated?

  • @heysuvajit
    @heysuvajit 4 หลายเดือนก่อน

    Does ComfyUI have any command-line or headless approach?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      It can be run with cmd directly also.

    • @heysuvajit
      @heysuvajit 4 หลายเดือนก่อน

      @@jerrydavos Thanks I will check

  • @useful2021
    @useful2021 5 หลายเดือนก่อน

    Wow

  • @user-eu1hh8yy3k
    @user-eu1hh8yy3k 5 หลายเดือนก่อน

    "Is there a way to automate this workflow to run in a loop instead of manually adding to the queue each time?"

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      I'm working on that... Yes, it can be possible.

  • @shabuddinmohammed909
    @shabuddinmohammed909 5 หลายเดือนก่อน

    Please sir, can you tell me the minimum laptop requirements for this to work?
    I have a 2GB RAM Acer laptop. Will this work on my laptop or not?
    Please reply

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Unfortunately, it won't work on 2GB RAM... It needs an 8GB RTX graphics card with 32GB CPU RAM.

    • @shabuddinmohammed909
      @shabuddinmohammed909 5 หลายเดือนก่อน

      @Ai_Davos Please tell me which laptop has all this stuff, please sir

  • @gonzaliders1
    @gonzaliders1 5 หลายเดือนก่อน

    please make a video on Animate Anyone!

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Yeah, I've been wanting to look into it.

  • @sirj3714
    @sirj3714 5 หลายเดือนก่อน

    Hey! Please help with the LoRA model at 07:49; I can't understand where to download this "add_saturation" LoRA and where to put it.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      civitai.com/models/71192/saturation-tweaker-lora-lora
      It's optional, btw.
      Put it in the ComfyUI > models > loras folder.

    • @sirj3714
      @sirj3714 5 หลายเดือนก่อน

      @@jerrydavos Thanks :)

    • @sirj3714
      @sirj3714 5 หลายเดือนก่อน

      @@jerrydavos Help with Lora links again pls at 12:33 "detailed_eye" and "eyeliner" models

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      The eyeliner LoRA is here: civitai.com/models/128118/eyeliner-lora . You can find all LoRAs on the Civitai website. Change the value to "none" for the LoRAs which you don't have; it will run fine without them, they're not that important. @@sirj3714

    • @sirj3714
      @sirj3714 5 หลายเดือนก่อน

      @@jerrydavos Ah okay, thank you 😊

  • @elwinjames
    @elwinjames 6 หลายเดือนก่อน

    Ufff, this is awesome. What are the recommended system specs?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      8GB RTX Card and 32 GB CPU Ram

    • @elwinjames
      @elwinjames 6 หลายเดือนก่อน

      @@jerrydavos thank you so much

    • @elwinjames
      @elwinjames 6 หลายเดือนก่อน

      @@jerrydavos will it work on Nvidia GTX 1080 ?

  • @utkarshsinha4594
    @utkarshsinha4594 หลายเดือนก่อน

    Please help; previously it was working, but now it is not working.

  • @user-xc1uy6kd7e
    @user-xc1uy6kd7e 5 หลายเดือนก่อน

    After importing your workflow, it told me that it was missing CeilNode. How do I solve this? I hope it can be answered, thank you.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน +1

      Install it from here : github.com/aria1th/ComfyUI-LogicUtils

    • @user-xc1uy6kd7e
      @user-xc1uy6kd7e 5 หลายเดือนก่อน

      Thanks!!!!!!!!!!!!!!!!!!!!!@@jerrydavos