AnimateDiff ControlNet Animation v2.1 [ComfyUI]
- Published 31 Dec 2023
- Convert any video into any other style using Comfy UI and AnimateDiff.
This Video is for the version v2.1 of the AnimateDiff Controlnet Animation workflow.
Rendered Video Link: • You and I - Katy Perry...
Workflow Download Links:
1) Documented Tutorial + Workflows : / update-v2-1-lcm-95056616
2) Google Drive Link : drive.google.com/drive/folder...
My Discord Server : / discord
Links Shown During Video :
COMFY MANAGER:
github.com/ltdrdata/ComfyUI-M...
EFFICIENCY NODES v1.92:
civitai.com/models/32342
CONTROLNET MODELS:
huggingface.co/lllyasviel/Con...
CHECKPOINT MODELS , LORAS and VAE
civitai.com/models
LCM LORA
huggingface.co/latent-consist...
Animate Diff Motion Module
1) civitai.com/models/139237
2) huggingface.co/CiaraRowles/Te...
-------------------------------------------------------------------------------------------------
Music Used :
N3X - Tell Me (freetouse.com)
Damtaro - Far Away (freetouse.com)
Markvard - Falling for You (freetouse.com)
Stream Robin Hustin X Tobimorrow - Light It Up (feat. Jex) (SoundCloud)
Wiguez, Rico 56 - Gone [NCS]
-----------------------------------------------------------------
SEO:
Animatediff control net
Animatediff animation
Stable Diffusion animation
comfyui animation
animatediff webui
animatediff controlnet
animatediff github
animatediff stable diffusion
Controlnet animation
how to use animatediff
animation with animate diff comfyui
how to animate in comfy ui
animatediff prompt travel
animate diff prompt travel cli
prompt travel stable diffusion
animatediff comfyui video to video
animatediff comfyui google colab
animatediff comfyui tutorial
animatediff comfyui install
animatediff comfyui img2img
animatediff vid2vid comfyui
comfyui-animatediff-evolved
animatediff controlnet animation in comfyui
katy perry stylization katy perry you and I katy fan art
flicker free, non flicker, comfy animation, ai animation
stable diffusion video
stable diffusion animation
warp fusion
------------------------------------------------------------------- - Entertainment
This is by far the best tutorial I have seen on AnimateDiff. Awesome job!
That was the most problem-free install from a YouTube tutorial I've ever had in ComfyUI. Thank you very much! 👏👏👏
This is the best workflow I've ever seen because its consistency is maxed out. Thank you so much for the tutorial, because it's very complex.
Superb. Thank you for making this. Thank you for not locking it behind some paywall. You are the man, thanks!
Even though I'm just a beginner, the workflow runs smoothly. Thank you very much for your tutorial.
This tutorial is very helpful, thank you very much Jerry!!
You're really amazing, thanks for sharing!
You are amazing! Thanks for your workflows!
Mate, this is genius 👑
Excellent tutorial my friend, subscribed
Best tutorial and best consistency I've been using this. Thank you for sharing your tutorials and workflows.
You're very welcome!
Thank you so much. Thanks to you, I was able to get one step closer to the kind of video I'm aiming for.
I naturally ended up subscribing and turning on notifications.
Occasionally it failed with various error messages, but rewatching the video and understanding it let me solve the problems.
Thank you.
Thank you.
This is pure gold! 👑
❤️
amazing tutorial, thank you for sharing!
Great job!
Thanks for making this tut man
🤗🤗
Welcome 😊
Thank you.
Bro i really want to thank you for you work it helps a lot ,by far best tutorial that i find !
thank you
i agree with this statement 100% 👍
great stuff!
This is wonderful, many thanks!
wonderful work!~~~
This is amazing
super cool!
Great tutorial, thanks for this! Question though: is there a way to feed it an image to be animated like the source video? Say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing, and have Comfy replace that person with the character? Or does AnimateDiff work through prompts only at the moment?
amazing - nice effort.
Thanks a lot
Great video, i just started last week with AI and image generation so this is way out of my league for now, but i like watching it
thank youuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu , It helped me a lot in studying. I look forward to other videos, too
You're welcome
Thanks!
dope!
You are amazing🥰
very cool
Thank you very, very much for your videos. I have learned a lot and successfully created a video like yours. Thank you for your selfless sharing. You are a great promoter of AI.
Glad it helped!
@@jerrydavos If I add you as a member, will I be able to view member videos on YouTube?
Great JoB!!!
Thank you very much ❤❤🙏🙏🔥🔥
You're welcome 😊
Thanks for the tutorial! I am curious what GPU do you have? Very fast rendering, even at places where you didn't speed up the source screen recording 😮
RTX 3070 TI 8GB Laptop GPU and 32 GB CPU Ram
I've used many other workflows and this one is by far one of the best when it comes to consistency. I do have a question. Most of my renders' colors are crazy, especially the background. Any way to make the colors stay more consistent? Is it a model issue? Prompt? Sampler or CFG? Thanks! Amazing work!
I found out LCM is a downgrade... version 4 gives the best results for the renders. You can try v4 with the default settings and improve from there.
Use models trained in or after 2023, which give the best compatibility with AnimateDiff.
Otherwise, Euler_A with the normal scheduler should give the best output.
Use a CFG between 5-7.
For consistency, use smaller batches and character LoRAs with proper prompts.
Very nice tutorial!
Do you think there is any way to decrease the "randomness" between the frames and make it seem like a more continuous video?
it might be possible in the near future... I am also experimenting on consistency of frames....
You seem new to the game :P What you are describing is the literal problem with diffusion techniques! :D
💜💜💜
Thank you so much 👏🏼 One question. Is it possible to make a 15-minute video like this? Or is it only suitable for short videos of a few seconds? Thank you in advance
Yes, you can render any video length with the batch workflow: you render small segments of the video at a time, so a long video becomes multiple small batches.
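The batching idea can be sketched in Python (a hypothetical helper, not part of the workflow itself): split the total frame count into ranges, one per render, optionally overlapping so neighbouring batches can be blended.

```python
def batch_ranges(total_frames, batch_size, overlap=0):
    """Split a frame count into (start, end) ranges (inclusive start,
    exclusive end), with optional overlap between consecutive batches."""
    ranges = []
    start = 0
    while start < total_frames:
        end = min(start + batch_size, total_frames)
        ranges.append((start, end))
        if end == total_frames:
            break
        start = end - overlap
    return ranges

# e.g. a 450-frame video rendered 200 frames at a time
print(batch_ranges(450, 200))  # [(0, 200), (200, 400), (400, 450)]
```

Each tuple is then one queue-and-render pass; overlap > 0 mimics the overlapping technique mentioned later in the thread.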
awesome video. This is the most promising workflow I've seen, but I'm running into some interesting issues. Any ideas why changing the batch range alters my outputs so significantly? When i render a batch of 10 I can get some awesome vibrant results, but when I render 100 with no settings changed all of the frames are simplified and dull.
Hey, make sure you use the latest version 3 workflows: drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ
You can also try changing the AnimateDiff motion module to something other than TemporalDiff, like the new AnimateDiff motion module... or this one: civitai.com/models/139237?modelVersionId=154097
TemporalDiff gives faded, yellow-tinted results.
Also avoid using LCM; it gives faded results in many cases.
Man this is really cool, is it possible to change the character's clothes and the background for example? without the character having the same characteristics as the reference?
Yes, I am testing stuff out with masking, imgur.com/a/oFGAR33, clothing change might be also possible.
@@jerrydavos This was amazing and very satisfying result 💜💜💜
On the 1_0 auto and manual JSONs I get "When loading the graph, the following node types were not found: CeilNode". Do you know how to fix that?
Install it from here : github.com/aria1th/ComfyUI-LogicUtils
Manager skips this one :/
This is insane 😨
Hey, thanks for the video. I have a question
1. In the first step we input the video and generate the frames and the ControlNet outputs.
2. In the second step we feed all the frames + the 2 ControlNet outputs from step 1 to generate the images. Will the batch size in step 2 be the total number of frames, or frames + the images of both ControlNets?
Only the original frames' count.
good!
Thank you greatly for your tutorial!
I get the following error when I queue the prompt in ControlNet Passes, with no quotes in the Input Video Path. Please help!
"Prompt outputs failed validation: Failed to convert an input value to an INT value: quality, false, invalid literal for int() with base 10: 'false'"
Hey, update your nodes and also update ComfyUI, and use the latest CN v4 export version.
The error should go away.
@@jerrydavos Tried your suggestions, however still running into the same error.
Thank you for your video, subbed!
I'm having trouble at 7:15: I don't have the pop-up list of ControlNet models like you do. Am I missing a node?
Edit - fixed my own problem: I googled the ControlNet model file I was missing and downloaded all the .pth files from that GitHub link.
this is giving me so 2008 vibes
Thanks for this amazing video, it helps me a lot. May I ask why my images are not continuous between different generation batches? Say I have a video with 400 frames and I split it into 2 batches of 200 frames; my 2nd batch is not continuous with the 1st. How can I fix that?
Yeah, it's one of the cons of AnimateDiff: it cannot do long videos in one go unless you have an expensive PC... So instead, the workflow lets you render in batches... and you can try the overlapping technique, here: th-cam.com/video/aysg2vFFO9g/w-d-xo.htmlsi=jUGNyx1PxJiFLlzA&t=192
Can I ask how much time a 4090 would spend on a 30 fps, 30-second 720p video with your ComfyUI process? I'm spending 4 hours with SD WebUI 1.7. How much time can I save with ComfyUI compared to 1.7 SD mov2mov + ControlNet? Thank you so much for your great video.
Can I ask for a prompt to create a simple background? Or is there a process or extension that changes the background itself? When I use Canny, even the background is captured, so the background is drawn as-is with i2i. Please teach me how to create different backgrounds, for example a setting in outer space or on Mars.
Hey, it takes me around 6-7 hours on my 8GB RTX 3080 TI laptop GPU for 30 seconds in 720p in ComfyUI (combined time of all steps: Raw + Refiner + Face Fix).
And if you want to change the background then I have a separate raw workflow for that, Here: www.patreon.com/posts/v3-0-bg-changer-97728634
It's still work in progress
@@jerrydavos Wow, thank you for your pioneering steps. I am also making videos with WebUI 1.7. Ultimately I will have to use ComfyUI. I will definitely try it out within this week. Thank you.
Awesome work mate, can you tell me what terminal you are using that lets you start and stop Comfy at around the 1:44 mark?
Stability Matrix
Thanks for your workflows, they are great! I've just been having one problem while running "Animation Raw" and "AnimateDiff Refiner" (haven't tried the AnimateDiff Face Fix yet). While processing those workflows, more often than not it crashes and reboots my workstation. I'm running an RTX 3090 with an i9 and 160GB DDR5 RAM, and only processing 30 frames. The common crash/reboot point according to the ComfyUI log file is this...
[AnimateDiffEvo] - [0;32mINFO [0m - Using motion module motionModel_v01.ckpt:v1.
It does seem to vary when it crashes though.
Any idea or insight on why this is happening would be great.
Thank you
1) If your PC crashes, that means something is choking the CPU RAM (I had this crash while working with HD video and longer durations),
but I have 32GB and in your case it's 160GB, which is more than enough.
Monitor Task Manager > Performance tab while running to see which is choking (memory or GPU).
2) See what VRAM your GPU card has; mine is 8GB, which is sufficient for 100 frames at 1280x720 px.
If you use a higher resolution, this might also cap out your GPU VRAM and cause a crash if your system is using the graphics card for display.
3) And for "[AnimateDiffEvo] - [0;32mINFO [0m - Using motion module motionModel_v01.ckpt:v1.":
I don't think this model should be the cause of a crash... but you can change to any other one to test whether that changes anything.
Test with 10 frames at 856x480 px for a safe limit.
Amazing job as always! This will work with 6gb vram gpus?
Probably, but with frequent out-of-RAM errors...
3) AnimateDiff Refiner - LCM v2.1.json: "When loading the graph, the following node types were not found:
Evaluate Integers
Nodes that have failed to load will show as red on the graph." But ComfyUI Manager does not show the missing node.
The Evaluate node is not maintained by the author anymore, so it gave an error. I've updated the refiner here: drive.google.com/drive/folders/15hJM8zXeM9uZFGEjJjZaggrkNQVGp3ly?usp=drive_link
It won't give the Evaluate error now.
Thank you @@jerrydavos
@Jerry Davos AI, can I ask you something please? I tried different checkpoints and settings, but the noise and style on the image is almost the same everywhere. Earlier I used these checkpoints with WebUI A1111 (SD 1.5) and it worked correctly, but here with ComfyUI_portable I have some image noise issues. Thank you so much for the answer. P.S. Your work is so amazing. Thank you!
Hey, if you use ComfyUI inside A1111 (the ComfyUI A1111 extension), the rendered images will be noisy and ugly.
Please use a standalone ComfyUI, like the one from Stability Matrix, which is compatible with this workflow.
Hope this answers your question!
@@jerrydavos Jerry, thank you so much for the answer. I'm using ComfyUI_portable, not A1111. Also, I've noticed this error on the command line when using run_nvidia_gpu.bat: "Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly." And I have this error: "Cannot import D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui module for custom nodes: cannot import name 'CompVisVDenoiser' from 'comfy.samplers' (D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py)". Maybe the problem is there? Because all the images from your step-by-step tutorial in my "RAW Animation" look very ugly, like "too much lora" or glitches or artifacts in the face. P.S. One other thing: are those embeddings important? It says that I don't have them. Sorry for disturbing you.
@@jerrydavos UPD: Now it works! It works perfectly with a batch range of 2 or more. With 1 it has this "noise". Thank you so much for the tutorial!
Friend, your videos are impressive, my question is how much vram do I need to make an animation in 720p with your method?
I had 8GB Vram for these render in the video.
Thanks mate for the tutorial. I tried to install the ControlNet models in the ckpts folder. However, I cannot find this folder under comfyui_controlnet_aux, and even when I create one and put the file in it, ComfyUI doesn't recognize it and won't let me choose the file. Any idea how to fix it?
Try these locations:
1) ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2) ComfyUI\models\controlnet
If using Stability Matrix, put them here:
3) Stability Matrix\Models\ControlNet
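As a quick sanity check, a few lines of Python can report which of those candidate folders actually exist on your machine (the relative paths below are the ones listed above; adjust the root to wherever your install lives):

```python
from pathlib import Path

# Candidate ControlNet model folders mentioned above, relative to the
# install root (ComfyUI or Stability Matrix).
CANDIDATES = [
    "ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts",
    "ComfyUI/models/controlnet",
    "Stability Matrix/Models/ControlNet",
]

def existing_model_dirs(root):
    """Return the candidate folders that actually exist under `root`."""
    return [p for c in CANDIDATES if (p := Path(root) / c).is_dir()]

if __name__ == "__main__":
    for folder in existing_model_dirs("."):
        print("found:", folder)
```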
@@jerrydavos It works, thanks!
please pin this as a top comment!
It is a good job! What is the software you used in the final sequence stage at 13:15? Thank you!
After effects
Great tutorial! I am a total noob in it, I still don't understand much but I'll dig in to it. Sorry for this stupid question but: Can I install all of it through Pinokio? It is just more beginner friendly for someone just starting like me, hehe. Thanks!
If it can run comfy then surely you can!
Thats great! Thanks! @@jerrydavos
First of all, thank you very much for your education. I have a question for you. ControlNet_Passes_Export_v2.1.json only processes 10 frames when I run it. What is the reason for this? It only exported the 1) Frames, 2) Softedge and 3) Lineart folders. You also had the openpose folder.
Hey, you need to change the batch range from the default of 10.
You can have a look here for the latest version of the ControlNet exporter: www.patreon.com/posts/v4-0-controlnet-98846295
Hello, thanks for the video! Do you know how to use more of my GPU's power? It seems ComfyUI only uses 30-40% of it.
Try using these arguments "--cuda-device 0 --highvram"
What are the bbox and SAM models for in the face fix workflow?
They are used for face detection and then cropping out the faces for fixing them.
You the best bro❤
Bro, did you create all this yourself? You are a genius.
Also, I keep getting these glitchy images. I read your note and disabled the LoRA, but that doesn't work either. Is there any other reason for this problem that you are aware of?
Got it. For anyone having the same issue: if you are dumb like me, some checkpoints are not compatible. Try a different one 😵😳
Yes, some models are not compatible, see this video for clarity : th-cam.com/video/aysg2vFFO9g/w-d-xo.htmlsi=v2Z4pnpDt2U-DtNq&t=147
Another update: I used motionmodel_v01 instead of TemporalDiff, and used lineart and softedge instead of openpose. I changed the width and height to match the output images, and that solved the problem. Finally!
Hey bro, I have a 6 GB VRAM GPU (RTX 3060 laptop). Will I be able to do this, and how much time would it take to render the same video as yours (same length)? Can you give me an approximate idea?
With 6GB, if you made this same video: approx. 9-10 hours for Raw + 7-8 hours for Refiner, if it's not shifting to the CPU due to low VRAM... maybe longer due to overloading.
It can be decreased by rendering at a lower resolution and batch size.
Hey man, really cool stuff.
Do you happen to have any image-to-video content on your Patreon?
I had something for SVD... here: www.patreon.com/posts/ai-svd-with-more-93812677
It takes an image and outputs a video.
@@jerrydavos ty
Where do I download the ControlNets? The ckpts folder does not appear, so I created one. I don't know if I did it correctly, but we'll see in a bit.
Did you find out?
Hi, thanks for sharing this great workflow. I seem to be getting an error at the input video path: "Failed to convert an input value to an INT value: quality, false, invalid literal for int() with base 10: 'false'". Do you have any idea what causes this issue or how to solve it? Thanks in advance.
Hey, please update your ComfyUI and all the custom nodes... it's a mismatch error due to different versions.
@@jerrydavos Thanks for the quick reply! It didn't do the trick. Eventually, I got it working by adding numeric values in the quality fields for all image save nodes. Thanks again!
@@Tryoutaccount How did you solve it? I'm getting the same error
@@kiwii806 I updated ComfyUI and the nodes, then replaced the nan (= not a number) values in the quality fields with a 1.
I saw your new workflows for this year with IPAdapter.
When will you be making a video on this?
My v3 Series is almost complete... 2 workflows are remaining... then I'll focus more on making tutorials for them
@@jerrydavos thank you for all you’ve done already
I have 8GB VRAM. What batch size and resolution do you think I should use?
I also have 8GB VRAM:
1) In Raw, I put around 480 x 856 (vertical/portrait) as the max dimensions.
2) Batch range is 150-200 for the Raw file and 100 for the Refiner file.
In the Refiner, upscale is set to 1.2; above that it takes a very long time.
What was the last song you used at 12:00? You didn't link it haha. Ty.
Nice nice, 😂
Wiguez, Rico 56 - Gone [NCS] by Best No Copyright Music
When I ran part 4, I got:
Error occurred when executing SEGSDetailerForAnimateDiff:
"can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first."
How do I fix it? Thank you.
Update the node, it's out of date, or re-install it manually from here if that does not fix it: github.com/ltdrdata/ComfyUI-Impact-Pack
Hey, I'm wondering how you would do prompt travelling instead of just the normal prompt for the whole video?
There is a feature in the Raw workflows to enable prompt travel, in the version 3 folder here: drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ
Unmute and enable the prompt traveler node and use it like normal.
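For intuition, prompt travel is just a schedule mapping frame indices to prompts, with each frame using the most recent keyframe at or before it. A minimal Python sketch of the idea (illustrative only; the actual node's syntax and prompts here are made up):

```python
# Hypothetical prompt-travel schedule: frame index -> prompt.
schedule = {
    0:  "girl dancing, city street, night",
    48: "girl dancing, beach, sunset",
    96: "girl dancing, snowy forest",
}

def prompt_for_frame(schedule, frame):
    """Return the prompt of the last keyframe at or before `frame`."""
    keys = sorted(k for k in schedule if k <= frame)
    return schedule[keys[-1]] if keys else ""

print(prompt_for_frame(schedule, 60))  # -> girl dancing, beach, sunset
```

The prompt-travel node additionally interpolates between keyframes in latent/conditioning space, but the keyframe lookup above is the core of how the schedule is read.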
Thank you so much, you're a godsend@@jerrydavos
Hi, can you help me with a problem? Error occurred when executing DWPreprocessor
For some reason it won't save the renders at the end. The Image Save node has a red bar around it instead of a green one; it loads the images but won't save them, and I'm not sure what to do.
You are running out of memory. Use a lower batch range, like half of what you are using now.
If the problem doesn't go away, update your ComfyUI and other nodes.
Also post the error logs if it's still not solved.
Great tutorial, but at the refiner step I got an out-of-memory error with a 12 GB RTX 3060. I'll try 50 by 50.
Bro, will it take too much time on a 6GB RTX 3060 laptop GPU?
Can you tell me how much time it will take?
... about 20-30 mins for 10-20 frames at 480x856 px.
Is it OK to tell us what models you are using?
As I can see, the textures were very prominent.
civitai.com/models/56680/imp
This One is used for this video
Also This one's my favorite : civitai.com/models/144249?modelVersionId=294575
@@jerrydavos thanks much 😊
Learning so many things from you in just one video, keep the good work going ❤
Hey there, thank you for your workflow. Everything is installed, but as soon as I input the video in mp4 and the output folder I receive this error:
Prompt outputs failed validation
GetImageSize:
- Required input is missing: images
and in the console:
ERROR:root:Failed to validate prompt for output 197:
ERROR:root:* GetImageSize 200:
ERROR:root: - Required input is missing: images
ERROR:root:Output will be ignored
I have put in the correct path to my video, without quotation marks.
Hey @hidalgoserra, you can try these:
1) When you load the workflow for the first time after installing all the nodes, sometimes the workflow gets corrupted and connections break. Try dragging and dropping the workflow in again.
2) If your Comfy is not running as admin, it can fail to fetch the video file if it is on the main C drive, so running as admin is advisable too.
3) Check that the path has no spaces at the front or back.
Hope it gets resolved. You can contact me on Discord (jerrydavos) if you still face the problem.
How can I achieve an anime-style result? What models or tools do you recommend?
1) IMP - civitai.com/models/56680/imp
2) Meinamix - civitai.com/models/7240/meinamix
3) Hellokid2d - civitai.com/models/101254?modelVersionId=192071
4) Mistoon - Anime - civitai.com/models/24149/mistoonanime
These Models can give pretty good results.
@@jerrydavos Thanks, but I tried and I always get the same faces in the video. Does it have to do with some ControlNet, or do I need to adjust something else in the workflow?
In Face Fix, using a model + LoRAs + proper prompts can improve the results.
Is it possible to move this workflow to stable diffusion? Would love a tutorial for that!
Automatic1111 can't load batch images in parallel like ComfyUI, so it's not possible there yet.
@holerisen it IS stable diffusion btw. Just a more modular and advanced UI (ComfyUI).
When I follow the video and press Queue Prompt, the work stops after a certain time and I am forced to restart ComfyUI. I have an RTX 4070 Ti, i9-13900K and 32GB RAM. Are these specs still not enough?
Make sure your video is not more than 20 seconds and not in HD or 4K.
HD and longer videos overfill the RAM, which hangs ComfyUI.
Is it normal/intended for DWPose-Estimator to use the CPU instead of the GPU?
Yes, the GPU path is a little buggy... so it's avoided currently.
I would like to learn how to use ComfyUI, and I don't know if my laptop can run it. My GPU is a GTX 1650 Ti. What do you recommend I start learning from?
Get familiar with the basics of ComfyUI first... I watched these when I first started using Comfy; the channels have a lot of useful ComfyUI tutorials which helped me a lot in learning it:
1) th-cam.com/video/AbB33AxrcZo/w-d-xo.html&ab_channel=ScottDetweiler
2) th-cam.com/video/LNOlk8oz1nY/w-d-xo.html&ab_channel=OlivioSarikas
Then you should watch AnimateDiff tutorials... and you should be good to go, as long as you have a good PC or a cloud one.
@@jerrydavos thanks for responding ❤
Hi, I'm having an issue with WAS Node Suite. The error when I queue a prompt is "WAS_Boolean.Return_boolean() got an unexpected keyword argument 'boolean_number'". I have tried reinstalling WAS, updating every node and changing the boolean value in the node. The JSON in question is 1_0) ControlNet_Passes_Export_v3.0_Automatic and the node is 'Save Sources Frames'.
The latest commit of the WAS node caused this error.
Use this version of the node to fix the issue: github.com/WASasquatch/was-node-suite-comfyui/tree/33534f2e48682ddcf580436ea39cffc7027cbb89
Manually delete the WAS suite custom node and replace it with the one from the link above.
Thank you sir, it worked@@jerrydavos
@@jerrydavos have you ever encountered an issue where the control nets would only load 5 images even if I change the batch number?
So is there no way to automate the rendering of batches? It seems kind of tedious to have to render 100 frames at a time manually.
Currently, you can add all the batches to the render queue, which takes only a minute... but if even one fails with an out-of-memory error, that batch is skipped and the final sequence naming gets disturbed.
I have something in mind to overcome this... to automate the batches without failures... I will experiment with it soon.
So you're referring to this workflow (ControlNet_Passes_Export_v3.0_Automatic) right? This is very helpful, just wondering if there's a way to do that with the Animation Raw workflows?@@jerrydavos
@@trippy6158 ... It can be done, but it will skip frames automatically on errors, which disturbs the sequence.
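Since a skipped batch leaves holes in the numbered output sequence, a small script can at least tell you where the sequence broke before you re-render (a hypothetical helper, not part of the workflow itself; it assumes frame files are PNGs with a number in the name):

```python
import re
from pathlib import Path

def missing_frames(folder, pattern=r"(\d+)"):
    """Find gaps in a numbered frame sequence, e.g. after a batch
    was skipped due to an out-of-memory error."""
    nums = sorted(
        int(m.group(1))
        for f in Path(folder).glob("*.png")
        if (m := re.search(pattern, f.stem))
    )
    if not nums:
        return []
    have = set(nums)
    return [n for n in range(nums[0], nums[-1] + 1) if n not in have]
```

Run it on the output folder after queuing all batches; any numbers it returns are the batches to re-queue.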
Where's the tool that turns all the images into a video? I need it so badly.
This node may help you: github.com/Kosinkadink/ComfyUI-VideoHelperSuite
Or you can use After Effects.
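If you'd rather stitch the rendered frames outside ComfyUI, a plain ffmpeg call does the same job as the VideoHelperSuite node. A sketch assuming ffmpeg is on your PATH and the frames are numbered (the file names here are made up):

```python
import subprocess

def frames_to_video(pattern, fps, out_path, run=False):
    """Build (and optionally run) an ffmpeg command that stitches a
    numbered PNG sequence into an mp4."""
    cmd = [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", pattern,            # e.g. "frames/frame_%04d.png"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # widest player compatibility
        out_path,
    ]
    if run:
        subprocess.run(cmd, check=True)
    return cmd

print(frames_to_video("frames/frame_%04d.png", 30, "out.mp4"))
```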
Does this software require a video card? Hope you reply.
An RTX card with 8 GB of VRAM.
@@jerrydavos Thanks
Please make a video for v3.0
As soon as I get time, I will make the v3 tutorial series.
You're a G, bruv
Error occurred when executing DWPreprocessor:
OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:270: error: (-5:Bad argument) Can't read ONNX file
\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\yzd-v/DWPose\dw-ll_ucoco_384.onnx in function 'cv::dnn::dnn4_v20221220::ONNXImporter::ONNXImporter'
Right-click on the DWPreprocessor node > Fix node, and relink the connections as before; that should fix it... Also update all nodes and ComfyUI.
When loading the graph, the following node types were not found:
KJNodes for ComfyUI 🔗. But I have indeed downloaded this node. Please tell me how to solve it.
1) Delete the KJ nodes folder from the custom node directory
2) Manually Download the latest version from here : github.com/kijai/ComfyUI-KJNodes
3) Paste it in the Custom Nodes
4) Run comfy as admin.
See console log for errors,
Let me know, if you need more help. 😊
[Impact Pack] Wildcards loading done.
Traceback (most recent call last):
File "G:\ComfyUI\nodes.py", line 1810, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main\__init__.py", line 1, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main\nodes.py", line 1140, in <module>
from color_matcher import ColorMatcher
ModuleNotFoundError: No module named 'color_matcher'
Cannot import G:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main module for custom nodes: No module named 'color_matcher' @@jerrydavos
Come to Discord (jerrydavos) and I'll try to help over TeamViewer @@user-rx5cy2em5q
Why is it so complicated?
Does ComfyUI have any command-line or headless approach?
It can also be run directly from the command line.
@@jerrydavos Thanks I will check
Wow
Is there a way to automate this workflow to run in a loop instead of manually adding to the queue each time?
I'm working on that... yes, it should be possible.
Please sir, can you tell me the minimum laptop requirements for this to work?
I have a 2GB RAM Acer laptop. Will this work on my laptop?
Please reply.
Unfortunately, it won't work on 2GB RAM... It needs an 8GB RTX graphics card with 32 GB of CPU RAM.
@Ai_Davos Please tell me which laptops have all this stuff, sir.
please make a video on Animate Anyone!
Yeah, I've been wanting to look into it.
Hey! Please help with the LoRA model at 07:49. I can't figure out where to download this "add_saturation" and where to put it.
civitai.com/models/71192/saturation-tweaker-lora-lora
It's optional, btw.
Put it in the ComfyUI > models > loras folder.
@@jerrydavos Thanks :)
@@jerrydavos Help with the LoRA links again please, at 12:33: the "detailed_eye" and "eyeliner" models.
The eyeliner LoRA is here: civitai.com/models/128118/eyeliner-lora. You can find all LoRAs on the Civitai website, and change the value to "none" for the LoRAs you don't have; it will run fine without them, they're not that important. @@sirj3714
@@jerrydavos Ah okay, thank you 😊
Ufff, this is awesome! Could you share the recommended system spec?
An 8GB RTX card and 32 GB of CPU RAM
@@jerrydavos thank you so much
@@jerrydavos will it work on Nvidia GTX 1080 ?
Please help: it was working previously, but now it is not working.
After importing your workflow, it told me it was missing CeilNode. How do I solve this? I hope it can be answered, thank you.
Install it from here : github.com/aria1th/ComfyUI-LogicUtils
Thanks!!! @@jerrydavos