AnimateDiff ControlNet Animation v1.0 [ComfyUI]

  • Published on Nov 4, 2023
  • Animation Made in ComfyUI using AnimateDiff with only ControlNet Passes.
    Main Animation Json Files:
    Version v1 - drive.google.com/drive/folder...
    New Version v2.1 - / update-v2-1-lcm-95056616
    Output Render 1 (SD Mashup) : • Models MashUp AnimateD...
    Output Render 2 (EpicRealism) : • Collide - AnimateDiff ...
    Output Render 3 (Fashion) : • AnimateDiff Control Ne...
    My Discord Server : / discord
    -----------------------------------------------------------------
    Resources Used :
    1) ComfyUI Impact Pack
    github.com/ltdrdata/ComfyUI-I...
    2) ComfyUI_FizzNodes
    github.com/FizzleDorf/ComfyUI...
    3) ComfyUI's ControlNet Auxiliary Preprocessors
    github.com/Fannovel16/comfyui...
    4) ComfyUI-Advanced-ControlNet
    github.com/Kosinkadink/ComfyU...
    5) ComfyUI Inspire Pack
    github.com/ltdrdata/ComfyUI-I...
    6) AnimateDiff Evolved
    github.com/Kosinkadink/ComfyU...
    7) ComfyUI-VideoHelperSuite
    github.com/Kosinkadink/ComfyU...
    8) Dynamic Thresholding
    github.com/mcmonkeyprojects/s...
    ---Checkpoint---
    1) Temporaldiff-v1-animatediff
    huggingface.co/CiaraRowles/Te...
    Put in : ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\models
    -----------------------------------------------------------------------------------------------------
    Reference video : • 'Collide' Dance Trend ...
    -----------------------------------------------------------------------------------------------------
    ◉ I also post all my Secrets techniques, tips & tricks, workflows, project files and Tutorials on my Patreon account : / jerrydavos ◉
    -------------------------------------------------------------------------------------------------
    Music Used :
    Vintage Lukrembo - Dream With Tea (freetousee.com)
    Stream Universal - Vibe Tracks (Soundcloud)
    Dstrion - Alibi (ft. Heleen) [NCS]
    Stream skyline - PHONK BEAT (SoundCloud)
    -----------------------------------------------------------------
    SEO:
    Animatediff control net
    Animatediff animation
    Stable Diffusion animation
    comfyui animation
    animatediff webui
    animatediff controlnet
    animatediff github
    animatediff stable diffusion
    Controlnet animation
    how to use animatediff
    animation with animate diff comfyui
    how to animate in comfy ui
    animatediff prompt travel
    animate diff prompt travel cli
    prompt travel stable diffusion
    animatediff comfyui video to video
    animatediff comfyui google colab
    animatediff comfyui tutorial
    animatediff comfyui install
    animatediff comfyui img2img
    animatediff vid2vid comfyui
    comfyui-animatediff-evolved
    animatediff controlnet animation in comfyui
    -------------------------------------------------------------------
  • Entertainment

Comments • 331

  • @ClancyInkk
    @ClancyInkk 7 หลายเดือนก่อน +31

    You've inspired me to download the whole lot, god help me downloading it all correctly and getting it working... anyway thank you for the video, maybe if I get there you'll see a creation of mine...

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +3

      I'm feeling proud that I inspired you.

    • @illinium1727
      @illinium1727 10 วันที่ผ่านมา

      You inspired me too but I hope I can find the software. I dance very well but this effect makes it fabulous!❤

  • @lylewells5660
    @lylewells5660 6 หลายเดือนก่อน +3

    Excellent tutorial, one of the best I've seen on stable diffusion. Very descriptive, easy to follow, and the imagery (color coordinating functions) was very intuitive and really helps to understand the process. Thought I should let you know how much I appreciate the extra effort and generosity you implement in your teaching. Going to go watch your other videos and anticipate more tutorials of this quality from you in the future. Thank you, Love and Respect!

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Thank you for this generous reply @lylewells5660, it's been a pleasure to make tutorials like this that leave you feeling satisfied ❤️

  • @YooArtifical
    @YooArtifical 6 หลายเดือนก่อน

    This video is so clear and concise. Thank you so much.

  • @mattm7319
    @mattm7319 2 หลายเดือนก่อน

    I saw your video for the first time today. You're fast and to the point. Very good workflow and video. I hope the updated one works with the new controlnets :D

  • @user-ct5dw2rp1b
    @user-ct5dw2rp1b 5 หลายเดือนก่อน +1

    God, this is really amazing. Thank you so much. I was wanting a real way to learn more about these new AI tools, and this video you made is a real piece of knowledge. I hope to see more people learning from you, and that your channel gets a lot of success.

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Glad you liked it! 😊

  • @Freakarium
    @Freakarium 4 หลายเดือนก่อน +1

    omg aDetailer post-processing trick was awesome, thanks a lot ^_^

  • @micbab-vg2mu
    @micbab-vg2mu 7 หลายเดือนก่อน +1

    great workflow:)

  • @FortniteJams
    @FortniteJams 7 หลายเดือนก่อน +1

    Thank you very much, this really is one of the better ComfyUI animation tutorials; I will take in the rest too. Again, cheers!!😍😍👍

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Glad it was helpful!

    • @FortniteJams
      @FortniteJams 7 หลายเดือนก่อน +1

      @@jerrydavos kind of can't tell you how much.

  • @finshingcatfly1483
    @finshingcatfly1483 7 หลายเดือนก่อน +1

    This is all I need, thank you so much for the hard work!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      You're welcome 😊

  • @Lamson777
    @Lamson777 8 หลายเดือนก่อน +2

    Great works, bro!

    • @jerrydavos
      @jerrydavos  8 หลายเดือนก่อน

      Appreciate it!

  • @ronnysempai
    @ronnysempai 6 หลายเดือนก่อน

    Thank you so much for sharing, excellent work.

  • @wolflora
    @wolflora 7 หลายเดือนก่อน +1

    Best tutorial! Thank you so much! The music is sooo good ❤

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      You're welcome!

  • @sarahmccarthy1720
    @sarahmccarthy1720 หลายเดือนก่อน

    you are the best! please keep making tutorials like this
    thanks a lot

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      More to come!

  • @RichardWieditz
    @RichardWieditz 6 หลายเดือนก่อน

    Good Work by Jerry Davos AI,
    well explained and well structured. Keep doing these videos. Pure gold.

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      thank you

  • @iacov_art7013
    @iacov_art7013 7 หลายเดือนก่อน +1

    Fantastic. I appreciate that ❤

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      You're welcome 😊

  • @PercuSoundSystem
    @PercuSoundSystem 6 หลายเดือนก่อน

    Thanks a lot. You answered a lot of my questions. Very good job!!!

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Glad to hear that!

  • @ertugrulbulbul8592
    @ertugrulbulbul8592 3 หลายเดือนก่อน

    Thank you so much!

  • @kriensboben2230
    @kriensboben2230 7 หลายเดือนก่อน +1

    Nice video, very detailed. Thank you.

  • @tinytut
    @tinytut 7 หลายเดือนก่อน +1

    This is a very technical and forward-looking guide

  • @syltare_1771
    @syltare_1771 7 หลายเดือนก่อน

    Wow, I've been really into ComfyUI lately, and this is a very interesting video. Thank you!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Thank you!

  • @DeepStateCabal
    @DeepStateCabal 6 หลายเดือนก่อน

    Thank You!!

  • @AndyHTu
    @AndyHTu 7 หลายเดือนก่อน +1

    Damn man, your video is incredible. Hope to see you do more videos. Would love to pick your brain. You need a Patreon channel to teach this stuff! Your channel is going to blow up. The presentation is stunning.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Brain: Well, I should thank you! I will enjoy it a lot 😈.
      Me: btw, I already have a Patreon account where I post my breakdowns and tutorials: patreon.com/jerrydavos
      And thanks, I'll keep finding ways to improve and teach it.

    • @AndyHTu
      @AndyHTu 7 หลายเดือนก่อน

      Just saw this message right now. I'm going to join soon! @@jerrydavos

  • @joelface
    @joelface 7 หลายเดือนก่อน +7

    I will NOT be attempting this tutorial, but I enjoyed watching it and understanding what this process is like MUCH better than I did before, so thank you very much. The end results are awesome, and I'm excited to watch how this will evolve over the next few years. I've noticed that the soft-edge images result in a VERY similar result to the original video, since they include everything from clothing style to room dimensions. This is very useful when you WANT that, but I wonder if there will be advancements in this area that allow you a bit more freedom to keep certain elements from the original video while changing others more drastically. Regardless, thank you for sharing your process.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Yeah, using SoftEdge mostly carries over the original elements. If it were purely OpenPose, you could make a skeleton dance without any elements from the original video, but the OpenPose markers jump around in every frame, which makes the body jittery, so it's avoided for now. It will develop in the future.
      You're welcome!

    • @AraShiNoMiwaKo
      @AraShiNoMiwaKo 6 หลายเดือนก่อน

      This shit will be obsolete in about two months. No need to waste time with all this shit.

    • @joelface
      @joelface 6 หลายเดือนก่อน

      @@AraShiNoMiwaKo even if true, it’s fascinating to understand these steps along the journey to better and better ai. A lot of this stuff will get automated, but understanding how it works now can help you understand how to get better and better results later as well, I bet.

    • @AraShiNoMiwaKo
      @AraShiNoMiwaKo 6 หลายเดือนก่อน

      @@joelface meh you don't need to know how an engine works to drive a car, or anything basically, this is the same. I'm an artist, I'm not into this nerdy thing. And btw this dude didn't even upload his json workflow, so thanks for nothing, show-off.

  • @redwong6119
    @redwong6119 8 หลายเดือนก่อน +1

    Very detailed, nice.

    • @jerrydavos
      @jerrydavos  8 หลายเดือนก่อน

      Glad you like it! Thanks

  • @zakirtour
    @zakirtour 6 หลายเดือนก่อน

    thanks for this video

  • @pixelpisasu
    @pixelpisasu 4 หลายเดือนก่อน

    You are amazing ❤❤❤❤

  • @royalH-kz3eu
    @royalH-kz3eu 7 หลายเดือนก่อน +1

    you are great

  • @bobsaget420
    @bobsaget420 7 หลายเดือนก่อน +1

    10/10, thanks man!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Glad you liked it!

    • @bobsaget420
      @bobsaget420 7 หลายเดือนก่อน

      @@jerrydavos very much. Really providing a ton of value. Also, your music taste is very good.
      One question, if you're open. Could you address how you got so much consistency? I tried this with a TikTok dance video and noticed that the clothing and hair color in particular shifted a lot. Is the answer to this just to put it in the prompt (for instance, for the anime dancing girl at the end, you put "black hair, skirt, etc." in the prompt)?
      Thanks again!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      @@bobsaget420 Thanks for the compliment, I am very fond of music. And yes, prompts help only up to a certain extent. After rendering from ComfyUI, I use DaVinci's Deflicker (Fluro Light) or sometimes Topaz Video AI for deflickering.

  • @arossoft
    @arossoft 7 หลายเดือนก่อน +1

    Super!
    🤠

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Thanks!🤩

  • @wezone88
    @wezone88 7 หลายเดือนก่อน +24

    I don’t understand anything, but it’s very interesting

    • @privateName419
      @privateName419 7 หลายเดือนก่อน

      3d motion capture

    • @AhmadIzHere
      @AhmadIzHere 4 หลายเดือนก่อน

      😂 yeah

  • @wayneqwele8847
    @wayneqwele8847 7 หลายเดือนก่อน

    Thank you

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      You're welcome

  • @lee_sung_studio
    @lee_sung_studio 5 หลายเดือนก่อน +1

    Thank you. 감사합니다.

  • @MrHa0c
    @MrHa0c 7 หลายเดือนก่อน +1

    thank u , that great 😍😍😍😍🤩🤩🤩🤩

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      You're welcome 😊

  • @elifmiami
    @elifmiami หลายเดือนก่อน

    I actually have been looking for an easy and fast way to make an animation workflow! Thank you. The only thing I'm struggling with is the background: when I prompt a plain background (white or green, etc.) it does not listen to my prompt. How can I improve this?

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      Use the BG changer v2 file:
      Tutorial : www.patreon.com/posts/v3-0-bg-changer-97728634
      File:
      drive.google.com/drive/folders/1Y7FSlNOy1N2z71r0T91Ty8Yrq5IuwolE

  • @Denka_
    @Denka_ 6 หลายเดือนก่อน

    Hey, you need more than 8 ComfyUI extensions btw! You're missing like 10 of them. When I loaded the .json a bunch of nodes were red! Thankfully the Manager extension allowed me to install the missing node extensions. (Not sure if this is because I used V2 by accident.)

  • @mick7727
    @mick7727 5 หลายเดือนก่อน

    this was laid out way more complicated than it needed to be.

  • @kgeo753
    @kgeo753 2 หลายเดือนก่อน

    Who are the people who are writing this software? It has to be people in the industry who are violating noncompete clauses and NDAs. This isn't the kind of thing you can learn to build in between shifts at the auto plant. Whatever the case is, as a creator I'm grateful for it. It's opening up some possibilities.

  • @all_names
    @all_names 14 วันที่ผ่านมา

    Hi, what checkpoints and models should I use for the (black and white) results in pencil or sketch format?

  • @Disco_Tek
    @Disco_Tek 7 หลายเดือนก่อน +1

    Nice video. I'll send you over a couple of things I did via Discord. I didn't use your exact workflow, but a similar one assembled from a couple of different deconstructed ones. Can we get a video on post-processing in ComfyUI, doing things like FaceDetailer and such there? I started rendering in Comfy, so I genuinely hate using A1111. Going to try the cross-fading of smaller batches like you mentioned now.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Yeah, sure, would love to see them. I am finding ways to accomplish the face fix in ComfyUI so you don't have to go to A1111, and will release it in V2 of my ControlNet Animation JSON workflow.

  • @YSakhAlex65
    @YSakhAlex65 7 หลายเดือนก่อน +2

    Thank you, bro! Everything works! Just one question: is it possible to somehow eliminate these seams between frames 50-51, 100-101, 150-151 and so on? They are very noticeable during transitions in these places (I have a weak graphics card, so I can only render this range). I made a glitch transition in Premiere Pro there so it wouldn't be so noticeable.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +2

      Yes, it's possible: just render a few extra frames for cross fading / overlapping.
      For example:
      Batch 1: frames 1-50
      Batch 2: frames 45-100
      From frames 45-50, fade in the opacity of Batch 2.
      5 extra frames are rendered for cross fading, so it becomes a smooth, unnoticeable transition.
      Total output images: 105 (100 source frames and 5 overlapping extra frames).
      I've used this overlapping-frame technique here: th-cam.com/users/shortsErXBU5WD0ZY?feature=share
      You can even try 10 frames for even more smoothness.
      Hope your query is resolved!
      Good luck, keep creating 😊

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      cdn.discordapp.com/attachments/1172537908633276487/1172607839307567134/image.png?ex=6560ef26&is=654e7a26&hm=77b612da88e7ccfc119e6f1c12e8f8dfab3148763b4766877919006405511352&
      Like this
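A minimal sketch of the overlap cross-fade described in the reply above, assuming each batch was saved as a numbered PNG sequence; the folder and file names here are hypothetical, and Pillow is used for the blend:

```python
# Sketch: stitch Batch 1 (frames 1-50) and Batch 2 (frames 45-100),
# fading Batch 2 in over the shared frames 45-50 as described above.
from pathlib import Path
from PIL import Image

BATCH1 = Path("batch1")            # hypothetical folder: 0001.png ... 0050.png
BATCH2 = Path("batch2")            # hypothetical folder: 0045.png ... 0100.png
OUT = Path("stitched"); OUT.mkdir(exist_ok=True)
OVERLAP = range(45, 51)            # the shared frames used for the cross fade

for i in range(1, 101):
    name = f"{i:04d}.png"
    if i < OVERLAP.start:                       # pure Batch 1
        img = Image.open(BATCH1 / name)
    elif i in OVERLAP:                          # blend: Batch 2 fades in
        a = Image.open(BATCH1 / name).convert("RGB")
        b = Image.open(BATCH2 / name).convert("RGB")
        alpha = (i - OVERLAP.start) / (len(OVERLAP) - 1)   # 0.0 -> 1.0
        img = Image.blend(a, b, alpha)
    else:                                       # pure Batch 2
        img = Image.open(BATCH2 / name)
    img.save(OUT / name)
```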

  • @brook45-gg3qk
    @brook45-gg3qk 7 หลายเดือนก่อน

    top

  • @AnjarMoslem
    @AnjarMoslem หลายเดือนก่อน

    This is advanced! Can I download everything from the links in your description and just follow your video?

  • @MisterCozyMelodies
    @MisterCozyMelodies หลายเดือนก่อน

    nice tutorial

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      Thanks

  • @alexmehler6765
    @alexmehler6765 8 หลายเดือนก่อน +2

    Very nice tut, thanx for the config files. Can you only make dancing anime girls? I want dancing skeletons and zombies instead.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +2

      You can surely use only the OpenPose ControlNet pass and enter a "skeleton" prompt, and it should give a skeleton dancing.
      And XD, dancing girls are just for testing; the main application goes beyond just "dancing". Next time I will keep that in mind.

  • @sophiebramley7594
    @sophiebramley7594 หลายเดือนก่อน

    At 2:33 I thought my computer was having a heart attack, you little prankster!

  • @edsonjr-dev
    @edsonjr-dev 7 หลายเดือนก่อน +2

    Do you intend to create more tutorials for this channel? My like and watch time, 100% 💜💜💜

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Yes I will, Version 2 of this workflow, which gives more detail, is in progress.

  • @user-iy2io7ko8u
    @user-iy2io7ko8u 7 หลายเดือนก่อน

    The HED and OpenPose passes can be used together to accurately capture movements. This is great, but can it also be used this way with SD?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      It works on SD.

  • @danielo9827
    @danielo9827 7 หลายเดือนก่อน +2

    Very informative, thank you.
    I have both a question, and a suggestion.
    Question: for some reason, backgrounds come out garbled or messed up sometimes. Any idea what could be the solution? I've been learning ComfyUI and to animate for a couple of weeks, and haven't been able to get around this.
    Suggestion: Have you tried using Zoe Depth in place of Openpose? Or rather, use both together?
    During my many hours of learning and testing, I've found that Zoe Depth has insane detail, even better than OpenPose at times. Including hand/arm positioning.
    Thanks again!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      In longer batch ranges, above 20-40, the minute details get blended between the frames, hence the blurred or smudged background. The solution is to use a shorter batch range and stitch the batches in post; for a smoother transition, render some extra frames in every batch and overlap/fade them out while sequencing.
      Some models support a longer batch range and some don't; I've noticed realistic and semi-realistic models work well on longer batches.
      The Context Length in the AnimateDiff loader also sometimes causes the issue; try the values 16, 24, 32.
      One more thing I've observed: prompt keywords and prompt weights can cause this blurred background. A single keyword can ruin the whole picture, so I try swapping the prompt word for a synonym, adjust negative prompts, and totally avoid prompt weights, and it gets fixed.
      This technology is new; the best way to overcome bugs is to experiment more and more and learn from it.
      As for Zoe Depth, I agree with you. I've also been testing with depth in ControlNet, and it turns out to be an extra helper in shaping the composition. So yeah, it's good.

    • @danielo9827
      @danielo9827 7 หลายเดือนก่อน

      @@jerrydavos Thanks for the reply! I'm going to share a couple of my findings.
      I also noticed that the prompt can have negative effects on the final image. For example, sometimes the image would be over-saturated in red, or have too much black, and so on. Even though these colors were supposed to apply to very specific things (red hair, black pants, for example), the colors would often "bleed" outside of their intended target.
      I have found a potential solution to this problem. An extension called "ComfyUI Cutoff". According to the creator, you can specify parts of your prompt and "cut them off" from the rest of the prompt. (I'm a terrible explainer). Basically: it would ensure that "black" only applies to "pants". "Red" would only apply to "hair", and so on. In my testing, it seemed to help to some degree, though more testing is needed.
      I have managed to make a near-perfect smooth video, converting a real person into cgi/3d render. From what I've seen, if you want the exact same person/character/outfit (but still influence some things), Zoe Depth / Lineart work best, with OpenPose as a third option. For some reason, Zoe Depth is much more consistent than OpenPose for minute details like arm/hand positioning. OpenPose has a tendency to get a little... confused as to where arms/hands are (front, or behind subject). These will still allow you to change small details like clothing/hair/skin/eye color.
      Tile works great, but only if you want an exact replica of the original subject. Otherwise it can be a real fight to keep it from "intruding" in the final image.
      I have started to experiment with replacing a character completely, and I did manage to succeed with one specific character (more testing to be done), and I'd even go as far as saying it came out really damn good. I managed to keep the background under control.
      Again, thank you very much for the reply, and the tips! I had no idea you could raise AnimateDiff's context... For some reason I thought that 16 was the maximum.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Yeah, I agree with you, Zoe Depth can do a lot compared to OpenPose. In my next project I will try to implement the depth pass as a major support.
      As for the ComfyUI Cutoff node, in theory it should have worked but didn't (I compared it with my old render and still saw no change) with the AnimateDiff loader, or rather with the TemporalDiff model, as the author said:
      "Current limitations are that the labelling for my dataset was a bit off, so it has slightly reduced ability to interpret the prompt, i'll be releasing a new version that fixes that soon."
      Maybe the AnimateDiff loader model is the issue, so it has something to do with this color "bleed". I will test on other motion modules as well.
      If you don't mind, your rendered videos sound interesting; can you post a link to them or message them on Discord (jerrydavos)? I am very curious to see what you have achieved so far.

    • @danielo9827
      @danielo9827 7 หลายเดือนก่อน

      Hey! I totally missed the notification for this message.
      I'd be happy to share some of the videos I have generated so far, if you still want them.
      I have videos from when I first started up to now.
      And I agree with the Cutoff node. It kind of worked but kind of didn't... I've experimented with a lot more things since then, and have managed to get some good consistency on some clips but of course, still trying to overcome certain challenges.
      I've added you on Discord. From aelemar

    • @unityofiranian1296
      @unityofiranian1296 6 หลายเดือนก่อน

      ​@@danielo9827 I have read your tips bro
      Can I see your videos too?!
      And see your workflow?!
      Unfortunately I don't have discord account

  • @kenrock2
    @kenrock2 6 หลายเดือนก่อน

    that kid in the end credits is so cute

  • @Shs233
    @Shs233 7 หลายเดือนก่อน +2

    I can't get the same result as you. The ControlNet poses are okay, but I get a weird and horrible background. Is there any solution for this kind of problem?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +2

      Lower the batch range to around 20; lower batch ranges have the highest detail, while in longer batches the details get blended and the background becomes smudged.
      Also try changing the model; some support longer batches and some do not.
      Epicrealism_naturalSinRC1VAE, Aniverse and AniMerge are best for realistic rendering and support longer batches, and Meinamix for stylized.
      You can check the Bug List note inside the JSON file to see if it matches your problem.

  • @freakyninjaman3
    @freakyninjaman3 6 หลายเดือนก่อน

    In the dancing videos, the outfits change throughout. But, at the end, the lady in the gold dress does not change very much at all. Is there a way or best practice to keep the character looking the same through the whole animation?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      The greater the screen coverage, the less the change... You can try the v3_mm_sd motion model for less change... Getting good consistency is still experimental.

  • @user-vl8jv3zr7t
    @user-vl8jv3zr7t 7 หลายเดือนก่อน +1

    How do I use a LoRA along with a checkpoint model? I've tried using a LoRA loader after the Load Checkpoint node, but any LoRA gives me horrible results in vid2vid in ComfyUI.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Some LoRAs are not compatible with AnimateDiff; you have to experiment and test which ones work.

  • @kamilkolmasiak9878
    @kamilkolmasiak9878 7 หลายเดือนก่อน +1

    I wonder why you did not put the preprocessor nodes in the main JSON and instead did it in a separate JSON? Is there any performance difference?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Yes, for low-end graphics cards memory is a major issue; it kept giving out-of-memory errors. Plus it makes testing a whole lot faster, as ControlNet doesn't take the extra 2-5 seconds every time to generate the pass. So yeah, performance gets better.

  • @jayrodathome
    @jayrodathome 6 หลายเดือนก่อน

    Can you do this with any video file? If I was to say download a video of someone dancing from TH-cam and then use that as my starting point would that work?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      You just have to follow the same procedure and it will work eventually.

  • @TeslaElonSpaceXFan
    @TeslaElonSpaceXFan 7 หลายเดือนก่อน +1

    ❤❤

  • @user-ws1gn1du8t
    @user-ws1gn1du8t 7 หลายเดือนก่อน +1

    I want to know if Stable Diffusion (A1111) can also produce a render like this? What is unique about ComfyUI?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +3

      Yes, A1111 also has AnimateDiff, but it takes a lot of time to set it up again and again, and sometimes AnimateDiff doesn't work with ControlNets. It's still in development for A1111; we have to wait for a proper stable version.
      In ComfyUI you just have to drag and drop the workflow, no need to set it up again and again; that's what is unique about Comfy.

  • @marcobelletz4734
    @marcobelletz4734 7 หลายเดือนก่อน +1

    Great tutorial. Btw, there is something I don't understand about batch results.
    If I make a preview of a 20-frame batch, the result is a specific set of images. Without changing anything in the settings, with the same seed, if I extend the batch to the full duration of the frames, the result is completely different (and definitely less interesting). If I render in batches of 20 frames, skipping 20 frames each time, the result is visually better, but every 20 frames the images change, so it's useless.
    Do you know how to fix this?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Suppose:
      Batch 1: frames 1-50
      Batch 2: frames 45-100
      From frames 45-50, fade in the opacity of Batch 2.
      5 extra frames are rendered for cross fading, so it becomes a smooth, unnoticeable transition.
      Total output images: 105 (100 source frames and 5 overlapping extra frames).
      I've used this overlapping-frame technique here: th-cam.com/users/shortsErXBU5WD0ZY?feature=share
      In your case, 20 frames gives the best result, so you can render 2-3 extra frames and cross fade them in post. At least it will be less noticeable.
      This is something that will be improved in the future of AnimateDiff.

    • @marcobelletz4734
      @marcobelletz4734 7 หลายเดือนก่อน

      @@jerrydavos Thank you for your answer, but I already tried the overlapping technique and the result is really messy: it changes every 17 frames, and sometimes the change is so drastic that the fading (or morphing with other AE techniques) is completely useless.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Use a LineArt pass with high weight in ControlNet along with OpenPose if it's a human video; otherwise use HED plus LineArt with 0.8-0.9 weight.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      The problem is, this AI animation technique is new and still experimental, so you have to try various combinations and steps to get your desired results.
      It will be developed further in the future.

    • @marcobelletz4734
      @marcobelletz4734 7 หลายเดือนก่อน

      @@jerrydavos yeah I know, it's always a matter of hundreds of tries

  • @altahookah
    @altahookah 7 หลายเดือนก่อน +2

    Hi! It doesn't show the "Load Images" node... there's just the "Load Image" for only one file... any idea? Thanks for the tutorial!!!

    • @altahookah
      @altahookah 7 หลายเดือนก่อน +1

      Just solved it by updating ComfyUI with "Update All" :D

  • @kacecarol
    @kacecarol 7 หลายเดือนก่อน

    7:00 " I have 3070 Ti laptop GPU" and this is the reason why you start to work on comfyui, guessed by someone who have exactly the same GPU😅

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      3070 Ti 8GB laptop was the best I could afford in 2022 😅

  • @donrikk8546
    @donrikk8546 7 หลายเดือนก่อน

    Would there be a way to implement this stick-figure reference pass as a realtime video recording, say for full-body tracking in VR? I have used programs that do this kind of thing, mimicking IMU trackers, and they do the whole base-skeleton thing, but this seems much more accurate. I don't know if it has to do with the black-with-gray-outline effect; I would think that would make tracking the skeleton much easier, since the algorithm knows what's moving and what isn't, and I don't believe the VR version does that. It would be fascinating, because currently full-body tracking can run you over 800 bucks, or at least 176 with 5 IMU trackers using SlimeVR trackers, so this would be a wonderful free alternative that might produce some pretty good results.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      I agree with you, it would be very useful for the masses without any mocap rig. There are still some issues with the "skeleton" OpenPose tracker: it does not handle body rotation well yet (which is the back view, which is the front view); it always renders the front view, which sometimes makes for weird results. This OpenPose technology should develop more in the future, coherently using passes like depth and HED to figure out an accurate pose. Then we won't need any mocap rig, and this skeleton can be rendered into anything you can imagine.

  • @icecoldmindset
    @icecoldmindset 7 หลายเดือนก่อน +1

    I am a beginner and not sure where to start. Do you have any videos you know of that I could watch, or any tips?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      First watch this video, it has all the basics:
      th-cam.com/video/z-AoELaJfn0/w-d-xo.htmlsi=kqpevdofHlNmyizm
      Then explore these; I learned a lot from these channels:
      1) www.youtube.com/@promptgeek
      2) www.youtube.com/@OlivioSarikas
      3) www.youtube.com/@sedetweiler
      4) www.youtube.com/@Pixovert

  • @Nibot2023
    @Nibot2023 4 หลายเดือนก่อน

    With this workflow, you cannot use XL versions? Is there a way to change the face to an animal, or does it have to conform to the pose of the face then?

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      1) Unfortunately, I can't test on SDXL as I don't have the hardware, and SDXL motion models still need to be trained properly.
      2) The faces can't be changed with this workflow. Here is something that might be able to do that: th-cam.com/video/4826j---2LU/w-d-xo.html&ab_channel=KoalaNation

  • @yasuka_isotaro
    @yasuka_isotaro 6 หลายเดือนก่อน

    Hey, question, beginner here. I got some decent results with ComfyUI and Stable Diffusion, but how do you change the background in ADetailer? I can't find a way to do so. Do you have a solution?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      ADetailer is for face detailing, you can't change the background with it. You can find workflows to change the background on this site: openart.ai

    • @yasuka_isotaro
      @yasuka_isotaro 6 หลายเดือนก่อน +1

      @@jerrydavos I found a way to do it: Extras' remove background, then merge and invert the mask in ADetailer, then mask erosion at 16. Still have to tweak it, but it seems possible. Have to work on it some more.

  • @user-oz5hj9wr4r
    @user-oz5hj9wr4r 6 หลายเดือนก่อน

    The transformation rate matches Gen Z's attention span

  • @ermexgameryt7298
    @ermexgameryt7298 6 หลายเดือนก่อน +1

    Hey man, I have a problem with the animations part at minute 6:00. When I start the test of 10 frames of animation, 2 ControlNet nodes become red and give me errors. The 2 nodes: control_net_name: 'control_v11p_sd15_softedge.pth' not in [] || and: control_net_name: 'control_v11p_sd15_softedge.pth' not in []. How can I fix this??

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      You need some basic beginner experience in Comfy in order to use this workflow easily.
      The error you describe means the ControlNet models are not present; you have to download them from here: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
      and place them in their proper directories (ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts).
      Also, please use my latest workflow from here: www.patreon.com/posts/update-v2-1-lcm-95056616
      It's much better, and I will be uploading a tutorial on how to use it soon.
      Hope your query is resolved!

    • @ermexgameryt7298
      @ermexgameryt7298 6 หลายเดือนก่อน

      @@jerrydavos Hi, I fixed it last night, thanks for your answer! Unfortunately I can't really use this because my notebook has an RTX 3060 with 6 GB VRAM... is there a way to use it with 6 GB? This is a disaster 🙈: when everything seems to be working, it goes "out of memory".
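For the missing ControlNet models mentioned in this thread, here is a minimal sketch of fetching one of them with the huggingface_hub library; the destination folder is an assumption, so point it at wherever your ControlNet loader actually reads models from (see the reply above for the path the author suggests):

```python
# Sketch: download a ControlNet 1.1 model and copy it into a ComfyUI models folder.
# Assumes `pip install huggingface_hub` has been run; DEST_DIR is an assumption.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

DEST_DIR = Path(r"ComfyUI/models/controlnet")   # adjust to your own install
DEST_DIR.mkdir(parents=True, exist_ok=True)

cached = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11p_sd15_softedge.pth",
)
shutil.copy(cached, DEST_DIR / "control_v11p_sd15_softedge.pth")
print("Copied model to", DEST_DIR)
```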

  • @user-iy2io7ko8u
    @user-iy2io7ko8u 7 หลายเดือนก่อน

    Could the creator provide an installation package for these plug-ins? There are many problems with configuring the environment and with installation.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      you have to explore tutorials on youtube

  • @Paranormal-yg4ph
    @Paranormal-yg4ph 6 หลายเดือนก่อน

    Thank you sincerely for making this video and for your hard work. I have a few questions.
    1. Is there a way to decrease the randomness? I prompted for something black to wear; the girl I used starts with a white top, ends in black lingerie, and changes color a few times in between. I am using the same settings as in this demo video.
    2. What input dimensions do you recommend to create a 1920x1080 video?
    3. Stable Diffusion: I can't select negative_1

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      1) You can try the V2 version of this workflow, which creates more stable animations: www.patreon.com/posts/update-animate-94523632
      As for the randomness, it's part of how the AI animation works in AnimateDiff; if it weren't morphing, the animation would become choppy. It has its cons.
      2) Try a lower resolution in the raw animation workflow file first, around 1K, then upscale it in the refiner workflow to around 1.5K for best results.

    • @Paranormal-yg4ph
      @Paranormal-yg4ph 6 หลายเดือนก่อน

      @@jerrydavos I get the memory error (1st JSON). My video is 1920x1080, 1 min long. What do you advise? Thanks

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Downscale it to around 1280x720 using the settings inside; longer than 20 seconds crashes the save node, so chop it into 3 x 20-second videos... or even 15 seconds if it still crashes.
      @@Paranormal-yg4ph

    • @Paranormal-yg4ph
      @Paranormal-yg4ph 6 หลายเดือนก่อน

      Thanks, that solved the problem. I'm now at step 2 but I am stuck at step 5, "Paste the Passes Directories in ControlNets (1 and 2) Loaders and set their models and weight (See Neon Purple note)".
      I don't see a purple note near it, and I can only input a directory. Which directories should I point to? With the 1st JSON file it generated these directories: Canny, Depth, Frames, HED, Line, Normal, OpenPose.
      Thanks again!
      @@jerrydavos

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      I feel you are new to Comfy; you need some more experience using it, as this workflow is fairly advanced stuff.
      For face closeups: use LineArt and SoftEdge.
      For half-body shots: use LineArt with SoftEdge or OpenPose.
      For full-body shots: use OpenPose with LineArt or SoftEdge.
      For architecture or landscape: use Depth with SoftEdge, LineArt or Canny.
      For flat logos or typography: use Normal with LineArt, SoftEdge.
      Check the OpenPose passes: if the markers are all over the place, jumping around, jittering, or missing important body markers, then avoid the OpenPose pass.
      Two ControlNet passes are enough to make a good render. You can experiment with any combination from the above.
      @@Paranormal-yg4ph
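The pass-selection advice in the reply above can be restated as a simple lookup; this is purely illustrative, with hypothetical names, and the fallback pairing is an assumption rather than something from the video:

```python
# Sketch: the shot-type -> ControlNet pass guidance from the reply above as a lookup table.
CONTROLNET_PASSES = {
    "face closeup":             ["LineArt", "SoftEdge"],
    "half-body shot":           ["LineArt", "SoftEdge or OpenPose"],
    "full-body shot":           ["OpenPose", "LineArt or SoftEdge"],
    "architecture / landscape": ["Depth", "SoftEdge, LineArt or Canny"],
    "flat logo / typography":   ["Normal", "LineArt, SoftEdge"],
}

def suggest_passes(shot_type: str) -> list[str]:
    """Return the two suggested passes for a shot type (fallback pairing is an assumption)."""
    return CONTROLNET_PASSES.get(shot_type.lower(), ["OpenPose", "SoftEdge"])

print(suggest_passes("Full-body shot"))   # ['OpenPose', 'LineArt or SoftEdge']
```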

  • @lynnachan257
    @lynnachan257 6 หลายเดือนก่อน

    Hi Jerry, could you tell me the URL for the Load Images From Directory node for ComfyUI?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      github.com/Kosinkadink/ComfyUI-VideoHelperSuite This is the one

  • @sisilet
    @sisilet 6 หลายเดือนก่อน

    Thanks for creating this tutorial. I'm not sure why I can't get it working. I use the same workflow file with the provided sample HED and OpenPose images, and I even use the same models used in the workflow. I get blank (black) images when I set the batch range to 10, and when I set the batch size to 1 I get colored noise where I can just barely see the shape of a human.

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Your model or LoRA is not compatible. Try the Epic Realism model or Mistoon.

    • @sisilet
      @sisilet 6 หลายเดือนก่อน

      No luck with these models either. I suspect it's because I'm using macOS with Apple Silicon.

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      ya maybe@@sisilet

  • @user-ul6cc4hs9s
    @user-ul6cc4hs9s 4 หลายเดือนก่อน

    Thank you. However, an error occurs in the KSampler while running, as shown in the video. Can you tell me the reason? ComfyUI: 'NoneType' object has no attribute 'shape'

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน +1

      A LoRA or some model might be missing... Please see this tutorial for installation: th-cam.com/video/qczh3caLZ8o/w-d-xo.html
      Also, do not use SDXL models or SDXL LoRAs in this workflow; it is not compatible with them.

    • @user-ul6cc4hs9s
      @user-ul6cc4hs9s 4 หลายเดือนก่อน

      @@jerrydavos I solved it thanks to Gower. Your skills are really great.

  • @AwakenedNarrator
    @AwakenedNarrator 7 หลายเดือนก่อน +1

    Why is the image coming out messy even though I followed the workflow you provided?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Sometimes models and LoRAs make it messy. Please watch this video for more info: th-cam.com/video/aysg2vFFO9g/w-d-xo.html

  • @Airbender131090
    @Airbender131090 6 หลายเดือนก่อน

    Help please! I use your workflow (or A1111), and when I use the SoftEdge ControlNet my results always come out reddish and washed out / blurry. WHY? Yours are great... ((

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Please check that your ControlNet models are properly set.
      If you are using ComfyUI inside A1111 (the Comfy extension), this workflow will not work. Use standalone Comfy.

  • @gibs2b
    @gibs2b 6 หลายเดือนก่อน

    Hello, problem here! When I load "3) AnimateDiff Refiner - LCM v2.1.json" I get an error: "When loading the graph, the following node types were not found:
    Evaluate Integers
    Nodes that have failed to load will show as red on the graph." I don't know where to get this node... help please!

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      It's efficiency-nodes-comfyui; it's no longer updated by the author, so it gives an error.
      The Evaluate node error can be solved by deleting the existing efficiency nodes folder inside the custom_nodes directory, manually copying v1.92 into the custom_nodes folder from the following link, and restarting Comfy as administrator to install the remaining dependencies: civitai.com/models/32342?modelVersionId=156415

    • @gibs2b
      @gibs2b 6 หลายเดือนก่อน

      @@jerrydavos Hi, thanks 👍 It worked and I moved on to the 4th step, but it then asks for 6 nodes that neither the Manager nor I (manually) can find: UltralyticsDetectorProvider, SAMLoader, ImpactSimpleDetectorSEGS_for_AD, SEGSPaste, SEGSDetailerForAnimateDiff, ToBasicPipe

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      github.com/ltdrdata/ComfyUI-Impact-Pack
      This is the GitHub repo for those nodes. "Run as admin" before launching Comfy to install all the remaining dependencies @@gibs2b

  • @ooiirraa
    @ooiirraa 7 หลายเดือนก่อน

    I have tried a lot of workflows, but the video always changes drastically every 2 seconds (every 16 frames). Why might that be the case?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      The motion models are trained on 2-3 seconds of reference video (16 to 32 frames), so for now they can only handle up to that. This should develop more in the future.

  • @g4priel208
    @g4priel208 หลายเดือนก่อน

    Wait, is this only possible with ComfyUI? Can Automatic1111 also do it? I just ask because I see very few A1111 tutorials for this type of workflow, and it's not even the same workflow.

    • @jerrydavos
      @jerrydavos  หลายเดือนก่อน

      Yes, you can do it in Automatic1111 also; at the time this video was released it was not possible. You'll just need a PC with plenty of VRAM to run longer videos on A1111, though.

  • @saffyk1
    @saffyk1 3 หลายเดือนก่อน

    is the ADetailer in A1111 better than the FaceDetailer in Comfyui?

    • @jerrydavos
      @jerrydavos  3 หลายเดือนก่อน +1

      1) A1111's ADetailer is simple, which is good for beginners, but it can be applied only once per image.
      2) FaceDetailer, on the other hand, needs a more experienced user, but it can be adjusted to our liking and applied multiple times.

  • @dragongaiden1992
    @dragongaiden1992 28 วันที่ผ่านมา

    Friend, I'm lost, there are many nodes that I can't install from "missing nodes". What version of ComfyUI should I use???

    • @jerrydavos
      @jerrydavos  28 วันที่ผ่านมา +1

      No need to worry... just use the latest version, with all the latest custom nodes.
      Also use the version 4 workflows from the Gdrive folder for best experience

    • @dragongaiden1992
      @dragongaiden1992 28 วันที่ผ่านมา

      @@jerrydavos Thank you very much, friend, I will try it again. Your tutorials are some of the most complex that I have seen on TH-cam. My question is: is your method useful for changing the style of the source video? Or can I also reinterpret the same pose from the source video but with a different character, like with LoRAs?

  • @Infiniteinsights0
    @Infiniteinsights0 2 หลายเดือนก่อน

    After moving to Automatic1111, you only showed your ADetailer settings when adjusting the face. Your denoising strength was at 0.75. I'm assuming you had ControlNet enabled. Can I know exactly which ControlNet you used?

    • @jerrydavos
      @jerrydavos  2 หลายเดือนก่อน +1

      The img2img denoise value is skipped (disabled)... it's not affecting the image; only ADetailer is acting, and only on the face.
      I've checked the "Skip img2img" box.

    • @Infiniteinsights0
      @Infiniteinsights0 2 หลายเดือนก่อน

      @@jerrydavos okay thanks

  • @decambra89
    @decambra89 7 หลายเดือนก่อน

    07:08 How do you know, what's the calculation, how many frames you can generate per batch according to the width, height and your GPU?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      It's worked out by observation:
      Just render on the first try and see whether it shows an out-of-memory error or not. Increase the resolution gradually until you see the out-of-memory error, then note the "Requested" memory. If it's in GBs you need to lower the resolution; if it's in MBs you are close to the maximum your PC can handle.
      When the requested memory is 300-500 MB, you only need a small decrease in resolution, about 10%, and then you have your max limit.
      The same goes for the batch range.
      Maximum threshold = resolution + batch range.
      When you go above this, you get the out-of-memory error.
      Hope this makes sense.
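If you'd rather read numbers than eyeball the error message, a small sketch that prints free vs. total VRAM with PyTorch between test renders can support the trial-and-error described above; this is just an illustration, not part of the workflow, and it assumes PyTorch with CUDA is installed:

```python
# Sketch: check how much VRAM is free before raising the resolution or batch range.
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()          # values in bytes
    print(f"Free VRAM : {free  / 1024**2:8.0f} MB")
    print(f"Total VRAM: {total / 1024**2:8.0f} MB")
    print(f"Used      : {(total - free) / 1024**2:8.0f} MB")
else:
    print("No CUDA GPU visible to PyTorch.")
```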

  • @benjaminbernard7709
    @benjaminbernard7709 7 หลายเดือนก่อน

    Is it possible to generate a matte to separate the FG and BG later?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Yes It's possible with this extension
      github.com/biegert/ComfyUI-CLIPSeg

  • @mariomaffiol
    @mariomaffiol 7 หลายเดือนก่อน +1

    What are the minimum requirements for a GPU, Ram and processor to run this version?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      RTX card with 8 GB Vram

  • @kssiu428
    @kssiu428 7 หลายเดือนก่อน +1

    I get a KSampler error whenever I use ControlNet, what could the problem be?
    Error occurred when executing KSampler:
    'ControlNetAdvanced' object has no attribute 'model_sampling_current'

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Maybe you are connecting the wrong inputs and outputs; also check that your ControlNet models are properly placed in the right directory, or they might be corrupted.
      Otherwise you can send your workflow on Discord (jerrydavos) or drop a link to the workflow in the comments.
      I'll try to help you.

    • @kssiu428
      @kssiu428 7 หลายเดือนก่อน

      @Ai_Davos Thanks for the quick response, man! I just reused your workflow and modified it a bit. All I did was change to the DreamShaper model, change the image size and the 2 paths for the ControlNet folders. You may take a look: drive.google.com/file/d/17T9Ntjqv8KbJ0eMUyXPcRdZTxyWnXRvj/view?usp=drive_link

    • @kssiu428
      @kssiu428 7 หลายเดือนก่อน +1

      Never mind, I fixed it by reinstalling everything. Thanks!

  • @MrJi666
    @MrJi666 7 หลายเดือนก่อน

    I downloaded these three workflows and used the second workflow to generate the OpenPose and HED sets of pictures for my 13-second video. Everything was normal. However, when I used the third workflow and, as the video says, tested 10 pictures, the output pictures were all noise. The models and parameters were all the same as in the video. Could you please tell me what the problem is and how I should solve it? Thank you in advance, looking forward to your reply.

    • @MrJi666
      @MrJi666 7 หลายเดือนก่อน

      Then I tried to increase the number of sampler steps, from 22 to 30-50, but it had no effect. The final images were all noise images. I thought it might be a compatibility issue, and then adjusted the large model many times to no avail. Anyway, I checked. There were a lot of parameters, but I didn’t find a solution to the problem in the end. I was a little bit broken, haha

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      1) Models and LoRAs can produce these noisy artifacts; try a different checkpoint model like Epic Realism and remove any LoRAs.
      2) Try simple prompts with no weights.
      3) Try removing all negative embeddings from the negative prompt box, to see if it changes anything.
      4) SDXL models won't work with this workflow; it needs to be converted to an SDXL workflow first.
      You can watch this for more info: th-cam.com/video/aysg2vFFO9g/w-d-xo.html

    • @MrJi666
      @MrJi666 7 หลายเดือนก่อน

      Thank you very much for your reply. You are very helpful in listing so many possible situations. I'll try, respect~ @@jerrydavos

    • @MrJi666
      @MrJi666 7 หลายเดือนก่อน

      I've tried all of the above, but this workflow doesn't seem to work for me. Finally, I turned off the controlnet node, and the background was still full of noise.@@jerrydavos

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Please save your workflow as a JSON file, put it together with the passes you exported for the ControlNets in a zip on Google Drive, and forward me the download link on Discord (username: jerrydavos). I'll test it and see what the problem is @@MrJi666

  • @aipamagica1
    @aipamagica1 6 หลายเดือนก่อน

    Thank you for the great tutorial. Unfortunately I get this error when executing, and I'm not sure how to get around it. : Error occurred when executing KSamplerAdvanced:
    Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
    If you have any ideas, that would be great. TIA

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Try setting the CUDA device to the GPU with the argument "--cuda-device 0" and try again.
      You can also use this updated workflow for better results:
      www.patreon.com/posts/update-v2-1-lcm-95056616

    • @aipamagica1
      @aipamagica1 6 หลายเดือนก่อน

      Thank you @@jerrydavos

  • @vitaliyklitchko6664
    @vitaliyklitchko6664 7 หลายเดือนก่อน

    Well, those Stable Diffusion video AIs are still very, umm... unstable

  • @gvg0105
    @gvg0105 7 หลายเดือนก่อน +1

    After reinstalling the controlnet aux node many times, I still get a "DWPreprocessor" not found error, do you know why?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Maybe it's a permission error inside ComfyUI, so it can't download properly.
      Manually delete the comfyui_controlnet_aux folder from custom_nodes in the ComfyUI directory, if present.
      Run cmd from inside custom_nodes (go to the address bar in Explorer, type cmd and press Enter), then:
      git clone github.com/Fannovel16/comfyui_controlnet_aux.git
      Try running cmd as administrator if it still can't download properly.
      Hopefully that fixes it; also download the rest of the ControlNet models from here: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

    • @gvg0105
      @gvg0105 7 หลายเดือนก่อน

      Thanks for the reply. It still doesn't work after clearing the folder and reinstalling it. I replaced DWPreprocessor with the "OpenPose recognition" node, pressed the Queue button, and it gives me "Error occurred when executing OpenposePreprocessor:
      No module named 'matplotlib'." The ControlNet models are OK, I can see them in the Load ControlNet node. @@jerrydavos

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      ​ @gvg0105
      Option 1)
      I found this link: github.com/Fannovel16/comfy_controlnet_preprocessors
      which uses matplotlib; try to git clone this one too and restart ComfyUI to download the requirements.
      Option 2)
      Go to the ComfyUI folder where the venv folder is located, run cmd there and enter these commands:
      - venv\Scripts\activate.bat
      - pip install matplotlib
      You can "pip install [missing module]" like this from cmd.
      Option 3)
      If it's still not working, download these folders and paste them inside ComfyUI\venv\Lib\site-packages:
      drive.google.com/file/d/1U1gUwHb_V0UkgBupPoO9CuixDikUZ_26/view?usp=sharing
      Back up the venv folder just in case.
      Option 4)
      If it still doesn't work, rename or move the venv folder from the ComfyUI directory and start ComfyUI; it should make a fresh venv folder and download all the dependencies again (it's going to use GBs of internet data).
      It should be resolved now; otherwise please send the full error log.

    • @gvg0105
      @gvg0105 7 หลายเดือนก่อน

      I found there is no folder named "venv" in ComfyUI_windows_portable\ComfyUI, so I created one using Python and tried options 1, 2, 3 and 4. Really confused @@jerrydavos

    • @gvg0105
      @gvg0105 7 หลายเดือนก่อน

      [comfyui_controlnet_aux] | INFO -> Some nodes failed to load:
      Failed to import module dwpose because ImportError: cannot import name '_c_internal_utils' from partially initialized module 'matplotlib' (most likely due to a circular import) (F:\ai\ComfyUI_windows_portable\python_embeded\Lib\site-packages\matplotlib\__init__.py)@@jerrydavos

  • @patagonia4kvideodrone91
    @patagonia4kvideodrone91 7 หลายเดือนก่อน

    You would have to make a zip with a ComfyUI already configured with the nodes. It is really a problem when you cannot add the nodes from the common list, mostly because they may be incompatible with some other node that you do have in the list.
    I'm already past installing nodes and having ComfyUI die on me.
    I currently use 2 different versions, one that I update all the time and another that is a little older, from when ControlNet was working well (things that do not work in the new one). To make matters worse, both versions take up more and more space. At first glance I think that with the old version, and the ControlNet that did work for me, I may be able to do something similar. I know it is a tremendous amount of space, but I ask: is there any chance of taking your ComfyUI configured like this, making a copy without the checkpoints/LoRAs so it doesn't balloon, and uploading it to the internet with the nodes? Experience tells me that each ComfyUI is a world of its own, and they can have more than one incompatible thing. Personally I am doing very fun things in ComfyUI, but I have no way to upload the complete ComfyUIs so that, if I make a video, it works for everyone; I have even reached the maximum limit of nodes! Anyway, it would be greatly appreciated if there is a possibility of uploading it somewhere. Just my ControlNet folder alone weighs 22 GB :S, and I'm also running out of free disk space, but making room and downloading something slowly, even if it is very large, is something that could be done. They are ambitious projects, but it should even be possible to use a single outfit and character without it varying at random, although for this it would require more nodes.
    If at some point someone from ComfyUI answers me about how to add more, I will make myself dance to that, but with a Klingon face xD
    Greetings! If you think you have the chance to make a summarized backup, it would be greatly appreciated!

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      When I encounter something like this, I rename the original custom_nodes folder, make an empty one, run ComfyUI and install the missing nodes with the Manager, restart ComfyUI 1-2 times to download the dependencies and check that all are working fine, then I copy the new custom nodes into the old folder, restore its name, and don't overwrite anything.
      Yes, I understand your problem, I also had the space issue with different versions. You can try Link Shell Extension: I keep all the models in one folder and put a symbolic link in every version of A1111 and Comfy. It's a trick to have multiple copies of the same files without their actual file size (a type of shortcut link with the original properties), and it has helped me a lot in everyday work on my PC too.
      I have 2 x A1111 and 1 x ComfyUI, and the models occupy space only once; only the UI files take extra space.
      Hope this helps!
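The shared-models trick above can also be done without third-party tools; this is a minimal sketch of the same idea using a directory symlink from Python, with example paths that are placeholders, not real installs (on Windows, creating symlinks may require Developer Mode or an elevated prompt, which is why Link Shell Extension is the easier GUI route):

```python
# Sketch: share one checkpoint folder across several UIs via directory symlinks.
import os

SHARED_MODELS = r"D:\shared\stable-diffusion\checkpoints"   # the single real copy (example path)
LINK_TARGETS = [
    r"D:\ComfyUI\models\checkpoints",                        # example paths; adjust to your installs
    r"D:\A1111\models\Stable-diffusion",
]

for link in LINK_TARGETS:
    if not os.path.exists(link):
        # target_is_directory=True is required on Windows for directory links
        os.symlink(SHARED_MODELS, link, target_is_directory=True)
        print("Linked", link, "->", SHARED_MODELS)
    else:
        print("Skipping existing path:", link)
```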

  • @user-ng9pb6ce3c
    @user-ng9pb6ce3c 6 หลายเดือนก่อน +1

    ValueError: only one element tensors can be converted to Python scalars
    Can someone help me understand this error ?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      You just have to delete the Dynamic Thresholding node from the workflow and it will run fine, without any errors. It got bugged after the recent update.
      Link for photo reference: imgur.com/a/9bBNPDo

  • @thesolitaryowl
    @thesolitaryowl 7 หลายเดือนก่อน

    Anyone else having an issue where they run the images through Automatic1111 with ADetailer but none of the images get saved anywhere, even though an output directory is specified??

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      If the output directory is not working for you, just don't write anything in it; copy and paste the face-fixed images from the A1111 img2img output folder.
      This same problem also happened to me when I didn't create the output folder in Explorer first. I thought A1111 would make a new directory itself, but it didn't.

  • @yuxx1006
    @yuxx1006 6 หลายเดือนก่อน

    What are your GPU / computer specs to run all of this?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      8GB RTX Laptop GPU
      32 GB Ram

  • @matas320
    @matas320 6 หลายเดือนก่อน

    Error occurred when executing KSampler: only one element tensors can be converted to Python scalars. how to fix this error ?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      I've never encountered this error, but it seems like a Python error; you can try these:
      1) Try deleting the "FreeU" node from the model pipeline; it won't affect the workflow.
      2) Make sure you are not using SDXL models inside this workflow.
      3) Reinstall matplotlib through pip commands (search Google).
      4) The Python version should be 3.10 or above.
      If the above didn't solve the problem, ask for help about this Python error on the Stable Diffusion Reddit page: www.reddit.com/r/StableDiffusion/

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน +1

      You just have to delete the Dynamic Thresholding node from the workflow and it will run fine, without any errors. It got bugged after the recent update
      Link for photo reference: imgur.com/a/9bBNPDo

  • @wingknightgaming1242
    @wingknightgaming1242 2 หลายเดือนก่อน

    I can't find "HED Lines", please help; there's only "HED Preprocessor Provider".

    • @jerrydavos
      @jerrydavos  2 หลายเดือนก่อน

      Search for "Hed soft-edge lines" or Hed preprocessor ..... the node is renamed in comfy
      Also install controlnet nodes

  • @aminurrahman4150
    @aminurrahman4150 7 หลายเดือนก่อน

    The KSampler is not showing any preview like yours. What is the solution?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      github.com/pythongosssss/ComfyUI-Custom-Scripts
      I think this was the one... it's been a long time and I forget which, but the preview is enabled by one of the custom nodes from the description.

  • @user-yb5es8qm3k
    @user-yb5es8qm3k 7 หลายเดือนก่อน

    I still don't understand how the frame number is set or how to use the skip-frames setting. Could you make a video about this?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      It's like a pizza: you can't eat it all in one go, so you slice it into pieces and eat them one by one.
      Similarly, the video is sliced into batches so the PC can render them one at a time.
      After all the batches are done, they are glued together at the end.

    • @user-yb5es8qm3k
      @user-yb5es8qm3k 7 หลายเดือนก่อน

      Thank you for your patience. I tried setting 0,50 yesterday; 50,100; 100,150 and so on up to the last number, but the result has a lot more pictures than the original frame count. Is this right? @@jerrydavos

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      You may have accidentally re-rendered the same frames. In the skip frames, just add "+50" before every batch and it will do the math automatically @@user-yb5es8qm3k

    • @user-yb5es8qm3k
      @user-yb5es8qm3k 7 หลายเดือนก่อน

      Author, what were you doing in AE at 14 minutes 20 seconds? I don't really understand.
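A small sketch of the batch arithmetic discussed in this thread: slicing the full frame count into render batches, each overlapping the previous one by a few frames so they can be cross-faded later. The function name and the 50/5 defaults are illustrative choices, not values prescribed by the video:

```python
# Sketch: split a frame sequence into overlapping render batches for later cross-fading.
def batch_ranges(total_frames: int, batch_size: int = 50, overlap: int = 5):
    """Yield (start, end) frame ranges, 1-indexed inclusive, each overlapping the previous one."""
    start = 1
    while start <= total_frames:
        end = min(start + batch_size - 1, total_frames)
        yield (start, end)
        if end == total_frames:
            break
        start = end - overlap + 1   # back up a few frames so consecutive batches overlap

for batch in batch_ranges(150):
    print(batch)   # (1, 50), (46, 95), (91, 140), (136, 150)
```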

  • @aminurrahman4150
    @aminurrahman4150 7 หลายเดือนก่อน

    Is there any way to use it without a GPU but faster? Like using a virtual machine or something?

    • @jerrydavos
      @jerrydavos  6 หลายเดือนก่อน

      Comfy also runs on a CPU, but you can't virtually install an RTX card, no.

    • @aminurrahman4150
      @aminurrahman4150 6 หลายเดือนก่อน

      @@jerrydavos ok. Thank you so much for your reply

  • @motion4ik
    @motion4ik 7 หลายเดือนก่อน

    Why is it that when I output 10 frames I get one set of pictures, and when I output 50 frames the result turns out different?
    How can I check the picture before a full run?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      10-20 frames has the maximum detail; above that (like 50) the background details get smudged. AnimateDiff has a limit on how many frames it can render in one go, so I use a trick: render 20 frames at a time and stitch them in post with an overlapping cross fade.
      I have given a breakdown of this in my next video: th-cam.com/video/aysg2vFFO9g/w-d-xo.html

    • @motion4ik
      @motion4ik 7 หลายเดือนก่อน

      @@jerrydavos thx

    • @motion4ik
      @motion4ik 7 หลายเดือนก่อน

      @@jerrydavos I have one more question: when I generate an image into text 2 image, the picture turns out rich and juicy. And when there are several frames, it turns out somewhat dull. What affects color and how can it be changed?

  • @aarvndh5419
    @aarvndh5419 7 หลายเดือนก่อน

    How do I do this in Stable Diffusion? Because I am using my phone with Google Colab, so the screen is too small to use ComfyUI.

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน +1

      Sorry, It won't work on google colab.

    • @aarvndh5419
      @aarvndh5419 7 หลายเดือนก่อน

      @@jerrydavos I mean, can you make a video about AnimateDiff img2img in Automatic1111 Stable Diffusion?

  • @Eskes_Visualization
    @Eskes_Visualization 4 หลายเดือนก่อน

    Please drop the link you used to fix the face... the Stable Diffusion link.

    • @jerrydavos
      @jerrydavos  4 หลายเดือนก่อน

      It's Automatic1111, to run Stable Diffusion, and ADetailer is an A1111 plugin to fix the face.

  • @aluisioimaginauai
    @aluisioimaginauai 7 หลายเดือนก่อน

    Do you have a google colab link for it?

    • @jerrydavos
      @jerrydavos  7 หลายเดือนก่อน

      Sorry, I have not used Google Colab; in theory the JSON file should work inside the ComfyUI Google Colab version, if that is possible.

    • @aluisioimaginauai
      @aluisioimaginauai 7 หลายเดือนก่อน +1

      it works!!

  • @abinabraham1520
    @abinabraham1520 5 หลายเดือนก่อน

    Can anyone help me with the error: "Error occurred when executing ControlNetLoaderAdvanced:
    PytorchStreamReader failed reading zip archive: failed finding central directory"

    • @jerrydavos
      @jerrydavos  5 หลายเดือนก่อน

      Copy the path of the controlnet images folder and paste it in the CN directory node... it should go away.