Koala Nation
LivePortrait - Animate faces and create avatars with ComfyUI #comfyui #stablediffusion
Animate faces using LivePortrait in ComfyUI. Create new compositions from your images or videos, using reference videos to drive the mouth and eye expressions. Create talking avatars with these custom nodes and this ComfyUI workflow. No AnimateDiff needed this time!
Unleash the power of Stable Diffusion with ComfyUI and AnimateDiff.
💻 Kijai LivePortrait: tinyurl.com/yc788zby
💻 ShadowCZ007 LivePortrait: tinyurl.com/y6bax7uc
📡 Official implementation: github.com/KwaiVGI/LivePortrait
🕸️ Discord: discord.gg/UaFQkfYgAQ
Git for windows: tinyurl.com/2jz3rxp5
#animatediff #comfyui #stablediffusion
============================================================
💪 Support this channel with a Super Thanks or a ko-fi! ko-fi.com/koalanation
☕ Amazing ComfyUI workflows: tinyurl.com/y9v2776r
🚨 Use Runpod and access powerful GPUs for the best ComfyUI experience at a fraction of the price. tinyurl.com/58x2bpp5 🤗
☁️ Starting in ComfyUI? Run it on the cloud without installation, very easy! ☁️
👉 RunDiffusion: tinyurl.com/ypp84xjp 👉15% off first month with code 'koala15'
👉 ThinkDiffusion: tinyurl.com/4nh2yyen
🤑🤑🤑 FREE! Check my runnable workflows in OpenArt.ai: tinyurl.com/3j4z6xwf
The best AI animation tutorial!
Learn how to build a ComfyUI animation workflow
and use ControlNet with Stable Diffusion prompts.
ComfyUI: the best open-source animation software for PC.
============================================================
CREDITS
============================================================
Disloyal man walking with his girlfriend and looking amazed at another seductive girl by Antonio Guillem:
tinyurl.com/mr3x4ezp
🎵Music
Song: Bike Box, Artist: Kia, Music by: CreatorMix.com
It's Our Time, Music by CreatorMix.com
✂️ Edited with Canva and ClipChamp. I record the material in PowerPoint.
============================================================
© 2024 Koala Nation
#comfyui #animatediff #stablediffusion
Views: 432

Videos

Audioreactive AnimateDiff video clips with ComfyUI (AI Music Videos) #comfyui
Views: 1.6K · 14 days ago
Create unbelievable visuals for music videos using this audio-reactive workflow for ComfyUI and AnimateDiff. Using the audio nodes from SaltAI, masks can react to different audio frequencies, following the rhythm of the music. In combination with IP Adapter and AnimateDiff, exciting video clips and compositions can be made. Unleash the power of Stable Diffusion with ComfyUI and AnimateDiff. You c...
Inpainting AnimateDiff DINO (ComfyUI animation workflow) #stablediffusion #comfyui
Views: 2K · 1 month ago
Amazing control of your animations using masks with the Magic trio: AnimateDiff, IP Adapter and ControlNet. Explore the use of masks to texturize objects and create fantastic videos. 🔥 Complete workflow with extras (package) tinyurl.com/46pj8tpu Support the channel and get (beta) access to the Discord with demo runnable workflows! 💻 AnimateLCM: tinyurl.com/y43bjhee 🕸️ Discord: discord...
Easy Image to Video with AnimateDiff (in ComfyUI) #stablediffusion #comfyui #animatediff
Views: 14K · 2 months ago
Easily add some life to pictures and images with this tutorial. The Magic trio: AnimateDiff, IP Adapter and ControlNet. Explore the use of CN Tile and Sparse Control Scribble, using AnimateLCM for fast generation. 💻 AnimateLCM: tinyurl.com/y43bjhee 🕸️ Discord: discord.gg/UaFQkfYgAQ #animatediff #comfyui #stablediffusion 💪 Support this channel with a Super Thanks or a ko-fi! ko-fi.com/koalanatio...
ComfyUI AnimateDiff Guide Part 2 - Tutorial #stablediffusion #comfyui #animatediff
Views: 2.2K · 2 months ago
Second video of the series on how to use AnimateDiff Evolved and all the options within the custom nodes. Documentation and a starting workflow for ComfyUI are available at: 📡 AnimateDiff Evolved: t.ly/GDR_Y 🖥️ Civit.Ai AnimateDiff parts 1 and 2: t.ly/07HuQ 🕸️ Discord: discord.gg/UaFQkfYgAQ #animatediff #comfyui #stablediffusion 💪 Support this channel with a Super Thanks or a ko-fi! ko-...
ComfyUI AnimateDiff Part 1 - Tutorial #stablediffusion #comfyui #animatediff
Views: 3.8K · 3 months ago
First part of a video series on how to use AnimateDiff Evolved and all the options within the custom nodes. Documentation and a starting workflow for ComfyUI are available at: 📡 AnimateDiff Evolved: t.ly/GDR_Y Workflow examples available at: ko-fi.com/s/f16bccaff4 🕸️ Discord: discord.gg/UaFQkfYgAQ #animatediff #comfyui #stablediffusion 💪 Support this channel with a Super Thanks or a ko...
AI 3D animation: convert your favourite 2D cartoons with AnimateDiff in ComfyUI #stablediffusion
Views: 1.1K · 3 months ago
In this workflow we show the possibilities of using RAVE to convert 2D cartoons into 3D-looking animations. Using ComfyUI and AnimateDiff, in combination with the RAVE sampler and denoising, it is possible to transform a complete 2D sequence and achieve great 3D consistency. The best way to do ComfyUI video-to-animation in 3D. This is not a ComfyUI 3D render, as there is no 3D mesh creat...
NEW UPSCALING method ComfyUI with 2 samplers AnimateDiff v3 + AnimateLCM
Views: 4.1K · 4 months ago
ComfyUI: RAVE for video transformation (vid2vid) #animatediff
Views: 6K · 6 months ago
Stable Diffusion Interpolation Comfyui vid2vid Tutorial IPAdapter ControlNet
Views: 2.9K · 7 months ago
LCM + AnimateDiff High Definition (ComfyUI) - Turbo generation with high quality
Views: 9K · 7 months ago
AnimateDiff perfect scenes. Any background with conditional masking. ComfyUI Animation
Views: 9K · 8 months ago
AnimateDiff + Instant Lora: ultimate method for video animations ComfyUI (img2img, vid2vid, txt2vid)
Views: 35K · 8 months ago
ComfyUI animation with segmentation: TrackAnything with SAM models
Views: 9K · 9 months ago
TrackAnything - ComfyUI Segmentation (with ControlNet, TemporalNet and masks)
Views: 2.9K · 9 months ago
Stable Diffusion ComfyUI Animations - Easy with ControlNet and TemporalNet
Views: 15K · 10 months ago
ComfyUI - Vast.ai: tutorial - how to rent cheap high-VRAM GPUs for your AI art
Views: 6K · 10 months ago
ComfyUI ControlNet animation (with TemporalNet) - Stable Diffusion
Views: 3.3K · 11 months ago
Daft Punk - Technologic (Remix) Melodic Techno videoclip cyberpunk
Views: 991 · 1 year ago

Comments

  • @obliostudio · 20 hours ago

    Thanks! Can you release the workflow for video to video? 🙏

  • @kizentheslayer · 1 day ago

    Where do I save the AnimateLCM model to?

  • @visualdrip.official · 7 days ago

    Hello, I'm wondering where the link is for the mask video?

    • @koalanation · 3 days ago

      You can download it here: ko-fi.com/s/7d5e1802f1 Enter 0 if you want to download it for free.

    • @visualdrip.official · 3 days ago

      @@koalanation Thank you! Will it repeat on a seamless loop, or is it better without the loop?

    • @koalanation · 3 days ago

      @@visualdrip.official In the example the mask video is repeated several times, so it seems to loop. But you can also choose a 'static' mask, or a different one without looping...do what you like!

    • @visualdrip.official · 3 days ago

      @@koalanation Thanks so much for sharing!

  • @aivideos322 · 9 days ago

    You do a great job of explaining things. Well done.

  • @elifmiami · 10 days ago

    I was wondering, how did you bring up the node number on the box?

    • @koalanation · 8 days ago

      If you go to the Manager, in the left column you will see the option 'Badge'. There you can set the node number to appear over the node.

    • @elifmiami · 7 days ago

      @@koalanation Thank you!

  • @user-rk3wy7bz8h · 10 days ago

    Hi, thanks again, I am very thankful. About the context options: is there any combination that works best for output? To be honest, I didn't understand the technical part. And what do you recommend I watch next, please? :)

    • @koalanation · 8 days ago

      Hi! I normally use Standard Uniform or Looped Uniform. Depending on what you want to do, one may work better than the other. You will need to experiment. Looped Uniform works better for 'loops', as it blends the last and first frames, but for general animations it is also good. If you have not seen the first part of the series, look at th-cam.com/video/E_GnupiwAeY/w-d-xo.html A relatively simple workflow: img2vid th-cam.com/video/yHMcsRZGMEo/w-d-xo.html If you want to do some character transformation, check out: th-cam.com/video/4826j---2LU/w-d-xo.html. This is complex, but I love the audio reactive workflow: th-cam.com/video/35H2VWbSQ08/w-d-xo.html

  • @art-hub-adults · 11 days ago

    Error occurred when executing KSampler: 'NoneType' object has no attribute 'size'

  • @haihaict · 11 days ago

    How did you export the masks generated by TrackAnything? Did you rewrite some code, or can you do that within its WebUI?

    • @koalanation · 8 days ago

      Yes, I made some changes in the code (with my limited programming skills, I managed to get it working like that): github.com/dsigmabcn/Track-Anything.git There are easier and better tools now integrated in ComfyUI. Check out: th-cam.com/video/5mHmDx4dWAM/w-d-xo.html
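      For context, the export step discussed here can be sketched as follows. This is an illustrative assumption, not Track-Anything's actual API: it assumes the tracker hands back one boolean NumPy array per frame, and simply writes each mask as a numbered grayscale PNG so it can be loaded as an image sequence in ComfyUI.

      ```python
      import os
      import numpy as np
      from PIL import Image

      def export_masks(masks, out_dir="masks"):
          """Save per-frame binary masks as grayscale PNGs (white = masked region),
          numbered so they load as an ordered image sequence."""
          os.makedirs(out_dir, exist_ok=True)
          for i, mask in enumerate(masks):
              # Scale a boolean/0-1 array to 0-255 grayscale
              img = Image.fromarray(mask.astype(np.uint8) * 255, mode="L")
              img.save(os.path.join(out_dir, f"mask_{i:05d}.png"))

      # Example with two dummy 64x64 frames (all-background, then all-foreground)
      frames = [np.zeros((64, 64), dtype=bool), np.ones((64, 64), dtype=bool)]
      export_masks(frames)
      ```

      The real change in the forked repo hooks into the tracking loop; the function name and file layout above are only hypothetical.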

  • @liangmen · 12 days ago

    Amazing video tutorial. It took two days to implement, and I have it working now, although I still don't quite understand some bits of it. Again, brilliant video, thank you so much!

    • @koalanation · 12 days ago

      @@liangmen Happy to hear you managed to get it working. It is definitely not easy.

  • @user-rk3wy7bz8h · 12 days ago

    Finally, a detailed tutorial. Please continue with more videos like this.

    • @koalanation · 12 days ago

      Thanks! There are not many such deep dives, it is true...when I find some time I will do part 3.

  • @generalawareness101 · 13 days ago

    Yeah, no to anything SD1.5.

    • @AB-wf8ek · 6 days ago

      Did SD1.5 hurt your feelings?

  • @VaibhavShewale · 18 days ago

    If only I had a good system!

    • @koalanation · 18 days ago

      @@VaibhavShewale Yep, it is heavy. The animation stuff at the end eats a lot of resources...

    • @VaibhavShewale · 18 days ago

      @@koalanation Damn, dude, the hours of editing you have spent are on another level.

  • @AntonioSorrentini · 19 days ago

    Well, yep! 😀😀😀 th-cam.com/video/y7VOBcrk7J8/w-d-xo.html

    • @koalanation · 18 days ago

      That is really cool!

  • @jagsAImagic · 20 days ago

    Pretty cool way to sync masks with audio and create a reactive music video. Awesome.

    • @koalanation · 20 days ago

      Top! Love your videos!

  • @PixelsVerwisselaar · 20 days ago

    😮 That's awesome 🙏. Will try it soon 😏

    • @koalanation · 20 days ago

      @@PixelsVerwisselaar Good luck! Many cool things can be done with some imagination!

  • @jaydenvincent2007 · 23 days ago

    When I click Queue Prompt it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated/restarted. Can you please help?

    • @koalanation · 23 days ago

      Hi! I have never encountered this error...googling, it refers to an issue with MixLab nodes...not sure if that would be your case. Maybe try disabling or uninstalling custom nodes to see if one is affecting ComfyUI.

  • @enthuesd · 23 days ago

    Best videos, thank you. Great teacher.

    • @koalanation · 23 days ago

      Glad you enjoy it!

  • @enthuesd · 24 days ago

    I'm just finding this now. It's helped so much with my videos, thank you so much!

  • @xr3kTx · 24 days ago

    This did wonders.

    • @koalanation · 24 days ago

      @@xr3kTx It is fun!

    • @xr3kTx · 24 days ago

      @@koalanation I took great inspiration from your workflow because I need to understand the tools at play. I actually did this with SDXL. I am using a frame cap of 100, but the face seems to glitch. Can you suggest anything for the face glitching? I did use IP Adapter with style and composition transfer, but every few frames it seems to redo the context.

    • @koalanation · 24 days ago

      @@xr3kTx I did not dare to use SDXL because of the GPU and VRAM requirements...besides, SDXL AnimateDiff is also difficult...with HotShot it is ok, but then you are limited to a context window of 8 frames...not sure if testing with SD 1.5 is an option for you. You can always upscale and refine the output.

    • @xr3kTx · 24 days ago

      @@koalanation I have had better results with SDXL personally (I am using a LoRA, and SDXL respects it more for my character, plus IP Adapter for style). I am using an RTX A6000 on RunPod, so resources are less of a concern; it's the workflow that I need to improve.

    • @koalanation · 24 days ago

      @@xr3kTx Good to know. I may then give it another try...have you tried FreeInit in AD? Not sure how it will work with this setup, though. But it is a lot of trial and error, you know...

  • @YING180 · 26 days ago

    Thank you for your video, that's very helpful.

  • @user-Cyrine · 26 days ago

    Love your videos so much! Can you make a tutorial video on FlexClip's AI tools? Really looking forward to that!

    • @koalanation · 25 days ago

      Thanks for the idea!

  • @vl7823 · 27 days ago

    Hey, I'm getting this error: "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM. Any workaround or fix? Thanks.

    • @koalanation · 26 days ago

      Hey! Not sure what the messages are with AMD, but maybe you can first try reducing the size of the latents and/or reducing the batch size. Looks like some limitation with the VRAM.

  • @Erika_Med · 1 month ago

    Hello, thank you very much for the content you create, it is very valuable :). There is a node in your workflow that does not work: the COCO-SemPreprocessor. Which one could I replace it with?

    • @koalanation · 1 month ago

      You can use, for example, the Ultralytics detector with the SAM detector from the Impact Pack, or, in this case, it will be easier to do it by hand using the mask editor (right-click over the image and that option should appear).

  • @estebanmoraga3126 · 1 month ago

    Thanks for the tutorial! Question: is it possible to feed Comfy a reference video for it to animate an image using that video as reference? Say I have an image of a character, and I give Comfy a video of someone skateboarding: is there a method to get Comfy to animate the character skateboarding based on the video? Cheers and thanks in advance!

    • @koalanation · 1 month ago

      Yes! You can use a reference video and controlnets such as OpenPose, depth, lineart, etc., to guide the composition of each frame. There are many videos and tutorials about it.

    • @estebanmoraga3126 · 1 month ago

      @@koalanation Thanks for replying! The most I've been able to find are tutorials on animating a reference image using prompts, or generating a video from another video also using prompts. I have yet to find one where they animate a reference image based on a reference video. Guess I just have to look harder though!

    • @koalanation · 1 month ago

      Check out: th-cam.com/video/XO5eNJ1X2rI/w-d-xo.html. Take into account this is rather complex with all the samplers and so on. Here: th-cam.com/video/Ka4ENd63VBo/w-d-xo.html, I think it is clearer, but take into account that the IP Adapter node does not work like in the video anymore.

  • @SiverStrkeO1 · 1 month ago

    Great video! I'm new to all this, and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline as img2video, and there, for example, a lot of windows are getting removed.

    • @koalanation · 1 month ago

      That seems difficult with this method if the windows are small. Reducing the scale factor may work. Otherwise some trick with masks and controlnets may work, but I have not really tried it with SparseCtrl.

  • @TheFusssi · 1 month ago

    Thank you :) I got it working with the PyTorch version!

    • @koalanation · 1 month ago

      Excellent! Have fun!

    • @TheFusssi · 26 days ago

      @@koalanation Short catch-up -> I have used it multiple times now and adapted your script, and it works like a charm. Thank you very much for the video; without it I would either have spent way more time setting it up or given up before getting it to work.

    • @koalanation · 26 days ago

      @@TheFusssi Good to hear!

  • @MarcusBankz68 · 1 month ago

    I'm getting an error with IPAdapterUnifiedLoader; it says the clipvision model was not found. I've downloaded a few versions and put them in my clip_vision folder but am still getting the error. Is there a specific one for this node?

    • @koalanation · 1 month ago

      Sometimes IP Adapter is confusing...try to load the IP Adapter model and clipvision separately (without using the unified loader), following the instructions of the IP Adapter repo. I like Plus and ViT-G. github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file

  • @user-bq5jo9zf7y · 1 month ago

    I love it, liked it!

  • @ManuelViedo · 1 month ago

    "easy"

    • @koalanation · 1 month ago

      🤣🤣🤣

  • @HOT4C1DR41N · 1 month ago

    I couldn't make it work :( I get this error every time: Error occurred when executing ADE_ApplyAnimateDiffModel: 'MotionModelPatcher' object has no attribute 'model_keys'

    • @koalanation · 1 month ago

      Seems odd...are you using AnimateLCM_t2v? Maybe try another model to see if it runs, or use the Gen 1 AnimateDiff Loader.

    • @katonbunshin5935 · 1 month ago

      I have the same.

    • @koalanation · 1 month ago

      Use the model at civitai.com/models/452153/animatelcm and make sure the nodes and ComfyUI are up to date.

    • @katonbunshin5935 · 1 month ago

      @@koalanation Oh...I wrote the solution here but I don't know why it was not added...so...in my situation, there was a problem when I was updating AnimateDiff from the Manager. To fix it, remove AnimateDiff from custom nodes and get AnimateDiff from the repo, then place it in custom_nodes - that worked for me.

    • @koalanation · 1 month ago

      Ok! I have not seen it either...anyway, sometimes during updates these things happen.

  • @shanboe810 · 1 month ago

    Thanks for this, easy to follow and full of information! Love it!

  • @user-cb4jx8og2k · 1 month ago

    Great video; you skipped some steps but it is still detailed. Questions: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video Path node for an image?

    • @koalanation · 1 month ago

      Hi! In principle you do not need to change it, but you can, of course. Take into account that the 'tile' ControlNet is rather strong and you cannot do big transformations. The Load Video node allows you to use http addresses, but the Load Image node does not (at least it did not work for me). That is why I use it for the randomized image.

  • @bordignonjunior · 1 month ago

    Geez, this takes long to run. Which GPU do you have? Amazing tutorial!!!

    • @koalanation · 1 month ago

      Hi! Thanks! I am using an RTX 4090/3090 or A5000 via RunPod, which generates the video rather fast. You can try to decrease the number of frames and also the resolution of the images. Try to do interpolation with 3 frames instead of 2.

  • @WiLDeveD · 1 month ago

    Thanks. Very useful tutorial.

    • @koalanation · 1 month ago

      Thanks for watching!

  • @joonienyc · 1 month ago

    I'm back again, another great tutorial...

    • @koalanation · 1 month ago

      Thanks mate!

  • @sudabadri7051 · 1 month ago

    So good! Would adding RAVE to this increase the consistency even further? Thanks!

    • @koalanation · 1 month ago

      That sounds like a great idea! Take into account the VRAM requirements of RAVE, but apart from that, you could get something really cool combining these techniques!

    • @sudabadri7051 · 1 month ago

      @@koalanation Cool, I'll try to recreate this on an A6000 card and add RAVE. Thanks!

  • @p4prateektv534 · 1 month ago

    Loved this. Thanks man!

    • @koalanation · 1 month ago

      Thanks for watching!

  • @boo3321 · 1 month ago

    Very easy tutorial; it only took me HOURS to do. I'm curious how to make people walk or move with ComfyUI.

    • @koalanation · 1 month ago

      Well, I cut quite a bit to show only the main steps, otherwise the video is mostly rendering... For moving people, ControlNet with a reference video of someone walking is probably the way. With Motion Director it should also be possible, I believe, but I need to find the time to try and see if the results are 👍

  • @frankliamanass9948 · 1 month ago

    It all works and animates the image, but every time it comes out very bright and faded. Any suggestion on how to fix it?

    • @frankliamanass9948 · 1 month ago

      It appears the results in the tutorial are also faded and over-brightened, but at the end when you show examples they look fine. Did you find a fix, or was it in your post-processing?

    • @koalanation · 1 month ago

      Depending on the source image, settings, etc., the image might be too dark or too bright, as you say. There are nodes that correct that; I like Image Filter Adjustments. But I think it is better to use a regular video editor, which is faster and easier to use.

  • @vincema4018 · 1 month ago

    I used the PyTorch template and followed your instructions. After showing "To see the GUI go to: 0.0.0.0:3000", I went back to the instance interface and clicked "Open", and it opened another JupyterLab ...

    • @leonardholter17 · 16 days ago

      I have the same problem.

    • @leonardholter17 · 16 days ago

      I got it to work when I added "-p 3000:3000 -e OPEN_BUTTON_PORT=3000" to the docker options.

    • @vincema4018 · 16 days ago

      @@leonardholter17 Thanks man!

  • @joonienyc · 1 month ago

    Hey buddy, how did you copy the second KSampler with all its connections duplicated, at timestamp 4:40?

    • @koalanation · 1 month ago

      Copy normally with Ctrl+C, then paste with Ctrl+Shift+V.

    • @joonienyc · 1 month ago

      @@koalanation Ty, my man!

  • @coreypan8933 · 1 month ago

    Thanks, your videos help :)

  • @whatthe573 · 2 months ago

    Great vid!

    • @koalanation · 2 months ago

      👍

  • @hamster_poodle · 2 months ago

    Hello! Does SparseControl work properly with AnimateDiff LCM, not V3?

    • @koalanation · 2 months ago

      Hi! It works with the V3 LoRA adapter. I am not sure if that is the way it was intended, but it does something. I have tried to use the RGB sparse but I do not manage to get it working nicely...you can also switch to version 3 and fine-tune the results, but obviously generations will take longer.

  • @koalanation · 2 months ago

    If you prefer it without the AI voice, check out the version with the original voice (in Spanish): th-cam.com/video/GsOTnGeXCCg/w-d-xo.html

    • @tech3653 · 14 days ago

      Any tutorial for easy offline voice translation using AI?

  • @VanessaSmith-Vain88 · 2 months ago

    Can you set up the whole thing for us to use?

  • @VanessaSmith-Vain88 · 2 months ago

    Yeah, that was really easy, a piece of cake 🤣

    • @koalanation · 2 months ago

      Yes, it was 🤣

  • @PixelArt_YW · 2 months ago

    Very cool and detailed tutorial! Wish there was a guide for Advanced ControlNet's weights too. Thanks for your work!

  • @RhapsHayden · 2 months ago

    Where would I add a custom trained LoRA? After the Load Checkpoint?