Very good, I tweaked some stuff to meet my needs and it works great
Well done! That is the spirit!
It does a great job of making the lights disappear in the background.
Wow🔥, great optimization
Hello, sorry, I didn't understand the command at 1:54. Did you say Ctrl+Shift+B?
Ctrl+Shift+V, to paste with the connections
Great work! Thanks for the workflow.
I can't make it work with SDXL though.
What should be used in IPAdapter and CLIP vision nodes?
Check the IP adapters for XL, like ip-adapter-plus_sdxl_vit-h.bin, for example. Some of them work with the SD1.5 CLIP vision model; otherwise, download the SDXL CLIP vision model. Make sure the IP adapter models are in the IPAdapter Plus custom node folder.
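For reference, a rough sketch of where the files are expected to land (assuming a default ComfyUI install and that IPAdapter Plus reads models from its own models/ subfolder, as mentioned above; exact paths may differ on your setup):

ComfyUI/
├── models/
│   ├── checkpoints/    <- SDXL checkpoint
│   ├── controlnet/     <- SDXL controlnets (openpose, depth, M-LSD...)
│   └── clip_vision/    <- SD1.5 (ViT-H) or SDXL CLIP vision model
└── custom_nodes/
    └── ComfyUI_IPAdapter_plus/
        └── models/     <- ip-adapter-plus_sdxl_vit-h.bin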
@@koalanation thanks. It didn't work, but I don't give up 🙂
With SDXL you also need the SDXL controlnets... could that be the issue? You also need the SDXL text encoders... I am working on a new workflow with the IP adapter I mentioned before, the SD1.5 CLIP vision model, and AnimateDiff, and it works...
What should I put in the openpose and background controlnet folder?
Check out the article on Civitai: tinyurl.com/33krneae and download the assets in the input.zip file. There you have the OpenPose, M-LSD lines, and Zoe depth map examples to use in the workflow. You can also use your own poses and backgrounds; just make sure you are using the right controlnets and preprocessors (if needed).
thank you
@@koalanation how did you create the M-LSD line sequences, as well as the depth sequences?
I prepared the sequences separately, using the Zoe depth map and M-LSD lines preprocessors on the original background video.
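If you prefer to script that step outside ComfyUI, here is a minimal sketch using the controlnet_aux preprocessors (the folder names and the "lllyasviel/Annotators" weights repo are assumptions, not from the video):

# Sketch: build M-LSD line and Zoe depth map sequences from extracted frames.
# Assumes the background video frames were already extracted to input_frames/.
from pathlib import Path
from PIL import Image
from controlnet_aux import MLSDdetector, ZoeDetector

mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")

Path("mlsd").mkdir(exist_ok=True)
Path("depth").mkdir(exist_ok=True)

for i, frame_path in enumerate(sorted(Path("input_frames").glob("*.png"))):
    frame = Image.open(frame_path).convert("RGB")
    mlsd(frame).save(f"mlsd/{i:04d}.png")   # straight-line (M-LSD) map
    zoe(frame).save(f"depth/{i:04d}.png")   # Zoe depth map

Then feed each folder into the matching controlnet in the workflow.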
I don't understand how the two KSamplers are used together, since in the video only the first KSampler (LCM) is connected to the VAE Decode. Can you please explain?
Hi, thanks for the question. The latent of the 2nd KSampler has to be connected to a VAE Decode (which can then be connected to a Video Combine node). I skipped that part to avoid repeating myself (I showed how it is connected for the first KSampler), but it seems that was not clear enough. I hope this answers your question.
@@koalanation thank you, so it means that I run the two different KSamplers, each with a VAE Decode, and then I put the two VAE Decode outputs together into a combined video?
@@yvann_ba for the higher-quality video, connect the latent output of the 1st sampler to the latent input of the 2nd. Then connect the latent of the 2nd sampler to the VAE Decode, and the image output to Video Combine. In the video I just wanted to show the results next to each other and generate two animations, but you do not need to decode the first one if you don't want to.
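In node terms, the chain described above would look like this (a sketch of the wiring only, not the full workflow):

Latent ──> KSampler 1 (LCM) ──LATENT──> KSampler 2 ──LATENT──> VAE Decode ──IMAGE──> Video Combine

The optional preview of the first pass is just a second VAE Decode + Video Combine branch hanging off KSampler 1.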
Thanks a lot!!
good
Good
Love your work! Koala - want to collaborate?
Sounds interesting. How can I reach out to you?
My comments are getting deleted for some odd reason. Did you see my response?
Hi! I can see this response but no other message... it's not in "held for review" either...