Clean and detailed; it makes me confident enough to try it myself today. Thanks a lot!
Really happy to hear that. That was the goal. To the point, no fuss. No extra detail. No smoke and mirrors.
Just: “here’s Kijai’s workflow, here’s how to jump in.”
We can worry about all the nuances and stuff later 🙏🏽
@@c0nsumption It kept crashing every time after training for around 1 min. Maybe training consumes more VRAM than usual tasks? Anyway, it was still good to try and learn.
great tutorial my friend 🔥
My dude 🙌🏽 Thanks for being here :)
Happy you enjoyed 👍🏽
Thank you so much for this!
This is really cool! As a 3D artist/animator it makes me wonder if it's possible to use basic rendered animations as training input. I wonder what kind of footage would work best for that, like high contrast, lots of lines, etc., to maybe make creating those LoRAs a lot quicker. Or maybe it prefers real-world footage.
Last year I was playing with Deforum, which can use camera movements exported from Blender, and that worked pretty well at the time.
This LoRA tech seems a few steps back in terms of workflow and control, but of course it's a different technology. And I only got glitchy LSD footage out of Deforum, though that might have improved since.
Thanks bro! If you feel like making a follow-up video, I would watch it! More on physics.
Yess, I love this, great tutorial
Great tutorial!
Hey, great vid! Just curious if these will work with SDXL, as I am having trouble getting any motion LoRA to work with SDXL?
Top! Thanks
great! thank you
Awesome tutorial, thanks! Do you think multiple LoRAs of the same video could be trained on adjacent chunks and then all used in one long animation, chained together with their weights animated and blended, to form a single coherent animation?
Awesome tutorial! Can you explain how to add more than one training video?
Hey man, I find your tutorials the easiest to understand of all the ComfyUI tutorials on TH-cam, keep it up! I was wondering, do you sell any courses on ComfyUI? Or is there any way I could pay you for an hour to help me fix an issue with generating images, or maybe you have a friend with equal or close knowledge whom I could pay to teach me?
Great video, thank you. Instead of a prompt, can you use a reference image?
? 🤔
Thank you so much. Do I understand correctly that the AI takes a video with motion as a reference? For example, if I want to train a physics-simulation LoRA, I can simulate it in 3D software and feed that in, but how does AnimateDiff understand what should be physically animated in the shot? Do you plan to make another video on using pretrained AnimateDiff LoRAs in img2vid? I also want to create a rotation LoRA that keeps object proportions.
Hi, thank you for the info...
I am getting an error:
Error occurred when executing ADMD_CheckpointLoader:
'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
What could it be?
Did you install the Animatediff v3 models and their associated adapter lora?
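For what it's worth, that particular 'seek' error usually means torch.load was handed None instead of an open file, which in practice means the node never found the checkpoint on disk. A quick way to sanity-check (the paths below are just examples, adjust them to your install and file names):

```python
# Hypothetical sanity check: torch.load raising "'NoneType' object has no attribute 'seek'"
# typically means the loader received None instead of a file path, i.e. the model file
# isn't where the node expects it.
from pathlib import Path

# Example locations only; adjust to your ComfyUI install and the exact model file names.
candidates = [
    Path("ComfyUI/models/animatediff_models/v3_sd15_mm.ckpt"),
    Path("ComfyUI/models/loras/v3_sd15_adapter.ckpt"),
]

for p in candidates:
    print(p, "->", "found" if p.exists() else "MISSING")
```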
@@c0nsumption Same error here, AnimateDiff v3 and the adapter are in place. I've read that the new version of ComfyUI doesn't need xformers anymore, but that it's still used to train LoRAs; is that true? :) I tried installing Comfy from scratch, portable version.
First few lines of error:
Error occurred when executing ADMD_CheckpointLoader:
No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 2, 1, 40) (torch.float32)
key : shape=(1, 2, 1, 40) (torch.float32)
value : shape=(1, 2, 1, 40) (torch.float32)
attn_bias :
p : 0.0
`flshattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`tritonflashattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
triton is not available
requires A100 GPU
`cutlassF` is not supported because:
xFormers wasn't build with CUDA support
`smallkF` is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 32
unsupported embed per head: 40
@c0nsumption Still cannot run training; I must be doing something very wrong. Tried reinstalling... Has anyone had this kind of error?
Did I miss something? :) The previous tutorials, right? Sorry if so, I will check everything.
I have the same problem. Did anyone fix the issue?
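Not sure if this is everyone's exact situation, but that traceback is xformers reporting that it was installed without CUDA support, so memory_efficient_attention has no usable backend. A rough way to check from the portable build's embedded Python is sketched below; if it reproduces the same "No operator found" error, reinstalling an xformers wheel that matches your torch/CUDA version is the usual fix (`python -m xformers.info` also prints the build flags):

```python
# Rough check that xformers was built with CUDA and that the fp16 attention path works.
# If this raises the same "No operator found" error, the installed wheel likely has no
# CUDA kernels and should be reinstalled to match your torch/CUDA version.
import torch
import xformers
import xformers.ops as xops

print("xformers", xformers.__version__, "| torch", torch.__version__, "| cuda", torch.version.cuda)

# Tiny fp16 tensors on the GPU: [batch, seq_len, heads, head_dim]
q = torch.randn(1, 16, 8, 40, dtype=torch.float16, device="cuda")
out = xops.memory_efficient_attention(q, q, q)
print("memory_efficient_attention OK:", out.shape)
```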
How do you train a motion module for AnimateDiff?
Dooooood
Hi, thank you for the video!
I get an out-of-memory error (torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory) with 12 GB of VRAM. Do you know if it's possible to train with 12 GB, and if so, how to fix that?
Try lowering your resolution for training. So train at 512 x 384 or some resolution under 512 x 512.
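If it helps to see why resolution matters: SD-family models train on latents at 1/8 of the pixel resolution, so activation memory roughly tracks width x height. A back-of-the-envelope sketch (illustrative element counts only, not exact VRAM figures):

```python
# Illustrative only: latents in SD-family models are 1/8 the pixel resolution with
# 4 channels, so the per-clip latent size scales linearly with width * height.
# Dropping from 512x512 to 512x384 cuts it by roughly 25%.
def latent_elements(width: int, height: int, frames: int = 16, channels: int = 4) -> int:
    return channels * frames * (height // 8) * (width // 8)

for w, h in [(512, 512), (512, 384), (384, 384)]:
    print(f"{w}x{h}: {latent_elements(w, h):,} latent elements per clip")
```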
Oh, and a question, sorry haha. The resulting temporal LoRA, is that linked to the prompt that's used to generate the video? Or is the LoRA just the motion info?
I tried to alter the prompt after one run, and now the whole workflow restarts again. I was under the impression that the motion LoRA can be used without retraining it.
This workflow is for training LoRAs. If you wanted to use the motion LoRAs without retraining, you would use them in an AnimateDiff workflow.
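If it's easier to see outside of ComfyUI, the diffusers library exposes the same idea: load a base model plus the motion adapter, then attach a motion LoRA at inference time with no retraining. A rough sketch (the repo IDs are examples from the diffusers docs; a locally trained motion LoRA file can be loaded the same way):

```python
# Sketch: using a pretrained AnimateDiff motion LoRA at inference time with diffusers
# (not the ComfyUI workflow from the video). Repo IDs below are examples.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False,
    timestep_spacing="linspace", steps_offset=1,
)

# The motion LoRA only biases the motion module; nothing is retrained here.
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out")

frames = pipe(
    "a beach at sunset, waves rolling in",
    num_frames=16, num_inference_steps=25, guidance_scale=7.5,
).frames[0]
export_to_gif(frames, "motion_lora_test.gif")
```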
@@c0nsumption Thanks!!
I miss your videos
Me too. Lol 🧍🏽♂️
Sorry, it’s been crazy demanding at work and getting the bills paid.
Will be dropping some stuff soon on my most recent research project