Very nice for img2vid and vid2vid! Wonder if controlnet also works with this model
No controlnets yet and no way to train or finetune Cosmos yet from what I have read. I'm sure there are people researching it, so hopefully we see more come out soon!
32:20 you used the wrong model, by the way. You want Cosmos-1.0-Autoregressive-13B-Video2World or Cosmos-1.0-Autoregressive-5B-Video2World, I think.
Edit: I think the autoregressive models aren't supported in ComfyUI yet. I might be wrong.
Yeah, only the diffusion models are supported in ComfyUI currently. I followed Comfy's documentation for which models to use. You can read a little more about the diffusion models here: developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform
To be honest, I don't know much about the autoregressive models; I need to read up some more.
Turns out you can also finetune/train the model using something they made called NVIDIA NeMo. Can you make a video on that? Would love to do a full finetune or maybe create LoRAs or something for this model! I've been getting insane results with image to video!
Based on this link, NeMo is for finetuning LLMs. www.nvidia.com/en-us/ai-data-science/products/nemo/?ncid=no-ncid
Do you have a link showing it being used to train the Cosmos models?
Following up, I signed up for the Nvidia Developer Portal and confirmed that NeMo is just for finetuning LLMs. Hopefully there are people out there researching how to finetune Cosmos like we can with Hunyuan, CogX, LTX, etc.
@PhantasyAI0 interesting, I stand corrected! Looks like NeMo can be used for this model; I'm going to give it a shot. If it works well, I will upload a video next week.
How do you get rid of this flickering effect in the video?
You can try putting words like "flickering" or "flashing" in the negative prompt!
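For anyone driving generations from a script instead of the graph UI, the same idea applies: just append the artifact-related terms to whatever negative prompt you already use. A minimal sketch (the helper name and term list are illustrative, not part of any Cosmos or ComfyUI API):

```python
# Append flicker-related terms to an existing comma-separated negative
# prompt, skipping any terms that are already present (case-insensitive).
FLICKER_TERMS = ["flickering", "flashing", "strobing"]

def add_flicker_negatives(negative_prompt: str, terms=FLICKER_TERMS) -> str:
    existing = {t.strip().lower() for t in negative_prompt.split(",") if t.strip()}
    extra = [t for t in terms if t.lower() not in existing]
    parts = [negative_prompt.strip().rstrip(",")] if negative_prompt.strip() else []
    return ", ".join(parts + extra)

print(add_flicker_negatives("blurry, low quality"))
# blurry, low quality, flickering, flashing, strobing
```

The resulting string can then be pasted into the negative-prompt field of the ComfyUI text-encode node, or passed along wherever your workflow takes its negative prompt.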
Can you show me the effect of generating videos from anime images?
Yes, here you go! www.patreon.com/posts/cosmos-anime-120080465
Unfortunately, Cosmos is better suited to real-world footage, because its purpose is to generate real-world data for training machines. It doesn't handle animated video very well.