This is only a preview release.
They already plan to release a better version later.
I think it is good enough: with the right workflow, methods, and prompts, you can use the outputs to tell a story.
Thanks for sharing! Looking forward to the next release.
It produces realistic humans, but like everything else it cannot do Chinese typography, which is my use case. It does work on a local install, though.
But, spoiler: its porn sucks, lol.
Your accent is good. I suppose you are Indian? Usually Indian-accented English is comprehensible but so thick that it's difficult to listen to. I study languages, so I'm curious about your accent/origins, since the world's languages are all converging.
@@QuizmasterLaw Thank you! No, I am not Indian. English, French, and a local language were part of our school curriculum growing up. We are also required to take an Asian language; I chose Hindi, which might explain the similarity you hear in my accent.
@@CodeCraftersCorner Great, well, whatever you are doing for pronunciation training / accent reduction is working! You are easy to listen to.
Looking for your thoughts. The videos seem VERY pixelated/interlaced. Any idea how to fix that? I did an upscale and it was WAY worse, of course. It works well with my 3060.
Yes, right now, the resolution is quite low. Let's hope they release a higher quality model in the near future.
be sure to use negative prompts
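A negative prompt is just a second text conditioning listing artifacts you want the sampler to steer away from. A minimal illustrative list might look like this (the exact wording is my own, not taken from the video):

```python
# Illustrative negative prompt for video generation; paste the
# resulting string into the negative CLIP Text Encode node.
negative_prompt = ", ".join([
    "low quality", "worst quality", "blurry",
    "deformed", "distorted", "watermark", "jpeg artifacts",
])
print(negative_prompt)
```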
If I drag your workflow (link from your description) into my ComfyUI, nothing happens. I can't find the nodes EmptyLTXVLatentVideo and ModelSamplingLTXV, and I also can't select the ltxv type in the CLIP Loader. ComfyUI and the Manager are up to date. What is going wrong?
Hello, did you drag the ComfyUI_00002_.webp file into the ComfyUI interface? I checked and it worked. Can you try this method:
1. Click on the ComfyUI_00002_.webp name on GitHub.
2. It will say something like "cannot be viewed".
3. Click on "View raw".
4. It will open the video.
5. Drag it into ComfyUI (no need to download it).
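If clicking through "View raw" is fiddly, the blob-to-raw URL rewrite that steps 1-3 perform can also be done directly. This tiny helper is just a sketch of that rewrite (the OWNER/REPO path in the example is a placeholder, not the actual repo from the description):

```python
def to_raw_url(blob_url: str) -> str:
    """Turn a GitHub 'blob' page link into the raw-file link
    that the 'View raw' button opens."""
    return (blob_url
            .replace("github.com", "raw.githubusercontent.com")
            .replace("/blob/", "/"))

# Example with a hypothetical repo path:
# to_raw_url("https://github.com/OWNER/REPO/blob/main/ComfyUI_00002_.webp")
# -> "https://raw.githubusercontent.com/OWNER/REPO/main/ComfyUI_00002_.webp"
```

You can then drag the file fetched from that raw URL straight into the ComfyUI window.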
The LTXVideo nodes do not work anymore; ComfyUI cannot import them even on the latest version. A manual install doesn't help either, nor does a fresh ComfyUI install.
That's strange, I'll try installing it again and check.
@CodeCraftersCorner I'm trying to reinstall an old version of ComfyUI from scratch. I'm getting a lot of strange errors with the latest version.
Could you also include the upscaling process for those videos, at least to 1080p?
Hello, I've updated the GitHub repo with a workflow containing the upscaling process.
Thank you @@CodeCraftersCorner
It works great on a 4090; if the prompts are long enough, it can create some impressive results! Total time is about 10-15 seconds for a 5-second video; the generation itself takes maybe 5 seconds, yes.
Thanks for sharing!
Can I install LTX Video for ComfyUI on an RTX 3060?
Yes, you should be able to.
@CodeCraftersCorner how long does a video take to generate?
@@blogger.recehan On a 4090, generation takes 5 seconds. On a 4070, it takes 20 seconds. I don't own a 3060 to test, but I'd guess it will take longer.
@@CodeCraftersCorner thanks for the info sir
It is a bit late to buy a 4090 now; the price is inflated and there's not much choice here in Europe either. I will try it on my 4090 later and let you know the results.
Thanks! I forgot to share that it takes 20 seconds for 20 steps on a 4070 Ti.
@@CodeCraftersCorner Total prompt execution takes about 20 seconds with 30 steps. Not quite 4 seconds, but still very impressive. fp8 vs. fp16 makes no difference for me; both top out at 20 GB VRAM and the same time. Using a workflow I found on the Lightricks LTX page, I got slightly better results, and there is also an image-to-video workflow there. That said, the results they are showing seem very cherry-picked and most of the time it will not be so good; the good thing is that doing multiple tries goes very fast compared to CogVideo or Mochi.
@@CodeCraftersCorner Oeh nice i will try 😏🙏
Your prompting is why you're getting bad performance. It wants LONG prompts: longer is better! As with any prompting, most important stuff first, camera instructions last. Try
"A dramatic cinematographic realistic film of a calm and beautiful pastoral scene that is suddenly shocked by a giant explosion! The waves of the explosion ripple forth, but a clever fox is not caught off guard; he skillfully leaps high and twists around, flying with the shockwaves through the earth to land safely on all fours. Zoom in on his face: he is looking at the camera gladly, for he has been spared. Realistic lifelike film footage." It will produce something possibly useful. If it's not to your liking, just regenerate. After about half a dozen regenerations, if it's still no good, reprompt.
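A rough sketch of that "important stuff first, camera instructions last" ordering as a reusable helper (the function and parameter names are my own invention, not any LTX API):

```python
def build_prompt(subject: str, action: str, camera: str,
                 style: str = "Realistic lifelike film footage.") -> str:
    # Per the advice above: most important content first,
    # camera instructions last, closing with a style tag.
    return " ".join([subject, action, camera, style])

prompt = build_prompt(
    "A dramatic cinematographic film of a calm pastoral scene rocked by a giant explosion.",
    "A clever fox leaps high, rides the shockwave, and lands safely on all fours.",
    "Zoom in on his face as he looks at the camera.",
)
print(prompt)
```

Regenerating with the same prompt a few times before rewording, as suggested above, is cheap given the short generation times reported in this thread.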