Your videos are super helpful to me. Even though I am a Linux guy doing all this on an Ubuntu variant, I can translate what I need to. I really like how you get right to the point and keep it simple. I would appreciate the GitHub links in the description, though. Keep up the great work!
@@kd4pba Oh, thank you. I forgot to put the links there; I'll take care of that now.
I think my previous comment got deleted so I have removed the SHA256 values.
The installer gave an error on the sd3.5_large_fp8_scaled.safetensors download due to a SHA256 mismatch.
When the webpage appeared, there was no SD 3.5 model displayed, only Flux-Schnell and SDXL.
So I downloaded the file manually using the link, created a folder "...Models\Stable-Diffusion\SD35fp8", and placed the download in there. Then I pressed the Refresh button in the Models tab, SD 3.5 Large appeared in the list of models, and it's working OK (a hash-check sketch for that manual download follows this comment).
58GB total space used.
Great work again, thanks, but my brain is not big enough for the workflow tab.
Got the video clip generation working too, but not with an image prompt, so I could do with an example of that, as I got another error: "ComfyUI execution error: Only multiplication of row-major and column-major matrices is supported by cuBLASLt"
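(Not from the video — just a minimal sketch of how you could verify that manual download yourself before dropping it into the SwarmUI models folder. The file name and folder mirror the comment above, and the expected hash is deliberately left as a placeholder since the real value comes from the model's download page.)

import hashlib
from pathlib import Path

# Hypothetical local path, mirroring the folder created in the comment above.
MODEL_PATH = Path("Models/Stable-Diffusion/SD35fp8/sd3.5_large_fp8_scaled.safetensors")
# Placeholder: paste the SHA256 published on the model's download page here.
EXPECTED_SHA256 = "<published sha256 goes here>"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MB chunks so a multi-gigabyte model never has to fit in RAM.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    print("computed:", actual)
    if actual == EXPECTED_SHA256.lower():
        print("Hash matches - safe to keep in the SwarmUI models folder.")
    else:
        print("Hash MISMATCH - the download is incomplete or corrupted; re-download it.")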
Noob warning here. I am _just_ getting into AI. I'm specifically interested in image and video generation run locally. This setup looks really good and your steps are really easy to understand. I just have a couple of questions. The text input window specifies a limit of 75 words, and I expect some of my descriptions will be longer than that. Is that a hard limit, or can it be changed somewhere?
Second question: how hard is it to link something like Ollama to it?
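(Not something the video covers — just a rough sketch of one way to "link" Ollama: run it locally and have a small script ask it to expand a short idea into a detailed prompt you then paste into SwarmUI. It assumes Ollama is running on its default port 11434 and that a model such as llama3 has already been pulled; swap in whatever model you actually have.)

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local REST endpoint
MODEL = "llama3"  # assumption: any model previously fetched with `ollama pull`

def expand_prompt(idea: str) -> str:
    # Ask the local Ollama server to turn a short idea into a detailed image prompt.
    payload = {
        "model": MODEL,
        "prompt": "Rewrite this as a single detailed, comma-separated image-generation prompt: " + idea,
        "stream": False,  # return one complete JSON response instead of a stream
    }
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"].strip()

if __name__ == "__main__":
    print(expand_prompt("a pirate ship sailing into a storm at sunset"))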
Lovely, bro, thank you! Please do OmniParser next, brother!!!
Thanks for the tutorial! I've been eyeing Swarm for a long time and will try Mochi today. Wonder if it will work on my 4060 Ti.
Had no problems getting it installed. Now, using your exact prompt, I get an error: "No backends available". I have no idea what this means. Also, once I exit out of this, how do I get back in?
Following the installation directions, you have it install the backend (it's early in the video)... then to run it again, you go into the SwarmUI folder and click on "launch-windows.bat".
What are the PC requirements for this?
Theoretically you can do Mochi all the way down to 8GB of VRAM, but it means waiting. I feel 16GB would be comfortable for all this.
@@cognibuild I only have 12GB.
As a person who studies AI video for a living, I'm not sure Mochi is the "future" of AI video. It has a lot of flaws when it comes to translating LLM training into output. You can see, for instance, in your pirate ship how the passing of a fully lit ship into shadow causes this huge black pixel bush to form under the prow. That's not good and it happens reliably no matter the subject.
@@dirtydevotee It's a stepping stone for sure, and certainly the best thing we've got for accessible open source atm.
Mow Chee