CogX is a ray of hope, tbh, because until it appeared it seemed all video models were going to stay closed source and unable to run on local hardware.
I just played with the open source LTX model on my own machine and wow. Going to try to get Mimic to add it.
Love how you focus on open source models, keep it up. I think you'd be surprised how many of your subs have very powerful computers.
I'd do a poll, but not sure I'd get enough response to really gauge that. Maybe when we hit 100k 😁
CogX is so legit, I can't wait to share what I've done so far. It works really well in ComfyUI on a 3090; I've gotten 10-second-long clips (8 fps, run through FILM). 😊
Amazing like always, I think I saw cog on the pinokio too
Right now there are really good deals on PCs that could run this. I found one mini PC with an i9 processor, a Radeon 680M GPU, 32 GB of RAM, and a 2 TB SSD for about $500. It also had a 40 Gbps USB4 port, so you could add an external GPU if you wanted to. (8c/16t CPU, 12-core GPU)
I haven't found a deal like that anywhere. Most PCs like this are around $3,000. Where did you find it? Also, some of these video generators need like 76 GB just to run smoothly on one GPU…
What website?
@@OllieNCS CogVideoX runs pretty well on 8–12 GB of VRAM, but yeah, something like Mochi just isn't feasible on consumer hardware. You can get something with 12 or even 16 GB of VRAM for $1,000–$1,200 (SMI7N47S01 Slate 8 Mesh Gaming PC), but I don't know about $500.
GMKtech mini PC...
Yes, I just made a video on 6 gaming PC's you can buy under $500.
I have a question about MimicPC, which runs ComfyUI workflows on remote GPUs: can I use my custom nodes? I write custom nodes and add them directly to my ComfyUI directory — for character consistency, camera motions, pose mimicking, and image processing.
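For context, a ComfyUI custom node is just a Python class dropped into the `custom_nodes` folder, so the question comes down to whether the service lets you upload that folder. A minimal sketch of the node shape (the class name `InvertImage` and its logic are illustrative, not from any real node pack):

```python
# Minimal ComfyUI custom node sketch. ComfyUI discovers nodes via the
# NODE_CLASS_MAPPINGS dict in a module placed under custom_nodes/.
class InvertImage:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets for the graph editor.
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)   # output socket types
    FUNCTION = "invert"         # method ComfyUI calls to execute the node
    CATEGORY = "image/processing"

    def invert(self, image):
        # IMAGE tensors in ComfyUI are floats in [0, 1], so inversion
        # is just 1.0 - value. Outputs are returned as a tuple.
        return (1.0 - image,)


# Name shown in the editor -> implementing class.
NODE_CLASS_MAPPINGS = {"InvertImage": InvertImage}
```

Whether this works on a hosted service depends on whether it gives you filesystem access to the `custom_nodes` directory of its ComfyUI install — worth confirming with MimicPC directly.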
Interesting. I'd still be interested in what can be done, after a long installation process, with a 1660 GPU with 8 GB. But nice at the price.
First, your card is supported by cuDNN; the NVIDIA CUDA Deep Neural Network library is mandatory to be able to use your GPU. But because you only have 8 GB, it's almost useless, since you'll be limited to small models, and I don't know what speed you can reach with 1,408 cores. I have 7,168 cores with 24 GB of VRAM, and images generate relatively fast with any workflow, but video is very slow. So you'd be better off following Bob's video and using ComfyUI with remote GPUs on MimicPC.
Check the LTX Video too, seems like similar to this.
I really like to come here and check out all the things I wouldn't have time to research myself, condensed into a short video. But @1:15, when you talk about return on investment, you forgot the craziest ones: those for whom the return is the creation itself, and money is just a tool to achieve it. Like me — I don't make money, I spend money, and I have a lot of fun. My goal is to create a lot of incredible stuff, and my return is the results and the comments from viewers!
Is Genmo Mochi 1 available on MimicPC? I've noticed it's a bit better, and I've gotten some awesome results with some wacky inputs!
There actually is a workflow for that: home.mimicpc.com/app-image-share?key=73f27b5250394e0d8aa1a818529f65a5
Not sure about 1.5. I still think I get cleaner results on the older Cog version, just without the longer durations. I found 1.5 makes very jumpy movements. I also quite like the LCM scheduler; again, it stops the distortion. I wonder why your node labels are in Chinese!
Because the folks with Mimic made that workflow, and are Chinese.
I liked your old background music better. The track was called silence. Great show though.
I gotta put my glasses on before reading the video titles. Was actually pretty shocked.
🤣🤣🤣🤣
You should review the new version of Suno AI. They've introduced Suno v4, which has better audio quality than v3 and v2: cleaner audio, sharper lyrics, and more dynamic song structures. It also has other updates such as Personas and Remasters.
There are so many better things on the market today, why are you recommending something that is not good at all?
This runs locally on consumer hardware and for that purpose, it's about as good as it gets right now.
The better services all rely on monthly subscriptions to huge companies that restrict the content people can make. This is open source and therefore preferable for many people; it will only improve as the community fine-tunes and improves it.
This is beyond poor.
@@christiaantheron3318 The point is that it's on your system, free, AND in active development. Nobody expects this kind of thing to be a final product. The fact that you can run this and get anywhere near this quality without having to pay somebody for it is just another AI miracle that people are beginning to take for granted.
When you trash early developmental work that's just getting started, you miss the point of why they share it in the first place. Not sure what your expectations are.
That's natural, but give it 5 years. It will surely be amazing by then.