The sheer variety in this realm of technology is nothing short of astounding. The breadth of possibilities is virtually limitless, from the innovative advancements in artificial intelligence and machine learning to the groundbreaking developments in virtual and augmented reality. New discoveries and improvements enhance how we interact with our digital world each day. Whether it's the latest smartphone equipped with cutting-edge features or sophisticated software applications that streamline our daily tasks, the variety and progress in this sector continually reshape our lives in remarkable ways. Integrating these diverse technologies into varied industries, from healthcare to entertainment, showcases an impressive spectrum of potential benefits and uses, all of which promise to revolutionize our future.
I have an M1 with 16GB and could not get it to run. In comfyui it gets all the way to the sampler and then it spits out an error. The new Aura model will work on my mac but it literally takes 30 minutes. Best to just wait a few weeks and I’m sure it will be sorted.
Eventually we will probably be able to run the Schnell smaller model. But it will likely take an M Max or Ultra with a lot of Ram and Gpu if people are struggling in the PC world with top of the line 4090s lol. We are already like 15x slower than a pc dedicated GPU, but we make up for that with unified memory. I’m on 16GB as well, so probably Flux Dev will be out of range until the LCM version drops.
@@obscuremusictabs5927 Very true. We live in a wild tech boom. What it means for us in the Mac world is that we can safely rely on having these toys a year or two after they first release. Stable Diffusion was 6 months to hacky ways to run Auto 1111, and then about a year to native implementations like Draw Things and Mochi.
YESSIR! We called it. Video is coming. And by this time next year, all of this will run on 4 year old laptops. THERE IS NO MOAT and consumers have no loyalty to the giants. Gonna be a wild decade :)
In the meantime I'm building out a community-led AI solutions development platform for developers and innovators to collaborate on, with access to all the toolsets that may be needed.
Stay tuned.
What leads you to be so optimistic about the pace of generative AI development?
@@userwhosinterestedin well, even though we now have SOTA open source models for image, text, and very soon, video, I’m sure closed source advocates will always point to new frontiers.
Last year, there were predictions that a public GPT-4-level model would be unattainable; all of a sudden, even an 8B model can compete.
Essentially, even if all updates ceased, the tools we have now can be used to augment/automate any digital process.
Funny how your comment is only 8 days old and Flux is already running on laptops with 4 GB of VRAM... (so yes, laptops more than 4 years old)
The Flux Dev license allows commercial usage of images made with Flux Dev; what it doesn't allow is selling Flux Dev, or any finetune of it, as a service, like an image-generation service.
Great! But it seems Poe/Quora are trying to do that. Not surprising.
@@neoultra6528 that's fantastic, and the next line is your character reciting a poem about lemons.
It's funny how people think that Flux is an AI. It's not. It's a trained model for SD. If a prompt asks for CFG and inference steps, it's an SD-prompt-based site.
7:30 "we have Kirby at home" 💀
I've been here ever since your dall-e 3 prompt testing videos. I like how your videos keep progressively getting higher in quality! Keep it up!
12:51 "let's get friogay" 💀
Noticed the same! xD LOL
i like frogs, but not that much.
Must have trained that AI on Alex Jones.
Amazing model. Did my standard "an old man sitting on a tugboat eating a foot long". Looked incredible.
Thanks for mentioning us! Just gave you the PRO plan if you want to try us out
😂 coffee: for fish, by fish. Thanks. Great video.
If you can't reproduce it yourself then it's not open source
Gemma 2 2b is the biggest development in the text gen space IMO. It finally brings a good fast chatbot that you can run just about anywhere.
Its output is horrible though
Facts though, crazy power for 2B parameters
@@newfrontiers5673 it really isn't, for the right use cases and with the right prompts. I used it for a project of mine, and it works really well after some heavy tweaking.
@@newfrontiers5673 It can be good if you use it right and work within its limits. It certainly requires more clever prompts to get consistent results.
Why would we want that though? 70B is the minimum for me; even with 13B you can instantly tell how stupid it is.
FLUX is on another level. I have been playing with it and watching other people's generations, and this is... just insane quality. I am so hyped.
As far as I know, only the Flux "schnell" and "dev" models are open source, not the best "pro" version. But they're still probably the best open source models we currently have.
eh... i dont see it yet
@@KOSMIKFEADRECORDS Imho it is - for me. I'm using it with ComfyUI; I found a great workflow which generates a first image with flux-schnell, then redreams it with flux-dev and upscales it. The results, at least for the things I prompt, are comparable in quality to SD1.5 with custom models (which is very positive), but with much better prompt understanding. I've never seen an open source image model follow my prompts so well. Some results I've gotten so far are just astonishing. It's also much better at rendering interactions between subjects than other models. It makes no sense to generalize and say "this is the best!!", as it depends on what you want to generate and what you like.
That's not luigi, thats green mario
The last name of Mario and Luigi is Mario (that's why they are called the Mario Brothers).
So Luigi's full name is Luigi Mario, and Mario's full name is Mario Mario. Since Luigi is green in colour, Luigi is already the "green Mario".
A Mario in Luigi's clothing, if you will! 😂
Hi. Just looked up a couple of the guys from the website: they are former employees of Stability AI.
12:51 Alex Jones **Heavy Breathing**
I got Flux dev working on my home computer late in the day when it was released. The image quality is great so far, and I was shocked at how easy it was to make a tiled upscaling workflow -- it just works with barely any tile inconsistency, even without any controlnets. The downside is that it is slow -- on a computer where I could generate a 4k+ final image with SDXL in about 12 minutes, Flux takes about an hour. I haven't had a chance to really optimize the step count, though, so it's possible I could do better. It also doesn't really use CFG but its own custom guidance scheme, which means it does not support negative prompts. Depending on what you need to do, this could make life difficult for you (e.g. the "fried rice with no peas" request). And it's not really something I care about, but I've seen people complaining about not being able to do styles well with Flux. It is definitely the case that Flux can make its own style choices unprompted, such as a photographic image with a cartoony element within it.
Would it be much faster to generate a 2k and simply upscale all of your outputs? What resolution are you generating with currently?
@@ghost-user559 I generate an initial image at 1MP, upscale it with an AI model (4xUltrasharp works the best of the ones I’ve tried, even though it is an old model), and then do a second denoising pass at 9MP with strength of 0.2. Yes, doing lower res and upscaling with another model would be faster, but I find those upscales to be very obvious, with the detail looking fake at 100% zoom. So I always want to do a second pass.
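For anyone doing the math on that two-pass workflow, here's a tiny sketch. The 1MP and 9MP figures come from the comment above; the square 1024×1024 starting size and the 3× linear upscale factor are my own assumptions for illustration:

```python
def megapixels(width, height):
    """Image size in megapixels."""
    return width * height / 1_000_000

# Initial generation at roughly 1MP (assumed 1024x1024 here); a 3x
# linear upscale lands near the 9MP target for the strength-0.2
# second denoising pass.
w, h = 1024, 1024
print(megapixels(w, h))          # ~1.05 MP
print(megapixels(w * 3, h * 3))  # ~9.44 MP
```

Note the second pass at strength 0.2 is what regenerates real detail; the upscale model alone only interpolates, which is why the result looks fake at 100% zoom.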
You could also mention how diverse the output is for the same prompt! Many other models seem somewhat over-fine-tuned, outputting almost the same image for a given prompt again and again.
been using it and its ... like wow. for a 1.0? wow.
IM SUPER EXCITED ABOUT THIS MODEL RN
Flux is _awesome._ No disputing that. But there are some caveats. While it may not be (strictly speaking) censored, my understanding is that it isn’t trained on anything nsfw and so it’s naturally not really capable of such generations out-of-the-box. It’s also _not_ going to be an easy model to work with if you want to fine-tune it.
My prediction is that no one will fine tune it, basically.
Actually, I think it will change the strategy for what "fine-tuning" is. For example, one could create hundreds of LoRAs at a significantly reduced cost. Then massive LoRA merges, combining hundreds of smaller LoRAs with Flux, could achieve a similar goal. LoRA merges would be the way forward for models that are cost-prohibitive to train on consumer hardware.
Yeah, it's awesome in prompt understanding. It's a bummer it's lacking NSFW stuff, because this also means it's hard to get interactions between people at all, even SFW ones. But still, I really like the model. I'm using the "Flux Schnell and Dev Workflow with Upscaling" from "Harmeet" for ComfyUI, and even as a non-expert I was able to get really nice results.
@@vomm What Gpu and what amount of Ram and VRAM does it take? And how long to get a batch done?
@@ghost-user559 I don't know the minimum VRAM you need; my AMD graphics card has 20 GB, but I think 12 GB is fine. I have 64 GB of RAM, and it uses 50. It takes 70 seconds for me to generate the final, upscaled image. It should work with 12 GB of VRAM if you use the lowvram parameter for ComfyUI and the MemoryMax parameter, which you could set to a value like 28000M. There is a manual on Reddit for setting it up if you have a 12 GB VRAM graphics card. The result will be the same; it just takes a little longer.
Was really excited about this video, but you are using Flux Pro which is in fact not open source.
I test and talk about the other models later in the video!
@@MattVidPro yeah but on your tests you're comparing 2 paid models
I've been using flux.1 on my 24GB GPU for 3 days already 😁
If it is uncensored, then it is missing some training data.
While some image outputs are bad, some of them are so aesthetically nice that something as simple as colored bottles looks like a perfect work of art.
It's not uncensored. There's a lot of misinformation in your video, confusing Flux dev and pro. Quite misleading, but I'm used to your approximations by now. In addition, the Flux version that is actually useful is the dev one, and it is far from fast. Please inform yourself a bit before wasting people's time. The information is out there.
Does anyone know of a prompting guide? Stable diffusion has ways to add weight to some tokens and deemphasize others. That kind of thing
You entered Kirby twice, that's why Kirby has 2 sets of eyes. No, I'm serious.
Hey Matt, I never get to see your videos this early. So I’m taking the time to say thank you for posting such excellent content. Keep up the awesome work bud!
You're late
For Flux, LOWERING the CFG (known as Flux guidance) down to as low as 1.25 makes it follow the prompt more. It isn't the CFG you know.
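In ComfyUI this is set on the FluxGuidance node rather than the sampler's CFG. A minimal API-format fragment might look like the following; the node link id is a hypothetical placeholder, not an exported graph:

```python
# Sketch of a ComfyUI API-format prompt fragment for Flux guidance.
# The "conditioning" link target ("6") is a placeholder assumption;
# in a real exported workflow it points at your CLIPTextEncode node.
flux_guidance_node = {
    "class_type": "FluxGuidance",
    "inputs": {
        "guidance": 1.25,          # lowered from the common ~3.5 default
        "conditioning": ["6", 0],  # hypothetical upstream node link
    },
}

print(flux_guidance_node["inputs"]["guidance"])  # 1.25
```

The sampler's regular cfg stays at 1.0 for Flux; only this guidance value is tuned.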
Midjourney wins in my book. midjourney’s getting a 6.2 update next month too 😏
One is free.
@@CoolhandLukeSkywalkr when it comes to free image gens, 100% flux
6:05 Obviously Kirby absorbed a hostile leader and became a cartoon caricature of a president.
Bro made that goldfish chin strong as hell😂
HE COMES IN SWINGING THE AI GOAT IS REAL, unbelievable roars the crowd, FLUXTACULAR, says the ai orungatang, An AI day to Aimember indeed I concur
Any way to self host?
You can, but you need a lot of RAM or VRAM
Flux Schnell and Dev are currently NOT uncensored. Rather, they aren't "censored" because that's not how visual models work; they just weren't trained on any NSFW images so Flux has some real problems with some anatomy. It takes special workflows with refiners to fix it in ComfyUI right now.
NOT UNCENSORED
First time I can generate beautiful logo patches with correct text, in styles like: enamel, photorealistic see-through aged stained glass, embroidered, pastels.
My takeaway from this demo is that Ideogram nails it every time.
The specialized tools offered by SmythOS’s integrated development environment (IDE) streamline AI coding and experimentation. This functionality boosts productivity and improves workflow management.
I am realizing that i will need to find money for a whole new machine based around a big ole gpu...
And I love the fact that it's now a problem, because the software is a real possibility
Ye i enjoyed this one. Downloaded.
What is slaps?
Thanks
open-source is going boooom! if you agree, like❤
Hi, the Black Forest is a region in southwest Germany: Schwarzwald. (And all Germans know the cake with cherries and cream, "Schwarzwälder Kirschtorte", and the ham, "Schwarzwälder Schinken".) Greetings from Frankfurt
Bro, I swear I got into all of this because of you, and you keep updating us so well. Thanks for sharing!
On inference steps: the more the better, but with a slightly longer wait time.
It's getting pretty good. I look forward to running some static generations through Kling on my animations channel.
What do you mean with uncensored?
Is this ever likely to run locally? Hope so. When it does, please do a tutorial!
always a good time
Is it somehow possible to import a picture of yourself or a friend and have the same person in different styles or settings? Or do you know what's the best open source tool for that use?
You can't believe what people did with an uncensored AI Image video generator...
bro has 979 THOUSAND runway credits…….. 💀
how bro
where do you switch the safety mode?
Just tested the free version and it is censored. So I am giving this a fail.
The flux pro version is not open source, only the smaller dev version
ewww proprietary, disgusting
To be fair we couldn’t run it locally anyway if we look at what the largest chat models look like
When will open-source video generation come?
Flux is coming to Freepik soon! 👍
paid
I hope open-source AI will bring good text-to-SFX generators
I think there is one but I can’t remember the name.
Using an M2 Max, and generation for just one image takes a while. Like, minutes. I know it would probably be faster with an Nvidia card, but I'm wondering if that's normal. (I thought M2 Max chips had a Neural Engine?)
13:32 can you try this with Kling AI? I thought they were the better option for videos of eating.
That is Mario wearing Luigi's outfit.
Great video, and while it's better than others at copyrighted material etc., it is still not completely coherent and doesn't handle many subjects too well. Holding objects still produces pure quackery. It feels like we've reached a bit of an AI-generation wall. Even the best video gens still create fever dreams. While it's all a tiny baby step in the right direction, it's gonna be slower than we initially thought before we can reliably create a story/movie etc.
My question is: how would they ever know, for the non-commercial aspect of Flux.1 (dev)?
Well, it's not the output they care about. They say you can mostly do what you want with the images. What they will go after people for is image-gen sites trying to host their model for profit. And you cannot use their models for training other models. That is a lot harder to enforce, but because of the watermarks, they could definitely tell if a model was overtrained on Flux images, and the merge would be very obvious to them.
@@ghost-user559 gotcha , thanks !
lol Mario with Luigi's clothes
Anyone looking to buy a new video card to generate AI, please make the GB of VRAM the number 1 priority.
The GB of VRAM directly determines what AI you can run.
The speed and number of CUDA cores, and the VRAM speed, only affect generation time.
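A back-of-the-envelope check on why VRAM is the hard constraint: weights alone scale with parameter count times precision. This is a rough sketch, not an exact figure; real usage adds the text encoders, VAE, and activation overhead on top:

```python
def weights_gb(params_billions, bits_per_param):
    """Approximate memory just to hold the model weights, in decimal GB."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# Flux.1 is a ~12B-parameter transformer:
print(weights_gb(12, 16))  # 24.0 GB at fp16/bf16 -- won't fit a 16 GB card
print(weights_gb(12, 8))   # 12.0 GB at fp8 -- fits in 16 GB
```

This is why halving the precision (fp16 to fp8) matters far more than a faster core count: it changes *whether* the model runs at all, not just how fast.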
That model is actually the first base one that knows a lot of cars, especially European ones, which were lacking. It's amazing; that's what I expect from a base model. Sadly, I think it won't be possible to train on it unless you have a NASA PC.
Can I run it locally on my pc?
Your channel’s videos stopped getting recommended to me for some reason :(
You have failed to mention its failures, WTF. And you didn't even spell Willem Dafoe correctly. You need to be more objective.
12:50 Flux is turning the damn frogs gay!
I don't know what the heck you guys are doing, but my results look pretty much the same over and over; also, if I use too long a prompt, the AI just ignores it.
Safety tolerance should be all the way up! because you want more tolerance
Floop this gote XDXD
Matt, what happened to your room? I missed the lemon! 😁
upgrades in progress...
It's not Uncensored. It's censored as f#ck.
5:44 Lawsuit incoming
time for me to get a better computer 🖥 😅 Honest question: What computer setup would you (this audience + Matt) get if you were going to upgrade this month? (not Mac because I have all android). Embarrassed to say I have been doing all of my AI stuff on my Chromebook and Chromebook Plus for 2 years (quite effectively, lol😂), but with Open Source taking off... it is time 🚀 🤖 Can't get away without running locally anymore. 🤗
Thanks all who respond😃👍🌴
What’s your budget?
@jasonhemphill8525 up to $4,000. less would be better. 😁 Thank you for caring 🤗 I will be needing to build some custom models for my business... and all of the normal video stuff. I'm hoping to make a custom animated avatar as a replacement for me in my videos (when we get there, tech makes it easier and easier so may be less complicated soon). and I am kind of thinking I will end up customizing a robot down the line for my kitchen methods. Not sure if the out-of-box 'casa' robots will know how to take the seeds and veins out of habaneros 😅🌶 (probably will if I just ask). Not getting too far ahead... I am not sure which environment I will use. Nvidia looks like they are partnering with this path, but i may go the META route open-sourced, a good setup to integrate with any. Thanks again, Jason! 😃
@@Emily-qg8iv I don’t know too much about model fine tuning or video avatars, but with a budget of around $4000 you can do a lot. In the world of AI, VRAM is king. If you can find a good deal on used 3090s, you can run two for a total of 48GB, which should be enough for inference on medium-sized models at good quants.
How familiar are you with what I am saying so far?
@jasonhemphill8525 Kind of familiar... I've never used a setup like this (but I want to, and every time I see a 'run locally' application, I'm like 'dang it, man 😁 I can't do that with this'). Do you think two 3090s?
It will be a change from doing everything in the cloud and from Chromebook apps with the occasional Linux blunder, but I want to have options... I do know there's a chance we may get a whole new way to interact 🤖🔉🤖🌱.. but I think computers will be around for the short-term future, and perhaps longer. I do wonder about power consumption as well. (I live in a rental, so there are not many options to add new dedicated circuits for the setup.)
Currently, I only have a large monitor 🙃 (that I use to make my screen bigger).
I don't have a computer to put the graphics card in; basically, I'm building one from the ground up. I considered doing a search like MattVidPro did in one of his videos a while back to spec out a build, but rather than relying on an LLM that may not cool the computer properly, etc., I wanted to ask a human who actually knows, aka you 😁 Don't feel obligated, I'd just like a bit of direction if possible 🙏😊
As for my familiarity with the AI world: I've been following it closely and using/trying everything I can since ChatGPT 3.5 came out, maybe even before that, because I started playing with Midjourney earlier. (But as far as playing on a good computer, no experience yet.) 🌱🚀🤖😃
@@Emily-qg8iv that’s alright.
Well, the short version is that AI models are big. REALLY big. GPUs are the fastest at running them, but they have a major size constraint: VRAM. At the moment, 24 GB is the most VRAM on consumer-level cards. There are cards with more, but they're nearly an order of magnitude more expensive. For any of these models to work, they need to "fit" in the VRAM you have available. A used 3090 is the best compromise between VRAM capacity and price (the 3090 Ti and 4090 are faster but have the same amount of VRAM).
Some AI applications can also run on the CPU and system memory. It tends to be wayyyyyyy slower, but RAM capacity is much higher and much cheaper per gigabyte. Although great strides have been made to make CPU inference faster, GPUs are still king.
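The "does it fit in VRAM" rule of thumb above can be sketched as a quick back-of-envelope calculation. This is my own illustration, not anything from the thread: the ~20% overhead factor for activations/context is a loose assumption, and real memory use varies by runtime and context length.

```python
def fits_in_vram(params_billions: float, bits_per_weight: float,
                 vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: do the quantized weights (plus ~20% overhead for
    activations/context) fit in the available VRAM?

    One billion parameters at 8 bits/weight is ~1 GB of weights,
    so weights_gb = params_billions * bits_per_weight / 8.
    """
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb * overhead <= vram_gb

# A 70B model on two used 3090s (48 GB total):
print(fits_in_vram(70, 4, 48))   # 4-bit quant: ~35 GB weights -> True
print(fits_in_vram(70, 8, 48))   # 8-bit quant: ~70 GB weights -> False
```

This is why "medium-sized models at good quants" is the sweet spot for a 48 GB dual-3090 setup: 4-bit quantization roughly quarters the footprint of a full-precision model with only a modest quality hit.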
If it were me, I'd get a relatively low-end Ryzen chip on AM5, like a 7600. Pair that with around 128 GB of DDR5 at decent speeds, two 3090s if you can get a good deal on them, and a case and power supply that can fit and feed all of that.
Try looking at completed builds with "AI" in the title on the PCPartPicker website. r/buildapcsales on Reddit is a great resource to save some money too, and try r/buildapc as well, since those folks can get into the nitty-gritty of EXACTLY what to buy.
We do need to thank Stable Diffusion for starting us in this direction, even if it didn't end very well.
Flux Pro isn't open source.
That's the real Walter White, not the actor from the tv show.
The sheer variety in this realm of technology is nothing short of astounding. The breadth of possibilities is virtually limitless, from the innovative advancements in artificial intelligence and machine learning to the groundbreaking developments in virtual and augmented reality. New discoveries and improvements enhance how we interact with our digital world each day. Whether it's the latest smartphone equipped with cutting-edge features or sophisticated software applications that streamline our daily tasks, the variety and progress in this sector continually reshape our lives in remarkable ways. Integrating these diverse technologies into varied industries, from healthcare to entertainment, showcases an impressive spectrum of potential benefits and uses, all of which promise to revolutionize our future.
bro, did mom send you to the attic?
keep up with the good content ❤
Floop this gote
Quite uncensored indeed.
not having to look at politicians would be nice.
I can't believe people are still saying "slaps"...
Complex composition? I can already make it fail with basic ideas…
floop this gote
Mazda miata :D
new room?
I haven't looked; can you run this on a Mac?
I have an M1 with 16 GB and couldn't get it to run. In ComfyUI it gets all the way to the sampler and then spits out an error. The new Aura model will work on my Mac, but it literally takes 30 minutes. Best to just wait a few weeks; I'm sure it will be sorted.
Eventually we'll probably be able to run the smaller Schnell model, but it will likely take an M-series Max or Ultra with a lot of RAM and GPU cores if people are struggling in the PC world with top-of-the-line 4090s, lol. We're already like 15x slower than a dedicated PC GPU, but we make up for that with unified memory. I'm on 16 GB as well, so Flux Dev will probably be out of range until an LCM version drops.
@@ghost-user559 whatever happens, in 6 months this will all be in the dust of the next best thing. exciting times.
@@obscuremusictabs5927 Very true. We live in a wild tech boom. What it means for us in the Mac world is that we can safely rely on having these toys a year or two after they first release. Stable Diffusion took about 6 months to get hacky ways to run Automatic1111, and then about a year for native implementations like Draw Things and Mochi.
♥️
Is this the best image generator right now for making girls and porn photos? Looking for consistency and creating an AI model.
why did the fish in ideogram look angry and evil
It can never find a free GPU on its server when I've used it. Crap.
These guppies are good at kissing their glass bowls! 😂
Guess flux is flexing
these new models will get shut down quickly for copyright violation...especially if channels are profiting from their display.