Dev is just way better, no competition. Nice inpainting workflow.
agreed!
Plzzzzzz tell us how to use Flux Dev for free anywhere??? Online or... @@PixelEasel
@@PixelEasel Now use SCHNELL with a high "resolution" in pixels and get a surprise 😎 Maybe you need to make a new video for that.
checking @@KingBerryBerry
the consistency of the woman with different expressions is quite a game changer
agree!
This worked. You are a gentleman and a scholar, sir...
thanks 😊!
I agree with you that dev looks more photorealistic. Thank you very much for your informative summary!
thanks for commenting!!
amazing and detailed comparison, liked and subscribed
Thanks for giving us this comparison 🖖
Thank you so much. You could make other comparisons with the models (SDXL Turbo, SDXL Lightning, Stable Diffusion 3) vs Dev and Schnell.
Thank you for the great tutorial.
thanks for a great comment!
Why would you do fewer steps in Schnell and then go on about its lower quality through the whole video???? Clearly this is not accurate; Schnell has sick quality with 30+ steps.
Thanks! Very informative and on point.
cheers 🍻
Great content and video! Keep up the great work!
Yes, I think Flux is awesome. I tried Stable Diffusion on MimicPC, and of course that product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
a big thank you, very informative.
thanks a lot!
Thank you very much, bro. It is a very clear comparison.
thanks 😊
There's a Schnell & Dev combined model.
4 steps, near-Dev quality, best of both worlds. Use that instead.
What is that version? 😮 The pro one?
where can we find the link?
Sauce?
it doesn't understand the prompts as well as the standalone Dev.
Dude, share the source if you're gonna claim something like that.
Actually the Schnell illustration is much better than the Dev one - Dev looks like a photo mixed with an illustration.
Wow, you've got a point. Maybe a comparison with anime and painting styles should be the next one.
Both models are awesome! Each one is tailored for a specific use.
I like Flux. I use both Schnell and Dev with the GGUF extension in ComfyUI. Fast and furious. The only thing I don't like is the trend since SD3 of "problematic" stuff, mostly NSFW, being blocked somehow; even when forcing it through workarounds in the prompt, you will barely get a nice result. But hey... that's what SD and SDXL are for.
I feel like in order to make it fair both SCHNELL and DEV should've been tested with the same number of steps, not 4 vs 20.
❤😊thx sir
you're welcome 😊
Can Flux generate a normal female face without a square jaw and a dimple on the chin? I tried but couldn't get one. All the faces looked the same.
Could you give us the prompt for the city at night with neons? We only see the prompt from the previous images.
I feel this is way better than DALL-E, and I don't even need to compare this to Google...
Imagen 3 is even better than Flux at making people (fingers & toes), photorealism, and "true to the prompt" accuracy.
@@gameon2000 Will try
@@gameon2000 Ohh thanks for the info I will check it out
You broke me. I'm desperate to understand: how do you use an image as a style reference like we do in Midjourney? How do you keep characters consistent? How do I do these basic inpaintings?
very nice!! and now, can you compare "flux dev" and "flux pro"? thank you very much! 😉
Flux Dev is the open-weights, guidance-distilled model, released under a non-commercial license for research and tinkering, while Flux Pro is the top-tier version, available only through the API, with the best quality and commercial licensing for professional users.
I really wish I knew how the model got down to 11 gigs; something had to go, right? Perhaps that additional bump in hand training? Quality, etc. This could have a huge impact on your test results. Just think about it: if a slight difference in scaling impacts the model, what does reducing the original model by 50% do? I'm not saying we shouldn't use the smaller model; I have to use it to get anything out in a reasonable amount of time. I'm just curious about the process of reducing model sizes. What exactly is it that we lost?
The reduction in model size usually involves a process called quantization, which essentially compresses the data by using fewer bits to represent the numbers (weights) in the model. In machine learning, weights are often stored as floating-point numbers, like FP32 (32-bit floating point) or FP16 (16-bit floating point). Reducing the model size could mean switching from FP32 to something smaller like FP16 or even FP8.
When you use smaller numbers like FP8, the model consumes less memory and processes faster, but there is a trade-off: you lose some precision in the weights. This loss in precision can affect the model's accuracy or the quality of the generated images.
However, the impact of this precision loss varies depending on the model and its use case. Specialized models, designed for specific tasks, might still perform well with lower precision because they don't need to generalize across as many scenarios. On the other hand, more generalized models that need to work well across a wide range of inputs may experience a noticeable drop in performance.
So, when the model was reduced from 30GB to 11GB, some precision was likely sacrificed, which could potentially affect the quality of its outputs, especially in more complex or diverse tasks. And that is exactly what we can see in the comparison.
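If you want to see the effect for yourself, here's a minimal sketch (plain NumPy with made-up weight values, not Flux's actual quantization code) of what dropping precision does to storage and accuracy. For scale: a 12-billion-parameter model at 16 bits per weight is roughly 24 GB, and at 8 bits roughly 12 GB, which lines up with the file sizes being discussed.

```python
# Minimal sketch (NumPy only, not Flux's actual quantization code):
# cast made-up "weights" to lower precision and measure what's lost.
import numpy as np

rng = np.random.default_rng(0)
w32 = rng.normal(0, 0.02, size=1_000_000).astype(np.float32)  # fake weights

# FP32 -> FP16: half the bytes, a small rounding error per weight
w16 = w32.astype(np.float16)

# Simulated 8-bit quantization: map each weight to one of 255 integer levels
scale = np.abs(w32).max() / 127.0
w8 = np.round(w32 / scale).astype(np.int8)   # stored as 1 byte per weight
w8_restored = w8.astype(np.float32) * scale  # dequantized at inference time

for name, w in [("fp16", w16.astype(np.float32)), ("int8", w8_restored)]:
    print(f"{name}: mean abs error vs fp32 = {np.abs(w - w32).mean():.2e}")

print(f"bytes: fp32={w32.nbytes}, fp16={w16.nbytes}, int8={w8.nbytes}")
```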
Is it possible to run Flux Schnell with img2img input? I only see it on the Dev version, but online it costs 10x more, which is frustrating.
can you share your template for ComfyUI? Thanks for the video, interesting.
Error in inpaint workflow:
When loading the graph, the following node types were not found:
GetImageSizeAndCount
MaskPreview+
Image Save
Nodes that have failed to load will show as red on the graph.
try to update comfy. it should work
Oh, and by the way, I'm using the Dev FP8 model.
To be honest, it's just a matter of taste, since the differences are mostly placebo-like and depend on the prompting and the random seed. But it's interesting to see that most humans are biased towards the Dev version because they think it MUST be better (the more "expensive" something is, the "better" it must be). That's how the human psyche works 😉
Schnell looks more cartoonish and Dev more realistic overall.
Next: Flux Dev FP8 vs FP32
Is this your real voice or AI?
By the way, keep Fluxing, because Flux is just getting started :)
Clearly an AI voice, as it's lacking inflection.
@@LukasBazyluk1117 Interesting
How much GPU VRAM or CPU RAM do we need for the model?
Wish I could get it to work. I tried in Pinokio and ComfyUI and all I get are errors. I have an RTX 3070 w/8GB.
I didn't understand how you did it without changing anything... it was too vague for me. And I was waiting for exactly such a tutorial...
I've had your problem before. If you don't think you're doing anything wrong, you can either reboot and rebuild the model or go to MimicPC and run Flux there. I was frustrated with the fuzzy images I was getting, and after trying other solutions that didn't work, I used MimicPC's online ComfyUI to run Flux. I think the images it produced were very good, though I don't know what to do with them. I was really surprised by the detail of the images it generated.
Hey mate are you able to post a copy of the complete workflow shown at 9:00?
you can find it in the description... just clean it up a bit
@@PixelEasel not exactly sure how to yet, still a noob but thanks for reply.
Schnell gives me better results for whatev reason so far
This sounds exactly like the "Andrew" voice from Microsoft Azure Speech AI. When your channel won't hit monetization till the year 3918, commercial use doesn't really matter.
I think Schnell sometimes produced better images than Dev…
Take the body cream. Schnell put a label on it without being prompted, and it's a better angle than the pot of white stuff, or whatever it was, that Dev produced.
Also, comparing the same two prompts will always produce random images, some good, some bad, whether Dev or Schnell is being used.
I've had some absolutely shockingly bad Dev results, like really bad early first-version DALL-E images, using detailed prompts of mine that DALL-E / ChatGPT absolutely nailed. In my opinion these comparison videos just don't work, as what they produce is just too random.
This morning I started to use Flux Dev, but it only allows me to generate one image; the next generation is interrupted by this error: "Error occurred when executing SamplerCustomAdvanced: list index out of range". UPDATE: I rolled back to an earlier state from the experimental snapshot manager menu, and the problem is gone. 😊
What GPU are you using?
@@master-tp6wc A 4080 Super, with 96 gigs of RAM. Reddit was flooded with this Dev-version memory issue, even for those with 4090 cards.
Also, one more detail: Black Forest Labs says that outputs of the Dev model can themselves be used for commercial purposes. The model itself can't be used for profit, though. So you can't integrate it into your product, or use it as the basis for your commercial fine-tune and such. I'm intrigued how Civitai fits into this.
I don't quite understand.
Is it allowed to sell the images?
What is forbidden, then?
@@valheyrie404 It is allowed to sell images that you generated. It is not allowed (without a direct agreement) to host the model and sell access to it, like Civitai and other generation services do. Although it seems Civitai has already reached some sort of commercial agreement with the authors. But also, you cannot include the "dev" model in some sort of commercial software product, like a photo editor, or train your own commercial (production) model using the "dev" model as a basis.
@@AlistairKarim That's clearer. Thank you! :)
@@valheyrie404 Yes, you can sell your generated images.
What you cannot use commercially, by integrating it into a product such as an application, is the model itself (Dev). But you can sell the generated images.
@@valheyrie404 I apologize. While many people, including myself (and even chatbots like Claude), generally agree that outputs seem viable to sell, the license is actually quite confusing. The more I read it, the more confusing it becomes. Until Black Forest Labs specifically clarifies this point, I advise being careful for now.
Training on datasets that include tons of copyrighted art and then telling others it's not for commercial use until licensed is mind-blowing.
I mean, realistically speaking, they won't know if you're using their model or not.
@@jassimibrahim6535 If you copy & paste the image into a new document, yes. ComfyUI and Automatic1111 write the model and workflow info into the image's metadata.
... But if you tell nobody, I won't either 😏
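If you're curious, here's a minimal sketch of how to check that metadata yourself; it assumes Pillow is installed, and "output.png" is a hypothetical ComfyUI output file:

```python
# Minimal sketch: print the metadata ComfyUI embeds in the PNGs it saves.
# Requires Pillow (pip install pillow); "output.png" is hypothetical.
from PIL import Image

img = Image.open("output.png")

# ComfyUI stores the prompt/workflow as JSON in PNG text chunks;
# copy-pasting just the pixels into a new file drops these chunks.
for key, value in img.info.items():
    print(f"{key}: {str(value)[:100]}")
```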
"Schnell" ist german for "fast" btw
of course
Oh interesting! I'm learning Dutch, and "fast" is "snel" in Dutch, which seems counterintuitive to me because it sounds like "snail", which is a very, very slow animal.
It's too bad AI voice generation like in this video isn't nearly as good as AI image generation.
I'm the one that's struggling with talking. This AI is better at talking than me 😅
Meanwhile we can do all the same or better with SDXL, and do so 10x faster. Flux is just overhyped mediocrity right now.
This is pretty much the comment I was looking for: you do photorealism with it and get results that good?
And what about prompt adherence?
Flux is very bad at nature: no forests, no trees. Only one scene with a path in the middle.
At 3:49, the difference is very obvious. Schnell is artificial, especially the lighting; the pants seem plasticky, like Midjourney, SDXL, and many others.
Dev has a very natural lighting ambience.
Annoying AI voice
If they want to capture a large part of the existing market share others have, this is TOO complicated.