Not hyperbole, I learned more from this video than from all other sources combined.
as usual Matteo
same
As is always the case with Matt3o's videos :)
Don't underestimate the base knowledge the previous videos provided; it made following this one way easier.
@@endymionspr9916 I'm a complete newbie to all of this and only got interested in it a couple of days ago. This video made almost everything I've learned in the last 2 days actually make practical sense.
I have been waiting with bated breath for your thoughts and analysis on this model. Thank you Matt3o!
exactly this
Thanks Matteo for sharing the workflows, on top of the brilliant tutorial 🙏🏼🙏🏼I was able to recreate the Mona Lisa img2img and am sooo thrilled and happy!
This channel is a gold mine ! Really love your work and thank you for putting in the hours and sharing this wealth of knowledge with us ! thank you
Great work!!! All these videos are helping a lot to understand exactly how to use Flux and ComfyUI! It doesn't matter if the videos are long and the topics are complicated; when they are explained the way you do it, that's more than enough! Thank you for the dedication you put in!
10 seconds in, I already liked... thanks Matheo! always insightful
That explanation about vectors and variety is the first time I've seen anyone talk about this, yet for me it has been one of the biggest advantages Stable Diffusion has over Midjourney. With SD I could make thousands of images from the same prompt and still keep finding new, different-looking results, while with MJ anything more than a handful of images felt like a waste of GPU hours, since it just keeps reimagining the same thing again and again. Luckily MJ has the weird and chaos parameters, which help with this somewhat. It's awesome to see you already found one trick to make Flux less stubborn. IPAdapter is going to make a huge difference here, can't wait ♥
Matteo: "This is getting too long"
Me: " So short :( "
Same
The amount of knowledge I gain from this channel. Thank you so much for all the testing and explaining!
A day that starts with a new video from you is always a good day! Thanks for taking the time, Matteo.
wow, thank you for the cool trick of adding noise to the image once it's halfway generated!
Great new nodes, love the way you see something that would be useful, and you just make it. Great work as always
the community is happy to have you !
Wow. Absolutely brilliant video! So much depth and clear examples! Thank you Matt3o for all your hard work!
Just now getting into flux. As usual. Another amazing video. Thanks for your deep dive.
WOW, WOW, WOW! That Mona Lisa img-2-img towards the end was just stunning! Thank you Matteo 🙏🏽🙏🏽🙏🏽 This was so educational and inspiring, explaining things that no other channel has.
Really impressive. Thank you so much for the details that you are presenting and different options you are exploring.
Thanks!
this is an amazing channel. no clickbait! Thank you Matteo!
I stumbled across the Flux Sampler Parameters node early last week and couldn't understand why there were no drop downs for Sampler and Scheduler, now I know why and that's really cool :) Great video as always.
This is the most informative video about flux that I've seen yet! Very nice job!
No filler content in the video, all of it very insightful. I love it. Thanks a million.
Thank you very much for the thorough analysis. Looking forward to the Ipadapter. Keep up the great work.
thank you, this is the first truly detailed deep dive into flux.
How is this guy not the top guide for Flux?! Amazing information
lol - "And here have drag queen Mona Lisa." 🤣One of your best videos, Matteo! Thank you so much for sharing.
I'm really impressed about how well you know what you're talking about. This video is highly valuable
I think I speak for everyone when I say: never feel like your videos are too long or that you need to wrap it up because it is "too much". Personally I feel you are the BEST person making informative content about all of this. You never once in this video went into annoying hype-youtuber mode, begging me to like, subscribe and join your Patreon before you even engage me with any content, or wasted my time with clickbait "THE GAME HAS CHANGED! A.I WILL NEVER BE THE SAME!" titles. Everything you shared was amazingly informative. You seem to actually know what you are talking about, unlike other popular "A.I image experts" who just paraphrase the explanations on GitHub pages and are clearly pretending to know what they are doing. I am so excited every time you post a video because I KNOW I will learn something that will improve my process and make me better at using this tech. Thank you Matteo for not being the "HEY GUYS! What's going on! Todaaaaay I'm going to teach you how to..." type of youtuber.
I'm gonna frame this. I would never put you through all I hate about youtube!
Underrated comment.
Thanks!
Oh crap!! Why did I not see this sooner? I watched a lot of videos on YouTube made by self-nominated "experts" who seem to not even know the basics. I just started AI image generation about a week ago and I've learned more from this video than from any other. Thanks a lot!
Thanks
You are one of the best sources of information, Matt3o! I really enjoy your work.
Wow, this video was filled to the brim with useful information. I had to pause multiple times to take notes, despite having worked with ComfyUI for well over a year and considering myself quite familiar with it. Even small things like having to use a different seed when you re-sample an image to prevent burning are good to know; it took me hours of debugging a huge workflow until I figured that out on my own some months ago. I also didn't know there is now such an easy way to do plotting.
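To make that seed tip concrete, here is a minimal sketch of the idea in plain PyTorch (not the actual ComfyUI nodes; the helper name and shapes are made up for illustration): the second pass gets its own seed so the same noise pattern isn't re-applied to the latent.

```python
import torch

def add_noise(latent: torch.Tensor, strength: float, seed: int) -> torch.Tensor:
    """Toy stand-in for a re-sample pass: blend fresh Gaussian noise into a latent."""
    gen = torch.Generator().manual_seed(seed)
    noise = torch.randn(latent.shape, generator=gen)
    return (1.0 - strength) * latent + strength * noise

latent = torch.randn(1, 16, 136, 112)               # e.g. a Flux latent for an 896x1088 image
first_pass = add_noise(latent, 0.5, seed=42)         # first sampling pass
second_pass = add_noise(first_pass, 0.3, seed=43)    # re-sample with a DIFFERENT seed to avoid burning
```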
The best explanation and test. All respect to you for your effort You always simplify things and explain them in a good way 🙏❤️
Hey, amazing video! I'm really impressed by how deeply you understand the workflows and the 'why' behind every connection and function in ComfyUI with Flux. It's clear that you're not just copying and pasting workflows; you really know the 'deep' meaning behind everything. Can I ask how you learned all of this? What exactly should I study or focus on to develop a similar level of understanding? I want to move beyond just following workflows to truly mastering the concepts. Thanks for sharing your knowledge!
thanks for the nice words. developing extensions for Comfy really helps understand the inner workings. I basically learned stable diffusion by reading the ComfyUI code. It's a slow process because there's very little documentation or help from other devs, but it's doable if you are determined :)
@@latentvision thx, that's why you understand so well.
thanks for making this @latentvision this is a GREAT video! I am super excited to implement this in 3D space.
Great video explaining flux! I'm always looking forward to your content, thanks!
Extremely clear and easy to follow, thank you very much for your work.
Thank you for showing approaches and experiments that you perform to explore how the model works. Very interesting
I'm as mindblown as usual by your videos. Thank you!
Trying to run Flux on an M1 Mac has been very challenging to say the least. Your image-to-image workflow has given me by far the best results, both in generation time and image quality. Thank you so much, very much looking forward to the IPAdapter when released...👏🏻
thanks for the explanation, always detailed.
I am surprised you did not mention the absence of negative prompt in FLUX.
Otherwise, great video as always! :D Thank you so much!
This is a great video! Great job. Thank you!
Excellent review 👍
🙏Thank you for the shout out. Always here to help. 🙏
🙏
Thanks, Matt! Super interesting!
Always the right video subject when needed, bravo to you.
Flux + Matteo. Can it get any better?
Oh yes: Flux + Matteo + Flux IP Adapter😊
Thanks for the video! So the Schnell version can be used commercially?
both can, but for dev/pro you need to pay. schnell is open
@@latentvision Thanks a lot for your answer! I am very new to ComfyUI, still learning the basics. Do you recommend any setup for video upscaling using no more than 16 GB RAM on an M1 Mac?
Overall great analysis. I also managed to learn some new stuff, thank you!
Yes, I think Flux is awesome. I tried Stable Diffusion on Mimicpc, and of course that product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession and this fulfills exactly what I need for my career.
Amazing FLUX tutorial, thanks!! 👍
I actually learned so much, thank you!!
So happy to have you on team Flux!! ❤
very useful and amazing work. Thank you!
great video. always love the information that you share
such a great video again. you always explain things so welll... really appreciate this.
WOW! That was so interesting. Thank you. Oh, by the way, truly beautiful images. Very inspiring.
This video describes Flux wonderfully and gave me a better understanding of it. But I'm a newbie and don't have advanced skills, so I'd like to ask: what can I do to run Flux better?
Thanks again, Matteo. The definitive, balanced opinion on all the things that make open-source gen-AI great.
Thanks Matteo! Useful as always.
My best lullaby channel
I'll take that as a compliment... 😅
Thank you Matteo for the detailed analysis, very informative. Sadly with Flux, as you pointed out, the two main obstacles are the VRAM requirements and the commercial license; still, Black Forest were generous with the community and shared this excellent open-source model with us.
Amazing as always !
Fantastic video. Solid explanations.
invaluable video, and channel 🙌
🙌
@@latentvision watched it at least 3 more times since and have absorbed so much valuable info about Flux and comfy/diffusion in general. Your new nodes showcased here ("Text Encode for Sampler Params" and "Flux Sampler Parameters") are amazingly useful for testing, reminding me of the early days of Automatic1111's xyz grid and its ability to use simple syntax to accomplish a swath of tests in one fell swoop! Really can't thank you enough for it all. Would absolutely be interested in a part 2 Flux video (in case it's not obvious!). Also, between the various controlnets, lora training, plus IPA (goes without saying here), I'm eager to hear your perspectives, tips, and more of your gargantuan wisdom :)
thank you for sharing your knowledge.
Other things to investigate with Flux: there is a GGUF version which, combined with the T5 encoder, does wonders (all the Q weights are available too, to match your hardware). The next thing to investigate is all the new LoRAs.
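A rough way to see why those quantized weights matter (back-of-the-envelope only; this assumes the commonly cited ~12B parameter count for the Flux transformer and ignores the T5/CLIP encoders, the VAE, activations and per-tensor overhead, so treat the numbers as lower bounds):

```python
# Approximate memory just for the Flux transformer weights at different precisions.
params = 12e9  # assumed ~12B parameters
for name, bits in [("fp16/bf16", 16), ("fp8", 8), ("Q4 GGUF / nf4 (approx.)", 4.5)]:
    print(f"{name:>24}: ~{params * bits / 8 / 1024**3:.1f} GiB")
```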
This guy is proof that we are not alone in the universe.
🤣okay thanks for the laugh
Wohoo another video from Matteo! Next video Ipadapter in flux? 👀
hopefully yes :)
@@latentvision
@@latentvision👏👏
@@latentvision can't wait for it 🥰
Super cool tricks, looking forward to the IPAdapter!
Great video! Learned lots of stuff!
Could we get some coverage on upscaling Flux when combining with the noise injection solution?
Thank you, there is always something more from you. 🤗 I also signed the open letter 😉
just made a HKD 80 support!!! Thanks again.
awesome video, many thanks
Great video! Thanks for sharing!
Thank you for this precious resource, fantastic video! Can I ask, in your opinion, how we can improve the realism of skin without using the Flux Realism LORA? Can it be done with prompts?
hey, thanks! The best thing you can do is work with noise injection. The new Blueberry model will be out soon; that will probably help.
really cool as always! ❤❤
Excellent video. Technical yet understandable. I have a quick question: Why are you using a resolution of 896x1088 for the images instead of a more standard resolution such as 920x1080, etc.? Doesn't it negatively affect the image quality if non-standard resolutions are utilized?
thanks! I haven't noticed a huge degradation in quality; the model seems to react very well even at very wide/tall resolutions
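Worth adding (my note, not from the reply above): 896 and 1088 are both multiples of 16, while 920 and 1080 are not. As far as I understand, the VAE downsamples by 8 and the Flux transformer packs the latent into 2x2 patches, so 16-pixel alignment avoids any padding or cropping. A tiny helper to snap any target resolution:

```python
def snap16(x: int) -> int:
    """Round a pixel dimension to the nearest multiple of 16."""
    return round(x / 16) * 16

print(896 % 16, 1088 % 16)        # -> 0 0   (already aligned)
print(920 % 16, 1080 % 16)        # -> 8 8   (not aligned)
print(snap16(920), snap16(1080))  # -> 928 1088
```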
That's exactly what I've found in my (much more limited and far less sophisticated) testing. When it comes to art styles outside of photograph, realism, anime, comic or realistic sketch, it's really difficult to get it to bend to your will. Flux seems to be oriented to the "Photograph of attractive model stranger" Midjourney-type art style, and that's probably what about 75-80% of the current users of low-cost/free generative AI want.
Wonderful video and explanation.
As always, thanks Mateo!
Amazing insights! Thanks for sharing ❤
can you show us how to use our own loras and how to take that and get the best quality etc?
Wow, this is really good. One question, what does the ModelSamplingFlux node do? My current workflow doesn't have that node. I wonder what functions that node adds. Thanks!
max shift especially gives you a little control over noise and details (higher values, more details)
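For the curious, a rough sketch of what that shift does to the sigma schedule. This is my own reading (the exact mapping from resolution and max_shift/base_shift to the final shift value in ComfyUI's ModelSamplingFlux may use different constants), but the warping itself is the standard flow-matching time shift:

```python
def shift_sigma(sigma: float, shift: float) -> float:
    # A larger shift keeps sigmas higher for longer, spending more of the
    # denoising budget on structure and texture.
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)

sigmas = [i / 10 for i in range(10, 0, -2)]   # a toy 1.0 -> 0.2 schedule
for shift in (1.15, 3.0):                     # two illustrative shift values
    print(shift, [round(shift_sigma(s, shift), 3) for s in sigmas])
```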
according to the license, the generation process must be non-commercial, but the images themselves can be used for commercial purposes without any restrictions. However, you cannot use the DEV model to create services where the generation process itself is the product being sold - for that, you need a commercial license.
Images that you generate with DEV on your own computer (or server) for your own purposes can be used in commercial projects.
they actually define what they mean by "commercial" for dev: use by commercial or for-profit entities for testing, evaluation, or non-commercial research and development in a non-production environment.
you cannot make money directly or indirectly. you can use it commercially for evaluation and non-production environments
Excellent video as always! Love the deep dive! It would be interesting to see an analysis of the quantized versions (fp16 vs fp8 vs nf4, etc) and how much they affect the overall quality and increase generation speed, as well as to hear your thoughts about all the different block merges of dev and schnell that are floating around Hugging Face.
Hopefully we'll have finetunes and IPAdapter soon, it appears there's already some working ControlNETs available!
I'm sure I'll make more videos about Flux. FP precision is a hot topic 😅 Technically speaking you want to use the highest precision you can afford. I've read people saying that nf4 is "better" than 8 or even 16. It's hard to define "better" when you take personal taste into account.
I noticed that in one-shot generations 8 and 16 (or even 4) are all pretty good, but when you go into upscaling, img2img, noise injection etc... 16 always gave me better results.
@@latentvision There are even GGUF quantized versions now; I played with them today. I have them working in ComfyUI and did some limited testing against FP8 and NF4. They seem to generate a little slower but take significantly less RAM to load the model. Definitely promising, I love how fast things are developing now that quantization has become relevant for image generation.
I'm definitely looking forward to your upcoming Flux content! =D
Here's a summary of my limited testing of the quantized Flux Schnell with my Lenovo Legion 7i (i7 12900HX, 3080ti 16gb, 32gb RAM), in case you're interested in giving them a shot; let me know if you get similar results:
~ FLUX SCHNELL NF4 ~
- No LoRA, No ControlNET =(
- 6.5s to load the model for the first time
- 30s for the first full generation
- 9s for each generation (2.5s/it) on average
- 11.5s on average for the full workflow including VAE decode
- 17.6GB RAM peak while loading model
- 70% (11 gb) VRAM used after loading model
- 78% (12.2 gb) VRAM in use during generation
~ FLUX SCHNELL GGUF Q4 ~
- No LoRA, No ControlNET =(
- 5.5s to load the model
- 28s for the first full generation
- 13s for each generation on average at 3.3s/it
- 15s for the full workflow including vae decode
- 8.7 GB RAM Peak while loading model (half as much as NF4!)
- 82% (13.3gb) VRAM while generating
- 72% (11.3gb) VRAM after
(a tiny bit more VRAM usage than NF4. GGUF seems promising but the slower gen is a dealbreaker, hopefully it can be improved)
~ FLUX SCHNELL FP8 ~
- 23.2s to load the model for the first time (3 times slower than NF4 and almost 4 times slower than GGUF)
- 43s for the first full generation
- 9s per generation on average at 2.4s/it
- 12.5s for the full workflow including VAE decode
- 31.3GB RAM Peak while loading model (dangerously close to OOM / swapping)
- 82% (13.4 gb) VRAM used during generation
- 70% (11 gb) VRAM used after generation
- LoRAs work in FP8 but it took me 12 minutes to do a generation with Flux Dev, I think I don't have enough VRAM for the model merge in FP8.... Here's hoping NF4 gets LoRA support soon
- Basic ControlNETs (Canny and Depth) are available
So GGUF seems like it might be promising if iterated on, but I think NF4 will probably take the lead. I can't wait for LoRAs and finetunes to start appearing. Flux is amazing for a base model but has some very annoying nagging issues, like the excessive bokeh on realistic images and its inability to generate anime girls that aren't blushing (not to mention a complete inability to generate NSFW). Also, like SD3, it struggles with girls lying in grass, but interestingly, mostly with anime prompts. Honestly it seems targeted towards that Midjourney style; we'll need good finetunes for anime and NSFW content in the long run, hopefully the licensing won't discourage those.
I was just wondering if there is some way to adjust the normal CFG (as opposed to the Flux CFG) when using the Sampler Flux with Parameters?
Thanks for all the infos, much appreciated.
you are the hero
@Matteo: Any hints on how to reduce the strong bokeh / DOF effect in the background, besides waiting for a specific LoRA?
try to split the description in two: start with the background and give great detail about it (for example, if you mention a landmark it will try to make it pop), then the foreground, but a lot less descriptive. sometimes it works, but they are all hacks.
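A purely hypothetical illustration of that two-part structure (the prompt text is made up for this note, not taken from the video): background first and richly described, foreground second and brief.

```python
background = (
    "A wide shot of the Grand Canyon at golden hour: layered red rock walls, "
    "long shadows across the plateau, thin haze over the river far below, "
    "scattered clouds catching warm light."
)
foreground = "A lone hiker stands at the edge of the rim, seen from behind."
prompt = f"{background} {foreground}"
print(prompt)
```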
There is a lora for that now
Absolutely incredible video! How did you get so good with this stuff? Do you have a background in ML/AI?
not really no :) I have an Art degree 😅
Goatse thumbnail
Outstanding video. Thank you so much
big round of applause to your channel, so helpful. Can you do a video that will stop me trying to click on nodes in the video while I'm watching it :)
Thanks Matteo, keep going. Can I ask you: how can one tell if an image someone generated was made with Flux dev?
there are distinctive features you can recognize, especially in very small fuzzy details
Outputs. We claim no ownership rights in and to the Outputs. You are solely responsible for the Outputs you generate and their subsequent uses in accordance with this License. You may use Output for any purpose (including for commercial purposes), except as expressly prohibited herein. You may not use the Output to train, fine-tune or distill a model that is competitive with the FLUX.1 [dev] Model. the dev version has this clause, which is very confusing
it means that they don't own what you generate, and that you can't use flux generations to train a model that competes with flux
@@latentvision So can it be understood that the generated images can be used for commercial purposes? They have a clause covering derivative content in the Output definition, which is also why I am confused about the non-commercial clause.
the "non-commercial" clause is pretty clear BUT ask a lawyer, not me, not an influencer, or a youtuber
You're right. Legal terms like this should be discussed with professionals, not interpreted on one's own. Thank you for the reminder.
Yep. Very confusing. It seems that the license for using the model itself is for non-commercial purposes only. So while you can use the outputs commercially, you can't use the model commercially without obtaining a separate license. That seems to be the overall vibe. But until somebody writes to them and gets a direct clarification, it's hard to be sure.
Incredibly insightful! Cheeky request to cover the x-labs controlnet and ipadapter releases?
I love your videos, and I learn a lot, thank you very much!