4 seconds ago and i clicked it
you deserve a pin
@@latentvision lol I just finished watching your valuable video, and thanks btw for the pin. Waiting for your project to get released.
Typing that added seconds, so that's some BS.
I’ve learned so much from your videos Matteo, thank you!
How can we train this model, is it possible with Dream Booth notebook? Or do you use some other tool for training?
Thanks in advance 🙏🏼
This is one of the best AI-related channels on YouTube. Appreciate the videos, thank you. :)
He's the one inventing all the different gadgets for SD, so of course he's the one who can give the best insights on these.
If you don't mind a comparison, you are like a shoe-sole maker. While everyone else is excited about the new shoes, showing off their design and trying them on, you briefly come out of your workshop, seemingly effortlessly tear the new shoes apart to expose the materials and the seams, give us a quick snapshot of the current state of the shoe industry in your segment, and go back into your workshop to make the most comfortable shoe soles ever. Huge thank you for the time you set aside to make these videos!
gotta frame this :D
You had my attention at IPadapter for SD3.5 🎉🎉🎉
An hour later than SD3.5 release thanks to you I know SO MUCH more about this new model. This is GOLD.
As always the finest and precise review of a new model.
I hope we can hear about your new project soon.
Best wishes.
Thanks for the breakdown on SD 3.5! I really appreciate your commitment to pushing for open source projects rather than just jumping on whatever is currently producing the "best" results. Looking forward to your future projects!
OPEN weights, clean of aesthetic influence, 7 GB lighter, and I can use the non distilled version is 100% win for me. I don't care about quality, as the community will take care of that.
It didn't do it with sd3
Thank you! I assumed this model wasn't free, but apparently it's free even for commercial use. That's cool!
it is! go crazy!
Its a good day when a video comes out on LV
You are so good dude. Always great stuff
Always ready for your videos, thanks a lot Matheo!
Great news and well told! Let's see if SD3.5 can bring back the excitement and power of SDXL. ❤
“…couldn’t get the girl to hold the freakin’ knife.”
If I had a dime for every time I’ve heard that. 😜
Well done! 👏🏻👏🏻👏🏻👏🏻
So informative as usual Mateo. keep up the good work bro
Thank you for the timely information. Super excited to see what your new UI looks like!
Thanks. So glad you are in this space. Gives me hope for the future. You are a saint:)
I have no idea what you're doing, but it's fascinating.
Exciting! Thank you, Matteo.
Apart from discovering the new SD, I'm learning new things here: what's the use of weighting at the beginning or the end, and that we'll never have a decent IPAdapter on Flux 😭😭 but then that makes SD3.5 more interesting. Well, as you said, SD3.5 might have more potential than Flux in all areas.
About the rolling-up-sleeves thing, I think the Flux community has proven that HUGE improvements can be made when a model reaches a critical mass (of interest). Is this what you fear: that since Flux looks better from the start, it might prevent people from putting effort into SD?
Nice, you already got access to it :D I am still waiting for models to download :D
Dude! you are now a Guru in SD!
Thank you!
Great explanations ! Love your videos and your work !
Thank you. I tried it out. We'll see how this all goes over the next few months.
Man, my favorite AI creator is back! You are the person I trust the most and I love your projects and videos. Thanks man!
I hope the community will work on this 3.5 version; Flux is taking a lot of time to give good art-style results. The samples you gave here are already better than many things I've seen from fine-tuned Flux. And the style flexibility is amazing.
Are u serious? 😆
no petty guild wars please :)
@@ronbere If you've spent any time at all with Flux, the quality is amazing, but it's insanely hard to get it to create specific styles... so I don't really know what you're on about.
@@generichuman_ Specific styles? With Flux it's insanely easy. Just train your own LoRA on Civitai within an hour... DONE.
Keep them coming boss!
Will do!
I feel like more people need to hear these sage words at 14:42
Thank you for the insightful video Matteo. The 3.5 model shows great potential.
Thanks! Really nice overview.
It seems to be a very nice update. I hope community fine-tunes come soon to fix quality on hands, etc.
I just found your channel and find it very valuable! Even though I use Forge, your content is still full of so much knowledge for me.
Mateo, you're a legend.
that old?
rejoice he is alive and well !
Today is a good day! Great video as usual Matteo
looks interesting, thanks for the info
Thank you for all your work, its impressive! ♥
13:45 like those images, very nice!
Best developer / educator in the game.
Thanks a lot for this one!! 😊
awesome video as always, thanks matteo:)
Thanks for your amazing contents 👌
Would love to test out the diffusers UI and see what's up with that. I felt like a kid before Christmas with all this.
I'm on low VRAM, so I'm hoping 3.5 medium isn't garbage and that it gets love from the community... I'll be back in 3 months lol.
Grazie 😊
Thank you for the video! Seems very promising. Have you tried inpainting with it?
an inpainting model (or, more likely, a ControlNet) should be released soon
bro, he himself is the inpainter, cause he'll make it and release it, and you'll test it and give your feedback, haha
Holy shit, I forgot this was supposed to release!
Hey Mateo, could you do a Style and Composition for FLUX and create workflows as well? These help everyone and are greatly appreciated!
Keep up the epic work...
you are the best
no! you are the best!
Flux often gives me bad results when I want a subject to hold a lightsaber. The sabers will just float next to the person, or shoot out of their back. Sometimes it does all right though.
I shouldn't watch this late at night☹
Cool! getting a reliable ipadapter would be great
Right behind the current time in the video, the red bar has changed to PURPLE?!?! Anyone else notice this?
Honestly, especially due to the license of SD, I'm more looking forward to Sana from Nvidia. I wonder if it will be compatible with Comfy though.
Sana is very interesting indeed
Bravissimo
As long as SD3.5 has the same license as Flux, it's not going to be the next SDXL 😥
Do you have any thoughts or ideas on how to go about training SD 3.5?
there are a few tutorials already, jump in the discord server if you want, I'll send you the links
@@latentvision oh sweet, what channel in the discord?
Is there any information on how to use the Flux Block Buster node with Flux? I ran it with the Display Any node to see what's in there, but I have no idea which values to change (or the proper syntax). Thanks.
that would require a video on its own...
8 GB / 2 GB: does that mean low VRAM can work with SD3.5?
What's the fundamental difference between what you're working on with UI for Diffusers vs ComfyUI?
Diffusers is arguably more stable and doesn't change as quickly; updates go through long revisions before going live. Also, new papers are generally developed on diffusers first.
Regarding the Hand problem, have you found Flux performing well with that? I have been struggling getting Flux to make good hands holding things once I try to push it into any artstyle.
Well yeah, Flux is generally better at hands. Not perfect, but simple hand gestures are generally fine, and even when it fails it's not in a dramatic way.
Hi Matteo, I'm working on a nodal GUI for implementers to use with their custom backends (kind of like LiteGraph). Its specificity is that it's made with SVG, so the node content is plain HTML/CSS. You might be interested in it for your diffusers UI?
(I couldn't find any repo on GH so I'm posting here... say if you want to get in touch ;) )
sure! ping me on discord maybe?
🎖🎖💚💚
What version of ComfyUI do you use, sir? Mine seems broken; I'm using the portable one, and the newest one.
I'm always updated to the latest revision
@@latentvision it always generates a black image
15:56 How did you make img2img work? I tried with the large model and I'm getting artifacts and blurriness all over the generated image. The image is 640x640px, so below 1 megapixel.
be sure the image is 1 megapixel
@@latentvision Oh, I see. Thanks!
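For anyone hitting the same artifacts: the fix above can be automated. A minimal sketch (plain Python; the helper name is my own, not from the video) that computes target dimensions of roughly one megapixel while keeping the aspect ratio, snapped to multiples of 64 as latent diffusion models usually prefer:

```python
import math

def megapixel_dims(w, h, target_pixels=1024 * 1024, multiple=64):
    """Scale (w, h) so the area is ~target_pixels, keeping the aspect
    ratio and snapping both sides to a multiple (64 suits SD latents)."""
    scale = math.sqrt(target_pixels / (w * h))
    new_w = max(multiple, round(w * scale / multiple) * multiple)
    new_h = max(multiple, round(h * scale / multiple) * multiple)
    return new_w, new_h

# The 640x640 input from the question upscales to 1024x1024:
print(megapixel_dims(640, 640))    # → (1024, 1024)
print(megapixel_dims(1920, 1080))  # → (1344, 768)
```

Resize the image to those dimensions with any image tool before feeding it to the sampler.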
but can it draw woman lying on grass
that's all that matters!
Can it be used with GGUF CLIP models?
Competition is always good; Flux is getting boring.
Guys! What UI is he using here?
ComfyUI
Have you also tested the medium model? How is it?
Only a beta version; we need to wait for the real deal (a week or so). From what I can see it could be interesting for training, not so much as a model on its own.
@@latentvision I just hope it becomes an evolved 1.5 and the community starts adopting it
With the introduction of Flux there seems to be a significant shift away from open development. Instead of engaging the community in shaping its evolution, the release of a fine-tuned model suggests a move towards a more controlled and closed-off approach. Plus, Flux looks like a cheap version of Midjourney.
What are you talking about? Flux got more support within its first week than SD 1.5 or SDXL did within 8 months...
This model is good. How is the speed?
These new models look like they are being heavily trained on AI-generated images. Everything feels like it is moving towards Midjourney. It is always possible to prompt or work your way into original looks, but it feels harder and harder. Do you think that has anything to do with how working with Flux and SD3.5 feels for you? It's like these models are getting better at keeping the masses happy with similar looks and perspectives and contrast and vibrancy, and even generic styles... BUT it is getting harder and harder to do original work.
OR maybe they are training generic tags on specific looks. It's like "oil painting" is really Greg Rutkowski, but they just swapped the trigger words. Almost like ChatGPT being heavily handled by human steering...
Which program do u use?
He works with ComfyUI here, but it's a tad more complicated than just that. I'd suggest searching for installation tutorials on YouTube.
It is ComfyUI
looking at these results I still prefer FLUX
Feels super oversaturated. I think FLUX is still better than SD 3.5.
It can't be the new SDXL because no one can run this, and the next-gen cards don't have the VRAM bump we needed.
So true, but that's the state of things, unfortunately. Anyway, quantized versions will come.
Runs just fine at 4 s/it on an RTX 3060 12 GB at full precision. Uses about 36 GB of RAM though (including the OS, the browser, and all three text encoders).
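Rough back-of-the-envelope math on why quantized versions help, a sketch under the assumption that SD3.5 Large has about 8 billion parameters (weights only, ignoring the VAE, text encoders, and activations):

```python
def weights_size_gib(params_billion, bits_per_weight):
    """Approximate VRAM/disk footprint of the model weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weights_size_gib(8, bits):.1f} GiB")
# → 16-bit: 14.9 GiB, 8-bit: 7.5 GiB, 4-bit: 3.7 GiB
```

Real usage is higher once everything else is loaded, which is why the 36 GB figure above (with CPU offloading) is so much larger than the weights themselves.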
Hi 👋
FLUX still better
I just wanted to see women laying on grass 😂
Either have a disclaimer or don't use jargon nobody knows. Also, I had to watch at 2x speed and it was still pretty slow; plus you didn't really give much input, you just "did" stuff. Sorry for the bunch of criticism...