Huge thanks to everyone for watching! Now would be a great time to leave a like and maybe even subscribe if you're enjoying my content :)
Of course, I love your content ❤
🐟 Been watching for over 2 years!
hi
@LouisGedo Yo!
@ThymeHere Love to hear it! Thank you so much for being a loyal viewer, it means the world to hear that!
The Flux.1 Depth model doesn't estimate the depth from an existing image, it's the other way around. It generates an image that could match the existing depth map it's given as input. They're using another (probably third party) model to generate the depth map from the input image.
Thank you for the insight!
I wish everyone would just third party everyone.
Wtf are you talking about? The flux depth model does exactly that, it estimates the depth from an existing image... That's what the whole blog post is about... That's what the demo is showing... And it's licensed under "Flux Dev License", what third party are you talking about? Did you even read the blog post or watch the video? You may have the slowest brain in existence dude
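For anyone reading the thread above: the usual depth-to-image flow has two stages, sketched below. The transformers depth-estimation pipeline is real; the FluxControlPipeline class, its control_image argument, and the black-forest-labs/FLUX.1-Depth-dev checkpoint are assumptions based on the diffusers integration, not necessarily what the hosted demo runs under the hood. Filenames are placeholders.

    # Rough sketch of a depth-conditioned generation flow (see assumptions above).
    import torch
    from transformers import pipeline            # depth estimation (stage 1)
    from diffusers import FluxControlPipeline    # assumed class name for FLUX.1 Depth
    from diffusers.utils import load_image

    # Stage 1: a separate model estimates a depth map from the existing image.
    source = load_image("input_photo.png")
    depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
    depth_map = depth_estimator(source)["depth"]  # PIL image of per-pixel depth

    # Stage 2: the depth model generates a new image whose geometry matches that depth map.
    pipe = FluxControlPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    out = pipe(
        prompt="a bronze statue in the same pose, museum lighting",
        control_image=depth_map,
        num_inference_steps=30,
    ).images[0]
    out.save("depth_conditioned_output.png")

Whether the hosted endpoint extracts the depth map for you or expects you to supply one is exactly what the two comments disagree about; either way, the generation stage consumes a depth map rather than producing one.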
Character consistency is the holy grail of AI image generation. OpenAI seemed to have stumbled across the answer with their talk of making ChatGPT "multi-modal" last spring, but as far as I can tell, they never released it and haven't mentioned it since.
14:51 that would be a good thumbnail.
Love watching your videos man! Keep up the good work bro
Appreciate the kind words!!
Flux did MUCH better with your hair than Ideogram. Excited to play!
Matt, great video once again. I can’t believe you’ve crossed 270k subs. I started following you when you had around 20-30k (maybe even 15k), when AI was really starting to kick off, and I subbed to you and Dave Shapiro along with some other great creators. I remember thinking, this guy is gonna boom in popularity, but dang, I didn’t think it’d be THAT fast. Just wanna say mad respect to you; it’s well deserved. You’re a no-nonsense, fun content creator who’s very informative, and I always look forward to your notifications during my busy work days. Thanks again, and here’s to you eventually hitting 500k-1mil one day. 🍻
About the character consistency, I think they meant that the output image won't match the initial input image you give it, but if you generate 4 outputs from the same input image, those will all be consistent with each other but not with the original input.
Draw over that jacket, then prompt "denim jacket two sizes too small" and "tie with different pattern where the mask is."
Loving the new models! I'm pretty sure you would get better masking/inpainting results if you broke your prompts up: get your moon-surface background the way you like, then add your green alien, then the Earth in space, instead of one-shot prompting it...
Probably true. Getting way too used to lazy prompting because of Ideogram 😅
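A minimal sketch of that step-by-step masking workflow, using the generic diffusers AutoPipelineForInpainting instead of the Flux Fill playground shown in the video; the checkpoint name and the mask/image filenames are placeholders.

    # Iterative inpainting: one masked edit per pass instead of one-shot prompting.
    # The checkpoint and the mask/image filenames below are placeholders.
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = load_image("moon_surface_background.png")   # base scene, already the way you like it
    edits = [
        ("alien_mask.png", "a small green alien standing on the lunar surface"),
        ("sky_mask.png",   "the Earth hanging in a black, star-filled sky"),
    ]

    # Each pass repaints only the masked region, then the result feeds the next pass.
    for mask_path, prompt in edits:
        mask = load_image(mask_path)
        image = pipe(prompt=prompt, image=image, mask_image=mask).images[0]

    image.save("moon_scene_final.png")

Each pass only has to describe one element, which keeps prompts short and lets you redo a single step without regenerating the whole scene.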
Thanks for the thorough breakdown video, this looks amazing.
Can you please make an install guide for ComfyUI with the correct workflow?
Matt what's the ring you wear? Looking at the RingConn gen 2 myself as there's no monthly sub cost
It’s actually an Oura Ring 3. Not a fan of paying for subs, so I might have to look into your option 😂
@MattVidPro Thanks for the reply. Yeah, I was looking at the new Oura, but the new RingConn is thinner and has a slimmer profile. I believe it doesn't have the VO2 max function of the Oura, though. Depends what you want it for!
Wow, I just turned my rubber duck into Matt, but it scared me, so I'm changing it back. Great video.
Ya-know, they came out with the first ever Flux model back in 1985, with the Flux Capacitor... =P
14:40 Two Guys One Lemon 🤠🍋🤠
I'm still going to use DALL-E, mostly because it can process the longest prompts.
I'm just wondering when FLUX 1.1 [Dev] is coming.
Fun! Please install it and show us.
I wanna see them add in LoRAs as well as face swapping in Flux.
Maybe create a comparison between websites offering FLUX Pro?
Does it get expensive for a designer who uses it a lot through the API?
It's Black Forest Labs, not Flux.
Bravo
Let us know when we can download such things on our own computer. ty
This is good news :) I wonder what the requirements are and how quickly lllyasviel will integrate them.
RAW = huge
I thought they were going to make everything open source, but if not, then I'm not in.
always the new one
People are going to use you as a template, just a warning.
@1:57
Flux _can be stopped_: not enough credits. Before that, I asked it to create an image of Batman fighting the Joker and got: *NSFW content detected in image. Try running it again, or try a different prompt.* It is dreadful.
Touch the grass, my friend.
I don't know, Photoshop Generative Fill seems better and easier.
I use that too; it's not great for highly specific prompts, but I find it very useful for smoothing things out and filling in the cracks, so to speak.
flux more like flex
First
I'll spank u ❤
💀 @shinrinyokumusic1808
third
Fourth