And here's how to run FLUX online: th-cam.com/video/syXk5CgABJg/w-d-xo.html
flux.1 dev link gives a 401 error
In fact, Flux lifts image creation to the next level.
I've been using FLUX. It is quite an upgrade from previous Stable Diffusion models. Being able to do text and fingers is really nice. Still not perfect: you get decent fingers 40-50% of the time, which is better than the 5% from previous models.
Yes, I think Flux is awesome. I tried Stable Diffusion on MimicPC, and of course this product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
Isn't Flux very demanding on the PC? I have 16GB, and it takes ages for me to create anything.
I congratulate you on how easily you explain this process. My native language is Spanish, but you make it so practical and functional that I understood almost everything with my limited English. I looked at several videos in English and Spanish and this was definitely the best choice, and I followed you through two videos. You should dedicate yourself to teaching and get more out of these types of tutorials.
Wow, thank you!
This FLUX AI step-by-step tutorial is amazing! It makes it abundantly evident why it's regarded as the greatest image generator ever.
OMG… not only does it work, but the results are out of this world!! It's a bit slower, but the wait is definitely worth it!! Using ChatGPT to give it an additional twist helps a bit.
Awesome!
I finally got it working on my AMD RX 7900 XTX graphics card. It took a while as I had to use ZLUDA, but I’m very happy with the outcome and the speed. Flux is amazing!
Great to hear!
could you please go into detail? AMD User here ;)
Thank you for this amazing tutorial. I've been looking for this since last week. You guys are too PRO!
Glad it was helpful!
Just what I needed and posted 7 hours ago... wow this is my lucky day
😉
Thanks for the tutorial, I was able to get it working. I mostly use A1111 for 1.5 and SDXL and will probably stick with that for a while, because it takes my 3080 2 to 3 minutes to generate a single image with Flux on just the default settings. I am impressed with the quality of the images though!
Great to hear!
Hello Mohammed, thank you so much for this video and the one where you explain how to install ComfyUI. I watched so many videos on how to run Flux on ComfyUI. None of them worked... until I saw your video, and now everything works fine and smooth on my PC. You have exactly the videos I was looking for for a long time. Thanks again. Have a great day. 😃
Awesome! happy to help
Great tutorial. Dev is working on my 4070 TI Super, just slooooow. About 1:15 per image.
happy to hear it's working! and yeaaa sadly it is a little slow
You call that slow?? I have to wait at least 5 minutes with Stable Diffusion for 720p images 😪😅
@@christiandeleo7737 Give Forge a try. I generate images in like 15 seconds now.
Installed ComfyUI, downloaded the .safetensors, VAE and CLIP files, and the workflow, and did everything exactly as in the video. On my 3080 Ti I get
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
C:\Users\shutd\ComfyUI_windows_portable>pause
Press any key to continue . . .
and then the prompt window closes
same
same
Change weight_dtype from default to something else in Load Diffusion Model (First box). Fixed it for me.
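For anyone hitting the same crash: the weight_dtype dropdown usually looks something like this (option names can differ between ComfyUI versions, so treat this as a rough sketch):
weight_dtype: default        (loads at full precision, highest VRAM use)
weight_dtype: fp8_e4m3fn     (8-bit weights, much lower VRAM, slight quality cost)
weight_dtype: fp8_e5m2       (alternative 8-bit format, worth trying if the first errors out)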
SAME HERE MANN
@MDMZ, My friend, the quality, precision, and skill you consistently deliver are nothing short of amazing. You always provide us with impressive and amazing work, and I can only imagine the effort that goes on behind the scenes. Thank you, and please never stop.
Just a quick question, if you don't mind: do you live in Malaysia?
thanks and yes!
Please make a complete step by step video tutorial on image to image generation using ComfyUI. I appreciate your hard work.
Thanks bro!
edit: Actually I just got a black image... I did everything as you did. I googled it and it said I probably have the wrong VAE, but I have the exact same VAE as you. Do you know where I can get help troubleshooting this?
edit edit: Actually, it was the CLIP loader problem. You didn't quite explain how that CLIP loader works, and somehow I had two of the same CLIP selected. You have to use different CLIPs. In any case, thanks again for this otherwise great tutorial.
glad you solved it
Hey buddy @MDMZ, I loved your work, super easy to understand and a very crisp video.
Can you please suggest what to do if we want the same person's face across different prompts?
Thanks a ton for this, it made things quite easy. After generating from my own prompt, all I got to say is wow, okay I get it.
Awesome!
There are good up-scaling workflows for comfy.
for sure!
@@MDMZ Hmmm... Fellow Canadian?
Excellent video.
Yes, I think what you said is great. I am using MimicPC, which can also achieve this effect. You can try it for free. In comparison, I think the usage flow of MimicPC is more streamlined and friendly.
Good video. Please make another video on how to integrate img2img using Flux. This will help us take an existing image and write text on top of it, instead of generating an image with text from scratch.
the things we'll be able to do with flux are gonna be interesting, I will keep experimenting and definitely make more videos on it
@@MDMZ Please try this as soon as you get time. It will be very useful in practice to see text written on an existing image.
@@RANJEET3939 to be honest, I rarely make instant tutorials about new tools, cause that means I will be covering something unstable that's gonna be improved over time.
When using the new low-mem (4GB) model Q4KS: do I need to install the other files you mention besides the main FLUX Q4KS under unet? Or, because it's a different low-RAM model, are other files needed instead?
Thanks ahead 🙏
honestly, not sure, I'm yet to experiment with that
Great video! Thanks 🙏👍
Hello. Thanks for the video. Does it support Control Net?
I saw some FLUX controlnet models going around already, I'm yet to experiment with them
Thank you for the video Mohammed! Is there any way to reference an existing photograph and have Flux use it as the underlay/foundation?
pretty sure there are workflows for that out there, try searching "FLUX image to image workflow"
Hi! Awesome video again, thank you! I downloaded all the files, but the DualCLIPLoader doesn't list the clip_l.safetensors, t5xxl_fp16.safetensors, or the fp8 ones either. I updated All, restarted ComfyUI, terminated and started it again, and still nothing. They are in the same folder as in the video, /clip ... I also tried creating a /clip/t5xxl subfolder, but it doesn't show up. Can you help me solve it? :)
strange, this usually happens when u don't place the files in the correct folder, can u drop some screenshots on Discord?
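For reference, the expected layout is roughly this (assuming a standard portable install; exact folder names can differ between ComfyUI versions, and the filenames below depend on which variants you downloaded):
ComfyUI/models/clip/clip_l.safetensors
ComfyUI/models/clip/t5xxl_fp16.safetensors   (or the fp8 variant)
ComfyUI/models/unet/flux1-dev.safetensors
ComfyUI/models/vae/ae.safetensors
After moving files, press Refresh or restart ComfyUI so the dropdowns pick them up.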
Great tutorial, short and very precise, but in fact Flux runs slooow, and I have 24 GB of VRAM.
yep
Great video, thanks! How do we create additional images that are based on the first image? For example, once you create the cover for LUNA, how do you create all subsequent images using the same character and style?
might need a more advanced workflow for that
How do we run our own models? For example, after making a LoRA of myself on Replicate, how do I use that LoRA in Comfy to make my own images?
place it in the lora folder, then load it using a lora loader node
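Roughly like this, as a sketch (node and folder names can vary by version, and my_face_lora is just a placeholder filename):
1. Copy my_face_lora.safetensors into ComfyUI/models/loras/
2. Add a LoraLoaderModelOnly node between Load Diffusion Model and the sampler, select the file, and start with a strength around 0.8-1.0, adjusting from there.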
I don't know why, but my images are just solid noise... (I tried to add a link to a picture, but my comment gets deleted 🥲). I downloaded all the dependencies (model, 3 encoders, VAE, and the workflow you mentioned), but it's still a mess. I have a MacBook Pro M3 Pro, so I also updated PyTorch to the nightly version. What am I doing wrong? I tried changing schedulers, samplers, steps, and even the size 😃 but nothing helps...
Wanted to try this, but when I dragged in the workflow JSON file, instead of a "Load Diffusion Model" node I had a "UNETLoader" node without a weight, so an error occurred... I have no idea why there's no "Load Diffusion Model" node.
I am currently using ComfyUI with Dreamshaper 2.1 Turbo as the checkpoint; the combination works well on my AMD system. My question: does Flux run stably on an AMD system? I have a 7600 XT with 16GB VRAM, 32GB RAM, and an AMD Ryzen 5 7500F.
if you're able to run ComfyUI already, I don't see why a FLUX workflow wouldn't work, but the question remains whether it's going to be fast enough
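One hedged pointer for AMD: on Windows, people typically run ComfyUI either through ZLUDA (as in the RX 7900 XTX comment above) or with torch-directml installed, launching with something like:
python main.py --directml
On Linux, a ROCm build of PyTorch is the usual route. Speed varies a lot between these options, so test before committing to big batches.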
Hey mate, great tutorial. If you made the tutorials more detailed and removed the music, it would be great for learning purposes.
thanks for the feedback, I do go into details in other more complex videos
Is there a way to input images / product renderings?
Hi. Thanks for the good guide. I have a question: why does my PC crash with this setup? I have 16GB RAM and an RTX 4070.
It's strange, because even if I start the queue process and immediately close the app, after a few seconds my PC totally freezes.
You could be running out of memory.
try a smaller resolution setting, FLUX requires a lot of VRAM
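If it still freezes, ComfyUI's low-memory launch flags are worth a try. A sketch for a manual install (the portable build ships .bat files that pass flags like these for you):
python main.py --lowvram
python main.py --cpu      (very slow, but useful to confirm VRAM is the culprit)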
Please make a complete tutorial on Applio RVC
never heard of it, I will check it out
using fp8 in weight dtype will save your GPU memory
yep worth trying
My output keeps coming out as a completely black image, no matter the prompt or what settings I change.
that's strange, make sure your GPU drivers are up to date, u can also try reinstalling ComfyUI
seems like the dev version is access restricted at the moment, but the other one is accessible.
Oh, what makes u think so ?
does this process work on mac too?
I'm missing three nodes and "Install Missing Nodes" can't detect anything. I tried searching in the Manager, but there's nothing.
did u update comfyui first ?
It constantly throws an error: "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 135, 102] to have 4 channels, but got 16 channels instead". What does it mean?
I can't find it in my ComfyUI Manager section. Help me pls
I have it all, but there's no flux option in the DualCLIPLoader, I only have sdxl and sd3. Please advise, I'm so close now ;-)
make sure you download all the files mentioned, and update your comfyui
Hi MDMZ, can you load a custom LoRA / multiple LoRAs into this workflow to generate a couple photo?
I believe you can, I'm yet to try it out myself
I am on an AMD GPU, RX 7600 XT. For me, dreamshaper_8 works very well in ComfyUI using a checkpoint. But I tried so hard to get flux_1 working, and it just doesn't work.
I followed your instructions, as well as other tutorials. I even updated ComfyUI to the latest version. But every time I queue a prompt, it gets stuck at the first node, "Load Checkpoint", for a minute and then the server crashes. No error in CMD, just the "got prompt" message.
I even tried the checkpoint version of Flux from their official documentation, still the same issue. I wonder if I am doing something wrong, or if Flux is not yet supported on AMD GPUs.
that sucks! yeah it could be a compatibility issue, did you try launching ComfyUI using CPU instead ?
i have the same problem except mine says:
got prompt
model_type FLOW
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
and then it hangs forever (I've waited hours). RTX 3060 Ti
Same issue here.
"File isn't available on the site". Downloading DEV version doesn't work rn
sign in
Are the models down for flux on hf?
How do you get the image bar at the bottom?
Is there any image to image option?
you can definitely build a workflow for that, the possibilities are pretty much endless
How much time per image would it take on 12GB VRAM to make 2048x2048 images? Has anyone recorded times for that kind of resolution? Please share all kinds of timings at high resolution. I don't like low-resolution generations, and upscaling landscapes seems to ruin everything.
Might be bad, if he called 12 or 8gb GPUs "low end" 💀
I guess it's relative: low end in this context; 12GB is decent in other scenarios
@@MDMZ in my Stable Audio runs, the T5 model usually consumes a lot; is it the same for this model?
Thank you man. How can I load an image in ComfyUI, like MJ in Discord?
you will need to set up an img2img workflow
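The rough shape of such a workflow, as a sketch rather than the only wiring:
Load Image -> VAE Encode -> KSampler (denoise below 1.0, e.g. 0.6) -> VAE Decode -> Save Image
The denoise value controls how far the result drifts from your input image; lower values keep more of the original.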
Any idea why, when I drop in the JSON and go to change any of the settings, it just goes to "undefined" and doesn't see the files?
you need to download the necessary files/models and place them in the right folders
Thank you!
4090, about 20 min... something's wrong?
20 mins for a single image? yeah that's not normal
Same here. Using the UNet flux1-dev. Not sure why it's taking so long with only 4 steps.
Can Flux be used with 4GB of video memory using ComfyUI on my PC?
I doubt that, it might work on very low settings
My queue gets stuck at UNET loading, any idea what to do?
did u try restarting and re-generating? usually if there's something wrong a text error is displayed
@@MDMZ Thanks for replying! Actually there's no text error, and the models are in the right folders. In the console it just stops at "model_type FLUX", and the workflows are not the problem. I noticed that my GPU wasn't active; ComfyUI simply can't get past Load Diffusion Model. I tried it with schnell, dev, fp16, fp8, nothing is working, I don't understand.
Please make a complete step by step video tutorial on DeepFake generation using ComfyUI. I appreciate your hard work.
output is just a black picture
did u try reinstalling? using different resolutions/ratio and other settings? also make sure you got all the correct files
hi bro, I can't find the VAE [dev] diffusion_pytorch_model.safetensors
can you share it with me? the file download is restricted
you need to sign in
@@MDMZ I use an AMD GPU, can you show how to set up FLUX on AMD?
@@moviefamily1110 did you find an answer?
I have an error :(
What can I do?
Error occurred when executing VAEDecode:
'VAE' object has no attribute 'vae_dtype'
I got the same problem, did you solve it?
One thing I do not understand... I can download checkpoints for Flux, but I have not found one single tutorial on how to use them. I prefer Stable Diffusion, because everything just works, while Flux has been a constant headache, plus how bad ComfyUI is to use.
the checkpoint IS flux
@@MDMZ 4 days later and I have learned a lot by trial and error! The checkpoint I had was a diffusion (UNet) model, so I got it working now.
@@AndrewTSq happy to hear
After going through the entire process, my PC couldn't run it, as its specs are not high enough 😭😭
But great tutorial tho
😰
How do I tell the AI not to create a blurry, out-of-focus background?
you can try being specific about it in the prompt
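For example, appending something along these lines to the prompt (purely illustrative, results vary by model and seed):
"... sharp focus throughout, deep depth of field, background rendered in crisp detail"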
Did you solve this problem?
Try putting bokeh (name of the depth of field effect in photography terms) in your negative prompt
do we have a similar tool for video generation?
Stable Video Diffusion, Gen-3, and Dream Machine are all great options, but not up to this quality
I have everything downloaded and in the correct folders, but when I hit Queue Prompt it gets stuck on a "Reconnecting..." pop-up.
I'd just restart comfyui and see if that helps
any fix?
Do you have a Mac guide for me?
RTX 3060 - will that be OK?
hard to guess, give it a try
schnell is german for fast :D
I learned something new today
it says it is restricted, I cannot download the model
you need to sign in
@@MDMZ I did, but it's the same message
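If signing in alone doesn't help, also make sure you've accepted the license on the model page, since flux1-dev is gated on Hugging Face. As an alternative, the command line works too (a sketch; it assumes the huggingface_hub package is installed and the license has been accepted in the browser):
huggingface-cli login
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors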
Flux is awesome. Already generated like 4k images with it 😬😅
It really is!
Make more photoshop tutorials
I'd love to!
A million tutorials on how to use FLUX in ComfyUI, very few on how to set everything up in a normal UI for ordinary people, and none on how to use ControlNet with it without ComfyUI involved.
Controlnets for flux didn't even exist when this video was made, things like this change a lot over time
2:51 => "Welcome hell". LOL. Also, you want to hyphenate "smiley-face" otherwise you'll get a token drift.
3:08 => There's no such thing as "hyper-realistic". Something is either real or it is not. Also, "portrait" is a painting, so you're asking the machine for both a realistic image and a painting. You kind of screwed yourself there. You can't ask for "detailed skin textures" because it's not a 3D model. You have to render at a higher resolution than 512x or 768x.
5:24 => How can a network of machines know what "fitting perfectly with the theme" means? It's not another human. You're watering down your prompts, which leads to worse results.
hey friend, you're telling a human what he told a machine that was taught by a human that he's wrong. A better understanding of tokenization might help you get why certain tags (tokens) "work" because they are specific to the dataset the model was trained on. Example: using danbooru tags as opposed to natural language can give you more ideal results based on the tags used to train the model.
thanks for the feedback
I think I need subtitles, my listening comprehension is not very good 😢
you can turn on the captions
But you will have to pay for the pro version; the other free versions don't generate good images
Flux is free
@@MDMZ the free version is generating crap, I saw the option for the pro version for better generations but I gotta pay, which site are you using to generate with flux??
@@khanisbrown6424 I'm using the same process I highlighted in this video :) it's FREE, and NOT generating crap, it's very impressive
@@MDMZ ok I appreciate it
@@khanisbrown6424 you animal. First watch the video, then ask questions.
I think I need a 4090. My 4060 laptop is too slow to use Flux 😢, it takes about 5 minutes to produce a 1024x1024 picture with flux dev fp8. Oh god, who can buy me a 4090 😥
Flux requires a lot of VRAM, but I think it's a bit faster now after the latest ComfyUI update
Hey, can you send me the files? The Flux owner may have removed them from the site.
@hasstv9393 the files are still there. Just make sure you are logged in
@@MDMZ oh thank you man!
Is it free? If not, how much $$?
Free!
I am running 😢 a 2GB RAM computer
😰
Has anyone tried this on a MacBook Air M1 8GB? And how much storage does it need for installation?
this is outdated now, there's a new ComfyUI
2:05
Things are moving FAST!
"It looks amazing"... yes maybe, but nowhere in hell does it actually look realistic. It's way too cartoony, at least for the samples you showed.
Yeah, I saw another video where they use slightly lower quality and it makes it look so real
No bro, it is what it is (it's really extremely realistic)
No dude it’s freaky realistic. Even better than midjourney I believe
FLUX is truly amazing, but what you're doing is just subpar. It's a little bit of this and a little bit of that... What's the point of making these videos if you don't take the time to explain things properly?
hi, this video is to help you get started without going too deep into technical aspects, I have more advanced videos on the channel