And here's how to run FLUX online: th-cam.com/video/syXk5CgABJg/w-d-xo.html
flux.1 dev link gives a 401 error
In fact, Flux lifts image creation to the next level.
I've been using FLUX. It is quite an upgrade from previous Stable Diffusion models. Being able to do text and fingers is really nice. Still not perfect, you get decent fingers 40-50% of the time, which is better than the 5% from previous models.
Yes, I think Flux is awesome. I tried Stable Diffusion on MimicPC, and of course that product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailed images in my profession, and this fulfills exactly what I need for my career.
Isn't Flux very demanding on a PC? I have 16GB, and it takes ages for me to create anything.
I congratulate you on the easy way you explain this process. My native language is Spanish, but you make it so practical and functional that I understood almost everything with my limited English. I looked at several videos in English and Spanish, and this was definitely the best choice; I followed you through two videos. You should dedicate yourself to teaching or get more out of these kinds of tutorials.
Wow, thank you!
This FLUX AI step-by-step tutorial is amazing! It makes it abundantly clear why FLUX is regarded as the greatest image generator ever.
Thank you for this amazing tutorial. I've been looking for this since last week. You guys are too PRO!
Glad it was helpful!
I finally got it working on my AMD RX 7900 XTX graphics card. It took a while as I had to use ZLUDA, but I’m very happy with the outcome and the speed. Flux is amazing!
Great to hear!
could you please go into detail? AMD User here ;)
Just what I needed and posted 7 hours ago... wow this is my lucky day
😉
OMG… not only does it work, but the results are out of this world!! It's a bit slower, but the wait is definitely worth it!! Using ChatGPT to give the prompt an additional twist helps a bit.
Awesome!
@MDMZ, My friend, the quality, precision, and skill you consistently deliver are nothing short of amazing. You always provide us with impressive and amazing work, and I can only imagine the effort that goes on behind the scenes. Thank you, and please never stop.
Just a quick question, if you don't mind: do you live in Malaysia?
thanks and yes!
Installed ComfyUI, downloaded the .safetensors, VAE, and CLIP files plus the workflow, and did exactly what's shown in the video. On my 3080 Ti I get
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
C:\Users\shutd\ComfyUI_windows_portable>pause
Press any key to continue . . .
and then the prompt window closes
same
same
Change weight_dtype from "default" to something else in the Load Diffusion Model node (the first box). Fixed it for me.
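For reference, the weight_dtype dropdown in that node typically offers default, fp8_e4m3fn, and fp8_e5m2 (exact option names can vary between ComfyUI versions). A minimal sketch of the setting that usually saves VRAM:

    Load Diffusion Model
      unet_name:    flux1-dev.safetensors
      weight_dtype: fp8_e4m3fn   (instead of "default")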
SAME HERE MANN
Great tutorial. Dev is working on my 4070 TI Super, just slooooow. About 1:15 per image.
happy to hear it's working! and yeaaa sadly it is a little slow
You call that slow?? I need to wait at least 5 minutes with Stable Diffusion for 720p images 😪😅
@@christiandeleo7737 Give Forge a try. I generate images in like 15 seconds now.
Hello Mohammed, thank you so much for this video and the one where you explain how to install ComfyUI. I watched so many videos on how to run Flux in ComfyUI. None of them worked... until I saw your video, and now... everything works fine and smoothly on my PC. You have exactly the videos I was looking for for a long time. Thanks again. Have a great day. 😃
Awesome! happy to help
Thanks for the tutorial, I was able to get it working. I mostly use A1111 for 1.5 and SDXL and will probably stick with that for a while, because it takes my 3080 two to three minutes to generate a single image with Flux at just the default settings. I am impressed with the quality of the images though!
Great to hear!
Please make a complete step by step video tutorial on image to image generation using ComfyUI. I appreciate your hard work.
This is a fantastic guide to a fabulous image creator. Any tips on getting better eyes? Regards
You can try a more specific description in your prompt. Did you look for eye LoRAs? That could help.
@@MDMZ I have tried quite a few eye LoRAs and none seem to help. The only thing I find that works is adding FaceDetailer to my workflow, but depending on how many faces are in my image, it adds considerably to the creation time.
Can you please tell me how to add a LoRA to this setup?
Thanks bro!
edit: Actually, I got just a black image... I did everything as you did. Googled it and it said I probably have the wrong VAE, but I have the exact same VAE as you. Do you know where I can get help troubleshooting this?
edit edit: Actually, it was a CLIP loader problem. You didn't quite explain how that CLIP loader worked, and somehow I had two of the same clip chosen. You have to have different clips. In any case, thanks again for this otherwise great tutorial.
glad you solved it
When using the new low-memory (4GB) Q4KS model: do I need to install the other files you mention besides the main FLUX Q4KS under unet? Or, because it's a different low-RAM model, are other files needed instead?
Thanks in advance 🙏
honestly, not sure, I'm yet to experiment with that
Wanted to try this, but when I dragged in the workflow JSON file, instead of a "Load Diffusion Model" node I got a "UNETLoader" node without weights, so an error occurred... I have no idea why there's no "Load Diffusion Model" node.
Fr. I have all the files downloaded but nothing is working. I keep getting deserialization errors and a weight error. 😭
How do we run our own models? For example, after making a LoRA of myself on Replicate, how do I use that LoRA in Comfy to make my own images?
Place it in the loras folder, then load it using a LoRA loader node.
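As a rough sketch, assuming the standard ComfyUI folder layout and a hypothetical file name:

    ComfyUI/models/loras/my_replicate_lora.safetensors

Then add a Load LoRA node (LoraLoaderModelOnly also works if you only want to patch the model) between the model loader and the sampler, and pick the file from its dropdown.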
I don't know why, but my images are just solid noise... (I tried to add a link to a picture, but my comment gets deleted 🥲). I downloaded all the dependencies (the model, 3 encoders, the VAE, and the workflow you mentioned), but it's still a mess. I have a MacBook Pro M3 Pro, so I also updated PyTorch to the nightly version. What am I doing wrong? I tried changing schedulers, samplers, steps, and even the size 😃 but nothing helps...
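(For anyone else on Apple Silicon: the PyTorch nightly build is typically installed with something like the command below; check pytorch.org for the current version, since it changes over time.)

    pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu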
Thanks a ton for this, it made things quite easy. After generating from my own prompt, all I can say is: wow, okay, I get it.
Awesome!
Hi! Awesome video again, thank you! I downloaded all the files, but the DualCLIPLoader doesn't list clip_l.safetensors, t5xxl_fp16.safetensors, or the fp8 ones either. I clicked Update All, restarted ComfyUI, terminated and started it again, and still nothing. They are in the same folder as in the video, /clip... I also tried creating a /clip/t5xxl subfolder, but it doesn't show up. Can you help me solve it? :)
Strange, this usually happens when you don't place the files in the correct folder. Can you drop some screenshots on Discord?
Good video. Please make another video on how to do "img to img" using Flux. This will help us take an existing image and write text on top of it, instead of generating an image with text from scratch.
the things we'll be able to do with flux are gonna be interesting, I will keep experimenting and definitely make more videos on it
@@MDMZ Please try this as soon as you get time. It would be of much practical use to see text written onto an existing image.
@@RANJEET3939 To be honest, I rarely make instant tutorials about new tools, because that means I'd be covering something unstable that's going to be improved over time.
There are good up-scaling workflows for comfy.
for sure!
@@MDMZ Hmmm... Fellow Canadian?
Hi. Thanks for the good guide. I have a question: why does my PC crash with this setup? I have 16GB of RAM and an RTX 4070.
It's strange, because even if I start the queue process and immediately close the app, after a few seconds my PC totally freezes.
You could run out of memory.
Try a smaller resolution setting; FLUX requires a lot of VRAM.
Is there a way to input images / product renderings?
Thank you for the video, Mohammed! Is there any way to reference an existing photograph and have Flux use it as the underlay/foundation?
pretty sure there are workflows for that out there, try searching "FLUX image to image workflow"
Hey buddy @MDMZ, I loved your work: super easy to understand and a very crisp video.
Can you please suggest what to do if we want the same person's face across different prompts?
Excellent video.
Yes, I think what you said is great. I am using MimicPC, which can also achieve this effect. You can try it for free. In comparison, I think MimicPC's workflow is more streamlined and friendly.
Hello. Thanks for the video. Does it support Control Net?
I saw some FLUX controlnet models going around already, I'm yet to experiment with them
I'm missing three nodes and "Install Missing Nodes" can't detect anything. I tried searching in the Manager, but there's nothing.
Did you update ComfyUI first?
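On the portable Windows build that's usually done by running the bundled updater script; with a manual git install it's a pull. A sketch assuming the default layout:

    ComfyUI_windows_portable\update\update_comfyui.bat

or, for a git clone:

    cd ComfyUI
    git pull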
My output keeps coming out as a fully black image, no matter the prompt or which settings I change.
That's strange. Make sure your GPU drivers are up to date; you can also try reinstalling ComfyUI.
Great video, thanks! How do we create additional images that are based on the first image? For example, once you create the cover for LUNA, how do you create all subsequent images using the same character and style?
might need a more advanced workflow for that
I am currently using ComfyUI with Dreamshaper 2.1 Turbo as the checkpoint; the combination works well with my AMD system. My question: does Flux run stably on an AMD system? I have a 7600 XT with 16GB VRAM, 32GB RAM, and an AMD Ryzen 5 7500F.
If you're able to run ComfyUI already, I don't see why a FLUX workflow wouldn't work, but the question remains whether it's going to be fast enough.
Using the workflow I get an error
SamplerCustomAdvanced
mat1 and mat2 shapes cannot be multiplied (77x2048 and 4096x3072)
how would I fix this?
I am on an AMD RX 7600 XT GPU. For me, dreamshaper_8 works very well in ComfyUI via a checkpoint, but no matter how hard I try, I can't get flux_1 working.
I followed your instructions, as well as other tutorials. I even updated ComfyUI to the latest version. But every time I queue a prompt, it gets stuck at the first node, 'Load Checkpoint', for a minute and then the server crashes. No error in CMD, just the 'got prompt' message.
I even tried the checkpoint version of Flux from their official documentation; still the same issue. I wonder if I'm doing something wrong, or if Flux is not yet supported on AMD GPUs.
That sucks! Yeah, it could be a compatibility issue. Did you try launching ComfyUI using the CPU instead?
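For reference, CPU mode is a launch option; a minimal sketch assuming a default install (expect it to be very slow):

    python main.py --cpu

The portable Windows build also ships a run_cpu.bat that does the same.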
I have the same problem, except mine says:
got prompt
model_type FLOW
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
and then it hangs forever (I've waited hours). RTX 3060 Ti.
Same issue here.
"File isn't available on the site". Downloading DEV version doesn't work rn
sign in
Hi bro, I can't find the VAE [dev] diffusion_pytorch_model.safetensors.
Can you share it with me? The file download is restricted.
you need to sign in
@@MDMZ I use an AMD GPU. Can you show how to set up FLUX on AMD?
@@moviefamily1110 did you find an answer?
Is there any image to image option?
you can definitely build a workflow for that, the possibilities are pretty much endless
I'm just getting the following error and can't really find a solution:
UNETLoader
invalid load key, '\x03'.
It also only shows .gguf files and no .safetensors :/
Hey, can you drop this on Discord?
Any idea why, when I drop in the JSON and go to change any of the settings, everything just shows "undefined" and doesn't see the files?
You need to download the necessary files/models and place them in the right folders.
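For a typical FLUX setup that means something like the layout below (file names vary depending on which repo you downloaded from; for example, the official ComfyUI example uses ae.safetensors for the VAE, while the video uses diffusion_pytorch_model.safetensors):

    ComfyUI/models/unet/flux1-dev.safetensors
    ComfyUI/models/vae/ae.safetensors
    ComfyUI/models/clip/clip_l.safetensors
    ComfyUI/models/clip/t5xxl_fp16.safetensors   (or the fp8 version)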
My queue gets stuck at UNET loading. Any idea what to do?
Did you try restarting and re-generating? Usually, if something's wrong, a text error is displayed.
@@MDMZ Thanks for replying! Actually, there's no text error, and the models are in the right folders. In the console it just stops at "model_type FLUX", and the workflows are not the problem. I noticed that my GPU wasn't active; ComfyUI simply can't get past the Load Diffusion Model node. I tried it with Schnell, Dev, fp16, and fp8; nothing is working. I don't understand.
How much time per image would it take on 12GB of VRAM to make 2048x2048 images? Has anyone recorded times for that kind of resolution? Please share all kinds of timings at high resolution. I don't like low-resolution generations, and upscaling landscapes seems to ruin everything.
Might be bad, if he called 12GB or 8GB GPUs "low end" 💀
I guess it's relative: low-end in this context; 12GB is decent in other scenarios.
@@MDMZ In my Stable Audio runs, the T5 model usually consumes a lot. Is it the same for this model?
Please make a complete tutorial on Applio RVC
Never heard of it, I will check it out.
Using fp8 for the weight dtype will save GPU memory.
yep worth trying
I have it all, but there's no flux option in the DualCLIPLoader; I only have sdxl and sd3. Please advise, I'm so close now ;-)
make sure you download all the files mentioned, and update your comfyui
Great video! Thanks 🙏👍
Hi MDMZ, can you load a custom LoRA (or multiple LoRAs) into this workflow to generate a couple photo?
I believe you can, I'm yet to try it out myself
What's the new fp8 scaled encoder? Anyone? Is it a better version of the normal fp8 one? Like, would you use it in place of the original, or are they completely unrelated...
The difference really depends on many factors and is mostly unnoticeable. If you're on low VRAM, use fp8; it's going to be faster.
seems like the dev version is access restricted at the moment, but the other one is accessible.
Oh, what makes you think so?
Man, mine says: UNETLoader
Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 0]).
"Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: type: 'flux' not in ['sdxl', 'sd3']"
Huh?
did you download the flux model and place it in the correct folder ?
@@MDMZ It's okay I got it, thanks though, I just had to download some extra bits I thought I already had. And yeah I had it in the wrong folder. :)
How do you get the image bar at the bottom?
How do I tell the AI not to create a blurry, out-of-focus background?
you can try being specific about it in the prompt
Did you solve this problem?
Try putting "bokeh" (the photography term for the depth-of-field effect) in your negative prompt.
Are the Flux models down on HF?
Does Schnell work normally on a 3070 Ti with 8GB?
I'm afraid 8GB is too low, but you can try this: civitai.com/articles/6653/fast-tutorial-to-run-flux-on-8gb-vram
I have an error :(
What can I do?
Error occurred when executing VAEDecode:
'VAE' object has no attribute 'vae_dtype'
I got the same problem, did you solve it?
I have everything downloaded and in the correct folders, but when I hit Queue Prompt it gets stuck on a "Reconnecting..." pop-up.
I'd just restart comfyui and see if that helps
any fix?
does this process work on mac too?
Probably on M3 only
Thank you!
4090, about 20 min... something's wrong?
20 mins for a single image? yeah that's not normal
Same here, using the unet flux1-dev. Not sure why it's taking so long with only 4 steps.
I can't find it in my ComfyUI Manager section. Help me please.
thanks a lot
You are most welcome
Great tutorial, short and very precise, but in fact Flux runs slooow, and I have 24GB of VRAM.
yep
How do I install image-to-3D in ComfyUI? Please make videos on Trellis and other 3D AI tools like TripoSR, AiUni, etc.
One thing I do not understand... I can download checkpoints for Flux, but I have not found one single tutorial on how to use them. I prefer Stable Diffusion, because everything just works, while Flux has been a constant headache, plus ComfyUI is so bad to use.
the checkpoint IS flux
@@MDMZ Four days later, and I have learned a lot by trial and error! The checkpoint I had was a diffusion (unet) model, so I've got it working now.
@@AndrewTSq happy to hear
Can Flux be used with 4GB of video memory using ComfyUI on my PC?
I doubt that, it might work on very low settings
I constantly get this error: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 135, 102] to have 4 channels, but got 16 channels instead. What does it mean?
Thank you, man. How can I load an image in ComfyUI, like MJ in Discord?
you will need to setup an img2img workflow
Do we have a similar tool for video generation?
Stable Video Diffusion, Gen-3, and Dream Machine are all great options, but not up to this quality.
Hey mate, great tutorial. If you made the tutorials more detailed and removed the music, it would be great for learning purposes.
thanks for the feedback, I do go into details in other more complex videos
Do you have a Mac guide for me?
It says it is restricted; I cannot download the model.
you need to sign in
@@MDMZ I did, but it's the same message.
The output is just a black picture.
Did you try reinstalling? Or using different resolutions/ratios and other settings? Also make sure you got all the correct files.
After going through the entire process, my PC couldn't run it, as its specs are not high enough 😭😭
But great tutorial tho
😰
2:51 => "Welcome hell". LOL. Also, you want to hyphenate "smiley-face" otherwise you'll get a token drift.
3:08 => There's no such thing as "hyper-realistic". Something is either real or it is not. Also, "portrait" is a painting, so you're asking the machine for both a realistic image and a painting. You kind of screwed yourself there. You can't ask for "detailed skin textures" because it's not a 3D model. You have to render at a higher resolution than 512x or 768x.
5:24 => How can a network of machines know what "fitting perfectly with the theme" means? It's not another human. You're watering down your prompts, which leads to worse results.
hey friend, you're telling a human what he told a machine that was taught by a human that he's wrong. A better understanding of tokenization might help you get why certain tags (tokens) "work" because they are specific to the dataset the model was trained on. Example: using danbooru tags as opposed to natural language can give you more ideal results based on the tags used to train the model.
thanks for the feedback
RTX 3060: will that be OK?
hard to guess, give it a try
Your tutorial is good, but it's misleading: don't store fully trained models in the unet folder. They should be stored in checkpoints, because checkpoints contain the full model weights, including the U-Net.
FLUX is a unet model so it needs to be placed in the unet folder, and it works!
@@MDMZ Oh, never mind, I spoke too soon. You are right! lol
Please make a complete step by step video tutorial on DeepFake generation using ComfyUI. I appreciate your hard work.
I think I need subtitles, my listening is not very good 😢
you can turn on the captions
A million tutorials on how to use FLUX in ComfyUI, very few on how to set everything up in a normal UI for ordinary people, and none on how to use ControlNet with it without ComfyUI involved.
ControlNets for Flux didn't even exist when this video was made; things like this change a lot over time.
But you will have to pay for the Pro version; the other, free versions don't generate good images.
Flux is free
@@MDMZ the free version is generating crap, I saw the option for the pro version for better generations but I gotta pay, which site are you using to generate with flux??
@@khanisbrown6424 I'm using the same process I highlighted in this video :) it's FREE, and NOT generating crap, it's very impressive
@@MDMZ ok I appreciate it
@@khanisbrown6424 You animal. First watch the video, then ask questions.
I am running it on a 2GB RAM computer 😢
😰
"Schnell" is German for "fast" :D
I learned something new today
Flux is awesome. Already generated like 4k images with it 😬😅
It really is!
Is it free? If not, how much $$?
Free!
I think I need a 4090; my 4060 laptop is too slow to use Flux 😢.
It takes about 5 minutes to produce a 1024x1024 picture with Flux Dev fp8. Oh god, who can buy me a 4090? 😥
Flux requires a lot of VRAM, but I think it's a bit faster now after the latest ComfyUI update.
Hey, can you send me the files? The Flux owner may have removed them from the site.
@hasstv9393 the files are still there. Just make sure you are logged in
@@MDMZ oh thank you man!
Make more photoshop tutorials
I'd love to!
Has anyone tried on a MacBook Air M1 8GB? And, how much storage does it need for installation?
This is outdated now; there's a new ComfyUI.
2:05
Things are moving FAST!
FLUX is truly amazing, but what you're doing is just subpar. It's a little bit of this and a little bit of that... What's the point of making these videos if you don't take the time to explain things properly?
hi, this video is to help you get started without going too deep into technical aspects, I have more advanced videos on the channel
"It looks amazing"... yes maybe, but nowhere in hell does it actually look realistic. It's way too cartoony, at least for the samples you showed.
Yeah, I saw another video where they use slightly worse quality, and it makes it look so real.
No bro, it is what it is (it's really that realistic).
No dude, it's freakishly realistic. Even better than Midjourney, I believe.
@MDMZ Hey there! With the help of your awesome video, I managed to set up ComfyUI and Flux.
Everything looks fine and it's running. After finishing the job, I get only a black image.
Diffusion Model is in /unet/flux1-dev.safetensors.
VAE is /diffusion_pytorch_model.safetensors.
DualClipLoader /clip/model-00001-of-00002.safetensors and /model-00002-of-00002.safetensors.
Everything is also entered correctly in ComfyUI.
Any ideas or suggestions?
It could be due to different reasons. I'd try reinstalling; if it persists, it could be a hardware issue.