Great tutorial!
Appreciate you.
Clear, concise, no waffle, no long story or boring intro logo animation. Just bang, bang, done!!
I appreciate the feedback you provided and thank you for mentioning it.
I'm very glad I found your channel. You are covering everything I need to learn 🙂
Welcome aboard! Thank you for being here. I appreciate your comment.
THANK YOU. I found a few other tutorials before this one with what seemed to be easy steps, but they didn't work even though the commenters said my process looked right. This one worked effortlessly
Thank you for sharing that. It is inspiring to hear that this one went so easily.
Thank you for your time making this tutorial, it's great.👍
I appreciate your feedback. Thank you for sharing your thoughts!
Fantastic, simple tutorial. Thank you!
You are welcome, and thanks a ton for the nice words!
Yes, a well-explained tutorial, thank you
You are very welcome, and I appreciate your encouraging feedback.
God! You are my dear man! I'll light a candle for you in the church.
Thank you for your kind words and support. Happy creating!
Thanks so much, this is by far the easiest tutorial to follow
Thank you for sharing your feedback, I'm glad that you enjoyed the process.
Really well done tutorial and easy to follow. Thank you!
Thank you as well, I appreciate it.
Only good AI tutorial channel
Thank you for the glowing compliment.💛
Very clear tutorial, no skipped steps, well done 👍 Subbed mate
Thank you for your support.
Great tutorial. Thanks!
You are welcome, and I am glad it was helpful!
I need to download a workflow for this to work, correct? I downloaded all of the models but don't have the Load diffusion model etc. like you do. I already installed the manager and everything is up to date. What is the exact name of the workflow I should get to follow this video please?
Thank you. Clear and to the point.
You're welcome! Thank you for your feedback.
Perfect tutorial, but you should also mention that the manager is an extra installation. Basic ComfyUI does not include the manager.
What does it take to get the manager installed? Thanks.
Exactly, I have no idea wtf that is
Thank you sir.
So nice of you
i am missing a lot of things like dualclip loader , load diffusion model, .... and many more... also how did you get the life stats on the upper right?
Hello there, you are missing the updated nodes, so please update ComfyUI and then they will be available for the workflow I showed in the video.
To get the resource monitor on the upper right, you can watch this video to learn how to get that custom node. th-cam.com/video/tIfr_duWyZQ/w-d-xo.htmlsi=eK-uIfDpEEH_Cgx4
Not too sure why, but i don't see the manager tab like you have in your video??
Sorry, I did not mention that. You can install the manager using the link provided here. th-cam.com/video/Nvm7dmwEX2Y/w-d-xo.htmlsi=8kRFQWnJPpoADA3C
Thank you! Very well explained
I'm glad the explanation helped you. Thank you
Great video, thank you! The fp16 version seems to still be running slow generating a 1024x1024 image on a 4070ti and 32gb of ram - you said it would work faster after the first time? How else can we speed this up?
You are correct, it may still take a few minutes to complete an image. To solve this, use the GGUF quantised models for similar results with shorter generation times.
I have explained the entire process in this video. th-cam.com/video/AzeZkosyqp4/w-d-xo.htmlsi=lg883eetyKDESjUZ
Please don't abandon this thread. Where can I get this specific workflow? I downloaded everything including the manager and still have nothing that can be seen in the video.
Hello there, you can download the final JPEG image here. drive.google.com/drive/folders/1pAWHQQ3-ZgB8TEO6b9DjkjLvTnvrgn4D?usp=sharing
Simply drag and drop the final image into ComfyUI to get the workflow.
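As a side note, ComfyUI embeds the workflow as JSON in PNG text metadata, which is what makes drag-and-drop restore the graph; JPEG exports usually do not carry it. If you want to check whether an image actually contains a workflow before dragging it in, here is a small sketch (the function name is my own, and it assumes the Pillow library is installed):

```python
import json

from PIL import Image  # Pillow


def extract_workflow(image_path):
    """Return the embedded ComfyUI workflow dict from an image, or None.

    ComfyUI writes the graph as JSON into PNG text chunks:
    "workflow" holds the editable graph, "prompt" the executed nodes.
    """
    info = Image.open(image_path).info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None
```

If this returns None for a downloaded image, re-save the workflow from ComfyUI as a PNG output and drag that in instead.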
@@goshniiAI thank you
Hi! When you go to: "Manager" > "Custom Nodes Manager", what nodes do you have installed?
I only have "ComfyUI-Manager", so I think this is the reason I don't have the nodes: neither "UNETLoader" nor "Load Diffusion Model".
Finally. I get it. In summary:
- Install ComfyUI
- Install ComfyUI-Manager
- Download Flux as you do in the video
- Drag and Drop an image to the canvas of ComfyUI, and then the new nodes appear
You are almost there. Once in the manager:
1. Click (Update All) to update your custom nodes.
2. After that, click on Update ComfyUI.
3. Restart ComfyUI, and the nodes will be available.
@@goshniiAI Finally. I have managed to create an image. Although I ran the model on the GPU, as I have a GTX 1080, it took more than 10 minutes.
@@JaviArte Congratulations on getting that first image generated! FLUX can be a bit demanding, which might explain the longer generation time.
Thank you for the instructions! It looks like the VAE moved, do you have an alternate link?
Thank you for pointing that out; I have provided the link here. tinyurl.com/4nd53j7f
ComfyUI seems to not be able to find the UNet, any help?
Hello there, Double-check that the file path is correct and that the model is properly placed within the right folder.
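To make "double-check the file path" concrete, here is a small sketch that reports which expected FLUX files are missing from an install. The folder names follow the standard ComfyUI `models` layout; the exact filenames are examples from this kind of setup, so adjust them to the files you actually downloaded:

```python
from pathlib import Path

# Example filenames only - edit these to match your downloads.
# The folder layout is the standard ComfyUI "models" directory.
EXPECTED = {
    "models/unet": ["flux1-schnell.safetensors"],
    "models/vae": ["ae.safetensors"],
    "models/clip": ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"],
}


def missing_model_files(comfy_root):
    """Return the expected model files that are not present under comfy_root."""
    root = Path(comfy_root)
    missing = []
    for folder, names in EXPECTED.items():
        for name in names:
            if not (root / folder / name).is_file():
                missing.append(f"{folder}/{name}")
    return missing
```

Run it with the path to your ComfyUI folder; an empty list means every listed file is in place, and any entry it prints tells you exactly which folder to fix.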
How do you get the "Manager" button?
I only see the buttons from "Save" to "Reset View".
EDIT: I have seen that you have to install the "ComfyUI Manager" to get the "Manager" and "Share" button.
Then, I clicked on "Update All" and "Update ComfyUI", but I don't see the other nodes.
Hello there, I apologise for not highlighting that. To assist you in installing the manager, please see the video guide provided here. th-cam.com/video/Nvm7dmwEX2Y/w-d-xo.html
Hey my friend I'm still stuck at the ComfyUI_windows_portable>pause error :( everything is in place I promise, and I have already opened Comfy before your video so I know it works... any suggestions?
Yeah, you're right. The first generation is going to take longer than usual.
Here's what you need to do:
1. Save the workflow and restart your computer to make sure no heavy programs are running.
2. Run ComfyUI as soon as your computer is back on, then queue your prompt again.
3. For the first generation, just hang tight and be patient until it's complete.
One of the best tutorials yet for installing a workflow in ComfyUI. Thanks, it worked on the first try. You talked about a link for the Sampler custom node; I didn't find it, can you help me with that? Again, thanks a lot.
I appreciate your kind words. I am glad everything worked for you on the first try. 🎉
For the Sampler custom node, I will drop the link here if needed. th-cam.com/video/mYmopXiFW2I/w-d-xo.htmlsi=5O7ci-FgTAJK3gtF
I previously had Stable Diffusion on ComfyUI, so when I run it, it starts Stable Diffusion. Do you know how to switch models? Thanks
You will have to select the FLUX model instead of the default Stable Diffusion one. Once that’s done, save your workflow, and it should start with FLUX next time you run ComfyUI.
Just a heads up that they've changed the filename for ae.sft and so the link no longer works from the main page.
You're right, all the links will be updated in the description. Thank you for the heads-up
6:11 I waited for the generation for like 20 minutes and nothing happened. It shows that ComfyUI is using 100% of both my SSD and HDD. I closed ComfyUI because I have no other solution. I have 12GB of VRAM and 24GB of RAM. What's the problem?
Yes, you are correct. It will take longer than usual for the first generation.
- Save the workflow and restart your computer to ensure that no intensive programmes are running.
- Run ComfyUI first thing after resuming, then queue your prompt again.
- For the first generation, be patient until it is complete.
When I open ComfyUI, there are no Manager and Share buttons?
Hello there, the absence of the manager means that it is not installed. Make sure to install the ComfyUI manager first.
The video link that follows can help you with that. th-cam.com/video/Nvm7dmwEX2Y/w-d-xo.html
@@goshniiAI I have a laptop with 32GB RAM and a 3050 Ti with 4GB, is that good?
Will 24GB RAM struggle with FP16?
For most setups, 24GB of RAM should handle FP16 reasonably well. Adding a bit more RAM or fine-tuning your VRAM usage could help if you notice slowdowns.
My installed RAM is 32GB, with 12GB of VRAM.
OOT, how do you show the GPU/CPU/RAM bars under the "Queue Prompt" button on the right side? Is it a custom node?
Yes, this is a custom node. You can find the tutorial video here th-cam.com/video/fSwnE2NKEzk/w-d-xo.htmlsi=SLWkj1WyitZ5v6wI
Hi! I have an NVIDIA RTX 3080 and an i9 processor; it takes about 100s to make an image with the basic workflow and the FP16 version. But before it starts making the image, it kind of freezes for 20 secs with this message: "Model doesn't have a device attribute". Any help?
Ensure that your VRAM isn’t maxing out during the initial load. If it is, reducing resolution might help. Additionally, if you’re using the FP16 version, try switching to FP8 to reduce loading times.
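To see why FP8 makes such a difference, a quick back-of-the-envelope estimate helps: FLUX.1 has roughly 12 billion parameters, so just holding the weights needs about twice the memory at FP16 (2 bytes per parameter) as at FP8 (1 byte). The helper below is my own illustration and ignores activations, the text encoders, and the VAE:

```python
def model_vram_gb(params_billion, bytes_per_param):
    """Rough VRAM (GiB) needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)


# FLUX.1 is roughly a 12B-parameter model (approximate figure).
fp16_gb = model_vram_gb(12, 2)  # about 22 GiB: spills past a 10-12 GB card
fp8_gb = model_vram_gb(12, 1)   # about 11 GiB: borderline fit on 12 GB
```

When the weights alone exceed a card's VRAM, ComfyUI falls back to offloading into system RAM, which is likely a big part of the slow first load people report here.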
Please, I don't know too much and I'm trying to install it. Now I find that they changed the file format (flux1-schnell.sft → flux1-schnell.safetensors), so is there a different file, or can I proceed the same way as you did?
Yes, you can still go ahead. The models remain the same; the file format has just been updated to safetensors, making them safer.
@@goshniiAI thanks for your fast reply 😘
ComfyUI_windows_portable>pause
what to do??
Check that everything is in the correct folder. This can happen when ComfyUI is unable to locate an important file or there is a path issue.
Thanks!
You are welcome !
I followed all the steps, but it didn't work in the CMD. It writes "got prompt" and points to Load Diffusion Model. Please help
I also experienced this; I used the FP8 model and parameters rather than FP16 to overcome it. It could be a system resource issue. I waited for many minutes before the first image process began. The CMD may appear inactive, but the initial step is time-consuming.
Is there any way to use different models, like for Stable Diffusion, or are we stuck with the Flux one at the moment?
For the time being, yes, but hopefully we might see different diffusion models or trained models built on Flux, as well as the workflow.
I am having this error: "UNETLoader"
Hello there, check the model directory to ensure that all necessary model files, including the UNet model, are there.
- Re-download any missing or corrupted FLUX model files.
- Monitor your GPU's VRAM usage, and lower the resolution or use quantised models if VRAM is limited.
- Check for compatibility between FLUX and your UNet version, as well as any additional requirements.
I hope these help
Do we have to install checkpoints?
Yes, the checkpoints are your UNet models
Can you please help me?
When I try to run Queue Prompt, it says "Reconnecting" and ComfyUI closes automatically.
Hello there, I also got stuck in my process. Save the workflow and restart your PC or system to ensure that no intensive applications are running.
Run comfy first thing after resuming, then queue your prompt again.
Remain patient until completion for the first generation.
Could A1111 be used? What about loras? What is highest resolution without upscale? Is it good for anime images? Hopefully they add negative prompts, I don't like blurry images at all. For some reason, most of these AIs blur the environment, it ruins everything.
Hello there,
1. Unfortunately, FLUX is designed to work seamlessly within ComfyUI, so A1111 is not supported at the moment.
2. The highest resolution without upscaling depends on your GPU’s VRAM.
3. Yes! FLUX can produce impressive anime-style images, especially with the right prompts.
4. Negative prompts are an effective way to control unwanted elements, and many users are hoping for more control in this area too.
Let’s hope for continued improvements in these models.
When I generate images, it focuses on the subject and blurs the background. What is the solution? I don't want to blur the background.
You may adjust your prompt to describe the background explicitly (for example, "sharp background" or "deep depth of field") to achieve a clearer background. You can also experiment with other sampler settings. Flux is powerful, but it sometimes requires a little push!
24GB system RAM & a 3060 with 12GB VRAM takes too much time to generate. Any solution?
The first image generation may take longer. However, generations after that should be faster.
TIPS:
- Reducing the resolution of your outputs may greatly speed up the process.
- Make sure no extra background processes are consuming your system's resources.
@@goshniiAI Thank you, last question. The output image looks dull, is there any setting to improve quality?
@@CloudVideoYT If your output images are looking a bit dull:
1. Experiment with different samplers in ComfyUI; some may give you sharper and more vibrant images.
2. Increase the resolution of the image generation, though it may take a bit longer to process.
3. Sometimes, adding specific terms like "vivid," "high-definition," or "sharp details" to your prompts can make a noticeable difference.
I hope any of these helps.
@@goshniiAI Thank you so much, I'll try.
Thanks!
you are welcome
Hi, my model is not loading in ComfyUI. I have followed your tutorial and done everything,
but it is still stuck on the model loading part.
In my CMD it's showing like this:
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
model weight dtype torch.float8_e5m2, manual cast: torch.bfloat16
model_type FLOW
After this nothing is happening.
Please help
Hello there, the first image process takes longer than usual, so you may think it's stuck.
You can try this, as I did:
Save the workflow and restart your PC to ensure that no intensive applications are running.
Run Comfy first thing after resuming, then queue your prompt again.
Remain patient until completion for the first generation.
The following images will be faster.
@@goshniiAI Hi, I am completely new to this.
I have a 3060 Ti with 8GB VRAM.
Is this enough to run this model?
@@sherwinrogerrd9021 Yes! You might not get the same speed as higher-end GPUs, but it's a solid setup to get started with. Just keep an eye on your VRAM usage, especially if you're working with high-resolution images or complex workflows.
@@goshniiAI Thanks for your guidance.
It's working now 🙌
@@sherwinrogerrd9021 Awesome!! Thank you for sharing the update.
Is it possible to install FLUX with an AMD graphics card?
Yes, it is possible to run FLUX with an AMD graphics card, but it might require a bit more setup compared to NVIDIA GPUs. FLUX relies on certain dependencies that are optimised for NVIDIA, so you'll need to ensure you have the right drivers and possibly make some adjustments in your setup.
Very helpful tutorial! But when I finally clicked on the "queue prompt", it said "TypeError: Failed to fetch". Has anyone encountered the same problem?
You could try restarting ComfyUI, checking your internet connection, or switching browsers. It usually happens when the network or server does not respond properly.
@@goshniiAI Thank you!
It seems I needed the checkpoints as well for FLUX Schnell?
You're right! Once that's done, it should work without any issues.
In my ComfyUI there is no "Manager" option below "Reset View".
Also update the bat file "update_comfyui_and_python_dependencies".
Hello there, you can view this video to learn how to install the manager. th-cam.com/video/Nvm7dmwEX2Y/w-d-xo.htmlsi=G74A16TBheVVrJPP
How many seconds does it take to generate an image?
The time it takes to generate an image can vary according to your hardware, GPU, and the specific settings you're using, but on average, it took me 1-3 minutes after my initial generation using FLUX.
Hi mate! Can I install Flux Dev based on this tutorial?
Absolutely! Then you can go for the Dev model instead of Schnell.
I have an RTX 4090 and it took over 20 mins to generate - any idea why?
I guessed only the first one would take this long, but all my generated pics need this long.
When I reduce the resolution, I only "win" 1-3 mins.. this is weird..
With an RTX 4090 especially, your generations should go faster. However, you are correct that the initial image will take many minutes to process. Fortunately, after that, everything should be quick.
@@goshniiAI I only have that problem with Flux; with all other models I generate 50 images in like 50 seconds - but Flux always takes 20 min for just one image :/
I did all you said, but instead of "Load Diffusion Model" I have the node "UNETLoader" and cannot choose the .sft file.
This is due to an old version of ComfyUI. Using the manager, you can upgrade all of your nodes and ComfyUI to the latest version.
@@goshniiAI Yeah I figured it out already. Thx anyway
@@paulmichaelfreedman8334 You are welcome. I'm glad to read that.
Hmm, why don't I see the "Manager" tab once I launch ComfyUI?
To see the manager tab, you will have to install it separately.
@@goshniiAI thanks for the reply and for the videos. im getting started with this and I am trying to understand many things yet. I will search for the tutorial on how to install that manager tab. if you have one please let me know
@@dfgoijoi You're very welcome! It's great to hear you're diving into this exciting world. The link here can guide you through the manager and a few installations. th-cam.com/video/Nvm7dmwEX2Y/w-d-xo.html
Does it only work with NVIDIA?
Yes, Flux currently works best with NVIDIA GPUs because they support CUDA. You may still be able to use an AMD GPU setup, but performance and compatibility will vary a bit.
Is it only for ComfyUI? Not a big fan of Comfy.
I totally understand. If you're looking for alternatives, FLUX can also be used with other interfaces like Forge or online WebUIs.
Each platform has its own strengths, so it’s worth experimenting to find the one that fits your workflow best.
@@goshniiAI Forge works well, but I'm getting blurry images when using hi-res; also, inpaint doesn't work - it gives me this weird high-contrast colour instead.
Hello my friend, what is the name of the browser you are using?
Hi there, that's BRAVE, the browser name.
@@goshniiAI Thanks, sir.
My prompt executed in 3794.07 seconds, can you help...?
You're right, that is long. The first image-processing step can take much longer; the generations that follow, though, should be quicker.
@@goshniiAI they are still this long :/
If I have 32GB of RAM, am I downloading the FP16 or FP8? lol
Any of them will work properly, although FP8 will be more efficient. I am also running on 32GB system memory.
@@goshniiAI i see, thanks buddy, i would go for efficiency then..
@@m3ng0789 You are welcome. Happy creating....
It requires very high GPU VRAM
You're absolutely right, it could be one of the trade-offs for getting those high-quality results.
But you can always adjust the settings for a lower GPU load.
Running with 8GB VRAM - possible or not?
@@damird9635 8GB of VRAM is on the low end, but it is still feasible. You may need to reduce the resolution or other resource-intensive settings and use the FP8 model for it to run smoothly.
@@goshniiAI You would need like 72GB+ system RAM in addition to the 8GB, but yeah, 8GB is the lowest possible and it'd take ages to gen.
@@goshniiAI Thank you, the FP8 models work well: Dev takes around 1 min at 1024x1024, Schnell around 20 seconds.
FLUX Schnell model link returns 404
Thank you for the heads-up! I'll look into it and update the link as soon as possible.
should this work also on mac? I get this error "Error occurred when executing SamplerCustomAdvanced:
Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype."
Hello There, Unfortunately, I haven't tested ComfyUI on a Mac, as I primarily use Windows. However, you might want to try checking for any updates to your MPS or PyTorch setup, or consider running it in a different environment if possible.
The VAE file is not found on the site!?
You are right, the VAE has been updated to safetensors, but you can find the old model file to download here. tinyurl.com/4nd53j7f