Download all the workflows from the pixaroma-workflows channel on Discord. The links are also organized there.
discord.gg/a8ZM7Qtsqq
If you have any questions, you can post them on Discord or in the Pixaroma Community Group: facebook.com/groups/pixaromacommunity
First time I ever sent a tip on TH-cam. Thanks for the amazing work; the way you share your knowledge with such great pedagogy is truly appreciated. I hope you continue to create this kind of video, which is very valuable and allows beginners like me to approach these topics without any friction.
Thank you so much for your support, more tutorials are coming each week ☺️
Since Flux launched, I kept running away from it due to how complicated it looked. You explained pretty much everything needed to get started. Thank you for this video!
I have to say that I really appreciate your help learning all of this. Your videos are very well made, detailed and informative. I have NEVER wanted to use ComfyUI and now I LOVE IT! I watched several videos and they always made my head hurt!! Yours made sense!! Great job! I'm looking forward to watching more of them... I know I am barely scratching the surface! 🤓
I appreciate that, thank you 😊
The way you explained this Flux model is clear and crisp 👍🏼👍🏼
Excellent, no really excellent step by step instructions 👍🏼👍🏼
I cannot explain how much this video means to me. Fantastic!!
Oh, and you have a very pleasant voice.
Thanks again!
My dude, you are the single best software trainer I have seen on YT
bro you are crazyyyyy!!!!
your content, this series, is a paid course's worth of material for free. You are a very decent teacher, and I just binged the whole series and discovered you uploaded this right when I was looking for Flux content to get started!!!
Thanks a lot, I wish I could support you more
thanks, glad I could help :)
Incredible work! This playlist of yours is a service to the world. Thank you very much! Subscribed and liked.
thank you 🙂
This was great. I did a side by side test using Euler/Normal and your advice... huge difference. Thanks!
Glad it helped ☺️
Brilliant tutorial series !!! The best comfyui tutorial series on youtube by far ..
Thank you so much!
This must have been a lot of work.
And it helped me so much to understand.
I'm glad I found your channel.
Cheers 😊
thank you 🙂
Great course! I have one contribution. In this chapter, you could explain that by downloading the image and loading it into ComfyUI, it grabs the metadata along with the entire workflow. I was initially confused about why we needed to download the image, but then I figured it out.
Thanks, I think I explained it in one of the episodes, but it's good to have it in the comments in case someone looks for that info
Thanks for your great contribution. Your videos and other materials provide a comprehensive and easy-to-follow resource for using ComfyUI and Flux together.
You said to leave a comment or a like if we find something useful... your video is full of useful!
Thanks for the tutorial. Really well explained and great editing.
This is the information I was looking for, presented so efficiently and neatly. Thanks!
Excellent tutorial, very detailed without being too complex. Thank you!
Thanks a ton for the amazing effort you put into your tutorials. They truly stand out!
amazing vid as always! thank you!
Great tutorial series, very easy to understand. Thank you :)
yeahh finally got the proper updates! thanks very much!
Impressive work and great video! Thank you very much!
Great tutorial. I always learn so much from your videos!! 😊
amazing tutorial
thank you very much
great video, I am going to install and try this today. I am currently using SD Forge to do some transparent bg images.
always the best
Great video! Thank you!
Really useful. Thank you so much!
Thank you for this video. Very interesting for me
thanks you so much!!!
Unfortunately, the Flux Pro version is not available any longer. By the way, I think Stable Diffusion is obsolete; Flux (and Mystic) is erasing it. Thank you for this lesson, again.
Flux Pro was only available as an API, never for download. Flux is really good for a first version; I hope they will release more.
thank you
great effort here
Great video !😊
This is the best flux beginner tutorial I have found on TH-cam and it has everything I need to get started. Thank you so much! So it looks like FP8 might scratch my itch under most circumstances. What I am wondering about is, with most LoRAs available on the internet being generated against SD1.5 or SDXL, what am I gonna combine with Flux? Also, where could I find additional styles like you have in your CSV, which are recognised by Flux?
Check episode 7 for styles and how to use them. Only LoRAs made for Flux will work with Flux; you cannot combine SD1.5 or SDXL LoRAs with it. Also, I saw that ComfyUI needs converted Flux LoRAs; the default ones didn't work.
Hey, thanks for sharing this! With the Flux Schnell model I'm getting this "CheckpointLoaderSimple
Error while deserializing header: HeaderTooLarge" when queueing. I'm running ComfyUI on RunPod. Can you please advise?
Someone on Forge had a similar problem; for him, re-downloading the model fixed it (that error usually means an incomplete or corrupted download). Something similar happened with other UIs: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8123
Thanks a lot!
Thanks!
Is it possible to use ComfyUI for changing my own videos? Like to change the style/colors of a video?
Not sure, I haven't researched video yet; I am still on image workflows
Thanks!
thank you so much 😊
The most excellent tutorial series I've watched so far for ComfyUI and FLUX. Great work, and thank you! By the way, can you share how you made the animation at the end of this episode? D-ID or HeyGen?
I used image-to-video on the Kling AI website to get that animation
@@pixaroma Many thanks!
Sir, your tutorial is great. will you teach about Control Net?
Yes, I will do SDXL ControlNet first, and in later videos, when more ControlNets appear for Flux, I'll cover those as well
@@pixaroma Thank you very much, and I wish you a happy life.
How do you use the LoRA and the styles together? Because the Easy Positive node doesn't have the clip input, so I don't know how to do it
You don't connect it to Easy Positive; you connect it to the CLIP Text Encode node. If you right-click on the text encoder you can convert the widget to input, so you have 2 inputs, clip and text: the LoRA goes to clip, and the positive goes to text. I have examples on Discord in pixaroma-workflows
more, more more :! :))
Is it possible to have a separate directory for my Flux models on a different drive? I really don't want to move my entire Automatic1111 folder to a different drive
check this video th-cam.com/video/nkFr81sOehU/w-d-xo.html
How would you simplify their workflow?
I also find it needlessly complex.
I mostly use dev fp8 now; the workflow looks similar to SDXL and the quality is good. I have more compact versions of those workflows on Discord
When I try to use the prompt multiple styles selector node, the message "none" appears instead of styles. What should I do? I couldn't find a solution.
Can you post on Discord? You probably missed a step. Post some screenshots there in the comfyui channel; the link to Discord is in the YouTube header where the links are located
I tried fp8, using a 4070 12GB card with 32GB of RAM, and this is the second time it crashed. Not as violently as the 16-bit version. I'm still not sure what I'm doing wrong. It also took forever to load. I bought more RAM; I'm hoping that will help out here. I can only assume I'm still maxing out the RAM. Maybe a bigger swap?
It shouldn't crash; I can run it on 6GB of VRAM, it just takes a long time. Try looking up the GGUF models and nodes; there are Flux dev Q4, Q6, Q8 and so on. Start with the Q4 version and see if that one works. The workflow is a little different, so search for a tutorial online first
E-boy! Still on the road, will watch the video in a couple hours.
Hello, that was very helpful, but can I ask why I can't queue the prompt with a weight_dtype other than fp8_e4m3fn? It just gets stuck on Load Diffusion Model, and then ComfyUI just says Reconnecting, while I don't receive any specific message in the console
Usually when it does that, it has run out of memory, so it is essentially crashing. Maybe the other types are too much for your PC. You can also try adding arguments in the run .bat file, like --lowvram, to see if it helps
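For reference, in the ComfyUI portable build the --lowvram flag goes at the end of the launch line inside the run .bat file; a rough sketch of an edited run_nvidia_gpu.bat (your paths may differ depending on your install):

```bat
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```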
@@pixaroma yeah I was thinking of that, just opened performance screen and see that all of 32 GB RAM is gone. Thank you for the answer!
Could you please make a tutorial about prompting for image generation?
I cover that in this video: th-cam.com/video/0Pmfpoi3-A0/w-d-xo.htmlsi=MA21lIwQKIJCoyom For Flux, just use normal language, or ChatGPT; Flux is quite good at understanding
👍👍👍
holy shit that's quality
Where did you get this prompt styles selector and the styles? 😬
Check episode 7, I explained everything there; I created the styles file myself and use some custom nodes to load it
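For anyone curious how a styles file like that works in principle, here is a rough Python sketch; this is not the actual episode 7 file or custom node, and the column names and style templates below are made up for illustration. The idea is just a CSV of named templates with a placeholder where your prompt gets merged in.

```python
import csv
import io

# Hypothetical styles CSV: a name column plus a template with a {prompt} placeholder.
styles_csv = """name,template
cinematic,"{prompt}, cinematic lighting, film grain, dramatic composition"
watercolor,"{prompt}, watercolor painting, soft washes of color"
"""

def load_styles(text):
    """Read the CSV rows into a dict of style name -> template string."""
    return {row["name"]: row["template"] for row in csv.DictReader(io.StringIO(text))}

def apply_style(styles, name, prompt):
    """Merge the user's prompt into the chosen style template."""
    return styles[name].format(prompt=prompt)

styles = load_styles(styles_csv)
print(apply_style(styles, "cinematic", "a cartoon cat"))
# a cartoon cat, cinematic lighting, film grain, dramatic composition
```

The real node does essentially this: it prepends or appends the style text around whatever you type, then feeds the combined string to the text encoder.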
Thanks very much. I have a problem: when I generate a picture, it gets stuck after it loads completely, in KSampler: 0%__I 0/4 [00:00
Do you have enough VRAM? Maybe it is too big for your video card. Do other models work, like Schnell?
I installed ComfyUI with the standalone build for Windows, but I don't have the Manager at the bottom. Any idea why?
Check episode 1; the Manager is a custom node that you should have installed since it is quite useful. After that, it will appear
@@pixaroma Thank you, I will check all your episodes !
Question - what resolutions can be used other than 1024*1024? Will the picture be deformed in the same way if I use 1280*720/1920*1080?
You can use up to 2MP resolution. It works quite well with different ratios, compared with SDXL, which starts messing up the composition if you go too high. Just don't go too high, and try to have width and height divisible by 16 or 64; I saw it gets better results that way. For example, I use 1368x768 and then upscale it to 16:9 for YouTube. It can also do 1920x1080, but sometimes that looks a little blurry. Just keep in mind: the bigger the size, the more time it takes to generate, and you need more VRAM to handle really big sizes.
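As a quick illustration of the divisible-by-64 tip (my own helper sketch, not something from the video), you can snap any target size to the nearest multiples before generating:

```python
def snap_resolution(width, height, multiple=64):
    """Round each dimension to the nearest multiple (never below one multiple)."""
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

# 1920x1080 is 16:9, but 1080 is not divisible by 64, so it snaps to 1920x1088.
print(snap_resolution(1920, 1080))  # (1920, 1088)
print(snap_resolution(1280, 720))   # (1280, 704)
```

You can then generate at the snapped size and crop or upscale back to the exact aspect ratio you need.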
@@pixaroma What can you recommend to improve photos (upscale)? So that nothing changes or changes minimally? Otherwise, some upscales change the details in the picture.
When I am in a hurry, I just use Topaz Gigapixel AI with the sliders at a minimum :) In ComfyUI I use Ultimate SD Upscale with low denoise. I plan to do a video on upscaling, maybe next month, with different methods, so I will do more research then.
@@pixaroma Good! I've watched different videos, and what the authors consider normal looks weird to me. I even watched ones using upscalers like 4x_NMKD-Siax_200k.👍
I don't have Topaz Gigapixel built in, so I'll wait for this topic to be covered as well.🤔
@@Fayrus_Fuma Topaz Gigapixel AI is paid software specialized in upscaling; it is not in ComfyUI. But I will do my research on different methods and present my findings in a video
Thanks for the video! I'm not sure what I'm doing wrong. I tried the NF4 version and got an out-of-memory error on the first run; then I re-ran it and it's really slow, 82s/it (20 minutes per generation), while using the fp8 version with the UNETs takes 2 minutes per pic. I'm using a potato laptop with 6GB VRAM and 32GB RAM. Do you know why this is happening?
I don't know, maybe it's some setting or something wrong with the version; try updating ComfyUI again. I have the same problem on an RTX 2060 with 6GB of VRAM and 64GB of RAM: it took me 6 minutes on the second generation and 9 minutes on the first. I think with the UNET loader it loads all the clip models separately, and maybe that helps. Maybe wait a little until things get more stable, since it's all new and I see new versions coming out each day. I am still using SDXL Hyper on my old PC just because it's fast and gets OK results :)) And on my new PC I have only half a second of difference between dev NF4 and dev fp8, so NF4 didn't help much there.
I just tried Forge UI, updated a few seconds ago, and from 6 minutes with Schnell it now takes only 27 seconds, so it's fast. You can give it a try until ComfyUI is fixed, to see if it helps: th-cam.com/video/BFSDsMz_uE0/w-d-xo.html
The Flux model doesn't load: ERROR: Could not detect model type of: F:\ComfyUi\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux_dev.safetensors
Did you try one of the ready-made workflows from Discord for this episode? Maybe you used a different loader, or didn't put the model in the right folder. From the error, I see it's looking for it in checkpoints, but the flux_dev model should be in the unet folder
How come I don't have the Manager button in my side menu?
Did you install it? It's in episode 1
Yes I installed it following episode 1
OK, so at minute 7:18 you mention that on a 6GB GPU it takes 5 minutes to generate 1 image?
I am at that old PC now, an RTX 2060 with 6GB of VRAM and 64GB of system RAM. I updated ComfyUI and I am redoing the test. I will get back to you with the results
I had some problems with ComfyUI and had to reinstall in a different folder because it took forever to load the model.
Here are the results on the RTX 2060 with 6GB VRAM. The first generation takes longer because it loads the model, but after that it is faster.
flux1-dev-fp8
First generation time: 155 sec
After that only: 113 sec
flux1-schnell-fp8
First generation time: 63 sec
After that only: 26 sec
@@pixaroma Nice!! A lot better... is this using your tutorial for ComfyUI?
@@liquidmind yes
@@pixaroma No luck for me on ComfyUI; it takes 6 minutes compared to 1:30 minutes on Forge!! Same DEV fp8 model!!!! How did you manage to get so fast on Comfy with 6GB VRAM?
can you train your OWN images with this?
You need something like FluxGym to train on your own images locally, but I haven't tried it yet since it was a little bit tricky to install
@ But can't we just use Replicate to train? I want to apply someone else's model from Civitai to my own images… I trained my model on the dev Flux trainer, but the pics are too AI; I want them realistic
@@raz0rstr Sorry, I don't have much experience with training
Hi, how can we use Meta AI to generate prompts for Stable Diffusion? Compared to ChatGPT, Meta AI is easier for me, but I don't know how to ask it to create a prompt for Stable Diffusion. Can anyone please help me with that?
I don't have access to Meta AI, but in ChatGPT I can do something like: give me a prompt for stable diffusion for a cartoon cat digital painting
And I get a Prompt: "A charming and playful cartoon cat with large, expressive eyes and a mischievous smile, sitting in a cozy living room. The cat has soft, fluffy fur in shades of orange and white, with a curled tail and small, pointed ears. The living room features warm lighting, a comfortable armchair, and a few scattered toys. The art style is vibrant and colorful, with smooth, exaggerated lines and a whimsical, lighthearted atmosphere."
@@pixaroma But we need to add something like a negative prompt too, right bro? So in depth, we need to understand how to create images using that; it basically takes some know-how. Anyway, thanks for the reply
@@mastermanhwas Flux doesn't support negative prompts. And for SDXL, you just add the words you don't want to the negative prompt. Or you can use the art styles like I showed in the video
@@pixaroma Thanks. Actually, I am using Stable Diffusion, so that's why I am asking. Thanks
Does ANYONE know how to use ControlNet for FLUX in FORGE? I get a model error if I use CANNY or DEPTH from XLabs-AI/flux-controlnet-collections:
2024-08-17 21:38:29,461 - ControlNet - ERROR - Recognizing Control Model failed: C:\TEMP\FORGEFLUX\webui\models\ControlNet\flux-canny-controlnet_v2.safetensors
I saw someone using them in ComfyUI; I don't think they work yet with Forge
@@pixaroma Yes, but on Comfy I see tutorials where people actually need to wait 5 minutes for a Schnell generation on a 6GB VRAM card like I have, compared to FORGE, where I can do a Schnell generation in 20 seconds!!! So unless you have a ComfyUI tutorial that makes a 6GB VRAM card work as fast as it does in FORGE, I might have to wait....
Is this Comfy tutorial good for a 6GB VRAM GPU?
Hi, after the update I got a white screen...
Maybe try to update again. What does it say in the command window? What error?
When I use Flux, it uses up all of my RAM, up to 99%, and won't load, and I have no idea how to fix this. I have 32 gigs of RAM and an RTX 3050
It's about the VRAM, not system RAM; you don't have enough VRAM, and Flux needs a lot of it. Maybe try the GGUF models from ep 10; there you have Q4 versions that are smaller, Q5, Q8, etc., all kinds depending on the VRAM of your video card.
@@pixaroma Yeah, well, the strange thing is when I check the Task Manager, my RAM goes straight to 99%
@@pixaroma It is my system RAM, for an unknown reason. I thought it would have been my video RAM too, but it isn't...
Maybe it takes too much time to load from where the model is now. I have it on an SSD drive, so it loads faster than an HDD
@@pixaroma Good to know, thanks.
✨👌💪😎🤗😎👍✨
Best tutorial out here, brother. I'm sorry I can't send money as thanks; one day, brother.
Can it run on a GTX 1070 with 32GB RAM?
I saw a post on Reddit saying it was running on one, but a 512px image took 6 minutes for dev NF4, so it's better to just use SD 1.5 models, or maybe SDXL Hyper, until you get a better video card
@@pixaroma thanks bro
I'm crashing here; is it because I have 16GB of RAM? I'm using an RTX 3060 12GB.
Try the GGUF models, they are smaller; check episode 10. With something like Q8 it should not crash. Someone used it on 12GB, and I used Schnell on 6GB, so it should work
Didn't work. I get "module 'torch' has no attribute 'float8_e4m3fn'" when I try to use this checkpoint
Did you download from the link and use the workflows I shared? Everyone who tested them said they worked. Try different models and see if you get it to work; maybe you are using a workflow for a different model or something, not sure
From what I saw online, you could try to fix it by updating PyTorch
Also try the update ComfyUI and dependencies .bat: github.com/comfyanonymous/ComfyUI/issues/4297
@@pixaroma It seems like this model wasn't very well built (it's kind of "beta"). I updated transformers, uninstalled torch and installed it again, and updated Comfy. Nothing helps; Flux doesn't work, while SDXL works perfectly.
Sorry it didn't work. Besides trying different models, I'm not sure what to say; I tried on 2 different computers, and on Discord everyone seems to get it to work. Maybe try installing in a different folder; it might be something it doesn't like in the current installation. Or try the other video with Forge UI. I mean, Flux is really worth a try since it can do such nice graphics
Can't wait to go professional with Flux - with your tutorials by my side.
Same; I just discovered all this Stable Diffusion/Civitai/Comfy/Black Forest Labs mess yesterday, and I am still blown away.
Can you make a video on how to train a LoRA for free?
It's a more complex subject; I'll cover it if I can find an easy install. Right now it can be problematic with some dependencies, and many recommend using a fresh ComfyUI installation just for that, to not mess something up. Search online for FluxGym
NF4 was compared incorrectly; at 30 steps there is no difference.
Thanks for the info. I tested at 20 steps since that was the recommended number of steps for dev, and NF4 was supposed to be the fastest, so I didn't think to add more steps and make it less quick. It is good to know it can do the same.
Hahaha, "I am not a witch".
A million tutorials on how to use FLUX in ComfyUI, very few on how to set everything up to work in a normal UI for ordinary people, and none on how to use ControlNet with it without ComfyUI involved.
Forge UI is still working on all the extensions and needs more time to make everything work; A1111 is also behind with updates, while ComfyUI has worked with Flux from day one. More Flux ControlNets are starting to appear, but I am waiting for a unified model like we have for SDXL, so I don't have to download a lot of models. Some from XLabs seem to work in ComfyUI. That's why you don't find many tutorials: people haven't made it work yet, and when it does work, the next update fixes something and breaks something else, so people are waiting for things to stabilize a little. So far, ComfyUI is quite stable compared to the others.
For a 4090: go to NVIDIA Control Panel > Program Settings > add the Stable Diffusion .bat that runs the GPU version.
Then set "CUDA - System Fallback Policy" for the application to "Prefer No System Fallback".
It should then keep the entire dev.1 model in VRAM, to keep the speed up.
Thanks ☺️
There is a new Flux model which is even faster
What is the name of the model? I have delayed this video a few times because each day I found a new update 😂
@@pixaroma Lol, I just checked the vid; you have it already 😁
Git, pip, lots of somethings that I do not know.
If you still have problems installing, tag me in a message on Discord ☺️