My PC runs a 2080 Super with 8GB VRAM and runs with no issue, it just takes 2.5 minutes per image. I heard a 3090 or 4090 renders the same image in 14-15 seconds
awesome, i pinned your comment
@@SamilTerzic ComfyUI just seems to crash immediately at the Loading Checkpoint step when I try to generate. It just says "got prompt" in the command window, then the browser interface pops an error saying "reconnecting"
Even with the lightest Schnell model.
I guess a GTX 1080 with 8GB VRAM and 24GB RAM is not enough, I'm told
i am getting 3 min and 20 secs approx on a 2080 Ti with 11 GB. :(
About to find out on a 4090, will let ya know.
On my 4090 it takes 4 to 6 seconds.
I've been using SD1.5 for a long time, but this one really caught my attention. I was looking for setup guidance just now but somehow couldn't find a proper guide, and you're quick! Thanks a lot
Make sure that your PC is up for the task ;-)
Finally, the SD3 we deserve!
Yeah, the one most of us can't run properly, let alone train. I really hope this gets some quality pruning, because otherwise it WILL be a flub and be forgotten within a month
@@df1ned hahahaha you are a joke
Yes and no. Still lacks a commercial license on the base model.
@@df1ned I mean, as tech goes, no, it doesn't get easier to run.
Everything requires more power, not less. You guys are starting to sound as bad as gamers 😂
@MaddJakd you do realise that it will literally have to, right? There is a lot of value in being easy to run. Also, the main hindrance in the further evolution of generative AI right now is how expensive they are becoming to train and run. Look at LLMs. Since ChatGPT, the main vector of evolution was "make this smaller and more efficient" with similar capabilities to the full size model. And what on earth do you mean "as bad as gamers"?
Brief and straightforward introduction with everything linked in the description, perfect for a quick start into Flux. Great video, as always! Thank you Olivio! ❤
I never thought my pic at 01:54 would be in the video, considering the sheer amount of pics shared by the community. Thanks Olivio 🥰
A surprise, to be sure, but a welcome one ;-)
wow, really? this is amazing! congratulations!
Pog
I've been using A1111/Comfy for a few weeks to design Pinterest pins. Usually I reject 90%; from my first couple of batches with Flux I want to keep 90%! Very impressed so far, and when we get ControlNets, LoRAs etc. it's going to be revolutionary.
Those are some stunning results. Especially text and eyes... wow!
It has such fine control over the aesthetic and style through prompting alone that you won't need to use artist names or even loras in most cases.
I've been using the online demo since yesterday and it just keeps blowing my mind with how strong the visuals and prompt following are.
Also way fewer LoRAs needed, as it understands what I want without me having to brute-force it with small additional datasets. Mind-blowing
@@FrankEBailey Yes, I think you are right. I have been trying out SD, Fooocus and other popular AI tools on MimicPC for a while now. My peers and I thought they could achieve the same result, but maybe I am not skilled enough, because the generated images haven't exceeded my expectations. It is a free online tool, you can try it out!
I do the same things with the 3060 12GB, but it takes 4 min and 6 sec per image. I'm watching your video from Bangladesh. Your work is awesome, man. Take love
It's so exciting that it rivals Midjourney !!
It’s annoying how good Midjourney is
@@angryox3102 I like Midjourney but hate that they force you to use Discord
On the Black Forest Labs website they say they are working on a video generation AI model. Looking forward to learning more, and I hope it will be open source with an interesting license
I was asking Microsoft's Copilot about flux.1, and it pointed to you. Thanks for the video!
With 12GB VRAM it goes into low VRAM mode and even 4 steps take 1:20 or more. Worth it though! You can't get this level of prompt adherence with SD imo.
Wow, this is absolutely stunning! Definitely seems like the local model to go with for general artistic, digital, rendered images with very passable real people. The established 1.5 ecosystem is still preferable for specific use cases, NSFW material and so on. But this is an excellent option!
Amazing Model!
With an RTX4080 it takes about 35s. to generate one 1024x1024 image.
works fine on a 4070 Ti, thx for the explanation...
Hope the fine tuning video will hit soon. This model seems quite a bit different to SD, so I suspect fine tuning will be different.
Yes, the lack of fine-tuning is the only minor drawback, but when I run Flux on MimicPC it's loaded directly, and other people's workflows are really polished. I only need to change them a little bit, and I get an image that's more beautiful than if I'd worked on it myself for 2 hours!
Cherry-picked or not, those shots look great. And yeah, this is what SD3 "should" have looked like. :)
I tried it last night. My video card (20GB!) barely handles it, but the results are sooooo goood!
Running really well on my 3090 24GB. Very impressed with the quality.
First of all, thanks Olivio for your video :)) Schnell worked for me with an RTX 3060 Ti 8GB and 32GB of system memory in low VRAM mode. The simplest prompts take 65 to 90 seconds, while more complex prompts can take up to 170 seconds. The best combination for me is weight type fp8_e5m2 with fp16 clip and 4 steps (1024x1024 images). Sometimes images tend to be a bit blurry, but it depends on the prompt.
The VAE is confusing: I have the ae.sft file as my VAE, but at 08:22 you also have another VAE, diffusion_pytorch_model. I have not downloaded this one - having two differing VAE files is confusing?
Well, it uses both. One is a text encoder.
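For reference, a typical ComfyUI file layout for Flux looks roughly like this (exact filenames depend on which variants you downloaded; the diffusion_pytorch_model file on the model page appears to be the same VAE packaged in diffusers format, so a single VAE file such as ae.sft should be enough):

    ComfyUI/models/unet/flux1-dev-fp8.safetensors      # the Flux diffusion model itself
    ComfyUI/models/vae/ae.sft                          # the Flux VAE (autoencoder)
    ComfyUI/models/clip/clip_l.safetensors             # CLIP-L text encoder
    ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors   # T5-XXL text encoder (an fp16 variant also exists)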
When I hit the "Queue Prompt" button, I get: "Error occurred when executing UNETLoader: module 'torch' has no attribute 'float8_e4m3fn'". I have all of the mentioned files in the right places. Updated my ComfyUI (which I hadn't used in a while). What am I missing here?
same here, anyone?
I decided to re-download ComfyUI from Github and reinstall it, just overwriting all of the files that were already there. Then I made sure that all of the Flux files were still there according to Olivio's instructions (they were). After doing that, the Queue Prompt button gives this error:
"Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: type: 'flux' not in ['sdxl', 'sd3']"
.....the weird thing about this is that the DualClipLoader type IS "flux" when I load the workflow. But if I click on that field to pull down the choices, I get only SD3 and SDXL, no Flux. And so then you can't choose "flux" again. It's like the text "flux" appears as a value for "type" but it's not actually there as a valid choice. Hmmmmm......
@@JustAFocus I had the same issue. I didn't do a clean reinstall but ran the bat file update/update_comfyui.bat in a CMD window, then started ComfyUI and, when the program came up, ran ComfyUI-Manager and chose Update All.
That seems to have fixed it.
The problem with ComfyUI is that it doesn't seem to clean out or update everything.
Sometimes you might also need to run update_comfyui_and_python_dependencies.bat after a reinstall if you get Python version errors.
@@JustAFocus try running update_comfyui_and_python_dependencies.bat in the updates folder, I’m not sure why a fresh installation doesn’t work unless the downloaded installer is somehow out of date and isn’t the most recent version.
Same here, any news on this? Got a brand-new virtual machine installation, everything updated, and I'm also getting the "Error occurred when executing UNETLoader: module 'torch' has no attribute 'float8_e4m3fn'" error. Maybe the new version broke something?
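For what it's worth, that AttributeError usually means the PyTorch inside ComfyUI's embedded Python predates the fp8 dtypes (added around torch 2.1). A quick diagnostic sketch you can run in that same Python environment (just a check, not an official fix):

    import torch

    print(torch.__version__)
    print(hasattr(torch, "float8_e4m3fn"))  # False -> this torch has no fp8 support

    # If it prints False, update torch in the same environment, e.g. by running
    # update_comfyui_and_python_dependencies.bat, or manually:
    #   pip install --upgrade torch torchvision torchaudio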
With about 30 million dollars from investors they did a great job. What impressed me most is that this team of engineers is the same one that worked at Stability AI 😁😁
I've been trying out this model since this morning. I am speechless. Unprecedented quality for the open-source range. Impressive. On my workstation I tested the dev version. OH MY GOD! I have no words. I apologise if my English is not the best! 😂😂
I wonder what the specs are for finetune training, if mixing is possible, and if it can make and use LoRA-like models etc. Stuff like that is what really determines whether these models have longevity. The massive VRAM requirement is certainly a blocker for most of the community. Unfortunately, consumer-grade GPUs haven't caught up to that VRAM amount in their average models. So it's mostly going to remain a small section of the community that uses it, until better GPUs become affordable or at least smaller models are made available
The new king --that gobbles up VRAM and HDD space like crazy 😞
Any numbers please!
The quality's gotta come from somewhere!
Can run on 12 GB VRAM.
Don't spread nonsense. this thing can be run free.
And it is very early days; SD took hard work to get running on consumer hardware when it was first released.
@@nix9409 I like my AI run locally
So excited to try out the Flux model!
80% perfect hands! Amazing.
Really Happy for the Ai community getting the new model they deserve
Looks like a lovely model.
I am unable to copy and paste the files to the required folders like unet.. how do I fix it?
A Rev Animated style using this as a base is all I need (for now...)
edit: It looks like it can create similar style, but still not close to what I loved RevAnimated for.
Render layers ?
Comfyui workflows ?
flux1-schnell-fp8
It also has a 22 GB version. Which one is better and what is the difference between them?
Dev version fp8 (12 gigs version) is what I use (20 steps), it's way better than the schnell version.
@@kkryptokayden4653 aura_flow_0.2
What is this? Did you test it? How is the model?
16 gigs
If this competes with Midjourney, does that mean Midjourney was always slightly better just because they used 24GB models instead of 6GB ones?
Honestly, this is mind-blowingly, stupidly good. Hope to see some inpainting, LoRA and upscaling workflows soon. This is what SD3 should have been.
Great video! And it was a fun little surprise event on the discord channel too!
Hi and thanks for the video! I noticed the model you downloaded into the Unet folder is different from the model they suggest (flux1-dev.sft)
The model used in the video is quantized: the numbers that make up the model are converted to use smaller number formats (they use only 8 bits instead of the larger original size) to save VRAM at the cost of a very negligible (less than 1% usually) drop in quality. It can increase performance sometimes too.
@@donaldhawkins6610 Thank you for the reply, much appreciated!
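To make that concrete, here is a toy PyTorch sketch of what an fp8 cast does to weight storage (an illustration only, not how ComfyUI quantizes internally; requires a torch build with fp8 dtypes):

    import torch

    w = torch.randn(4096, 4096, dtype=torch.float16)  # stand-in for fp16 model weights
    w_fp8 = w.to(torch.float8_e4m3fn)                 # 8 bits per value instead of 16

    print(w.element_size() * w.nelement() / 1e6, "MB")          # ~33.6 MB
    print(w_fp8.element_size() * w_fp8.nelement() / 1e6, "MB")  # ~16.8 MB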
Can someone tell me why mine isn't working correctly?
I'm using a 4090.
I have everything downloaded and installed where it says.
I have all the same settings inside of Comfy.
It's updated, etc.
When I hit Queue Prompt:
1. It takes forever to do each step, but they all hit green.
2. When it gets down to VAE Decode, a popup says reconnecting, and inside the cmd window it says to press any button to continue, and when I do it exits out.
Every time
Similar problem here with an RTX 3090 and 32 GB RAM. Everything goes the regular way, but the sampler crashes every time and is cancelled with a Reconnect. I use ComfyUI via Stability Matrix.
for some reason, I don't have that "save, load, refresh, clipspace, clear, load default, reset view" menu...
ForgeUI with a Flux1.S model from Civitai that has the T5 fp8 and CLIP models merged into one with Flux 1 Schnell. I was able to run it even on a very old 4GB VRAM + 16GB RAM laptop to generate a 1280x720 image, which I would not do again because it took 15 minutes for 4 steps. But it works.
I hope there will be some hacks and optimisations to make it run on my 3600 6GB laptop some day 😢. Really waiting for that time to try it out
5:29 The analog clock's Roman numerals are glitchy/off, which is the same problem as with SD.
Does Flux have an Inpaint function?
Hopefully we get an optimized version that can run on most GPUs. I remember when SD came out it also required insane amounts of VRAM
True. It is pretty slow on a 4080. Up to 7 min (sometimes less than 3) on the dev model with 20 iterations.
@@mkDaniel it takes around 30-50 secs for me on 4080
@@sandeepm809 how?
@@mkDaniel I am using the lowVram option in ComfyUI. Other than that everything is the same. It takes a long time to initially load the checkpoint, but once it's loaded every other render takes less than a minute
@@mkDaniel Someone said they are running it on a 4GB GPU; it takes about 3 min to generate. I am running an RTX 4060 with a Xeon CPU and 16GB RAM
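Since the low-VRAM option keeps coming up in this thread: ComfyUI exposes it as launch flags, so assuming you start ComfyUI from its folder with the bundled Python, the relevant invocations are:

    python main.py --lowvram   # offload model parts to system RAM when VRAM is tight
    python main.py --novram    # even more aggressive offloading, much slower
    python main.py --cpu       # last resort: run everything on the CPU

ComfyUI normally picks a low-VRAM mode automatically on small cards, so the flags mainly matter when the auto-detection gets it wrong.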
Using A1111 right now with a 10th-gen i5, a 1650 Ti and 16GB RAM... well, with some help from Reddit I can generate, but yeah... upscaling always takes time... so how does ComfyUI work on a 1650 Ti with 4GB VRAM?
I wish we had a chat UI to create, configure and use nodes rather than ComfyUI Manager (wouldn't it be cool if we could just ask a bot to generate the custom nodes we need?)
Anyone know of a chatbot fine-tuned for ComfyUI?
I would say right now it is as close as you get to Ideogram-level performance, and with finetuning this will become one of the best models out there. Also, has anyone tried running the full version via the API call? That might be even better, even though it's 5 cents per image. My suggestion is to use prompts similar to Ideogram's, which are much more descriptive.
I found a Flux dev fp8 model; it's 11GB, half the size of the sh version
I'm trying to use the XLabs Sampler node, but for some reason it takes 30 min to generate an image (!!!!) on a 4070 Ti 12GB. How come the sampler is so slow?
I'm running the Flux Dev on CPU as I have a Radeon 6900XT. Each render takes 42 minutes at the file resolution I use.
My RTX 3060 (12GB) will even run the 23GB FP16 FLUX model. It loads from NVMe & spits out an image within 35 seconds for non-dev (images made afterwards are faster, as the model is already loaded).
I just installed Flux via the ComfyUI Manager
I wonder if 2060 12gb can run FLUX
I'd love to know the prompt for the 2:05 sample image :)
Why does it keep saying (TypeError: Failed to fetch)? :( I already checked all the files. My laptop has an RTX 3070 with 8GB VRAM. Can anyone help me?
So with a 4080 Super, at pretty much default values of the workflow that is provided, I was sitting here for an hour maybe two, and then I gave up after it was on 5% for another 10 minutes.
That seems off. I have a 4080 too and it renders in about a minute
I never feel like a new model has truly "arrived" until I get to see the Olivio take on it. You're the king of "cozy" AI coverage :D
It would have been so very nice if we also saw the prompts used for some of the images. (wish list)
Great -- Thanks, Olivio! (5-fingered hands!)
Is it still heavily censored though like base SD3?
No
Schnell is probably best with the default 4 steps,
but you can get reduced quality images with even 1 step.
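If you want to sanity-check those step counts outside ComfyUI, here is a rough diffusers sketch (assumes a diffusers version with Flux support, roughly 0.30+, and enough system RAM for CPU offload):

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for much lower VRAM use

    image = pipe(
        "a cozy cabin in the black forest at golden hour",
        num_inference_steps=4,    # schnell is distilled for ~4 steps; try 1 to compare
        guidance_scale=0.0,       # schnell is trained without classifier-free guidance
        max_sequence_length=256,  # schnell's T5 prompt limit
    ).images[0]
    image.save("schnell.png")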
Do you know how to install it in Stability Matrix? The Models folder, which is shared with other models in Stability Matrix, does not have a Unet folder like regular ComfyUI.
Has anyone installed it with 8GB VRAM here? Is that possible now?
I'm using a 3060 6GB laptop. I set width and height to 512 and it takes 1:25 to show an image for me.
@@caduveloso1 nice info man thanks!
Thanks for the great video. Is ComfyUI the only way to run Flux locally?
It looks amazing, but why does it have to load the model back every time we prompt? Can't it keep it loaded?
My ComfyUI can't find the nodes for the update. :/
4:00 I might get a sleepless night if you consider that as an Autobot. My man😂
Anyone else getting a "Node not found: CheckpointLoaderSimple.ckpt_name" error? I tried a different workflow setup and that works, but following the basic setup mentioned in this video gives me that error. Everything is up to date, as far as I can tell
Is there a collection of great workflows somewhere? ...like upscalers, negative prompts, img2img, ...
I am a little confused: the model links on the workflow page are not the same as the ones you have in the video. Which ones should I download?
FIXED.. NEEDED A SECOND RESTART/UPDATE OF COMFYUI MANAGER
Error: Missing "FluxGuidance Node"
I downloaded everything, updated Comfy and restarted the computer..
Dropped the workflow into ComfyUI, but this one node is showing as red and missing
did you try "install missing custom nodes " in the "manager " menu?
@@lennoyl That might have been part of it. After restarting a couple of times it loaded the missing node.
😢 Still can't run it with my 8GB vram
I wish the downloads were available as torrents. Downloading 11GB on a slow internet connection is almost impossible, especially when the connection keeps dropping.
Do you need LoRAs made especially for Flux? Or will the XL LoRAs work?
No, SD LoRAs won't work. Flux is a completely different model from SD. There are no LoRAs available for Flux yet; just give it some time and they will come.
Mine for some reason gives up on the VAE decode. Don't know why. I have 16GB of VRAM. I'll eventually figure it out. Thanks for the video!
Lost me with downloading a ton of folders/files, workflows, etc, etc. Jeez!
Do you need to change the CLIP/VAE models for different base models?
My project idea for a performance would require an output of 2-3 fps, ideally higher. Images don't have to be crazy detailed or 4K or whatever. Is this model "real-time" capable, given a couple of 4090s? A webcam feed of people dancing, with an AI projection onto the back wall, is the idea.
I was waiting for your video on Flux. I knew it was going to drop soon!!🎉
I've read the creators of this model, Black Forest Labs, were the original people who helped create Midjourney. So that would help explain why we get such great artistic results.
@@KDawg5000 I think they are the ones that worked on SD3, not on MJ...
That team split from Stability AI. I am happy to see that Midjourney is starting to freak out about this; they're starting to ban a lot of people without any reason and adding many restrictions, like Trump or Biden prompts. I just tried a few prompts and got banned for 5 days...
@@tenofaz Yeah, I realized that after I replied, but was too lazy to go back & edit, lol
my 3060 12GB VRAM takes about 80 sec for one image with the dev fp8 version. Loading is okay but needs a lot of RAM (not VRAM), ca. 40GB.
Also, the text encoder models are the same as for SD3, so if you already have them downloaded there's no need to download them again
RTX 4070 Ti 11GB: executed the same query with the same setup in 211.04 seconds
Whatever happened to just installing a program and it being ready?
I tried installing it for a whole day. I'm pretty sure I eventually got it, but I wanted it to be optimized. The way it was running for me was so junk... I just deleted everything again for the tenth time and stopped trying.
@@evajanekelly7067 I was able to get it working, but I had to find a video dedicated to ComfyUI first
The file list on the workflow page is different from your video. You do not use any .sft file. Confusing, not working~
Am I mistaken, or are those the same CLIP files used by SD3?
Which sampler is the best for photography for schnell?
I can run any model in Comfy, but when I run this one I get "Python has stopped working"
Olivio, please tell me which site you took the generation examples from?
the images are from my discord community
@@OlivioSarikas Thanks
I am facing a reconnect issue or a TypeError again and again when I render in ComfyUI. Why is that? I have an RTX 3060 and 32GB RAM
This is awesome, thank you
Thanks brooo, this model looks fantastic. Do you know of any new models for img to video with beginning and end frame generation, or prompt inject?
Where can I get the checkpoint for ComfyUI to use this Flux model?
Do you think this can be installed in FOOOCUS?
I think it’s possible, someone just needs to add it
Yeah i also wanna know
No, not yet. But it may in the near future.
Check out the Pinokio Forge 1-click installer with the NF4 model
Does this censor?
@@SkyCuration yes to an extent
Olivio, what is the custom node that adds question mark buttons at the corner of every node?
I assume this won't work with old LoRAs trained on 1.5? I haven't downloaded any new models in a long time :/
So you download the ComfyUI workflow for Flux (simple), but put it where?
You can either load the workflow or easier yet just drag and drop it into your browser window (obviously on a running ComfyUI window). If you are asking how to get/run ComfyUI, it couldn't be easier. It is just one google search away, and it is fully contained with a single download.
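And if "one google search away" feels too vague, the manual route is short (a sketch of the standard install from the GitHub repo; the Windows portable zip skips all of this):

    git clone https://github.com/comfyanonymous/ComfyUI
    cd ComfyUI
    pip install -r requirements.txt
    python main.py   # then open the printed local URL in your browser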
I am doing some tests. I have an RTX 2080 Ti 11 GB and am getting images in about 3.5 minutes. There is no difference in time for me if I change the model. Am I doing something wrong? How can I speed things up a little bit?
Can you give a tutorial on how to run it on lower-RAM configurations? People on Reddit have figured it out, but I do not know how to follow the steps