Click the link below to stay updated on the latest AI tutorials 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
Thank you!
You saved so much of my time!
@@khalilurrehmanzaryani7476 I'm so happy to hear that; that was one of my goals ✨
Hi! Thank you for the video. Sorry, I'm a beginner. In your workflow, how do I get a negative prompt? Can I add one?
@@freezeburger You're welcome, my friend. For the Flux model, we don't have a good workflow for negative prompts yet.
I saw you selected the NVIDIA version. Does this process work for AMD as well?
That was literally my question! Thanks brother!
@@astralmooncat_jyro_Tenshi I'm so happy to hear that 🌱😍 Stay tuned for the next videos!
Thank you! Ignore the stupid comments from some stupid ones. Keep on rocking! 💪🏻🔥
Thank you so much, my friend! Your comment was really uplifting. I'll keep going strong!❤✨
Thanks a lot for the great content. You explain clearly and don't miss anything. 😊
Thank you bro for your uplifting comment🔥🔥
Very useful, thanks! I installed ComfyUI a week ago and am now creating animated videos for my YouTube channel with Flux-generated pictures!
@@powerfalcon2329 you're welcome bro 😉
Thanks for your hard work. I followed every step in this video, but it is not working; it's showing some error about CUDA. How do I fix this issue?
@@muhammadidrees9481 you're welcome bro, please write the full error here
Can you make a video on how to do the same on RunPod? I like the way you explain things, as opposed to other YouTubers.
@@Fnlig Thank you, mate. OK, I'll try to make that video.
@@Jockerai Thanks! I'm running into a ton of issues trying to do the same. Is there a way to get in contact with you for paid consulting?
hello, before installing I wanted to know if it works for this configuration:
Processor: Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz
Installed RAM 16.0 GB
System Type 64-bit operating system, x64-based processor
Does it work on Mac?
Hi buddy, thanks for the tutorial. I get a "Reconnecting" error while using the original workflow. I had this issue before due to a lack of virtual memory and fixed that, but do you know how to solve it this time?
I also tried the alternative workflow and got errors.
You're welcome, buddy. Use the GGUF versions of Flux, which I explained in the "Flux low VRAM" video.
Thanks. I have a 7900 XTX GPU with 24 GB of VRAM; I think the issue is that the AMD GPU isn't compatible.
Hello and thanks for this great tutorial. I have a custom LoRA and I put it in the folder you mentioned, under ..\ComfyUI\models\xlabs\loras. Then I tried the ALTERNATIVE workflow you mentioned at the end: the first one wasn't able to see my LoRA file, and the other one gives me a "FluxLoraLoader
list index out of range" message. Can you advise, please?
I don't have an xlabs folder; how did you get it?
Most of the referenced URLs are blocked by my ISP as virus sites. Are there alternatives for acquiring what is needed so I can try this workflow? Please keep in mind that I am totally new to ComfyUI and am trying to figure out which files go in which models folder(s). You did mention where to put a few files, but there are too few tutorial videos that break down these files and the models folders they go into! Perhaps a beginner tutorial video that breaks down ComfyUI functions, models, and which files go in which models folders would be great for those of us who have had some success with the default workflow but are having trouble getting most other workflows to work. My workstation has 32 GB of system RAM and 16 GB of VRAM on an NVIDIA Quadro RTX 5000 graphics card, running Windows 10 for Workgroups.
I'm stuck at CLIPTextEncodeFlux :( Please help
Hi, I get this error: "FluxLoraLoader list index out of range"?
Please watch the next video, titled "Install Flux Multi-LoRA", and change the node used for loading the Flux LoRA.
👍🏻 thanks
I've already trained my own LoRA on Replicate. If I use ComfyUI, do I have to pay for every image I generate? I'm looking for a free version. I'm using an RTX 2060 with 6 GB of VRAM and a Ryzen 5 3600 CPU with 12 GB of system RAM. Which version should I use, and how can I avoid any problems after installing it?
You can use your own LoRA on your PC, and it is free. But if your system is not powerful enough, it takes a long time to generate images with a LoRA. You can use the Replicate website to generate your images for about $0.04 per image, I think.
@@Jockerai My concern is that I've tried using Flux on Replicate, but I've had limited success in creating a specific image using my own LoRA (e.g., a man in a Superman costume shooting laser beams from his eyes). Despite attempting 30 different prompts, I couldn't achieve the desired result with accuracy. This experience has made me hesitant to pay for the service, and I'm now exploring free alternatives to achieve my goal.🥲
I followed the steps from your video, but my ComfyUI is creating empty black images. What should I do? Thanks in advance!
@@xinoActitud which workflow did you use?
@@Jockerai lora_workflow.json
Almost done. Same problem here: black images, and generation is very slow. I used the alternative workflow, but in your guide the lora_name Flux-train_replicate was never mentioned, so I added it from Hugging Face; now the workflow doesn't start at all. There are no errors either :(
After clearing the previous queue from workflow 1 and duplicating the LoRAs from the xlabs folder to the loras folder, the alternative workflow started, but the images are black again :(
My ComfyUI doesn't have a Load Diffusion Model node, only a Load Checkpoint. Why is that?
@@magnetic7832 have you installed all missing nodes?
I have the same question. Also, the list in the custom nodes folder is empty.
Hi. I did everything you showed. I think it's working. I have a MacBook M2 Max. But when I ran it (the part at the very end of the video), an error popped up about the FluxLoraLoader node. I asked ChatGPT and it gave me this answer: the error in ComfyUI is related to loading the DoubleStreamBlockLoraProcessor model, where certain keys are missing from the state dictionary (state_dict). The missing keys include parameters such as 'qkv_lora1.down.weight', 'qkv_lora1.up.weight', and others. This indicates that the model being loaded is either incompatible with the current architecture, corrupted, or misconfigured. Can you tell me the possible reasons why?
This bug is related to the matrix multiplication (matmul) operation in Metal Performance Shaders (MPS) on macOS, which detects a data type mismatch: FP16 (half-precision floating point) versus BF16 (bfloat16). Metal does not support mixed operations with these data types. So does it turn out that Flux with a custom LoRA will not work on my macOS M2 Max? Is it possible to replace this LoRA with an analogous one?
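For reference, the clash described above can be sketched in plain PyTorch (a minimal sketch, assuming a local torch install; the tensor names and shapes are made up for illustration, not taken from the XLabs loader itself): the usual workaround is casting both operands to one common dtype before the matmul.

```python
import torch

# Stand-ins for the two tensors with clashing dtypes from the error above.
a = torch.randn(4, 4, dtype=torch.float16)   # FP16 (half precision)
b = torch.randn(4, 4, dtype=torch.bfloat16)  # BF16

# torch.matmul(a, b) fails here: matmul will not mix FP16 with BF16.
# Workaround: cast both operands to one common dtype first
# (float32 is the safest choice across the CPU, CUDA, and MPS backends).
common = torch.float32
out = torch.matmul(a.to(common), b.to(common))
print(out.dtype)  # torch.float32
```

In a node pack this cast would have to happen inside the LoRA processor itself, which is why swapping to a different loader node (as suggested below) is usually the easier fix.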
Hi! To resolve the issue with loading the LoRA, try removing the FluxLoraLoader and replacing it with rgthree's Power Lora Loader. But before that, go to Manager, then in the Install Custom Nodes section, install rgthree-comfy. After that, restart ComfyUI. This should likely fix your issue.
I'm facing an issue creating photos with a custom LoRA in ComfyUI. I'm a newbie. Here's what I did: I trained a LoRA model on Replicate's cloud Flux trainer (AI Toolkit) and saved it to Hugging Face. Then I downloaded ComfyUI and the Flux dev models, and saved my LoRA in comfy/models/lora. When I use my trigger word, I get a different person than my model.
Your video says to save the LoRA in comfy/model/xlab, but that gives an error too.
Can you please tell me exactly what to do?
You have to change your LoRA loader node. Watch my video titled "Multi-LoRA in ComfyUI" and your issue will be solved.
Can I run it without a GPU on my Ryzen 3 3200G with 32 GB of RAM?
@@fahimabdulaziz4255 I don't think so, but there are versions of Flux that can run on low-end GPUs. Stay tuned; I'm going to make a video on them, the NF4 and GGUF Flux model versions.
I tried to generate an image in Flux with the prompt "naked", and the model is censored. How do you get it uncensored?
@@thebuniverse6739 What do you get when you use such words? I don't think it is censored when you are using Flux locally. But online Flux websites, yes, they don't allow NSFW content to be created.
👍👍👍👍
😉✨
flux text link?
I fixed the links. please check again
Seems like the links to your workflows don't work anymore.
I checked it just now and it works. When you open the link, just click the download button at the top right of the screen and you're done.
@@Jockerai Now they work , thanks.
Dude, installing this is honestly really hard :|
Don't think about how hard it is; just do the steps one by one. The hard part is only the first day. It's better than paying websites every month, when they offer even fewer features.
These videos are so cheesy since they're done with AI.
Unfortunately, I am encountering a problem where it crashes right around this step:

"Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
C:\Users\thero\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Unloading models for lowram load.
0 models unloaded."

I would appreciate it if you could help me.