Hey everyone! I've noticed that many of you have been struggling with the PuLID installation in ComfyUI - it's definitely one of the more frustrating components to get working. I wanted to let you know that I'm already working on the next version of the workflow that won't require PuLID at all, which should make the whole setup process much simpler.
Thank you!!
Have you made this video 🎉🎉🎉
Can you do the same for the new image to 3D "Trellis" that works on ComfyUI?
Thank you!
Thank you. I just can't make it work because I can't install insightface - the Python wheel to install is for Windows, I guess. For the next version, could you please make it more universal for those who run ComfyUI on Linux/Mac? Great work btw, I had no issue with the previous workflow without an input image.
Edit - here is the error I get: ERROR: insightface-0.7.3-cp311-cp311-win_amd64.whl is not a supported wheel on this platform.
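(General note, not from the video: that -win_amd64 .whl is a Windows-only build, so pip will always reject it on Linux/macOS. Assuming you have a working C/C++ compiler and the Python dev headers installed, letting pip build insightface from source usually works instead, something like:
python -m pip install insightface==0.7.3
Treat this as a sketch, not the official install route.)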
Insane guide and effort.. Kudos.
This is insane! Thank you for being so generous with your time and effort.
I love how detailed your explanation of the process is and how great your documentation is. Thank you for the time and effort!
By the way, everyone reading this comment: if you get a PulidFluxInsightFaceLoader error it's because of the antelopev2 files. The files should be in ..../models/insightface/models/antelopev2/ NOT like this --> ..../models/insightface/models/antelopev2/antelopev2/
The unzip creates an additional folder which may cause the error.
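(For example, assuming the default Windows portable layout - adjust the base path to your own install - flattening the folder from a command prompt would look something like:
move "ComfyUI\models\insightface\models\antelopev2\antelopev2\*" "ComfyUI\models\insightface\models\antelopev2"
rmdir "ComfyUI\models\insightface\models\antelopev2\antelopev2"
Just a sketch of the move the comment above describes.)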
you're the best, thank you!
@@coreyhughes379 You're the best, man. I wasted so much time trying to fix this, but I finally managed to do it. If I had read your comment first it would have saved me a lot of time.
22:29 yes, just extract the files on /models/insightface/models/antelopev2/
If I set this path, the wrong path is automatically set in the next step, and it throws the error again. Does anyone know how to fix this?
For this issue, just download the model via the ComfyUI Manager.
one of the best tutorials I have visited over past 2 weeks 🙂. Thanks!
Somehow, after 4 days of bashing my head trying to get this to work, finally today I did! I had almost every single error that I see others complaining about here, and I wish I could give you all solutions, but I have no idea which of the 30 things I did fixed it. My advice is to keep pursuing the issues you encounter, search every error and nail them down one by one. There are solutions out there for everything, it turns out. It DOES work!
Hi, could you please tell me how to get past the *install all missing nodes* part? I've installed all of the missing nodes but there are still 3 node problems (= 3 red boxes in the ComfyUI workflow). How do you tackle each of them?
@@joonheehan2117 What I found is that there are missing dependencies ComfyUI can't auto-detect. Look at each red box and the name in the box. Most are found in Manager > Model Manager. Type part of the name and I think you'll find what isn't installed, then install them one by one. For example, for the t5\google_t5-v1_1-xxl_enc... type something like "t5-v1" and you can find it and install it. A couple were also in the Custom Nodes Manager - same approach to resolve those.
Thank you so much, I tried workflow bashing a whole day to get this functionality (without great success). And then this gem just falls right in my feed. Love it. Keep up the good work.
Same here!
I have to say, you and Matteo are by VERY FAR the best ComfyUI content creators atm. Your work is stellar, thank you very much
Agreed, every single one of Mick's videos is a masterpiece. Who's Matteo?
@@andriiprykhodko3281 latent vision
who is Matteo?
@@andriiprykhodko3281 Matteo's channel is Latent Vision. He is the creator of the ComfyUI implementation of the SD1.5 and SDXL IPAdapters, among other things. He is a genius with what he develops, and is an excellent teacher. Highly suggest this channel! www.youtube.com/@latentvision
@@weebo2328 yeah that's him
Man, wish there was a MimicPC workflow from these workflows! I really appreciate how much time and effort you put into creating the videos and the guides! Somehow getting it to run is a different game.
I found that Mick's last two consistent-character workflows are on RunComfy, ready to run.
@@FunniestCatsandPets didnt know this existed
@@FunniestCatsandPets I get a "Cannot allocate memory" error when attempting to run the workflow on that site
Thanks for the excellent setup instructions.
So to get a 100% photorealistic result with Flux you need to prompt "shot on iphone", while "shot with DSLR", Nikon, Canon, or "medium format camera" will give FLUX's plastic skin? Holy moly... I had absolutely no idea. This is a huge thing/problem with Flux, and you just fixed it. Wow, thanks so much!
It's not a perfect solution, but it helps. The best approach would be to lower the guidance scale. Unfortunately, since we need this on 3.5 with the controlnets, that's not an option here.
@@mickmumpitz cool info...anything helps.... thanks so much
Most of the results I got in the first tab look like super-realistic midgets 🤣😂
I am new to this AI image thing, but I want to animate some of my music videos and make an animation series. Since I got your last workflow for character sheets working on my shitty laptop, I now know it is possible for somebody like me. Thanks a lot man! This is gold for people like me! I never even came close to thinking about supporting somebody on Patreon. But I will support you!
I'm speechless, simply amazing - thank you!
This is so impressive. I'm completely new to Stable Diffusion, so most of the time I don't know what is going on here, but I'm determined to learn - I'm just gonna start with something much, much, much simpler. Amazing work dude, thank you!
This video made a project I've been working on for 6 months easy. Thank you, Herr Pumpitz! Greetings from Wuppertal!
This tutorial is a lot better than the other one about generating a character from scratch. Most of us already have a character that we want to turn into a LoRA.
Thanks
Bro, I've been watching you for a while now, along with a ton of other AI creators, but you, sir... are a wonder... really appreciate all the work you've been doing... I should sign up to your Patreon - you already provide enough value to make it worth it. The extras there make it easier, but it's still worth following along and learning as you go... thanks again
Dude, again, thank you! I'll be one of your patreons once I'm successful!
Keep getting errors about some missing nodes: PulidFluxEvaClipLoader, PulidFluxInsightFaceLoader, ApplyPulidFlux, PulidFluxModelLoader
get the newest Microsoft Visual C++. It solved problems with importing PuLID for me
Did you fix it? Can you share?
use python 3.11.1
same issue and I'm using python 3.11.9 as in his video
@@vi5tapa5cal75 Will it make a difference if I try 3.11.1? I see in this video he uses 3.11.9, like me. Thank you.
Excellent work as always! Just one thing: when you select the completed LoRA in Fluxgym, be careful to check the timestamps - you should select the most recent model, which does not have a number after the name, and not the one numbered 000012.
Can you make a character sheet off an existing LoRa? Rather than creating a LoRa from a one image character sheet... Thanks!
Always so detailed!
In the 'Flux_smpl workflow' there is a wiring mistake in each Save Image node. The 'String Literal node' STRING output is going into filename_prefix of Save Image. Instead, it should come from the 'Join Strings node' STRING output. The filename provided in 1. Character Generation will work as intended after this. Just my tiny observation on a brilliant workflow.
Thank you for the very concise explanation of everything you were showing, as well as the detailed install tutorial for ComfyUI. I have watched many videos and they all talk super fast, or skip steps, because they assume everyone knows the terms they are referring to. You've gained a Like and a Sub from me ❤🔥
If only I could just download the workflow as a ready-to-go app, so that all I have to do is load in some of my art and I have a generator for all the pictures in a book. Maybe even a whole comic book.
It's easy to set it up on your PC/laptop - all you need is a decent GPU and storage for the models.
@@WorldofAInnovation Dude, I just watched the video - that installation has like 20 steps, every one in a different folder or website. So many opportunities for things to go wrong. And that wasn't even the part I have a problem with. This workflow stuff looks really confusing to me. So many boxes to turn on or off, and so many text fields that need to be formatted the right way or the AI just has a seizure.
@@lexibyday9504 Yeah, that's true, but just start with simple workflows first - once you get started, everything is easy.
@@lexibyday9504 But if you are looking for a professional to develop comic characters and scenes, you can hire me lol. I am still gonna follow Mickmumpitz's videos to improve my skills in this field.
If it were that easy, everybody would produce "art" and you would have almost no chance to sell "your" book. Think about that. You cannot make money doing nothing in this life. Do not expect AI to work for you, since you would be competing with 8 billion "artists".
Great! You are bridging the gap for creating meaningful stories with AI animations with this kind of tutorial. Thank you!
For anyone having issues with PuLID: the fix for me was that when I extracted the zip it created a second antelopev2 folder, so my dir was models/antelopev2/antelopev2/"Model Files". You need to move the files out of the second antelopev2 folder and put them into the first, so it becomes models/antelopev2/"Model Files".
Broski, you actually saved my life, may you live a happy life. From now on you're not austinboos5106, you're St. Austin 🙏🙏🔥🔥
You rock! Thank you!
Thanks for the advice, it worked.
I honestly can't say how thankful I am for these tutorials and your workflows. When I get to the point of needing them I'm going straight to your Patreon 🙏
Edit: went straight to your Patreon
Thanks for this! Awesome work. After two days of battling with errors like CUDA vs CPU and missing-size-in-CLIP-encoding errors, I had to give up. I simply can't make it work, and I am pretty savvy on a PC. But this is not a complaint! I can see a lot of really fantastic uses for this, so I just hope you will continue your great work, and then maybe soon there will come a workflow that my PC setup can handle. I am truly grateful for your work and can (semi) patiently wait for updates ;)
I had the same issues with the Flux models - try SDXL, that one worked.
I'm getting the same error except it works once as CUDA then will only occasionally work as CPU.
Thank you very much for showing the installation again at the end and which models you used, so that it works for us too. Thank you very much for your time and your detailed instructions - I really appreciate that in your videos. It is ideal especially for beginners, and even if English isn't your native language you can still follow along well with school English. Many greetings from Austria!
The workflow reveal at 2:11 made me laugh really hard. Three Mile Island ran on a simpler system.
Literally was searching just for this. Thank you. Such good timing.
About installing PuLID!!!
In my case the problem was with the version of numpy. Some nodes in my previous workflows had already installed numpy version 2.0, but PuLID needed a numpy version lower than 2.0. So I uninstalled numpy and reinstalled version 1.26.4. This solved the problem for me.
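(If you're on the Windows portable build, a minimal sketch of that same fix from a command prompt opened in the ComfyUI_windows_portable folder - 1.26.4 being the version the comment above landed on:
.\python_embeded\python.exe -s -m pip uninstall -y numpy
.\python_embeded\python.exe -s -m pip install numpy==1.26.4
Then restart ComfyUI.)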
FIXED IT! THX!!! Uninstall and erase all numpy folders, reinstall 1.26.4, and that solved the problem with the PuLID nodes! Love u so much dude! ♥
thank you
I looked around a bit, where do I find numpy to uninstall it and where do i find an old version? 😔
@@SloPok3660 when you are in the python_embeded folder, type CMD in the address bar and write "python.exe -m pip install numpy==1.26.4"
where do i find that numpy?
I can't thank you enough for all the effort you are putting into your workflows! Now I can finally train some LoRAs of me and my fan group for one of our next Star Trek fan films! This will be so great! Thanks!
Today I was trying to find a way to do this with your old workflow. You are a lifesaver!
I wish it was possible to have multiple characters in the same scene. Is it possible? If so, please show it to us.
Missing Node Types
When loading the graph, the following node types were not found
PulidInsightFaceLoader
PulidEvaClipLoader
ApplyPulid
PulidModelLoader
great work!
Managed to get all the nodes, but even with 16 GB RAM on a 4070 Ti it is not able to produce images :( Even with the GGUF version it always uses all the memory and then nothing...
something is up. I am running out of memory on my 24 gig 3090. lmk if you find a solution.
After restarting ComfyUI I was able to get it barely loaded - ~22 GB of VRAM usage. Maybe try a smaller image size.
Wow, thank you very much, I was searching everywhere for this.
Hey, it says ComfyUI-PuLID-Flux-Enhanced (IMPORT FAILED) in the "Install missing custom nodes" option of the Manager. What can I do? It's the only thing that didn't work :(
Same mess here :(
You need to install additional python wheels to fix this:
.\python_embeded\python.exe -s -m pip install filterpy
.\python_embeded\python.exe -s -m pip install facexlib
I also experienced the same error. Try checking your Python version again; what I did was reinstall ComfyUI portable.
Hi, I had the same problem and finally solved it. I repeated the step "Install Facexlib" and reopened comfy UI and it turned green.
Thank you so much for your impressive work! I have a few questions for you:
1. What is the purpose of training a LoRA if you're already able to achieve consistency without it? In your workflow, you're already creating multiple images from a single source.
2. Why create a character sheet instead of simply using the first image as a reference to generate the others? Couldn’t this eliminate the need to use ControlNet?
Let's say I only activate step 1 to create characters - is there a way to unload VRAM between batches? I'm forced to cancel my run and queue every image again on my RTX 4080 16GB, and it's a huge problem when you're trying to play with seeds.
I suspect this will be one of the next features Runway will be working on: being able to create consistent characters from tools like video-to-video.
Great video! I'm encountering an issue with PyTorch:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
I've tried ensuring tensors are on the same device with .to(device), but the error persists. Any advice? Thanks
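Side note for anyone unfamiliar with the .to(device) pattern mentioned above - this is just the generic PyTorch idiom, shown with a toy LayerNorm rather than anything from the workflow. The error appears exactly when weights and inputs end up on different devices:
import torch
import torch.nn as nn
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
layer = nn.LayerNorm(16).to(device)  # weights on cuda:0 (or cpu if no GPU)
x = torch.randn(2, 16).to(device)    # inputs must be moved to the same device
out = layer(x)                       # leaving x on cpu while layer is on cuda raises the error above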
Same here
same here, help would be much appreciated
I had the same problem. No one found a solution?
In my case the node highlighting indicates that the error is in the Ultralytics detector provider.
I solved it: in my case it was a problem with different Python versions installed on my system. I decided to set up a Python venv with py3.11 and did the whole installation process again. Via requirements.txt (in the ComfyUI folder) I installed all dependencies within the venv. That solved the issue and it is working now. :)
By using a venv, you can delete the python_embeded folder within the Comfy portable. I think there were some issues installing all dependencies - some might have been installed into the newest py3.13 directory on my system when using the cmd line without a venv.
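(A minimal sketch of that setup on Windows, assuming Python 3.11 is installed and you're inside the ComfyUI folder - you may still need to install the CUDA build of PyTorch for your GPU separately:
py -3.11 -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
python main.py
Just a sketch of the approach described above, not official install steps.)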
Thanks so much for your detailed explanation and work, Mick! Awesome! I'm currently using an RTX 2080 Super 8GB - possibly too weak, as I always get an error message about insufficient VRAM. Which graphics card would be recommended for such a workflow? I'm thinking of changing to the RTX 4070 12GB. Using the Aurora R11 with 32GB RAM. Would this be sufficient? Thanks a lot!!
The f'ing guy has 20 GB of VRAM, didn't you see that?! I bought a video card that is coming in a few days... with 8 GB VRAM... I guess the video card alone costs over 1k dollars.
Hey, I need some help. I keep doing everything by the book, but at the SamplerCustomAdvanced part I keep getting this error: "All tensors should be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)" and I just can't solve it. Any help?
Same here
++ Same, but when I change CUDA to CPU the workflow continues, and then I get that error again in the upscale process. Help...
same here
You’re doing great with your content! If you ever want to talk about ways to make it even better, just let me know.
Thank you, but I am just unable to test it. Missing Node Types (PulidFluxEvaClipLoader, ApplyPulidFlux, PulidFluxModelLoader, PulidFluxInsightFaceLoader) - I can't find them under missing custom nodes. I am currently on Python 3.10.9 and not able to use these nodes.
Yeah, same here. Since I am using ComfyUI via Pinokio I don't have the ComfyUI_windows_portable\python_embeded location to manually install the PuLID nodes into. Does anyone know what the correct folder is?
@@dracothecreative I got help from a Pinokio Discord moderator :)
I could find them, but it shows an error (IMPORT FAILED).
I'm having the same issues
Great video I found today. Thanks man!
Hey, has anyone run into an error "mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)" when it gets to the "SamplerCustomAdvanced" node? I managed to figure out the issue with the PuLID error, but this one has me stumped. Just curious, thanks!
Sounds like a model incompatibility - maybe the wrong ControlNet is loaded? Double-check the models. IMO just grab fresh ones from the links he provided to be sure. Lmk.
Hi, did you manage to fix it? I'm facing the same error.
Same problem for me. RTX 3090 128gb system RAM.
same error here, have you managed to fix it?
Update the IC-Light node and ComfyUI to "ComfyUI: v0.3.12-11-gd303cb53 (2025-01-21) Manager: V3.9.2". This solved my problem.
This guy is a legend! Thanks man!
I'm getting this error:
expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_cuda__native_layer_norm)
Did you fix this? I have this error.
The only way I could resolve this is to change the SamplerCustomAdvanced to KSampler.
@@franklee663 will try yours, hope it solves
@@franklee663 how can I do this, there is a different number of inputs and outputs on this node.
please tell me, I still haven't solved the problem
You're great! Once I get comfortable with ComfyUI I'll subscribe to your patreon!
How can I fix this error in SamplerCustomAdvanced: forward_orig() takes from 7 to 9 positional arguments but 10 were given?
Im facing same issue
same here
Same. Wonder if a node updated recently.
@@wasayali2884 i fixed it !
@@mistere9099 To fix it, revert to a previous version of ComfyUI or wait for a fix.
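(Assuming a git-based ComfyUI install, reverting is roughly the sketch below - the hash is just a placeholder for whatever commit predates the breaking update, so check the log first:
git -C ComfyUI log --oneline
git -C ComfyUI checkout <older-commit-hash>
Just a sketch of the rollback idea, not an official procedure.)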
Thanks a lot for this amazing tutorial, I have installed and love to explore in Comfy UI. Amazing Workflow!!!
Awesome video! I get the following RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm). It's set to CUDA just like yours. Any suggestions? I'm running a 3060; if I change it to CPU it will work once, and if I switch back to CUDA it also works only once, then I always get the error.
Any solution? I am also getting the error.
Also getting this error. I have searched the internet but couldn't find a solution. Need help too.
Same problem here ....
I was installing insightface in ComfyUI and after the installation I encountered this error.
"KSampler forward_orig() takes from 7 to 9 positional arguments but 10 were given"
Has anyone encountered something similar, if you can provide me with a solution I would really appreciate it!
Same problem
Same here
I've managed to fix it, but it is a temporary solution. Probably will break again with next update. I opened the file at the comfyui path: Comfyui/comfy/ldm/flux/model.py then replaced "out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)" with "out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)" . In my case, it was line 181. Hope it helps!
Awesome! This is exactly what I need! Please update the workflows for the new version of ComfyUI - right now it's impossible to install the necessary nodes. Thanks
Any alternatives for the PuLID nodes? I get an Import Failed error for them, related to a Python/InsightFace compatibility problem that is not easily fixable without jumping through a bunch of Python library hoops and installs. Not being a programmer, I don't want to spend hours following online fix recipes I only half understand.
same :(
Same here
Hello Sir, thanks a lot... incredible model. I have a question: I want to keep the same image I loaded but have it at a different angle... what is the param I have to set up? Thanks. (I use exactly your settings => Flux model)
Welcome back man. Great to see another character video love these!
Sadly PuLID and PuLID Flux Enhanced always fail to import the nodes for me 😢
I had the same and found a fix. Google for: pulid comfyui failed - then it's the first GitHub result, cubiq PuLID_ComfyUI/issues/23.
I followed all the steps: filterpy, facexlib, timm, ftfy - but I think the most important part is getting insightface installed - see the comment from 14 May by mfibz. After those things they're no longer IMPORT FAILED red.
I have the same issue, did you find the solution? My PuLID Flux Enhanced is "import failed" for some reason. There is an error in the console: ConnectionResetError: [WinError 10054]
@d1nozaur YouTube blocks comments that guide people... Google for 3 words: pulid comfyui import
Same here, can't get Pulid to import
Same here
Mate, you are a G. Can you do a video (or videos) on how to train other kinds of LoRAs for Flux? Maybe with a workflow to extract poses from a video, or how to quickly prepare a batch of images for training.
Unfortunately the installation process is way too error-prone to be of use. That's what you should focus on.
This video is totally gold! I am happy I subscribed to your channel. This is amazing!!
Did anyone actually manage to use this workflow without having to solve one million billion trillion bugs and errors? Delete this video, you're wasting everyone's time.
My gosh! ComfyUI should be renamed ComplexUI. I have never seen so many nodes into one workflow before. You are totally insane! (In a good way.)
This tutorial is useless. Something is wrong in it. I did everything step by step. There are different nodes in the tables you gave us than in the video. I spent hours installing everything and it gives errors. Dislike to you, useless idiot.
Absolutely wonderful following the workflow and ideas. Can't wait to try and add this in a similar workflow
Great tutorial! Can you do the same with multiple characters in the same image?
Legend.
Signed up for Patreon.
Three days trying to install your workflow, nothing is working, I am getting many errors, tried to fix them all but no results :(
This was very informative, thank you for helping us make cool things with this.
I was waiting for this video,, thank you so much ❤❤❤
I can't thank you enough for everything you do! Thank you
Wow wow wow exceptional workflow.....
Unfortunately after today's ComfyUI update it's not working anymore...
I think that a node update is bugged... anyone with a similar issue?
thanks a lot mick very very useful tutorial ❤
We really need this for Tensorart or Shakker!! 😀
Hey, thank you for sharing this.
It's an incredible workflow.
I wanted to know if there is any way to avoid the "cartoon" style of the body and head proportions? After some tries I see that it always renders a head which is bigger than the body. Maybe it has to do with the ref sheet that the ControlNet is using? Does it take the proportions from there?
Thanks.⭐⭐⭐
This is a great workflow, easy for us to work with.
EDITED: Thanks
This is not the question. I don't need the character's face, I need the entire body/costume/form. Say I create detailed artwork of my own hero, robot, or pokemon - all three, with an image of each. How do I feed the AI those three character images and then generate an image of them together? Maybe even fighting a monster I created, which would be a fourth image. Not the face - I need the full form.
Excellent video and explanation and thank you for sharing the workflows. 👏👏
Could you do an install video for MacBook Pro?
By far one of the most consistent ComfyUI YouTubers. Keep it up.
Wow, working great! Anyway, Flux is really slow (laptop RTX 4090 16GB). But you did it also for SDXL. Well done!
Man, how much RAM is needed to execute it fast? I'm stuck at 76 percent for an hour - 3070 Ti and 16 GB RAM.
@@LUCKYRABITT I'm in the same boat. I have a laptop with an NVIDIA GeForce GTX 1650 Ti and 16GB of RAM, and it gets stuck at 76 percent.
You are a champ for this, keep it up!
This is great, thank you!
Is there a way to use Python 3.12, or does it not work at all?
Need help, pls.
First of all it looks very good, great job!
But I have a problem, maybe anyone can help!? (please) Steps 1-3 are all fine, but when starting step 4 all the expression editors turn red and no error report is shown.
So I have no idea where the problem is!
Amazing!Thank you very much!
Great, I will continue to support
Hey man, loved the video. Just want to confirm: is the Python version issue resolved, or do we just need to downgrade the ComfyUI version?
Hi, I get this error at the beginning of the generation process: 'ModelPatcher' object has no attribute 'get_additional_models_with_key'
Just a heads up that this tutorial is no longer working - it fails with the error "ApplyPulidFlux 'ModelPatcher' object has no attribute 'get_wrappers'". Seems to be a version incompatibility issue. This tutorial needs to be updated for the newer version of ComfyUI to fix this.
Hi, which of your workflow versions belongs to the Flux GGUF version? There seem to be 3 workflow versions for Flux...
I know I'm new to all of this, but am I correct in understanding that you cannot follow along with this tutorial on a Mac?
I've installed the SDXL version of the workflow per the instructions; however, when I run it my memory maxes out and it takes a couple of hours to complete. Any idea what I could be doing wrong? Is it just slow on the first run to cache the models?
When should we expect this to work with the latest version of ComfyUI?
The manager button does not appear in the menu to be able to install the nodes, can someone help me please?
awesome work man. Thanks for the workflow. Please do more style and character workflows :P for training lora.
I would like to use this to help my niece by creating images of her.
I hope this setup will become easy for a beginner to use. Regarding the time frame - how long do I need for one setup, from installing to a finished picture?