Create Consistent, Editable AI Characters & Backgrounds for your Projects! (ComfyUI Tutorial)
- Published on 15 Jun 2024
- I'll show you how to use ComfyUI to create consistent characters, pose them, automatically integrate them into AI-generated backgrounds and even control their emotions with simple prompts.
If you like my work, please consider supporting me on Patreon: / mickmumpitz
Follow me on Twitter: / mickmumpitz
I developed this ComfyUI workflow in preparation for one of the next AI 3D rendering workflows, in which we will look at animating characters. But you can also use this workflow for many other exciting things: to create children's books, AI movies or one of these AI influencers everyone keeps talking about!
You can download the FREE workflows here: www.patreon.com/posts/new-vid...
Chapters:
00:00 Intro
01:09 Character Sheet
04:49 Loras & Midjourney
05:48 Controllable Characters
10:42 Outro
Outstanding! Thanks for sharing these workflows so generously!
Incredible! Thanks for your work~
This was an amazing tutorial. Thank you!
I don't even remember liking this video. This is mind-blowing.
This is amazing. Thanks for everything, brother.
Hey thanks for your great content. Always enjoy your videos !
Great tutorial. Thanks!
bro, you a lifesaver!
Genius work.
Very interesting. Not only about workflow but your quick explanations are very great and valuable. Thank you a lot for that !
very nice
Very good! Thank you!
Excellent video, congrats and ty
that is really cool and helpful!!
great work !
The ultimate goal
Oh boy this is an amazing workflow ! Thank you so much for sharing !
What a great creative workflow 😁
Great work bro, that's exactly what I wanted. I'm thinking of creating a model for a character.
Dude! God bless your curious mind and a generous heart!
some good stuff right here
Excellent!
I just watched the image to 3d model video. I don't see why you couldn't generate multiple views using this method and then generate a more complete 3d model from the output. It would be more like photogrammetry.
you're so frikkin' awesome.
This workflow worked great for me! thx
how did you get the workflow he imported into ComfyUI?
@@endedbrand2324 it's Mickmumpitz_CharacterSheet_v01.json
Great video and finally a useful tutorial…
Fantastic, this is the sort of thing I've been after for a while. I suppose when it comes to the consistency of the clothing the simpler the costume the better?
thanks a lot
Definitely gonna try this! Thanks! 🧀
very interesting!
Amazing
Thanks.
Absolutely fascinating video.
So let's say I generate 2 keyframes, one where Hans grabs the cheese, and a second where he is holding the cheese over his head triumphantly.
Is there a way within comfy to generate the 12 or so frames inbetween, creating a custom animation out of our custom poses?
great video!
Top Tier!
Man, I wish I wasn't so intimidated by that UI. This looks really incredible.
Just go ahead and give it a shot. It's not complicated at all once you try it
Checkout Olivio's Comfy Academy and you can go from 0-competent
Yeah try it out. I've only made 2 workflows myself, it is pretty hard, but the workflows the community give out help understand more.
Just try it. Download some pre-made workflows and play with the settings. Won't even touch another webui now. None are as versatile not even close
Try it! This is my first day exploring and it's actually very intuitive once you start to understand how things work
always doing something unique and helpful content
Jesus Christ! This is awesome! 🎉
Beautiful
Awesome
This is amazing! I keep getting an error that says "clip Vision" model not installed.
The best consistent-character workflow so far. How about saving separate workflows for SDXL and SD1.5 for easier use?
Which hardware are you actually using to create those awesome images and videos?
I'd find it interesting to dig more into the LoRA part, explaining how to use the saved faces to train a LoRA.
We really need to have different camera angles, though. The character consistency is great, but we need to be able to take this same premise and be able to, like, take a picture or a rendering of a 3D model and get the angle on the consistent character.
Hope midjourney releases a pose reference feature with openpose support soon
I got everything, but I don't understand where I can find your Workflow (template), please help!
May I ask what pose-editing software you are using, as well as the software for generating hand depth maps?
Very interesting, how does this do when you combine it in conjunction with your previous videos with control net? IIRC there were programs like Stable Projectorz, so wouldn't it be possible to get nearly perfectly consistent characters by projecting character sheets like these onto 3d models? Hmmm...
Great tutorial. Finally someone explained the whole procedure of turning the character sheet into actual images. The main problem is still the background, which is always blurry in AI-generated images and makes them unrealistic. If you check real photos, the background is mostly as sharp as the subject, even in shots like a climber in the mountains. So is there any way to maintain the sharpness of the background?
Hello, thank you for sharing your work. Is it possible to create a model sheet from a reference character image? And if so, what node would you add? In short, how would you proceed?
I would like to input an Image instead of the Positive and Negative prompts connected to the Apply ControlNet (Advanced) node. How can I achieve this?
The intention is to create an Image -> CharacterSheet, rather than a Text -> CharacterSheet.
I would like to use my favorite character, but I'm tired of having to create new characters all the time.
I am using the workflow effectively! Thank you.
perfect : o
That's awesome! I've just finished a personal project where this would have been so helpful; I wouldn't have had to generate literally thousands of images, compose everything in Photoshop, and spend hours and hours on inpainting and outpainting. I'm looking forward to using it in a new project. Could you show us now how to do it with multiple characters?
Can we add an image prompt instead of a text prompt to use a character we already have?
I know it's probably easy, but I'm new to ComfyUI.
Super tutorial, thanks. I have a question: in your ComfyUI the photos come out with zero distortion, no ugly eyes, faces or hands. With me, on the other hand, that always happens in ComfyUI. What do you do to make everything look this good?
Please write a book or make a Udemy tutorial; I will join. You know what you do, and you do it perfectly. Love the work. Thank you.
Thank you! I've been trying to figure out a good workflow for this for some time now. Well done.
Hi, where can I get the IPAdapterUnifiedLoader node, and also the IPAdapter node (also missing in the manager)? Thanks in advance :)
This is very cool. Can you do also this for a very realistic person, please?
Can this wf be altered to begin with an existing image that you've created instead of text?
Do you think the character creation could be enhanced by integrating IPAdapter on top of the checkpoint and LoRA?
So we can create these sheets and then use an image2image faceswap, right? I'm looking for faceswap / same clothes / different poses workflows ^^ I need consistent characters, for example for a comic book.
I'm having an error in the second workflow, on the first KSampler:
Error occurred when executing KSampler:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
(And gibberish after that...) I don't know what to change...
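A hedged note on this error (not from the video, just a reading of the message above): it means the attention inputs disagree on precision, with a half-precision query but float32 key and value. Users often fix this by forcing a single precision for the whole model, for example by launching ComfyUI with a precision flag such as `--force-fp32` if your build supports one. The torch-free Python sketch below only illustrates the underlying fix, casting all three inputs to one dtype; the function and the stand-in "tensors" are hypothetical.

```python
# Minimal illustration (no torch required): each stand-in "tensor" is a
# (dtype_name, values) pair. The generic fix for the KSampler error is to
# cast query, key, and value to one common dtype before the attention call.

def unify_dtypes(tensors, target="float32"):
    # Re-tag every tensor with the target dtype, keeping its values.
    return [(target, values) for _dtype, values in tensors]

q = ("float16", [0.1, 0.2])  # query came out half precision
k = ("float32", [0.3, 0.4])  # key stayed float32 -> mismatch
v = ("float32", [0.5, 0.6])

q, k, v = unify_dtypes([q, k, v])
print(q[0] == k[0] == v[0])  # True: all dtypes now agree
```

In actual torch code the equivalent cast would be `q = q.to(torch.float32)` (and likewise for k and v) before the attention call.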
Any help on adding nodes to get a consistent background with your workflow?
incredible tech, scary how few real artists the industry will probably have in 5 years time. Overall depreciation of quality and interesting character design over time in the interest of turning the cog wheels of consumption content out as fast as possible. 10% will use this for good the rest of it will be used to save money and time in an area where money and more importantly time should be spent on this subject. Awesome tech, horrifying corporate application.
Mick, how did you display the stats in ComfyUI? Is that an extension?
You know, like the VRAM and such.
I'm not quite there yet...Thx for the vid
Thank you so much. Can I download the character as an FBX file?
I just can't believe this is real life.
Super complicated for someone with no coding or IT experience. Could not figure out how to run the manager as there is no run feature after getting it from github. After seeing the rest of the instructions, I know now I do not have the level of proficiency to do this tutorial. Any resources for learning this kind of thing?
I'm commenting so I remember to watch this soon once I have more stamina charged up (just woke up) (been chronically sleepy and cranky for the past like 7 or 8 months 😢)
Why is that also me X) ?
Amazing vid. I have one little issue: where do I get the upscale model, and where do I put it? It seems I am missing it.
Edit: nvm, I saw the file name, so I just googled it, found the upscale model, and everything runs fine.
Thanks for this great tutorial!!! In the second workflow, the two nodes CannyEdgePreprocessor and DepthAnythingPreprocessor fail because the ComfyUI Nodes for Inference.Core package conflicts with lots of elements. I tried to install them manually as well, but still nothing. Any solution so that we can follow along?
Man, you are simply God! A year ago I tried to do the same thing in A1111, and it didn’t work. What you created is a masterpiece. Thank you!
How do I change the input prompt to img2img from an already existing character?
Thanks for your amazing tutorial. I've followed all the steps, and I still don't know why, when I start the process, all I get are black images. Can someone help me, please?
Your "IPAdapter" node is different from the "Load IPAdapter" node near your expressions. I'm getting an error in my "Load IPAdapter" node, and there are no models in the dropdown list there, although I have all the IPAdapter models loaded.
Same here. I have tried to download whole model but no luck. Still getting the same error.
@@juxxcreative did you solve that? same error
@@ernienosoul unfortunately no
For people facing "IPAdapter model not found." who are using ComfyUI from StabilityMatrix: ComfyUI Manager downloads the model to StabilityMatrix\Packages\ComfyUI\models\ipadapter, but the workflow tries to load it from StabilityMatrix\Models\IpAdapter. What you need to do is copy all the models to StabilityMatrix\Models\IpAdapter.
It worked for me :)
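The copy step in the comment above can be sketched with a small Python helper. This is a hedged illustration, not part of the workflow: the function name is made up, and the commented-out paths are the ones from the comment, so adjust them to your own install.

```python
# Copy every .safetensors model from the folder ComfyUI Manager downloads
# into (src) over to the folder the workflow actually loads from (dst).
import shutil
from pathlib import Path

def sync_ipadapter_models(src: Path, dst: Path) -> list:
    """Copy all .safetensors files from src into dst, creating dst first."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for model in sorted(src.glob("*.safetensors")):
        shutil.copy2(model, dst / model.name)   # keeps file timestamps
        copied.append(model.name)
    return copied

# Example call with the paths from the comment (adjust to your system):
# sync_ipadapter_models(
#     Path(r"StabilityMatrix\Packages\ComfyUI\models\ipadapter"),
#     Path(r"StabilityMatrix\Models\IpAdapter"),
# )
```

Re-run it whenever ComfyUI Manager downloads a new IPAdapter model, so both folders stay in sync.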
Clip vision model not found error What am I missing?
same here
same
Wow
Thank you! Great video. Though I must say, SDXL OpenPose is very lacking, completely off and not accurate. No matter what you do, it's nowhere near as accurate as 1.5, which is my main issue with SDXL and why I am a bit fed up with Stable Diffusion in general.
Can you create this from an image? I mean, if you already have a character.
I tried to do it in Comfy and I just couldn't... is there a way to develop it directly in Midjourney?
@02:29 "I'm really happy that it's THIS type of moustache!" BRUH!💀🤣
Awesome tutorial and channel btw❤
looking for a workflow not for generating a scene but to add more detail to a 3D render
Hey, I got these workflows installed in Comfy and everything looks great, but I have one little holdup. Where do I get the 4 image files you're using in the 2nd workflow (the posable character flow) that are briefly shown @6:16? You have them as Face_upscale_00019_.png, Face_upscale_00053_(3).png, pose_2024_04_15_19_23_png, and depth_2024_04_15_19_23_30(11)png, but when you download the workflows they show up as FaceRefine_00087_(1).pgn, FaceRefine_00059_(3).png, Pose_2024_04_26_16_31_12.png, and depth_2024_04_26_16_31_12(1)png. I've searched for these filenames on the web and can't seem to find them on GitHub or Hugging Face. Not sure where to go from here :/
Killing it with Midjourney! But if you're ready to take things up a notch, Stylar's your go-to. It's like a design powerhouse, especially for interior, 3D, and character work!
Automation is soo fun!
How would you create a character sheet for an object? I would love to create different views so I can apply them in a 3D program. Like a car.
Anyone having issues loading IPAdapterUnifiedLoader node in the workflow?
Same here!
@@chiselpeakstudios2593 the IPAdapter custom node has been updated; the model paths changed, and you might need to update Comfy.
same @Mickmumpitz
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "S:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 515, in load_models
raise Exception("IPAdapter model not found.")
Were you able to fix it?
I am also getting the same error:
"Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found."
I cloned the entire IP Adapter repository into the mentioned folder. Is this wrong?
It seems like the workflow link cannot be opened, and I didn't see your Discord link either.
can you load a character from an image? thanks
where to get the "perfect hand v2" lora, thx
Hi, thanks for putting together this video. As a creator myself, I truly appreciate all your efforts. On the positive side, I managed to generate a character using the first workflow, but got stuck when I got the error "Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found."
Similarly, when I went to compositing I got the same error. Any pointers on what I could be doing wrong?
I downloaded the IPAdapter models and CLIP Vision models (but renamed them to clip vision 1 and clipvision 2) because the instructions were not clear. Could that be the problem?
Ok, it was actually my bad. I installed the IPAdapter Plus nodes from the ComfyUI interface, which solved the issue! Once again, this is the best tutorial on ComfyUI consistent characters!
What happened to your VRAM at 2:38?
What if I don't want human poses? How are we supposed to get poses of other animals, or even aquatic animals?
I have the same question. Everything works great, but I do not want to generate a human pose. I am trying to generate a teardrop, like a waterdrop, but I have not been successful.
Where do I get the Upscale model that's missing in the first workflow?
Me too. Have you found it?
I am getting this error:
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
I have downloaded the IPAdapter models into the correct folder. What am I doing wrong?
I had a similar issue while running the CharacterSheet workflow with a 1.5 based checkpoint. I solved mine by creating an ipadapter folder (/ComfyUI/models/ipadapter) and placing the ipadapter model (ip-adapter-plus-face_sd15.safetensors) in the folder.
@@IsiOmoifoJr Thanks a lot.. It worked..
@@Rohit-lh9bjYou're welcome.
Hi bro, I got this error:
raise Exception("ClipVision model not found.")
Can you help me?
Same here!
I'll let you know if I figure it out.
I want to use an image as the background. What do I have to do, add a mask node for the background?
And how do I connect it?