My prompt styles can be downloaded here: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
Is this constantly updated? Thank you anyway
Hey Sebastian, thanks for the video!
Have you updated the prompt styles file or is it the previous file?
@sebastiankamph: I wanted to ask you, how do we share all our models and checkpoints between Stable Diffusion and ComfyUI without having the same files twice?
@@oaahmed7515 Yes!
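In case it helps anyone: ComfyUI ships an extra_model_paths.yaml.example in its root folder for exactly this. Rename it to extra_model_paths.yaml and point base_path at your A1111 install, and both UIs read the same files. A minimal sketch, assuming a default Windows install (double-check the subfolder names against the bundled example file):

```yaml
# extra_model_paths.yaml, placed in the ComfyUI root folder
# (rename the bundled extra_model_paths.yaml.example and edit it)
a111:
    base_path: C:/stable-diffusion-webui/   # assumption: adjust to your own A1111 install path

    checkpoints: models/Stable-diffusion    # checkpoints shared by both UIs
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
    upscale_models: models/ESRGAN
```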
I'm super happy that Controlnet for SDXL is getting there.
Now all I need is a 'Tile' model. Tile is super useful, especially for upscaling - but also for reinventing images if you use it at a weight like 0.3 or so.
I feel like ultimateSD upscale works fine without controlNet up to 8K as long as the denoising isn't too high
@@hellbolato Tile ControlNet is amazing because it allows a higher denoising setting to add extremely sharp details while still preserving the input image. I feel low denoising usually leaves a bit of AI fluff noise in there.
Patiently waiting on tile too. It's such a useful CN in animation rendering as well.
IT's HERE BOYS!
Boom!
Hopefully Lineart comes out soon. I enjoy that one the most; it allows for perfection.
Hi, I'm new to this. I discovered your channel by chance and I'm very excited, because you explain things in a calm way that even beginners can follow. I will watch all the videos from the beginning so I can get better at creating images. Thanks for taking the time to show us, keep it up. You are the best lol.
Hi Werner, thank you for the kind words. Happy to have you aboard and enjoy the content 😊🌟
For me your tutorials are the best! They helped me a lot.
Would love to see a video about this for ComfyUI. I've also been looking for a video on inpainting and outpainting with XL and ComfyUI.
Ur at 69k subs already! NICE 🙃
I tried inpainting with SDXL and the new ControlNet and get a "mat1 and mat2 shapes cannot be multiplied" error.
Awesome! Can’t wait to stack these two!
Lets go!!!!!!!! 🔥
🏎🏎🏎
Bahhh, I'm an idiot. I'm new to all this and just switched from Fooocus to A1111 to Forge. I had no idea there were XL models, and I'd been thinking I could only use ControlNet with 1.5 (I would get the "not compatible" message when using an XL checkpoint, since I have the 1.5 CN models). Glad to see I just need to download the XL models.
Came for the models, stayed for the "ruler king" joke
AttributeError: 'NoneType' object has no attribute 'shape'. Can't get this to work.
Thanks for the vid, can't wait to try ControlNet myself :)
Have fun! :))
Every version I try to run this on (including the one advertised here) breaks down when I try to use the Depth control-net, saying that it can't multiply matrices of incompatible shapes. Has the announcement that A1111 can do SDXL + ControlNet come a little prematurely?
Didn't know about that controlnet preview feature. That's handy. Also I really appreciate the prompts share :)
SDXL has just been okay. That Juggernaut checkpoint has been awesome. A1111 is running a little more stably on my 6GB GPU. Re-painter often just produces a kaleidoscoped image; I haven't even looked into what it is yet, though.
How do you inpaint so cleanly with an SDXL model, and without ControlNet too? My results are far worse than with any 1.5 inpainting model. What am I missing?
Can the SDXL ControlNet models be run in the ControlNet extension used in A1111?
Which ones do I actually need? sd_control_collection? That part wasn't so clear to me; the list is huge! Thanks!
I'd start with canny and depth. If you want to know how to install ControlNet from scratch, check out my base videos for ControlNet (playlists on channel)
where do you see this tech in 1-2 years?
I love the little snippets of AI info in this excellent dad joke channel ;-)
Very nice, but my computer is too slow... isn't there some other way, like just using Hugging Face?
Try ThinkDiffusion or some cloud solution
Great news that this has been introduced for SDXL. However, it now takes me about 10 minutes to generate a single image. If I use ControlNet with any other model (not SDXL), it takes mere seconds. Anyone have any idea why?
How much GPU RAM?
@@kc-jm3cd 12GB
I don't know about this. I'm using ControlNets with SD 1.5, and honestly the results with SDXL are whatever atm; it's hardly worth buggering around with right now.
Yeah, me too. For now, 1.5 seems to be enough for my purposes; I don't see much reason to change to SDXL yet.
@@ph3-lm Pretty much. It's just a time thing atm; like you said, the results you're getting are better right now, so possibly in like 5+ months it might be worth it, I don't know.
Hi Seb! Controlnet FTW 🔥🔥
Also awesome dad joke today 😂
Thank you! Good to see you around again 😊🌟
Any news on a Tile model for ControlNet XL? I really need that for upscaling. 😢
Not sure why, but I can't get OpenPose to work with my A1111 webUI. Depth works, so it's either the model (which I downloaded from Hugging Face) or the preprocessor, or something.
thanks this helped a lot!
Glad to hear!
I don't have a Lora folder in the models folder. Do I need to create one?
What graphics card are you using?
RTX 4090. You? 🌟
Is ControlNet working in Easy Diffusion? If so, I need a how-to, as I can't get it working.
Any idea what ControlNet Recolor does and how to use it?
What is the SD VAE tab? I don't have it, nor the "add network to prompt" option. I downloaded the model you used here and also the Lora. Could you explain those tabs and how to get them to show? (I'm running the newest version of A1111.)
For the VAE tab: in Settings / User interface there's a "Quicksettings list" (you should have at least "sd_model_checkpoint" in it), and you can add the VAE dropdown (sd_vae) or a lot of other things from Stable Diffusion or an extension by clicking in that area.
As for "add network to prompt", I don't know exactly what it is (I guess it adds the selected Lora to the prompt, but I don't use Loras that way).
@@lennoyl thanks! 🙏
Any idea how to fix the error: AttributeError: 'NoneType' object has no attribute 'shape'? Thanks
Did you load an image into CN?
@@sebastiankamph Thanks for the reply.
Yes I did, I also clicked the explosion icon to view the preview. I also ticked "enable".
@@R1znl3r txt2img or img2img?
I think you need to get the refiner model.
@@sebastiankamph txt2img, sorry late reply.
I have a performance problem with CN and SDXL: the time to generate pictures goes up drastically if I use Canny or OpenPose.
I downloaded the fixed VAE with the performance improvements, but it doesn't help much.
Why doesn't ControlNet work when I use a Juggernaut_X_RunDiffusion-based model?
Please help me. If I change the settings, the error message below appears, and I do not know the reason.
[Textual Inversion selection menu]
Nothing here. Add some content to the following directories:
C:\stable-diffusion-webui\embeddings
When will the other models such as Lineart and Inpaint become available?
I couldn't find how to add the Lora quick pulldown like you had it ("Add networks to prompts"). I looked in the user interface and extra networks settings. Any tips for getting that handy pulldown?
Settings - User interface - Quick settings
Thanks @@sebastiankamph. Strange, I'm on an updated v1.6 and I don't have "add network to prompt" in the quick settings. I know what it is, and I have some custom fields in there, but I just don't get the "add network" or "add hypernetwork to prompt" options.
@@dkamhaji Same issue, did you find a solution?
I came for the Jokes... and stayed because of the Technology 💯👏🏻
I always get this error: 'NoneType' object has no attribute 'shape'. Can someone help me?
Having the same issue. Will check back if I find a solution
Make sure to update both SD and CN to the latest versions.
Hello, I followed your tutorial but I always get this error message in ControlNet when I try to generate an image:
AttributeError: 'NoneType' object has no attribute 'shape'
Any ideas? Thanks!
Can I not use the classic mannequin-like poses? Why do I need to already have a finished image? Defeats the purpose for me
Of course you can
Awesome 🎉🎉🎉
Thanks 🤗
Thanks for the video; this is an underrated channel, really happy to stumble on this one!
I have a question: if you would like to swap clothes from one character in a picture to another character in a picture, retaining everything else in the picture as it is, which workflow would you go for? Is it img2img with a Canny ControlNet and inpainting? Is there a way to avoid manual masking and have ControlNet do the heavy lifting? Anyway, keep doing great work, you have my subscribe.
Hey @stefanvozd, I have a similar problem I want to solve. Let me know if you find anything.
Sebastian!
Love your content. Thank you for all the work you put out.
I've been having problems running some preprocessors in ControlNet.
I've tried updating, deleting and reinstalling, running SD A1111 in different browsers, redownloading the models, etc., and I just keep getting the following error: "urllib.error.URLError:"
Anything come to mind that could help me solve this?
Like I mentioned, only a few preprocessors work: for Depth, for example, only depth_zoe works; all the others give the error. For OpenPose, none of the preprocessors work. This happens with both SDXL and 1.5 models.
Thank you in advance.
VRAM hungry. I can't render the simplest ControlNet image on a 12GB card. 😞
It takes a long time to render: 40 minutes on a 3060 Ti. Any suggestions, or is this what's expected with this GPU? I have all of --xformers --medvram-sdxl --upcast-sampling --skip-python-version-check --no-half-vae --skip-torch-cuda-test set.
Thanks, so do we need to install separate extensions for both 1.5 and XL, and download models separately as well? How does it work with SD 1.5 and XL?
Same extension, but different models if you want to use both 1.5 and SDXL, yes.
@@sebastiankamph Thanks for your answer, man, but you missed one thing: your dad joke 🤣
@@OnyxLee What do you mean, it's in the video? :)
@@sebastiankamph nah, I was joking with you that you didn't tell a joke in your reply. Thanks man 🙏
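Following up on the models question above, for anyone unsure where everything lives: with a default A1111 install, the 1.5 and SDXL files simply sit side by side in the same folders; only the ControlNet models themselves differ per generation. A rough sketch (paths assume a standard install):

```
stable-diffusion-webui/
├── models/
│   ├── Stable-diffusion/   # 1.5 and SDXL checkpoints together
│   ├── Lora/               # 1.5 and SDXL LoRAs together
│   └── VAE/
└── extensions/
    └── sd-webui-controlnet/
        └── models/         # ControlNet models, one set per generation (1.5 / SDXL)
```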
😪 Fantastic, but despite an RTX 4060 with 8GB VRAM I get a low-memory error. --xformers --medvram-sdxl is set, as is Low VRAM in the ControlNet settings. Is there another way to make ControlNet work for SDXL on my system?
Hello sir, how much GPU VRAM do you use in your computer for this AI editing?
10GB. The minimum is 4GB; more is recommended.
Is there a video on how you set up your interface? It looks a lot better than the default.
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x2560 and 2816x1280)
Anyone else have this issue?
You can't mix SDXL and 1.5.
Yep I get that error too, still not sure what the fix is.
Is there any link or extension where there are "default negative" type lists?
finally !!!! :)
Does this work with SD 1.6? Mine auto-updated.
Thank you 🫡
OMG my eyes!! Dark UI please!!!
I don't know why, but I don't get the explosion symbol for previewing the preprocessors. It's been a while and I am fully updated.
That's weird, have you tried reinstalling it?
I'm getting a weird problem with ControlNet and XL. I often use depth maps from Blender to control the composition, and they work perfectly in 1.5, but in XL they actually affect the image in a really negative way: it becomes flat, oversaturated, low quality, and generally doesn't follow the prompt at all.
Both Auto1111 and ControlNet are up to date, and I've tried all the XL depth models; they all produce the same results.
Have you figured out what was causing it?
If I update ControlNet, will it still work with 1.5?
Yes, but it needs different models for 1.5 and SDXL.
Still no SDXL segmentation? That's highly frustrating.
How do I show the abbreviations for sizes 1:1 / 3:2 / 16:9?
So is it better to delete the older models of controlnet?
Hi bro, I have a problem with an SDXL Lora. It isn't showing in the Automatic1111 webUI. I downloaded it and put it in the Lora folder, but the webUI only displays SD 1.5 Loras. What can I do?
Did you update to the latest A1111?
@@sebastiankamph yes bro
What GPU are you using? My RTX 3050 Ti suffers like hell to make just one of those, and that's without the refiner Q_Q
Gotta use command-line ARGuments to optimize your renders.
@@Yiffboy I'm using `set COMMANDLINE_ARGS=--xformers --medvram --autolaunch` in webui-user.bat.
I also raised the token merging ratio in the settings to 0.6.
I don't know what more I could optimize Q_Q
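For reference, those flags all live on the COMMANDLINE_ARGS line in webui-user.bat. A low-VRAM-leaning sketch (which flags actually help varies by card and webui version, so treat it as a starting point, not gospel):

```bat
@echo off
REM webui-user.bat, low-VRAM-leaning example; adjust per your GPU
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --medvram-sdxl --no-half-vae --autolaunch

REM --medvram-sdxl (webui 1.6+) applies the medvram offloading only to SDXL models;
REM on older versions, or if you still run out of memory, try --medvram or --lowvram.
call webui.bat
```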
Man, if only I had enough money and computational power, I would do this all day.
Try one of the cloud solutions like ThinkDiffusion or Rundiffusion if you don't have the GPU power
@@sebastiankamph thanks for suggesting these options ❤️
There is no Tile for this?
does this work with comfyui too?
Sure does
Awesome! I've been using SDXL by plugging rough color sketches into img2img because it's more controllable than SD1.5 w/ ControlNet. :-| But now I have both! 😀
(Actually, I'm having trouble with the sketch controlnet at the moment. It's running, the results are just terrible. Hopefully I'll figure out what I'm doing wrong in a second here.)
I don't understand what I'm doing wrong. I get different error messages when I try to use my ControlNet, like: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x2560 and 2816x1280). I generated an image of a gorilla at 1024. I used a depth map and it seemed to work, but when I try to generate a space suit with that map it gives me that error. I have Pixel Perfect enabled, I have an RTX 4090, and every other resolution is set to 1024x1024. Any ideas?
I think that is the refiner's fault, but I'm not sure. Maybe try a model without using the refiner? I also got this error a bunch of times but haven't figured out what causes it. On the other hand, CN seems to cause extreme performance loss for me.
@@XraynPR Thanks for your answer. I got it to work, and I'm embarrassed to say that I think the solution was that I had forgotten to enable it. I'm not sure if that was the case, because I naturally understand that you have to, but stupid or not, I got it to work. And again, thank you for giving me some ideas. :)
Most of the models in the package from lllyasviel for SDXL do not work in Automatic1111 1.6. Reinstalling the extension and Python does not help. When used, it produces the following error:
return F.pixel_unshuffle(input, self.downscale_factor)
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=1240 is not div
Can't mix SDXL and 1.5.
It looked like you used the Juggernaut checkpoint for the inpainting. I thought the XL models didn't work for inpainting or img2img. Guess I'll have to try it myself.
As you can see from the result, it worked quite well :) Also, there is an SDXL inpainting model available.
@@sebastiankamph You're right, it does. I've been playing with this for the last few hours now. Fun stuff and great results with SDXL. I've also found that I can change to the epicrealism checkpoint and use img2img to get even more realistic images.
What hardware are you running on?
Rtx 3080
awesome!
Quite so! 😊🌟
Thanks!!!
You bet!
No tile version yet?
Is this update only for A1111, or is it an update to ControlNet as a whole (so also for ComfyUI)?
All of them!
@@sebastiankamph How do you update it in Comfy? Would love to see more of your videos on Comfy, to be honest. We are a small substrata of people who use ComfyUI 😅. Thanks for your awesome videos, by the way.
Thanks, but it works very slowly (i7, 32GB RAM, 3060 12GB)... any advice?
Can someone tell me what VAE is used for?
Well, it didn't work for me with OpenPose. I guess I'll keep searching to find out how to use ControlNet for generating images. For now, I waited 5+ minutes to get a super blurry and ugly pic out of it, while I can create normal HQ photos in seconds...
Check out 1.5 models, ControlNet is generally more powerful there and has more options.
Anyone else having issues with OpenPose for SDXL? It doesn't seem to work, and I tried all the versions.
Same
The latest update killed my SD... totally dysfunctional. So, while this would be nice, I think I may need to find a different option. Thanks for the info anyway.
❤
Anyone else having trouble getting OpenPose to work in SDXL?
Soooo you want me to put some of YOUR code in my webui to get your styles? Yeah no thanks. Also your sensationalistic titles wear thin after a while. There’s only so many times an industry can change forever and minds can be blown only so many times per month
Broke my SD install, take care.
sweet !!! Now I can upgrade to SDXL 😊
Have fun! 😁
It's really too slow for my GPU (2070S).
Generating XL images without ControlNet takes an acceptable time: mostly under 2 minutes for 2048x2048 (except sometimes after long usage; I guess there is a bug in VRAM management in Automatic1111 that forces me to restart SD).
With ControlNet XL, it takes more than 40 minutes for, most of the time, a disappointing result.
So I prefer using 1.5 and generating far more results, maybe using XL to enhance the best ones (though most of the time they don't even need it).
Hey everybody. I also created an app that does video-to-video from a text prompt, with automatic mask creation across the timeline so that unique prompts can be used for each object in a single run. Maybe a review of Wunjo AI would be of interest.
A perfect ballerina with quite strange bone and muscle structure
That hand is oddly long
It's so interesting and perfect and boring all at the same time. You just write prompts and that's all.
post more
And once again, a generator of faces that look nothing like the original... it's getting old already.
It's so sloooooow.
For sure, it's a different speed compared to 1.5.
Yeah, really slow.
Not sure why, but it's taking a good half hour for a single image. Better than complaining about running out of RAM, but not that usable here.
"you don't need to install all of them"... me: "the 870 qvo 8tb is cheap at the moment"
😂 It's convenient to have them all
OpenPose for SDXL sucks.