🎨 Generative AI Art Playground: www.pixeldojo.ai/?
With Juggernaut v9 you need to lower the CFG... then you get less contrast and it looks more natural
Yeah a cfg around 2 to 3.5 works best, it gets blown out over 4.0
Great tip, thank you!
I wouldn't run them all at the same CFG, steps, and scheduler because they all have different requirements. Dreamshaper in particular requires SDE Karras specifically and can't take more than about 2 CFG and 7 steps or it will start looking really burned like that. You can't really compare a Turbo and a regular model at the same settings. If you're going to test them, you should test them at the settings recommended by their authors. Otherwise it's not apples to apples. When I do this kind of testing, I do an XY plot on each model to get a handle on where it lands at different settings, and then compare my XY plots.
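The per-model XY-plot workflow described above can be sketched in plain Python. This is a minimal sketch only: the CFG and step values are illustrative, and the image-generation call itself is left as a placeholder dict, since the real call depends on your backend (A1111 API, ComfyUI, diffusers).

```python
from itertools import product

# Illustrative sweep values -- adjust per model; turbo models want
# the low end of both axes, regular models the high end.
cfg_values = [1.5, 2.0, 3.0, 4.5, 6.0]
step_values = [6, 8, 12, 20, 30]

def build_xy_grid(cfgs, steps):
    """One (cfg, steps) cell for every point on the XY plot."""
    return list(product(cfgs, steps))

def sweep(model_name, cfgs, steps):
    """Return the settings tried for one model; plug in a real
    image-generation call where the placeholder dict is built."""
    return [
        {"model": model_name, "cfg": c, "steps": s}
        for c, s in build_xy_grid(cfgs, steps)
    ]

cells = sweep("dreamshaper_xl_turbo", cfg_values, step_values)
print(len(cells))  # 25 settings combos to compare per model
```

Running one grid like this per model, then comparing the grids side by side, is exactly the apples-to-apples comparison the comment describes.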
For photorealism, SDXL Base is still the best, imo.
Awaiting your SUPIR installation for A1111 😊 12GB version
All these models need different settings, so I think a comparison would mean more if those were used. Dreamshaper Turbo needs a specific scheduler, a very specific sampler, a CFG of 2, and 8 to 10 steps. You will get horribly fried results using a high CFG and too many steps.
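One way to keep those per-model recommendations straight is a small preset table. The values below are only what commenters in this thread report (the Juggernaut sampler is my assumption, not from the thread); always verify against each model's card on Civitai.

```python
# Presets as reported in this thread -- starting points, not
# official values. The Juggernaut sampler entry is an assumption.
PRESETS = {
    "dreamshaper_xl_turbo": {"sampler": "DPM++ SDE Karras", "cfg": 2.0, "steps": 8},
    "juggernaut_xl_v9":     {"sampler": "DPM++ 2M Karras",  "cfg": 3.0, "steps": 30},
}

def settings_for(model_name):
    """Look up a model's preset, falling back to conservative defaults."""
    default = {"sampler": "DPM++ 2M Karras", "cfg": 7.0, "steps": 25}
    return PRESETS.get(model_name, default)

print(settings_for("dreamshaper_xl_turbo")["cfg"])  # 2.0
```

Feeding every model CFG 7 / 25 steps is exactly how a turbo model ends up fried, so looking up the preset per model avoids that failure mode.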
Agreed, I’m going to do a round two with optimal settings for each model.
@@allyourtechai I began to do a similar vid but ran into the same issue. I did find one interesting thing, all the models seemed to be either trained on top of SDXL or Dreamshaper. I tested with a landscape illustration prompt and a person photographic prompt all with the same seed. With the simple landscape prompt I got two close variations of the same scene.
I think the idea that some models understand a prompt better or worse may not be true; as far as I know they all use the same CLIP model.
Great test, thank you. I think you should rerun this test at the recommended sampler, CFG, and steps as found on Civitai, as in my experience some of those weird results turn out way better when those recommendations are used. A great idea, and I would love to see the very best each model can do with the same prompt. Love the channel!! Many thanks
I’ll do that now that I know cfg scale was the issue. I appreciate it!!
@@allyourtechai Cool can't wait :-)
cyberrealistic is pretty good, as is deliberate
Most people that generate AI images (at least as a hobby) actually PREFER very fake-looking imagery. They cannot tell what true realism actually is. For true photorealism, you're better off going to models prior to Stable Diffusion XL. All images, even those with checkpoints bundled on them, look a tad fake with SDXL.
Most of these models are trained and produced in Europe, in places like Denmark. The expression "ruby eyes", presumably meaning a bright red iris, is only an American expression. That's a tall ask for any AI model, as they are all trained on hundreds of thousands of images of normal people.
Get some prompts from the internet, positive and negative, and you can get good results.
I'm on SD v2.1 768.
If you use basic prompts it just gives really creepy, bad results.
I think the negative prompt is more important, but I just downloaded it yesterday, still learning.
Although those are still a lot better, don't get me wrong, but realistic enough.
I was most impressed with Realistic Stock Photo 2.0 for realistic photos
The lighting and such are very nice with that one, I agree!
If you used the inference steps that the model's documentation calls for, you would get better results. We can't judge a model if we use it incorrectly, i.e. if a turbo model calls for 4-6 steps and we give it 20... well... the results will be way off.
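A tiny guard can catch that mismatch before a run. This is a hypothetical sketch: the (4, 6) range mirrors the example in the comment above, and real turbo models each publish their own step budget, so adjust per model card.

```python
def check_steps(model_name, requested_steps, turbo_range=(4, 6)):
    """Warn when a turbo model is run far outside its step budget.

    The (4, 6) default mirrors the thread's example; real budgets
    vary per model, so treat this range as an assumption.
    """
    is_turbo = "turbo" in model_name.lower()
    lo, hi = turbo_range
    if is_turbo and requested_steps > hi:
        return f"{model_name}: {requested_steps} steps, expected {lo}-{hi}, likely overbaked"
    return "ok"

print(check_steps("sdxl_turbo", 20))      # flags the overbaked run
print(check_steps("epicrealism", 20))     # non-turbo model passes
```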
Yep, totally over baked.
Yep, already planning the rematch with optimal settings for each. My initial goal was to introduce as little variability into the test as possible, but that isn’t fair when comparing some of the turbo models. You are spot on!
If you want red eyes, try vampire in the prompt
Text-to-image photoreal portraits have already been pretty much solved since SD 1.5 with the right models; there just isn't much room for them to impress people any more, TBH. Image-gen AI/LLMs really need to show off some new tricks if they're to keep the same momentum of interest, or else I think all this stuff could stagnate. ControlNets, LoRAs, and no censorship are what really give SD the edge.
There is a version 4 of Proteus
thanks dear
For photorealism I still prefer epiCRealism on SD 1.5; with the right prompts and LoRAs the output is insane.
good call, actually seems better than sdxl!
Same: epiCRealism, epiCPhotoGasm, and Photon. Between those 3 models (combining them) you can still get excellent photorealism from 1.5. I've not moved on to SDXL because it's just too damn slow on an 8GB 3070, and honestly SDXL didn't seem like that much of a step up outside of the upped resolution.
Not all teeth are perfect, I wouldn’t judge too much on that if the model was exposed to dynamic assortments
ADetailer is a must to correct hand and face problems. Doesn't work right every time, but it helps greatly.
Sorry to say this, but using the same settings is a fail; all these models have different requirements, like boiling a chicken for the same time as an egg.
Watch the follow up video
So creamy....