Model of the Week Review - Lyriel - Like a gacha game you pay for with electricity

  • Published Sep 7, 2024

Comments • 16

  • @andrewowens5653 • 1 year ago +3

    Thank you very much. A link to the model would be nice.

  • @blitzar8443 • 1 year ago +3

    This model is really neat, it has lots of color and variety.

  • @ismailtibba • 1 year ago +1

    Great video, we need more like this 🙏

  • @toomurmu9148 • 1 year ago

    Would like more videos like this. 💥💥💥💯💯

  • @demoran • 1 year ago

    I like this kind of stuff. I'd love to see this become a monthly thing. Though things move fast. If you don't stop and look around once in a while, you could miss it.

  • @lukeovermind • 1 year ago

    That was cool, more please!

  • @pn4960 • 1 year ago

    Nice!

  • @JavierPortillo1 • 1 year ago

    Yey! I love model showcase videos!

  • @MrSongib • 1 year ago

    I just did a similar benchmark (more scuffed than yours) the other day for all the models I have, and you can spot the lineage from one model to another: since most models are merges of realistic, anime, and Midjourney-ish models, you can backtrack their source model most of the time, and you can make a good model from the source too. It seems fun.
    My method was almost the same as yours, but with a low number of runs using clip skip 1 and 2, straight 768x512 in either portrait or landscape, and no hires fix, embeddings, ADetailer (extension), or LoRAs.
    That way I learn how the model reacts to certain prompt structures. I didn't look at the "Suggested" prompts from the page, so you get more hands-on with it and understand more, since most people build their prompts differently. After that, plop in your favorite sampler (or just test all of them, xd) and see how the model reacts to that as well. Sometimes you get better results from a different sampler even though the page said to use a certain one.
    For steps, I use 25, since in general I work in that range (I think most models do, until you get something good; then you can add more steps for that specific seed. Beyond that it's kind of a waste. I learned this from you and other people too, ty sir).
    For prompts, I use a simple description of the subject and then just add a basic add-on (either realistic or stylized). If you put strong words like "masterpiece" or a style at the front, it defeats the purpose of seeing how the model reacts to a general prompt; keeping it vaguer is better so we can see its behavior toward certain stuff.
    And since my vocabulary is quite bad, I just use GPT for the add-ons and adjust from there. I didn't use embeddings; again, that defeats the purpose of testing imo.
    "Subject + add-ons" comes to 75-150 tokens; if I can, I try to reduce it to around 75 tokens.
    Here is a test prompt example that I did today (95 tokens):
    Female in a fantasy world, Fluid brushwork, Bright colors, Emphasis on light and atmosphere, realism, impressionism, Post-impressionism, light and color, attention to detail, by master of portraiture, by master of realism,,
    (realism:1.2) and (naturalism:1.2), (impressionistic:1.2) brushwork, (portraiture:1.2) and (figurative:1.2) focus, (play of light and shadow:1.2), (elegant compositions:1.2) and (balanced:1.2) arrangements, (rich color palette:1.2) and (bold:1.2) use of color, and (masterful technique:1.2) with (attention to detail:1.2),,
    neg:
    (Disastrous:1.2) composition, (awkward:1.1) proportions, (unrefined:1.1) style, (unimpressive:1.2) technique, (unengaging:1.3) subject matter, (lack of skill:1.2), (sloppy execution:1.2), (clashing:1.1) patterns, (incoherent:1.3) theme, (distorted:1.2) perspective, (amateurish:1.3) execution, (lack of creativity:1.2), (messy:1.1) arrangement, (lack of impact:1.1), (poorly defined:1.1) shapes, (lack of originality:1.1), (low resolution:1.3), (noisy:1.3), (blurry:1.2), (grainy:1.3), (unclear subject:1.4), (subpar:1.2), (bad camera angle), (ugly anatomy feature:1.3), (poorly chosen lighting:1.3), (unattractive color palette:1.2), (muddled details:1.2), (lack of depth:1.1), (unappealing texture:1.1).
    And for the test, I just use two tests (realistic and stylized):
    First, I use realistic subjects like people to get a general idea of how the model reacts in terms of texture; if the model can do this, most of the time it can do other realistic stuff too.
    Second, stylized (fantasy and sci-fi). This is how I evaluate color, what the model knows about certain concepts, and how it reacts to blending or "bleeding" (since most of the time fantasy and sci-fi blend real things together and add more colors).
    Third, I just try different samplers, starting from my favorite, to determine which one I like the most.
    Fourth, after this you get the gist of the model well enough without giving it any leverage like hires fix, embeddings, detailer extensions, or LoRAs, etc.
    Last, your video helps me a lot to understand certain technical stuff about SD since I'm quite new to it. Maybe you can share how prompts and weights work in the future?
    I still struggle to invoke certain camera angles and colors (but I learned about color today from this video: "th-cam.com/video/PYp2a7mUNHI/w-d-xo.html"), and I don't like using ControlNet, it's boring (until I get frustrated trying to get the angle, then I'll do it). xd
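
A rough translation of the settings in the comment above into code, for anyone who wants to rerun this kind of sweep: a minimal sketch using the diffusers library, assuming a CUDA GPU and using "model.safetensors" as a hypothetical path to whatever checkpoint is under test. Note that vanilla diffusers does not parse A1111-style (term:1.2) weights, so the prompts below are plain text (a helper library such as compel would be needed for weighting).

```python
# Minimal sketch of the benchmark loop described above (not the video
# author's exact tooling). Assumes diffusers + torch are installed and
# "model.safetensors" is a placeholder path for the checkpoint under test.
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
).to("cuda")

# Plain, deliberately vague prompts, per the comment's advice: no
# "masterpiece"-style boosters, no embeddings, no LoRAs.
prompt = "female in a fantasy world, fluid brushwork, bright colors"
negative = "low resolution, blurry, grainy, distorted perspective"

samplers = {
    "euler_a": EulerAncestralDiscreteScheduler,
    "dpmpp_2m": DPMSolverMultistepScheduler,
}

for name, sched in samplers.items():
    pipe.scheduler = sched.from_config(pipe.scheduler.config)
    for seed in range(4):  # low number of runs, fixed seeds for comparability
        image = pipe(
            prompt,
            negative_prompt=negative,
            width=512,
            height=768,              # straight 768x512 portrait, no hires fix
            num_inference_steps=25,  # 25 steps, per the comment
            clip_skip=2,             # supported in recent diffusers releases
            generator=torch.Generator("cuda").manual_seed(seed),
        ).images[0]
        image.save(f"bench_{name}_seed{seed}.png")
```

Judging the resulting grid per sampler and seed remains a by-eye comparison, which matches the hands-on spirit of the comment.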

  • @tstciuqz • 1 year ago

    Wow, I love how you present your study results! Could you share general good practices or a framework for studying a model?

  • @headsink • 1 year ago +1

    Realistic Vision next.

  • @kinlih289 • 1 year ago

    Very cool! Can you suggest a model that can do complex stuff, as in complex poses or concepts? Anime-specific models are generally better in this regard, but not as good as Midjourney, and the rest of the popular models were inconsistent. (Or simply add a complexity test to your future model reviews.) Much thanks :)

    • @siliconthaumaturgy7593 • 1 year ago

      Based on my testing, I think the bottleneck for complexity is inherent to the version of CLIP in SD 1.5 (~3 things at once at >50% accuracy). Regional prompting in MultiDiffusion or other extensions can help, but isn't without its own challenges.
      Theoretically, SD 2.1 should allow more complexity with its improved CLIP, but no one uses it, so I haven't bothered to test it. I'm optimistic SDXL will offer improvements, though.
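
The "~3 things at once at >50% accuracy" figure suggests a simple way to probe any checkpoint for the same ceiling: fix one multi-subject prompt, generate over a handful of fixed seeds, and hand-count how many outputs bind every attribute correctly. A minimal sketch with diffusers; the model ID and the three-object prompt are illustrative placeholders, and the scoring is done by eye.

```python
# Crude complexity probe: one multi-subject prompt, a handful of fixed
# seeds, then hand-count how many outputs got all subjects right.
# Model ID and prompt are illustrative; any SD 1.5 checkpoint works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("a red cube on a wooden table, a blue sphere beside it, "
          "a green cone behind them")
for seed in range(8):
    img = pipe(
        prompt,
        num_inference_steps=25,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    img.save(f"complexity_seed{seed}.png")
# If fewer than ~50% of the images contain all three objects with the
# right colors, the checkpoint is hitting the CLIP binding ceiling
# described in the reply above.
```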

  • @achiche1337 • 1 year ago

    As an idea, you could also do this with the most popular models and compare them to each other.

  • @ywueeee • 1 year ago

    Can you make a video on how to replicate the generative AI capabilities of Adobe's latest release using SD? Do it ASAP and get many views ;)