Comic Characters With Stable Diffusion SDXL

  • Published Oct 5, 2024
  • In this comprehensive tutorial, learn how to harness the power of Stable Diffusion AI to produce stunning and visually consistent comic book characters. Whether you're a seasoned artist or just starting, I’ll guide you through the step-by-step process of generating characters that maintain a consistent style from image to image.
    You’ll learn how to prepare custom character datasets, a crucial step in creating your own Stable Diffusion AI model for comic book character generation.
    Discover valuable tips, techniques, and tools to elevate your comic book artistry.
    Want to advance your AI animation skills? Check out my Patreon:
    / sebastiantorresvfx
    www.sebastianto...
    Install Stable Diffusion: • Stable Diffusion In Mi...
    Consistent faces : • Consistent Faces in St...
    Links from the Video:
    SDXL Models: civitai.com/
    Random Name Generator: www.behindthen...

Comments • 76

  • @kanavwastaken
    @kanavwastaken 1 year ago +9

    This video is a gem, really. I'm so sick and tired of most tutorials being so long and complicated. Truly, your explanations made me learn. Thank you, for real. We need more! ❤

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago

      I have more coming soon, it’s been a busy month unfortunately but I’m back on track now.

    • @Mr.Sinister_666
      @Mr.Sinister_666 11 months ago +1

      Quick, clear and concise. You are right on point here. The video is a damn gem! ANNNNNDDDDD thanks for being awesome @sebastiantorresvfx

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      @Mr.Sinister_666, Made my day 😎 good to know I’m doing it right 😄

  • @teamozOFFICIAL
    @teamozOFFICIAL 1 year ago +3

    This tutorial is exactly what I want in tutorials: giving us the information quickly and not being too heavy on the memes. I've happily hit the sub and bell button.

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago

      Much appreciated, glad it’s what you were after 😁

  • @shallmow
    @shallmow 1 year ago +2

    Damn, use of actual names is so smart lol. Previously people had to make models with reference photos to get consistent characters.

  • @kenny_numbers
    @kenny_numbers 9 months ago

    Thanks so much for creating these videos, Sebastian. I'm in the early stages of the learning curve in trying to get consistent characters and the kinds of images I need for a graphic novel.
    I spent September and October generating images for a different graphic novel, which I published through Amazon KDP, but I did it by generating loads and loads of images and picking only those I could work with. I also spent at least 150 hours fixing problems and deformities (hands, eyes, limbs, clothing, etc.) in nearly every image. I basically brute-forced my way through and didn't get the results I wanted. I published it anyway. The end result was deficient character consistency, not the most dynamic posing, and inadequate interaction between characters.
    I cannot go through a process like that again. I need a high degree of character consistency and images that work as generated, requiring little or no redrawing. I have generated a single image of a character with a design I like for the new graphic novel. However, SDXL produces a completely different-looking image every time I click generate, even with the same text prompt. I cannot build a dataset of consistent character images when I cannot even generate a second image that looks like the first. What am I missing? Do you have any idea what I'm doing wrong? Any help or advice would be greatly appreciated. Thanks.

  • @roymathew7956
    @roymathew7956 1 year ago +3

    Love the explanations and the wisdom. Would love to see a video where you work through a few panels for a comic strip, also possibly showing how you add the blurbs. I imagine you’d do that in Photoshop, but wondering if there’s a lora or something in stable diffusion that also works for that

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago

      As for how to put the pages together we’ll get there for sure.
      The word balloons and captions are best done in a photo editor. The best for it is Clip Studio, formerly known as Manga Studio. I love Photoshop, but it's not made for that, whereas Clip Studio is more directed towards comic books. And once a year you can outright buy it for like $50-$60 for a permanent license. Can't say the same for Photoshop 😆

    • @roymathew7956
      @roymathew7956 1 year ago +1

      Thanks for that. @sebastiantorresvfx

  • @gatotboediman9680
    @gatotboediman9680 11 months ago +1

    love your style and tutorials. subscribed already

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      Thank you 🙂 you’re awesome! Happy to have you onboard.

  • @meritorioustechnate9455
    @meritorioustechnate9455 1 year ago +2

    Tutorial is great. I'm using Midjourney for consistent characters and exploring new styles. But the main issue with AI for me is the jagged line art and proportions. I sketch over the AI art and draw my own line art, adding a unique style.

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago +1

      I’ve been playing with re-inking after generating. Another method I’ve found is to upscale the images and inpaint the sections that need sharper line art. I’ll then downscale as needed, and the quality of the line art will be superior. It’s basically how traditional comics are done: downscaling the original art to roughly 65% of its size.

  • @luozhan
    @luozhan 10 months ago

    Love your channel! ❤
    Thank you for creating this tutorial. It will be great if you could also show us how to create TWO or more consistent characters in the SAME scene. I am looking forward to it. Thanks again for the great work.

  • @TeluguNarrativeHub
    @TeluguNarrativeHub 1 year ago +2

    Thanks for sharing your knowledge. good job.

  • @Greensacks
    @Greensacks 11 months ago +1

    really great video! so much more straightforward than others lol. using this process, how might you handle multiple characters? say, instead of a superhero I'm working on two brothers and a dog in a fantasy setting. would you train a LoRA for each character? and then how would you bring something like that together?

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      I’d prefer to have an individual LoRA for each character and the dog, so I have more consistency with the look and the clothing.
      As for combining them in Automatic1111, there are a number of different methods, but it’s a little long to cover in a comment. Perhaps a livestream 🙂

  • @ConwayBrew
    @ConwayBrew 11 months ago +1

    Which checkpoint were you using? I didn't see it in the video but really liked the output. Your videos have really helped me dive back into Stable Diffusion and catch up. Thanks!

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +2

      Thank you so much for your message, it means a lot to know it’s helping you. I’m using the Realities Edge Anime XL; you can find the direct link to it in the description of my latest video on comic book line art. Have fun 😁

  • @hairy7653
    @hairy7653 1 year ago +2

    great tutorial

  • @michaelcarnevale5620
    @michaelcarnevale5620 11 months ago +1

    so informative - i subbed

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      Thanks for the sub! Glad you liked it. Good timing, follow up video is coming this week 😁

  • @arnabroy2193
    @arnabroy2193 11 months ago +1

    Thank u so much for sharing

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      You’re welcome, glad you enjoyed it 😁

  • @DeanCassady
    @DeanCassady 1 year ago +2

    Nice vid, good content

  • @WhatDoesEvilMean
    @WhatDoesEvilMean 10 months ago +1

    Could you do a video on how to train on our own artwork? So that the images come out in our specific style? Is that possible?

    • @sebastiantorresvfx
      @sebastiantorresvfx 10 months ago

      If you go through the process in the LoRA video, you can switch that out for your own art. Just make sure the images are around 1024px or bigger, but don’t go too crazy or it will take a while to train.
      But yeah, the process is the same no matter what your source images are.

  • @DanielSchweinert
    @DanielSchweinert 1 year ago +2

    Thanks! Straight to the point!

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago +2

      Glad to see you back Daniel. 😁

    • @DanielSchweinert
      @DanielSchweinert 1 year ago

      @sebastiantorresvfx I released a new tutorial and a node workflow on civitai

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago +1

      Taking a couple days to play on stable, I’ll check it out 😃

  • @g-aram1405
    @g-aram1405 10 months ago +1

    Hi mate, great tutorial. Can you recommend a model/LoRA with a simple look, like manhua or webtoon? The models I see are mostly for anime illustration.
    Thank you

    • @sebastiantorresvfx
      @sebastiantorresvfx 10 months ago

      Try Counterfeit-V3.0 from civitai. And for the painted look I’d suggest using style selector extension and setting it to painting or something of that sort to push the image in that direction.

  • @trumpsaloser
    @trumpsaloser 11 months ago +1

    still waiting on the 2nd part to this amazing video! great work!

  • @user-ui2on4ll9v
    @user-ui2on4ll9v 1 year ago +2

    Thanks for the tutorial. For me, the main problem is backgrounds. I can't draw comics, for now, because I just can't get the same background (for example, the same classroom or the same street in the city) without using a 3D model. And in my view, it is vitally necessary to be able to generate the same background from different angles (and at different distances) to draw action scenes in comics. Could you please tell me, if you know, how to solve this problem? How can I get the same background for comics (without a 3D model)?

    • @sebastiantorresvfx
      @sebastiantorresvfx 1 year ago +2

      Unfortunately SD isn’t reliable for consistent backgrounds from different angles. My workaround would be to generate the backgrounds and then project them onto some rudimentary 3D geometry. The Archer TV show uses a similar process so they can render out a different angle when needed.
      If you’re projecting an SD generation onto the 3D model, you’ll get the same look and have more control. There are also ways to change the lighting and light sources, which can be useful.

  • @jeffreychung7307
    @jeffreychung7307 11 months ago +1

    Great video. If I want to make a consistent character for a pet, how can I do it? Do I still use the Random Name Generator to name the pet?

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      For pets, depending on your situation, I would suggest either getting a LoRA that’s pre-trained on a specific animal, or training your own with photos of just one animal so SD won’t mix other animals into it.
      Unfortunately, when it comes to side characters (and pets) in comics, if they’re going to be showing up consistently, then you’ll need a way to make sure they come out looking the same, even if only for a couple of panels. LoRAs are your best bet.

  • @Carmidian
    @Carmidian 10 months ago +1

    This was so helpful, thank you so much! One quick question: what SDXL style are we using to get that superhero look? It was awesome!

    • @sebastiantorresvfx
      @sebastiantorresvfx 10 months ago

      Thank you 😁
      The style itself is using the SDXL style selector extension, which you can find in the extensions tab, set to comic. As for the model, it’s the Realities Edge Anime XL checkpoint from civitai.

    • @Carmidian
      @Carmidian 10 months ago +1

      @sebastiantorresvfx Sorry for bothering you, one more question: when it comes to making the LoRA, how many pictures should I generate?

    • @sebastiantorresvfx
      @sebastiantorresvfx 10 months ago

      No worries at all, that’s a complicated question. Technically you could get away with 15 images, but you run the risk of it not having enough flexibility for what you require later on. I’d say it’s probably best to go with something like 30-50 good all-round images to cover yourself.

    • @Carmidian
      @Carmidian 9 months ago

      @sebastiantorresvfx Thank you, once again. Your videos are incredibly helpful and easy to understand.

  • @iamnow8
    @iamnow8 11 months ago

    Amazing! Waiting on the next video, sir Torres. Do you know how to create low-file-size LoRAs (possibly with faster training)?

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      Wait no more, just went live.
      Network rank and network alpha will keep the files smaller if you choose a lower value. As for training times 😬 it can take a couple of hours depending on the number of images in your dataset.
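      The rank and alpha settings mentioned above correspond to flags in LoRA trainers such as kohya-ss sd-scripts; a lower rank yields a smaller .safetensors file. A rough sketch of such a command (the script name, paths, and values here are illustrative placeholders, not taken from the video):

      ```shell
      # Sketch of a kohya-ss sd-scripts LoRA training run (paths/values are placeholders).
      # --network_dim is the rank; lower values mean a smaller output file.
      # --network_alpha scales the learned weights and is often set to dim/2 or dim.
      accelerate launch sdxl_train_network.py \
        --pretrained_model_name_or_path="sd_xl_base_1.0.safetensors" \
        --train_data_dir="./dataset" \
        --network_module=networks.lora \
        --network_dim=16 \
        --network_alpha=8 \
        --output_name="my_character_lora"
      ```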

    • @iamnow8
      @iamnow8 11 months ago +1

      WOOH :D @sebastiantorresvfx

  • @kentuckeytom
    @kentuckeytom 11 months ago +1

    hi, would you mind sharing what video card you are using? mine is a 1070 Ti 8GB and takes 3 minutes to generate an image with the same prompt 😪

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      Hello, I’m using a Gigabyte RTX 3090 Turbo. It’s a few years old now but still does the job.
      Make sure you have --medvram in the command arguments line of your webui-user.bat, and it might be a good idea to turn off live previews in your A1111 settings. Might give you a slight boost.
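      For reference, the flag goes on the COMMANDLINE_ARGS line of webui-user.bat. A minimal sketch (keep any flags you already have on that line):

      ```shell
      :: webui-user.bat (Automatic1111 Stable Diffusion WebUI) -- minimal sketch
      :: --medvram reduces VRAM usage at some cost in generation speed
      set COMMANDLINE_ARGS=--medvram
      ```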

    • @kentuckeytom
      @kentuckeytom 11 months ago +1

      It's much better now with --medvram, thanks! @sebastiantorresvfx

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      Awesome! Glad to hear it. 🙂

  • @Kelticfury
    @Kelticfury 11 months ago +1

    Is Automatic1111 handling SDXL properly now? I switched to ComfyUI because it was pretty bad at it.

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      I believe it is; I’ve been using SDXL exclusively for the last couple of months. I believe its only shortcoming at the moment is the implementation of ControlNet. It isn’t as consistent as it was with 1.5 models, but that might be more to do with the ControlNet models than with Automatic1111. In terms of image quality, though, the potential is definitely greater.

    • @Kelticfury
      @Kelticfury 11 months ago +1

      @sebastiantorresvfx Hey, that is good news. Thanks for the fast reply at an ungodly hour :)

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      I guess that depends on where you are in the world 😂

  • @anaversary-
    @anaversary- 11 months ago +2

    Very informative video! I love the Star Wars style 2:04 you added to the prompts lol

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      lol only took a month for someone to mention the Star Wars crawl 😂😂 I got a good chuckle making it, so I refused to cut it 😂

  • @lastlight05
    @lastlight05 4 months ago

    How about ComfyUI?

  • @LouisGedo
    @LouisGedo 1 year ago +2

    👋

  • @matthewanacleto7885
    @matthewanacleto7885 11 months ago +1

    Another great video. How can we help getting you more subscribers?

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      You’re awesome! Share them on any forums, groups and Discords where you think the videos could be helpful. Unfortunately I’ve never been good at keeping up with forums. Definitely something I need to get on board with.
      Perhaps I should do live videos too? The only thing keeping me from doing that so far is that I like the fast pace of the videos. Can’t really do that in a live video.

    • @matthewanacleto7885
      @matthewanacleto7885 11 months ago +1

      @sebastiantorresvfx Find out the common problems, like the repeatability issue, and solve them too.

  • @zhoua0571
    @zhoua0571 11 months ago +1

    Why can't I comment?

  • @ledesseinduneidee
    @ledesseinduneidee 8 months ago

    inkreadible

  • @jeffreychung7307
    @jeffreychung7307 11 months ago +1

    I get this: ''NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(1, 4096, 1, 512) (torch.float32) key : shape=(1, 4096, 1, 512) (torch.float32) value : shape=(1, 4096, 1, 512) (torch.float32) attn_bias : p : 0.0 `cutlassF` is not supported because: device=cpu (supported: {'cuda'}) Operator wasn't built - see `python -m xformers.info` for more info `flshattF` is not supported because: device=cpu (supported: {'cuda'}) dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}) max(query.shape[-1] != value.shape[-1]) > 128 Operator wasn't built - see `python -m xformers.info` for more info `tritonflashattF` is not supported because: device=cpu (supported: {'cuda'}) dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}) max(query.shape[-1] != value.shape[-1]) > 128 Operator wasn't built - see `python -m xformers.info` for more info triton is not available `smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32 Operator wasn't built - see `python -m xformers.info` for more info unsupported embed per head: 512''. I guess the reason is that I am using a laptop with no GPU. Is there any way I can fix it using my existing potato? I have googled this and tried a bunch of tricks, but I am still not able to generate my first image. I keep the resolution at 512×512 and the sampling method at DDIM (seems the fastest), but I still can't generate my first artwork.

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago

      Hey Jeffrey, without knowing your specs it’ll be difficult to say. But if you have an Nvidia GPU, make sure you have the right CUDA software installed; I believe the latest is 11.8.
      Also make sure you have the latest versions of torch and xformers installed. You can install xformers automatically by adding “--xformers” to the command arguments in your webui-user.bat.
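      The error message above already hints at the diagnostics. Two quick checks, run from inside the webui's Python environment (this assumes that environment is active; on a machine without a CUDA GPU the second check will print False and --xformers cannot help):

      ```shell
      # Lists which xformers attention operators were built for this install
      python -m xformers.info

      # xformers' memory-efficient attention requires a CUDA device
      python -c "import torch; print(torch.cuda.is_available())"
      ```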

    • @jeffreychung7307
      @jeffreychung7307 11 months ago

      I had installed the latest versions of pip, xformers and torch but still got the same result. I solved it by temporarily removing the --xformers flag. @sebastiantorresvfx Is the only impact slower generation?

    • @musicwelikemang
      @musicwelikemang 10 months ago

      You need a GPU to run a local model of SD. Integrated laptop graphics just won't cut it.
      Try looking into Stable Horde. It's kind of like a peer-to-peer compute net: people with higher-powered cards donate their downtime to users without the hardware to run SD.
      It uses a credit system and has a pretty good community willing to help teach people.

  • @100k-subs-target
    @100k-subs-target 11 months ago +2

    Free?

    • @sebastiantorresvfx
      @sebastiantorresvfx 11 months ago +1

      If your computer can run it, then yes 🙂