SDXL 1.0 is OUT! Let's test it!

  • Published 17 Nov 2024

Comments • 51

  • @asciikat2571
    @asciikat2571 1 year ago

    Thank you, I love learning from you; you have such a beautiful way of teaching, and I learn a lot.

  • @OutpostH
    @OutpostH 1 year ago +3

    Thanks Laura for a nice tutorial. Just a couple of things I noticed with Automatic1111 on my machine. Swapping the base and refiner safetensors (txt2img > img2img) causes my PC to use all the regular RAM (32 GB) until the model has been loaded/built; after a few minutes it goes back to normal. Also, I have an old NVIDIA card and had to add 'set COMMANDLINE_ARGS=--no-half' to my webui-user.bat, otherwise the img2img process fails. I guess until there is an automatic workflow for the GUI, you could always set up batches in the txt2img process, then batch-process the ones you like in img2img. I've also been playing with InvokeAI, which already has options for chaining both safetensors.

    • @hairy7653
      @hairy7653 1 year ago

      Me too. I have 16 GB RAM (12 GB RTX 3060 GPU). The model takes 100 seconds to load and uses most/all of the RAM. If I close other browsers etc., it works without topping out.
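For context, the '--no-half' workaround described above goes in the COMMANDLINE_ARGS line of Automatic1111's webui-user.bat. A minimal sketch of that file, assuming a stock install (the optional --medvram flag is an assumption added here for GPUs short on VRAM, not something from the comments):

```bat
rem webui-user.bat (Automatic1111) -- sketch of the workaround described above
rem --no-half disables fp16 math, which some older NVIDIA cards cannot handle
rem --medvram is an optional, assumed extra for GPUs short on VRAM
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--no-half --medvram

call webui.bat
```

Note the flag takes two leading dashes; the three dashes in the original comment would not be recognized.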

  • @amj2048
    @amj2048 1 year ago +2

    There is a refiner extension now, so you can do a text-to-image render that uses the refiner model in the same pass; that skips having to go to img2img.
    I still prefer ComfyUI myself, though; it seems to give better results for me.

  • @matyugovich
    @matyugovich 1 year ago +1

    You can reduce bleeding by using BREAK before "green eyes" in your prompt
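For readers unfamiliar with the tip above: in Automatic1111, the uppercase keyword BREAK ends the current 75-token prompt chunk so the text after it is encoded separately, which can reduce attribute bleeding between nearby terms. A hypothetical sketch (the surrounding prompt text is invented for illustration):

```text
photo of a woman, long red dress, soft lighting
BREAK green eyes
```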

  •  1 year ago +1

    Great! Thank you!

  • @Laccurate9
    @Laccurate9 1 year ago +1

    So we need a lot more VRAM now

  • @dkamhaji
    @dkamhaji 1 year ago +2

    Thanks for the deep dive into SDXL 1.0. My main question is how does SDXL 1.0 base + refiner compare to other great 1.5 models like Realistic Vision, Photon, etc.? Who uses base SD 1.5 to generate anything anyway...

    • @LaCarnevali
      @LaCarnevali 1 year ago

      Good question - I think they are pretty much comparable. But what if you train a LoRA with SDXL... boom

    • @dkamhaji
      @dkamhaji 1 year ago

      @@LaCarnevali there is the SD LoRA offset, on Hugging Face, for the base model. It adds detail.

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      @@dkamhaji lovely, could you share the link please?

    • @UrfanFahada
      @UrfanFahada 1 year ago +1

      Compare SDXL with native SD 1.5 only.
      If the SD 1.5 base can make good models like RV, Photon, etc., then what can a good SDXL base make?
      Now we wait for RealisticVisionXL and PhotonXL and see what happens.

    • @tomschuelke7955
      @tomschuelke7955 1 year ago

      I am waiting for SDXL for architecture.
      All these portraits and standard graphics are way easier than a modern company entrance with, for example, big dark square tiles on the floor, 7 m stainless-steel facades, concrete plasters, and artificial light. Here MJ is still far ahead. While overall composition and light have improved, straight lines, for example, are a total mess.

  • @bemusedkidney8619
    @bemusedkidney8619 1 year ago +3

    I can only give my opinion, but honestly SDXL has been a big letdown for me; I haven't seen an image yet that has impressed me. I've gotten much better results with the Dreamshaper or Photon model and good LoRAs.

  • @perelmanych
    @perelmanych 1 year ago +3

    I think it is fairer to make 4 photos in SDXL, as Midjourney does. One photo is too random for a comparison.

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      Yeah, you are completely right! The single result with SDXL is pretty good though 🤩🤩

  • @hairy7653
    @hairy7653 1 year ago

    Why are you using clip skip 2 when trying to get photorealism? I thought 2 was for a more anime style.

  • @CELLHOTAI
    @CELLHOTAI 1 year ago

    I'm so sad because I can't use SDXL, I got an error on my Mac huhuhu

    • @LaCarnevali
      @LaCarnevali 1 year ago

      Try ComfyUI: th-cam.com/video/sIkbDhhC5iY/w-d-xo.html

  • @RSV9
    @RSV9 1 year ago

    I can't load the VAE in A1111; I get an error and it changes back to "Automatic". In ComfyUI I don't know how to be sure that I loaded the VAE from SDXL. I'm liking ComfyUI more than A1111, and it caught my attention that A1111 uses more RAM than ComfyUI. I have a graphics card with only 4 GB, but there is also the shared memory that my Windows laptop uses.
    A1111 uses up to 13.3 GB and ComfyUI only uses 4.7 GB for that example, and I don't know if it's because in A1111 I couldn't load the VAE from SDXL.
    Laura, is there a video where you explain the installation of Colab with ComfyUI? It seems to be complicated.
    As always, great video, thanks

    • @LaCarnevali
      @LaCarnevali 1 year ago

      You can watch this: th-cam.com/video/sIkbDhhC5iY/w-d-xo.html

  • @kevinehsani3358
    @kevinehsani3358 1 year ago +1

    Thank you for teaching more detailed info, which made me wonder if you would be interested in making a video (probably a series of them) explaining all these bits and pieces and the buttons and selection choices. I know it is a huge task and perhaps not achievable for all of them, but perhaps the most important ones; otherwise, people like me who do not understand the full picture have to trial-and-error forever, which would be impossible. Maybe you have done something like that for Stable Diffusion already, in which case please give me the link. Thanks.

    • @LaCarnevali
      @LaCarnevali 1 year ago

      I explain bits and bobs in my previous videos, but yeah, I could make a video where I explain the main settings :) Thanks!

  • @kevinehsani3358
    @kevinehsani3358 1 year ago

    I wonder if you ever came across this problem: using --disable-nan-check does not work, because in the end all generated images are blank. I do not get this all the time, more when it does not like my prompt or I increase the size to 1024. I only have an 8 GB GPU. "NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      Try running the webui with --precision full --no-half
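The flags suggested above go in the COMMANDLINE_ARGS line of Automatic1111's webui-user.bat; a minimal sketch (forcing full fp32 precision trades VRAM and speed for stability):

```bat
rem webui-user.bat -- sketch applying the NansException workaround above
rem --precision full + --no-half keep the whole pipeline in fp32
set COMMANDLINE_ARGS=--precision full --no-half
```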

    • @kevinehsani3358
      @kevinehsani3358 1 year ago

      @@LaCarnevali Thanks. The webui has a memory leak, and I found out that if it does not like what I say, then it picks on NaN and no-half. I wonder how much censorship there is in the prompts. I believe you are going to do one on ComfyUI; it is not very user friendly and has a learning curve. Apparently they put out "Swarm", which is supposed to be.

    • @Springheel01
      @Springheel01 1 year ago

      I have this same error every time I try to bring my own image into inpaint. If I use an image generated by SD, it works fine, but if I load a 1024x1024 image, I get that error. Adding --no-half didn't fix it.

    • @kevinehsani3358
      @kevinehsani3358 1 year ago

      @@Springheel01 I read in the docs somewhere that if I set --disable-nan-check I get a blank screen!!

  • @AWESOMEVIDESHEE
    @AWESOMEVIDESHEE 1 year ago

    awesome pictures

  • @jamesbriggs
    @jamesbriggs 1 year ago

    great video :)

  • @fahdotaibii544
    @fahdotaibii544 1 year ago

    WHO'S MAKING YOU CRY LAURA , I WILL KICK THEIR ASS!!!!

  • @amigoface
    @amigoface 1 year ago

    excellent video as usual

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      Sure, Windows, RTX 3090

    • @amigoface
      @amigoface 1 year ago

      @@LaCarnevali cool
      I have a 4070, so it should be OK for this, right?

    • @Eleganttf2
      @Eleganttf2 1 year ago +1

      @@amigoface yes, it's fine since the 4070 has 12 GB of VRAM, although the generation time will not be as fast as Laura's

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      @@amigoface yup!

  • @chtibouda
    @chtibouda 1 year ago

    I give a thumbs up because you "refined" your hairstyle with pigtails ;). "Pretty Spanish youtuber girl, high quality tutorials," neg: blond with a e

  • @Ibian666
    @Ibian666 1 year ago

    But does it do anime?

    • @LaCarnevali
      @LaCarnevali 1 year ago +1

      Yes, but you need to use an anime-trained model from CivitAI / Hugging Face

  • @GumOnTheWall
    @GumOnTheWall 1 year ago

    I know this comment has nothing to do with the video, but I've been trying to figure out where you're from based on your accent, and my best guess is France. Am I right?

    • @perelmanych
      @perelmanych 1 year ago +2

      Bro, I don't know why you need this info, but judging by the accent she is obviously from Italy.

    • @GumOnTheWall
      @GumOnTheWall 1 year ago

      @@perelmanych I know nothing about accents lmao. I'm so curious because I just couldn't tell; sometimes it even sounds British.

    • @GumOnTheWall
      @GumOnTheWall 1 year ago

      @@perelmanych have you ever been curious before?

    • @LaCarnevali
      @LaCarnevali 1 year ago +2

      Funny conversation, I am Italian :D

    • @cyrille8693
      @cyrille8693 1 year ago

      Hum, yeah, her accent and her name sound very French xD

  • @ATLJB86
    @ATLJB86 1 year ago

    It's too early... think about how 1.5 images looked without trained models... completely disgusting! It is too early in XL to see its potential