Relight anything with IC-Light in Stable Diffusion - SD Experimental

  • Published on Sep 23, 2024

Comments • 53

  • @UnclePapi_2024
    @UnclePapi_2024 4 months ago

    Andrea, I really enjoyed your live stream and your interaction with those of us who were with you. However, this follow-up on the node, the technical aspects, and your insight as a photographer is outstanding. Excellent work!

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Thank you! I’m glad to be of help!

  • @xxab-yg5zs
    @xxab-yg5zs 4 months ago

    Those videos are great, please keep them coming. I'm totally new to SD and Comfy; you actually make me believe it can be used in a professional, productive way.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      It can definitely be used as a professional tool; it all depends on the how!

  • @JohanAlfort
    @JohanAlfort 4 months ago

    Nice insight into this new workflow, super helpful as usual :) This opens up a whole lot of possibilities! Thanks and keep it up.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Yeah, it does! I honestly believe that this is insane for product photography.

  • @pranavahuja1796
    @pranavahuja1796 4 months ago +1

    Things are getting so exciting🔥

  • @uzouzoigwe
    @uzouzoigwe 4 months ago

    Well explained and super useful for image composition. I expect a small hurdle might come up with reflective/shiny objects...

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I'll be honest, I haven't tested it yet with transparent and reflective surfaces; now I'm curious about it. But I expect it to have some issues with them for sure.

  • @aynrandom3004
    @aynrandom3004 4 months ago +1

    Thank you for explaining the actual workflow and the function of every node. I also like the mask editor trick. Just wondering why some of my images also change after the lighting is applied? Sometimes there are minimal changes to the eyes, face, etc.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +2

      Thanks for the kind words. To make it easier to understand: the main issue with prompt adherence lies in the CFG value. Usually, you'd want a higher CFG value in order to get better prompt adherence. Here, instead of words in the prompt, we have an image being "transposed" via what I think is an instruct pix2pix process on top of the light latent.
      Now, I'm not an expert on instruct pix2pix workflows, since it came out at a moment when I was tinkering with other AI stuff, but from my (limited) testing, it seems like the lower the CFG, the more closely the resulting image adheres to the starting image. In some cases, as we'll see today on my livestream, a CFG around 1.2-1.5 is needed to preserve the original colors and details (see the sketch after this thread).

    • @aynrandom3004
      @aynrandom3004 4 months ago

      @risunobushi_ai thank you! Lowering the CFG value worked. :D
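
A minimal sketch of the CFG-versus-adherence behavior described in this thread, using the generic diffusers InstructPix2Pix pipeline rather than the IC-Light ComfyUI nodes themselves; the model ID, file name, and parameter values are illustrative assumptions.

```python
# Minimal sketch: how a lower guidance_scale (CFG) keeps an instruct
# pix2pix-style edit closer to the source image. This is the generic
# diffusers InstructPix2Pix pipeline, NOT the IC-Light ComfyUI workflow,
# and all values are examples only.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = load_image("product.png")  # hypothetical input image

# Higher CFG: stronger prompt adherence, but more drift from the source.
drifted = pipe(
    "soft warm light coming from the left",
    image=source,
    guidance_scale=7.5,
    image_guidance_scale=1.5,
).images[0]

# Lower CFG (around 1.2-1.5): the result stays much closer to the original
# colors and details, matching the behavior described in the reply above.
faithful = pipe(
    "soft warm light coming from the left",
    image=source,
    guidance_scale=1.3,
    image_guidance_scale=1.5,
).images[0]

faithful.save("relit_low_cfg.png")
```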

  • @zeeyannosse
    @zeeyannosse 4 months ago

    Bravo! Thanks for sharing! Super interesting development!

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Thanks, glad you liked it!

  • @KeenHendrikse
    @KeenHendrikse 2 months ago

    Thank you for this video, it was really helpful. There are a few undefined nodes in the workflow, do you have any advice as to how I can fix this?

    • @risunobushi_ai
      @risunobushi_ai 2 months ago

      Hi! Did you try installing the missing custom nodes via the manager?

  • @houseofcontent3020
    @houseofcontent3020 4 months ago

    This is a great video! Thanks for sharing the info.

  • @StringerBell
    @StringerBell 4 months ago +4

    Dude, I love your videos but this ultra-closeup shot is super uncomfortable to watch. It's like you're entering my personal space :D It's weird and uncomfortable, but not in a good way. Don't you have a wider lens than 50mm?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +2

      The issue is that I don't have any more space behind the camera to compose a different shot, and if I use a wider angle some parts of the room I don't want to share get into view. I'll think of something for the next ones!

  • @dreaminspirer
    @dreaminspirer 4 months ago

    I would SEG her out from the close-up, then draft-composite her onto the BG. This probably reduces the color cast :)

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Yup, that's what I would do too. And maybe use a BW light map based on the background, remapped to low-ish white values, as a light source (see the sketch after this thread).
      I've been testing a few different ways to solve the background-as-a-light-source issue, and what I've found so far is that the base, non-background solution is so good that the background option is almost not needed at all.
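
A rough illustration of the "BW light map remapped to low-ish white values" idea mentioned above, using Pillow and NumPy; the file names and the 0.6 ceiling are assumptions for demonstration, not values from the video.

```python
# Minimal sketch: turn a background image into a grayscale light map whose
# brightest values are capped well below pure white, so it can act as a soft
# light source for relighting. File names and the 0.6 ceiling are assumptions.
import numpy as np
from PIL import Image

bg = Image.open("background.png").convert("L")   # hypothetical background image
lum = np.asarray(bg, dtype=np.float32) / 255.0   # normalize to 0..1

max_white = 0.6                                  # low-ish white ceiling
light_map = lum * max_white                      # remap 0..1 -> 0..0.6

Image.fromarray((light_map * 255).astype(np.uint8)).save("light_map.png")
```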

  • @user-de8nc3hx4u
    @user-de8nc3hx4u 19 days ago

    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 90, 160] to have 4 channels, but got 8 channels instead
    What's going on? It worked normally before.

    • @risunobushi_ai
      @risunobushi_ai 19 days ago +1

      Update kijai's IC-Light repo; it should solve the issue (it's most probably because you updated Comfy). See the sketch below for what that error means.
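
For context on that channel error: SD 1.5's first UNet convolution takes 4 latent channels, while IC-Light concatenates a conditioning latent and feeds it 8, so the conv has to be patched to accept the wider input. The sketch below is a generic PyTorch illustration of that mismatch and the usual zero-padded fix; it is not kijai's actual node code.

```python
# Generic illustration of the "expected input ... to have 4 channels, but got
# 8 channels" error and the usual patch. Not kijai's actual implementation.
import torch
import torch.nn as nn

old_conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)  # stock SD 1.5 conv_in
latent = torch.randn(2, 8, 90, 160)                     # 8-channel IC-Light input

try:
    old_conv(latent)
except RuntimeError as err:
    print(err)  # the channel-mismatch error quoted in the comment above

# Typical fix: widen conv_in to 8 channels, copy the original weights into the
# first 4 input channels, and zero-initialize the extra ones.
new_conv = nn.Conv2d(8, 320, kernel_size=3, padding=1)
with torch.no_grad():
    new_conv.weight.zero_()
    new_conv.weight[:, :4] = old_conv.weight
    new_conv.bias.copy_(old_conv.bias)

out = new_conv(latent)
print(out.shape)  # torch.Size([2, 320, 90, 160])
```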

  • @Architectureg
    @Architectureg 3 months ago

    How can I make sure the input picture doesn't change in the output? It seems to change. How can I keep it exactly the same and just manipulate the light instead?

    • @risunobushi_ai
      @risunobushi_ai 3 months ago

      My latest video is about exactly that: I added both a way to preserve details through frequency separation and three ways to color match.

  • @PierreGrenet-ty4tc
    @PierreGrenet-ty4tc 4 months ago

    This is a great tutorial, thank you! ...but how do I use IC-Light with the SD web UI? I have just installed it but it doesn't appear anywhere 😒😒 Could you help?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Uh, I was sure there was an automatic1111 plugin already released; I must have misread the documentation here: github.com/lllyasviel/IC-Light
      Have you tried the gradio implementation?

  • @daryladhityahenry
    @daryladhityahenry 1 month ago

    Hi! Can you tell me how you keep the product the same? I mean, I see the bag in the last couple of minutes, and you didn't use anything like ControlNet etc., but the product is the same before and after lighting... How? @_@... Thank you

    • @risunobushi_ai
      @risunobushi_ai 1 month ago

      This is how IC-Light works: at its core, it's an instruct pix2pix pipeline, so the subject is always going to stay the same - although in more recent videos I solve issues like color shifting, detail preservation, etc. by using stuff like ControlNets, color matching nodes, and so on (a color-matching sketch follows this thread).

    • @daryladhityahenry
      @daryladhityahenry 1 month ago

      @risunobushi_ai That's what confuses me... since I do that, and the product changes... Does it depend on our checkpoint model too?
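
Since color-matching nodes come up in this thread, below is a minimal sketch of one common approach, Reinhard-style mean/std transfer in LAB space, using OpenCV. It is a generic stand-in, not the specific ComfyUI node from the later videos, and the file names are assumptions.

```python
# Minimal sketch of mean/std color matching in LAB space: pull the relit
# image's color statistics back toward the original product shot. File names
# are illustrative; this is not the ComfyUI color-matching node itself.
import cv2
import numpy as np

def match_color(source_path: str, target_path: str, out_path: str) -> None:
    src = cv2.imread(source_path)   # original product shot
    tgt = cv2.imread(target_path)   # relit output to correct

    src_lab = cv2.cvtColor(src, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt_lab = cv2.cvtColor(tgt, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Match each LAB channel's mean and standard deviation to the source.
    for c in range(3):
        t_mean, t_std = tgt_lab[..., c].mean(), tgt_lab[..., c].std() + 1e-6
        s_mean, s_std = src_lab[..., c].mean(), src_lab[..., c].std()
        tgt_lab[..., c] = (tgt_lab[..., c] - t_mean) / t_std * s_std + s_mean

    matched = cv2.cvtColor(
        np.clip(tgt_lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR
    )
    cv2.imwrite(out_path, matched)

match_color("original.png", "relit.png", "relit_matched.png")
```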

  • @mohammednasr7422
    @mohammednasr7422 4 months ago

    Hi dear Andrea Baioni,
    I am very interested in mastering ComfyUI and was wondering if you could recommend any courses or resources for learning it. I would be very grateful for your advice.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Hey there! I'm not aware of paid ComfyUI courses (and I honestly wouldn't pay for them, since most, if not all, of the information needed is freely available either here or on GitHub).
      If you want to start from the basics, you can start either here (my first video, about installing comfyUI and running your first generations): th-cam.com/video/CD1YLMInFdc/w-d-xo.html
      or look up a multi-video basic course, like this playlist from Olivio: th-cam.com/video/LNOlk8oz1nY/w-d-xo.html

  • @antronero5970
    @antronero5970 4 months ago

    Number one

  • @twilightfilms9436
    @twilightfilms9436 4 months ago

    Does it work with batch sequencing?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I haven't tested it with batch sequencing, but I don't see why it wouldn't work in the version that doesn't require custom masks applied on the preview bridge nodes and instead relies on custom maps from load image nodes.
      I've got a new version coming on Monday that preserves details as well, and that can use automated masks from the SAM group; you can find the updated workflow on my openart profile in the meantime.

  • @cycoboodah
    @cycoboodah 4 months ago

    The product I'm relighting changes drastically. It basically keeps the shape but introduces too much latent noise. I'm using your workflow without touching anything but I'm getting very different results.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      That's weird; in my testing I sometimes get some color shift, but most of the time the product remains the same. Do you mind sending me the product shot via email at andrea@andreabaioni.com? I can run some tests on it and check what's wrong.
      If you don't want to or can't share the product, you could give me a description and I could try generating something similar, or look up something similar on the web that already exists.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Leaving this comment in case anyone else has issues: I tested their images and it works on my end. It just needed some work on the input values, mainly CFG and multiplier. In their setup, for example, a lower CFG (1.2-ish) was needed in order to preserve the colors of the source product.

  • @JavierCamacho
    @JavierCamacho 4 months ago

    Sorry to bother you, I'm stuck in ComfyUI. I need to add AI people to my real images. I have a place where I need to add people to make it look like there's someone there and not an empty place. I've looked around but came up short. Can you point me in the right direction?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      Hey! You might be interested in something like this: www.reddit.com/r/comfyui/comments/1bxos86/genfill_generative_fill_in_comfy_updated/

    • @JavierCamacho
      @JavierCamacho 4 months ago

      @risunobushi_ai I'll give it a try. Thanks

    • @JavierCamacho
      @JavierCamacho 4 months ago

      @risunobushi_ai So I tried running it but I have no idea what I'm supposed to do. Thanks anyway.

  • @syducchannel9451
    @syducchannel9451 4 months ago

    Can you guide me on how to use IC-Light in Google Colab?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I'm sorry, I'm not well versed in Google Colab.

  • @yangchen-zd9zl
    @yangchen-zd9zl 4 months ago

    Hello, I am a ComfyUI beginner. When I used your workflow, I found that the light and shadow cannot be previewed in real time, and when the light and shadow are regenerated onto a previously generated photo, generation becomes very slow and the system reports an error: WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Sorry, but I'll have to ask a few questions. What OS are you on? Are you using an SD 1.5 model or an SDXL model? Are you using the right IC-Light model for the scene you're trying to replicate (fbc for background relight, fc for mask-based relight)?

    • @yangchen-zd9zl
      @yangchen-zd9zl 4 months ago

      @risunobushi_ai Sorry, I found the key to the problem. The first issue was that I did not watch the video tutorial carefully and skipped downloading fbc. The second was an image size problem. After downloading fbc and adjusting the image size (512 × 512 pixels), generation is much more efficient. Thank you very much for this video. In addition, I would like to ask: if I want to add some other products to this workflow, that is, product + background for light source fusion, what should I do?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      I cover exactly that (and more) in my latest live stream from yesterday!
      I demonstrate how to generate an object (but you can just use a load image node with an already existing picture), use Segment Anything to isolate it, generate a new background, merge the two together, and relight with a mask so that it looks both more consistent and better lit than just using the optional background option in the original workflow.
      For now, you’d need to follow the process in the livestream to achieve it. In a couple of hours I will update the video description with the new workflow, so you can just import it.

    • @yangchen-zd9zl
      @yangchen-zd9zl 4 months ago

      @risunobushi_ai Thank you very much for your reply. I watched the live broadcast in general and learned how to blend existing images with the background. By the way, in the video I saw that the pictures you generated were very high-definition and close to reality, but when I generate them, I find that the characters have some deformities and the faces become weird. I used the Photon model.

  • @houseofcontent3020
    @houseofcontent3020 4 months ago

    I'm trying to work with the background and foreground image mix workflow you shared and I keep getting errors, even though I carefully followed your video step by step. Wondering if there's a way to chat with you and ask a few questions. Would really appreciate it :) Are you on Discord?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      I'm sorry, but I don't usually do one-on-ones. The only error screens I've seen in testing are due to mismatched models. Are you using a 1.5 model with the correct IC-Light model? i.e.: FC for no background, FBC for background?

    • @houseofcontent3020
      @houseofcontent3020 4 months ago +1

      That was the problem. Wrong model~
      Thank you :) @risunobushi_ai