Relight and Preserve any detail with Stable Diffusion

  • Published Feb 1, 2025

Comments • 192

  • @risunobushi_ai  8 months ago +6

    go break it and report back how it works for you, chatterinos: openart.ai/workflows/risunobushi/product-photography-relight-v3---with-internal-frequency-separation-for-keeping-details/YrTJ0JTwCX2S0btjFeEN

    • @astrophilecynic9990  23 days ago

      Since my computer can't run the Segment Anything models, how can I remove their nodes without affecting the rest? I want to manually upload the masked object instead of relying on SAM.

    • @astrophilecynic9990  23 days ago

      This way, I can run the entire workflow without any problem. (I apologize for this one since I'm still new to all of this and have no idea how to do it.)

    • @risunobushi_ai  23 days ago +1

      ​@@astrophilecynic9990 no worries! if you don't want to break anything, you can right click on all the segment anything nodes and select bypass. this will let the image that comes into the segment anything node just pass through it. now, a mask won't be generated, but you can look at the color coded links that come out of the segment anything node - those are where the mask is used.
      in order to upload and use a custom mask, you need to:
      - create a load image node, and put your mask in there
      - from the image output, drag and drop it, and search for "convert image to mask"
      - select red as the channel
      - now connect the green mask output from the convert image to mask node to all the nodes that the segment anything mask output was connected to
      it's a bit of a chore, but it's simple
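
      For reference, a minimal sketch (assuming numpy and Pillow; an illustration of the idea, not the node's actual source) of what the red-channel "convert image to mask" step does:

      import numpy as np
      from PIL import Image

      def image_to_mask(path):
          # load an RGB mask image and return its red channel as a 0-1 float mask
          rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
          return rgb[..., 0]  # channel 0 = red; white (masked) areas become 1.0

      mask = image_to_mask("my_mask.png")  # hypothetical file name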

    • @astrophilecynic9990  22 days ago

      @@risunobushi_ai I apologize for any inconvenience. I've already tried the method above, but the resulting images are way off compared to their reference.

    • @astrophilecynic9990  22 days ago

      It seems that only the outline is being preserved and the actual objects are being changed.

  • @tsentsura9279  5 months ago +2

    You covered all the details, grats! I love it when people make such dedicated tutorials! Much appreciated!

  • @muradmammad  3 months ago +4

    Hey man! Thank you for that. I have a question. I am pretty new to Stable Diffusion. I use 3D software for product photography, but Midjourney is more comfortable for me for generating backgrounds, and I like the results. I want to add my product to my Midjourney scene. I can do that in Photoshop, but for relighting the product I think this is just an amazing shortcut. So the question is: is it possible to add my own background source with the copy-pasted product photo and use the workflow just for relighting? If it is possible, can you please explain how I can do that? Thanks a lot!

  • @thangtranmanh3707  3 months ago +1

    Incredible, you deserve to have more subscribers. I was looking for this for a long time.

    • @jorgemiranda2613  2 months ago

      100%. I don't know why I didn't find this until now.

  • @OriBengal  8 months ago +1

    Wow- That's of massive value. Thank you for solving this and sharing and explaining. This is one of the most practical things I've seen so far.

    • @risunobushi_ai  8 months ago

      Thanks! Honestly I’m astonished at how useful it ended up being.

  • @caseymathieson7023  7 months ago +2

    "I hope you break things bc I would like to hear some feedback on it" - this got me. *Subscribed*

    • @risunobushi_ai  7 months ago

      Ahah thank you! I really appreciate it when people give me well-thought-out feedback. Outside testing is key to delivering good results for everyone out there!

    • @digitaldepictionmedia  1 month ago

      Same here. I think this statement here is the power of a community.

  • @wascopitch  8 months ago +1

    OMG Andrea, this is amazing! Thanks a ton for sharing. Can't wait to give this workflow a go. Keep being awesome!

    • @risunobushi_ai  8 months ago

      Thank you! I'd love some feedback on it, have fun!

  • @aminebenboubker  7 months ago +2

    Thank you for such an advanced and powerful workflow. I'm encountering a problem where all generated images have a yellowish tint, even when selecting a white light, for example. Am I doing something wrong?

    • @risunobushi_ai  7 months ago

      Hi! No, you're not doing anything wrong, we solve the color shifting issue here: th-cam.com/video/_1YfjczBuxQ/w-d-xo.html and here: th-cam.com/video/AKNzuHnhObk/w-d-xo.html

  • @binwang9086  6 months ago

    Thanks!

    • @risunobushi_ai  6 months ago

      Thank you for the donation!

  • @vincema4018  8 months ago +1

    Amazing work!!! I've been very into the IC-Light stuff recently and was just trying to upscale the image from the IC-Light workflow. Will try your workflow and let you know the outcome soon. Thanks again Andrea.

    • @risunobushi_ai  8 months ago +1

      Thanks! If you add an upscaler pass, remember to upscale the high frequency mask you're using as well, be it the one from SAM or the one you're drawing yourself, otherwise it won't work anymore because of a size mismatch between the mask and the high frequency layers.
      As I say in the video, a good spot to place an upscale group would be in between the relight group and the preserve details group.
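
      For reference, a minimal Pillow sketch (a hypothetical helper, not part of the workflow) of keeping image and mask sizes in sync during an upscale pass:

      from PIL import Image

      def upscale_pair(image, mask, factor=2):
          # resize the image and its mask by the same factor so sizes keep matching downstream
          size = (image.width * factor, image.height * factor)
          up_image = image.resize(size, Image.LANCZOS)  # detail-friendly resampling
          up_mask = mask.resize(size, Image.NEAREST)    # keeps mask edges hard
          return up_image, up_mask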

  • @DJVARAO  8 months ago

    Awesome! As a photographer I think this is the best AI processing so far.

    • @risunobushi_ai  8 months ago

      yeah, it feels like IC-Light really takes the whole space a lot closer to being a sort of "exact science" rather than being way too random

  • @raymondchiu2900  3 months ago +1

    Incredible work!! I'm encountering problems with the "308 Image Levels Adjustment" node. How do I fix it?

    • @kyrillkazak  3 months ago

      Try this. Someone posted it on the openart feed, and it worked for me.
      change values
      black_level = 80.0
      mid_level = 130.0
      white_level = 180.0

  • @xxab-yg5zs  8 months ago +1

    Mind-blowing! As a product photographer, I'm more excited than terrified. AI is just another tool, like any other. You still need to learn how to use it, and so far, it is complicated enough to require a lot of effort to create quality product images.
    I wonder, is there a way to generate 16-bit TIFF files that can be edited in Photoshop without introducing image quality degradation? Frequency separation sometimes causes banding, probably because it is done in 8-bit.

    • @risunobushi_ai  8 months ago +1

      That's the way I see it too, and why I started getting interested in it a long while ago.
      Unfortunately there's no way to generate TIFF files (as far as I know, but I'm 99% sure). JPEGs and PNGs are all we can work with as of now. The only way to alleviate banding issues or outlines (to a degree, and it's more of a bandaid than a solution) is to generate files at a higher resolution; this way the affected pixels are, as a percentage, fewer relative to the total number of pixels in the image.
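
      For what it's worth, a minimal sketch (assuming numpy and scipy; an illustration, not what the workflow's nodes actually do) of a frequency split done in float32, which sidesteps the 8-bit quantization where banding usually creeps in:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def split_frequencies(img, sigma=5.0):
          # img: float32 array in [0, 1], shape (H, W, 3)
          low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blurred base: color and light
          high = img - low                                     # residual: fine detail
          return low, high

      def recombine(low, high):
          # quantize to 8-bit only at the very end, after the layers are summed
          return np.clip(low + high, 0.0, 1.0)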

  • @Frankie-t1z  5 months ago +1

    Thank you for sharing, this is great. But I have a question: why do all the processed photos come out in a dark style, and is there somewhere to adjust that?

  • @sab10067  8 months ago

    Nice workflow! As other people have said, for certain objects it's a bit tough to keep the original color of the object.
    I added a Perturbed-Attention Guidance node between the first model loader and the KSampler, which helps create more coherent backgrounds.
    Thank you for making the tutorial video as well!

    • @risunobushi_ai  8 months ago

      Thanks! Yeah, I understand now that some people prefer having a complete workflow rather than a barebones one. I'll create two versions going forward: one barebones for further customization, and one with more stuff, like PAG, IPAdapters, color match, or whichever group might be useful.

  • @M4rt1nX  8 months ago +1

    Amazing results. The beauty of open source is finding solutions together.
    Can the detail-preserving part be used on the workflows for clothing? It might be a challenge with the posing, but I just thought about it.

    • @risunobushi_ai  8 months ago +2

      I've tested it on underwear only right now (I'm working with a client who produces underwear, so that's what I had laying around) and it works well, even with harsh relights, such as neon strips. I haven't tested it with other types of clothing, but I might do that tomorrow when I have more time.
      The only thing it struggles with right now is faces in full body shots, because the high frequency layer catches a ton of data there, but I think it just might need some tinkering, nothing major.

    • @pranavahuja1796  8 months ago

      I have tried full body shots, or in fact half body for t-shirts; my experience was not that good (yet).

    • @risunobushi_ai  8 months ago +1

      yeah, it needs to be fine tuned for people, that's why I released it for product shots only

  • @xdevx9623  5 months ago +3

    Hey Andrea, I am facing an error at the final node of the workflow and I can't find a fix:
    Error occurred when executing Image Levels Adjustment:
    math domain error
    Can you please provide a fix, as I really want to use your workflow.

    • @risunobushi_ai  5 months ago +1

      Hi! I've heard about this error a few times now; it's possible that the level adjustment node got updated and my values don't match anymore.
      Try using values between 0 and 255. I'll update the json when I have the time.

    • @daechipapa  4 months ago

      @@risunobushi_ai Wow, you are the real one! You made me SUBSCRIBE.
      But I have the same error:
      Image Levels Adjustment:
      math domain error
      Can you please provide a fix and give me a reply please!

    • @xdevx9623  4 months ago

      @@risunobushi_ai Hey, so I did try to change the values to get rid of the issue, but the error isn't going away. Can you please help and provide us with the new values 🙇‍♂

    • @risunobushi_ai  4 months ago +2

      For everyone having this issue:
      - the level node was probably updated, and the new values are clamped between 0 and 255
      - you can try changing the values to reflect the new absolute values (0 = black, 255 = white)
      - if it doesn't work, you can swap the level node for any other levels node (there are a few)
      - if you are not comfortable doing that yourself, you'll have to wait for me to update the json, but because of work I won't be able to until late next week.
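
      For context, a minimal sketch of why those values matter, based on the gamma formula quoted in a traceback further down this page (my reading, not the node's official docs): the node only has a valid gamma when black < mid < white, all within 0-255.

      import math

      def levels_gamma(black_level, mid_level, white_level):
          # outside this range the log() calls below raise "math domain error"
          if not (0.0 <= black_level < mid_level < white_level <= 255.0):
              raise ValueError("levels must satisfy 0 <= black < mid < white <= 255")
          ratio = (mid_level - black_level) / (white_level - black_level)
          return math.log(0.5) / math.log(ratio)  # 0 < ratio < 1, so log(ratio) is non-zero

      print(levels_gamma(80.0, 130.0, 180.0))  # 1.0 with the values suggested in other replies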

    • @markdkberry  4 months ago

      @@risunobushi_ai Please let us know when you do. I am just getting a crash, or a whitewashed image if I change the settings.

  • @EdwardKing-nu7ug  8 months ago +1

    Hello, why does the color of an object change after I turn on the lights? For example, the bottle was originally green, but it turned yellow after the lights were turned on. Which parameter should I adjust to maintain the original color?

    • @risunobushi_ai  8 months ago

      we solve that issue in this update: th-cam.com/video/_1YfjczBuxQ/w-d-xo.html

    • @EdwardKing-nu7ug  8 months ago

      Thank you so much, I am your ❤❤❤ big fan 🎉

  • @tokerbabysuperfly  1 month ago

    Thanks for the video! Please tell me, is it possible to preserve both the foreground and the background, and change the lighting only? I need to keep the initial image.

  • @digitaldepictionmedia  1 month ago

    I am definitely testing this tomorrow. Just one question: do you think this will work on the intricate details and designs of jewellery? That is something I am looking forward to, as I have a jewellery business as well.

    • @risunobushi_ai  1 month ago

      there's a more recent version that should work with jewelry, as long as you don't want refraction to go through the jewel itself: th-cam.com/video/GsJaqesboTo/w-d-xo.html

  • @Joerilla369  6 months ago

    First of all, thank you for letting us participate in this mindblowing journey!
    I've managed to get the whole ComfyUI setup with the manager running. It took me a while since I have no experience in this field.
    My only question: you mentioned that to do an upscale you'd need to include the mask and upscale it too?
    Would there be a way to include this upscaling process within the workflow, or has this already been done and I don't see it?

    • @Joerilla369  6 months ago

      Ah, and before I forget: THANK YOU!

    • @risunobushi_ai  6 months ago +1

      Ciao! Thank you, I've ended up setting up an upscaler (by no means the best upscaler out there, it was just something I had laying around from a previous test) here: th-cam.com/video/_1YfjczBuxQ/w-d-xo.html
      You can check it out and figure out how it works in terms of upscaling the masks, and link up any other upscaler you like, as long as it upscales the same things.
      Also, I ended up going through more iterations of this workflow (the one I linked was version 3, I think?), doing color matching and detail preservation in more recent ones, so you can check those out as well!

    • @Joerilla369  6 months ago

      @@risunobushi_ai alright! Thank you very much for your reply. I'll mess around and try to figure out stuff step by step :)! Cheers & thank you!

  • @TheSwann13  5 months ago

    Amazing work, thank you!! Can you upload a background picture instead of using a prompt to create it?

  • @HooIsit  8 months ago +2

    You are the best!
    Don't stop😊

  • @johanmrch9316  16 days ago

    Would it be possible to integrate a function where you can give the workflow a reference image to guide the background generator in the direction you want it to go? :)

  • @jorgemiranda2613  2 months ago

    Great content!! Thanks for sharing this!

  • @QuickQuizQQ  2 months ago

    Amazing work!!! Quick question please:
    I got this error: Image Levels Adjustment
    math domain error
    Then I found that "Be sure your black level isn't higher than mid level, and vice versa. Black must be lower than mid, and mid lower than high."
    But when I do that, the colors are incorrect. Any advice?

  • @ChakChanChak  8 months ago

    This is so good! Makes me wanna download the video to keep it forever

    • @ChakChanChak  8 months ago

      @@robertdouble559 Thx mate, but I only use laserdiscs.

  • @tombkn3529  4 months ago

    Great video, thank you!! My product image is very stretched because of my phone format. How do I put it in as square format?

  • @Scerritos  8 months ago

    Awesome video. Thanks for sharing! Also looking forward to the people workflow.

    • @risunobushi_ai  8 months ago +1

      I'll try to get it working soon, but I'm currently swamped with deadlines from my day job, so I might get it done for next week's video.

  • @egarywi1  7 months ago +1

    This is great for a product photographer like myself. I got v3 going, however v4 keeps breaking Comfy, so I want to concentrate on v3 to see how it performs. I am using a bottle of wine, however the text on the label is not preserved well enough. Is there a way to give it more importance?

    • @risunobushi_ai  7 months ago

      You can try using my set of Frequency Separation nodes, by swapping in the nodes that are responsible for it in either V3 or V4. You can find them in this video: th-cam.com/video/AKNzuHnhObk/w-d-xo.html

  • @8561  8 months ago

    Great workflow! Also I can imagine hooking up an IPAdapter for the BG generation to keep consistency between different angled product shots!

    • @risunobushi_ai  8 months ago

      Yeah, this is a "barebones" workflow; it can be expanded with anything one might need. I usually publish barebones workflows rather than fully customized ones because it's easier to make them your own (or at least it is for me, I don't like having useless stuff in other people's workflows).

    • @8561  8 months ago +1

      @@risunobushi_ai Agreed! Cheers

  • @蔡福鹏  7 months ago

    Thank you very much!
    But I found that some products get great results through this workflow, while some niche products have not come out very well after relighting. Is this because of the base model's training? Some products have a very small training volume.

    • @risunobushi_ai  7 months ago

      Hi! No, the way IC-Light works is through an instruct-pix2pix process, so there shouldn't be any issues with object permanence at very low CFG (between 1.1 and 2), as it forces the original image on top of the light mask.
      Btw this workflow is one of my first attempts, these are my latest ones:
      Colors and details preservation:
      th-cam.com/video/_1YfjczBuxQ/w-d-xo.html
      People (and products) relighting:
      th-cam.com/video/AKNzuHnhObk/w-d-xo.htmlsi=gfYJmWLIFK7HrhL7

  • @FinnNegrello  8 months ago +2

    Love all of your content. Thank you.

  • @Jacck  6 months ago

    WOW! This is amazing! Would it be possible to use image templates as inputs for the background generation?

    • @risunobushi_ai  6 months ago

      already did! th-cam.com/video/GsJaqesboTo/w-d-xo.html

    • @Jacck  6 months ago

      @@risunobushi_ai Amazing bro, and that workflow will preserve label details and text on products as well?

    • @risunobushi_ai  6 months ago

      Yep, it's just a more up-to-date version (this was the first iteration of it; my latest video is my latest version, out of 4? 5? I think).

  • @ImagindeDash  7 months ago +1

    Nice tutorial, but I got an error on the GroundingDinoSAMSegment node, and I don't know how to install it with docker. Could you help with that?

    • @risunobushi_ai  7 months ago +1

      What's the issue you're encountering? Do you get anything in the logs? If SAM doesn't work for you, you could use a remBG node instead

    • @ImagindeDash  7 months ago

      @@risunobushi_ai I got this error: Failed to validate prompt for output 269:
      * GroundingDinoSAMSegment (segment anything) 204. And I don't know how to fix it.

  • @jeremysomers2239  8 months ago

    @andrea this is so fantastic, thank you for the breakdown! Do you think there's a way to BRING a background plate in instead of generating one???

    • @risunobushi_ai  8 months ago +1

      as long as it has the same dimensions as the relighting mask and subject, and has the same perspective as the subject, you can use custom backgrounds, sure!

  • @anastasiiadereshivska4090  8 months ago

    Wow! This is fantastic!
    I ran into a problem where the Load And Apply IC-Light node does not find the loaded models. Does anyone know how to solve this?
    * LoadAndApplyICLightUnet 37:
    - Value not in list: model_path: 'iclight_sd15_fc.safetensors' not in []

    • @risunobushi_ai  8 months ago

      Did you place the model in the Unet folder?

    • @anastasiiadereshivska4090  8 months ago

      @@risunobushi_ai It works! Thank you!

  • @EugeniaFirs  3 months ago

    This is amazing, thank you!

  • @arn0ldas88  5 months ago

    I'm new to this and have issues with installing the ControlNet. Maybe there were some hints in your past videos? Which .pth files do you use, and why?

  • @ImAlecPonce  7 months ago

    This is really cool, but it still changes my colors.... It seems to work better (not perfectly) pulling the blended image into the second frequency separation. At least the scene gets re-lit. Is there a way to use IC-Light and then just pull the colors over with some transparency value so they don't get washed out?

    • @risunobushi_ai  7 months ago +1

      Yep, we solved the color matching here: th-cam.com/video/_1YfjczBuxQ/w-d-xo.html
      and on monday I'll release a workflow for relighting people while preserving details and colors too.
      I also developed custom nodes for frequency separation, but I haven't had the chance to update the workflow yet. They'll be in Monday's video tho.

    • @ImAlecPonce  7 months ago

      What I usually do is use IC-light and the luminosity masks in Krita

    • @risunobushi_ai  7 months ago

      I would do it outside of ComfyUI too, but the viewers wanted an all-in-one workflow.

    • @ImAlecPonce  7 months ago

      @@risunobushi_ai wow!! Thanks!

  • @AITransformers  4 months ago

    This is an awesome workflow. It was working fine. Sadly the latest updates to either ComfyUI (v0.2.2-22-g81778a7) or the WAS node suite give a "ValueError: math domain error" at the "Image Levels Adjustment" node. Any solution?

    • @AITransformers  4 months ago

      Fixed. Gamma is a bit offset.

    • @IlijaDimitrijevic-g5s  4 months ago

      @@AITransformers Did you find right values for black_level, mid_level and white_level?

    • @risunobushi_ai  4 months ago

      Yeah, the level adjustment node was updated. I've received a ton of complaints and requests for help, but I'm currently unable to update the json because of work :/ if you have found the correct values I'll make sure to post a pinned comment.

    • @kyrillkazak  3 months ago

      Try this. Someone posted it on the openart feed, and it worked for me.
      change values
      black_level = 80.0
      mid_level = 130.0
      white_level = 180.0

  • @pixelcounter506  8 months ago +1

    Great work... great explanation, thank you very much, Andrea!

  • @markdkberry  4 months ago

    Using ComfyUI, I get the error: "Error occurred when executing MaskFromColor+: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3." I fixed it by bypassing that node, but then ran into a problem with Image Resize, apparently due to an update in ComfyUI. Switching the method of the Image Resize nodes to keep_proportion solved it and I could get past this.

    • @khizarTrabzon  1 month ago

      Thanks man, I was stuck searching for this everywhere, and here you gave the solution. Thanks again.

  • @emmaqias7176  7 months ago

    I've got this error:
    Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it

    • @risunobushi_ai  7 months ago

      You're most probably using the FBC IC-Light model instead of the FC model, or using the FC model while plugging in an optional background in the IC-Light node (opt backgrounds are for the FBC model only)
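
      For context, a hedged sketch of where the 8-vs-12 channel counts come from (my reading of the error, not the official IC-Light source): both UNets take the sampled noise latent concatenated with extra 4-channel latents.

      import torch

      noise = torch.randn(1, 4, 64, 64)       # sampled latent
      subject = torch.randn(1, 4, 64, 64)     # encoded subject image
      background = torch.randn(1, 4, 64, 64)  # encoded background (FBC only)

      fc_input = torch.cat([noise, subject], dim=1)               # 8 channels -> FC model
      fbc_input = torch.cat([noise, subject, background], dim=1)  # 12 channels -> FBC model
      print(fc_input.shape[1], fbc_input.shape[1])                # 8 12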

  • @packshotstudio2118  5 months ago

    How should I get started making something like this? Do you have any tutorials for beginners?

    • @risunobushi_ai  5 months ago

      hi! my first videos are basic tutorials, and they get harder and more in depth the more recent they are. for the product relighting series in particular, I'd suggest watching them all in order of publishing, since they're small incremental improvements over the course of a month of development. you'll probably understand more about how they change and what's going on if you watch them in that order.

  • @merion297  8 months ago +1

    It's incredible, again! 😱
    One thing, just a minorly-minor improvement idea: you enter a prompt, then copy it into another prompt field after a lighting prompt part. You could separate these two and then synthesize the combined prompt from the product prompt.
    Turning it into sample code:
    ProductPrompt = 'a photograph of a product standing on a banana peel'
    LightingPrompt = 'white light'
    SynthesizedPrompt = ProductPrompt + ', ' + LightingPrompt  # Here's the point where we no longer Ctrl-C/Ctrl-V 😁
    Plus the prompt nodes could be rearranged into a Prompts group. (Of course I could do this myself after downloading the workflow, for which you deserve a Praying Blanket 🙏 but I'm here just for admiring; my machine is far below the minimum requirements of all this.)

    • @risunobushi_ai  8 months ago

      Thanks, I didn't know about the product prompt node! I knew about other prompt concatenate nodes, and I thought about using them, but again, not knowing the knowledge level of the end user I usually end up using the least complicated setup. Sometimes this ends up producing minor inconveniences, like copy-pasting text or having to link outputs and inputs manually where I could have used a logic switch, but it's a tradeoff I accept for the sake of clarity.

    • @merion297  8 months ago

      @@risunobushi_ai Nonono, I just called it a Prompt Node. 😁 It's what it is; you're 100-fold more educated in this than I am.

  • @KINGLIFERISM  8 months ago

    Brother, take the text node and use that as input for the CLIP positives. It helps. This workflow is awesome btw.

    • @risunobushi_ai  8 months ago

      Thanks! Yeah, I know there's better ways to bypass a double prompt field, more so if the two prompts are similar, but I usually construct my workflows so that there's as little complications as possible for new users. In this case, this means using two different prompt fields for what is essentially the same prompt, but to new users having the usual Load Checkpoint -> CLIP Text Encode -> KSampler pipeline makes more sense than having a Text node somewhere, conditioning two different KSamplers in two different groups.

  • @NB-ec9wc  2 months ago

    Hi guys, which folder should the BiRefNet-DIS_ep580 file be placed in?

  • @saberkz  7 months ago

    Andrea, I'm building an SDXL workflow for product photography. If I add IC-Light as an option inside the SDXL workflow, so users can turn it on or off from the web app based on the input, is that possible, or should IC-Light be a standalone workflow?

    • @risunobushi_ai  7 months ago

      You can just encode a resulting image from SDXL and then use it as a base for an IC-Light pipeline; no need to have two different workflows if keeping two checkpoints loaded at the same time is not an issue.

  • @gwanyip345  8 months ago

    This is amazing... thank you so much for putting these videos together!!
    Question: for some reason, the image I'm getting out of the KSampler after the IC-Light Conditioning node always comes out darker/orange/brown. I've tried it with a bunch of different images, but the image and color are always significantly different from what's being fed into it. I've also tried a few different prompts in the text encode that's being fed into the IC-Light node, but everything still comes out quite dark. Thanks again!

    • @risunobushi_ai  8 months ago +1

      Thanks! Please refer to the comment by AbsolutelyForward, where we talk about this and about the use of a color match node. You can also increase the amount of light by remapping the light mask (right now it should be set to 0.7, 1 is full white)

    • @gwanyip345  8 months ago

      @@risunobushi_ai Thank you!! I tried to see if anyone else had the same issue and must have missed it.
      Color Blend definitely helped at the end, when connecting it to the original image. I also found that increasing the min value of the Remap Mask Range node to 0.4 helped brighten up the initial input image. I also increased the IC-Light Conditioning to 0.5.
      Thanks again for this amazing workflow!!
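
      For reference, a minimal sketch (an assumption based on the node's name, not its actual source) of what remapping a 0-1 mask range does, and why raising the min value brightens things:

      import numpy as np

      def remap_mask(mask, min_val=0.4, max_val=0.7):
          # linearly rescale a 0-1 light mask into [min_val, max_val]
          return min_val + mask * (max_val - min_val)

      print(remap_mask(np.linspace(0.0, 1.0, 5)))  # raising min_val lifts the darkest areas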

  • @DarioToledo  7 months ago

    I always run out of VRAM with GroundingDINO. Any alternatives?

    • @risunobushi_ai  7 months ago +1

      remBG, or a smaller SAM model; there are models as small as less than 100 MB!

  • @dragerx001  8 months ago

    thank you again for posting workflow

  • @AbsolutelyForward  8 months ago

    Absolutely fantastic workflow and a well explained tutorial :)
    I tried to relight some package designs, but somehow it always gets "tinted" in a warmish-yellow tone, no matter what text prompt I use for the lighting. I noticed that the epicrealism checkpoint tends to do so if I use a very generic prompt for the background (no description apart from "advertising photography"). I'm lost.

    • @risunobushi_ai  8 months ago +1

      you could either try different checkpoints, and / or you could try to specify which kind of light you want. I notice that I get a very warm tint with "natural light", but specifying "white light" or some kind of studio light (softbox, spotlight, strip light) produces more neutral results. You could also try influencing with a negative prompt (warm tones, warm colors, etc).

    • @AbsolutelyForward  8 months ago

      @@risunobushi_ai Thanks for the hints :)
      The package image (input) is colored half-green + half-grey. What is your experience (so far) with retaining the original colors and transferring them in a realistic way with your workflow?
      Would an additional color matching node perhaps be of some help?

    • @risunobushi_ai  8 months ago

      I have never particularly cared for the color matching node (at least the one I used), as it was almost never working well for me, but you could try and blend it at a lower percentage for better results. I guess it all depends on how important it is to color match to an exact degree the final relit image to the source one. This is my own preference, but the way I'm used to working I'd rather fix the colors in PS for a better degree of control. If one would want to do everything inside of comfyUI, which to be fair is in the spirit of this video and workflow, a color matching node could be a good enough solution, although less "directable" than proper post work in PS.

    • @risunobushi_ai  8 months ago

      adding here, since I just thought about it: you could even try color matching only specific parts of the subject, such as the non-lit ones, or only the lit ones, by using the same node I'm using to extract a light mask from the blended image, or a RGB/CMYK/BW to mask node, based on the color / light you need to correct.

    • @AbsolutelyForward  8 months ago

      @@risunobushi_ai So far I haven't had any success by changing the checkpoints or modifying the lighting prompt - the original colours of the packaging are lost.
      But: at the end of the workflow, I used the input image again to readjust the colours. To do this, I combined the "ImageBlend" node (settings: 1.00, soft_light) with the "Image Blend by Mask" node (for masking the packaging) - this has worked very well so far :)
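
      For reference, a minimal numpy sketch of the fix described above; this uses one common soft-light formula, and the actual nodes may implement a different variant:

      import numpy as np

      def soft_light(base, blend):
          # both arrays are float32 in [0, 1]
          return np.where(
              blend <= 0.5,
              base - (1 - 2 * blend) * base * (1 - base),
              base + (2 * blend - 1) * (np.sqrt(base) - base),
          )

      def blend_by_mask(relit, original, mask):
          # mask: (H, W) float in [0, 1]; images: (H, W, 3)
          corrected = soft_light(relit, original)
          return relit * (1 - mask[..., None]) + corrected * mask[..., None]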

  • @dadrian  8 months ago

    So cool! I'm doing basically the same for cars and people! But at the moment I still prefer to do the frequency separation part in Nuke - I can only dream of a 32-bit workflow in Comfy.

    • @risunobushi_ai  8 months ago

      Wait, if you generate a normal map from IC-Light do you get to work with 32bit images in Nuke?

  • @Bartskol  8 months ago

    Ok, since no one has asked it yet: can I use an SDXL model with this workflow? Thanks for this work. I'm also a photographer 😅😊 Can't wait for v4 with that IPAdapter for consistent backgrounds (and SDXL for higher res? ;) )

    • @Bartskol  8 months ago

      Subbed

    • @risunobushi_ai  8 months ago

      Thanks! Unfortunately there’s no support for SDXL, it’s for 1.5 only, but you can definitely upscale a ton with SUPIR or other upscalers

  • @ultimategolfarchives4746  8 months ago

    Keeping details in upscaling is a common problem. Could that technique be applied to upscaling as well?

    • @risunobushi_ai  8 months ago

      I haven’t tested it with upscaling, I guess that as long as you don’t need to upscale the original image you won’t have to resize the frequency layers, so the details would be as they are in the original image. If you need to upscale the original image and the frequency layers as well, you might have some troubles with preserving details depending on how much you’re upscaling.

  • @minneleadotcom  5 months ago

    I also tried it on RunComfy but it never works. Do you offer private assistance?

    • @risunobushi_ai  5 months ago

      hi! this workflow is not on runcomfy, my latest one is - which error are you getting?

  • @Arminas211  8 months ago

    I got the error during ImageResize+: not enough values to unpack (expected 4, got 3). Any ideas what went wrong and how to fix it?

    • @risunobushi_ai  8 months ago +1

      What is the image extension you’re using? You can sub in another resize node if that one doesn’t work for you

    • @sreeragm8366  8 months ago

      Facing the same issue. Are we passing the mask or the image to the resizer? Debugging shows the resizer is getting a tensor with no channels. If you can confirm, I will patch the resizer to bypass this shape mismatch. Thank you. Btw I am working in API mode; never used Comfy in UI mode.

    • @risunobushi_ai  8 months ago +1

      We're passing an image, but it's not the first time I've heard of someone having issues with this resize node. Swapping it for another resize node usually solves it.
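
      For anyone patching it, a hedged sketch of the kind of shape guard discussed above, assuming the usual ComfyUI conventions of [B, H, W, C] image tensors and [B, H, W] masks:

      import torch

      def ensure_image_tensor(t):
          if t.dim() == 3:              # a [B, H, W] mask sneaked in
              t = t.unsqueeze(-1)       # -> [B, H, W, 1]
          if t.shape[-1] == 1:
              t = t.repeat(1, 1, 1, 3)  # promote to a 3-channel image
          return t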

    • @Arminas211  8 months ago

      @@risunobushi_ai Thanks very much. I will write a comment on openart.

    • @陈敏杰-j2j  8 months ago +1

      @@Arminas211 I encountered the same issue, but I eventually discovered that I hadn't changed the prompt of the segment anything node, which caused the problem. Perhaps you could try doing that as well?

  • @andrewcampbell8938  8 months ago +1

    Love your content.

  • @AnotherPlace  8 months ago

    I'm having this error:
    RuntimeError: Given groups=1, weight of size [320, 12, 3, 3], expected input[2, 8, 128, 128] to have 12 channels, but got 8 channels instead

    • @risunobushi_ai  8 months ago

      Are you using the IC-Light FBC model instead of the FC? Are you trying to use SDXL instead of SD 1.5?

  • @charlieBurgerful  8 months ago

    This looks like a game changer. Maybe for mockups, idea iterations, or even real productions!
    Everything starts well on my side, but the segment anything node does nothing, so the process is useless. I am on an M2 Pro, any ideas?

    • @risunobushi_ai  8 months ago

      Did you install all the necessary dependencies for SAM to work on M chips? As far as I know you’ve got some hoops to jump through in order to get tensorflow and other dependencies running on M chips

  • @iMark22  8 months ago

    Thank you! Incredible work!

  • @-Yun-Hee  8 months ago +1

    wow! this is a great solution!!

  • @douglaspriester  5 months ago

    Man, how come your Image Levels Adjustment works with those settings?! Mine only works if the value of mid_level is between the black and white levels. If I use the numbers you have (e.g. mid_level=1), I get a "math domain error"....
    Another thing I noticed is that it changes the color of some of the product's parts.

    • @vermit25  1 month ago

      Were you able to solve this? I'm getting the same error.

  • @李棕祥  3 months ago

    Hello blogger, I am a novice; I saw your work on the internet (Product Photography Relight v3 - With internal Frequency Separation for keeping details). I really, really want to be able to use this workflow, but I'm having so many problems. I don't know how to install the relevant models or where to put them in the folders. Can you show me a tutorial video on installing this workflow? It really means a lot to me. Thank you very, very much. I liked your video and subscribed to your channel.

  • @victormustin2547  3 months ago

    Could you update it to run with Flux? I know IC-Light doesn't work with Flux, but the other parts of the generation could benefit from Flux.

  • @whatman65  8 months ago +1

    Great stuff!

  • @houseofcontent3020  8 months ago

    How do I mix in an existing background? Is it possible, instead of having the workflow create my background?

    • @risunobushi_ai  8 months ago

      Yep, but you need to have the same perspective between the subject and the background. Simply add a load image node and blend the background with the segmented subject, bypassing the background generator group.
      There’s no perspective correction in comfyUI that I know of, but if someone knows about it it’d be great.
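
      For reference, a minimal Pillow sketch of that manual blend, assuming the subject, background and mask already share the same dimensions (file names are hypothetical):

      from PIL import Image

      subject = Image.open("segmented_subject.png").convert("RGB")
      background = Image.open("my_background.png").convert("RGB")
      mask = Image.open("subject_mask.png").convert("L")  # white = keep subject

      composite = Image.composite(subject, background, mask)
      composite.save("blended_input.png")  # feed this in, bypassing the background generator group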

    • @eranshaysh9536  8 months ago

      Thank you so much for the detailed answer. I'll look for a tutorial that explains how to connect the nodes you talked about. As for the perspective, that's fine, since I'll be editing it beforehand in Photoshop, so it will only need to mix the light and color.

  • @nahlene1973  4 months ago

    The 'frequency' part actually sounds a lot like how focus peaking works.

  • @bregsma  3 months ago

    Image Level Adjustment is broken, aaaa. This is the only step I need to fix, can you help me please?

    • @kyrillkazak  3 months ago

      Try this. Someone posted it on the openart feed, and it worked for me.
      change values
      black_level = 80.0
      mid_level = 130.0
      white_level = 180.0

  • @lumarans30  8 months ago +1

    Thanks so much! Great work

  • @TuangDheandhanoo  8 months ago +1

    Great video sir, thank you very much!

    • @risunobushi_ai  8 months ago

      thank you for watching!

  • @sebicified  7 months ago +1

    Chapeau!

  • @spiritform111  8 months ago

    Very cool, but for some reason the ControlNets just crash my computer... I have a 3080 Ti, so it must be something else.

    • @risunobushi_ai  8 months ago +1

      That's weird, I haven't had any reports of crashes yet. I have a 3080ti too, so maybe try subbing in another controlnet node / controlnet model?

    • @spiritform111  8 months ago

      @@risunobushi_ai yeah, going to try that... thanks for the reply.

    • @spiritform111  8 months ago

      @@risunobushi_ai Turns out it was the Depth Anything model... I can use depth_anything_vits14.pth - thanks. Insane workflow... powerful stuff.

  • @omthorat3891  8 months ago +1

    Love you 3000 ❤😂

  • @Spinaster  8 months ago

    Instead of changing the subject name in the Grounding Dino prompt, you can try using just "subject" or "main subject", it should work ;-)

    • @risunobushi_ai  8 months ago

      In this case, and when you only have one subject yes, but if you have more subjects (like in my update on this video, when I have the bottle sitting on a branch) it might not work. But I agree, here you can just use subject instead!

  • @Mranshumansinghr  8 months ago +1

    Many thanks, sir

  • @jahormaksimau1597  8 months ago

    Amazing!

  • @amitkumarsinha1654  8 months ago

    Hi, thanks a lot for this tutorial and workflow.
    I am getting this error, can you please help me fix it:
    C:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py:1051: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
    warnings.warn(
    Prompt executed in 8.58 seconds

    • @risunobushi_ai  8 months ago

      This is not an error per se, it’s a warning about a transformers argument being deprecated. As you can see, the prompt gets executed.
      What issues are you facing during the prompt? Where does it stop?

  • @UdayDahiya-s5y  6 months ago

    Where is the 2-hour live video?

    • @risunobushi_ai  6 months ago

      it should be this one if you want to dive into it: th-cam.com/users/livexjy3JyaPfHQ
      but my latest video showcases a workflow that solves most of the stuff I was talking about two months ago!

  • @trungnguyễnthành-y7p  8 months ago +1

    thanks. the first time, 5%. hehe

  • @ismgroov4094  8 months ago

    workflow plz, sir!

    • @risunobushi_ai  8 months ago

      The workflow is in the description *and* in the pinned comment, and I even say "the workflow is in the description below" as soon as 00:40

  • @mariorenderbro6370  4 months ago

    Subscribed!!!!

  • @viktorchemezov927  4 months ago

    Image Levels Adjustment
    math domain error :(
    gamma = math.log(0.5) / math.log((self.mid_level - self.min_level) / (self.max_level - self.min_level))
    ValueError: math domain error

    • @risunobushi_ai  4 months ago

      hi! this is a known issue, the Image Level Adjustment was updated and it broke the range. I haven't had the time to fix this yet because of my job, I'll try to do it as soon as I have the time to. Unfortunately I can't maintain all my old workflows on a daily schedule.

  • @joey9784  5 months ago

    I can hear brands screaming "you can't fuck around with the product's original colour."

    • @risunobushi_ai  5 months ago

      I can hear brands screaming we solved color matching in the latest videos and workflows ;)