Magnific AI Relight is Worse than Open Source

  • Published on Jul 27, 2024
  • Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs!
    Visit www.runcomfy.com/?ref=AndreaBaioni and get 10% off GPU time or subscriptions with the coupon below.
    REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon.
    COUPON CODE: RCABP10 (Expires July 31)
    Workflow (RunComfy): www.runcomfy.com/comfyui-work...
    Workflow (Local): openart.ai/workflows/nSqO2P2Z...
    Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
    Relight better than Magnific AI and for free, locally or on the cloud via RunComfy!
    (install the missing nodes via ComfyUI Manager, or use the links below:)
    IC-Light ComfyUI nodes: github.com/kijai/ComfyUI-IC-L...
    IC-Light model (fc only, no need to use the fbc model): huggingface.co/lllyasviel/ic-...
    GroundingDinoSAMSegment: github.com/storyicon/comfyui_...
    SAM models: found in the GroundingDinoSAMSegment GitHub repo above.
    Model: most SD 1.5 models work; I'm using epicRealism: civitai.com/models/25694/epic...
    ControlNet auxiliary nodes: github.com/Fannovel16/comfyui...
    IPAdapter Plus: github.com/cubiq/ComfyUI_IPAd...
    Timestamps:
    00:00 - Intro
    01:10 - Workflow (Local)
    03:50 - Magnific vs Mine (First Test, global illumination)
    04:31 - Magnific vs Mine (Second Test, custom light mask)
    06:45 - Workflow (Cloud, RunComfy)
    16:22 - Workflow Deep Dive (How it works)
    19:15 - Outro
    #magnificai #stablediffusion #comfyui #comfyuitutorial #relight #iclight
  • Science & Technology

Comments • 95

  • @risunobushi_ai
    @risunobushi_ai  20 days ago +3

    Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs!
    Visit www.runcomfy.com/?ref=AndreaBaioni and get 10% off GPU time or subscriptions with the coupon below.
    REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon.
    COUPON CODE: RCABP10 (Expires July 31)
    Workflow (RunComfy): www.runcomfy.com/comfyui-workflows/comfyui-product-relighting-workflow?ref=AndreaBaioni
    Workflow (Local): openart.ai/workflows/nSqO2P2ZmDQGwohEbgl3

    • @I.Am.Nobody
      @I.Am.Nobody 17 days ago

      So, show us how to install it locally, for the folks who don't care about your bias toward your sponsor?

    • @risunobushi
      @risunobushi 17 days ago

      @@I.Am.Nobody I do that starting at minute 1:10 (running locally). You just need to download the workflow from the link in the description or in the comment you replied to, and import it into your ComfyUI instance.

    • @MaghrabyANO
      @MaghrabyANO 15 days ago

      @@I.Am.Nobody Bro, Andrea isn't biased toward his sponsor at all. I've emailed him and messaged him on social media with inquiries, and he helps more than other developers/creators.
      He isn't obligated to explain how to install ComfyUI locally, because you can Google/YouTube search it and you'll find tons of help. He is NOT trying to hide secret information to drive the masses into using RunComfy. It's just not worth wasting time in his 100th video about AI explaining how to install ComfyUI locally.

    • @johntnguyen1976
      @johntnguyen1976 13 days ago

      Would you be able to run something as bespoke and customized as LivePortrait on RunComfy?

  •  20 days ago +3

    Dropping the mic. Love to see a simplified UI for these workflows. That is the biggest selling point of the paid platforms - the convenience.
    Great showcase as usual, Andrea.

    • @risunobushi_ai
      @risunobushi_ai  20 days ago

      Thanks! I've been tinkering around with the idea of a "Control Room" for a client who'll find it easier to have everything in one place, and while I'm still not sold on get / set nodes as they are not clear to newcomers, I think this is a good approach towards ease of use.
      And yeah, it is the main selling point of SaaS platforms right now. On the one hand, normal users don't want to see the node tangle in the backend, but on the other, those who watch this channel are kind of power users, so I need to strike a bit of a balance when designing workflows.

  • @agusdor1044
    @agusdor1044 19 days ago +2

    thank you Andrea!

  • @LeonhardKleinfeld
    @LeonhardKleinfeld 20 days ago +1

    Got the workflow up and running in 5 minutes. Great work, thank you!

    • @risunobushi_ai
      @risunobushi_ai  20 days ago

      Great to know, I tried to structure it so it's the easiest possible solution I could come up with that could still give the user a degree of choice in the final results!

  • @bjj_sk5491
    @bjj_sk5491 20 days ago +2

    Nice work! I love it ❤

  • @TheRoomcleaner
    @TheRoomcleaner 20 days ago +2

    Love the salt. Video was hilarious and informative 👍

    • @risunobushi_ai
      @risunobushi_ai  20 days ago

      I'm allowing myself one salty, personal video every four months, as a treat :)

  • @johntnguyen1976
    @johntnguyen1976 18 days ago +1

    Wonderful! Your channel keeps getting more and more useful by the day (and you were useful from day one).

  • @ted328
    @ted328 18 days ago +1

    This channel is a gift to creatives and artists everywhere. Can't thank you enough.

  • @wholeness
    @wholeness 18 days ago +1

    Nice! Now, is there a local Magnific/Krea upscaler workflow you know of that can produce similar results? That would be a video we are all looking for!

    • @risunobushi_ai
      @risunobushi_ai  18 days ago

      Praise where it's due: there is no open source upscaler I've found that's as good as Magnific, tbh.

  • @obi-wan-afro
    @obi-wan-afro 19 days ago +1

    Excellent video, as always! ❤️

  • @appdeveloper3895
    @appdeveloper3895 19 days ago +3

    I feel lucky to know your channel. It is really painful that credit for this amazing work goes to someone else, and he is even making money out of it. On top of that, they are not doing it as well as you did. And I think they will improve their product after watching your video without even crediting you or anyone involved in the open source community. Thank you for all the work you do.

    • @risunobushi_ai
      @risunobushi_ai  18 days ago

      Thank you for the super kind words!

  • @DanielSchweinert
    @DanielSchweinert 16 days ago +1

    Just a suggestion. I created a couple of product shots and saw that the edges are not always perfect. The mask is good, but I realized that it generates, for example, a bottle that is slightly bigger than the product, and when it is blended you can see the edges of the original generation lying under it. Would it be possible to add a "Lama Inpaint" node to remove the generated bottle and create a clean plate, and only after that paste or blend the product photo into the generation? Hope that makes sense. I will try it myself, but I only started working with ComfyUI yesterday. LOL

    • @risunobushi_ai
      @risunobushi_ai  16 days ago

      Yeah, it would be possible to, but then it becomes a VRAM issue (if you're running it locally), a cost issue (if you're running it on the cloud), and a market issue (if you're a SaaS that hopes your generations take 30 seconds to serve).
      The main issue with one-click solutions is just that: there's a ceiling somewhere, and different users have different needs / hardware specs / time / expectations. So it's all a matter of presenting a working, albeit limited, solution and then leaving the fine-tuning to the individual user.

  • @ismgroov4094
    @ismgroov4094 18 days ago +1

    thanks sir

  • @dropLove_
    @dropLove_ 19 days ago +1

    Appreciate you and your work and your workflow.

  • @PaulVang-vf7fm
    @PaulVang-vf7fm 16 days ago +1

    Is this Illya person a real person? They've made literally 90% of all Stable Diffusion extensions/apps. Dude has coding superpowers.

    • @risunobushi_ai
      @risunobushi_ai  16 days ago

      I know, right? Illya is a godsend, more so because most of the time they release a sandbox that users can then apply to a ton of different things, not just a single-case, one-use thing.

  • @neoneil9377
    @neoneil9377 8 days ago +1

    Thanks for this amazing video, this is the best AI content channel for professionals. Just one question: does relight support SDXL yet?
    Thanks in advance.

    • @risunobushi_ai
      @risunobushi_ai  6 days ago

      Hi! IC-Light is SD 1.5 only, but in this workflow we can use SDXL in the first generation phase (for the background, for example) and let 1.5 handle only the relighting.

  • @denisquarte7177
    @denisquarte7177 20 days ago +1

    Nice, I came across your workflow last weekend and was about to experiment with some things, e.g. you didn't subtract the low frequencies from the image but instead added them inverted at 50%. Still not sure why, though. But before I tinker needlessly I'll take a look at what you have already cooked. Thanks a lot for sharing, and thanks as well to the people helping you develop this.

    • @risunobushi_ai
      @risunobushi_ai  20 days ago

      Thanks! If you're talking about a previous version, where we were using various math nodes to brute-force frequency separation, we moved away from that and I wrote a frequency separation node that handles the HSV apply method like in Photoshop. No more need for weird math; it's all handled with Python in the custom node.
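      For anyone who wants to experiment with the same idea outside ComfyUI, here is a minimal Python sketch of HSV-based frequency separation (not the actual custom node; the Gaussian radius and the V-channel layering are assumptions):

      # Split an image into low/high frequency layers on the HSV value channel,
      # then re-apply the original detail layer onto a relit image.
      import cv2
      import numpy as np

      def frequency_separation(img_bgr, radius=5):
          """Return (low, high) layers computed on the HSV V channel."""
          hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
          low = cv2.GaussianBlur(hsv[..., 2], (0, 0), radius)  # low frequencies
          high = hsv[..., 2] - low                             # residual fine detail
          return low, high

      def apply_detail(relit_bgr, high, radius=5):
          """Paste the original high-frequency detail onto the relit image."""
          hsv = cv2.cvtColor(relit_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
          blurred_v = cv2.GaussianBlur(hsv[..., 2], (0, 0), radius)
          hsv[..., 2] = np.clip(blurred_v + high, 0, 255)
          return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)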

    • @denisquarte7177
      @denisquarte7177 20 days ago +1

      @@risunobushi_ai Just took a look at it, and yes, big improvement. I'm almost sad that I no longer have any need to do that myself 😋. But this is a feature of open source: no matter the problem, there is a high chance someone else already ran into the same issue and figured it out. Good job.

    • @risunobushi_ai
      @risunobushi_ai  20 days ago +1

      You can still make it better! For example, my frequency separation node oversharpens the final image by something like 1-2%. I can't figure out why; maybe you can.

    • @denisquarte7177
      @denisquarte7177 20 days ago

      @@risunobushi_ai Well, my first guess would be that your high frequency separation is now so good that it leads to overemphasizing, but I will surely play around with it anyway :)

    • @denisquarte7177
      @denisquarte7177 20 days ago

      @@risunobushi_ai Tried something quick and dirty: upscale the high frequency layer 4x with Lanczos, blur with a 2 px radius, rescale 0.25x. Not perfect, but helpful.
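      In PIL terms, that quick-and-dirty softening might look something like the sketch below (a guess at the described steps, treating the high-frequency layer as a standalone image):

      from PIL import Image, ImageFilter

      def soften_high_freq(high_freq):
          """Upscale 4x with Lanczos, blur by 2 px, then scale back down to 0.25x."""
          w, h = high_freq.size
          big = high_freq.resize((w * 4, h * 4), Image.LANCZOS)
          blurred = big.filter(ImageFilter.GaussianBlur(radius=2))
          return blurred.resize((w, h), Image.LANCZOS)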

  • @mikelaing8001
    @mikelaing8001 17 days ago +1

    I tried Magnific and it was terrible; wondered if I'd missed something, tbh.

  • @sellertokerbo
    @sellertokerbo 19 days ago +1

    As a beginner, I really appreciate the clarity of the explanation! I'm gonna try this one for sure!

    • @risunobushi_ai
      @risunobushi_ai  18 days ago

      Thank you! I try to always explain as much as I can without becoming too boring.

  • @sb6934
    @sb6934 18 days ago +1

    Thanks!

  • @hoangucmanh299
    @hoangucmanh299 12 days ago +1

    How do I make it generate a new background based on a prompt and make sure it's suitable for the foreground?

    • @aminebenboubker
      @aminebenboubker 11 days ago

      Looking for this answer too. Please enlighten us, Andrea! Fantastic job, by the way.

    • @risunobushi_ai
      @risunobushi_ai  11 days ago

      This workflow in particular uses a reference image for generating a background alongside a prompt. For pure prompting, without reference images, you'd need to up the denoise to 1 at all times and then disable the IPAdapter responsible for using the reference background to influence the generation.

  • @user-nd7hk6vp6q
    @user-nd7hk6vp6q 17 days ago

    Does this work for people too? Let's say I want to change the background or place a person on a new background, would it work?

    • @risunobushi_ai
      @risunobushi_ai  16 days ago

      It can work for people, but in full body shots there's a hard limit on the actual number of pixels that fine details take up, so those details get lost in the Detail Preservation stage. With products, people tend to notice inconsistencies a bit less than they do with people.
      It works pretty well for close-up portraits and half body shots, but it'd need a much higher resolution and not as much relighting for full bodies.

  • @MaghrabyANO
    @MaghrabyANO 16 days ago

    Another question: in the Regenerator box, you get two results, right? One of them is AI-generated, and the other uses the object image masked over the AI-generated object.
    The masked/overlaid object is usually pixelated for me. I'm not sure why, but maybe because the input object resolution is 768x512 and the generated outcome is 1536x1024, so the image probably got stretched and pixelated.
    So how do I keep the same image size as the object, with no resizing needed?

    • @MaghrabyANO
      @MaghrabyANO 16 days ago

      Alright, I retried and found out that adding a light mask will run the workflow through to its Detail Preservation box, but it still stretched (pixelated) the object. How can I avoid that?

    • @risunobushi_ai
      @risunobushi_ai  15 days ago

      Is your source image 768x512?

    • @MaghrabyANO
      @MaghrabyANO 15 days ago

      @@risunobushi_ai Yes, my source image is 768x512.

  • @vincema4018
    @vincema4018 16 days ago

    One question: if I want to upscale the output image, should I insert the Ultimate SD Upscaler before the frequency separation and color matching nodes, or after them?

    • @risunobushi_ai
      @risunobushi_ai  16 days ago

      It depends on whether your starting image is bigger than the resulting image. If it is, you should try to hold on to as many details as possible from the original, so you'd want to upscale before the FS and color matching. If it isn't, you can just do that after, and then, if the upscaler generates some details you don't want, you can do a new FS using the details from the original (upscaled).

    • @vincema4018
      @vincema4018 16 days ago

      @@risunobushi_ai Thanks Andrea, that's a very practical suggestion. Let me work it out and add it into your workflow. I think most product images have a much higher resolution than the resulting image, so it's better to upscale them before the FS and color matching. But is it necessary to resize the original image to the upscaled resolution before conducting FS and color matching? Hmm… I think color matching may still be okay with higher or lower resolutions, but FS may require the same resolution?

    • @risunobushi_ai
      @risunobushi_ai  15 days ago

      Everything that passes through a "blend image by mask" node needs to be at the same resolution, otherwise you get a size mismatch error. What you'd do is bisect the resizing at the beginning: keep a higher-res copy of the original for later use after upscaling, and a lower-res copy for all the regen / relight ops.
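      As a rough illustration of that "bisect the resizing" idea (a hypothetical helper, not a node from the workflow), you could keep both copies from the start and resize the mask alongside the working copy:

      from PIL import Image

      def bisect_resize(original, mask, work_width=768):
          """Return a low-res working pair plus the untouched full-res pair."""
          scale = work_width / original.width
          work_size = (work_width, round(original.height * scale))
          low_img = original.resize(work_size, Image.LANCZOS)
          low_mask = mask.resize(work_size, Image.NEAREST)
          # The full-res pair is kept for pasting details back after upscaling.
          return (low_img, low_mask), (original, mask)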

  • @veenurohan3267
    @veenurohan3267 12 days ago

    Hey Andrea, do you know where this node is coming from: "class_type": "Float"?
    I can't locate any node on GitHub that provides a Float node.
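    One way to track down an unknown node type is to list every "class_type" used in the workflow JSON and compare the list against your installed node packs. This sketch assumes the API-format export (the one that uses "class_type" keys); "workflow_api.json" is a placeholder filename:

    import json
    from collections import Counter

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Count every node type referenced by the workflow.
    types = Counter(node["class_type"] for node in workflow.values()
                    if isinstance(node, dict) and "class_type" in node)
    for class_type, count in sorted(types.items()):
        print(f"{class_type}: {count}")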

    • @risunobushi_ai
      @risunobushi_ai  12 days ago

      Can you check at which node the process stops? Usually it's circled in purple.

    • @veenurohan3267
      @veenurohan3267 9 days ago

      Thanks a lot for the tutorial

  • @yuvish00
    @yuvish00 9 days ago

    Hi Andrea,
    Great workflow!
    I tested it with a bottle of perfume and a forest background, and the result was not so good. Meaning, the size of the perfume with respect to the forest background was not proportional. Any suggestions on how to improve this?
    Thanks!

    • @risunobushi_ai
      @risunobushi_ai  9 days ago +1

      Relative scale is always an issue with diffusion models. If the generation has no way of knowing the size of the subject relative to the background, you're basically rolling a die every time you generate. That's why using a background that is "close enough" in scale to the picture you want to get, and setting a denoise lower than 1, usually helps. But yeah, the model needs some sort of guidance to understand and enforce scale in some way.

    • @yuvish00
      @yuvish00 8 days ago

      @@risunobushi_ai Gotcha! I understand. So even if in the CLIP text I say "perfume bottle", it is not going to help?

  • @yuvish00
    @yuvish00 2 days ago

    P.S. Can our final image be the same size as our background image?

    • @risunobushi_ai
      @risunobushi_ai  2 days ago

      Hi! No, this workflow works by using the background image as a reference for an IPAdapter pipeline; it's not using it as a proper background by itself. So the aspect ratio and the dimensions, as well as the positioning of the subject relative to the background, are set by the subject image.

  • @MaghrabyANO
    @MaghrabyANO 16 days ago

    I tried using your genius workflow (thanks for it).
    It works with no errors, but it doesn't generate a result in the "Results" box or in the "Option #1" or "Option #2" boxes, and the generated result (in the regeneration box) seems a bit cropped, not blended. I guess the whole "Preserve Details" box doesn't run at all, nor do the custom/global light boxes. Let me give you a screenshot.
    Hope you can help,
    I sent you an email with the screenshots.

    • @MaghrabyANO
      @MaghrabyANO 16 days ago

      Alright, never mind this whole inquiry:
      the boxes in question ran when I added a light mask.
      But how can I use the global light mask? I.e. I don't want to add a light mask, and I want the workflow to run to completion (to the detail preservation box).

    • @risunobushi_ai
      @risunobushi_ai  15 days ago

      Sorry, I was out of office. I replied, but it seems you already figured it out! Global light is set in the switch where the user inputs are, so while at least a placeholder image is needed for all three inputs, you can use global light by selecting "False" on the "did you add a light mask?" switch.

  • @DanielSchweinert
    @DanielSchweinert 19 days ago

    OK, I really want to give it a try. I just installed portable ComfyUI + Manager + missing nodes and loaded your workflow, but the screen is empty. Any clues?

    • @risunobushi_ai
      @risunobushi_ai  19 days ago

      That's weird, do you have any checkpoints installed / redirected to Comfy? Did you get any messages when you imported the JSON?

    • @risunobushi_ai
      @risunobushi_ai  19 days ago

      Weirdly, I can't see your latest question, but I got notified via email, so:
      if you see the "x" error, you either need to load a missing image (even if you're not using it, use a placeholder; Comfy has no way to skip it even if it's bypassed by the switch), or you're using a non-JPEG, non-PNG format.

    • @DanielSchweinert
      @DanielSchweinert 19 days ago

      @@risunobushi_ai Thank you, figured it out: some stuff was missing ("bert-base-uncased"). Now it works, but the final image is always squished. Have to check the resolution on the nodes.
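      If you hit the same missing "bert-base-uncased" dependency, one way to fetch it is via huggingface_hub; this is just a sketch, and the local_dir is a placeholder you would adjust to wherever your GroundingDINO/SAM nodes expect it:

      from huggingface_hub import snapshot_download

      # Download the bert-base-uncased tokenizer/model files locally.
      snapshot_download(
          repo_id="bert-base-uncased",
          local_dir="models/bert-base-uncased",  # hypothetical path
      )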

    • @risunobushi_ai
      @risunobushi_ai  19 days ago +1

      I've updated the workflow by remapping all the width & height connections; it seems like the int nodes were reverting to a slider input for some users.

  • @hartmanpeter
    @hartmanpeter 19 days ago +1

    I was just about to subscribe to Magnific. Thank you!

    • @risunobushi_ai
      @risunobushi_ai  18 days ago

      Magnific is still pretty great for their upscaler; it's the best around and I've found no open source alternative that is as good as theirs, so it might be worth subbing just for that - but relight is not where it's at.

    • @hartmanpeter
      @hartmanpeter 18 days ago

      @@risunobushi_ai I find that the upscaler changes the subject too much. The upscaler in ComfyUI suits my needs better.
      I was going to sub because the Relight feature was the added value I needed.
      I'm sure I'll sub in the future once I can find a business use, but for now, I'm a happy camper. Thanks again.

  • @AshT8524
    @AshT8524 20 days ago +1

    I'm early

  • @thewebstylist
    @thewebstylist 18 days ago +1

    Magnific makes it sooo easy, though, but it's overrated; of course they only showcase their best-of-the-best examples.

    • @risunobushi_ai
      @risunobushi_ai  18 days ago

      Yeah, UX is paramount, and I'd honestly be inclined to let subpar products go their merry way if the issues weren't as glaring regardless of ease of use.

  • @Kal-el23
    @Kal-el23 15 days ago

    Would love to see a few more real-world or useful examples, such as with people instead of a Roomba lol

    • @risunobushi_ai
      @risunobushi_ai  15 days ago +2

      I focused on products instead of people because people relighting is a very niche market, while product relighting for e-commerce purposes is a trillion-dollar industry. But yeah, it can work with people too, with the limitations we found here:
      th-cam.com/video/AKNzuHnhObk/w-d-xo.html

    • @Kal-el23
      @Kal-el23 15 days ago

      @@risunobushi_ai I get you. I suppose it depends on what industry you're in. If you're a portrait or composite photographer you might find scene transfer or relighting very useful.

  • @ok-pro
    @ok-pro 13 days ago

    The worst YouTube channel ever.
    The reason is that you don't show clear examples and a clear comparison at the beginning of the video.
    You must show us at least five examples. Very bad, pro.

    • @risunobushi_ai
      @risunobushi_ai  13 days ago +1

      Thanks for the feedback, although you could choose your words a bit better next time. I'm rather new to YouTube (I've been doing this for three months now, not a lot); I'll try to show more examples at the beginning next time.

    • @ok-pro
      @ok-pro 13 days ago +1

      @@risunobushi_ai
      I apologize if what I said seemed inappropriate; I did not mean it that way. My intention was constructive criticism.
      Good luck with the next videos.

    • @risunobushi_ai
      @risunobushi_ai  13 days ago +1

      No worries, it was valid feedback after all!

  • @aliebrahimzade491
    @aliebrahimzade491 15 days ago

    Wow, it's an amazing workflow!