Hyper Stable Diffusion with Blender & any 3D software in real time - SD Experimental

  • Published Sep 28, 2024

Comments • 108

  • @Mranshumansinghr
    @Mranshumansinghr 4 months ago +6

    One of the best use-case scenarios for Blender with ComfyUI. Loved it. Can't wait to try it out.

  • @loubakalouba
    @loubakalouba 5 months ago +5

    Thank you, I was trying to find a good solution for this and just like magic it popped up in my YouTube feed. I can't believe there is no integration of Comfy with major software such as Blender and UE. Excellent tutorial!

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      Glad I could be of help! There are some implementations for Blender and UE, but I find the one for Blender too restricted, and I haven't tried the one for UE yet. This one's a bit janky, but it's a tradeoff for flexibility.

  • @WalidDingsdale
    @WalidDingsdale 3 months ago +1

    Amazing combination between SD and Blender, thanks for sharing this.

  • @bestof467
    @bestof467 5 months ago +5

    You keep mentioning Blender "animation" but show only still images in ComfyUI. Can you do a video showing how an animation in Blender etc. can be mirrored in ComfyUI using this workflow?

    • @risunobushi_ai
      @risunobushi_ai 5 months ago +5

      I'm mentioning the animation baked into the FBX model downloaded from Adobe Mixamo (i.e. the kneeling / getting up animation of the human-like figure in this and the previous video). I usually don't cover animating with SD and AnimateDiff, as it's a whole lot more work, and with a full-time job it's not easy to find the time to cover that as well. But since you're not the first one requesting it, I'll do it in the future when I have more time to cover it properly!

    • @bestof467
      @bestof467 5 months ago

      @@risunobushi_ai I'm guessing with this current workflow it has to be done frame by frame, and then you dump all the frames into third-party software like Adobe's 🤔

    • @risunobushi_ai
      @risunobushi_ai 5 months ago +1

      I wouldn't use this workflow; that would be a bit of a mess in terms of consistency for no particular gain. I would simply export each frame by rendering it and use a regular AnimateDiff pipeline.

    • @kamazooliy
      @kamazooliy 5 months ago +1

      @@bestof467 No. Here's a better way: export the animated video sequence from Blender/your DCC, then use an AnimateDiff + PhotonLCM vid2vid workflow to match the animation.

  • @danacarvey
    @danacarvey 3 months ago +1

    Thanks for sharing, this looks awesome! Is it only useful for stills atm? Would it be difficult to get character and scene consistency over a number of frames?

    • @risunobushi_ai
      @risunobushi_ai 3 months ago

      Doing it in real time is useful for stills, but exporting an animation frame by frame and maintaining consistency is definitely doable with the right amount of IPAdapters and consistent settings throughout the generations. You'd obviously lose the real-time aspect of it, mainly because adding stuff like multiple IPAdapters would slow down the generations to a point where they wouldn't be able to keep up with the frames.
      Although you could also use something like ToonCrafter to render out fewer frames and animate the in-betweens.

    • @danacarvey
      @danacarvey 3 months ago

      Fantastic, thanks for your well-thought-out reply! @@risunobushi_ai

  • @karlgustav9960
    @karlgustav9960 4 months ago +1

    Wow, this is amazing! I was wondering how my old 3D blocking/speedpainting/camera-mapping workflow that I used for game concept art would work with ComfyUI. Have you considered using a realtime z-depth material instead of actual lighting? That might speed things up a little because you can skip the z-estimation. I also wonder if it is possible to multiply the realtime z-intensity with a base color to help a conditioner separate certain meaningful objects from the rest of the scene at all times. Sorry, I'm really new to ComfyUI and I don't have a local GPU, those are just my thoughts.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      No need to be sorry, these are all great points! I’m on the opposite side of the spectrum, I’m fairly specialized in SD and comfyUI and only know intermediate level stuff in 3D, so most of my 3D implementations are far from being optimal. Integrating z-depth materials, or depth maps directly would work great, speeding things up in both Blender and SD

  • @JohanAlfort
    @JohanAlfort 5 months ago +6

    Super exciting to see... as a VFX artist/supervisor I see many applications for the early stages of production. Previz will change and, as you mention, DMP artists and concept artists are getting really nice first-stage production tools. I'm hoping some clever dev can do a similar implementation for a DCC, like the Krita AI plugin.
    As always thanks! Inspiring.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago +1

      Exactly! Even if this is a very rough way of implementing instant generations, on one hand there is a space that can take advantage of this, and on the other we're still experimenting. If professionals start to get interested in a 3D/AI mixed pipeline, more refined products will come eventually, but even testing out the early stages of it all is fascinating to me.

  • @andredeyoung
    @andredeyoung 6 days ago

    wow! Thanx

  • @onemanhorrorband7732
    @onemanhorrorband7732 5 months ago +1

    Any suggestions to increase consistency? If I use more complex 3D models, for example, would that increase consistency for a character? It would be amazing for storyboarding. Yeah, the result is not like an actual finished and polished 3D scene, but it's a pretty nice result for a few seconds of work, basically… maybe with a trained LoRA + complex 3D models 🤔 idk, any suggestion is welcome 👍

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      I would suggest a combination of ipadapters, one for style transfer and one for character, and / or a LoRA if you have one handy. But keep in mind that each node you add makes the generations less “instant”
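
      As a rough illustration of that suggestion (not from the video), here is a minimal diffusers sketch of SDXL + IP-Adapter + an optional character LoRA. The model IDs, weight names, file paths and scales are illustrative assumptions, not a tested recipe.

      ```python
      # Sketch: SDXL + IP-Adapter (style/character reference) + an optional character LoRA.
      # Repo IDs, weight names and paths are assumptions - adapt them to your own models.
      import torch
      from diffusers import StableDiffusionXLPipeline
      from diffusers.utils import load_image

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      # Character / style reference via IP-Adapter
      pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
      pipe.set_ip_adapter_scale(0.6)

      # Optional: a LoRA you trained for the character (hypothetical path)
      pipe.load_lora_weights("path/to/my_character_lora.safetensors")

      reference = load_image("reference_character.png")  # hypothetical reference image
      image = pipe(
          "storyboard frame, character kneeling in a warehouse",
          ip_adapter_image=reference,
          num_inference_steps=30,
      ).images[0]
      image.save("consistent_frame.png")
      ```

      Each extra conditioning model adds latency, which is the tradeoff mentioned above.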

  • @liscarscott
    @liscarscott a month ago

    This is great, but I can't set the area in the screen share node. The preview image in the node has a red overlay, and clicking "set area" doesn't bring up that window you have. I'm new to Comfy, please help!

    • @risunobushi_ai
      @risunobushi_ai a month ago

      Usually when clicking on set area a pop up asks you to give permissions to screen share something via the browser - did you get the pop up / did you allow it?

    • @liscarscott
      @liscarscott a month ago

      @@risunobushi_ai I did, and it's working overall, but I just can't get the "set region" button to work. I'll try today. It might be a browser plugin interfering.

    • @liscarscott
      @liscarscott a month ago

      @@risunobushi_ai I tried disabling all plugins and tried using a fresh install of Chrome. When I click "Set Area", a 1-pixel wide vertical white bar pops up on the right edge of the browser window. I suspect that's the window I need, but the "open popup" code within the node itself has a bug.

    • @liscarscott
      @liscarscott a month ago

      FIXED: I deleted the custom nodes folder for Mix-lab and re-installed through the manager. It works now. I must've borked something the first time I installed it.

  • @Polyroad3D
    @Polyroad3D 4 months ago +1

    Oh, I hope AI doesn't take our jobs away :)

  • @drmartinbartos
    @drmartinbartos 4 months ago

    If you find a style you like then could you feed the last pop-up preview in as a style transfer source, so essentially get fairly consistent rendering based on last style carried forwards..?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      With this setup, either you're really fast at saving the generation you like, or you can find the generations in the output folder until you close and relaunch ComfyUI, since I have appended a preview image node and not a save image node.
      But once you've found an image you like, you could add an IPAdapter style reference group. Note that this would slow down the generations a lot.

  • @anicbranko2556
    @anicbranko2556 2 months ago

    At 10:42 in the video - my ControlNet loader does not show any models, how can I fix this?

    • @risunobushi_ai
      @risunobushi_ai 2 months ago +1

      You need to download the ControlNet models into the ControlNet folder (comfyUI -> models -> ControlNet).
      You can find most ControlNet models via the manager (install custom models) and it should place them in the right folder automatically.
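
      If the manager route doesn't work, a small script can fetch a depth ControlNet into that folder manually; the repo ID, file name and local path below are assumptions to adapt to your install.

      ```python
      # Sketch: download a depth ControlNet checkpoint into ComfyUI's model folder.
      # Repo ID, file name and the ComfyUI path are assumptions - adjust to your setup.
      from huggingface_hub import hf_hub_download

      hf_hub_download(
          repo_id="lllyasviel/ControlNet-v1-1",       # SD 1.5 ControlNet collection
          filename="control_v11f1p_sd15_depth.pth",   # depth model
          local_dir="ComfyUI/models/controlnet",      # where the ControlNet loader node looks
      )
      ```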

    • @anicbranko2556
      @anicbranko2556 2 months ago

      @@risunobushi_ai Thank you!

  • @pranavahuja1796
    @pranavahuja1796 4 months ago

    I am a 3D artist and I am new to all of this. I guess this workflow can also be used to generate clothing product photos: we can add clothing to this avatar from Marvelous Designer and then generate product images. What I am confused about is how we generate a consistent face using a reference image.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Generating a consistent face is actually the easiest part: you want to use something like IPAdapter Face, FaceID, InstantID or other solutions like these. Generating actual products is a lot harder if you want to generate them with AI. Look into zero-shot image generation and VITON (virtual try-on) solutions; none of them are quite there yet, because the underlying tech, CLIPVision, doesn't "see" at a high enough resolution to keep all the details, logos and textures intact when generating.

  • @bipinpeter7820
    @bipinpeter7820 5 months ago

    Super cool!!

  • @jasonkaehler4582
    @jasonkaehler4582 4 months ago

    Very cool! Any idea why it doesn't update when I rotate the view in Blender? I have to manually 'Set Area' each time for it to regenerate the CN images (depth etc.). I don't want to use "Live Run", just kick off renders in ComfyUI manually. I would expect it to regenerate the CN images each time, but it doesn't... any suggestions? Thanks, great stuff! (oh, using the Lightning WF)

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I think that the way the screen share node is set up to work is by enabling live run only. I had the same “issue” when I first started testing it, and I didn’t realize that it wasn’t actually refreshing the image unless either of two things happened: using live recording, or resetting an area. So I think there’s no way around it.

    • @jasonkaehler4582
      @jasonkaehler4582 4 months ago

      @@risunobushi_ai Ah, got it. Thanks for the fast reply. This is unfortunate, as the workflow is so close to being fluid / responsive... very cool nonetheless. I set up a switcher for two different models at the front so I can move from Lightning to a better SDXL model more easily...

  • @sundayreview8152
    @sundayreview8152 5 months ago

    It would be great if there was a 3D-to-video software/app/plugin that could produce Sora-quality stable animation.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      If I were to do something like that, I wouldn’t go with this workflow. For quick storyboarding, sure, but for a full-on video I’d render the animation frame by frame and run it through an animatediff pipeline.
      That would still be different from Sora though, since what’s amazing about video generation is the ability to generate without an underlying reference and still keep a high degree of consistency

    • @PSA04
      @PSA04 5 months ago

      I think we are a very long time away from AI creating animations that you can tumble around in and that are grounded like Sora's. LLMs have a huge library of 2D references, but 3D is extremely different. As an animator, there is never one identical movement or shot; everything always has to be customized to the need in 3D.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago +1

      Oh absolutely. I’m not saying it can be done at the same level, or that it can be done without rendering the underlying 3D scene frame by frame, I’m saying that what’s great about Sora and the like is the object permanence regardless of an underlying “structure”. Whether that structure can or cannot be properly directed (and I haven’t seen anything that tells me it can), at least in terms of current production pipelines, is a completely different matter.

    • @PSA04
      @PSA04 5 months ago

      @@risunobushi_ai I agree. You are doing great things here. Keep going!

  • @gonardtronjuan
    @gonardtronjuan 4 months ago +2

    Nah, I'd rather have control over actual 3D models and materials.
    This is just a cheap way to produce a "final" looking result.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      Yep, that’s what it is in the eyes of someone who’s used to 3D. In the eyes of someone who’s not, it’s a *fast way to produce a “final” looking result, and to have more control over how it ends up looking.

  • @phu320
    @phu320 18 days ago

    janky, LOL.

  • @inspiredbylife1774
    @inspiredbylife1774 5 months ago +7

    Super excited to see the possibilities of SD. Following you for similar content!!!

  • @MDMZ
    @MDMZ 5 months ago +7

    This is so innovative, great video

  • @plinker439
    @plinker439 3 months ago +1

    Only some months have passed and I'm already fed up with AI-generated shit, it gets boring. :D

    • @risunobushi_ai
      @risunobushi_ai 3 months ago +1

      to each their own, I guess! when the hype settles, at the end of the day it's a matter of trying things out and seeing if one ends up liking it or finding it useful.

  • @FlorinGN
    @FlorinGN a month ago

    Amazing workflow. I am trying to use it for archviz quick iterations. It needs some tweaking, but I can see it working. Also, I haven't used Comfy since 1.5 was a big deal, and I am amazed at how fast XL models do their job.
    Thank you!

  • @phu320
    @phu320 18 days ago

    Why not create .blends entirely with AI or an LLM?

  • @spotsnap
    @spotsnap 5 months ago +2

    There is a depth option in Blender itself, which is perfect because it uses geometry data.
    It can be found in the render layers, I think, and viewed from the compositing layout. This way it won't require the Depth Anything node and will pick up objects in any lighting conditions.

    • @spotsnap
      @spotsnap 5 months ago

      The depth layer also works in the Eevee render engine.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      I’ll have to try it out, thanks for the heads up!

    • @noeldc
      @noeldc 4 months ago

      I was thinking this the whole time.

  • @animatic078
    @animatic078 a month ago

    😍 This is a very good one, thank you for your info 👍

  • @MrTPGuitar
    @MrTPGuitar 3 months ago

    Is there a node that takes an EXR (or other deep file formats) with multiple channels? It would be great if we could render out an EXR with depth, outline & cryptomatte or color mask to be loaded and separated for use with ControlNet. Screen sharing is good, but the depth and outline are just approximations when we have the complete data available in the 3D package/file format.

    • @risunobushi_ai
      @risunobushi_ai 3 months ago

      In terms of loading EXRs I've found this, by a great dev who's been helping me with another setup: github.com/spacepxl/ComfyUI-HQ-Image-Save
      But if you need to load those masks as ControlNet references, I'd export them separately and use them one by one with a load image node. This defeats the purpose of a real-time workflow, though, and I honestly don't know whether the EXR node can actually output depth, outline, etc. I think it was developed as a way to import / export 32-bit images rather than that.
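
      As an illustration of the "export them separately" route, a short script can pull the depth channel out of a multi-channel EXR and save it as a PNG for a regular load image node; the channel name ("Z") and the normalization convention are assumptions that depend on how the EXR was rendered.

      ```python
      # Sketch: extract a depth channel from an EXR and save a ControlNet-friendly PNG.
      # The channel name ("Z") and the "near = white" convention are assumptions.
      import OpenEXR, Imath
      import numpy as np
      from PIL import Image

      exr = OpenEXR.InputFile("frame_0001.exr")
      dw = exr.header()["dataWindow"]
      w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1

      pt = Imath.PixelType(Imath.PixelType.FLOAT)
      depth = np.frombuffer(exr.channel("Z", pt), dtype=np.float32).reshape(h, w)

      # Normalize to 0..1 and invert so closer objects are brighter
      depth = 1.0 - (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
      Image.fromarray((depth * 255).astype(np.uint8)).save("depth_for_controlnet.png")
      ```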

  • @PatriceFERLET
    @PatriceFERLET 23 days ago

    Why not use the manager to install plugins? On Linux it is safe (using a virtual env) and way easier (despite the fact that I'm a developer and comfortable with the terminal).

    • @risunobushi_ai
      @risunobushi_ai 21 days ago

      AFAIK it's the same thing - if you're using a venv and activate it before launching Comfy, installing via the manager is basically the same, just a bit more convenient. I usually do it via the terminal because there are some nodes where the requirements don't install properly, so I usually do a pip install -r requirements.txt round even if it should install correctly on the first try.

    • @PatriceFERLET
      @PatriceFERLET 21 days ago

      @@risunobushi_ai On Linux, at least, there isn't any difference except that it's "easier". I'm comfortable with the terminal, I'm a developer 😉, but I'm using ComfyUI from a laptop; ComfyUI is installed on a server (at home). The installation process does the "pip install" if "requirements.txt" exists. But I'm just surprised that Windows users use the terminal 😜

  • @victordelacruz8440
    @victordelacruz8440 4 months ago

    Where can you find the seed number of the SD composite on to the b3d results?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I’m sorry, could you rephrase the question? I’m not sure I understand what you mean

  • @amaru_zeas
    @amaru_zeas 5 months ago +3

    Awesome video as usual.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      Edit: whoops just saw your edit. Anyway yes there is a dedicated Blender node suite, but it’s too limited IMO, since all the nodes it has are hard coded into it, at least as far as I understand. So some might not work properly, some might not work entirely, and there’s no way to live source from the viewport, only from rendered images.
      You can find it here: github.com/AIGODLIKE/ComfyUI-BlenderAI-node

    • @amaru_zeas
      @amaru_zeas 5 months ago +1

      @@risunobushi_ai Actually you are right - even better, because you can use other DCCs, like Maya or PSD, since it's not tied to any app. Very cool man.

  • @davekite5690
    @davekite5690 5 months ago +1

    Fascinating - very interested to see how this develops... (e.g. 3D character training for consistency along with camera movement/posing/animation one day...) - I'd have thought using a live feed from the camera would be a great next step... Really good work.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago +2

      I’d be very surprised if there wasn’t a node for capturing live from a camera already! I’ll actually look into it

    • @davekite5690
      @davekite5690 5 months ago

      @@risunobushi_ai Super Cool ty.

  • @fairplex3883
    @fairplex3883 4 months ago +3

    Idk what to call this, so let's just call it "3D for noobs".

  • @yatima733
    @yatima733 2 months ago

    Great info, thank you very much. Btw, as someone pointed out in another comment, instead of generating the depth map in ComfyUI, it would be much more efficient to use the real information generated in Blender. Just use the depth pass in the compositor, add a Normalize node, an Invert, and adjust it to your liking with RGB Curves. If you connect that to a Viewer node, you will have the correct depth map in Blender's viewer to feed into ComfyUI.
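
    For anyone who wants to script that compositor setup, here is a rough bpy sketch of the same chain; node and socket names follow recent Blender versions and may differ slightly between releases.

    ```python
    # Sketch: depth pass -> Normalize -> Invert -> RGB Curves -> Viewer in the compositor.
    # Pass/socket names follow recent Blender versions and may need adjusting.
    import bpy

    scene = bpy.context.scene
    bpy.context.view_layer.use_pass_z = True  # enable the Z (depth) pass
    scene.use_nodes = True
    tree = scene.node_tree
    tree.nodes.clear()

    rl = tree.nodes.new("CompositorNodeRLayers")
    norm = tree.nodes.new("CompositorNodeNormalize")
    inv = tree.nodes.new("CompositorNodeInvert")
    curves = tree.nodes.new("CompositorNodeCurveRGB")
    viewer = tree.nodes.new("CompositorNodeViewer")

    links = tree.links
    links.new(rl.outputs["Depth"], norm.inputs[0])
    links.new(norm.outputs[0], inv.inputs["Color"])
    links.new(inv.outputs["Color"], curves.inputs["Image"])
    links.new(curves.outputs["Image"], viewer.inputs["Image"])
    ```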

  • @SjonSjine
    @SjonSjine 4 months ago

    Cool! How do you render an animation? For a 100-frame Blender animation, how do you sync this so ComfyUI renders all 100 frames?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I'm not the best at frame by frame animating, I'll be honest. This video follows a different approach, more heavily reliant on Blender than SD, but it's great: th-cam.com/video/8afb3luBvD8/w-d-xo.html
      I'll probably try my hand at frame by frame animation in the future, but for now that's a good starting point!

    • @SjonSjine
      @SjonSjine 4 months ago

      Thank you so much! Will check today! Have a good one!!

  • @GES1985
    @GES1985 3 months ago

    What about unreal engine? Can we use stable diffusion to make armor that fits the Unreal mannequin?

    • @risunobushi_ai
      @risunobushi_ai 3 months ago +1

      The base workflow - if you're thinking real time - would be the same, you'd just source from the UE viewport instead. The issue is that you'd be creating 2D images, and if you then wanted to turn them into 3D you'd need something like TripoSR (which is usable but not great) to turn the 2D designs into 3D assets.

    • @GES1985
      @GES1985 3 months ago

      @risunobushi_ai Thanks for the quick response! I keep hoping that Unreal is working on developing native tools to provide generative AI help, for both C++/Blueprint coding & 3D modeling!

  • @pranavahuja1796
    @pranavahuja1796 4 months ago

    I was wondering if there was a way to generate clothing product photos (like t-shirts), going from a flat-lay clothing image to a model wearing it.

  • @pongtrometer
    @pongtrometer 4 months ago

    Reminds me of using the camera/desktop feature in KREA AI … for live prompt generation

  • @AllExistence
    @AllExistence 4 months ago

    I think blender can create a separate window for any area, including viewport.

  • @TheErikRosengren
    @TheErikRosengren 3 months ago

    What's your definition of real-time?

    • @risunobushi_ai
      @risunobushi_ai 3 months ago

      I guess, since performance changes wildly depending on hardware, anything that does a generation in 2-3 seconds max. You'd still get a noticeable lag, but it'd be fast enough to keep up with a user working on a scene. HyperSDXL achieves that on good enough hardware; SDXL Lightning is a bit slower than that.

    • @TheErikRosengren
      @TheErikRosengren 3 months ago

      @@risunobushi_ai ok! I think it's a bit misleading to call it real time when there's a noticeable delay. I still enjoyed the video though!

  • @cekuhnen
    @cekuhnen 4 months ago

    Is this similar to tools like Krea, which render so incredibly fast?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      I haven’t used Krea at all, so I can’t be of much help there, sorry :/

  • @KalkuehlGaming
    @KalkuehlGaming 5 months ago

    I am waiting for AI to actually recompose the blank scene into what it created as a picture.
    Automatically tweaking the light, textures and objects to the point where the 3D view looks exactly like the picture.

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      In this example I only used depth as a ControlNet, but it would be possible to get a lot more precision if one were to use a combination of canny, lineart, normal maps, etc.
      By using depth only, some things do not get picked up, but other maps might pick those up.
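
      Outside ComfyUI, the "stack several ControlNets" idea looks roughly like this in diffusers; the model IDs, input images and conditioning scales are illustrative assumptions rather than the exact setup from the video.

      ```python
      # Sketch: combining depth + canny ControlNets for tighter structural control.
      # Model IDs, input images and conditioning scales are illustrative assumptions.
      import torch
      from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
      from diffusers.utils import load_image

      depth_cn = ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16)
      canny_cn = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)

      pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          controlnet=[depth_cn, canny_cn],
          torch_dtype=torch.float16,
      ).to("cuda")

      depth_map = load_image("viewport_depth.png")  # e.g. a depth map of the viewport frame
      canny_map = load_image("viewport_canny.png")  # e.g. edges extracted from the same frame

      image = pipe(
          "product photo, soft studio lighting",
          image=[depth_map, canny_map],
          controlnet_conditioning_scale=[0.8, 0.5],  # weight each map separately
      ).images[0]
      image.save("multi_controlnet_render.png")
      ```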

  • @francescocesarini9266
    @francescocesarini9266 4 months ago

    Super nice video! Thank you for sharing your knowledge. Just wanted to ask: in your opinion, is it better to install the custom nodes from the GitHub links or by searching for them directly in the ComfyUI manager?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Thanks! Functionally there’s no difference, if it’s up on the manager you can use either - the manager is just easier.

  • @Howtolaif
    @Howtolaif 4 months ago

    Your voice and expressions remind me of the architect from matrix.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      Well I am surrounded by screens, so yeah

  • @andykoala3010
    @andykoala3010 5 months ago +1

    Great video - going to try this with my Daz installation

  • @田蒙-u8x
    @田蒙-u8x 4 months ago

    Excellent video!
    Would you mind sharing your hardware specs with us?

    • @risunobushi_ai
      @risunobushi_ai 4 months ago

      Thanks! My hardware is good but not exceptional, I built my workstation two years ago.
      CPU: AMD 5900x
      GPU: Nvidia 3080ti
      RAM: 2x32gb
      Take into consideration that I don't speed up my recordings, but I'm screen capturing at 2k and processing the camera footage as well while I'm generating, so my generations are a bit slower on camera than they are off camera.

  • @MrBaskins2010
    @MrBaskins2010 4 months ago

    wow this is wild

  • @Mind_ConTroll
    @Mind_ConTroll 5 months ago

    Can you make a video on making a character model in comfy for use in blender?

    • @risunobushi_ai
      @risunobushi_ai 5 months ago

      The issue of making a character via comfyUI (either via TripoSR or the likes) is that it wouldn’t be rigged. And even if Mixamo has a tool for auto rigging characters, it more often misses and throws errors than it actually manages to autorig them

    • @Mind_ConTroll
      @Mind_ConTroll 5 months ago

      @@risunobushi_ai I have only been learning this stuff for a couple of months and I am close. I can make human characters: make the pictures in AI in different poses, then use FaceBuilder in Blender. But FaceBuilder only works for humans, and with those humans I can then use MetaHuman to rig and animate. If I could use AI to make a decent model of a nonhuman - so far the models have been broken, missing pieces and deformed from different angles, which I can only slightly refine in Blender - I should be able to pull that into MetaHuman to rig and animate. There has to be a way; being so close with my limited knowledge is driving me crazy.

  • @KananaEstate
    @KananaEstate 4 months ago +2

    Awesome workflow

  • @1lllllllll1
    @1lllllllll1 4 months ago +2

    This to me has always been the holy grail workflow. I’m excited to see this progression. Soon, we won’t need rendering at all anymore. Shaders are such an enormous time sink.

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +3

      I'm actually developing a comprehensive photography -> 3d -> relight -> preserve details -> final render / photo workflow for next Monday, we'll see how it works out.

    • @vitalis
      @vitalis 4 months ago

      @@risunobushi_ai that would be freaking awesome

  • @Shri
    @Shri 4 months ago

    This can be made way more efficient. For one: do away with live screen share of Blender altogether as it is taking a lot of compute. Instead just take screenshots or export image of scene on mouseup. Then have a custom ComfyUI node watch for changes to the export folder and re-render whenever the folder has a new file. The advantage of this method is that you have screenshots of all the various changes to the scene and can go back to any particular scene/pose you liked at any time. You have a history to go back and forth in ComfyUI. It would be even cooler to save the image of the rendered output so you can save all the metadata of the generated image (like seed, sampler config etc). That way you can correlate the saved screenshot with generated image and know which image can be passed to the control net to reproduce the same output (as you have saved both input image and generation metadata).
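
    That folder-watching idea can be prototyped in a few lines with the watchdog package; the paths below are placeholders, and the actual ComfyUI call (e.g. POSTing a workflow to its HTTP API) is left as a stub since it depends on your graph.

    ```python
    # Sketch: watch Blender's export folder and react to each newly saved frame.
    # Paths are placeholders; the ComfyUI trigger is left as a stub on purpose.
    import time
    from pathlib import Path
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    EXPORT_DIR = Path("blender_exports")  # hypothetical folder Blender exports screenshots to

    class NewFrameHandler(FileSystemEventHandler):
        def on_created(self, event):
            if event.is_directory or not event.src_path.endswith(".png"):
                return
            print(f"New frame: {event.src_path} -> queue a ComfyUI generation here")

    observer = Observer()
    observer.schedule(NewFrameHandler(), str(EXPORT_DIR), recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
    ```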

    • @risunobushi_ai
      @risunobushi_ai 4 months ago +1

      Yep, that's basically the workflow from my previous video (except we don't use a node that automatically expects new images, just a load image node loaded with the renders). For this one I wanted to automate the process as much as possible, regardless of how unoptimized it would end up.
      But yeah, I'm all for finding ways to optimize and make the workflow yours, so I agree there's room for improvement depending on what one wants to do with it!