🚀 Insane Flux Control - New Loader, Crazy Compositing & Secret Memory Hacks!

  • Published Dec 17, 2024

Comments • 115

  • @trashstratum · several months ago · +12

    DUDE. You are so amazing and thorough. I can only imagine the amount of work you put into getting to this level. You're providing incredible value. You deserve 10000x the following. Keep it up and I'm sure you'll get there. Thanks so much!

    • @GrocksterRox · several months ago · +1

      That is amazing feedback - I'm so glad that you and others find this valuable and hoping we can help reach out to many many new people (and subscribers) :)

    • @thefcraft8763 · several months ago · +1

      Hey, nice video! Can you make a video on how you make such a realistic-looking talking avatar that perfectly matches the voice? @GrocksterRox

  • @cr_cryptic · 15 days ago · +2

    I'm very new to all this. When I finally figured out how to get Flux installed, it kept taking forever to start. I thought it had broken a few times, it took so long. But eventually, when I waited it out, it started to render quicker. I think it's because the first few runs it's storing it in our cache, and it's HUGE; that's why it overworks our rigs. But once it's been loaded into the cache, it runs faster with every use... So it seems to me, and I use it offline.

    • @cr_cryptic · 15 days ago · +2

      But when I think about it, why do they make us download it and then duplicate it into our cache? Wouldn't it be faster, less bulky, and more user friendly if it just used the files from the massive .safetensors files we downloaded? 🤔

    • @GrocksterRox · 15 days ago

      Yup, so I believe unfortunately it has to store it on the video card because of the intense, fast calculations that the AI engine has to perform. The same calculations can of course be done in regular system memory or even on the hard drive, but they would be so egregiously slow that it would take hours for a single render. Right now it's an unfortunate fact of life, but VRAM is king for getting fast renders, so the more we can efficiently stuff into it, the better and faster the results.
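
As a rough illustration of the bandwidth gap described above (all figures are order-of-magnitude assumptions, not benchmarks):

```python
# Back-of-envelope: time to stream a Flux-sized checkpoint once,
# from three kinds of memory. All figures are rough assumptions.
model_gb = 12          # approximate size of a Flux checkpoint, GB
bandwidth_gbs = {
    "VRAM (GDDR6X)": 1000,   # GB/s
    "system RAM":      50,   # GB/s
    "NVMe SSD":         7,   # GB/s
}

for name, bw in bandwidth_gbs.items():
    ms = model_gb / bw * 1000
    print(f"{name}: ~{ms:.0f} ms per full pass over the weights")
```

Each sampling step touches essentially all of the weights, so over dozens of steps the gap compounds into seconds versus minutes or hours, which is why loaders copy the model into VRAM.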

  • @bartosak · several months ago · +5

    WOW, what a channel discovery! Thanks YT for the recommendation! Grockster, you have great potential. I will definitely follow your channel.

    • @GrocksterRox · several months ago

      Thank you so much for the kind words! I really appreciate the opportunity to help the AI community; feel free to pass this on to your network and friends as well. 💯

  • @vonmetternich9449 · several months ago · +4

    Hey Grockster, just a quick tip: instead of manually reconnecting nodes when you want to change your noise or model loader, try adding the "Any Switch" (rgthree) node, so the nodes that are not bypassed are automatically passed along. E.g. you connect your diffusion loader and your GGUF loader to the Any Switch, and that to the rest of your workflow. It passes on the first active connection it finds. It will make swapping things around much easier.

    • @GrocksterRox · several months ago · +1

      Ah, great thought, and I did try that. Unfortunately, since they are model loaders, keeping them active but connected to the Any Switch still caused them all to load, leading to out-of-memory issues. When I had them connected to the Any Switch but bypassed, the Any Switch node errored and complained that it was missing input. I'll have to experiment more; I tried several approaches without success, but I love your creative thinking!

  • @SouthbayCreations · several months ago · +2

    Another info packed video! So many great tips, tricks and the workflow is priceless!! Thank you very much for always sharing the info and thank you for the shout-out! 🙌🙌

    • @GrocksterRox · several months ago · +1

      Absolutely and thank you so much for that workflow beta navigation trick - will definitely be helpful for many in the community!

  • @Paulo-ut1li · several months ago · +2

    That workflow is just amazing! Thank you so much. A suggestion for future improvements: add a tweaker section to use the detail booster, and also the Lying Sigma sampler, attention seeker, and block buster nodes. Those last ones can also add some cool improvements sometimes.

    • @GrocksterRox · several months ago

      This is awesome feedback, thank you so much and I'll have to research a few of these other items you mentioned!

  • @g4p5l6 · several months ago · +1

    Huge. Looking forward to working with this... well presented and thanks for posting.

    • @GrocksterRox · several months ago · +1

      Absolutely, I'm glad it's helpful and was clear. Thanks so much for the feedback and feel free to share with others.

  • @muuuuuud · several months ago · +1

    Some really great info in this video, good work :)

    • @GrocksterRox · several months ago · +1

      Thank you so much, I really appreciate you listening in and sharing!

  • @ferniclestix · several months ago · +2

    Love the compositing bit, it's nice.

    • @GrocksterRox · several months ago

      Thanks so much - yup for quick and easy placement, this is definitely a win for everyone!

  • @Afr0man4peace · several months ago · +1

    Amazing work again... now I have even more stuff to test myself ;-) Keep up the good work!

    • @GrocksterRox · several months ago

      Absolutely, glad it can be helpful as always! My goal is continual learning, so I'm glad I can keep you on your toes 😁

  • @Pewi73 · several months ago · +2

    Amazing and inspirational! 😃

    • @GrocksterRox · several months ago

      Thank you so much for the amazing feedback and for sharing this video with others. 💯

  • @leadlayer · several months ago · +2

    You can also avoid dragging nodes around by dragging with the middle mouse button. This enables you to drag the workflow, even if you're on a node at the time.

    • @GrocksterRox · several months ago · +2

      That's a great suggestion. I tried that previously and while it's great for short drags/movement, it's a bit cumbersome for larger stretches of navigation. That's why I typically just hold down the space bar and left click drag. But thanks, it's definitely an option as well!

  • @VuTCNguyenArtist · several months ago · +2

    With the new interface, the memory monitoring from Crystools doesn't display on the top bar like yours... it's disappeared on mine... how does one fix that? I went back to the original interface layout and it appears on the manager bar like before...

    • @GrocksterRox · several months ago

      I think you have to update ComfyUI (it wasn't previously appearing for me too but once I updated, it displayed - you may need to use the forced update method I have in the video). Good luck!

    • @VuTCNguyenArtist · several months ago · +1

      @GrocksterRox Which specific update script should I run? I see 3 of them... should I run all 3?

    • @GrocksterRox · several months ago

      Just try the update-comfy batch file by itself. That should resolve it; otherwise you can run the one with dependencies, but that will take a bit longer to get through.

    • @VuTCNguyenArtist · several months ago · +1

      @GrocksterRox Yeah, that did not help... it says everything is already up to date. Not sure if I want to try the dependencies one :)

    • @VuTCNguyenArtist · several months ago · +1

      @GrocksterRox Nvm... I think I got it to work. I think when switching the layout (from legacy to the new one) it won't show up... until we restart ComfyUI!!!

  • @thefcraft8763 · several months ago · +3

    Hey, nice video! Can you make a video on how you make such a realistic-looking talking avatar that perfectly matches the voice?

    • @GrocksterRox · several months ago

      Great suggestion, I'll add it to the queue of topics.

  • @StephenFletcher-mx6tk · several months ago · +1

    Great workflow, mate, but how the heck do I add a new subject to the layering node etc.?

    • @GrocksterRox · several months ago

      It happens automatically after you un-bypass and render a subject, and then just rerun the workflow. So essentially:
      Step one: create a background
      Step two: enable the compositor group
      Step three: enable at least one subject
      Step four: render a subject
      It really is pretty easy, and if you want, we can walk through it if you jump on the Discord. Good luck!

  • @Dilfin90 · several months ago · +3

    Cool tips! Can you please tell me which software was used to create the talking head animation?

    • @gnoel5722 · several months ago

      Might be wrong, but it looked like the new Act-One update from Runway.

    • @Dilfin90 · several months ago

      @@gnoel5722 It's still just an assumption, though. I wonder what it really is.

    • @GrocksterRox · several months ago · +1

      I have a custom blend of work from Face Fusion, Hedra and Live Portrait.

    • @Dilfin90 · several months ago · +1

      @@GrocksterRox It's amazing!

  • @Uday_अK · several months ago · +1

    Such a great workflow, and it's so efficient.

    • @GrocksterRox · several months ago · +1

      So glad you like it - I'm really excited about how modular (but also not overwhelming) it is... Have been using it daily :)

  • @ArrowKnow · several months ago · +1

    Thank you! I've been waiting for some memory improvements. I'm curious if you've seen what the Invoke team are doing as their software does some amazing things with their canvas system. The latest version has Flux support, layers, and many other options that it seems like you would be able to use way better than I can.

    • @gnoel5722 · several months ago

      Invoke is a beast. I am so surprised they are not more popular. IMO it is because their website is very confusing, and it looks like you need to pay to have access to Invoke.

    • @GrocksterRox · several months ago

      Definitely! I heard about canvas system updates happening, but haven't been too deep in the latest developments/releases

  • @gameblasted · 14 days ago · +1

    Great video and very thorough!! Liked and followed! I do have an issue, though: I feel like I've done everything you've mentioned here, and I've triple-checked that my settings are the same as yours, but when I generate a picture, no matter what it is, the detail is very low, it's blurry, and the pictures come out with this weird texture. Any idea what the problem might be?

    • @GrocksterRox · 14 days ago

      Thanks for the kind feedback. It's a bit hard to diagnose, but the texture issue sounds like upscale noise may be in use when the model can't support it. Happy to help you a bit more if you want to jump on the Discord channel and we can see what's going on.

  • @AI_Creatives_Toolbox · several months ago · +1

    Amazing Content! Thank you! Just out of curiosity, why aren't you using the new comfy GUI?

    • @GrocksterRox · several months ago

      Thank you so much! If you mean the new comfy GUI as a standalone executable, I didn't see any real benefit between that versus through a webpage. Re: the new toolbar, I just have to become more comfortable with it since I've been using the existing interface since before SDXL :)

  • @bgtubber · several months ago · +2

    Very useful tips. Thanks! Does the RAM optimization trick shown at 21:53 affect rendering speed or image quality in any way?

    • @GrocksterRox · several months ago · +2

      I haven't seen any impact on image quality, and while I haven't done extensive metric-based testing on overall loading times, I haven't noticed any significant increase in initial model load time; once the model is loaded into memory, it kicks out new images right away as expected. I've been using it for several weeks now and it's been great (no observable slowdowns).
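
For context on this kind of launch-line tweak (the exact flags from the video aren't quoted in this thread, so the value below is an illustrative assumption, not the video's setting): PyTorch reads its CUDA allocator options from an environment variable that must be set before torch is imported, i.e. before ComfyUI starts.

```python
import os

# Illustrative only: enable PyTorch's expandable-segments allocator mode,
# which can reduce VRAM fragmentation. Must be set before `import torch`
# runs - in practice, in the .bat/.sh file that launches ComfyUI.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

In a Windows launcher this corresponds to a `set PYTORCH_CUDA_ALLOC_CONF=...` line placed before the line that starts ComfyUI.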

    • @bgtubber · several months ago · +1

      @@GrocksterRox Awesome! :)

  • @RhysAndSuns · several months ago · +1

    Thanks for that. Just a heads-up that the args memory trick will slow down generation times and change output composition.

    • @GrocksterRox · several months ago

      Interesting - I hadn't noticed any slowdown. Can you tell me more about the output composition change? What have you noticed from a side-by-side comparison perspective?

    • @RhysAndSuns · several months ago · +1

      @GrocksterRox I think the args you suggested are just dependent on the setup. On an A4500M 16GB, I get about 2.1 s/it normally, and about 4 s/it with the arg changes. The memory changes shift my Flux generations from fully loaded to partially loaded, so the generations are different and less coherent.

    • @GrocksterRox · several months ago · +1

      Thanks, I'll continue to monitor but haven't seen anything substantial yet.

  • @JagatSingh-me8ko · several months ago · +1

    Wow, amazing! Could you put up a tutorial on how you created the lip-sync avatar that was talking?

    • @GrocksterRox · several months ago

      It's a home brew, but a good place to start is Hedra (they have a free trial) - www.hedra.com/

  • @PeterLunk · several months ago · +2

    Nice 1 !

    • @GrocksterRox · several months ago

      Thanks! Enjoy and please share with the community, Reddit, the world :)

  • @jasontaylor4582 · several months ago · +1

    Thanks for this! Do you ever have problems with 'Anything Everywhere' not working properly? (Just did a full update.) It's maddening trying to use WFs that use it (like yours) and then having to figure out where everything really goes to get them to run..... Thanks! Keep up the great work!

    • @GrocksterRox · several months ago

      Yup, it's happened before. I found that either making sure you don't have duplicate pointers or updating the Comfy version seems to help. Good luck!

    • @jasontaylor4582 · several months ago · +1

      @GrocksterRox I see. Can you elaborate on the 'duplicate pointers'? What is that, or how would I go about starting to debug it?

    • @GrocksterRox · several months ago · +1

      Yup, so if you're using the "Anything Everywhere?" node and have the same name in the input_regex field in two different places in your workflow, the node can get confused and just shuts down without any errors or indication of how to resolve it. You then have to go through all your nodes and find where these duplicates may be happening. It was definitely a HUGE pain to diagnose and fix in the past - I'm hoping the developer will add some checks to make it easier to resolve in the future.
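
The duplicate-regex failure can be sketched generically. This is a toy router, not the Anything Everywhere node's actual code (its internals are an assumption here); it just shows why two broadcasters matching the same input name leave no well-defined source to connect:

```python
import re

# Toy model of regex-based broadcasting: each broadcaster claims the
# inputs whose names match its pattern.
broadcasters = {
    "loader_a": r"model.*",
    "loader_b": r"model.*",   # duplicate pattern -> ambiguous routing
}

def route(input_name):
    """Return the unique broadcaster feeding this input, or None."""
    matches = [src for src, pattern in broadcasters.items()
               if re.fullmatch(pattern, input_name)]
    if len(matches) > 1:
        raise ValueError(f"'{input_name}' matched multiple sources: {matches}")
    return matches[0] if matches else None
```

Here `route("model_in")` raises, which mirrors the silent dead-end described above: with duplicate patterns, no single source can be chosen.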

    • @sven1858 · several months ago · +1

      @GrocksterRox Personally, I prefer the get/set nodes.

  • @IanLeBot-dk · several months ago · +2

    Nice! I'm new to this. Is there any way to modify this workflow to create consistent LoRA characters instead of prompted characters/objects? I would love to be able to compose a scene with multiple LoRA characters that stay consistent.

    • @GrocksterRox · several months ago · +1

      If you're asking whether you can use LoRAs to influence the composition to include those characters, the answer is yes. You can bring those LoRAs into the subject-creation process, and then include them again in the final img2img to re-influence the characters (so those particular details don't melt away). That allows you to prompt/ControlNet poses, etc., and still make sure all the specific details pull through. Hope that makes sense, and good luck!

  • @sinuva · several months ago · +1

    Where are youuu? We need more!!!!! =D

    • @GrocksterRox · several months ago

      I'm so excited that you're excited! I was testing several new flux models and have a VERY exciting video on the way. Get your friends and colleagues excited, because this next video is SUPER COOL. 😁😁😁

  • @Shingo_AI_Art · several months ago · +1

    With my 3060 it's not just LoRAs; just changing the prompt makes Comfy reload everything each time, to the point that I just went back to my Pony models.

    • @GrocksterRox · several months ago

      Understood - yeah, it's a bit tough, especially with less VRAM, but hopefully new innovations will come out to make Flux within reach for everyone. Note that there are also free sites that let you play with Schnell (e.g. www.piclumen.com/)

  • @freekhitman9916 · several months ago · +1

    Really good video again.

    • @GrocksterRox · several months ago

      Thank you so much - please feel free to share the educational wealth with others!

  • @k0ta0uchi · several months ago · +1

    Thanks for the amazing workflow! It's working perfectly!
    However, I can't seem to apply multiple LoRAs. Is there any way to make this work?

    • @GrocksterRox · several months ago · +1

      Are you using the power LORA loader that's set up in the flow? If so, you just click the add LORA button and you can easily choose extra loras. Make sure the Loras are for flux (if everything else is set up for flux) since SDXL Loras are incompatible and vice versa

    • @k0ta0uchi · several months ago · +1

      @@GrocksterRox Thank you for your reply! The LoRAs I set with Power LoRA are working. I'm using different LoRAs for each subject, and the subjects are being created correctly. However, should I apply those LoRAs when I finally do Img2Img? If I do, the two LoRAs seem to blend together, and the subject ends up looking strange... Conversely, if I don't use them, the subject changes into a different person.

    • @GrocksterRox · several months ago · +1

      @k0ta0uchi Funny timing - check back later today for a video with that answer (the answer involves SEGS, so that you're only modifying a portion of your image).

    • @k0ta0uchi · several months ago · +1

      @@GrocksterRox That's incredibly exciting news!! I'm really looking forward to the new video!! Thank you!!

    • @GrocksterRox · several months ago

      @k0ta0uchi it's now live. Thanks for watching and sharing! 💯💥❤️

  • @electrolab2624 · several months ago · +1

    🤗 THX! It's a very clear tutorial with many helpful tips. Can't wait to try the new compositing node!
    Can you quickly paste the line we can add to our .bat file here (or in the description above)? I need new glasses and it's kinda long-ish..
    Would be nice if a preview node had sliders for levels, saturation, brightness, and contrast, and then let you save the result directly. (Just sharing a thought, not a demand 😄)

    • @GrocksterRox · several months ago

      Hi - it's in the linked resource in the description: civitai.com/models/895350/video-tutorial-resources-flux-controlnet-ez-compositor-memory-boost-bonus
      That's a great thought, and there are nodes that can easily do that. I've found it's honestly simpler to just open the image in Photopea and do quick live adjustments that way; otherwise you have to change a setting, re-render, change, re-render, etc.

  • @fulmine883 · several months ago · +1

    Can you tell me what you used to create that speaking avatar (with such perfect lip-sync) at the beginning of the video, please?

    • @GrocksterRox · several months ago · +1

      Hi - it's a combination of Live Portrait, Hedra and Face Fusion

  • @kymatekk · several months ago · +1

    Any guidance on how to create these speaking avatars would be great! Did you use Live Portrait to create this avatar?

    • @GrocksterRox · several months ago · +2

      I tend to switch between and merge techniques from Live Portrait, Hedra and FaceFusion. It really depends on your goal, the length of the video, etc. I definitely recommend using a head that fills most of the frame, but not so much that weird warping can happen. Good luck!

    • @soulacrity7498 · several months ago · +1

      @GrocksterRox Which do you recommend for extremely long videos, like hours long? A live video solution would be best, so definitely open source; if not live, then something for very long videos. I would also like to use my own avatar if possible.

    • @GrocksterRox · several months ago

      For long-form, I would go with live portrait

  • @svt8253ai · 22 days ago · +1

    Can you make a tutorial on how to make the avatar talk like in this video? Can it be done in ComfyUI?

    • @GrocksterRox · 22 days ago

      Thanks - it's on the list (a new video is coming out soon, but not on this topic). Definitely check out Live Portrait; it could get you to your goal (and yes, it's in Comfy).

  • @TUSHARGOPALKA-nj7jx · several months ago · +1

    Great workflow, but somehow pressing Continue on the Compositor restarts the entire workflow from the beginning instead of moving the image to preview. Can you tell me how to solve this issue?

    • @GrocksterRox · several months ago · +1

      Thanks so much! You'll want to make sure the previous samplers all have fixed seeds, otherwise it'll try to run them again. That being said, even if everything is fixed, it may run through the samplers again (but as a skip-through; it shouldn't re-render everything). Happy to chat about it more on Discord if it's plaguing you. Good luck!
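
The fixed-seed advice follows from input-keyed output caching: a node re-runs only when one of its inputs changed, and a randomized seed is a changed input. A minimal sketch of that idea (illustrative only, not ComfyUI's actual executor):

```python
# Cache keyed by a node's inputs: the same (seed, prompt) reuses the old
# result; any change re-executes. A randomized seed therefore forces a
# re-run every queue, while a fixed seed lets the step be skipped.
cache = {}
runs = 0

def sample(seed, prompt):
    global runs
    key = (seed, prompt)
    if key not in cache:
        runs += 1                       # pretend this is the slow render
        cache[key] = f"image({seed}, {prompt!r})"
    return cache[key]

sample(42, "castle")   # executes
sample(42, "castle")   # cache hit: skipped, not re-rendered
sample(43, "castle")   # seed changed -> executes again
print(runs)            # 2
```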

  • @martinkaiser5263 · several months ago · +1

    Which tool do you use for the narrator?

    • @GrocksterRox · several months ago

      The narrator voice is my own 😀

    • @martinkaiser5263 · several months ago · +1

      @@GrocksterRox I meant the animated face :-)

    • @GrocksterRox · several months ago · +1

      Ah, I use a blend (based on scenario) of several tools out there including Live Portrait, Hedra, Face Fusion and Reactor

    • @martinkaiser5263 · several months ago · +1

      @@GrocksterRox Thanks !

  • @ThunderPokee · several months ago · +1

    Bro, what did you use for that talking character?

    • @GrocksterRox · several months ago

      I typically switch between or blend Live Portrait, Hedra, Face Fusion and other tools.

  • @EternalKernel · several months ago · +1

    Where can I get this Colossus model? And where is this Flux leaderboard, please?

    • @GrocksterRox · several months ago

      It's in the video description, but I've posted here too (go to the third tab / model assessment) - docs.google.com/spreadsheets/d/1543rZ6hqXxtPwa2PufNVMhQzSxvMY55DMhQTH81P8iM/edit?usp=sharing

  • @LouisGedo · several months ago · +1

    Hi 👋

    • @GrocksterRox · several months ago

      Hi there, hope you enjoyed! :)

  • @abhinavbisht9851 · several months ago · +1

    Bro, what are your PC specs for running ComfyUI?

    • @GrocksterRox · several months ago

      4090, but otherwise mid-level PC

    • @abhinavbisht9851 · several months ago

      @GrocksterRox 🤣🤣🤣 Okay, so a 4090 PC is now a mid-level PC... I wish I could afford such a mid-level PC with a 4090...

  • @2008spoonman · several months ago · +1

    Hm, that memory booster command-line tip just creates more noise and artifacts in my end results...

    • @GrocksterRox · several months ago

      Hi - I just did a side-by-side (same seed, config, everything) and didn't see any additional noise. There were a few slight variations in subject matter, but very minuscule (and again, no degradation in image quality from what I saw). I posted the side-by-side on my Discord channel here, if interested: discord.gg/RXKgquKK7v
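
A same-seed A/B comparison like this can be quantified instead of eyeballed. A minimal pure-Python sketch over flat pixel lists (the image size and noise amplitude are arbitrary assumptions):

```python
import random

def mean_abs_diff(img_a, img_b):
    """Mean absolute per-pixel difference between two flat pixel lists."""
    assert len(img_a) == len(img_b)
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

# Stand-in "renders": an image compared with itself scores 0;
# the same image plus simulated noise scores above 0.
random.seed(0)
base = [random.randrange(256) for _ in range(64 * 64)]
noisy = [min(255, max(0, p + random.randint(-8, 8))) for p in base]

print(mean_abs_diff(base, base))    # 0.0
print(mean_abs_diff(base, noisy))   # small but non-zero
```

Running the same comparison on two real renders (same seed, with and without the launch flags) turns "looks noisier to me" into a number you can track.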

    • @2008spoonman · several months ago · +2

      I always start my new "Flux day" with the last rendered image from the day before (drag & drop the image into the default screen). With your memory boost command my image was almost identical, but with a lot of noise (like a TV screen capture from the 90s). Maybe fiddling with VRAM settings is not the correct way; Flux needs all the memory it can get.

    • @GrocksterRox · several months ago

      @2008spoonman That may be the denim effect I was mentioning, but it's due to the type of noise. If you're using upscaled noise, replace it with simple random noise instead; that's what I found solves the issue.

  • @697_ · several months ago · +1

    When I hear this AI voice I just think of that robot handing the guy a sandwich and putting away the dishes.

    • @GrocksterRox · several months ago

      Haha, I should definitely market my voice out in that case :)