DUDE. You are so amazing and thorough. I can only imagine the amount of work you put into getting to this level. You're providing incredible value. You deserve 10000x the following. Keep it up and I'm sure you'll get there. Thanks so much!
That is amazing feedback - I'm so glad that you and others find this valuable, and I'm hoping we can reach many, many new people (and subscribers) :)
I'm very new to all this. When I finally figured out how to get Flux installed, it kept taking forever to start. I thought it broke a few times because it took so long, but eventually, when I waited it out, it started to render quicker. I think it's because the first few runs it's loading that HUGE model into our cache, and that's what overworks our hardware - but once it's in the cache, it runs faster every use... At least that's how it seems to me, and I use it offline.
But when I think about it, why do they make us download it and then duplicate it into our cache? Wouldn't it be faster, less bulky, and more user-friendly if it just used the massive .safetensors files we downloaded directly? 🤔
Yup, so unfortunately I believe it has to store the model on the video card because of the intense, fast calculations the AI engine has to perform. The same calculations can of course be done in regular system memory or even from the hard drive, but they would be so egregiously slow that a single render could take hours. Right now it's an unfortunate fact of life: VRAM is king for fast renders, so the more we can efficiently fit into it, the better and faster the results.
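To put rough numbers on why offloading hurts (ballpark figures of my own, not from the video - the checkpoint size and bandwidths below are assumptions): a ~12 GB Flux checkpoint re-read over a ~25 GB/s PCIe/system-RAM path adds roughly half a second of pure transfer per sampling step, while weights already sitting in ~1 TB/s VRAM add almost nothing. A tiny Python back-of-the-envelope:

model_gb = 12      # rough size of an FP8 Flux checkpoint (assumption)
pcie_gb_s = 25     # realistic PCIe 4.0 x16 transfer rate (assumption)
vram_gb_s = 1000   # high-end GDDR6X bandwidth (assumption)

per_step_offloaded = model_gb / pcie_gb_s   # ~0.48 s of transfer per sampling step
per_step_resident = model_gb / vram_gb_s    # ~0.01 s per step
print(f"offloaded: ~{per_step_offloaded:.2f}s/step, resident: ~{per_step_resident:.3f}s/step")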
WOW - what a channel discovery! Thanks, YT, for this recommendation! Grockster, you have great potential - I will definitely follow your channel.
Thank you so much for the kind words! I really appreciate the opportunity to help the AI community - and feel free to pass this on to your network and friends as well. 💯
Hey Grockster, just a quick tip: instead of manually reconnecting nodes when you want to change your noise or model (loader), try adding the "Any Switch" (rgthree) node, so the nodes that aren't bypassed are automatically passed along. E.g. you connect your diffusion loader and your GGUF loader to the Any Switch, and that to the rest of your workflow. It passes on the first active (connected) input it finds. It will make your workflow much easier for swapping things around.
Ah, great thought, and I did try that, but unfortunately since they are model loaders, keeping them active but connected to the Any Switch still caused them all to load, which led to out-of-memory issues. When I had them connected to the Any Switch but bypassed, the Any Switch node errored/complained that it was missing an input. I'll have to experiment more - I tried several pathways without success - but I love your creative thinking!
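For anyone following along, the behavior being described is basically "pass along the first active input" - here's a minimal Python sketch of that selection idea (my own illustration, not the actual rgthree implementation; bypassed loaders are assumed to arrive as None):

def any_switch(*inputs):
    # Return the first input that is actually active (not None)
    for value in inputs:
        if value is not None:
            return value
    # Nothing active: this mirrors the "missing input" complaint mentioned above
    raise ValueError("Any Switch: no active input connected")

# Example: only the second loader is active, so its model gets passed on
model = any_switch(None, "gguf_model", None)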
Another info packed video! So many great tips, tricks and the workflow is priceless!! Thank you very much for always sharing the info and thank you for the shout-out! 🙌🙌
Absolutely and thank you so much for that workflow beta navigation trick - will definitely be helpful for many in the community!
That workflow is just amazing! Thank you so much. A suggestion for future improvements: add a tweaker section to use the detail booster and also the lying sigma sampler, attention seeker, and block buster nodes. Those last ones can also add some cool improvements sometimes.
This is awesome feedback, thank you so much and I'll have to research a few of these other items you mentioned!
Huge. Looking forward to working with this... well presented and thanks for posting.
Absolutely, I'm glad it's helpful and was clear. Thanks so much for the feedback and feel free to share with others.
Some really great info in this video, good work :)
Thank you so much, I really appreciate you listening in and sharing!
love the compositing bit, it's nice.
Thanks so much - yup for quick and easy placement, this is definitely a win for everyone!
Amazing work again... Now I have even more stuff to test myself ;-) Keep up the good work
Absolutely, glad it can be helpful as always! My goal is continual learning, so I'm glad I can keep you on your toes 😁
Amazing and inspirational! 😃
Thank you so much for the amazing feedback and for sharing this video with others. 💯
You can also avoid accidentally dragging nodes around by panning with the middle mouse button. This lets you drag the whole workflow, even if your cursor is on a node at the time.
That's a great suggestion. I tried that previously and while it's great for short drags/movement, it's a bit cumbersome for larger stretches of navigation. That's why I typically just hold down the space bar and left click drag. But thanks, it's definitely an option as well!
With the new interface, the memory monitoring from Crystools doesn't display on the top bar like yours... it's disappeared on mine... how does one fix that? When I went back to the original interface layout, it appears on the manager bar like before...
I think you have to update ComfyUI (it wasn't previously appearing for me either, but once I updated, it displayed - you may need to use the forced update method I show in the video). Good luck!
@@GrocksterRox Which specific update script should I run? I see 3 of them... should I run all 3?
Just try the update-comfy batch file by itself. That should resolve it, otherwise you can do the one with dependencies but that will take a bit longer to get through
@@GrocksterRox Yea, that did not help... it says everything is already up to date. Not sure if I want to try the dependencies one :)
@@GrocksterRox nvm... I think I got it to work. I think when switching the layout (from legacy to the new one) it won't show up... until we restart ComfyUI!!!
Hey nice video, can you make a video on how you make such type of realistic looking talking avatar like it perfectly matches to the voice
Great suggestion, I'll add it to the queue of topics
Great workflow mate, but how the heck do I add a new subject to the layering node etc.?
It happens automatically after you un-bypass and render a subject, and then just rerun the workflow. So essentially,
Step one: create a background
Step two: enable the compositor group
Step three: enable at least one subject
Step four: render a subject
It really is pretty easy, and if you want we can walk through it if you jump on the discord. Good luck!
Cool tips! Can you please tell me which software was used to create the talking head animation?
I might be wrong, but it looked like the new Act-One update from Runway.
@@gnoel5722 It's still just an assumption, though. I wonder what it really is.
I have a custom blend of work from Face Fusion, Hedra and Live Portrait.
@@GrocksterRox It's amazing!
Such a great workflow, and it's so efficient
So glad you like it - I'm really excited about how modular (but also not overwhelming) it is... Have been using it daily :)
Thank you! I've been waiting for some memory improvements. I'm curious if you've seen what the Invoke team are doing as their software does some amazing things with their canvas system. The latest version has Flux support, layers, and many other options that it seems like you would be able to use way better than I can.
Invoke is a beast. I am so surprised they are not more popular. IMO it is because their website is very confusing and it looks like you need to pay to have access to Invoke.
Definitely! I heard about canvas system updates happening, but haven't been too deep in the latest developments/releases
Great video and very thorough!! Liked and followed! I do have an issue though: I feel like I've done everything you've mentioned here, and I've triple-checked that my settings are the same as yours, but when I generate a picture, no matter what it is, the detail is very low, it's blurry, and the pictures come out with this weird texture. Any idea what the problem might be?
Thanks for the kind feedback. It's a bit hard to diagnose, but the texture issue sounds like upscale noise possibly being used when the model can't support it. Happy to help you a bit more if you want to jump on the Discord channel and we can see what's going on.
Amazing Content! Thank you! Just out of curiosity, why aren't you using the new comfy GUI?
Thank you so much! If you mean the new Comfy GUI as a standalone executable, I didn't see any real benefit to that versus running it through a webpage. Re: the new toolbar, I just have to become more comfortable with it, since I've been using the existing interface since before SDXL :)
Very useful tips. Thanks! Does the RAM optimization trick shown at 21:53 affect rendering speed or image quality in any way?
I haven't seen any impact on image quality. While I haven't done extensive metric-based testing on overall loading times, I haven't noticed any significant increase for the initial model load, and once the model is loaded into memory, it kicks out new images right away as expected. I've been using it for several weeks now and it's been great (no observable slowdowns).
@@GrocksterRox Awesome! :)
Thanks for that. Just a heads up that the args memory trick will slow down generation times and change output composition
Interesting - I hadn't noticed any slowdown. Can you tell me more about the output composition change? What have you noticed from a side-by-side comparison perspective?
@@GrocksterRox I think the args you suggested are just dependent on the setup. On an A4500M 16GB, I get about 2.1 s/it normally, and about 4 s/it with the arg changes. The memory changes shift my Flux generations from fully loaded to partially loaded, so the generations are different and less coherent.
Thanks, I'll continue to monitor but haven't seen anything substantial yet.
Wow, amazing. Could you put up a tutorial on how you created the talking lip-sync avatar?
It's a home brew, but a good place to start is Hedra (they have a free trial) - www.hedra.com/
Nice 1 !
Thanks! Enjoy and please share with the community, Reddit, the world :)
Thanks for this! Do you ever have problems with 'Anything Everywhere' not working properly? (I just did a full update.) It's maddening trying to use workflows that use it (like yours) and then having to figure out where everything really goes to get them to run... Thanks! Keep up the great work!
Yup, it's happened before. I found that either #1 making sure you don't have duplicate pointers or #2 updating the Comfy version seems to help. Good luck!
@@GrocksterRox I see. Can you elaborate on the 'duplicate pointers'? What are those, and how would I go about starting to debug this?
Yup, so if you're using the "Anything Everywhere?" node and have the same name in the input_regex field in two different places in your workflow, the node can get confused and just shut down without any errors or indication of how to resolve it. You then have to go through all your nodes and find where these duplicates may be happening. It was definitely a HUGE pain to diagnose/fix in the past - I'm hoping the developer will add some sort of checks to make it easier to resolve in the future.
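If anyone wants to hunt for those duplicates without clicking through every node, a quick script over the exported workflow JSON can flag them. This is just a hypothetical helper sketch in Python, not part of the node pack - it assumes the exported file has a top-level "nodes" list, that the node type string is "Anything Everywhere?", that the regex strings sit in each node's widgets_values, and that "my_workflow.json" is a placeholder path:

import json
from collections import Counter

def find_duplicate_regexes(workflow_path):
    # Flag regex strings that appear on more than one "Anything Everywhere?" node
    with open(workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    patterns = []
    for node in workflow.get("nodes", []):
        if node.get("type") == "Anything Everywhere?":
            # widgets_values is assumed to hold the title/input/group regex strings
            patterns.extend(v for v in node.get("widgets_values", []) if isinstance(v, str) and v.strip())

    return [p for p, n in Counter(patterns).items() if n > 1]

print(find_duplicate_regexes("my_workflow.json"))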
@@GrocksterRox personally I prefer the get/set nodes
Nice! I'm new to this - is there any way to modify this workflow to create consistent LoRA characters instead of prompted characters/objects? I would love to be able to compose a scene with multiple LoRA characters that stay consistent.
If I understand your question correctly - if you're asking whether you can use LoRAs to influence the composition to have those characters - the answer is yes. You can bring in those LoRAs in the subject creation process, and then you include the LoRAs in the final img2img to re-influence the characters (so those particular details don't melt away). That would allow you to prompt/ControlNet poses, etc. and then make sure all the specific details pull through. Hope that makes sense, and good luck!
Where are uuu ? we need more !!!!! =D
I'm so excited that you're excited! I was testing several new flux models and have a VERY exciting video on the way. Get your friends and colleagues excited, because this next video is SUPER COOL. 😁😁😁
With my 3060 it's not just LoRAs - just changing the prompt makes Comfy reload everything each time, to the point that I just went back to my Pony models.
Understood - yeah, it's a bit tough, especially with less VRAM, but hopefully new innovations will come out to put Flux within reach for everyone. Note that there are also free sites that let you play with Schnell (e.g. www.piclumen.com/)
really good video again
Thank you so much - please feel free to share the educational wealth with others!
Thanks for the amazing workflow! It's working perfectly!
However, I can't seem to apply multiple LoRAs. Is there any way to make this work?
Are you using the Power LoRA Loader that's set up in the flow? If so, you just click the Add LoRA button and you can easily choose extra LoRAs. Make sure the LoRAs are for Flux (if everything else is set up for Flux), since SDXL LoRAs are incompatible, and vice versa.
@@GrocksterRox Thank you for your reply! The LoRAs I set with Power LoRA are working. I'm using different LoRAs for each subject, and the subjects are being created correctly. However, should I apply those LoRAs when I finally do Img2Img? If I do, the two LoRAs seem to blend together, and the subject ends up looking strange... Conversely, if I don't use them, the subject changes into a different person.
@k0ta0uchi Funny timing - wait until later today for a video with that answer (the answer will be with SEGS, so that you're only modifying a portion of your image).
@@GrocksterRox That's incredibly exciting news!! I'm really looking forward to the new video!! Thank you!!
@k0ta0uchi it's now live. Thanks for watching and sharing! 💯💥❤️
🤗 THX! It's a very clear tutorial with many helpful tips - Can't wait to try the new compositing node!
Can you quickly paste the line we can paste into our .bat file here? (Or in the description above.) I need new glasses and it's kinda long-ish...
Would be nice if on a preview node we could have sliders for levels, saturation, brightness, and contrast, and then be able to save the result directly. (Just sharing a thought, not a demand 😄)
Hi - it's in the linked resource in the description: civitai.com/models/895350/video-tutorial-resources-flux-controlnet-ez-compositor-memory-boost-bonus
That's a great thought, and there are nodes that can easily do that, but I've found it's honestly simpler to just open the image in Photopea and make quick live adjustments that way. Otherwise you have to change the setting, re-render, change, re-render, etc.
Can you tell me what you used to create that speaking avatar (with such perfect lip-sync) at the beginning of the video, please?
Hi - it's a combination of Live Portrait, Hedra and Face Fusion
Any guidance on how to create these speaking avatars would be great! Did you use Live Portrait to create this avatar?
I tend to alternate between and merge techniques from Live Portrait, Hedra, and FaceFusion. I think it really depends on your goal, the length of the video, etc. I definitely recommend using a head that fills most of the frame, but not so much that weird warping can happen. Good luck!
@@GrocksterRox Which do you recommend for extremely long videos, like hours long? A live video solution would be best, so definitely open source - and if not live, then something suited to very long videos. I would also like to use my own avatar if possible.
For long-form, I would go with Live Portrait
Can you make a tutorial on how to make the avatar talk like in this video? Can it be done in ComfyUI?
Thanks - it's definitely on the list (new video coming out soon but not this topic). Definitely check out Live Portrait, it could definitely get you to your goal (and yes, it's in Comfy).
Great workflow, but somehow pressing continue on the Compositor restarts the entire workflow from the beginning instead of moving the image to the preview. Can you tell me how to solve this issue?
Thanks so much! You'll want to make sure the previous samplers all have fixed seeds, otherwise it'll try to run them again. That being said, if everything is fixed, it may still run through the samplers again, but it does it as a skip-through - it shouldn't re-render everything. Happy to chat about it more on Discord if it's plaguing you. Good luck!
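For anyone wondering why fixed seeds matter here: Comfy only re-executes a node when its inputs have changed, so a fixed seed lets the cached sampler output be reused. Here's a rough conceptual sketch of that caching idea in Python (my own illustration, not ComfyUI's actual executor):

_cache = {}

def run_node(node_name, render_fn, **inputs):
    # Re-run a node only when its inputs (including the seed) have changed
    key = (node_name, tuple(sorted(inputs.items())))
    if key not in _cache:
        _cache[key] = render_fn(**inputs)   # the expensive render happens here
    return _cache[key]                      # otherwise it's a quick skip-through

# Fixed seed and prompt -> the second call is a cache hit and nothing re-renders
render = lambda seed, prompt: f"render({seed}, {prompt})"
image1 = run_node("sampler", render, seed=42, prompt="castle")
image2 = run_node("sampler", render, seed=42, prompt="castle")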
Which tool do you use for the narrator?
The narrator voice is my own 😀
@@GrocksterRox I meant the animated face :-)
Ah, I use a blend (based on scenario) of several tools out there including Live Portrait, Hedra, Face Fusion and Reactor
@@GrocksterRox Thanks !
Broo, what did you use for that talking character?
I typically alternate between or blend Live Portrait, Hedra, Face Fusion, and other tools.
Where can I get this Colossus model? And where is this Flux leaderboard, please?
It's in the video description, but I've posted here too (go to the third tab / model assessment) - docs.google.com/spreadsheets/d/1543rZ6hqXxtPwa2PufNVMhQzSxvMY55DMhQTH81P8iM/edit?usp=sharing
Hi 👋
Hi there, hope you enjoyed! :)
Bro, what are your PC specs that you run ComfyUI on...?
4090, but otherwise mid-level PC
@GrocksterRox 🤣🤣🤣 Okay, so a 4090 PC is now a mid-level PC... I wish I could also afford such a mid-level PC with a 4090...
Hm, that memory booster command line tip just creates more noise and artifacts in my end results….
Hi - I just did a side-by-side (same seed, config, everything) and didn't see any additional noise. There were a few slight variations in subject matter, but very minuscule (and again, no degradation in image quality from what I saw). I posted the side-by-side on my Discord channel here if interested: discord.gg/RXKgquKK7v
I always start my new "Flux day" with the last rendered image from the day before (drag & drop the image onto the default screen). With your memory boost command my image was almost identical, but with a lot of noise (like a TV screen capture from the '90s). Maybe fiddling with VRAM settings is not the correct way. Flux needs all the memory it can get.
@2008spoonman That may be the denim effect I was mentioning, but it's due to the type of noise. If you're using upscaled noise, I would replace it with simple random noise instead - that's what I found solves that issue.
When I hear this AI voice, I just think of that robot handing the guy a sandwich and putting away the dishes.
Haha I should definitely market my voice out in that case :)