Hi Michael, thanks for the video. But I think iRender in Sphere works best in a different way than the one you've described. You don't insert the iRender plugin on the input directly, as you did. Instead, you enable Aux A, for instance, by turning it on on the right-hand side of the Auxes column in your chosen input. Then you insert iRender there; there's a plugin insertion point for it. Now, if you go to the Aux tab in the Output section, you click on Aux A, open its settings, and choose the path to your headphones. There, if you want, you can insert the Sonarworks headphone correction plugin. Additionally, if you click the respective Aux Lock button, you can keep it always on for your headphones while choosing other outputs.
Another way you can use Sphere instead of Loopback, if you want, for your videos, is for example to use the Cue so that you can send the input signal (stereo) to the headphones while simultaneously sending it to any output for your streaming, and adjust volume/gain independently. You can also use the Auxes for that, of course. Just different ways of doing routings. Hope you don't mind the comments 🙂 All the best.
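For readers following along, here is a minimal sketch of the monitoring order described in the comment above, assuming placeholder names for the buses and plugins; Sphere itself is configured in its GUI, not in code, so this only spells out the signal flow.

```python
# A minimal sketch, not Sphere's API: it only makes the described monitoring
# order explicit. Stage names are placeholders taken from the comment above.
signal_chain = [
    "DAW / Dolby Renderer output (e.g. 7.1.4)",
    "Sphere input channel",
    "Aux A (enabled in the Auxes column of the input)",
    "iRender plugin on Aux A (Apple Spatial Audio binaural emulation)",
    "Aux A output path -> headphones (Aux Lock keeps it active)",
    "Sonarworks headphone correction plugin on the headphone path",
    "Headphones",
]

for stage in signal_chain:
    print("->", stage)
```

The point the sketch makes is simply that the spatial emulation sits on the aux, while the headphone correction sits last on the headphone output path.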
Good point. Comments are always appreciated!
Great review of how this works with Apple headphones :)
Thanks!
Wonderful video showing complex routing in a way that we all can learn from.
Thanks! Much appreciated!
Thanks for showing us how to set this up 🙏! This will be super useful to help producers render music from the perspective of the end users.
Thanks! Glad the video is helpful.
Sphere is amazing!
Yes, it is. 😉
Awesome!!! Mr. G. Wagner ... Thanks so much
You’re very welcome! ☺️
Michael, your method for immersive audio interfacing and monitoring control with headphones is a practical alternative to investing in an Audient Oria setup with external speakers. Thank you!
Sorry, but that's not correct in this particular case. The iRender plugin only emulates Apple's Spatial Audio, which is obviously in binaural form. As Dolby binaural sounds different, this is a great way of comparing what it will sound like on Apple Music vs. Tidal, Amazon, etc. But what this plugin, and the workflow Michael shows, is not, is a binaural emulation of a studio environment, like some plugins from Dear Reality, Waves, etc. So if you need to mix on headphones only, without speakers, you need another type of plugin to emulate a studio environment on headphones first, before you can use the one shown in the video. And by the way, mixing with speakers is far more precise, so this is not a replacement for a product like the Oria, although you may find similar features in both products, like bass management, time alignment of speakers, EQ curves, etc.
If you have the option to go for a multichannel setup with something like the Oria, you should do that. The vast majority of people are listening to spatial audio via Apple's system, though. So while it is correct that it has its own sound, I would still consider it a viable alternative in situations where a full Atmos setup is not possible.
@@michaelgwagner Hi Michael, I agree it's a viable alternative when mixing for Apple's Spatial Audio streaming only. Remember that Apple and the other streamers, for the time being, ask for two files of the same song when delivering Dolby Atmos: one in Atmos (ADM) and a stereo file of the same length. Both are streamed at the same time, so that users can switch if they want, or if the device only accepts stereo. To evaluate the stereo version on headphones (a differently mastered mix or the 2.0 re-render), using a plugin that only emulates Apple's Spatial Audio is not the right thing to do, in my humble opinion. For that you'll need other types of plugins that emulate acoustic spaces, like Dear VR Monitor or Mix, Waves NX, etc., to properly do your mix on headphones, be it stereo or multichannel/Atmos. The Spatial Audio emulation plugins should only be used for comparison with the Dolby binaural engine when checking binaural.
Also, despite the fact that Apple Music represents an interesting share of Atmos streaming, mixers will often do an Atmos mix that also gets streamed on Tidal, Amazon, etc., which use the Dolby binaural engine settings when streaming via Dolby AC-4 IMS. So you need to check that as well, coming binaurally from the Dolby renderer, with the Apple Spatial Audio emulation plugin bypassed. Both Apple and the other streamers use DD+JOC for reproduction on speakers, so having at least a pair of speakers for checking the stereo master is a must as well.
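Since the stores expect the Atmos (ADM) master and the stereo master to be exactly the same length, a quick automated check before delivery can catch mismatches early. Here is a minimal sketch using the soundfile library, with placeholder file names; it is only an illustration of the check, not part of any store's tooling.

```python
# Minimal pre-delivery length check: the ADM BWF (Atmos) master and the stereo
# master must have identical durations. File names below are placeholders.
import soundfile as sf  # pip install soundfile

def lengths_match(adm_path: str, stereo_path: str) -> bool:
    adm, stereo = sf.info(adm_path), sf.info(stereo_path)
    if adm.samplerate != stereo.samplerate:
        raise ValueError(f"Sample rates differ: {adm.samplerate} vs {stereo.samplerate}")
    return adm.frames == stereo.frames

if __name__ == "__main__":
    if lengths_match("song_atmos_adm.wav", "song_stereo_master.wav"):
        print("Lengths match -- ready to deliver.")
    else:
        print("Lengths differ -- fix before delivery.")
```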
Thank you for your videos. Your explanations are always so helpful for making Dolby Atmos music. One question: for users whose interface has more than 16 physical audio outputs, is there no need to use Ground Control Sphere? Or is there still an advantage to having both a 16-output interface and Ground Control Sphere?
It is actually meant to be used in such a setup. It allows you to switch between different channel configurations. It replaces a hardware monitor controller.
@@michaelgwagner Thank you so much. I got it.
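For anyone counting outputs in this exchange: a speaker layout written as X.Y.Z needs X bed + Y LFE + Z height channels, so the arithmetic behind the 16-output question looks like this. This is a quick illustrative snippet only, not tied to any particular interface.

```python
# Channel counts behind common monitoring layouts: X.Y.Z = bed + LFE + heights.
def physical_outputs(layout: str) -> int:
    bed, lfe, heights = (int(part) for part in layout.split("."))
    return bed + lfe + heights

for layout in ("5.1.4", "7.1.4", "9.1.6"):
    print(f"{layout} needs {physical_outputs(layout)} speaker outputs")
# 7.1.4 -> 12 and 9.1.6 -> 16: a 16-output interface covers the speaker feeds,
# while Sphere adds the monitor-controller functions on top (layout switching,
# level control, downmix/binaural checks).
```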
Morning Michael :) Just watched this for a second time and found yet more information that I missed on the first viewing :) Quick question: how much of a difference does the Dolby renderer make with the selection of the overhead speakers? For example, I have my 9.1.6 setup using the original 4 overheads plus the added middle pair, and I'm just wondering whether this makes a big difference, rather than having two separate setups with different speaker selections :( I also have a 7.1.4 setup in the renderer so I can switch back and forth when checking my mix. Having delivered quite a few mixes to Apple Music now, I have always noticed a small but noticeable difference between my renderer and what I hear from Apple :( Yes, I'm only getting 7.1.4 back from Apple Music, but I still notice a difference from the renderer set to 7.1.4... I will try using iRender when mixing and monitoring on headphones in future. I also noticed a comment on this thread about a different way to set things up :) Could be a good follow-up video :) QFXMusic :) This is my other channel
Would you buy the AirPods Max or the Audeze Maxwell? If you had only those two options, which would you buy?
The Maxwell doesn't fit me, so I might be biased. If you are working on a Mac with Logic or iRender in Sphere, then the AirPods Max are the better option imho. Otherwise, the Maxwell might have the more studio-quality sound.