@@PaulThird From what I read they measure the rooms with a head that has an HRTF/ear already built into it, and you can switch or blend between the two different heads' HRTFs. Having its own hardware solves some of the problem. I can't see it being that good a solution though, because you're still using two generic HRTFs, and I doubt the drivers in the headphones are anywhere close to the level of HE1000s.
@@jorisboulet8137 Yep. Checked Voxengo's website too: "You can use functionality PHA-979 provides to setup headphone monitoring so that it closer resembles sound stage produced by stereo speaker monitoring". Stuck it on my monitoring bus and that's where it'll stay. Thanks for the shout. Talk about plug-ins you have but don't use until someone mentions them! Cheers
You explained well that the Nanos don't benefit from crossfeed algorithms as they're already centre focused, but what is it about other "egg shaped cups" that makes them unsuitable for crossfeed? I remember you saying in one of your Harman vids about not 100% trusting Amir's reviews of egg shaped cups, as they're a bad fit on his particular dummy head and read with too much distortion. Can't see why this would be related in this case but I'm surely missing something. Feel free to tell me beginner facts about egg shaped cups lol, thanks in advance!
I can only talk about the 3 egg shaped cups I've got, which don't need crossfeed to my ears. For measurements to be correct you need the best seal on the dummy head. Amir mentioned himself that he didn't get a good fit. Even slight problems with the seal can alter the measurements, including harmonic distortion measurements. It's no coincidence that the Sundara measures better for distortion on ASR compared to Hifimans with egg shaped cups at double, triple or even quadruple the price.
buymeacoffee.com/paulthird (if you get any benefit from what I do on this platform) 🤓
A couple of, perhaps, related tidbits of info.
A few years ago I worked for a small home loudspeaker company. On the side we worked on various methods of hearing 3D sound from headphones. There were two manipulations we used to get a sound to seem as if it were coming from a specific point around your head. The first was delay - obviously if a sound is to your right, it hits your right ear first and your left ear some time after. But the sound at the far ear would also be modified - the sound has to travel around (and maybe through, with bone conduction) your dense noggin. A little research came up with the concept of head shadow - if a sound comes from the right, you hear it in your right ear more or less clearly, but the sound that hits the left ear gets EQed by the shape of your head.
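The two tricks described here (interaural delay plus head-shadow filtering) can be sketched in a few lines of Python. The delay, cutoff and level values below are illustrative guesses, not the values any product uses:

```python
import numpy as np

def simple_crossfeed(left, right, sr=44100, delay_us=300, cutoff_hz=700, level=0.3):
    """Feed a delayed, low-passed copy of each channel into the opposite ear.
    The ~300 us delay mimics interaural time difference; the one-pole
    low-pass mimics head-shadow EQ (highs are blocked more than lows).
    All parameter values are illustrative, not measured."""
    d = int(sr * delay_us / 1e6)                 # interaural delay in samples
    a = np.exp(-2 * np.pi * cutoff_hz / sr)      # one-pole low-pass coefficient

    def lp(x):
        # Simple one-pole low-pass as a stand-in for head-shadow filtering
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1 - a) * s + a * acc
            y[i] = acc
        return y

    dl = np.concatenate([np.zeros(d), left])[:len(left)]    # delayed left
    dr = np.concatenate([np.zeros(d), right])[:len(right)]  # delayed right
    out_l = left + level * lp(dr)   # right channel bleeds into left ear
    out_r = right + level * lp(dl)  # left channel bleeds into right ear
    return out_l, out_r
```

With a hard-right impulse, the left ear gets a quieter, duller copy 13 samples later, which is the whole effect in miniature.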
We had a test sound of a helicopter flying around your head and it really worked well! But there was a weird second problem - when the helicopter flew in front of you, it seemed to move higher. My semi-educated guess as to why - when we hear a sound in front of us, we also see the source and our eyes help resolve the location of the sound. Just a guess.
Anyway, this relates a bit to your discussion as to why things sound more in front with the various pieces of software. I wonder if the head measurements in Waves Nx are used to create head shadow EQ. Although CanOpener seems to eschew such measurements. Perhaps it is factored into the angle of the speakers.
Anyway, thought this would be interesting and stir further conversation... :)
🤓
@splitrockmusic The phenomenon you describe as "head shadow" is also known as HRTF (Head-Related Transfer Function). In your auditory system, everything is taken into account, from your weight and height to the shape of your head and ears. Everyone's HRTF is completely unique, like a fingerprint. We all perceive sound differently. Headphones only consider the ears in HRTF, while in-ear monitors (IEMs) only account for the ear canal. The size of the ear cups can also affect the HRTF from your ears inside the cup. The brain isn't used to hearing sound that doesn't have your HRTF in it. It's very unnatural to us.
Pre-designed into the frequency response curve of headphones is something close to a generic population-average HRTF (depending on the measurement system the headphones were designed on and the company's own target response). Each company has a different target for the HRTF they aim for, which is why some people say a headphone sounds good to them while others disagree. Technically they are both right. What Paul describes as in the head to him might be out in front to another person.
A lot of information is missing from the picture when we listen to sound through headphones, as our HRTF is responsible for timbre, stereo image and the placement of sound. If the headphone's curve is closer to your HRTF, and the drivers are good quality, it will likely sound better to you. You can EQ headphones to different curves or HRTFs, but that's hard to do because you likely don't know your own HRTF, just what sounds good to you. You can also EQ to a different target like Harman, which factors in a different HRTF average as well as their preference low and high shelf. (This can be better or worse depending on how far your own HRTF is from their average.)
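The "EQ to a target curve" idea described here boils down to subtracting the measured response from the target, per frequency. A minimal sketch with made-up numbers (neither the frequency grid nor the dB values are real measurements):

```python
import numpy as np

def correction_eq_db(measured_db, target_db):
    """EQ needed to push a headphone's measured response toward a target
    curve: target minus measured, at each frequency point."""
    return np.asarray(target_db, dtype=float) - np.asarray(measured_db, dtype=float)

freqs    = [100, 1000, 3000, 10000]   # Hz (illustrative grid)
measured = [2.0,  0.0,  5.0,  -3.0]   # dB, hypothetical headphone
target   = [4.0,  0.0,  8.0,  -1.0]   # dB, made-up stand-in for e.g. Harman
print(correction_eq_db(measured, target))  # → [2. 0. 3. 2.]
```

In practice you would fit smooth filters to this difference rather than apply it bin by bin, and the result is only as good as how close the target's average HRTF sits to your own.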
In regard to the waves plugin, they try to address this by using measurements, but the result can sound very strange, as it only adds part of the signal back into the mix. They are not in control of the type of headphones you are wearing, so what they are adding back in can also clash with the parts of the HRTF that are already in the signal. You need to calculate it out which they don't do.
Additionally, some headphones use waveguides and angle their drivers in the cup, which can disrupt the crossfeed. A 60-degree angle might now be perceived as 90 degrees, affecting the sound's correct positioning. Cup size and shape play a big role in this too.
100% realism can happen but it's a very complex problem that involves more than just measuring your head width or using a crossfeed algorithm.
Love the comment! Makes perfect sense to me.
I'll continue with loosely related thoughts and information. Two things I often do on delay sends: 1. I swap the left and right channels, 2. I play about and mess with the sound a bit. I'm not imitating a head shadow EQ or anything; I just found that (really) delayed sounds often sound fairly messed up. It creates a lot more depth than not doing it. Now you've got me thinking. I may need to consider messing with slightly panned instruments too, as in subtly screwing with the information of the channel that has less information. Subtly, because on speakers it will probably become too much quickly, but it could make me like headphones more. Thanks for the food for thought.
I've been using the 'Default' preset in CanOpener, which sounds great... but will try the Mix Engineer setting too.
👍
I normally put a room reverb with 1st reflections only into my monitoring FX chain. I set decay to minimum (around 100-200ms) and HP/LP the verb to 500-5000Hz, then set the wet mix to taste, normally 10-20%. This gives kind of an interesting effect, as if you had diffusers in your virtual room. That narrows down the image a bit but doesn't affect the lows and low mids.
Idk if I'm the weird one or maybe the fact that I've mainly listened to music on headphones but all the reasons why you don't like headphones are the reasons why I love headphones. Kinda interesting how different people perceive and like things.
Sure but that's not good in a mixing context when you are trying to achieve translation to speakers.
After years of mixing with the K712 Pro and never finding a good setup, having tried almost all headphone correction software but never been satisfied, I ended up buying a pair of Ananda Stealth last Xmas… and I was blown away. They really need NO correction, and mixes translate perfectly. I absolutely agree with you! People around me say that these are very loud for them… might that be the reason they don't need xfeed? They may have some sort of "acoustic" xfeed because they project so much sound into the ambience?
Various design factors
Am I the only one reading Paul's comments narrated by him in my head? 😂
Today I learned something! Some of that information is interesting far beyond the topic. Boosting side info and cutting mid info to make a sound more distant is something I will do more mindfully now.
On another note: I can't and don't like to mix on headphones. It's funny that people attempt to make headphones sound like speakers - I get it, but it's still funny. My headphones are Beyerdynamic DT 1990 Pros and they don't sound like speakers at all and I like them for that. Helps me know how the music sounds in most people's ears. I guess people using crossfeed plugins turn them off at the end of mixing/producing to hear what it turns out to sound like on headphones? Have a good week!
Can I copy the ozone eq settings into a proQ3 or do I need to multiply it by 1.41 to make it correct?
Multiply q by 1.41
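As a one-liner, using the 1.41 (~sqrt(2)) factor stated above purely as the thread's rule of thumb; the two EQs define bandwidth differently, which is where the scaling comes from:

```python
def ozone_q_to_proq(q: float) -> float:
    """Convert an Ozone EQ Q value to a FabFilter Pro-Q 3 Q value using
    the ~1.41 factor given in this thread. Rule of thumb only."""
    return q * 1.41

print(ozone_q_to_proq(1.0))  # → 1.41
```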
@ thank you for this video, just yesterday I was searching for some free plugin to emulate the crossfeed effect and I discovered Dee speeker by dotec-audio; but luckily now I finally know what kind of effect you mean in this video, I’ve tried it and sounds so nice
👍
Interesting - I felt like the LCD-X's image was coming from the side too. Great that I don't need to spend £80 on crossfeed. If you crossfeed the SE and Organic, does it become more like the Nano - just for the purposes of mixing the low end and then turning it back off? I'm curious if you find the Organic close to or better than the Nano for low end mixing, whilst being better for the other frequencies because of the stereo image. The first thing I noticed on the Organics is that they are very bright, but then my ears quickly adjusted. I've heard the SE are at least as bright if not more. What's your take on it? I think you said the SE were a bit more slow and less punchy and transient - are the Organic more of a middle ground? Perhaps the Nano are too punchy to be realistic, so you could end up thinking your low end is way better than it is? I've seen Hifiman are announcing the HE1000 and Arya Unveiled versions now.
I always EQ them to Harman so I never take into consideration what they sound like stock.
Arya Organic isn't as fast as the Nanos, nor does it have the low end detail. I still believe nothing does at the moment.
Arya to me is just a less expensive sounding HE1000SE. Kind of a middle ground, but I suppose if you didn't have the HE1000SE then the Arya Organic would be the best soundstage you've probably heard.
@@PaulThird The SE and Nano seem like a winning combo if I can bring myself to spend £1900 on 2 pairs of headphones. I am considering the LCD-X or MM500 too.
I use the Mix Engineer preset in CanOpener. Do you switch it from Hyper Realistic to Standard?
Only if I feel that I don't need the midrange distance
I don't mix on headphones but 100% agree the phantom center is SO important! Especially when it comes to setting your volumes and time based fx levels. It's one of the reasons I never really liked mixing on headphones honestly. I heard the same thing you did. It sounded like the vocals were in my head rather than placed with the music which I heard coming from both sides. I get the same thing in the car kind of. Probably because we don't sit in the center of the car if I had to guess!
Tbh I think that so many "studio" headphones have created a negative experience for many when it comes to mixing on Headphones.
If you listen to a pair of DT770s you hear that unnatural in-your-head LCR style sound which 100% benefits from crossfeed, but if you take them at face value without EQ and crossfeed... it will put you off mixing on headphones for life! Haha
@@PaulThird Before I got my VSX I tried mixing on Sony MDR 7506, that was my only attempt and with no room sims or anything like that. I hated it! I do like VSX but I wouldn't want to mix with them. I just use them as a reference point.
How long until all this becomes making speaker mixes translate to headphones? Is this audio engineers mixing for audio engineers, or for the end consumer? How good does a mix sound when reproduced in an untreated room?
Top man, I’ll buy another coffee for you mate. You’ve just saved me £80!
👍
Paul, this was a great school day :) Thank you. I was so lost as to crossfeed :) and glad you mentioned my two favourite headphones :) Sony MD and Ollo S5X. Going to try crossfeed on my Ollo X tomorrow. You have got some brain by the way. Was that Sam's tune you played?
Yeah. Was either mine or eds mix.
@@PaulThird Sounded Great
What about the ATH-M50x? They have angled drivers to give a more "speaker" type listening experience. Idk if you would want to use CanOpener with those.
Put them in a room with speakers and compare
Thanks for the very detailed video! Sorry for the noob question, but is there any benefit in using CanOpener instead of Realphones with the Ambience and Response parameters for room and speaker simulation set to "0" (if I just want the crossfeed)? Or does Realphones still do more than just add crossfeed in this scenario? Just trying to figure out if I need to get CanOpener (which thanks to this video I know can be replicated with EQ, but still 🙃) if I already have Realphones (and Sonarworks, but their correction sounds very weird to me, so I don't use it). Thanks!
I think Realphones possibly adds a blend of left and right into the equation, hence why you see ripples when you run Realphones through PluginDoctor. My guess is that what you are seeing is phase cancellation due to the slight blend of left and right. PluginDoctor sends the same signal left and right, so that would obviously cause some cancellation in a linear test.
Just a guess though. I may be wrong and it's ringing due to truncation, if it's possibly poorly sampled convolution.
I can't say for sure. But if you stick Realphones in PluginDoctor's M/S mode, it's doing practically the same method of high-passing the sides and dipping the mid channel with a corrective boost in the sides
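The M/S method described here (high-pass the sides, dip the mid, boost the sides) can be sketched roughly as below; the filter shape and gain values are illustrative placeholders, not any plugin's actual settings:

```python
import numpy as np

def ms_crossfeed_eq(left, right, sr=44100, side_hp_hz=200,
                    mid_dip_db=-1.5, side_boost_db=1.5):
    """Encode to mid/side, high-pass the side channel, trade a small
    mid dip for a side boost, then decode back to left/right.
    Broadband gains stand in for the real frequency-shaped dips/boosts."""
    mid = 0.5 * (left + right)             # M/S encode
    side = 0.5 * (left - right)

    # One-pole high-pass on the side channel (removes low-end width)
    a = np.exp(-2 * np.pi * side_hp_hz / sr)
    hp = np.empty_like(side)
    acc = 0.0
    prev = 0.0
    for i, s in enumerate(side):
        acc = a * (acc + s - prev)
        prev = s
        hp[i] = acc

    mid = mid * 10 ** (mid_dip_db / 20)    # stand-in for the mid dip
    side = hp * 10 ** (side_boost_db / 20) # corrective side boost

    return mid + side, mid - side          # M/S decode back to L/R
```

A mono signal passes through untouched apart from the mid dip, while out-of-phase low end (pure side) is filtered away, which is the point of the trick.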
@@PaulThird Oh, I see. Thank you for such a detailed response! The perfectionist in me will probably insist on buying Canopener then, haha.
Or just copy the ozone eq settings into a mid side eq...
@@PaulThird True 😁I definitely should try that first. Thanks!
So the moral of the story is to use your ears and check your mixes, On different speaker set ups boom boxes and all that.
This is invaluable info, cheers buddy!
👍
I tried the HE1000SE and HE1000 Unveiled last week. The HE1000SE is amazing but the HE1000 Unveiled blew my mind! It's the best headphone to my ears. It is so pleasing. I really hope one day you can get those and give us your thoughts :)
PS. I also tried audeze lcdx and lcd5, I have to admit Hifiman is way better even though audeze is more expensive.
One day maybe
If using CanOpener, I go with 20%-33% crossfeed and 0-15% angle with my Nanos. I have several presets saved to switch between to taste. Otherwise they sound really weird in stereo - you are right ✌🏼
👍
I think what's important to understanding this stuff is to remember that EQ is delay.
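A quick numerical illustration of the "EQ is delay" point: even a simple one-pole filter delays different frequencies by different amounts (group delay). The coefficients below are arbitrary examples:

```python
import numpy as np

def group_delay_samples(b, a, w):
    """Group delay (in samples) of H(z) = B(z)/A(z) at angular frequency w,
    estimated by numerically differentiating the phase response.
    Minimal illustrative sketch, not production DSP code."""
    def phase(wv):
        x = np.exp(-1j * wv)  # z^-1 evaluated on the unit circle
        num = sum(bk * x**k for k, bk in enumerate(b))
        den = sum(ak * x**k for k, ak in enumerate(a))
        return np.angle(num / den)
    dw = 1e-6
    return -(phase(w + dw) - phase(w - dw)) / (2 * dw)

# One-pole low-pass y[n] = 0.1*x[n] + 0.9*y[n-1]: low frequencies come out
# roughly 9 samples later than frequencies near Nyquist, so shaping the
# spectrum is inseparable from shifting things in time.
print(round(group_delay_samples([0.1], [1.0, -0.9], 0.01), 2))  # low frequency
print(round(group_delay_samples([0.1], [1.0, -0.9], 3.0), 2))   # near Nyquist
```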
Paul: Did you have crossfeed on your audio? Coz the audio is 'fuzzy'...
Must be your monitoring 🤷♂️
I have Canopener and stopped using it, I didn’t really like it. Maybe I should give it another try 😊
🤓
Leave me alone !! I just wanna mix on headphones. Love you Paul, you nerd.
🤜🤛
Do you ever feel like the Mix Engineer preset in Goodhertz CanOpener makes everything sound brighter?
Not really
Yes, when I put the plugin on a mix I've mixed without it.
But curiously enough, the same doesn't happen the other way around: I turn off the plugin and the mix is identical both on speakers and on headphones.
The problem: a lot of people listen on headphones, so if you've got issues there with the vocal not coming from the center front, then you may still have an issue worth fixing.
No, it's a headphone driver design thing. Listen to Serban, Spike or Manny mixes on most pairs of headphones, especially closed backs, and you will have this issue.
It's not a mix issue, it's a headphone issue
I still have my Audient iD14 mk1 and was thinking of getting another audio interface with ADAT to connect to my Behringer UCD, and I need to power my Sennheisers, so I'm considering Topping, but I'm still so doubtful. Should I get the mk2 Audient? Was considering MOTU or RME, or saving for an RME or something more high end... It bugs me that the Audient hasn't got MIDI, but the Behringer has... Might get the DX3...
Get the topping and take out the guess work
@@PaulThird Yeah, thank you! Just a beginner, and that was me after a Sunday going down the hole xD. Might get the DX3 Pro and then choose a monitor! But the Topping 7 is also tempting; I'm very nerdy about audio...
I use a Meier JAZZ-FF headphone amp. Sounds great, has crossfeed and drives anything.
It didn't measure terribly well on Audio Science Review. Going by the measurements, it would struggle with my HE1000SE
@@PaulThird Luckily, I did not read the Audio Science review before I purchased mine.
Great video Paul
🤓
@PaulThird Do you think HD 660 S need crossfeed?
🤷♂️
I was just wondering what has made you change your mind to not use crossfeed with nanos after years of doing so?
I explained in the video..
Thanks
🤜🤛
Thank you again good sir!
🤘
I don't understand much about filters and how our body affects what we hear, just like the environment affects the sound.
From what I know, this is called stereo-to-binaural. A cheap one I know is SpatialSoundCard (L), and it converts 2 to 7.1 virtual channels to binaural. It simulates a room with its speakers; for me only two or three of the rooms are worth it, but it's cheap.
It's not object-based like Dolby Atmos; I think DTS Headphone:X (v2) is too.
Crossfeed is not just the filter; it's also the per-frequency delay and phase of the same sound (a point in space) reaching both ears.
The reflections and how the simulated room is treated affect a lot. The simulation is recreated from the impulse response captured from the room at the listening point; each speaker has different filtering for L and R.
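To make the delay-plus-filter idea concrete, here's a rough sketch of what a basic crossfeed does. This is not SpatialSoundCard's or any plugin's actual algorithm; the delay, cutoff and mix-level values are illustrative assumptions, and the one-pole low-pass is only a crude stand-in for real head-shadow filtering:

```python
import numpy as np

def head_shadow_lowpass(x, fs, cutoff_hz):
    """One-pole IIR low-pass: a crude stand-in for head-shadow filtering."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)  # smooth the signal, attenuating highs
        y[i] = acc
    return y

def crossfeed(left, right, fs=44100, delay_ms=0.3, cutoff_hz=700, level=0.3):
    """Mix a delayed, low-passed copy of each channel into the opposite ear."""
    d = int(fs * delay_ms / 1000)  # interaural time difference in samples
    def opposite(x):
        x = head_shadow_lowpass(x, fs, cutoff_hz)
        return np.concatenate([np.zeros(d), x[:len(x) - d]])  # apply delay
    return left + level * opposite(right), right + level * opposite(left)
```

With speakers, each ear naturally hears both channels like this (the far ear a fraction of a millisecond later and duller); headphones skip that, which is the "in the head" feeling the thread is discussing.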
If you want to experiment with the phantom center, I think Schoeps Mono Upmix from PluginsAlliance is a good plugin.
For studio simulation there are many VSTs, SSC is a virtual audio card not a plugin.
I hate feeling like the sound source is inside my head, and converting to binaural fixes that.
Since we are all different, a filtering may be more "real" for one than another.
Creative has an app that scans your ears, head and shoulders to calculate the filtering. Crazy, right?
Yes, this technology is different and, in some ways, better, but in other ways it is not. There are a lot of problems it introduces if not done correctly. Currently, most of the products on the market are not implementing it correctly. I've tried pretty much all of them. DTS Headphone:X might get closer, though I've never tested it, as it's both a hardware and software problem. I think this will end up being the default, and it will be built into headphones with DSP.
Jesus. Stop showing off new headphones. I barely finished saving up money for the HiFiMan. Now Dan Clark Audio. And I have to start from the beginning. 😅
😅😅
Reverb is your friend......😎
No Dan Clark Stealth on there, unfortunately 😕
Might give it a go
👍
Still waiting for a VSX review 😅
Can't see it happening cause I tried with Slate in the past and they declined to send me a pair
@@PaulThird From what I read, they measure the rooms with a head that has an HRTF/ear already built in, and you can switch or blend between the two different heads' HRTFs. Having its own hardware solves some of the problem, though I can't see it being that good a solution, because you're still using two generic HRTFs, and I doubt the drivers in the headphones are anywhere close to the level of HE1000s.
Voxengo PHA-979 does it for me
How are you using it?
I’m curious as well.
@ After just checking my version of it, there is a preset for headphone monitoring, but I'm still working out what it's doing. Will update later
@@Andrew-vk5fy It puts the kick in the middle (Headphone Monitoring preset); feels more like a speaker stereo image on headphones
@@jorisboulet8137 Yep. Checked Voxengo's website too: "You can use functionality PHA-979 provides to setup headphone monitoring so that it closer resembles sound stage produced by stereo speaker monitoring". Stuck it on my monitoring bus and that's where it'll stay. Thanks for the shout. Talk about plug-ins you have but don't use until someone mentions them! Cheers
🤓🤓🤓
Remember to check out my audio podcast with Ed Thorne if you want to learn more about our mixing journeys 🤓
You explained well that the Nanos don't benefit from crossfeed algorithms as they're already centre-focused, but what is it about other "egg shaped cups" that makes them unsuitable for crossfeed?
I remember you saying in one of your Harman vids that you don't 100% trust Amir's reviews of egg shaped cups, as they're a bad fit on his particular dummy head and read with too much distortion. Can't see why this would be related in this case, but I'm surely missing something.
Feel free to tell me beginner facts about egg shaped cups lol, thanks in advance!
I can only talk about the 3 egg shaped cups I've got, which don't need crossfeed to my ears.
For measurements to be correct you need the best seal on the dummy head. Amir mentioned himself that he didn't get a good fit. Even slight problems with the seal can alter the measurements, including harmonic distortion measurements.
It's no coincidence that the Sundara measures better for distortion on ASR compared to HiFiMans with egg shaped cups at double, triple or even quadruple the price.
@ Ah okay! So it just so happens that they didn't need it when you listened, rather than it being a specific fact about them being egg shaped?
Don't know. I'd need to listen to more egg shaped cups from different brands
@@PaulThird Thanks fella!