A blind test of 140 audiophile respondents was carried out over the internet, providing them with a 16/44 and a 24/96 version of 3 music files. They had to guess which was the hi-res 24-bit version. The results are exactly what you would expect if people had chosen by flipping a coin: 50% chose the right hi-res version vs 50% for the CD-quality version for the first 2 pieces; in fact, for the 3rd one they did worse than chance... and the subsample of professional musicians did worse than average, choosing the 16-bit version over the 24-bit version more often. The subsample with very expensive audio gear did no better than average either.

A summary of the results:

"As you can see, in aggregate there is no evidence to show that the 140 respondents were able to identify the 24-bit sample. In fact it was an exact 50/50 for the Vivaldi and Goldberg! As for the Bozza sample, more respondents actually thought the dithered 16-bit version was the "better" sounding 24-bit file (statistically non-significant however, p-value 0.28).

Looking at the individual responses, there were a total of 20 respondents who correctly identified the B-A-A selection of 24-bit samples, and 21 selected the opposite A-B-B. This too is in line with expectations that 17.5 would pick each of these patterns based on chance alone."

Source:
archimago.blogspot.com/2014/06/24...ce%20alone.
archimago.blogspot.com/2014/06/24...rt-ii.html
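For anyone who wants to check the arithmetic in that summary, here is a minimal Python sketch (assuming scipy is available; the 64/140 split in the second line is a made-up example, not the actual Bozza tally):

```python
from scipy.stats import binomtest

n = 140                      # respondents
# Expected count for any specific 3-answer pattern (e.g. B-A-A) by chance:
print(n * 0.5**3)            # 17.5, the figure quoted in the summary

# Two-sided binomial test for one piece; 64 correct out of 140 is a
# hypothetical split, not the actual result from the test:
print(binomtest(64, n=140, p=0.5).pvalue)
```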
One test can be interpreted in many ways. You also did not tell us what they were listening to, or the environment they were listening in. No opinion matters unless those two things are factored in.

Plus, no mention of encoding converters or decoding converters or what kind of dither was used...

Plus, when you have many tracks they have to be summed down to two tracks, so it really depends on the quality of the microphone, preamp, encoder and engineer. I'll tell you this: if you have a quality studio system and you can't tell the 24-bit master from the 16-bit master, you should have your hearing checked.
@@DrRyman do you know what 16-bit and 24-bit depth actually mean? The bit depth sets the "corridor" between the loudest and the quietest audio signal of the recording. 16-bit means 96 dB and 24-bit 144 dB. A human can hear up to 140 dB, but c'mon, that's insanely loud ^ I think standing right next to a plane at takeoff is about 100 dB ^ By the way, the dynamic range of a vinyl record is about 60 dB...
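Those dB figures follow straight from the definition of quantization dynamic range, roughly 6.02 dB per bit; a quick sanity check in Python:

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of amplitude steps: 20*log10(2^bits) ~ 6.02 dB/bit
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))  # ~96.3 dB, the CD figure
print(dynamic_range_db(24))  # ~144.5 dB
```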
@@DrRyman You should take the time to read the two links I posted above; all the details are there. The master was 24-bit of course, and the 16-bit file was downsampled from it. In fact, the people with the most expensive systems, the sound professionals and the musicians did no better on this blind test than the sample average. Apple's CEO recently admitted that the vast majority of people cannot tell the difference between hi-res and CD quality. This test shows that hardly anybody can reliably tell the difference, time after time... This is not to say that it is not important to use 24-bit at the mastering stage, of course, but the CD version of that master is largely enough at the listening stage.
@@DrRyman God, people and their esoteric (false) knowledge. Maybe 2 people in every 1 million can reliably hear a difference between 16 bits and 24 bits. Go study some information science.
I can't hear the difference between a MiniDisc ATRAC Type-R recording and straight 16/44.1 PCM... 😂 And my system is about 3000€... To be honest, that's only from speakers (with cans I get some micro details here and there, but I have to be in "meditation mode", otherwise...). Haha
Did he answer the question though? He said that high res is better than upsampled high res because that's how the original was recorded. But he didn't explain why 96/24 downsampled to 44.1/16 would be any worse for the end listener. 44.1/16 fully covers the frequency and dynamic range people can hear and instruments can produce. So it still remains mysterious to me why hi-res is supposed to be better than good old CD Red Book. I wanted it to be better, but I can't pretend I can hear any difference.
Most of it is screwed up in the mastering process, if you ask me. What's HighRes worth anyway when the dynamic range is squished to 6dB and the equalizers went bananas.
@Douglas Blake Yes. My general suspicion about the preference for Vinyl by many is that this medium is treated more gently by mastering engineers. According to the data on the loudness wars site, all Vinyl releases have significantly more dynamics. No wonder the music is more fun. Personally, I've always wondered why I often enjoy the sound of intros of rock songs the most. The answer is simple: all the dirty tricks kick in when the rest of the band joins at full loudness, and then the sound gets ugly real quick.
@Douglas Blake On the other hand, when recorded properly like orchestral recordings, they can have "Huge" dynamics. TELARC is a great example of recordings that were not compressed. That is one reason why classical music lovers jumped on the CD bandwagon so fast. Of course the other reason was the lack of potential noise from vinyl.
Hi Paul. I'm returning to the "audiophile world" after a 20 year absence, and I'm a software guy so I knew the technology would be very different (in some ways) now. I just wanted to say THANK YOU for taking the time to make these videos. You are teaching me a lot and I am grateful! - Paul K in Vermont.
I'm into broadcast transmitters, and with analog stereo encoders there's also up- and downsampling going on. Normally, part of the L-R signal (the stereo side information) on 38kHz is created by switching between the left and the right channel 38,000 times per second. As you might imagine, this produces "switching clicks" near the audible spectrum, and to filter that out you need steep filters that hurt phase linearity. So an old trick was to upsample that 8 or 16 times. The benefit is that the harmonics land much further away from the signal you want. Then you are able to filter in that high frequency domain (304kHz, or 608kHz) and downsample back to the original 38kHz. The result is a more dynamic and natural sounding signal with better stereo imaging and channel separation. So I do imagine that there's a benefit to upsampling in DACs for the same reason: it makes the filtering more "gentle".
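The same trick is easy to sketch in the digital domain. A minimal Python example, assuming scipy (the noise signal and the cutoff are placeholders, not broadcast-grade values):

```python
import numpy as np
from scipy.signal import resample_poly, butter, sosfilt

fs = 38_000
x = np.random.randn(fs)             # stand-in for one second of a 38 kHz-sampled signal

up = resample_poly(x, 8, 1)         # upsample 8x -> effectively 304 kHz
# At 304 kHz a gentle 4th-order low-pass leaves lots of room above the audio
# band, so phase stays far more linear than with a brick wall at 38 kHz:
sos = butter(4, 18_000, fs=8 * fs, output='sos')
filtered = sosfilt(sos, up)

y = resample_poly(filtered, 1, 8)   # back down to the original 38 kHz rate
```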
I emailed the streaming station SomaFM to ask if they will stream in high res in the future. They run it at something like 128k MP3 with some special broadcast rack thingy. They were saying it basically sounds better at that quality. I can't argue; with the compression they use, or whatever, streaming SomaFM sounds great! Better than some high-res files I have, sometimes.
@@ChiefExecutiveOrbiter Yes, but that's streaming. I was writing about FM radio, analog over the air. It's true that there's equipment called "sound processing". What it actually does is make the audio less dynamic and add some effects to compensate, like putting more harmonics on the bass and mids. It tricks the way our brains "listen" so we experience dynamics that aren't in the signal anymore. If you do that in a way that is optimal for 128kbps MP3, we experience it as better than a 192kbps stream without the right sound processing. But it's still a way to push a narrow bandwidth to its limits. About SomaFM, I'd never heard of it, but what a cool station! So many streams in different genres! They have a new listener here in The Netherlands! And yes, it doesn't sound bad at all!
@@hugoromeyn4582 I understand the need for compromise for audio on broadband or FM. I personally still really like the compressed FM-band sound (yes, sound quality is part of the sentiment, not only the song), as opposed to the horrible quality of DAB (digital audio broadcast) where I currently live: dropouts, bad sound quality, bad implementation, etc. But my point is this: we are surrounded by gigabytes of data. Why the need for a DAC that does upsampling when the 'original' file should be easily available? Anyway, greetings from another part of the world!
@@Sool101 Tidal offers higher quality streaming at 96kHz / 24bit. And I know about some record labels with 192kHz / 24 bit downloads. But there's not a lot out there in high-res audio, indeed. Strange that they're still stuck on a format from the 1980s. And DAB (also a late-80s child), I know. They are 128kbps streams if you're lucky; most of them are at even lower bit rates. Techmoan did a video about DAB in the UK; it's even worse than in The Netherlands. FM radio is not really efficient with bandwidth, but it can sound just as good as the original recording. A little bit of tweaking and sound processing is okay; they have to play music of all kinds from different masters, with different dynamic ranges. It's unpleasant if you have to tweak the volume every time another song is played. But what some stations are doing with clipping their sound to be 1-2dB louder... It's insane that even the news sounds distorted. Greetings back from the Low Countries!
I prefer to think of it this way. Upsampling can't produce more information than what is in the original file, so in effect all upsampling is doing is multiplying the original file's information. You can't create resolution that isn't there in the first place. The same thing goes for video: 4K discs look much better than upsampled 1080p.
Yep. The comparison I use is this: interpolating to 192kHz isn't going to seriously improve quality if your input source is an 8kHz digital phone line. Some people will try arguing against the 4K analogy, presumably because upsampled 1080p looks pretty good. Pointing out the patent nonsense of claiming you can make old VoIP lines sound much better usually shuts them up.
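A sketch of why the round trip can't restore anything, in Python with scipy (the two test tones are arbitrary):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 15_000 * t)

phone = resample_poly(x, 1, 6)         # down to 7,350 Hz: the 15 kHz tone is gone
restored = resample_poly(phone, 6, 1)  # back up to 44.1 kHz

spec = np.abs(np.fft.rfft(restored))
freqs = np.fft.rfftfreq(len(restored), 1 / fs)
# Energy near 15 kHz is essentially zero after the round trip,
# while the 1 kHz tone survives untouched:
print(spec[np.argmin(np.abs(freqs - 15_000))] / spec.max())
```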
@@FarnhamJ07 you know, AI upsampling is a thing these days... it opens possibilities to enhance the original low-res things... although I won't upsample audio unless it's a default feature built into the hardware decoder and unchangeable...
@@imkow That's a pretty cool concept for sure, and a quick Google on it seems promising. It'd be great for stuff that simply isn't available in hi-res at the very least! I'm quite sure that isn't what he and others have in mind when asking about upconversion though :P
If you start with lower res, your system may not be the limiting factor. With a higher-res file, you are guaranteed that your system will be the limiting factor, which gives you the option to play around with components to suit you.

Upsampling cannot improve audio; it cannot create information that is not there. It is like taking a digital photo, resizing it to a smaller size, transmitting it, and resizing it back to its original size at the receiving end: it simply cannot mathematically be restored to the original quality, because the information was lost when it was scaled down. As Paul alluded to, there is a different reason for upsampling; it purely boils down to injecting energy between samples to drive a load, not to increasing quality (which is impossible).

Lastly, also as Paul said, oversampling makes it mathematically (so no way around it) easier and gentler to filter. You don't need such a high-order filter, which is not only linearly correlated with cost, but higher-order filters also create resonances (artefacts) due to overshooting (the frequency response of higher-order filters tends not to be flat and can fluctuate significantly across its amplitude-vs-frequency curve) and so introduce sound artefacts. So, yes, a higher-order filter can cut off tighter, but at the cost of sound quality. So, as Paul indicated, oversampling avoids the need for a tight, rigorous filter that would introduce sound artefacts. And to do that, you need a higher-sample-rate source file.
Upsampling can improve audio, but not by adding information. It improves audio by allowing far more benign filters to be used. It can't add good stuff, but it sure can remove bad stuff.
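A rough way to put numbers on "far more benign": estimate the FIR length a given stopband target needs as the transition band widens. A Python sketch using scipy's kaiserord (the 100 dB target is arbitrary):

```python
from scipy.signal import kaiserord

atten = 100  # desired stopband attenuation in dB (arbitrary target)

# CD rate: flat to 20 kHz yet fully attenuated by 22.05 kHz -> razor-thin transition
taps_cd, _ = kaiserord(atten, (22_050 - 20_000) / 22_050)

# 96 kHz source: the transition can relax from 20 kHz all the way out to 40 kHz
taps_hires, _ = kaiserord(atten, (40_000 - 20_000) / 48_000)

print(taps_cd, taps_hires)  # the relaxed filter needs far fewer taps
```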
The answer is; If it's shit that goes in, it'll be shit that comes out. Alternatively; You can't polish a turd. There's definitely something to be said for up (and/or down)sampling of digital files but in the end, there's always a weakest link in your rig. And it's easy enough to not let that be the file. I also think it's a non-issue with how cheap storage is these days :).
I think the real question is why the distributors of music decide to sell low-end files, when all the data limitations of the past are no longer relevant, and why a wav file is automatically more expensive than an mp3, and a 24-bit file more expensive than a 16-bit one. Amazon, for example, is completely capable of selling higher quality files, but is stuck on default mp3 and CD. That being said, people claiming mp3 "sounds bad" is a thing of the past; obviously they are much cleaner than in the old days, but I still prefer wav at max bit rate just because I don't like the idea of variable bit rates. I want the door wide open; I don't care if it's just a mouse coming through it.

All the major digital music sellers are just stuck on mp3 as if we need to save space like in the past, when that's just not the case anymore. For years now I've just bought the CD and ripped it to wav at the highest bit rate it will allow, and that sounds great. But how many people are going to rebuy every song in their library as 24-bit after spending all these years collecting "standard" release quality, especially at the arbitrary price markups that shouldn't exist in a modern day of 20MB/s downloads and hundreds of gigabytes of storage space?

The gatekeepers of music seem to be doing most of the foul-ups in the industry, as far as I can see.
It all circles back to the most popular sales/downloads, aka "Pop" music.

Nowadays Pop is compressed, limited and clipped so much in mastering that it makes zero difference how you receive it. 16-bit/44.1KHz for Pop music is perfectly fine and always will be, because the distortion introduced during mastering, and the tiny noise floor due to huge loudness, are not going to sound any different at 24-bit/192KHz.

We need to work on quieter masters first so that we can get the industry to realise the benefits, albeit minuscule IMO, of hi-res audio.
A song that was upsampled from 44.1 kHz to 88.2 kHz will not sound as good (as correct) as that same song would if it were actually recorded at 88.2 kHz.

Using technology to artificially enhance a song is never as good as what that song would have been if it had the higher quality to begin with. Why?

Upsampling is a mathematical formula that looks at two adjacent samples and calculates what probably would have been between those two samples, had the recording put it there in the first place. So it never (and I mean never) creates an accurate new frame between the existing frames.

Like watching a TV show: would you want a computer to compare two adjacent frames and insert an artificial frame in between those two existing frames? When computers create new frames (when they upsample), the new frames are based on an average of the two existing frames. In the case of music, the "average" is close, but never "on the money".

But all of this is moot when the source material is sub-par. And nearly all popular music is sub-par. Nearly all popular music is over-processed, put through one "active" and unnecessary box after another, so you hear the flaws, the coloration, from every box (the degradation is cumulative). When you end up with a lousy, compressed, equalized, over-processed recording, the sampling rate is moot, just as a blurry photo will not change no matter its resolution.

In summary:
-- Higher resolution is better -- but only when the recording is made properly -- and very few are.
-- Upsampling from low resolution to high resolution will not and cannot be as good as actually recording at the high resolution.

Lastly, sometimes upsampling does make a song sound better (depends on each person's taste). But that upsampled sound is not natural. That does not mean it will not be a pleasing sound. It will be pleasing, yet inaccurate. Cheers!
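A toy illustration of that "close but never on the money" point, using naive linear interpolation in Python (real resamplers use sinc-based kernels that do much better, so treat this as the worst case):

```python
import numpy as np

# What an 88.2 kHz recorder would actually have captured (a bare 10 kHz tone):
t_hi = np.arange(0, 0.01, 1 / 88_200)
true = np.sin(2 * np.pi * 10_000 * t_hi)

# The 44.1 kHz recording keeps every other sample:
t_lo, recorded = t_hi[::2], true[::2]

# Guessing the missing samples by averaging neighbours (linear interpolation):
guessed = np.interp(t_hi, t_lo, recorded)
print(np.max(np.abs(guessed - true)))  # clearly nonzero: the guesses miss
```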
Correct. Also adding: the quality of the upsampling algorithms is critically important. An upsampling DAC from dCS will sound materially different from a $200 Crutchfield model...
Regarding your statement "Would you want a computer to compare two adjacent frames, and insert an artificial frame in-between those two existing frames? " Most, if not all modern TVs have been doing this for probably over a decade now. It's called a "soap-opera effect" in layman's terms. It is on by default on all TVs, so yes, one could say people want a "computer to insert an artificial frame in-between two existing frames". ¯\_(ツ)_/¯
@@unready56 I spoke to that type of process, in my last paragraph: Lastly, sometimes upsampling does make a song sound better (depends on each person's taste). But that upsampled sound is not natural. That does not mean that it will not be a pleasing sound. It simply will be pleasing, yet inaccurate.
For me, after many years of playing loud live music, and knowing my listening audience aren’t audiophiles, 44.1 is more than fine. Hell, my hearing seriously drops at 6k. I still do try to make my recordings sound as good as possible. I just can’t hear well enough any more to know.
When this question is asked, I always think back to a video I saw (somewhere on YouTube) where the guy argues that there is absolutely no need for artists to record above 16/44.1. I'm guessing that people who genuinely feel that way also feel very little need to really push the quality of the performance being recorded, let alone what they record it at. These individuals, no doubt, are from the same camp of listeners who say "cables don't matter".
I've never understood how physical capability variations are accepted everywhere else. E.g.: I can run faster than you, you can jump farther than me, the next guy can throw farther than either of us... these are all allowed for by the masses. But because person 'A' can't differentiate a sound, that *must mean* person 'B' can't either, and therefore he has to be lying... or selling snake oil.
I don't think so. The biggest step forward is to remix the original recordings! The digital audio recorders nowadays are so much better than 25 years ago. That really makes sense. These new remixes will sound amazing at 44.1kHz/16 bit.
It's not that they don't matter, it's the diminishing returns and huge costs you get above a certain quality. E.g.: studios don't tend to use those pure silver, Kimber/Russ Andrews £200-a-metre cables for a reason. They'll use a good quality OFC one like Van Damme or Klotz, at maybe £2-3 a metre, for that reason.
As I've got older I'm ripping at 32kHz, which gives a killer FM sound on my age-old ears. As we get older I really believe we need to downsample. Test it, try it...
I believe the short answer to the question is this: yes, you should still be interested in high-res audio if you have an upscaler. Rob Watts said he felt it would need a one-million-tap filter to extract all the information from Red Book, hence the M-Scaler. However, if you give the M-Scaler more information to start with, in a higher-res file, it must do better. E.g. the M-Scaler should be able to retrieve and upscale more successfully from a 192KHz file than from a 44.1KHz file. Going back to a million taps to extract everything from 44.1KHz: by nature, a higher-res file has more information. To doubt that is to doubt the M-Scaler, because the M-Scaler sends 706KHz to the DAC output from a 44.1KHz input. It's the DAC getting more information about the sine wave that makes the music output better.
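For the curious, the general idea behind a long-tap upscaler is windowed-sinc interpolation with a very long FIR kernel. A minimal Python sketch of that idea, assuming scipy (this is emphatically not Chord's actual algorithm, and the tap counts are purely illustrative):

```python
import numpy as np
from scipy.signal import firwin, upfirdn

fs, up = 44_100, 16                        # 44.1 kHz x16 = 705.6 kHz, the rate mentioned
x = np.sin(2 * np.pi * 1_000 * np.arange(fs) / fs)

# Interpolation filters with cutoff at the original Nyquist (1/up of the new Nyquist):
short_kernel = firwin(8 * up + 1, 1 / up)    # cheap, approximate reconstruction
long_kernel = firwin(512 * up + 1, 1 / up)   # many more taps: closer to the ideal sinc

# Scale by `up` to preserve amplitude after zero-stuffing:
y_cheap = upfirdn(short_kernel * up, x, up=up)
y_fine = upfirdn(long_kernel * up, x, up=up)
# Both outputs are now at ~705.6 kHz; the long kernel tracks the underlying
# waveform between the original samples far more faithfully than the short one.
```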
The only context where the higher frequencies matter is for music producers and sampling, if they're playing with frequency modulation. However, even in that context it's pretty nebulous how much it matters.
DSD and FLAC files have replaced Red Book CDs, so there is no point in going there. SACD is probably the closest thing to what you are looking for; it's DSD on a disc.
They tried DVD-A, SACD, HD Audio and Blu-ray Audio, but the general public never bought into them, and those formats either died completely or became niche formats.
After having to replace our 8-Track collection, then our vinyl collection, then our cassette collection, the American public balks at the idea of having to replace our entire collection yet again.
@@Progmatist Do you still use a black and white TV? Tech improves; that is just how it is. Streaming is the replacement, and now we pay to stream music we already own hard copies of.
I have a Chord MScaler and it definitely makes a difference. It's very easy to do A/B comparisons by cycling through the upsampling rates. As for hi-res, as has been said many times before, it all comes down to the recording, mixing and mastering. There are some extremely poor hi-res recordings out there.
Now that I finally have a DAC and streaming service that go up to 192kHz 24bit, and speakers with 40KHz bandwidth, I am still waiting for someone, somewhere to release some music I actually like in that format. Not a fan of commercial pop, jazz or classical, unfortunately. All I know is that the system sounds fantastic at 44.1KHz 16bit for now. Another thing I wonder is whether the DAC filters really do roll off more gently when given a higher sampling rate to work with, or whether the manufacturers just shift the harsh, ringing brick wall further up the spectrum.
It's not just about frequency. Think of the sample rate, or how many times per second the converter measures the audio. If you're recording at 192,000 times per second, you're able to capture much more detail in the attack of instruments and the flow of the waves. When you cut from 192KHz to 44.1KHz, you're keeping roughly a quarter of the samples, and it changes the entire sound. It's not drastic, but if you have a (very, very) high quality audio system and a perfect listening environment, you'll hear the difference in minute details.
@@olorinhenderson Thanks for your response. I had thought that was more down to the bit depth of 16 or 24, i.e. how many times each cycle at a given frequency is sampled.
@Douglas Blake Thanks for your response. Whoa, 4th-16th harmonics, that's some serious stuff. I would guess that if you have a quiet enough room and are listening at a high level, some of that could be ascertained.
I have rarely understood people's answers to this question, because mostly they never make sense. Like here, Paul says cutting everything off above 44.1KHz is bad. Straight away this makes no sense, for the following reason. If he is saying that cutting off recorded sound above 20KHz is bad, that is where it makes no sense: if you record at 96KHz, audio frequencies over 20KHz are recorded into the sound anyway. That's harmonics, timbre, and physics. ... I'm referring to where he mentions filtering off at 22KHz. To explain (2:50): does he mean filtering off any sound above 22KHz in frequency, or does he mean discarding samples so the rate drops below 44.1KHz? If you record at 96KHz and downsample (remove samples), then it makes sense that you lose audio quality. The more samples you have, the easier it is to pick up the harmonics of a note, hence the easier it is to hear the note's timbre, so the sound sounds right.
I think you might be confusing sample rates with sound frequencies. The filtering to which Paul refers is the effort to cleanly replicate the original source of the sound, which is itself often composed of several frequencies (like a band playing). Even a single piano note produces harmonics and 'likes' other notes in the same chord structure, etc. (I'm not a musician.) What I get out of Paul's discussion is that in order to produce a 44.1 kHz sample rate (or more likely a 48 kHz sample rate), you have to design filters that work at a 22.5 kHz (or 24 kHz) sample rate (he doesn't explain why this is so), but he does explain why this is bad for the sound quality: the sound is distorted. Remember that to filter a signal is to choose one set of frequencies to pass through and others to be ignored. If you filter at a 12 dB slope on a signal you're only seeing 22,050 times a second, the waveform is going to look pretty choppy compared to filtering the same waveform when you can see it 44,100 times a second (half of 96 kHz). You might be able to filter at a 6 dB slope and achieve a smoother finished product (signal). Judging from the comments Paul and others on this thread have made, it seems DAC designers use this trick to good advantage: driving the rate at which the signal is sampled way up in order to use filters which are not so harsh, producing smoother, less distorted signals. Anyway, I'm not an engineer either, and I struggled to understand all this business about sample rates until I heard someone explain the advantages of higher sample rates for filter design. I hope I've helped you; at least I tried. :-)
It does make perfect sense. To cut off frequencies above 22.5kHz, a very sharp filter must be used. These filters almost always do some damage to the resultant sound we hear, such as time-domain problems. It is possible to make filters that for the most part eliminate these problems, but only very expensive ones, which would cause DACs and products with DACs in them to be a lot more expensive.
@@jeremiahchamberlin4499 I made it clear that I understood sample rates and sound frequencies. (I find it bizarre that you would question me on that.) I was not sure what Paul meant, because I could not make sense of his explanation. Note, not being picky, but you should check your maths, as 44.1KHz is not half of 96KHz. I am not sure what the issue with filtering around 20KHz is. I have Chord Electronics DACs and they have four settings: pure signal, a 20KHz cut-off which cuts noise from HD audio, and two other filters.
@@doowopper1951 Well, I have two Chord DACs that cut off above 20KHz with a filter setting. They are expensive. However, I don't recall right now any reason why low-pass filtering above 20KHz would cause issues. Paul said here that it can be heard if you cut off cleanly at, say, 22KHz. However, with my Chord DACs I play a high-res file like 96KHz or 192KHz and sometimes filter off above 20KHz. I hear no immediate change. There is so little going on up there anyway. Any harmonics playing out at that frequency or above are part of the signal under 20KHz. It's therefore (to my mind) best to keep the sampling rate high to pick up the harmonic fluctuations on the basenotes (that is base-notes, not bass-notes), all while filtering off sound frequencies above 20KHz to cut the high-frequency noise caused by high-res files.
@@threecats8219 I misunderstood your question; I believe Neil Russell has a good handle on it. I have seen YouTube videos which document the high-frequency sounds that are introduced by high-resolution processing of a signal. These high-frequency sounds are characterized as 'noise', which a filter designed to cut off frequencies beyond the range of human hearing (>20 kHz) eliminates. As Mr. Russell points out, the benefit of the higher-resolution processing is supposed to be heard at the base-frequency level, not at the higher frequencies of its harmonics. So the filter at 20 kHz gives you the benefit without the detraction.
I have made 2 passive subwoofer boxes with 12-inch Pioneer subwoofers that I run from a monoblock Sony car amplifier for home use. So Paul sir, can you tell me whether I should use polyfill or acoustic foam inside the subwoofer boxes? Your fan, writing from India.
I'd go for 96kHz lossy AAC as the consumer format... (a 4-minute song in 96kHz AAC/M4A is like 10MB in size, but in FLAC that's 120MB). AAC and WMA Pro are two known lossy codecs that can encode hi-res audio, and more recently the snake-oily MQA; they are hi-res while small in size. Not MP3, Ogg/Vorbis, WMA v2, or Opus; those are in the 48kHz gang. I often convert mediocre songs from 96kHz FLAC into V0/48kHz/270kbps MP3... 48kHz is hi-res enough for me...
@@doowopper1951 Tell that judgement to the streaming services that adopted the MQA format, a lossy one for high resolution... The future is lossy. The mainstream is lossy...
I use a DSP with harmonic extrapolation to approximate and add back the high frequencies and soft bass lost in lossy compression. This DSP also uses a multiband compressor to pronounce transients, with the side effect of creating a slight low-volume mumbliness that helps fill spectral holes. The DSP uses 24 bits, giving the added effects more headroom, so the end result has a slightly extended dynamic range. Furthermore, I add a tiny bit of reverb to make my small room sound wider, but that's beside the point of lossy audio repair. Lossy files and algorithmic reconstruction are a surprisingly good-sounding combination. Not technically hi-fi, but enjoyable. Analog amps would likely help in further smoothing out the lossiness.
Although you'll think I'm a bit of a primitive hi-fi enthusiast, I own a very respectable hi-fi setup and did a weird thing a couple of years back. I bought an Amazon Echo Dot Alexa and connected it to the MP3 input of the amp via a 3.5mm jack-to-jack lead. My CD player is a good Rotel model, although quite elderly. I thought I'd do an A-versus-B comparison playing Donald Fagen's 'Nightfly' CD against the Amazon HD version on the Alexa. I've been a musician since around 1968, so I have oodles of experience, professional, semi-pro and jamming, on both bass trombone and bass guitar, across loads of various genres. I consider that my ears are very well trained. When doing the comparison, I discovered that the two sources sounded very similar if not quite identical, which puzzled me. Your input would be very well accepted if you can find the time, because other viewers may find it interesting... Thanks so much and kind regards. Den Finch in the UK.
@Douglas Blake Hi Doug, I get slated on Facebook by so-called Audiophiles criticising my Alexa usage! All I can say is the Alexa cost £50 GBP and the Amazon Music subscription is £3-99 per month. I have access to virtually everything that's been recorded as well as being able to verbally ask the thing anything you'd type into your search engines.....:)
Can someone explain to me ripping files off of YouTube, and also whether bit rates above 192kbps matter in that kind of scenario? I know it's not the most ideal way to source your files, but hey, I don't have $100 a pop to hear some of these Zappa records...
A transport doesn't do anything but read the files on a disc. The DAC you connect it to converts them to analog. You can hook one transport up to multiple DACs and get a different sound from each. If you feed a lossless 16/44.1 file to the same DAC, whether via stream or transport, it will sound identical. Same zeros and ones being converted.
@@hoihallo2904 Audiophiles will never understand how digital works! Hence they buy "audiophile network cables" and "audiophile swichtes" - the same dumbness as thinking the last 0.5 meters of a power chain for thousands of Dollars will magically fix the sound
@@Harald_Reindl Audiophiles, finding solutions for things that aren't a problem😉. Power cables supplied with the product you buy are always good enough. And if the cable is faulty, you will have a different problem than "muddy bass" or " less air around the instrument". You have a fire hazard 😲😂
Surely buying tickets to hear some concert or opera or any other true live performances is a helluva lot cheaper than the cost of all this ultra expensive audio gear and methods of recording
But trying to follow the good bands or acts may cost a lot for a one-time event!!! And those may be subpar, etc.!!! And members die or leave bands, negating later experiences, and now Covid has cut into a chunk of touring.
I agree that downsampling from the sample rate that was used in the capture is probably a bad idea, but there are two facts that make me skeptical of the conclusion. 1. There is very little musical content above 5 kHz. 2. Almost anyone over, say, 45 couldn't hear anything above 10 kHz even if there was important content to be appreciated. The highest note on a piano is C8, which reveals the piano features 8 octaves of C, which is a very wide range compared with most other musical instruments. C8 has a frequency of 4186 Hz. The human voice goes up to about 4 kHz. Even considering Nyquist, I am a little unclear about why there is all this concern for a 20 kHz upper bandwidth.
Right, the main frequencies of music are below 5kHz, but the resonating sounds of an instrument go well above 15kHz or even above 20kHz... that's what makes every instrument unique. The little-above-5kHz theory only applies to old FM 8-bit game music; even old MIDI will go beyond 5kHz...
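The arithmetic behind that reply is easy to check: harmonics sit at integer multiples of the fundamental, so even the piano's top note reaches past 20kHz by its fifth harmonic. In Python:

```python
c8 = 4186  # Hz, the highest note on a standard piano
for k in range(1, 6):
    print(f"harmonic {k}: {c8 * k} Hz")
# harmonic 5 lands at 20,930 Hz, already past the nominal limit of hearing
```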
@Douglas Blake New music yes, but a lot of older recordings are being remastered from the original tapes. Vinyl has never been better if you look at the more audiophile labels. Analogue Productions, Mobile Fidelity etc. would be a good starting point.
All this hi-res nonsense needs to be exposed for the marketing fraud that it is. Higher sample rates and more bits are used in recording and mastering to give the engineer headroom and the ability to deliver a superb 16/44.1 version to the listener. Many audiophiles have been misled into thinking that they are being denied access to the original masters, and record labels are happy to "fix" the problem by selling them the same music again, only as a so-called hi-res version.

The number of bits defines the available dynamic range/noise floor; this is all it does. 16 bits gives a huge dynamic range, way more than needed. A recording with a larger dynamic range would be undesirable, as quiet passages would be too quiet to hear or loud passages would be deafening.

The sample rate defines the highest frequency able to be reproduced. 44.1KHz sampling gives 20KHz, which is at the upper limit of hearing. This is all it does. Sample rates higher than 44.1KHz only add ultrasonic content that was never an audible part of the performance. If you buy a so-called hi-res version of a CD-quality master, you are buying a large file padded out with millions of zeros. Several well-designed ABX tests show audiophiles cannot pick hi-res recordings over CD quality.
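The "padded out" claim is easy to test on any hi-res release you suspect is a straight upsample of the CD master. A minimal Python sketch, assuming scipy (the noise signal stands in for a real master):

```python
import numpy as np
from scipy.signal import resample_poly

cd = np.random.randn(44_100)      # stand-in for one second of a 16/44.1 master
hires = resample_poly(cd, 2, 1)   # a fake "88.2 kHz hi-res" version of it

spec = np.abs(np.fft.rfft(hires))
freqs = np.fft.rfftfreq(len(hires), 1 / 88_200)
# Everything above the old 22.05 kHz Nyquist limit is essentially empty:
print(spec[freqs > 22_050].max() / spec.max())
```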
If someone hears only below 14,000Hz, like many elderly people, then 15,000Hz is ultrasonic to him... and 15,000Hz sound is everywhere and must be included in the recording; otherwise people would feel satisfied with 72kbps MP3 files - sub-wideband quality. My point is that certain safe ultrasounds may well be part of the performance, and they can only be included in a recording at higher resolution...
@@imkow The average 35-year-old male has 11dB of hearing loss at 8KHz. The finest studio microphones have a deliberate steep roll-off at 20KHz to prevent instability. A CD is capable of delivering a flat response from DC to 20KHz.
@@geoff37s38 -11dB doesnt mean silence..and is still audible. we still can see from statistics of audio files that show samples above 20khz very often...especially from those modern digital composed music with electric instruments
@@imkow -11dB is a reduction of 92%. The vast majority of adults have no hearing above 15KHz, and of the very few who are fortunate (or is it unfortunate?) enough to hear up to 20KHz, the sound level up there will be way below the background and totally masked by all the other frequencies in the midrange.
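For the record, the 92% figure is the power reduction; the amplitude reduction is smaller. The dB arithmetic, in Python:

```python
db = -11
print(10 ** (db / 20))  # ~0.28: amplitude ratio (~72% amplitude reduction)
print(10 ** (db / 10))  # ~0.079: power ratio (~92% reduction, the figure quoted)
```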
@@geoff37s38 I don't really buy this -11dB conclusion, else I'd have noticed a sudden deafening event... You know lossy audio codecs always filter out everything above 15kHz at low bit rates like 72kbps MP3, and that makes the audio sound flat... I may not be hearing 15kHz, but I want my MP3s to keep at least 18kHz to reach audio transparency; i.e., you also want those directional background sounds there in the recording. In the paper "A Survey of Musical Instrument Spectra to 102.4 kHz", James Boyk tested instruments like trumpet, horn, violin and oboe, and they all make waves above 20kHz... I guess it'd be good to provide an option for individuals who want to hear (or feel) those waves more abundantly via hi-res devices...
Well, I used to send photos to publishers, and they wanted 300dpi images and said NO when they got 72dpi images. Bad quality, they said. I resampled the same image to 300dpi. Accepted. Even if the file is crap and full of artifacts it doesn't matter: high dpi good, low dpi bad. Even though it doesn't matter; the file has the same number of pixels. dpi is a meaningless term.
I'm sure they were probably hoping you'd send them an original image captured at the higher dpi (assuming it was). As a web developer we used 72 dpi for the longest time, because screen pixels weren't bunched together closely enough to show the higher dpi, but nice printers can print at much higher dpi. The dpi isn't meaningless; the publishers you sent to just didn't enforce strict quality controls, or their machines required 300 dpi.
Do the reverse. Take an original image/photo that was created or captured at 300dpi or above and resample it to 72dpi and then upsample it back to 300dpi. Now print the original and the resampled version and you'll see dpi is not meaningless.
@@vanoverbay774 exact same image. Deterioration would come from recompressing an already compressed image, not from changing the dpi. If you create a 72dpi and a 300dpi image file from the same raw image you get the exact same amount of pixels with the same information. The only difference is the projected size on the medium you are viewing it on if it is not rescaled. The dpi requirement is a deprecated custom from when software had trouble with scaling images.
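That "deprecated custom" is easy to demonstrate: the dpi field in an image file is just metadata, and changing it rewrites a print-size hint without touching the pixel grid. A sketch with Pillow (the filenames are hypothetical):

```python
from PIL import Image

img = Image.open("photo.jpg")                 # hypothetical source file
img.save("photo_300dpi.jpg", dpi=(300, 300))  # same pixel grid, new print-size hint

reopened = Image.open("photo_300dpi.jpg")
print(img.size == reopened.size)              # True: pixel dimensions unchanged
```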
The die is cast before the end user buys then listens to the music... Done right 16/44.1 is as good as 24/96 to the point most of us would not do well in a DBT. 'Oh this is better' is just another way to get more of your money.
First of all, don't upsample. Why? Also, what is played above 22kHz that I could or would want to hear? I try to buy music, if known and if possible, in its originally mastered format. If you master at a lower rate than what was recorded, then why would I want a downsampled, mastered, re-upsampled distribution? Just silly.

Also, I've heard (not an expert, can't confirm) that DSD is a major pain to edit and master, and that most of it is converted to PCM for mastering and then back. That may have changed, and much of the native DSD stuff is orchestral and live music that may not get edited, but that's just dumb.

Personal caveat: I do not possess the means or desire to have a $200k system. Yes, someone with Paul's listening room can probably hear the difference between a recording that was native 96kHz played back at 96kHz and Red Book CD 44.1. I could not on my system. Yes, I can hear the difference between Spotify and lossless, but not in the previous example. Enjoy the music, not the numbers. Buying a CD vs a 96kHz file shouldn't rob the joy of the music if it is a good master (which is really what matters, not the sample rate).
1:10 This is like asking: 'With modern 4K high-end TVs capable of upsampling/upscaling 1080p content to 4K UHD resolution, is there a need for native 4K UHD content?' Well duh, of course there is.

The reason for audio is that a 44.1 kHz sampling rate is not even close to all that we need for what our ears are capable of hearing. The Nyquist sampling theorem has been wrongly interpreted for decades - not to get into other, more sinister motives. High-res audio is not about frequencies beyond 20kHz. It's about sampling all the individual frequencies that we can hear at the same time in the audible 20Hz-20kHz(ish) spectrum. We don't need high-res audio to sample 100 kHz audio waves. We need it to sample multiple concurrent audio waves in the 20Hz-20kHz spectrum. Yes, we can hear multiple soundwaves concurrently! This is the big revelation the audio industry should wake up to! We can hear multiple soundwaves at the same time (many, in fact) and we need sampling rates that cover the sum of the required sampling rate for each individual soundwave in the content being recorded.

Also, even if the number of soundwaves that we can hear concurrently may prove to have a certain limit (so maybe not all the 20-ish thousand soundwaves in the audible spectrum), it may be possible that we can tune our ears on the fly to different sets of multiple frequencies heard at the same time, which would support the need for very high sampling rates (many MHz, likely). A whole world of research & development awaits the audio world in this area. Get to it! :D
@Douglas Blake You're funny. Why did you not reply anymore to the conversation we had under another of Paul's videos when I first made this claim? I told you before: your claim is BS. A speaker cone or diaphragm, like the human eardrum, does not have to be in more than one place at once to produce or, in the case of the eardrum, perceive and transfer multiple soundwaves at the same time. You're just being stubborn and unimaginative. A diaphragm, be it of a speaker or of the human ear, can vibrate in a, let's say, 100Hz cycle while doing smaller cycles at higher frequencies. The physical material of the diaphragm or eardrum does not have to be in more than one place at once. You're just trying to derail the conversation with a false assumption. This revelation, of multi-frequency perception, is a true paradigm shift in the audio world, and it explains why we can easily hear huge differences between standard CD 16/44.1kHz and hi-res audio like DSD, which has sampling rates in the MHz range.
@Douglas Blake You seem incorrigible on the fact that a diaphragm of any sort, be it of a speaker, eardrum or microphone, can produce multiple audio frequencies concurrently while not violating any law of physics. What else about the 'complex waveform' are you trying to convey? I am fully open to conversation with people who are open to facts & reason. The history of science is chock-full of examples of long-believed ideas that were eventually refined or downright disproved, like the long-used argument that the Nyquist-Shannon theorem proves it is enough to sample at only twice the highest frequency in the audible range for us to perceive the entire range of audio waves. This argument is false, because we can hear multiple soundwaves at the same time, and we need to sample each wave with a sum of samples. There, the claim that 44.1kHz sampling is enough is proven false. Listen to native DSD recordings at MHz rates on a decent hi-fi system, which I'm sure you have, and that will definitively confirm my claims to you or anyone with openness to reason.
@Douglas Blake Well then, let me have the last word: you clearly haven't said the same thing I did in different words, and I will leave that to others reading this comment thread to judge, starting from your first reply above. What we agree on is that an object can only be in one place at a time (at least on a non-quantum scale :D). What we disagree on, based on your replies, is that a diaphragm of any kind (speaker, microphone, human eardrum, etc.) can capture and reproduce multiple soundwaves at the same time. If you think we agree on this issue as well, that's odd, because you've made clear claims to the exact contrary.
Much like The Rolling Troll says, a computer is only as smart as the data it's given. And all this hi-res, 4K gobbledygook is getting to be such overkill for the average consumer. An 80" hi-res TV to watch The Dark Knight? Okay, why not 4K. But a 21" TV to watch Citizen Kane in 4K? Seriously, how much better can it look? Same with audio: if it's the London Philharmonic giving its best performance using the best microphones and recording equipment, and one's got the best stereo, speakers and/or headphones, okay. But to expect some cassette transfer recorded at 22kHz or 128kbps to sound amazing on an upscaling system of the highest order? That's like expecting a used Beatles record to sound amazing on a Marantz turntable with the best diamond stylus. Maybe slightly louder, but certainly not any better.
I feel like Paul has been kind enough to answer this question for the 53rd time.
@Douglas Blake Yup, that is ironic indeed. The qualities of digital recording found their use in unintended ways.
Time to ramp up production of the PS Audio Cloning Machine, then we can all have our own Personal Paul® to answer our questions
Be careful what you wish for.😀
Wow Paul, I had no idea that many people write in to ask you things! I feel so incredibly LUCKY now I had one of my questions answered! Thank you sir!
Yes and no. A big reason to upsample is getting away from the problems with sharp filters and phase shift.
@@marcusm5127 I'm not saying it doesn't work, it's just not as good as actually having a good source. Maybe you can polish a turd a tiny bit
@@rollingtroll Yeah I much prefer a good source. If Spotify could have their entire library in DSD that would be great.
The weakest link is your ear (see the blind test in my post above yours).
I think the real question is why do the distributors of music decide to sell low end files, when all the data limitations of the past are no longer relevant, and why is a wav file automatically more expensive than an mp3, and a 24bit file more expensive than a 16bit? Amazon for example is completely capable of selling higher quality files, but is stuck on default mp3 and CD. That being said, people claiming mp3 "sounds bad" is a thing of the past, obviously they are much cleaner than in the old days, but I still prefer wav at max bit rate just because I don't like the idea of variable bit rates. I want the door wide open, I don't care if it's just a mouse coming through it.
All the major digital music sellers are just stuck on mp3 as if we need to save space like in the past, when it's just not the case anymore. For years now I've just bought the CD and ripped it to wav at the highest bit rate it will allow and that sounds great, but how many people are going to rebuy every song in their library as 24bit after spending all these years of collecting "standard" release quality, especially at the arbitrary price markups that shouldn't exist in the modern day that has 20MB/s downloads and 100's of gigabytes of storage space?
The gate keepers of music seem to be doing most of the foul ups in the industry as far as I can see.
It all circles back to the most popular sales/downloads, aka "Pop" music.
Nowadays Pop is compressed, limited and clipped so much in mastering that it makes zero difference how you receive it. 16-bit/44.1KHz for Pop music is perfectly fine and always will be, because the distortion introduced during mastering and tiny noise floor due to huge loudness is not going to have any audible difference at 24-bit/192KHz.
We need to work on quieter masters first so that we can get the industry to realise the benefits, albeit minuscule IMO, of hi-res audio.
because tons of people listen on sh1tty earbuds
A song that was upsampled from 44.1kHz to 88.2kHz will not sound as good (as correct) as that same song would if it were actually recorded at 88.2kHz.
Using technology to artificially enhance a song is never as good as what that song would have been if it had the higher quality to begin with. Why?
Upsampling is a mathematical formula that looks at two adjacent samples, and calculates what probably would have been between those two samples, had the recording put it there in the first place. So it never (and I mean never) creates an accurate new frame between the existing frames.
Like watching a TV show:
Would you want a computer to compare two adjacent frames, and insert an artificial frame in-between those two existing frames?
When computers create new frames (when they upsample), the new frames are based on an average of the two existing frames. In the case of music, the "average" is close, but never "on the money".
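Here is a tiny Python sketch of the simplest scheme described above, plain linear interpolation between neighbouring samples (toy numbers, my own illustration). Real resamplers use far better filters than this, but none of them can recover content the original capture never had.
```python
# Toy example: upsampled (interpolated) values vs. what a real higher-rate
# capture of the same tone would actually have contained.
import numpy as np

fs = 8
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 3 * t)             # a 3 Hz tone, coarsely sampled

t_up = np.arange(fs * 2) / (fs * 2)       # 2x upsample grid
x_guess = np.interp(t_up, t, x)           # guessed in-between samples
x_true = np.sin(2 * np.pi * 3 * t_up)     # what recording at 2x would hold

print(f"worst-case error of the guess: {np.max(np.abs(x_guess - x_true)):.2f}")
```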
But all of this is moot, when the source material is sub-par. And nearly all popular music is sub-par.
Nearly all popular music is over-processed, put through one "active" and unnecessary box after another, resulting in hearing the flaws, the coloration, from every box (the degradation is cumulative). So when you end up with a lousy, compressed, equalized, over-processed audio recording, the sampling rate is moot, just as a blurry photo stays blurry no matter its resolution.
In summary:
-- Higher resolution is better -- but only when the recording is made properly -- and very few are.
-- Upsampling from low resolution to high resolution will not and can not be as good as actually recording at the high resolution.
Lastly, sometimes upsampling does make a song sound better (depends on each person's taste).
But that upsampled sound is not natural. That does not mean that it will not be a pleasing sound. It simply will be pleasing, yet inaccurate.
Cheers!
Correct.
Also adding: the quality of the upsampling algorithms is critically important. An upsampling DAC from dCS will sound materially different from a $200 Crutchfield model....
Regarding your statement "Would you want a computer to compare two adjacent frames, and insert an artificial frame in-between those two existing frames?" Most, if not all, modern TVs have been doing this for probably over a decade now. It's called the "soap-opera effect" in layman's terms. It is on by default on all TVs, so yes, one could say people want a "computer to insert an artificial frame in-between two existing frames". ¯\_(ツ)_/¯
@@unready56 I spoke to that type of process, in my last paragraph:
Lastly, sometimes upsampling does make a song sound better (depends on each person's taste).
But that upsampled sound is not natural. That does not mean that it will not be a pleasing sound. It simply will be pleasing, yet inaccurate.
For me, after many years of playing loud live music, and knowing my listening audience aren’t audiophiles, 44.1 is more than fine. Hell, my hearing seriously drops at 6k. I still do try to make my recordings sound as good as possible. I just can’t hear well enough any more to know.
"It's just me!" LOL!! And that's why we watch your video's Paul! Two Pauls... Paul McGowan and Paul McCartney :)
And Paul Cotton
When this question is asked, I always think back to a video I saw (somewhere on YouTube) where the guy argues that there is absolutely no need for artists to record above 16/44.1. I'm guessing that people who genuinely feel that way also feel very little need to really push the quality of the performance being recorded, let alone what they record it at. These individuals, no doubt, are from the same camp of listeners who say "cables don't matter".
I've never understood this: physical capability variations are accepted everywhere else.
Ex: I can run faster than you, you can jump farther than me, the next guy can throw farther than either of us... these are all allowed for by the masses.
But because person 'A' can't differentiate a sound, that *must mean* person 'B' can't either, and therefore he has to be lying... or selling snake oil.
I don't think so. The biggest step forward is to remix the original recordings! The digital audio recorders nowadays are so much better than 25 years ago. That really makes sense. These new remixes will sound amazing at 44.1kHz/16 bit.
It’s not that they don’t matter, it’s the diminishing returns and huge costs you get above a certain quality. E.g. studios don’t tend to use those pure silver, Kimber/Russ Andrews £200-a-metre cables for a reason. They’ll use a good quality OFC one like Van Damme or Klotz, at maybe £2-3 a metre, for that same reason.
As I’ve got older I’ve taken to ripping at 32kHz, which sounds like killer FM to my aged ears. As we get older I really believe we need to downsample.
Test it, try it…
I believe the short answer to the question is this: yes, you should still be interested in high-res audio if you have an upscaler. Rob Watts said he felt it would need a one-million-tap filter to extract all the information from Redbook. Hence the M-Scaler. However, if you give the M-Scaler more information to start with, in a higher-res file, it must be better. E.g. the M-Scaler should be able to retrieve and upscale more successfully with a 192kHz file than with a 44.1kHz file.
Going back to a million taps to extract everything from 44.1kHz: well, by nature a higher-res file has more information. To doubt that is to doubt the M-Scaler. That's because the M-Scaler sends 706kHz to the DAC output from a 44.1kHz input. It's the DAC getting more information about the sine wave that makes the music output better.
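For what it's worth, the output rate quoted there is just 16x Redbook; a one-line check:
```python
print(44_100 * 16)   # 705600 Hz, i.e. the "706kHz" figure, rounded
```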
The only context where the higher frequencies matter is for music producers and sampling, if they're playing with frequency modulation.
However, even in that context it's pretty nebulous how much it matters.
Why hasn't the CD been upgraded like home video formats over the years? I'd love to have CDs at higher bit depth and whatnot.
DSD and FLAC files have replaced Redbook CDs, so there is no point in going there. SACD is probably the closest thing to what you are looking for: it's DSD on discs.
Too bad there's next to no content genuinely recorded in 24 bit or DSD
They tried DVD-A, SACD, HD Audio and Blu-ray Audio, but the general public never bought into them, and those formats either died completely or became niche formats.
After having to replace our 8-Track collection, then our vinyl collection, then our cassette collection, the American public balks at the idea of having to replace our entire collection yet again.
@@Progmatist Do you still use a black and white TV? Tech improves; that is just how it is. Streaming is the replacement, and now we pay to stream music we already own hard copies of.
I have a Chord MScaler and it definitely makes a difference. It's very easy to do A/B comparisons by cycling through the upsampling rates. As for hi-res, as has been said many times before, it all comes down to the recording, mixing and mastering. There are some extremely poor hi-res recordings out there.
Or just get foobar2000 and the SoX resampler; it always does a better job than a DAC's built-in filtering.
Now that I finally have a DAC and streaming service that go up to 192kHz/24bit, plus speakers with 40kHz bandwidth, I am still waiting for someone, somewhere, to release some music I actually like in that format. Not a fan of commercial pop, jazz or classical, unfortunately. All I know is that the system sounds fantastic at 44.1kHz/16bit for now.
Another thing I wonder is whether the DAC filters really do roll off more gently when given a higher sampling rate to work with, or whether the manufacturers just shift the harsh, ringing brick wall further up the spectrum.
Curious as to what instruments play above 22kHz. I would guess there are some 2nd harmonics in that range?
It's not just about frequency. Think cycle rate, or how many times per second the mic polls and records the audio.
If you're recording at 192,000 times per second, you're able to capture much more detail in the attack of instruments and flow of waves.
When you cut from 192KHz to 44.1KHz, you're losing 4x the level of detail and it changes the entire sound.
It's not drastic, but if you have a (very, very) high quality audio system and a perfect listening environment, you'll hear the difference in minute details.
@@olorinhenderson Thanks for your response. I had thought that was more down to the bit depth of 16 or 24, i.e. how finely each cycle at a given frequency is sampled.
@Douglas Blake Thanks for your response. Whoa, 4th-16th harmonics, that's some serious stuff. I would guess that if you have a quiet enough room and are listening at a high level, some of that could be ascertained.
Not with human ears you won't.
The shady dark art of the digital world well explained there Paul 👍🎧😁
I have rarely understood people's answers to this question, because mostly they never make sense. Like here, Paul says cutting everything off above 44.1kHz is bad.
Straight away this makes no sense, for the following reason. If he is saying that cutting off recorded sound above 20kHz is bad, that is where it makes no sense. If you record at 96kHz, any audio frequency over 20kHz is recorded into the sound anyway. That's harmonics, timbre, and physics. ... I'm referring to where he mentions filtering off at 22kHz.
To explain. 2:50 Does he mean filtering off any sound above 22kHz in frequency? Or does he mean filtering out any samples that come more frequently than 44.1kHz?
If you record at 96kHz and down-sample (i.e. remove samples), then it makes sense that you lose audio quality. The more samples you have, the easier it is to pick up the harmonics of a note, and hence the easier it is to hear the note's timbre, so the sound sounds right.
I think you might be confusing sample rates with sound frequencies. The filtering to which Paul refers is the effort to cleanly replicate the original source of the sound, which is itself often composed of several frequencies (like a band playing). Even a single piano note produces harmonics and 'likes' other notes in the same chord structure, etc. (I'm not a musician.)
What I get out of Paul's discussion is that in order to produce a 44.1 kHz sample rate (or more likely a 48 kHz sample rate), you have to design filters that work at a 22.05 kHz (or 24 kHz) cutoff (he doesn't explain why this is so), but he does explain why this is bad for the sound quality: the sound is distorted. Remember that to filter a signal is to choose one set of frequencies to pass through and others to be ignored. If you filter at a 12 dB slope on a signal you're only seeing 22,050 times a second, the waveform is going to look pretty choppy compared to filtering the same waveform when you can see it 44,100 times a second (half of 96 kHz). You might be able to filter at a 6 dB slope and achieve a smoother finished product (signal).
Judging from the comments Paul and others on this thread have made, it seems DAC designers use this trick to good advantage: driving the rate at which the signal is sampled way up in order to use filters which are not so harsh, producing smoother, less distorted signals.
Anyway, I’m not an engineer, either, and I struggled to understand all this business about sample rates until I heard someone explain the advantages of higher sample rates for filter design. I hope I’ve helped you, at least I tried. :-)
It does make perfect sense. To cut off frequencies above 22.05kHz, a very sharp filter must be used. These filters almost always do some damage to the resultant sound we hear, such as time-domain problems. It is possible to make filters that for the most part eliminate these problems, but only with very expensive filters, which would cause DACs and products with DACs in them to be a lot more expensive.
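A small Python sketch of that time-domain damage (the filter lengths are my own illustrative choices): the steeper a linear-phase low-pass is, the longer its impulse response rings around each transient.
```python
# Sketch: sharper cutoff => longer pre/post ringing in the impulse response.
import numpy as np
from scipy.signal import firwin

fs = 96_000
gentle = firwin(31, 20_000, fs=fs)    # short FIR: wide transition, brief ringing
steep = firwin(511, 20_000, fs=fs)    # long FIR: narrow transition, long ringing

for name, h in (("gentle", gentle), ("steep", steep)):
    # count samples where the impulse response stays above 0.1% of its peak
    idx = np.where(np.abs(h) > 1e-3 * np.abs(h).max())[0]
    print(f"{name}: ringing spans ~{idx[-1] - idx[0] + 1} samples")
```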
@@jeremiahchamberlin4499 I made it clear that I understood sample rates and sound frequencies. (I find it bizarre that you would question me on that.) I was not sure what Paul meant, because I could not make sense of his explanation. Note, not to be picky, but you should check your maths, as 44.1kHz is not half of 96kHz.
I am not sure what the issue with filtering around 20kHz is. I have Chord Electronics DACs and they have four settings: pure signal, a 20kHz cut-off which cuts noise from HD audio, and two other filters.
@@doowopper1951 Well, I have two Chord DACs that cut off above 20kHz with a filter setting. They are expensive. However, I don't recall right now any reason why low-pass filtering above 20kHz would cause issues. Paul said here that it can be heard if you cut off cleanly at, say, 22kHz. However, with my Chord DACs, I play a high-res file like 96kHz or 192kHz and sometimes filter off above 20kHz. I hear no immediate change.
There is so little going on up there anyway. Any harmonics playing out at that frequency or above are part of the signal under 20kHz. It's therefore (to my mind) best to keep the sampling rate high to pick up the harmonic fluctuations on the base notes (that is base notes, not bass notes), all while filtering off sound frequencies above 20kHz to cut the high-frequency noise caused by high-res files.
@@threecats8219 I misunderstood your question; I believe Neil Russell has a good handle on it. I have seen YouTube videos which document the high-frequency sounds that are introduced by high-resolution processing of a signal. These high-frequency sounds are characterized as 'noise', which a filter designed to cut off frequencies beyond the range of human hearing (>20 kHz) eliminates. As Mr. Russell points out, the benefit of the higher-resolution processing is supposed to be heard at the base frequency level, not at the higher frequency levels of its harmonics. So the filter at 20 kHz gives you the benefit without the detraction.
I have made 2 passive subwoofer boxes with 12-inch Pioneer subwoofers that I run from a monoblock Sony car amplifier for home use. So Paul sir, can you tell me whether I should use polyfill or acoustic foam inside the subwoofer boxes? Your fan, writing from India.
I'd go for 96kHz lossy AAC as the consumer format... (a 4-minute song in 96kHz AAC/M4A is like 10MB in size, but in FLAC that's 120MB). AAC and WMA Pro are two known lossy codecs that can encode hi-res audio, plus the more recent, snake-oily MQA. They are hi-res while small in size. Not MP3, Ogg/Vorbis, WMA v2, or Opus; those are in the 48kHz gang. I often convert mediocre songs from 96kHz FLAC into v0/48kHz/270kbps MP3...
48kHz is hi-res enough for me...
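To put those file sizes in perspective, a back-of-envelope calculation in Python (raw PCM before any compression, using the thread's 4-minute song):
```python
# Raw PCM size of a stereo track: rate * bits/8 * channels * seconds.
def raw_mb(rate, bits, seconds=240, channels=2):
    return rate * bits / 8 * channels * seconds / 1e6

print(f"96 kHz / 24-bit: {raw_mb(96_000, 24):.0f} MB raw")    # ~138 MB
print(f"44.1 kHz / 16-bit: {raw_mb(44_100, 16):.0f} MB raw")  # ~42 MB
# FLAC typically roughly halves the raw size; lossy AAC goes far smaller.
```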
Those are not considered hi-res because they are lossy. Maybe they use higher sampling rates, but that will never make any lossy format hi-res.
@@doowopper1951 Tell your judgement to the streaming services that adopted the MQA format, a lossy one for high resolution... The future is lossy. The mainstream is lossy...
I use a DSP with harmonic extrapolation to approximate and add back the high frequencies and soft bass lost in lossy compression. This DSP also uses a multiband compressor to pronounce transients, with the side effect of creating a slight low-volume mumbliness that helps fill spectral holes. The DSP uses 24 bits, giving the added effects more headroom, so the end result has a slightly extended dynamic range.
Furthermore, I add a tiny bit of reverb to make my small room sound wider, but that's beside the point of lossy audio repair.
Lossy files and algorithmic reconstruction are a surprisingly good-sounding combination. Not technically hi-fi, but enjoyable.
Analog amps would likely help in further smoothing out the lossiness.
Although you'll think I'm a bit of a primitive hi-fi enthusiast, I own a very respectable hi-fi setup and did a weird thing a couple of years back. I bought an Amazon Echo Dot Alexa and connected it to the MP3 input of the amp via a 3.5mm jack-to-jack lead. My CD player is a good Rotel model, although quite elderly. I thought I'd do an A-versus-B comparison, playing Donald Fagen's 'The Nightfly' CD against the Amazon HD version on the Alexa. I've been a musician since around 1968, so I have oodles of experience, professional, semi-pro, and jamming, on both bass trombone and bass guitar across loads of genres. I consider that my ears are very well trained. When doing the comparison, I discovered that the two sources sounded very similar if not quite identical, which puzzled me. Your input would be very well accepted if you can find the time, because other viewers may find it interesting... Thanks so much and kind regards.
Den Finch in the UK.
@Douglas Blake Hi Doug, I get slated on Facebook by so-called audiophiles criticising my Alexa usage! All I can say is the Alexa cost £50 and the Amazon Music subscription is £3.99 per month. I have access to virtually everything that's been recorded, as well as being able to verbally ask the thing anything you'd type into your search engine... :)
That's how you do it ! A letter ! It stands out 😆
All these so-called experts supposedly understand about bit rates and upsampling. Clucking hens.🐓
Paul, you mentioned upsampling DSD. Q: To do that, do you convert it to hi-res PCM (or DXD) first?
I'm looking for another job.
I’m sure Terri doesn’t agree with everything you say, either!
What a boring world it would be if our 'others' agreed with everything we said.
Can someone explain to me ripping files off of YouTube, and also whether bit rates above 192kbps matter in that kind of scenario? I know it's not the most ideal way to source your files, but hey, I don't have $100 a pop to hear some of these Zappa records..
@Douglas Blake Do you know of any good programs? I’m sure that is better but I don’t know how it works yet
there is nothing at 192kbit or higher on YouTube, no matter what the video pretends; that hi-res shit is nothing more than turned-up treble and bass
@Douglas Blake awesome, thanks
@Douglas Blake youtube sucks better than Gennifer Flowers.
But upsampling doesn't RESTORE any of that filtered information...
Does anyone know if Paul signs his Audiophile's Guide books? Thanks in advance!!
Mine wasn't
Just send me big money and it still won't be signed, but you'll make me real happy lol
@@DENVEROUTDOORMAN lol
The reason is the money in your pocket.
He sells hi-res equipment.
What a fantastic explanation✌
My reason for higher sample rates: lower latency.
Good one Paul
Not when your shitty $6k DAC is 75dB SINAD
But a CD transport sounds better than a digital stream, right?
Depends on the streamer, and the transport and the DAC lol.
why should it?
A transport doesn't do anything but read the files on a disc. The DAC you connect it to converts them to analog. You can hook one transport to multiple DACs and get a different sound. If you feed a lossless 16/44.1 file to the same DAC, whether via stream or transport, it will sound identical. Same zeros and ones being converted.
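One way to convince yourself of the "same zeros and ones" point, sketched in Python (the file names are hypothetical, and it assumes both files were made from the same master): decode each to PCM and hash the samples.
```python
# If the decoded PCM hashes match, the DAC receives identical data either way.
import hashlib
import soundfile as sf   # assumes the soundfile package is installed

for path in ("rip_from_cd.flac", "lossless_download.wav"):   # hypothetical files
    data, rate = sf.read(path, dtype="int16")
    digest = hashlib.sha256(data.tobytes()).hexdigest()[:16]
    print(path, rate, digest)
```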
@@hoihallo2904 Audiophiles will never understand how digital works! Hence they buy "audiophile network cables" and "audiophile switches", the same dumbness as thinking the last 0.5 meters of a power chain, costing thousands of dollars, will magically fix the sound
@@Harald_Reindl Audiophiles: finding solutions for things that aren't a problem 😉. The power cables supplied with the product you buy are always good enough. And if the cable is faulty, you will have a different problem than "muddy bass" or "less air around the instruments": you'll have a fire hazard 😲😂
there is no reason, unless you are a big record company who wants to sell you stuff over and over again !!!!!!!!!!!!!
No.
Surely buying tickets to hear some concert or opera or any other true live performances is a helluva lot cheaper than the cost of all this ultra expensive audio gear and methods of recording
But trying to follow the good bands or acts may cost a lot for a one-time event!!! And those may be subpar etc.!!! And members die or leave bands etc., negating later experiences, and now Covid has cut into a chunk of touring
I agree that downsampling from the sample rate that was used in the capture is probably a bad idea, but there are two facts that make me skeptical of the conclusion.
1. There is very little musical content above 5 kHz.
2. Almost anyone over, say, 45 couldn't hear anything above 10 kHz even if there was important content to be appreciated.
The highest note on a piano is C8, which means the piano spans 8 octaves of C, a very wide range compared with most other musical instruments. C8 has a fundamental frequency of 4186 Hz. The human voice goes up to about 4 kHz.
Even considering Nyquist, I am a little unclear about why there is all this concern for a 20 kHz upper bandwidth.
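One partial answer to that last question, sketched in Python using equal-temperament figures: the fundamentals top out around 4 kHz, but the overtones of that very C8 climb past 20 kHz quickly.
```python
# C8 is piano key 88; equal temperament: f(n) = 440 * 2**((n - 49) / 12).
f_c8 = 440 * 2 ** ((88 - 49) / 12)       # ~4186 Hz, matching the figure above
for k in range(1, 6):
    print(f"harmonic {k}: {k * f_c8:.0f} Hz")
# The 5th harmonic is already ~20930 Hz, which is why a 20 kHz bandwidth
# matters even though no fundamental gets anywhere near it.
```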
Right, the main frequency content of music is below 5kHz, but the resonating sounds of instruments are well above 15kHz or even above 20kHz... that's what makes every instrument unique...
The little-above-5kHz theory only applies to old FM 8-bit game music. Even old MIDI will go beyond 5kHz...
The solution is to listen to tape recorded music on vinyl
@Douglas Blake right, that’s why I like collecting vintage
@Douglas Blake New music yes, but a lot of older recordings are being remastered from the original tapes. Vinyl has never been better if you look at the more audiophile labels. Analogue Productions, Mobile Fidelity etc. would be a good starting point.
*off with your head **_!!!_*
Nope..😊
There is no reason for Peanut Butter Milkshakes either but some make them.
🤪😜🙄
Hi res sounds better to me!!!!!!!
All this hi-res nonsense needs to be exposed for the marketing fraud that it is. Higher sample rates and more bits are used in recording and mastering to give the engineer headroom and the ability to deliver a superb 16/44.1 version to the listener. Many audiophiles have been misled into thinking that they are being denied access to the original masters, and record labels are happy to "fix" the problem by selling them the same music again, only as a so-called hi-res version.
The number of bits defines the available dynamic range/noise floor; this is all it does. 16 bits gives a huge dynamic range, way more than needed. A recording with a larger dynamic range would be undesirable, as quiet passages would be too quiet to hear or loud passages would be deafening. The sample rate defines the highest frequency able to be reproduced; this is all it does. 44.1kHz sampling = 20kHz audio, which is at the upper limit of hearing. Sample rates higher than 44.1kHz reproduce ultrasonic noise that was not part of the performance. If you buy a so-called hi-res version, you are buying a large file padded out with millions of zeros. Several well-designed ABX tests show audiophiles cannot pick hi-res recordings over CD quality.
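The dynamic-range side of that argument is easy to check with the standard rule of thumb (roughly 6.02 dB per bit); a quick Python check:
```python
import math

for bits in (16, 24):
    print(f"{bits}-bit PCM: ~{20 * math.log10(2 ** bits):.1f} dB dynamic range")
# ~96.3 dB and ~144.5 dB; dither and noise shaping stretch 16-bit further still.
```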
If one hears only below 14,000Hz, like many elderly people, then 15,000Hz is ultrasonic to him... and 15,000Hz sound is everywhere and must be included in the recording, else people would feel satisfied with 72kbps MP3 files, i.e. sub-wideband quality. My point is that certain safe ultrasounds may well be part of the performance, and they can only be included in a higher-resolution recording...
@@imkow The average 35-year-old male has 11dB of hearing loss at 8kHz. The finest studio microphones have a deliberately steep roll-off at 20kHz to prevent instability. A CD is capable of delivering a flat response from DC to 20kHz.
@@geoff37s38 -11dB doesn't mean silence, and it's still audible. We can also see from the statistics of audio files that content above 20kHz shows up very often, especially in modern digitally composed music with electric instruments
@@imkow -11dB is a reduction of 92% in power. The vast majority of adults have no hearing above 15kHz. Of the very few that are fortunate (or is it unfortunate?) enough to hear up to 20kHz, the sound level will be way below background and totally masked by all the other frequencies in the midrange.
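Since you two are trading percentages, here is the arithmetic behind the -11dB figure (it depends on whether you count power or amplitude):
```python
print(f"power reduction:     {1 - 10 ** (-11 / 10):.0%}")   # ~92%
print(f"amplitude reduction: {1 - 10 ** (-11 / 20):.0%}")   # ~72%
```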
@@geoff37s38 I don't really buy this -11dB conclusion, else I'd have noticed this sudden deafening event... You know lossy audio codecs always filter out 15kHz at low bit rates like 72kbps MP3 and make the audio sound flat. I may not be hearing 15kHz, but I want my MP3s to have at least 18kHz to reach audio transparency, i.e. you also want those directional background sounds there in the recording. In the book "A Survey of Musical Instrument Spectra to 102.4 KHz", James Boyk tested instruments like trumpet, horn, violin, and oboe, and they all produce waves above 20kHz. I guess it'd be good to provide an option for individuals who want to hear (or feel) those waves more abundantly via hi-res devices..
Well, I used to send photos to publishers, and they wanted 300dpi images and said NO when they got 72dpi images. Bad quality, they said. I resampled the same image to 300dpi. Accepted. Even if the file is crap and full of artifacts, it doesn't matter: high dpi good, low dpi bad. Even though it makes no difference; the file has the same number of pixels. dpi is a meaningless term.
I'm sure they were probably hoping you'd send them an original image captured at the higher dpi (assuming it was). As a web developer I used 72 dpi for the longest time, because screen pixels weren't bunched together closely enough to show a higher dpi, but nice printers can print at much higher dpi. The dpi isn't meaningless; those publishers you sent to just didn't enforce strict quality controls, or their machines required 300 dpi.
Do the reverse. Take an original image/photo that was created or captured at 300dpi or above and resample it to 72dpi and then upsample it back to 300dpi. Now print the original and the resampled version and you'll see dpi is not meaningless.
@@vanoverbay774 exact same image. Deterioration would come from recompressing an already compressed image, not from changing the dpi. If you create a 72dpi and a 300dpi image file from the same raw image you get the exact same amount of pixels with the same information. The only difference is the projected size on the medium you are viewing it on if it is not rescaled. The dpi requirement is a deprecated custom from when software had trouble with scaling images.
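The point about dpi being only a tag can be demonstrated with Pillow (file names are hypothetical; PNG is used because re-saving it is lossless, avoiding the recompression issue noted above):
```python
# Changing the DPI tag rewrites metadata only; the pixel dimensions are unchanged.
from PIL import Image

img = Image.open("photo.png")                  # hypothetical input file
print(img.size, img.info.get("dpi"))           # e.g. (3000, 2000) (72, 72)

img.save("photo_300dpi.png", dpi=(300, 300))   # same pixels, new DPI tag

reopened = Image.open("photo_300dpi.png")
print(reopened.size, reopened.info.get("dpi")) # (3000, 2000) (~300, ~300)
```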
@@RetroGameStream Capturing dpi has nothing to do with it. Cameras capture in PPI (pixels per inch); those numbers are far larger than DPI.
The die is cast before the end user buys and then listens to the music... Done right, 16/44.1 is as good as 24/96, to the point that most of us would not do well in a DBT. 'Oh, this is better' is just another way to get more of your money.
First of all, don’t upsample. Why?
Also, what is played above 22kHz that I could or would want to hear?
I try to buy music, if known and if possible, in its originally mastered format. If you master at a lower rate than what was recorded, then why would I want a downsampled, mastered, re-upsampled distribution? Just silly. Also, I've heard (not an expert, can't confirm) that DSD is a major pain to edit and master, and that most of it is converted to PCM for mastering and then back. That may have changed, and much of the native DSD material is orchestral and live music that may not get edited, but that's just dumb.
Personal caveat: I do not possess the means or desire to have a $200k system. Yes, someone with Paul's listening room can probably hear the difference in a recording that was native 96kHz, played back at 96kHz, vs Redbook CD 44.1. I could not on my system. Yes, I can hear the difference between Spotify and lossless, but not in the previous example. Enjoy the music, not the numbers. Buying a CD vs a 96kHz file shouldn't rob the joy of the music if it is a good master (which is really what matters, not the sample rate).
Is that dog HMV ???
There are no reasons whatsoever for any digital audio files. Listen to records and tapes.
1:10 This is like asking: 'With modern 4K high-end TV's capable of upsampling/upscaling/upresing 1080p content to 4k UHD resolution, is there a need for native 4K UHD content?"
Well duh, of course there is.
The reason for audio is that a 44.1 kHz sampling rate is not even close to all that we need for what our ears are capable of hearing. The Nyquist sampling theorem has been wrongly interpreted for decades, not to get into other, more sinister motives.
High-res audio is not about frequencies beyond 20kHz. It's about sampling all the individual frequencies that we can hear at the same time in the audible 20Hz-20kHz(ish) spectrum.
We don't need high res audio to sample 100 kHz frequency audio waves. We need it to sample multiple concurrent audio waves in the 20Hz-20kHz spectrum.
Yes, we can hear multiple soundwaves concurrently! This is the big revelation the audio industry should wake up to!
We can hear multiple soundwaves at the same time (many in fact) and we need sampling rates that cover the sum of the required sampling rate for each individual soundwave that happens in the content getting recorded.
Also, even if the number of soundwaves that we can hear concurrently proves to have a certain limit (so maybe not all of the 20-ish thousand soundwaves in the audible spectrum), it may be possible that we could tune our ears on the fly to different sets of multiple frequencies which we could hear at the same time, which would support the need for very high sampling rates (likely many MHz).
A whole world of research & development awaits the audio world in this area. Get to it! :D
@Douglas Blake You're funny. Why did you not reply anymore to the conversation we had in the other of Paul's videos when I first made this claim?
I told you before: Your claim is bs: a speaker cone or diaphragm, like the human eardrum, does not have to be in more than 1 place at once to produce or, in the case of the eardrum, perceive and transfer multiple soundwaves at the same time.
You're just being stubborn and unimaginative. A diaphragm, be it of a speaker or of the human ear, can vibrate at, let's say, a 100Hz frequency cycle while doing smaller cycles of higher frequencies. The physical material of the diaphragm or eardrum does not have to be in more than one place at once. You're just trying to derail the conversation with a false assumption.
This revelation, of multi-frequency perception, is a true paradigm shift in the audio world, and is the explanation of why we can easily hear huge differences between standard CD 16/44.1kHz and hi-res audio like DSD, which is in the MHz range sampling rates.
@Douglas Blake You seem impervious to the fact that a diaphragm of any sort, be it of a speaker, eardrum or microphone, can produce multiple audio frequencies concurrently while not violating any law of physics.
What else about the 'complex waveform' are you trying to convey?
I am fully open to conversation with people who are open to facts & reason. The history of science is chock-full of examples of long-believed ideas that were eventually refined or downright disproved, like the long-used argument that the Nyquist-Shannon theorem proves it's enough to sample at only twice the highest frequency in the audible range for us to perceive the entire range of audio waves. This argument is false, because we can hear multiple soundwaves at the same time, and we need to sample each wave in a sum of samples.
There, the claim about 44.1kHz sampling being enough is proven false. Listen to native DSD recordings at MHz rates on a decent hi-fi system, which I'm sure you have, and that will definitively confirm my claims to you or anyone with openness to reason.
@Douglas Blake Well then let me have the last word: You clearly haven't said the same thing I did in different words, and I will also leave that to others reading this comment thread to judge.
from your first reply above:
What we agree on is that an object can only be in one place at a time (at least on a non-quantum scale :D).
What we disagree on, based on your replies, is whether a diaphragm of any kind (speaker, microphone, human eardrum etc.) can capture and reproduce multiple soundwaves at the same time. If you think we agree on this issue as well, that's odd, because you've made clear claims to the exact contrary.
Much like The Rolling Troll says, a computer is only as smart as the data it's given. And all this hi-res, 4K gobbledygook is getting to be such overkill for the average consumer. An 80" hi-res TV to watch The Dark Knight? Okay, why not 4K. But a 21" TV to watch Citizen Kane in 4K? Seriously, how much better can it look? Same with audio. If it's the London Philharmonic giving its best performance, recorded with the best microphones and recording equipment, and one's got the best stereo, speakers and/or headphones, okay. But to expect some cassette transfer recorded at 22kHz or 128kbps to sound amazing on an upscaling system of the highest order? That's like expecting a used Beatles record to sound amazing on a Marantz turntable with the best diamond stylus. Maybe slightly louder, but certainly not any better.