
92/192 vs. 44.1


Recommended Posts

In a very interesting interview on Audiostream, Steve Nugent from Empirical Audio claims the following:

 

"Now that I have a really low-jitter Async USB interface and my own DAC, which eliminates most of the issues with digital filtering, I am not so convinced that higher sample-rates are significantly better".

Later in the interview he still defends higher sample rates for their role in better transients etc., but the comment makes you think.

 

I don't intend to kick off yet another hi-res vs. redbook debate, we've had enough of those already, but I've always wondered where the bigger improvement comes from: the 16 vs. 24 bits, or the higher sample rates.

Link to comment
but I've always wondered where the bigger improvement comes from

 

Time to stop wondering and do something about it yourself.

 

- Get a 24/192 file.

- Convert it to 24/44.1, 16/192 and 16/44.1.

- Compare the four files blindly

- Deduce the answer from your preference
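If you want to script the conversions, the bit-depth step can be sketched in a few lines. This is only an illustration of the principle (drop 24-bit samples to 16 bits after adding TPDF dither); a real converter such as SoX or a DAW does the same job with more care, and the dither choice here is an assumption:

```python
import random

def requantize_24_to_16(samples_24):
    """Reduce 24-bit integer samples to 16-bit words by dropping the low
    8 bits, after adding TPDF dither so the rounding error becomes noise."""
    out = []
    for s in samples_24:
        # TPDF dither: sum of two uniforms, spanning +/-1 LSB of the
        # 16-bit target (1 LSB at 16 bits = 256 at 24-bit scale).
        dither = random.randint(-128, 128) + random.randint(-128, 128)
        d = max(-(2 ** 23), min(2 ** 23 - 1, s + dither))  # clamp to 24-bit range
        out.append(d >> 8)  # keep the top 16 bits
    return out

# A few hypothetical 24-bit sample values squeezed into 16 bits:
print(requantize_24_to_16([0, 1_000_000, -1_000_000]))
```

The `>> 8` is what turns 24-bit words into 16-bit ones; the dither decorrelates the rounding error from the music, so it comes out as benign hiss rather than distortion.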

Link to comment
Time to stop wondering and do something about it yourself.

 

- Get a 24/192 file.

- Convert it to 24/44.1, 16/192 and 16/44.1.

- Compare the four files blindly

- Deduce the answer from your preference

 

Yep, sounds like a sound plan of action.

The Truth Is Out There

Link to comment

I had done exactly this when I first got my DAC, but only on one album, and found the difference to be rather small. However, I was handicapped by my DAC, which has only a fixed 24/96 USB input, so anything I fed it was up- or downsampled by Audirvana anyhow.

 

In spite of this, I've purchased several high-res albums since, so it's a good idea and I'll try what you suggested. Now that I have my new BelCanto mlink USB-SPDIF, I can actually feed unconverted redbook and true 192kHz files into my chain; we'll see if it makes a difference.

Link to comment
I had done exactly this when I first got my DAC, but only on one album, and found the difference to be rather small. However, I was handicapped by my DAC, which has only a fixed 24/96 USB input, so anything I fed it was up- or downsampled by Audirvana anyhow.

 

In spite of this, I've purchased several high-res albums since, so it's a good idea and I'll try what you suggested. Now that I have my new BelCanto mlink USB-SPDIF, I can actually feed unconverted redbook and true 192kHz files into my chain; we'll see if it makes a difference.

 

This would not be a test of the files, but of the Benchmark's capabilities. It does not upsample per se, but resamples its inputs at 110kHz, IIRC. All files are converted to that rate internally. You would need to look into a DAC that doesn't sample-rate convert.

Forrest:

Win10 i9 9900KS/GTX1060 HQPlayer4>Win10 NAA

DSD>Pavel's DSC2.6>Bent Audio TAP>

Parasound JC1>"Naked" Quad ESL63/Tannoy PS350B subs<100Hz

Link to comment
This would not be a test of the files, but of the Benchmark's capabilities. It does not upsample per se, but resamples its inputs at 110kHz, IIRC. All files are converted to that rate internally. You would need to look into a DAC that doesn't sample-rate convert.
Don't most DACs oversample in the DAC chip?

13.3" MacBook Air, 4GB RAM, 256GB SSD; iTunes/Bit Perfect; MacBook Air SuperDrive; Western Digital My Book Essential 2TB USB HD; Schiit Bifrost USB DAC; Emotiva USP-1, ERC-1 and two UPA-1s; Pro-Ject Xpression III and AT440MLa; AKAI AT-2600 and Harman Kardon TD4400; Grado SR80i; Magnepan MMG Magnestands; and, Rythmik Audio F12

Link to comment
Don't most DACs oversample in the DAC chip?

 

Yes, but up/oversampling is different from converting to a different frequency altogether.

Forrest:

Win10 i9 9900KS/GTX1060 HQPlayer4>Win10 NAA

DSD>Pavel's DSC2.6>Bent Audio TAP>

Parasound JC1>"Naked" Quad ESL63/Tannoy PS350B subs<100Hz

Link to comment
Yes, but up/oversampling is different from converting to a different frequency altogether.

 

Forrest, how, in terms of the resulting sound?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

Jud: Homogenization (for better or worse) is the best descriptor I can think of at the moment, and that is why I concluded it would ultimately be a test of the Benchmark, not the files.

Forrest:

Win10 i9 9900KS/GTX1060 HQPlayer4>Win10 NAA

DSD>Pavel's DSC2.6>Bent Audio TAP>

Parasound JC1>"Naked" Quad ESL63/Tannoy PS350B subs<100Hz

Link to comment
Jud: Homogenization (for better or worse) is the best descriptor I can think of at the moment, and that is why I concluded it would ultimately be a test of the Benchmark, not the files.

 

Ah, got it. Hadn't thought of that; thanks for pointing out why a Benchmark DAC might be almost uniquely unsuited to comparing the sound quality of files with different sample rates. I feel like providing more detail (it will probably confuse more than it helps, but what the heck), so here goes:

 

In most DACs (like my Bifrost), the "8x oversampling" step prior to digital-to-analog conversion is actually done by the DAC chip as rounds of 2x multiplication. For 44.1 and 48kHz files, it will take 3 rounds to get to the 8x rates (352.8 and 384kHz, respectively). For files already at 2x rates (88.2 and 96kHz) it will take 2 rounds; for files at 4x rates (176.4 and 192), one round; and if the DAC is one of the few that accepts 8x or higher input rates, no oversampling step may be necessary.
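The round-counting above can be put into a tiny sketch. The 8x targets (352.8/384kHz) are taken from the post; the function name is just illustrative:

```python
def oversampling_rounds(input_rate_hz):
    """Number of 2x oversampling rounds a typical DAC chip needs to reach
    its 8x internal rate (352.8kHz for the 44.1k family, 384kHz for 48k)."""
    target = 352_800 if input_rate_hz % 44_100 == 0 else 384_000
    rounds = 0
    rate = input_rate_hz
    while rate < target:
        rate *= 2   # each round doubles the sample rate
        rounds += 1
    return rounds

for r in (44_100, 88_200, 176_400, 352_800, 48_000, 96_000, 192_000):
    print(r, oversampling_rounds(r))  # 44.1k -> 3, 88.2k -> 2, 176.4k -> 1, 352.8k -> 0
```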

 

Each 2x oversampling round in the chip necessitates application of a filter (any sample rate change requires this). So with RedBook or DVD soundtracks (44.1 or 48), by the time the digital stream gets to the digital-to-analog conversion step, it's gone through this in-the-DAC filtering process 3 times. No filter is perfect, because the mathematical process used (Fourier transform) bollixes things up in the frequency domain as you make them more perfect in the time domain, and vice versa. (The DAC manufacturer Resonessence Labs has a wealth of information about this on their web site, and I highly recommend browsing it.) Thus the inevitable imperfections in the filter output (e.g., pre- and/or post-ringing, phase shift/group delay) are repeated 3 times over for the lowest rates, but only once for material at 4x rates. This could very well affect how audible these filter imperfections are in the final analog (music) product.

 

Turning to the Benchmark, I'm unfamiliar with its architecture. I don't know whether there's an "8x oversampling" step before everything's converted to 110kHz. (While it wouldn't make a lot of abstract sense to do the 8x conversion prior to moving everything to 110kHz - which I assume is done as an asynchronous sample rate conversion step to minimize jitter - most DAC chips are set up by default to do the 8x oversampling step(s) anyway, and I don't know whether Benchmark has made sure it doesn't happen in the chip they're using.) If Benchmark converts to 110kHz before or instead of the typical 8x oversampling step, then all files regardless of original sample rate will go through the same number of filtering steps, eliminating any filtering-based difference in the resulting analog (music) product.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

Jud, thank you for taking the time to write that out. It was very easy for me to understand, and I appreciate the opportunity to learn a bit more.

Main / Office: Home built computer -> Roon Core (Tidal & FLAC) -> Wireless -> Matrix Audio Mini-i Pro 3 -> Dan Clark Audio AEON 2 Noire (On order)

Portable / Travel: iPhone 12 Pro Max -> ALAC or Tidal -> iFi Hip Dac -> Meze 99 Classics or Meze Rai Solo

Link to comment
Jud: Homogenization (for better or worse) is the best descriptor I can think of at the moment, and that is why I concluded it would ultimately be a test of the Benchmark, not the files.

 

I must have missed a thread, or it's gone. Where did the Benchmark come from? I thought Musicophile was using a BelCanto mlink. Anyway, carry on.

The Truth Is Out There

Link to comment
I must have missed a thread, or it's gone. Where did the Benchmark come from? I thought Musicophile was using a BelCanto mlink. Anyway, carry on.

 

 

I was a little puzzled by that myself when I looked at Musicophile's sig, but by that time I was halfway through my response, and it was kinda like that scene in Animal House:

 

Bluto: "Was it over when the Germans bombed Pearl Harbor?!"

 

Boone: "The Germans...?"

 

Otter: "Let 'im go, he's on a roll."

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

Oops, looks like I confused this with another thread where we actually were talking about a Benchmark DAC. Sorry people!

Forrest:

Win10 i9 9900KS/GTX1060 HQPlayer4>Win10 NAA

DSD>Pavel's DSC2.6>Bent Audio TAP>

Parasound JC1>"Naked" Quad ESL63/Tannoy PS350B subs<100Hz

Link to comment

As I get deeper into the computer audiophile thing, one thing is becoming very clear in my experience/testing: the best equipment in the world can (probably; I'm not there yet, but close) pull enough detail out of redbook that I can't tell the difference between my ripped CDs and my 7-year-old Denon SACD player. But my point really is: why struggle so hard? If the perfect source can make redbook sound as good as hi-res, why not give me the hi-res, so I don't spend so much time trying to put together the perfect source? It's much easier to get to really good with the extra bits than to endlessly try to perfect rate conversion on redbook. That's my experience so far, but I'm early at this game.

Link to comment

Jud, that post describing the process of the 8x oversampling at 2x steps is simply brilliant. Now I have to ask, why do they do it in the first place?

Roon Rock running on a Gen 7 i5, Akasa Plao X7 fanless case. Schiit Lyr 2, Schiit Bifrost upgraded with Uber Analog and USB Gen 2, Grado RS1s, ADAM A3x Nearfield Monitors.

Link to comment
Jud, that post describing the process of the 8x oversampling at 2x steps is simply brilliant. Now I have to ask, why do they do it in the first place?

 

Thanks, but I can't claim credit. I'm just repeating things that other people, like Miska and PeterSt (the in-DAC 8x oversampling and filtering process and its problems), or Mark Mallinson on Resonessence Labs' website (the operation of filters, and in particular the Fourier Transform reference) have said before.

 

As for why they do it that way: Kind of like why so many people have back pain, it's historical accidents that cause evolution (of the human body or the digital recording and reproduction process) to result in structures that might not be the optimum designs if one were doing it all at once on a "clean sheet of paper."

 

Just as one example re digital audio: for decades the CD was the dominant form of digital recording (and still is dominant for higher quality stuff, since all but a tiny fraction of music downloads are mp3 or an Apple lossy format). So DAC chips were designed to deal with 44.1/48kHz data rates. When people noticed sound quality problems with those rates very early in the CD era, the cost-effective solution was not to move to downloads (the Web and downloads, at relatively glacial speeds, weren't even widely available to the public at the time) and get rid of all the CD stamping plants. Much easier to redesign/reprogram chips to move the 44.1/48kHz input to a much higher rate before the final digital-to-analog conversion took place, which allows a much gentler analog reconstruction filter. Chips doing 8x rates quickly became the standard, therefore the only units available in bulk, therefore the only practical choice if you were a CD player/DAC manufacturer wanting to make a unit costing less than several thousand dollars. As higher sample rates at the recording stage and relatively fast downloads of high-res files have become available, they've had to fit into a world where the vast majority of DAC chips are still made with the idea that input will typically come in at 44.1/48kHz rates. That's still the best market assumption, even for audiophiles (how many of us have more high-res downloads than CD rips?), so I think it is unlikely to change soon.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
I don't intend to kick off yet another hi-res vs. redbook debate, we've had enough of those already, but I've always wondered where the bigger improvement comes from: the 16 vs. 24 bits, or the higher sample rates.

 

Wasn't this the real subject ?

 

Think about how each influences the other; they need each other.

 

Envision a graph with X/Y axes. Look at the sample rate as the horizontal resolution; the bit depth is the vertical resolution.

 

Increasing the bit depth from 16 bits to 24 bits implies a 256 times (2^8) finer vertical resolution (or granularity if you like).

Increasing the sampling rate from 44.1 to 176.4 implies a 4 times finer horizontal resolution / granularity.
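The arithmetic behind those two ratios, as a quick sketch:

```python
# Vertical resolution: each extra bit doubles the number of levels.
levels_16 = 2 ** 16   # 65,536 levels
levels_24 = 2 ** 24   # 16,777,216 levels
print(levels_24 // levels_16)   # -> 256: 256x finer amplitude granularity

# Horizontal resolution: samples per second scale linearly with rate.
print(176_400 / 44_100)         # -> 4.0: 4x finer time granularity
```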

 

It seems clear that the increased sample rate does nothing, relative to what the increased bit depth can do.

Bit depth is about the precision of the volume level of each sample.

The sample rate is about the precision of when that level occurs in time. This is *also* about the accuracy of frequencies.

 

Taken to the extremes:

When the bit depth is increased but the sampling rate is not, no better "volume level" accuracy can be reached overall. Well, for the registered samples the level will be more accurate, but the moments at which they are registered will be as "off" as they were.

When the sampling rate is increased but the bit depth is not, essentially nothing happens. For simplicity, think of 2x the base sampling rate, with one extra sample injected between each pair of original samples: *either* the 2nd sample (X axis) gets the same level value (Y axis) as the 1st, *or* the 2nd sample gets the same value as the 3rd.

 

So the question can be simplified to which of the two has a chance to change the sound. And the answer is clear.

Mind you, this holds when only one of the two is changed.

 

The answer becomes far more complicated when both are changed at the same time, and the question becomes: which of the two contributes most?

Thinking about mixes, like available choices between a 2x vs. 4x sampling rate and/or a bit depth increase to 20 instead of 24, well, *then* you have something to work out. And notice that these choices exist from a design point of view (they could be bandwidth related).

 

Here's another extreme:

Only when the sampling rate is 256 times that of the 16/44.1 situation can the 24 bits be fully exploited.

Though maybe theoretically true, we should notice the relation between the 44,100 samples per second (X axis) and the 65,536 possible levels (Y axis) of 16 bits. There are just not enough samples to exploit all the available levels *if* these latter changed from sample to sample by a value (level) difference of only 1. But they usually don't... This makes the 44,100 samples sufficient for the granularity of the levels to be captured. Now (as has been said already), when the number of levels is increased 256 times, the level of each sample will be more accurate, but the change from sample to sample is just as "off", because it needs more granularity in the time domain (actually, also 256 times more).

 

In the end it is not just this math, and far more a physics thing;

When we A/D sample at 192kHz and 16 bits, each sample will carry a *consistent* level. It's only 256 times less accurate compared to a 24/192 A/D. Is that bad? I'd say that when the D/A converter applied at playback is "slow" enough, it won't matter much, because it will be an analogue thing smearing the error; the levels are not represented very accurately anyway.

Now we take a 24/44.1 A/D and play that back. Suddenly the levels are 256 times more accurate than in the previous example, but the jumps in level from sample to sample are over 4 times (192/44.1) higher, and this creates distortion: harmonic distortion, because the too-high jumps come down to squares (which imply the false harmonics).
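That "over 4 times higher jumps" figure can be checked numerically. A minimal sketch (pure Python), assuming a hypothetical full-scale 1kHz sine:

```python
import math

def max_step(freq_hz, rate_hz):
    """Largest jump between consecutive samples of a full-scale sine."""
    n = rate_hz // 100  # 10 ms of samples: plenty to hit the steepest point
    s = [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]
    return max(abs(b - a) for a, b in zip(s, s[1:]))

ratio = max_step(1000, 44_100) / max_step(1000, 192_000)
print(round(ratio, 2))  # close to 192000 / 44100, i.e. about 4.35
```

The ratio lands near 192000/44100 because the steepest sample-to-sample step of a sine shrinks in proportion to the sample rate.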

 

The latter we already know about, because this is about 44.1 not being allowed to stay like that: it *needs* upsampling/filtering/interpolation. Remember, *needs*. When that has been applied, the result will be similar to the 16/192 A/D and playback, depending on the applied filtering (at playback, for the example here), with the difference that frequencies above 22.05kHz are not in there (I regard this as unimportant).

 

Conclusion, with the last paragraphs also taken into account? They can't really be compared. You could try, but a sample rate of 44.1 can't stay like that anyway, because huge distortion would be the result.

If someone could ever follow this, he or she should come to one conclusion only:

 

One part of the story tells us that increasing the sampling rate without increasing the bit depth is a moot thing;

The other part tells us that the sampling rate must be increased (with 44.1 as the base), because otherwise the result is poor to begin with.

From this it follows that both have to be applied together.

 

The effect on harmonic distortion is a product of both anyway. I could have said this as a one-liner and left it at that, but hopefully you can now see somewhat how it works, even if I was unclear all over. :-)

 

Peter

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
