
A couple of riffs on the notion of "bit perfect"...


Jud


What we think about when we discuss "bit perfect" music reproduction is usually the stuff between digital consumer source - disc or download, typically - and the DAC inputs. It's understandable, since that's the part we can see. But it's not nearly the whole of the digital chain.

 

So why should we care? Well, the whole notion behind "bit perfect," at least as I understand it, is to avoid conversions. Conversions tend to be tricky spots in audio, whether from mechanical to electrical or vice versa (e.g., phono cartridge and turntable, speakers), analog to digital or vice versa, or even from one sample rate to another (interpolation or decimation). (For purposes of my own post I'm going to leave out the dynamic range numbers, but I certainly don't mind, and in fact would appreciate, discussion of that side of things from folks who know.)

 

I want to leave CDs out of this for the moment, even though they're still a more popular format than lossless downloads, because the RedBook standard means they're locked into a sample rate (44.1kHz) that I would guess is seldom if ever used these days for ADC or DAC. So we're discussing downloads, and possibly DVDs, since we know the latter can handle at least "4x" rates (176.4 and 192kHz).
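A quick way to see why staying inside one sample rate family matters: rational resampling interpolates by a factor L and then decimates by M, and crossing between the 44.1k and 48k families forces ugly ratios. Here's a small Python sketch of my own (not anything from actual converter firmware) that just computes the L/M factors:

```python
from fractions import Fraction

def resample_ratio(fs_in, fs_out):
    """Factors for rational resampling: interpolate by L, then decimate by M."""
    r = Fraction(fs_out, fs_in)  # Fraction reduces to lowest terms automatically
    return r.numerator, r.denominator

for fs_in, fs_out in [(44100, 352800), (192000, 384000),
                      (44100, 48000), (44100, 192000)]:
    L, M = resample_ratio(fs_in, fs_out)
    print(f"{fs_in:6d} -> {fs_out:6d} Hz: interpolate x{L}, decimate x{M}")
```

Within a family the conversion is a clean integer upsample (44.1k to 352.8k is exactly 8x); across families you get things like 160/147 or 640/147, which is where the tricky filtering lives.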

 

As I understand it, most DACs do their interpolation (oversampling) filtering for the conversion to analog at "8x" rates, 352.8 or 384kHz. (Here, as anywhere else in this post, I welcome correction of any inaccuracies I may have unintentionally committed.) Why then has more attention not been devoted so far to maintaining this sample rate from the ADC on, thus avoiding the need for sample rate conversion?

 

- Most DACs these days have inputs restricted to 192kHz. Is this a limitation of USB2/3, optical, or coax? (At least for USB2 and thus 3, it doesn't appear to be. Otherwise PeterSt's DAC would have problems.) Is it that "everyone" just begins their DAC designs with the assumption, and thus the need for, a chip that does oversampling (8x for RedBook/DVD, 4x for 88.2/96kHz, 2x for 176.4/192)? Is it that any DAC chip available these days which is not New Old Stock and/or prohibitively expensive, has such oversampling built into the chip in a way difficult or impossible to bypass?

 

- What are the common ADC sampling rates available in equipment for recording studios? Assuming the "8x" sampling rates aren't common, is it a matter of not bothering because of the DAC input limitation referred to above, or are there technical limitations on the ADC equipment?
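On the oversampling assumption in the first question above, the factors involved are simple arithmetic. A sketch of my own in Python (the targets and the two-family logic are my illustration, not any particular chip's spec):

```python
# Oversampling factor needed to bring each common input rate up to the "8x"
# internal rates (352.8/384 kHz) discussed above. Illustrative assumption only.
TARGETS = {44100: 352800, 48000: 384000}

def oversampling_factor(fs_in):
    # 44.1/88.2/176.4 kHz belong to the 44.1k family; 48/96/192 kHz to the 48k family
    base = 44100 if fs_in % 44100 == 0 else 48000
    factor, remainder = divmod(TARGETS[base], fs_in)
    assert remainder == 0, "rate is not a multiple of either base rate"
    return factor

for fs in (44100, 48000, 88200, 96000, 176400, 192000):
    print(f"{fs:6d} Hz input -> {oversampling_factor(fs)}x oversampling")
```

This reproduces the factors mentioned above: 8x for RedBook/DVD rates, 4x for 88.2/96kHz, 2x for 176.4/192kHz.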

 

Do DSD recordings played back on DSD-capable DACs avoid the sample rate conversion problem, or are there sample rate (or other significant) conversions involved in this chain as well? More particularly, is the change of "1-bit" to, e.g., "6-bit" essentially a lossless change of digital "container" format, or are there potential audio problems that may apply to this stage?

 

Hope this is an interesting topic for people, and I very much look forward to your informative comments.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.


I just wrote a longer reply but then lost my mobile connection as I tried to post. For now, I'd just like to post an image that I've posted in other threads:

 

[Image: DSD vs. PCM.jpg]

 

As you said, 'bit perfect' refers only to the chain from the source to the DAC. After that there's usually a whole lot more going on.

 

Looking forward to hearing what the real experts here have to say.

 

Mani.

Main: SOtM sMS-200 -> Okto dac8PRO -> 6x Neurochrome 286 mono amps -> Tune Audio Anima horns + 2x Rotel RB-1590 amps -> 4 subs

Home Office: SOtM sMS-200 -> MOTU UltraLite-mk5 -> 6x Neurochrome 286 mono amps -> Impulse H2 speakers

Vinyl: Technics SP10 / London (Decca) Reference -> Trafomatic Luna -> RME ADI-2 Pro


The whole idea behind 24/192 is more headroom. Even if the type of aliasing filter in an ADC is "causal", i.e. minimum-phase (typically IIR) rather than linear-phase (FIR), so that the pre-ringing artifacts are absent (see the "apodizing" filter), one side effect will be the presence of group delay artifacts. Admittedly, it's possible to eliminate group delay using two filters instead of one: running the filtered signal through an identical filter in reverse, i.e. "symmetrically", will effectively cancel the group delay caused by the filter. However, after the signal has been digitally recorded there will typically be processing, and processing causes fewer artifacts when there's more headroom available to work with.
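The "run it again in reverse" trick can be demonstrated with a toy one-pole filter in pure Python (my own sketch, not production DSP): the forward pass shifts the energy centroid of an impulse to the right, which is group delay, while the forward-backward pass leaves it centered.

```python
# Toy demonstration of cancelling group delay by filtering forward and then
# backward. The one-pole smoother stands in for a real minimum-phase filter.

def onepole(x, a=0.5):
    """Causal one-pole IIR smoother: y[n] = a*x[n] + (1 - a)*y[n-1]."""
    y, prev = [], 0.0
    for s in x:
        prev = a * s + (1 - a) * prev
        y.append(prev)
    return y

def forward_backward(x, a=0.5):
    """Filter, reverse, filter again, reverse: net phase (and group delay) is zero."""
    return onepole(onepole(x, a)[::-1], a)[::-1]

def centroid(y):
    """Center of mass of the magnitude response, a simple proxy for delay."""
    total = sum(abs(v) for v in y)
    return sum(n * abs(v) for n, v in enumerate(y)) / total

impulse = [0.0] * 101
impulse[50] = 1.0

print("forward only:", round(centroid(onepole(impulse)), 2))              # drifts right of 50
print("forward-backward:", round(centroid(forward_backward(impulse)), 2))  # stays at 50
```

This is the same idea behind zero-phase filtering tools like scipy.signal.filtfilt; it only works offline, of course, since the backward pass needs the whole recording.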

So, why can 24/192 still be advantageous as a delivery format? Simple: it covers up bad engineering. Most of the digital music we listen to today has gone through digital equipment so primitive it doesn't even apply dither, let alone noise-shaped dither or fancy schmancy aliasing filters. People use cheap DACs to play the content. By the time the signal reaches the customer it has been through so many tortures already that it makes all the more sense to leave the 192 kHz sample rate of the original recording unchanged, even if in theory 24/96 should be sonically transparent, as Bob Stuart's paper Coding High Quality Digital Audio shows.
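Since dither came up: here's a little pure-Python demonstration of my own (deliberately exaggerated, with a signal at 0.4 LSB) of what it buys you. Without dither, a signal below one quantization step truncates to silence; with TPDF dither, it survives in the average:

```python
import math
import random

random.seed(1)  # deterministic for the demo

def tpdf():
    """Triangular-PDF dither: sum of two uniform random values of +/-0.5 LSB."""
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

# A sine of 0.4 LSB amplitude: deliberately below one quantization step.
signal = [0.4 * math.sin(2 * math.pi * 5 * n / 64) for n in range(64)]

# Without dither, rounding truncates the whole signal to silence.
undithered = [round(s) for s in signal]
print("undithered levels:", set(undithered))

# With dither, the signal survives: averaging many dithered quantizations
# recovers it, because TPDF dither makes the quantizer's mean output linear.
trials = 2000
dithered_avg = [sum(round(s + tpdf()) for _ in range(trials)) / trials
                for s in signal]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

print("correlation of dithered average with input:", round(corr(signal, dithered_avg), 3))
```

The undithered output is all zeros (the signal is simply gone), while the dithered average correlates almost perfectly with the input sine.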

Just because a cheap 24/192-capable DAC isn't transparent doesn't necessarily mean it can't produce better sound by operating at a 192 kHz sample rate. This IMO can be explained by the fact that most modern DACs rely on one type of processing or another (delta-sigma combined with internal upsampling and noise shaping, for example), and possibly by the fact that other distortions in the signal path might, one way or another, still audibly interact with the artifacts caused by operating at only 96 kHz versus 192 (or 88.2 versus 176.4).

The problem (IMO) with DSD is that it causes non-linear distortions that cannot be fully dithered out, and it has to be converted to PCM if you want to apply processing, i.e. a lossy conversion to PCM. Furthermore, the ultrasonic noise of DSD still has to be rolled off in the playback chain so as not to overload the tweeter's driver, and lossless data compression of DSD-encoded data is next to impossible.

As for the NOS in NOS DACs, AFAIK it stands for Non OverSampling, not New Old Stock (as in NOS vacuum tubes). :)

If you had the memory of a goldfish, maybe it would work.

I have read that three times, and I am still not certain of what you said. :)

 

I do think that higher sample rates mean better reproduction is possible, and the reasons why have been argued all over creation, so no need to go into them again here.

 

I do suggest that it is much more difficult for a DAC to take in a 24/192 or 24/176.4 signal and do a good job with it, than it is for a DAC to take in and competently process a 24/96 or 24/88.2 signal, which in turn is more difficult than processing a 16/44.1 signal.

 

But I think the rewards are also, more or less, commensurate.

 

The key to all that, is in my opinion, delivering a data perfect and perfectly timed bitstream to the DAC. Asynch USB seems to do that better than any other technology today.

 

And that about exhausts my opinion on the subject. I will note that, most unfortunately, my taste in DACs is growing more and more expensive. They seem to be a critical component in my enjoyment of the music.

 

The little Peachtree DAC*IT sounds wonderful, in part because it upsamples all the input. But the Wavelength Proton, when presented with well engineered material, seems to always sound better over the long run. Just an observation, and may not hold generally true. The NAD 390DD and M51 DACs sound glorious with high res material, but I am not sure they do as good a job on redbook material as the Proton.

 

There are too many factors to really state anything with utter certainty, but I am certain that the sound of either DAC is easily destroyed if you send non-bitperfect output to them. At least, the sound is easily degraded that way.

 

 

-Paul

 


Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


As for the NOS in NOS DACs, AFAIK it stands for Non OverSampling, not New Old Stock (as in NOS vacuum tubes). :)

 

I know. But for his non-oversampling DAC, if I am not mistaken PeterSt required New Old Stock chips, because the latest ones could not do what he needed. So I was discussing possible design constraints imposed by (un)available chips.

 

Thanks for the discussion of constraints on processing of DSD material.

I read this yesterday. It may be helpful.

 

Q&A with Charles Hansen of Ayre Acoustics | AudioStream

 

Tremendously informative (at least to me, with my rudimentary level of knowledge). Thanks!


 

The NOS1 does not use new old stock chips. The PCM1704 is still produced in limited quantities, but it is very pricey, and good!

Forrest:

Win10 i9 9900KS/GTX1060 HQPlayer4>Win10 NAA

DSD>Pavel's DSC2.6>Bent Audio TAP>

Parasound JC1>"Naked" Quad ESL63/Tannoy PS350B subs<100Hz

The NOS1 does not use new old stock chips. The PCM1704 is still produced in limited quantities, but it is very pricey, and good!

 

Ah, so I *was* mistaken. Thanks to you, Forrest, for kindly setting me straight, and apologies to spdif-usb for my confusion.


Paul,

 

You say you think that higher sample rates mean better reproduction is possible, whereas I say this is not necessarily always the case, because it still depends on a lot of things. Some of these things might have been argued all over creation, yet I am confident that there are still a few that have not been discussed here on the CA forum, or barely have.

 

I don't think it's that difficult for a DAC to take in a 24/192 signal, seeing as all of the el cheapo PC motherboards nowadays have onboard sound that can do it. The question that really matters is obviously not whether, but how well, they can do it. As PeterSt explained in another thread about half a year ago or so, async USB causes the data to be transferred in short, periodic bursts, and this causes problems of its own: each time a burst occurs there is a large peak in the amount of power drawn from the power supply, and the side effects (ground loop noise) can be audible, especially because their periodicity can correlate with the music.

 

Moreover, these side effects can do more harm to the sound than the jitter caused by, for example, S/PDIF or adaptive USB. In other words, very low jitter is always good, but not if the sacrifices made to achieve it are ignored in the design. This is why I think the best results can be obtained only if the async USB implementation is done right, or by using, for example, S/PDIF combined with a very pure external clocking signal via optical ST, as Miska has explained in another thread some months ago.

 

Whether the DAC can or cannot competently process the signal is a whole other story, though. It depends not only on the quality of the DAC, but also on the quality of the entire reproduction chain (i.e. not just the playback chain, but everything, from the studio microphone's environment all the way down to the listening room's characteristics). I guess we all know that errors (distortions) in a signal can stack, and that this is why having a certain amount of headroom might be helpful. However, estimating how much headroom will be best in terms of human audibility is, I believe, a much more complicated task than many assume (especially when one of the limiting factors involved is eventually always how much money it'll cost).

 

I agree with you that choosing a good DAC is fundamental. Some people say that if a DAC puts out too much detail it will sound cold and clinical, musically uninvolving. In my own experience, however, one can never have too much detail. This is the reason I decided to go for a SABRE-based DAC: it can internally upsample 192 kHz material to 1536 kHz, using a 32-bit internal data path to do so. My Eastern Electric MiniMax DAC Plus is just over a year old now, and in its price range I still have been unable to find a DAC that outperforms it. Removing the vacuum tube from it resulted in an improvement to the sound for me. This improvement was not subtle, and IMO it has catapulted the performance of the DAC into a whole new league.

 

As for bit perfect playback, yes, of course it's a necessity, because the OS (I use Windows 7) otherwise messes with the data in such a way that it ruins the sound. However, the volume control slider of foobar2000 also messes with the data. The only difference is that the dither it applies is inaudible on a good DAC with 24-bit input (even if the volume slider is set to a low background listening level, say -48 dB), whereas the thermal noise caused by analog volume control in an expensive preamp usually is audible. This is still something that's very often misunderstood about bit perfectness, IMO.
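The arithmetic behind that claim is easy to check (my own sketch; the 6.02 dB-per-bit figure is just 20*log10(2)): a -48 dB digital volume setting costs about 8 bits of resolution, which would be crippling in a 16-bit path but still leaves roughly 16 bits in a 24-bit one.

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit

def bits_lost(atten_db):
    """Resolution (in bits) consumed by a digital attenuation of atten_db dB."""
    return atten_db / DB_PER_BIT

lost = bits_lost(48.0)
print(f"-48 dB costs ~{lost:.1f} bits")
print(f"24-bit path keeps ~{24 - lost:.1f} bits")
print(f"16-bit path keeps ~{16 - lost:.1f} bits")
```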

 



spdif-usb, thanks again for your latest post, particularly the stuff from PeterSt and Miska.

 

I do want to try to keep this, to the extent possible, not so much on the subject of whether higher sample rates are better, which has been done to death, but on reasons why a 352.8/384 sample rate straight through from ADC to DAC isn't being used or, as far as I know, even discussed. This could, it seems to me, avoid the need for necessarily imperfect interpolation in most DACs, which for a couple of decades now have been set up to use those "8x" rates internally.

The NOS1 does not use new old stock chips. The PCM1704 is still produced in limited quantities, but it is very pricey, and good!

 

Looked it up on the TI site: the 1704 is available at $54 in large quantities (thousand units), the replacement chip at $2.90!

I do want to try to keep this, to the extent possible, not so much on the subject of whether higher sample rates are better, which has been done to death, but on reasons why a 352.8/384 sample rate straight through from ADC to DAC isn't being used or, as far as I know, even discussed.

I think I already answered that question TBH. It's simply because it doesn't sound any better.

 

I know that it's been done to death, yet people still don't seem to understand it. Even if the interpolation is imperfect, a 192 kHz sample rate already provides more than enough headroom to cover up even the worst type of imperfections, because even a 96 kHz sample rate ought to be more than enough for that too (although it isn't necessarily always, but you get the idea).

I think I already answered that question TBH. It's simply because it doesn't sound any better.

 

Even if the interpolation is imperfect, a 192 kHz sample rate already provides more than enough headroom to cover up even the worst type of imperfections, because even a 96 kHz sample rate ought to be more than enough for that too (although it isn't necessarily always, but you get the idea).

 

 

 

 

(1) And you could repaint a wall that you first smeared with dirty handprints, thus "cover[ing] up even the worst type of imperfections," but why would you put the handprints there in the first place? In other words, even agreeing arguendo that any audible effects of interpolation at a particular sample rate can be effectively eliminated, why go to the trouble?

 

(2) If 192kHz is sufficient, why not simply keep that sample rate in the DAC? Why 352.8/384 in the DAC then?

 

In other words: No, this really is not about how high a sample rate is sufficient. It is about why we see a system set up to make interpolation necessary. The final sample rate before conversion in the vast majority of DACs has been pretty well standard for 20 years, so why not standardize on the same sample rate for recording and at the DAC inputs? Or, if there is no need for that final sample rate to be as high as it is, why has no one bothered to lower it to the rate produced by recording equipment and fed to the DAC at its inputs?


Yes, I agree it doesn't make much sense to cover up sloppy engineering, because it doesn't fix the root of the problem, but then it's not my fault that the engineers are being sloppy. It's also a matter of finances, as I already tried to point out. A lot of the digital music available to the public is like that: it has the dirty handprints all over it because it went through cheap old equipment and software, and in fact it went through mighty fast at that...

 

Like I said, the internal upsampling in a delta-sigma DAC is there because it's a cheaper, more practical way of getting very similar, if not better, results. It's the record companies that decide for themselves what sample rate should be sufficient, so I'm glad we have at least some that didn't deem it necessary to screw up their entire catalog; that's also part of the reason why I like vinyl so much. You say nobody has bothered to lower that final sample rate, but AFAIK Dan Lavry has, and so has Bob Stuart. http://www.meridian.co.uk/ara/coding2.pdf

 



 

IMO, that's a heavy-handed generalization you're making in regards to mastering. If the artist is happy with the final product, and the majority is satisfied with the end result, why would or should anything change?

 

If we consider some of the things cutting into profit margins on music media, such as illegal file sharing, the recording industry is unfortunately forced to reconsider costs to remain viable.


 

I don't have the answers for you, Jud, as I'm unfamiliar with the high sampling rates of your inquiry, but I could speculate that there may well be a whole set of additional 'problems' inserted when sample rates get as high as suggested. Maybe clock rates become unstable, or power consumption, heat, or premature failure affect the overall long-term reliability of the chip, which might not make their use viable... just thinking out loud.

If you read that marketing white paper, you'll notice the primary references offered to support his assertions are mainly papers written by the author himself almost two decades ago.

 

Without going into the minutiae, the fact remains that if the industry accepted his view that redbook is as flawed as he contends, there would be serious pressure to replace it with a higher-resolution standard, and a significant percentage of releases would be available in higher resolution, be it download, SACD or DVD-Audio. The fact that this hasn't happened is pretty clear evidence that the market doesn't believe the proposed higher sample rate and dynamic range yield any audible benefit.

 

Is it possible that the "market" is indiscriminate to quality in the sense that while an actual improvement can be had, the hoi polloi can't understand/discern it? Since it is they who buy most of what is produced, why change?

 

It becomes a matter of taste, and someone from the trailer park may actually prefer ketchup on his steak. What happens when he is presented with one carefully prepared by one of the best chefs in the world? Maybe he still reaches for the ketchup. Nothing inherently wrong in that, right? It's personal preference. Somehow the thought saddens me, though.

Rob C


Folks, let me try pulling this back on track again. A 352.8/384 sample rate has been used internally in the vast majority of high end DACs almost since such a category came into being 20 years ago. That's the sample rate almost certainly used internally in the DAC you have right now, unless you're lucky enough to own one of PeterSt's Phasures. On the other hand, the maximum input sample rate accepted by the very same DAC is very, very likely 192kHz or below. And at the ADC stage, I really don't know what the commonly used rate(s) are, but even for material intended for high-res downloads I'm guessing 352.8/384 aren't among them. So why is that?

 

mayhem13, you bring up an interesting point regarding the capabilities of DAC inputs, but as I mentioned previously, why does the Phasure DAC (and the M2Tech Young as well) apparently have no trouble?

 

And Diogenes, and once again spdif-usb: Bob Stuart and Dan Lavry are discussing the same old, same old issue of whether 96kHz is an adequate sampling rate, which is not what this thread is about. It is about why one of the following two things has not occurred:

 

(1) The internal sample rate in most DAC chips prior to conversion being adjusted down to the common 176.4/192kHz DAC input rates (rates I'd guess are more common at the ADC end than 352.8/384); or

 

(2) The typical ADC and DAC input rates being adjusted up to 352.8/384 so as to be identical to that used internally by most DAC chips;

 

in order to avoid sample rate conversions in the digital signal chain.

 

Everyone on board now?

I really don't know what the commonly used rate(s) are, but even for material intended for high-res downloads I'm guessing 352.8/384 aren't among them. So why is that?

The SABRE ADC chips support 32-bit 384 kHz output just fine. http://www.esstech.com/PDF/Sabre32%20ADC%20Series%20PF%20111222.pdf

why does the Phasure DAC (and the M2Tech Young as well) apparently have no trouble?

They are just based on a completely different design approach IMO. What makes you think internal upsampling cannot yield better results audibly than 384 kHz input? http://www.esstech.com/PDF/sabrewp.pdf

And Diogenes, and once again spdif-usb: Bob Stuart and Dan Lavry are discussing the same old same old same old issue of whether 96kHz is an adequate sampling rate, which is not what this thread is about. It is about why one of the following two things has not occurred:

 

(1) The internal sample rate in most DAC chips prior to conversion being adjusted down to the common 176.4/192kHz DAC input rates (rates I'd guess are more common at the ADC end than 352.8/384); or

 

(2) The typical ADC and DAC input rates being adjusted up to 352.8/384 so as to be identical to that used internally by most DAC chips;

 

in order to avoid sample rate conversions in the digital signal chain.

(1) Because precision rises dramatically if upsampling is used.

(2) Because it simply isn't necessary if upsampling is used, i.e. the technology that uses internal upsampling basically just works.
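For readers wondering what that internal upsampling actually does, here is a toy 2x interpolator - zero-stuffing followed by a short low-pass kernel. Real DAC chips use long multi-stage filters; this only sketches the principle:

```python
# Toy sketch of 2x interpolation as done inside oversampling DACs:
# insert zeros between samples, then low-pass filter to fill them in.
# The kernel here is a linear-interpolation triangle; real chips use
# long multi-stage FIR filters.
def upsample_2x(samples):
    # Insert a zero between every pair of input samples.
    stuffed = []
    for s in samples:
        stuffed.extend([s, 0.0])
    # Convolve with the centered kernel [0.5, 1.0, 0.5].
    kernel = [0.5, 1.0, 0.5]
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, c in enumerate(kernel):
            i = n - (k - 1)  # center the kernel on sample n
            if 0 <= i < len(stuffed):
                acc += c * stuffed[i]
        out.append(acc)
    return out

print(upsample_2x([1.0, 2.0, 3.0]))  # midpoints appear between samples
```

The interpolated stream carries no new information - it is the quality of this filtering step that the debate is really about.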

Everyone on board now?

You asked me a direct question: "Or, if there is no need for that final sample rate to be as high as it is, why has no one bothered to lower it to the rate produced by recording equipment and fed to the DAC at its inputs?"

My reply was that someone has very much bothered.

If you had the memory of a goldfish, maybe it would work.
The SABRE ADC chips support 32-bit 384 kHz output just fine. http://www.esstech.com/PDF/Sabre32%20ADC%20Series%20PF%20111222.pdf

 

Right, but I asked about commonly used ADC sample rates. If the ADC hardware supports 352.8/384, but those rates are not commonly used, then apparently we have to look elsewhere than at hardware (or at least chip) capabilities to find the reason they aren't.

 

What makes you think internal upsampling cannot yield better results audibly than 384 kHz input? http://www.esstech.com/PDF/sabrewp.pdf

 

The link you provide supports the opposite proposition of the one you cited it for. On page 2, the lower half of the left column through to the start of the right column describes the Sabre DAC's ingenious solutions for problems caused by the sample rate conversion process. My thought is, why create the problem in the first place?

 

I then went on to pose the question of the thread again, i.e., why are sample rates not matched through the chain to eliminate rate conversion? You responded in two ways:

 

(1) Because precision rises dramatically if upsampling is used.

 

This leaves me with two questions:

 

- What do you mean by "precision"?

 

- What about the result is more "precise" than if one had simply begun at the higher sampling rate rather than having to do interpolation to arrive at the identical rate?

 

(2) Because it simply isn't necessary if upsampling is used, i.e. the technology that uses internal upsampling basically just works.

 

There's something to this, but I have a somewhat different take on it. As the ESS paper you linked to shows, interpolation creates some knotty problems that are resolved to a lesser or greater extent, though certainly not perfectly. If by "just works" you mean "...to a degree currently acceptable to consumers and those in the industry," I agree. But digital music reproduction is improving all the time, so what is acceptable now may not be in a few years.

 

You asked me a direct question: "Or, if there is no need for that final sample rate to be as high as it is, why has no one bothered to lower it to the rate produced by recording equipment and fed to the DAC at its inputs?"

My reply was that someone has very much bothered.

 

I apparently missed the information in the Bob Stuart paper or in the Lavry literature and specs that showed the final internal sample rate actually used in their DACs was the same as what was fed to the DAC inputs, i.e., that they did no sample rate conversion within their DACs. Can you tell me where it is?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.


I think I can answer part of that question - 20 years ago we had nothing like the sheer raw computing power we take for granted today. Especially for gear with embedded processors, which most pro-audio gear is. Not to mention peripheral gear like memory - in 2002, it was still a bit uncommon to have a gigabyte of memory in even a fast PC, and certainly not in an affordable hunk of pro gear.

 

Take that back into the early 1990s, and we still commonly had machines with 512K/640K of memory or so.

 

I submit that pure economics has played a part in this - meaning that the existing gear was set up and filtered for a max input of 192K, and going beyond that was prohibitively expensive. Until recently, that is.

 

-Paul

 

 


Anyone who considers protocol unimportant has never dealt with a cat.

Robert A. Heinlein

I apparently missed the information in the Bob Stuart paper or in the Lavry literature and specs that showed the final internal sample rate actually used in their DACs was the same as what was fed to the DAC inputs, i.e., that they did no sample rate conversion within their DACs. Can you tell me where it is?

You are correct, Jud - Meridian equipment uses pretty standard DAC chips (they don't tend to state which device), so, as you say, they upsample / oversample internally.

 

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

If the ADC hardware supports 352.8/384, but those rates are not commonly used, then apparently we have to look elsewhere than at hardware (or at least chip) capabilities to find the reason they aren't.

The ADC hardware must be designed to deliver a fair bit of "overkill" because the mastering engineers need lots of headroom to work with in the DAW software, etcetera. Inaudible errors can become audible in the finished product due to the impact the processing has on them, as well as due to the impact the errors of the playback chain have on them.
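As a rough illustration of where that headroom comes from, each bit of an ideal quantizer is worth about 6.02 dB of dynamic range (the function name here is made up for illustration, and real converters fall short of this ideal figure):

```python
import math

# Rule of thumb: an ideal n-bit quantizer spans 20*log10(2**n) dB,
# i.e. about 6.02 dB per bit. Real ADCs have noise floors well above
# this ideal, which is one reason extra bits of "overkill" help.
def ideal_dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(ideal_dynamic_range_db(16), 1))  # 16-bit: ~96.3 dB
print(round(ideal_dynamic_range_db(24), 1))  # 24-bit: ~144.5 dB
```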

The link you provide supports the opposite proposition of the one you cited it for. On page 2, the lower half of the left column through to the start of the right column describes the Sabre DAC's ingenious solutions for problems caused by the sample rate conversion process. My thought is, why create the problem in the first place?

Sigma-delta DAC chips are the most popular because they are more practical and more predictable from an engineering standpoint. These chips have their own set of known (and much less known) problems, and of course the chipmakers don't often talk about their latest innovations.

RMAF 11: Noise Shaping Sigma Delta Based Dacs, Martin Mallison, CTO, ESS Technology - YouTube
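For anyone who hasn't watched the talk, a toy first-order sigma-delta modulator shows the basic idea - the 1-bit output toggles so that its running average tracks the input, pushing quantization error up in frequency. This is illustrative only; real chips use higher-order, multi-bit modulators:

```python
# Toy first-order sigma-delta modulator. The integrator accumulates the
# error between input and 1-bit feedback; the comparator output toggles
# so its average tracks the input (noise shaping in its simplest form).
def sigma_delta_1st_order(samples):
    integrator = 0.0
    feedback = 0.0
    out = []
    for x in samples:  # inputs assumed in [-1.0, 1.0]
        integrator += x - feedback
        bit = 1.0 if integrator >= 0.0 else -1.0
        out.append(bit)
        feedback = bit
    return out

stream = sigma_delta_1st_order([0.5] * 8)
print(stream, "mean:", sum(stream) / len(stream))
```

The mean of the 1-bit stream equals the 0.5 input; the fast toggling that encodes it is exactly the high-frequency quantization noise the chip's output filter must remove.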

- What do you mean by "precision"?

Good question. Precision means different things to different people, as the video I linked above suggests.

- What about the result is more "precise" than if one had simply begun at the higher sampling rate rather than having to do interpolation to arrive at the identical rate?

It's impractical. Feeding more data faster into the input pins of the chip raises its own problems, which are jitter and noise related.

There's something to this, but I have a somewhat different take on it. As the ESS paper you linked to shows, interpolation creates some knotty problems that are resolved to a lesser or greater extent, though certainly not perfectly. If by "just works" you mean "...to a degree currently acceptable to consumers and those in the industry," I agree. But digital music reproduction is improving all the time, so what is acceptable now may not be in a few years.

Moore's law. :)

I apparently missed the information in the Bob Stuart paper or in the Lavry literature and specs that showed the final internal sample rate actually used in their DACs was the same as what was fed to the DAC inputs, i.e., that they did no sample rate conversion within their DACs. Can you tell me where it is?

Sorry, I must have misunderstood there. Anyway, they most likely have tried IMO.

If you had the memory of a goldfish, maybe it would work.
The ADC hardware must be designed to deliver a fair bit of "overkill" because the mastering engineers need lots of headroom to work with in the DAW software, etcetera. Inaudible errors can become audible in the finished product due to the impact the processing has on them, as well as due to the impact the errors of the playback chain have on them.

 

Good point. Hopefully Moore's Law, as you mentioned, will help. Wonder how the 2L people do it. (Don't they have some recordings at 8x sample rates?)

 

Sigma-delta DAC chips are the most popular because they are more practical and more predictable from an engineering standpoint. These chips have their own set of known (and much less known) problems, and of course the chipmakers don't often talk about their latest innovations.

RMAF 11: Noise Shaping Sigma Delta Based Dacs, Martin Mallison, CTO, ESS Technology - YouTube

 

Have just begun watching - it looks fascinating! Thank you.

 

It's [feeding 8x rates to the inputs] impractical. Feeding more data faster into the input pins of the chip raises its own problems, which are jitter and noise related.

 

Interesting. Would like to get Miska and PeterSt's thoughts on this.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

