
Lavry Engineering Paper on Hi-Res



One of [Lavry's] basic points, near the beginning, is that you don't get anywhere near a 24-bit word length due to inherent inaccuracies until you have a sample rate as low as 50-60 Hz.

 

But several people here are totally ignoring this and talking about 24/192.

 

So do you think he is just plain wrong on this?

 

Let's do the furthest thing from ignoring, and see where paying close attention gets us.

 

Here's what the paper says about sampling rates and word length:

 

On page 1 -

 

There is also a tradeoff between speed and accuracy. Conversion at 100MHz yields around 8 bits, conversion at 1MHz may yield near 16 bits and as we approach 50-60Hz we get near 24 bits.

 

And on page 27 -

 

AD converter designers can not generate 20 bits at MHz speeds, yet they often utilize a circuit yielding a few bits at MHz speeds as a step towards making many bits at lower speeds.

 

Did you see anything there comparing bit depth at 88.2 or 96kHz to bit depth at 176.4 or 192kHz, or referring to 192kHz at all? 192kHz is the sampling rate Lavry announces on page 1 he will prove wanting ("...the author's motivation is to help dispel the wide spread misconceptions regarding sampling of audio at a rate of 192KHz"). He's got data points at 100MHz, 1MHz, and 50-60Hz. Sorry, but I find three data points spread over a 100MHz range (his "sampling rate," if you will) lacking when it comes to telling us what the max bit depth is at 192kHz, or comparing it to 88.2 or 96kHz.
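The complaint about extrapolating from three data points can be made concrete. The sketch below is purely illustrative and not anything from the paper: the log-linear trends and the 55Hz stand-in for "50-60Hz" are my assumptions. It fits a bits-per-decade slope to each adjacent pair of Lavry's published points and extrapolates both trends to 192kHz; the two slopes disagree, which is exactly the ambiguity at issue.

```python
import math

# Lavry's three published data points: (conversion rate in Hz, attainable bits).
# The "50-60Hz" point is taken as 55Hz purely for illustration.
points = [(100e6, 8), (1e6, 16), (55, 24)]

def bits_per_decade(p1, p2):
    """Bits gained per decade of rate reduction between two data points."""
    (f1, b1), (f2, b2) = p1, p2
    return (b2 - b1) / (math.log10(f1) - math.log10(f2))

slope_fast = bits_per_decade(points[0], points[1])  # 100MHz -> 1MHz segment
slope_slow = bits_per_decade(points[1], points[2])  # 1MHz -> ~55Hz segment

def extrapolate(f_target, anchor, slope):
    """Extend a log-linear trend from an anchor point down to f_target."""
    f0, b0 = anchor
    return b0 + slope * (math.log10(f0) - math.log10(f_target))

# Two different answers for 192kHz, depending on which trend you trust:
est_fast = extrapolate(192e3, points[1], slope_fast)  # ~18.9 bits
est_slow = extrapolate(192e3, points[1], slope_slow)  # ~17.3 bits
print(f"{slope_fast:.1f} vs {slope_slow:.1f} bits/decade -> "
      f"{est_fast:.1f} or {est_slow:.1f} bits at 192kHz")
```

The two segments imply 4.0 and roughly 1.9 bits per decade respectively, so the paper's own numbers don't pin down a unique figure for 192kHz.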

 

There are two other points I'd like to make here:

 

1 - Does Lavry's "permanent" bit depth limit actually exist?

 

Lavry says "The compromise between speed and accuracy is a permanent engineering and scientific reality." He has quantified this "compromise," the bit depth limits, twice in his paper. So presumably we are to take as permanent scientific reality the reference to not being able to get to 24 bits until we "approach" 50-60Hz (though how close that approach must be is a mystery, since Lavry's next data point is at 1MHz).

 

But Barry Diament talks about using bit depths of 64 and 80 while working on his 192kHz sample rate recordings. And you or I can buy inexpensive consumer sound cards that will encode audio at 24/192. (I did, purchasing a very well thought of card for under $200 - http://www.esi-audio.com/products/julia/ . You've previously referred to UK laws about false advertising, so I note in that connection the card is for sale in the UK.) So perhaps Lavry's specific bit depth limitation is not such a permanent feature of the scientific and engineering landscape after all.

 

2 - Does Lavry fairly characterize a limitation in bit depth as a limitation on the "accuracy" of the audible signal?

 

Eight times in the 27-page paper, Lavry characterizes bit depth limitation as reducing the "accuracy" of the musical signal that can be obtained from a recording with a 192kHz sampling rate, though he never says what the bit depth is at 192kHz, nor does he compare it to 88.2 or 96kHz. Lavry laments the inaccuracy resulting from only 16 bits at 1MHz.

 

16 bits at 1MHz is a bit rate almost three to six times higher than SACD/DSD. 8 bits at 100MHz is almost 150 to 300 times higher. Lavry's bit depth limitation, if it existed and if it in fact reduced the accuracy of the musical signal, would quite simply render reasonable-sounding SACD/DSD recordings impossible. Since SACDs and DSD files seem to exist; many people who have an interest in good audio seem to like them; and knowledgeable programmers/designers don't seem to have any conceptual problem with how SACD/DSD recordings work; then my conclusion is Lavry's eight references to problems with "accuracy" are hogwash.
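The bit-rate comparison above is easy to check. A quick sketch, assuming DSD64's standard 2.8224MHz one-bit stream, counted per channel and per stereo pair:

```python
# Raw per-channel bit rates implied by Lavry's data points, compared with
# DSD64's 1-bit stream at 2.8224MHz (5.6448 Mbit/s for a stereo pair).
dsd_mono = 1 * 2.8224e6        # bit/s, one DSD64 channel
dsd_stereo = 2 * dsd_mono      # bit/s, stereo pair

rate_1mhz = 16 * 1e6           # 16 bits at 1MHz  = 16 Mbit/s
rate_100mhz = 8 * 100e6        # 8 bits at 100MHz = 800 Mbit/s

# "almost three to six times higher" and "almost 150 to 300 times higher":
print(rate_1mhz / dsd_stereo, rate_1mhz / dsd_mono)
print(rate_100mhz / dsd_stereo, rate_100mhz / dsd_mono)
```

The ratios come out near 2.8x and 5.7x for the 1MHz case, and near 142x and 283x for the 100MHz case, matching the "three to six" and "150 to 300" figures.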

 

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> wi-fi to router -> EtherREGEN -> microRendu -> USPCB -> ISO Regen (powered by LPS-1) -> USPCB -> Pro-Ject Pre Box S2 DAC -> Spectral DMC-12 & DMA-150 -> Vandersteen 3A Signature.


In response to some of the statements in Jud’s most recent post, Dan Lavry has asked me to publish his response:

 

Dan Lavry’s Response-

The Sampling Theory paper does NOT suggest that there is any "permanent" bit depth limitation or any sample rate limitations. When I wrote the paper, I used present-day technology (8 bits at 100MHz or 16 bits at 1MHz) as a tool to point out that as the speed increases, the accuracy (thus the bit depth) decreases. Years ago, getting 8 bits at 1MHz was beyond the state of the art. It would be ridiculous for me to assume that we will (or will not) have 8-bit technology at, say, 1GHz or 10GHz at some future time… However, at any given time, when one looks at conversion speed and accuracy, one finds that the slower the conversion, the more accurate it is. That was true 40 years ago when I was a young design engineer, and it will be true for as long as the basic principles that govern analog design hold true.

 

The point is that to do an optimal job, one cannot sample too slowly (you need to cover the audio bandwidth in the case of sound), and you cannot sample too fast (you lose accuracy). No one suggests sampling audio at 1Hz. No one suggests sampling audio at 1GHz. So there is an optimal rate! But where is it? First we need to accept the fact that there is some optimal rate. Those who advocate that faster is automatically better are not even accepting that fact! The optimal rate depends on the application. Video calls for more bandwidth, so we must sample faster. But video conversion is less accurate. Audio needs to accommodate the ear, which does not need video speeds yet is more sensitive in terms of accuracy. The ear does not hear 80kHz; thus sampling too fast reduces accuracy while gaining nothing for it. Think of a camera that can capture invisible light at the cost of degradation to the visible spectrum.

 

If one desires to confirm this relationship, feel free to go to the website of any manufacturer that makes a wide array of conversion products (such as Analog Devices, TI and more). Check the selection guides of today; check the data from 10, 20, or 30 years ago… You will see that speed always costs accuracy, and accurate conversion demands slower speeds.

 

Again, my examples of 8 bits at 100MHz and 16 bits at 1MHz were there to show the RELATIVE accuracy as it relates to speed. I never stated that a permanent bit depth limit exists; I used (then) contemporary data to demonstrate a point: that speed compromises accuracy and increased accuracy demands lower speed. Technology improves over time, and still, faster will remain a tradeoff against accuracy, as it always has been.

 

Analog designers will understand that statement very well. Say one wishes to "take a sample"; to do so, you need to charge a capacitor. The charging curve is an exponential one: the longer you wait, the closer you get to the actual value of the sampled input. If one reduces the capacitance to speed things up, you pay a price!

1. A smaller capacitor does not hold the charge as well. It will partially discharge before the AD conversion can complete, resulting in a lower sample value.

2. A larger capacitor reduces switching transients. Switching transients introduce other inaccuracies.

3. Relatively small capacitors (such as those found in sigma-delta switched-capacitor networks) generate more noise, which is the major limitation of that technology today.
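The capacitor-charging point can be quantified. Since the sampling error decays as exp(-t/RC), settling to within half an LSB at N bits requires (N+1)·ln 2 time constants. This is a rough back-of-the-envelope sketch of the tradeoff Lavry describes, not a figure from the paper:

```python
import math

def time_constants_for_bits(n_bits):
    """Time constants (t/RC) needed for the RC charging error to fall below
    half an LSB at n_bits resolution:
        exp(-t/RC) < 2**-(n_bits + 1)  =>  t/RC > (n_bits + 1) * ln(2)
    More bits of accuracy demand a longer (slower) acquisition."""
    return (n_bits + 1) * math.log(2)

for bits in (8, 16, 24):
    print(bits, "bits ->", round(time_constants_for_bits(bits), 1), "time constants")
```

The required settling time grows from about 6 time constants at 8 bits to about 17 at 24 bits, so for a fixed RC the more accurate acquisition is necessarily the slower one.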

 

No analog designer can dispute that!

 

Another example is an OP-AMP. Converters use OP-AMPS that operate at very high speeds, to handle the required fast voltage (or current) “steps”. One can look at settling time of such circuits, and again, the longer you wait after the voltage step occurs, the closer the output of the OP-AMP is to the ideal final value (thus more accurate). The vocabulary is “settling time” (such as settling time to reach less than 1% error). If you “look” at the OP-AMP’s output voltage too soon after the step occurs, the result of the conversion is less accurate.

 

There are numerous other examples of the tradeoffs, all based on basic electrical principles of physics. This is not the place to lecture about analog design. I was probably too detailed as is.

 

So the assertion that I said there is a "permanent bit depth limit" is not true.

Dan Lavry

 

End of response

 

We do appreciate that Dan Lavry is cited as an authority on digital audio conversion; however, we would ask that he not be misquoted by any means, including by taking parts of his paper out of context or by claiming that he "says" something in words other than his own.

 

The point is not that conversion at sample frequencies higher than 96kHz is "not accurate enough for audio;" it is that conversion at sample frequencies higher than 96kHz will always be less accurate than conversion at 96kHz (or lower) with the same technology. As with many other things, there is a point of diminishing returns, and thus there is an upper limit on sample frequency for the most accurate conversion of audio.

 

Brad Johnson

Lavry Engineering Technical Support

 

 


I very much appreciate Lavry Engineering's explanation of what was intended by the paper regarding the topic of bit depth.

 

So the assertion that I said there is a "permanent bit depth limit” is not true.

 

I apologize if there is anything in the article about which I gave a misimpression. That wasn't my intention. However, please note that both Mark Powell (in a comment favorable to what he took to be a point made by the article) and I were left with the idea that the bit depth quantifications given in the article were supposed to be valid today. Mark's comment was:

 

One of [Lavry's] basic points, near the beginning, is that you don't get anywhere near a 24-bit word length due to inherent inaccuracies until you have a sample rate as low as 50-60 Hz.

 

But several people here are totally ignoring this and talking about 24/192.

 

So do you think he is just plain wrong on this?

 

There is nothing in the article saying "Currently...," "At the time of writing...," etc., to denote that the quantifications regarding bit depth were only intended to apply as of 2004. The only statement in the article regarding duration is the one I quoted, "The compromise between speed and accuracy is a permanent engineering and scientific reality." Perhaps a more apt phrasing for what was intended would be "While the bit depth attainable at a particular sample rate has increased over time and may be expected to continue to do so, a given technology will always be capable of greater bit depth at lower sample rates."

 

If anyone at Lavry Engineering would like to respond regarding the topic of bit depth in a historical context, I would be interested in knowing what bit depths were commonly used in recorded material available at the "2x" sample rates (88.2kHz and 96kHz) and the "4x" ones (176.4 and 192kHz) when the paper was written, and what the corresponding bit depths are today. I would also be interested in understanding more about how bit depth and sample rate interact with each other with regard to accuracy of reproduction of the musical signal, given the praise accorded by audiophiles to a "one-bit" high sample rate technology like SACD/DSD.

 

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> wi-fi to router -> EtherREGEN -> microRendu -> USPCB -> ISO Regen (powered by LPS-1) -> USPCB -> Pro-Ject Pre Box S2 DAC -> Spectral DMC-12 & DMA-150 -> Vandersteen 3A Signature.


The point is not that conversion at sample frequencies higher than 96kHz is “not accurate enough for audio;” it is that conversion at sample frequencies higher than 96kHz will always be less accurate than conversion at 96 kHz (or lower) with the same technology.

 

This applies only to multi-bit PCM. But with delta-sigma, the higher the sampling rate, the better the accuracy in the audio band. One-bit conversion at 24 MHz rates can already give extremely good audio-band linearity with extremely simple and accurate circuitry.

 

Even with multi-bit PCM, using a higher sampling rate and noise shaping, combined with suitable analog filtering, can improve accuracy significantly by reducing and shaping the LSB mismatch errors.
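For readers unfamiliar with noise shaping, a minimal first-order one-bit sigma-delta modulator can be sketched in a few lines. This is a toy illustration of the principle only, not HQPlayer's or any product's implementation: the feedback loop forces the average of the ±1 output stream to track the input, with the quantization error pushed away from DC toward high frequencies.

```python
def sdm_first_order(u, n):
    """First-order 1-bit sigma-delta modulator for a constant input u in (-1, 1).
    The integrator accumulates the error between the input and the fed-back
    1-bit output, so quantization error is shaped away from low frequencies."""
    integ, y, out = 0.0, 0.0, []
    for _ in range(n):
        integ += u - y                    # accumulate input minus fed-back output
        y = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        out.append(y)
    return out

bits = sdm_first_order(0.3, 10_000)
avg = sum(bits) / len(bits)
print(avg)  # the average of the +/-1 stream closely tracks the 0.3 input
```

Even though each output sample is only one bit, the low-frequency content of the stream reproduces the input to far better than 1-bit precision; the "missing" accuracy lives as noise at frequencies a reconstruction filter removes.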

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


On the 96k vs. 192k sampling rate, this might be of interest to further muddy the waters

 

I wonder why they use a 10+ year old DAC chip (AD1853) instead of the more modern but still old AD1955, which would already perform better at 192k than at 96k...

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Regarding:

“One of [Lavry's] basic points, near the beginning, is that you don't get anywhere near a 24-bit word length due to inherent inaccuracies until you have a sample rate as low as 50-60 Hz.

 

But several people here are totally ignoring this and talking about 24/192.

 

So do you think he is just plain wrong on this?”

 

There is accurate information and inaccurate information. One can produce 24 bits of information using any number of means; the paper was addressing the issue of accuracy. Yes, it is possible to get good results recording audio at 192kHz; but if it were possible to use the exact same converter in a way that was optimized for 96kHz operation, it would yield more accurate audio information. Part of the problem with making comparisons between recordings made at 192 and 96kHz is that an AD converter optimized to operate at 192kHz will by definition have compromised operation when set to 96kHz output. All contemporary multi-bit AD converters actually sample at frequencies much higher than the output frequency, independent of the output sample frequency setting in cases such as 192 versus 96 versus 48kHz.

 

Regarding:

“The point is not that conversion at sample frequencies higher than 96kHz is “not accurate enough for audio;” it is that conversion at sample frequencies higher than 96kHz will always be less accurate than conversion at 96 kHz (or lower) with the same technology.

This applies only to multi-bit PCM….”

 

No, it does not.

 

First of all, the term "multi-bit PCM" is confusing: it conflates AD converter architecture (multi-bit versus single-bit, BOTH of which utilize sigma-delta conversion) with "PCM," which is an output format and can be produced from non-sigma-delta as well as sigma-delta AD converters.

 

And, YES, there is a trade-off even with one-bit sigma-delta between bandwidth and accuracy in the audio band. It is interesting that so many people think a system with extremely high noise energy just beyond 20kHz, requiring it to be limited to a bandwidth of ~20kHz, has "more accuracy" because of its very high sampling frequency than a 96kHz multi-bit system with a bandwidth TWICE that of DSD.

 

For those interested in his opinion on DSD, there are a number of Posts on the Lavry Forum regarding this matter. Here are two examples:

 

http://www.lavryengineering.com/lavry_forum/viewtopic.php?f=1&t=916&hilit=DSD

http://www.lavryengineering.com/lavry_forum/viewtopic.php?f=1&t=610&hilit=DSD

 

Dan Lavry has spent hours in this and other forums responding to individuals who make assertions without any solid scientific basis. He did feel that he would like to make one final response:

 

Dan Lavry’s response:

You seem to have dismissed what I said about the speed–accuracy tradeoff altogether, and you counter it with what? That 24MHz one-bit is "good"?

 

Sigma-delta converters, as well as most modern multi-bit converters, do utilize very high speed in the front-end circuitry. So why not claim that "PCM" has a sample frequency of 24MHz? Because that does not represent the audio sample rate. It is the modulator rate! Conversion is much more than how fast one clocks a modulator.

 

The concepts of DSD and multi-bit sigma-delta are both based on noise shaping. With a given technology (the basic parameters being modulator clock speed, which you are confusing with converter sample rate, number of modulator bits, and loop filter order), one can have a much better result when aiming at the frequency band the ear hears. When you accommodate, say, 90kHz of usable signal range, you get a lot of range that is not usable by the human ear, and you pay a price for it. It is better to accommodate the usable range.

 

You can take the same basic resources (modulator clock speed, modulator bits and filter order) and design a converter for some industrial use requiring 1MHz usable signal bandwidth. It will not have anywhere near the accuracy of a converter aimed at 50kHz usable signal range.

 

Here is an analogy: a worker can dig 10 cubic feet of sand (this represents a given technology). You can tell the worker to dig a trench 10 feet long, 1 foot wide, and 1 foot deep. Or you can choose to dig a hole 10 feet deep with a 1 square foot area. You have to decide what to do. Deeper is better (audio quality), but the application requires some minimum area (covering the audible range).

 

So here I have shown you how higher speed (more signal bandwidth) costs accuracy right up-front, at the block diagram stage of design. That is BEFORE I even touch on the real limitations of the analog tradeoffs between speed and accuracy, including the sample and hold and OP-AMP examples in my previous response.

 

I don’t see why it is so difficult to grasp the concept of a tradeoff between speed and accuracy. I can think of many real-life "cases" where such a tradeoff exists. However, I am not making universal statements about life in general. I am restricting my comments to what I know as a professional with 4 decades of hands-on design experience.

 

Anyone saying that there is no compromise between speed and accuracy does not know electronic circuits. Diverting the conversation into other aspects to avoid reality issues at the most fundamental level is a disservice to those seeking the truth. I have encountered too much stuff like that already in discussions on the internet. Some talked about the advantage of a narrow impulse, ignoring (or being ignorant of) the fact that impulse width is THE SAME THING as signal bandwidth. Others talked about "more samples is better," failing to understand a basic theorem (not a theory; a theorem is PROVEN) called the Nyquist Theorem, one of the most fundamental cornerstones of technology and engineering. Others claimed that the ear hears way up there, into the range of 100kHz…

 

Regarding "Ignoring vs. Paying Attention":

“…Since SACDs and DSD files seem to exist; many people who have an interest in good audio seem to like them; and knowledgeable programmers/designers don't seem to have any conceptual problem with how SACD/DSD recordings work; then my conclusion is Lavry's eight references to problems with "accuracy" are hogwash…”

 

I wrote my paper Sampling Theory to dispel the "baloney." I tried my best to keep it simple, and I know it is not easy reading for a novice. I feel that I have done my part, and I cannot reply to every comment on the web, especially when so much of it is based on misinformation. And I do not appreciate the knowledge I have chosen to share with others, gained from my 40-plus years of work and experience, being labeled "hogwash."

 

End of response.

 

 


A theorem is not "proved" in an absolute sense; in mathematics, it is a proposition that is proved from other propositions. Neither of which elevates it to the level of "proved" in any sense other than a mathematical one.

 

Beyond that, while I appreciate the paper, there are some stumbling blocks in it that are difficult to get past. For example, the Nyquist/Shannon theorem does not say a sampled file contains all the information of the original source. It says that if the source is sampled at a rate of at least twice its highest frequency, the sampled file will contain the minimum amount of information needed to reconstruct the original waveform.
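That reading of the theorem can be demonstrated directly: the Whittaker-Shannon interpolation formula rebuilds the waveform between sample points from the samples alone. A small numerical sketch follows; the truncated sinc sum makes the reconstruction only approximate, and all the specific values are arbitrary choices for illustration:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs, f = 8.0, 1.0   # sample rate comfortably above twice the signal frequency
n_samples = 4001
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]

def reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n).
    Truncated to the available samples, so exact only in the limit."""
    return sum(x * sinc(fs * t - n) for n, x in enumerate(samples))

t = 250.37                              # an instant between sample points
exact = math.sin(2 * math.pi * f * t)
approx = reconstruct(t)
print(abs(approx - exact))              # small, and shrinks as the window grows
```

The off-grid value is recovered from the samples to within a tiny error, which is the sense in which the samples contain the information needed to reconstruct the waveform; real hardware can only approximate the infinite ideal sum, which is the poster's point.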

 

Perhaps that is a difference that makes no difference, but it was enough to cause me to stop and question what I read.

 

To be perfectly honest, while it may be true that 96K is the ultimate resolution that will ever be needed (according to the lavryengineering.com website, Dan Lavry is "against" 192K?), there are certainly other people who disagree with that.

 

I would like to see a review of the DA11, though. Why don't you guys send one to Chris and get him to review it? There are plenty of DACs around in the same price range to compare it to.

 

-Paul

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


For those interested in his opinion on DSD, there are a number of Posts on the Lavry Forum regarding this matter.

 

The funny part in those messages is that multibit PCM converters are practically dead. Much more so than DSD. All the best-performing converters are SDMs these days. And now we can do SDM in computers too, instead of in DAC chips, and with much better quality and SNR.

 

So the way to get the best results today is to keep the data SDM all the way from ADC to DAC, without going through two conversion steps: first converting it to PCM in the ADC and then back to SDM in the DAC. Especially because SDM is a more space-efficient format too, without the brickwall filter implications of the conversion.

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Nyquist/Shannon theorem does not say a sampled file contains all the information of the original source

 

There are two often-ignored aspects of the theorem versus its application in the real world:

1) It assumes infinitely steep, perfect filters; in the real world, "infinite" and "perfect" don't exist

2) It also assumes perfect sample timing and infinite sample accuracy

 

It also ignores transient- and signal-change-related behavior, things like the Gibbs phenomenon.

 

Also, multi-bit ladder PCM converters can be significantly improved by noise shaping. This way it is possible to achieve more resolution in the audio band by utilizing higher sampling rates.
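The resolution gain from oversampling plus noise shaping can be estimated with the standard textbook approximation for an order-L noise-shaping loop. A sketch, where the 768kHz/20kHz figures are illustrative choices echoing the PCM1704 example rather than measured data:

```python
import math

def shaping_gain_db(osr, order):
    """In-band SNR improvement over plain Nyquist-rate quantization for an
    order-L noise-shaping loop at oversampling ratio osr, using the standard
    textbook approximation: 10*log10((2L+1) * osr**(2L+1) / pi**(2L))."""
    L = order
    return 10 * math.log10((2 * L + 1) * osr ** (2 * L + 1) / math.pi ** (2 * L))

# Example: 768kHz sampling, 20kHz audio band -> OSR = 768000 / (2 * 20000) = 19.2
osr = 768e3 / (2 * 20e3)
gain = shaping_gain_db(osr, order=1)
extra_bits = gain / 6.02          # ~6.02 dB of SNR per bit
print(round(gain, 1), "dB ->", round(extra_bits, 1), "extra bits in-band")
```

Even a first-order loop at that oversampling ratio buys roughly 33dB (about 5.5 bits) of in-band resolution beyond the raw quantizer, which is the mechanism behind the claim.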

 

Just look at the last and best multi-bit ladder DAC, the PCM1704, and how its performance improves when the input is oversampled and noise shaped, using a 768kHz sampling rate and 24-bit resolution.

 

SDM, on the other hand, starts from the assumption that since the real world cannot be perfect and very accurate (due to manufacturing tolerances, temperature changes and such), one shouldn't even try, but should instead use that to advantage.

 

Radios have already become software defined (see http://en.wikipedia.org/wiki/Software-defined_radio ); now we can have software-defined audio converters! That's my personal area of work.

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


I wish I could comment, but I can't. This thread has risen beyond layman's opinions from the web to ideas presented by some very smart people. All I know is to keep an open mind, learn from what I read here on this forum, then sit down in front of my system and enjoy as well as listen, with a more educated ear.

 

Alpha Dog>Audirvana+>Light Harmonic Geek>MacBook Pro> Sound Application Reference>Modwright Oppo 105>Concert Fidelity CF 080 preamp>Magnus MA 300 amp>Jena labs and Prana Wire cables>Venture CR-8 Signature[br]


I can't say I disagree with Mr Lavry's vision as expressed here, and luckily I never said I did. That this is different from the reconstruction approach he (and not only he) adheres to is something else, and not for this post.

 

But

 

Maybe I do not agree with the general assumption that more (sampling) speed means less accuracy. It all depends on the specs of the components used/selected, and "over-specced" means just that: at slower rates you won't gain anything beyond what the components' linearity promises, so the extra speed can just as well be utilized.

*THAT* assumed, of course.

 

But now comes the real thing: select the parts that can do what you want. This *is* about capacitors, their sizes, the combination of their sizes, and what they must do.

This *is* about the speed of analogue parts.

It is even about the perceived speed of the analogue chain behind the converter. Speakers, for instance.

 

Coincidentally, and well known by now, I don't "need" HiRes at all. So I am for 100% sure not advocating HiRes as Valhalla. I never even play it (OK, also because 95% of it is flawed to begin with).

Still, I use a 768kHz converter, or let's say 705.6kHz while playing 16/44.1. And really, it is magnitudes better than when used at 352.8kHz. Should be impossible, right?

 

Well, it already isn't impossible given the components selected (nothing is in there by accident, or for price, or anything else), but that doesn't change the D/A chips used (1704), which are what they are, with no further options once R2R is in chip form. So for 100% sure they will be less accurate at the higher speeds used. Still, that doesn't show. The contrary. How?

 

Well, because there's more to it than a tunnel vision on one aim. So the aim could well be that the slower speed works for the better (accuracy), but what about the consistent whole of the remainder? What if I didn't use the filtering means which are supposedly proven so good? What if I use filtering means that are 100% based on that sample speed?

 

Then each higher speed would work out for the better for that reason, and the remainder must be looked at as a tradeoff between the accuracy implications of speed and the accuracy implications of the filtering. Which is more important?

 

Never mind the answer, because this post is not about that.

What it is about is that no single element within the complex should be allowed to be a justification for the whole.

 

Peter

 

Lush^3-e      Lush^2      Blaxius^2      Ethernet^2     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


When the filtering is applied as it normally is anyway, it would not occur to me to title a post "Accuracy of AUDIO conversion". There is no accuracy in sight anywhere. So before you think that accuracy of e.g. linearity plays even the slightest role, I'd examine how analysers work first. They nicely anticipate the same "flaws" the filtering applies, and *thus* will show you a 0.0018% THD+N, which is good enough.

But now play music.

 

First of all, I really can't see how anything which rings from here to eternity can be called accurate. It is not, and it does not sound so. It's vague.

It is the "proven" way to do it, in the meantime.

Oh? By means of those analysers, you mean?

 

So now we have something that does not ring at all. Again that 0.0018% shows.

What it does do, though, is violate all the good rules of the proven theory.

And does anyone care?

 

Well, we all should, because the analyser tells the same story in either situation. This, while all specs and such are based on these analysers.

 

Ringing is a given fact. Sure, the less the better. No pre-ringing? Also better. Well, until someone can express the phase anomalies in "numbers" against the now-better attack.

 

No ringing at all? Ah! Now we have something.

Yes? Well, no, because now where are the "numbers" for the now *real* THD figures (which are worse, and way worse depending on the frequency) against the unreal ones?

 

We can't compare this. Not by numbers.

But one thing remains: there is NO way ringing implies accuracy.

Harmonic distortion also does not. But 100% sure it is less devastating. And what's more: the no-ringing + higher THD sounds totally accurate to me.

 

But now the fun:

Who is able to play with these things? To measure the one against the other?

To listen to either?

 

Lush^3-e      Lush^2      Blaxius^2      Ethernet^2     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Your quoted 'consumer sound card' may well claim 24/192. But it is the accuracy that is important, surely? Encoding inaccurately at 24/192 really means that some, we don't know how many, of the least significant bits are inaccurate. From Lavry's "24 bits when we approach 50-60Hz" I would make a guess and say half of them. So we have 12-bit accuracy at 192K. Whatever, it will be a lot less than 24 accurate bits.

 

Re his other figures: ignore his 100MHz figure, that's too high to concern us. But 24 bits at 50Hz, and whatever he said (can't remember, say 12) at 1MHz, still leaves us with way less than 24-bit accuracy at 192K.

 

Manufacturers, and studio people, can say what they like, such as 24/192, and the manufacturers may be correct. I have no doubt they do sample at 192K and take measurements at a depth of 24 bits, but without telling us how many of those bits are accurate it is meaningless. The 'studio' guys just read the manufacturers' numbers, same as we do.
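For what it's worth, the usual way to express "how many of those bits are accurate" is ENOB (effective number of bits), computed from a measured SINAD figure. A sketch, with a purely hypothetical 110dB measurement standing in for a real datasheet number:

```python
# ENOB (effective number of bits) from a measured SINAD figure, via the
# standard relation SINAD = 6.02 * ENOB + 1.76 dB.
# The 110dB value below is hypothetical, not a quoted measurement.
def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

# A nominally "24-bit" converter measuring 110dB SINAD delivers about 18
# effective bits; the remaining LSBs are below its own noise and distortion.
print(round(enob(110.0), 2))
```

This is exactly the quantity the poster is asking for: the word length on the box says 24, but the effective bits follow from what the converter actually measures.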

 

And yes, of course, as time moves on these things will become more accurate. But as no one discloses how accurate they are now, we are left with 'more accurate than what?'

 

Just some thoughts, I am not disagreeing with what you or anyone says.

 


There is accurate information and inaccurate information. One can produce 24 bits of information using any number of means; the paper was addressing the issue of accuracy.

 

Of the eight times accuracy is addressed in the 27-page paper with regard to sample rate, in five the issue is framed in terms of bit depth. Of the other three, one mentions capacitor charging and amplifier settling, and the other two simply mention accuracy without giving an underlying reason. So I quite understandably took the paper to be saying that accuracy was an issue of bit depth.

 

Treating bit depth as something entirely separate from other considerations, which is the understanding anyone reasonably reading the paper would take from its repeated emphasis on bit depth to the exclusion of everything else except a single passing mention of capacitor charging and amplifier settling, is what I characterized as "hogwash." I can certainly understand Dan Lavry's displeasure at reading that term from someone with, as I freely admit, a relatively slight understanding of the issues. So now, given that you apparently agree bit depth isn't the be-all and end-all, perhaps we can characterize such a view with a less loaded term, like "incorrect," or at least "incomplete."

 

I'm still looking for a quantification of the bit depth issue at the frequencies of interest, since the only bit depths quantified in the paper were for other frequencies. How many "real" or "accurate" bits of information can one obtain at 88.2 and 96kHz sample rates, and how many at 176.4 and 192kHz?

 

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> wi-fi to router -> EtherREGEN -> microRendu -> USPCB -> ISO Regen (powered by LPS-1) -> USPCB -> Pro-Ject Pre Box S2 DAC -> Spectral DMC-12 & DMA-150 -> Vandersteen 3A Signature.


I'm sorry, but I must come back on this.

 

"20 bit accuracy at 50-60Hz" is clear enough. "Near 16 bit accuracy at 1MHz" is also clear.

 

So let's accept, say, 17 or 18 at 192K. Whatever, it aint 24, like many manufacturers claim. Sure they measure it, but their last 6 or 7 bits are worthless. Assuming Lavry is right, of course.
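One hedged way to put a number on "worthless bits" is the effective number of bits (ENOB) relation used in converter datasheets, which turns a measured signal-to-noise-and-distortion figure into "real" bits. The 110 dB SINAD below is purely a hypothetical value for illustration, not a measurement of any actual converter:

```python
def enob(sinad_db):
    """Effective number of bits from a measured SINAD in dB, via the
    standard ideal-quantizer relation SNR = 6.02*N + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02

# A nominally 24-bit converter with a (hypothetical) 110 dB SINAD:
print(round(enob(110.0), 1))  # → 18.0 "real" bits, not 24
```

On this yardstick, a "24-bit" box would need roughly 146 dB of SINAD to deliver all 24 bits, which no audio converter achieves.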

 

Maybe they know they are worthless, and that lets them 'justify' their digital volume controls :)

 


"Coincidentally -and well known by now- I don't "need" HiRes at all. So, I am for 100% sure not advocating HiRes to be Walhalla. I never even play it (ok, also for the reasons that 95% of it is flawed to begin with).

Still I use a 768KHz converter, or let's say 705.6Khz while playing 16/44.1. And really, it is magnitudes better than when used at 352.8KHz. Should be impossible, right ?"

 

Very interesting stuff. Listen to 16/44.1, but not in native form; rather, highly upsampled to 768kHz. I would definitely save $$$$ by listening to my CD collection instead of buying it all over again from HDTracks!:)

 

Alpha Dog>Audirvana+>Light Harmonic Geek>MacBook Pro> Sound Application Reference>Modwright Oppo 105>Concert Fidelity CF 080 preamp>Magnus MA 300 amp>Jena labs and Prana Wire cables>Venture CR-8 Signature


"20 bit accuracy at 50-60Hz" is clear enough.

 

From the Lavry paper:

 

as we approach 50-60Hz we get near 24 bits

 

Not clear enough, apparently. ;-)

 

So let's accept, say, 17 or 18 at 192K. Whatever, it aint 24, like many manufacturers claim.

 

Except this assumes two things, one of which we've learned is wrong from Lavry/Brad Johnson, and the other of which is not at all certain.

 

The thing we've learned is wrong is the assumption that the bit depth numbers in the 2004 paper are permanent truths: if 24 bits was possible near 50 or 60Hz in 2004, perhaps it's 32 by now. I have no idea. All Brad/Lavry said is that we were incorrect in taking the paper's remark about "permanent" to refer to the particular bit depth numbers, rather than to the speed/accuracy tradeoff itself.

 

The thing that's not at all certain (to me, anyway) is the relationship between the bit depth numbers at 100MHz, 1MHz, 50 or 60Hz, 192kHz, and 96kHz. Is it linear? Logarithmic? Some other curve? The bit depth numbers given in the Lavry paper don't show a linear relationship (i.e., a constant number of bits per decade of sample rate).
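To make the question concrete, here's a minimal sketch that linearly interpolates between Lavry's three published data points in log-frequency space. The fit itself is entirely my own assumption; the paper gives only the three points, not a curve, and taking 50-60Hz as 55Hz is likewise arbitrary:

```python
import math

# Lavry's published data points (sample rate in Hz, approximate bits);
# the "50-60Hz" figure is taken as 55 Hz purely for illustration.
POINTS = [(55, 24), (1e6, 16), (100e6, 8)]

def bits_at(freq_hz):
    """Piecewise-linear interpolation in log10(frequency) between the
    published points -- an assumed model, not anything from the paper."""
    lf = math.log10(freq_hz)
    pts = [(math.log10(f), b) for f, b in POINTS]
    for (lf0, b0), (lf1, b1) in zip(pts, pts[1:]):
        if lf0 <= lf <= lf1:
            return b0 + (b1 - b0) * (lf - lf0) / (lf1 - lf0)
    raise ValueError("frequency outside the published range")

for rate in (96000, 192000):
    print(f"{rate} Hz -> ~{bits_at(rate):.1f} bits (if this guess holds)")
```

Under this speculative fit, 96kHz comes out near 17.9 bits and 192kHz near 17.3, interestingly close to the "17 or 18" guessed earlier in the thread. Note also that the two segments have very different slopes (4 bits per decade above 1MHz, under 2 below it), which is exactly why no single straight line fits the three points.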

 

At the end of the day, the important question is whether the differences in bit depth among the frequencies of interest (88.2, 96, 176.4, and 192kHz) have any audible significance. No one has yet provided the bit depth figures, let alone placed them in a context that would allow us amateurs to understand whether any differences would be audible.

 

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> wi-fi to router -> EtherREGEN -> microRendu -> USPCB -> ISO Regen (powered by LPS-1) -> USPCB -> Pro-Ject Pre Box S2 DAC -> Spectral DMC-12 & DMA-150 -> Vandersteen 3A Signature.


The Lavry Report is a well done, professional document and is exactly the kind of post that computeraudiophile needs. Design methodology always involves listing the criteria that affect the design objective. In most real systems the variables are dependent on one another, and because of these dependencies an optimal design result can only be achieved by an optimal combination of the criteria. For example, a strong aircraft wing is heavier, but you want an aircraft light enough to fly; in this case, strength and weight are dependent on one another. It does no good to have the strongest wing ever made on an aircraft that's too heavy to fly. In general, after obtaining a list of all the parameters that affect the design, and testing their effects on the system, a design decision must be taken that chooses the optimal compromise among all the competing criteria. This field of engineering is called "Design of Experiments" and I have some experience in it.

The Lavry report is great news for audiophiles, because it means that we do not have to chase after the latest and greatest bit depth and sample rate recordings by spending ever larger amounts of money on the latest and greatest equipment. There are other criteria affecting sound quality that need to be considered at this point in time, and the identification and optimization of these other factors should be the area of concentration of this blog. We all know it when we hear a great recording and when we hear a mediocre one, but what really made the difference between the two? The Lavry report indicates that we need to start looking in areas other than just bit depth and sample rate.

 


It has long been evident to me (and I've posted to this effect elsewhere) that asking three audio people a question will tend to result in at least four different answers. With this in mind, my two cents:

 

While I find theoretical analyses of great interest, before reaching any practical conclusions based on these, I ask myself if I believe the analyses take every possible variable into account.

 

In the case of products whose only purpose in the Universe (of which I'm aware) is to be listened to, I've found actual listening to be quite a bit more informative than anything else, more often than not revealing what the theoretical analyses omit (or sometimes outright deny).

 

My job involves listening to musical performances and making decisions about how hardware/software (and my engineering practice) reflect what I hear in the performance itself, before any hardware/software enters into the picture. Since I seek results as close as possible to being present at the event itself, I listen to what changes between what I hear from the position of the mic array to the sound of the mic feed, to what I hear when listening to a playback.

 

So, theory notwithstanding, I'll say what I've said before, which is with the best 4x digital, for the very first time in my decades of recording, I cannot as yet distinguish between the mic feed and the playback. With the very best 2x in my experience, it sounds "very good" but, as I delineated in an earlier post in this thread, not at all the same as the mic feed.

 

Let me put this another way: I've used all sorts of Scully, Ampex, MCI, Studer and other analog recorders. I've used all sorts of digital recorders and a wide variety of A-D and D-A converters. To my ears, they have all colored the microphone signals, except the best A-D/D-A converters running at 4x rates (i.e. 176.4 and 192k).

 

I remember the promise of digital and "Perfect Sound Forever" when I first encountered it in early 1983. To put it mildly, it was quite far from perfect. But, I figured, we've had analog for a hundred years and I saw no reason why digital wouldn't improve, given another hundred years. To my great delight, small improvements came only a few years later but the real improvement, the delivery on the promise took a bit over two decades; 80 years "early"! ;-}

 

I've finally (finally!) got gear that gives me back the sound of my mic feed. Nothing I've heard will do this at 96k but it will at 192k.

 

So what am I to make of a theoretical analysis that tells me this is inferior? Should I go back to 96k, which sounds "very good" but ultimately nothing at all like what the mics are sending, simply because of what a "white paper" says? I don't think so.

 

I think I'll continue working at the "inferior" (;-}) 192k and "suffer" a signal I can't distinguish from the mic feed.

 

As always, just my perspective

(perhaps theoretically "wrong" and if so, quite happily).

 

Best regards,

Barry

www.soundkeeperrecordings.com

www.barrydiamentaudio.com

 

 

 


Is it linear? Logarithmic? Some other curve?

 

It's a curve that can be modified by using noise shaping, so you can retain the low frequency resolution while increasing sampling rate.

 

But in the end, all the PCM bit depths of modern converters are fake; they just define how many bits the built-in DSP uses for external PCM communication.
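For the curious, the noise-shaping idea mentioned above can be sketched as a first-order error-feedback quantizer, the basic trick behind getting many low-frequency bits from a coarse, fast quantizer. This is a toy illustration of the principle, not any particular converter's design:

```python
import math

def noise_shape(samples, bits):
    """First-order error-feedback quantization: subtract the previous
    quantization error from each new input, which pushes the error
    spectrum toward high frequencies (away from the audio band)."""
    step = 2.0 / (2 ** bits)      # quantizer step for signals in [-1, 1]
    err, out = 0.0, []
    for x in samples:
        v = x - err               # feed back the last error
        q = round(v / step) * step
        err = q - v               # remember the new error
        out.append(q)
    return out

# 1 kHz tone sampled at 192 kHz, pushed through an 8-bit shaped quantizer:
fs = 192000
sig = [0.5 * math.sin(2 * math.pi * 1000 * i / fs) for i in range(4096)]
shaped = noise_shape(sig, 8)
```

Averaged over the audio band, the shaped error is smaller than plain 8-bit rounding would suggest, which is how oversampled delta-sigma designs trade sample rate for in-band resolution.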

 

When it comes to converters, my signal analyzer has 16-bit resolution at 10 MHz sampling rate.

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Interesting thread. In this one and the one on "24/192 Downloads ... and why they make no sense?" a very significant question is raised. In both threads there are a lot of answers, but in my view it is not quite clear which question these answers refer to; I don't believe they are all relevant to the question at hand (192/24 vs. 44.1/16). My conclusion is that everyone in the two threads claiming to hear differences between 192/24 and 44.1/16 is probably correct. But whether these differences are actually caused by going from 44.1/16 to 192/24 is a very open question to me.

 

Please bear with me, this will take a moment.

 

My journey in music reproduction has been a most enjoyable one over a few decades. I also play music with passion. This question of whether hires is better than CD is an intriguing one. Going from vinyl to CD, I was unsatisfied with CD (harsh sounding, the lot). Moving on to a Berkeley Alpha DAC Series 1 a few years back, I discovered how good 44.1/16 could sound. Hearing the Weiss DAC202 was another veil lifted. Changing speakers a month ago revealed a new world of detail (still in 44.1/16). While I can hear differences between reproduction of 44.1/16 and 192/24 material, I find the differences between recordings much greater than any systematic difference I could detect. There are CDs that I vastly prefer to certain 192/24 material.

 

In this thread two data points raised my interest:

- An AES article claims that the insertion of a 44.1/16 A/D D/A loop into a high resolution signal cannot be detected (in that specific test setup) http://www.aes.org/e-lib/browse.cfm?elib=14195

- Barry Diament says that only 192/24 can reproduce the mike feed. Keith O. Johnson has a similar view

 

I believe both views are correct. I have great respect for both Barry D. and Keith J. and I am sure the differences they hear are real. I have my doubts, however, that they are singularly caused by going from a lower sample rate / bit depth to 192/24. The recording / reproduction chain's performance is the end result of the interaction of a long chain of components. Microphones, recording equipment, mastering, player, amp, speakers or headphones, cables, ears, brain: all contribute. If one component produces ringing, jitter, or transients, these will travel through the chain and be modified by the following components. Same thing with acoustic background noise (which is way above the noise floor of 44.1/16), electrical supply noise, cable microphony, etc. Not to speak of the imperfections of the ear and the processing that happens in the brain (listener bias: we hear what we want to hear).

 

Given there are so many variables in the chain, the differences heard by Barry D. and Keith J. and many others (including myself) could very easily be caused by other factors than purely by the difference between 44.1/16 and 192/24 (i.e. higher quality amps, cables, power supplies etc.). As others have pointed out, it is extremely difficult to construct a test setup that ONLY tests the difference between 192/24 and 44/16 and nothing else.

 

Dan Lavry argues that due to noise, there is an inherent limit on the time / amplitude resolution product for a given technology. One can either record / reproduce very small amplitude differences with a low time resolution (by averaging several samples) or record / reproduce very small time differences with a low amplitude resolution. Similar in spirit to Heisenberg's uncertainty principle, though not at Heisenberg's resolution level. This limit seems to fall below a 100kHz sampling rate for 24-bit resolution. If we sample higher (192/24), the theoretical resolution limit of the system would be masked by electrical noise (acoustic background noise in even the best studio being several orders of magnitude higher).

 

On the other hand I have no doubt whatsoever that for editing / mastering, higher sampling rates / bit depths are better and improve accuracy. Here we are purely in the digital domain and not operating in real time, so Dan Lavry's time / amplitude limit does not hold and the higher accuracy is not masked. If, after the high-resolution mastering, we downsample to 44.1/16, we get a better CD than if the mastering had been done at 44.1/16. No contradiction to theory here.
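As a side note on that final step, the conventional way to requantize a high-resolution master down to 16 bits is with dither rather than plain rounding. A minimal TPDF-dither sketch (my own illustration of the general technique, not any mastering tool's actual code):

```python
import random

def requantize_16bit(sample):
    """Requantize a float sample in [-1.0, 1.0) to a 16-bit grid using
    TPDF (triangular) dither, which decorrelates the rounding error
    from the signal at the cost of a slightly higher noise floor."""
    step = 2.0 / 65536                                   # one 16-bit LSB
    dither = (random.random() - random.random()) * step  # triangular PDF, +/- 1 LSB
    return round((sample + dither) / step) * step
```

Without the dither term, low-level signals would correlate with their own quantization error and produce audible distortion; with it, the error becomes benign noise, which is part of why a well-dithered 16-bit CD made from a high-resolution master holds up so well.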

 

I think we need to think about what question it is we want to answer.

 

Is it better to record at 192/24 than at 44.1/16? I don't know, I am not a recording engineer. I would suspect that Dan Lavry's limit will kick in at some point, but I have no clue where. Barry D. or Keith J. could probably answer this one, or an experiment could be devised to test this (true resolution limits of the microphones and the A/D converters).

 

Is it better to master at 192/24 than at 44.1/16? Certainly, no theory would contradict that.

 

Is it better to reproduce at 192/24 than at 44.1/16? With all the information at hand, I would suspect that given current technology, the "optimum" would be somewhere around 96/24. Dan Lavry's argument would suggest the point of diminishing returns to be a bit (no pun intended) below that. As technology progresses, the limit will increase, but whether the ear and the brain can detect the difference I have no clue. Looks like more testing is needed.

 

My conclusion is that everyone in the two threads claiming to hear differences between 192/24 and 44.1/16 is probably correct. But whether these differences are actually singularly caused by going from 44.1/16 to 192/24 is a very open question.

 

For the time being I am quite happy to rediscover my CD collection through my new Piega 90.2s

 

 

Let the music touch your soul


Listening is what counts. But too many people are just number chasing, without even considering what the numbers actually mean. They read 24/192 in the advertisement or on the box and just believe it, despite Lavry's paper saying the two together are currently impossible. Sample as deep as you like and as fast as you like, but don't necessarily believe the values you get.

 

It is like a digital clock. It says 10:24. But I bet it is not accurate to four digits. It might well actually be 10:19, and making it read 10:24:46 does not make it any more accurate.

 
