
Lavry tech

Everything posted by Lavry tech

1. While it is possible to get "good" results with converters optimized to operate at sample rates higher than 96kHz, this does not "prove" that higher sample rates come without a cost. Because, by design, a multi-bit sigma-delta converter optimized to operate at 192kHz (or higher) cannot also be optimized to operate at 96kHz, it is nearly impossible to make a meaningful listening-test comparison of the effects of changing ONLY the sample rate. Other factors such as differences in analog circuitry, jitter in conversion clocking, or even PC board layout can have a significant effect on the perceived "sound" of the entire converter unit, which makes comparing different models of converters useless in this regard.

To help illustrate the difference between scientific facts and subjective test results, I submit the following argument: How about that joker Columbus? Can you believe that B.S. about the world being "round?" Anyone with eyes can just look around and see that the world is FLAT. Anyone.

As Dan Lavry points out in the following response, the Nyquist theorem is not intuitive. Trying to apply "common sense" analogies such as comparing samples to pixels only confuses the matter.

Dan Lavry's response: I have been making the case against higher sample rates for audio for a long time. I have encountered no credible arguments to my paper "Sampling Theory". The same is true for my recent paper "The Optimal Sample Rate for Quality Audio". I encounter some who want to counter the message by "shooting the messenger". Meanwhile, the facts I present are correct and unchallenged. I realize that reading the papers demands time and concentration, so here is a shorter description of many of the points I presented in the papers. Let's refrain from diverting the conversation away from the topics.

1. Sampling is not intuitive. SAMPLING IS NOT ANALOGOUS TO PIXELS!
A more detailed picture may require more pixels, but more audio detail does NOT require more samples. There is an "electronic tool" (a filter) that enables recovering ALL of the audio from a limited number of samples. It is not intuitive and requires much study. In fact it is counter-intuitive and goes against "everyday common sense." This is the reason why the marketing of "more samples is better" is successful in convincing so many of the false notion.

2. The Nyquist theorem (a theorem is a PROVEN theory) tells us that recovering ALL the audio intact requires the sampling rate (frequency of sampling) to be at least twice as fast as the highest signal (audio) frequency. The theory demands a perfect "reconstruction tool" filter. In practice, real-world filters require sampling a little faster than twice the audio bandwidth. For a 20 kHz audio bandwidth, the theory requires at least a 40 kHz sample rate. The 44.1 kHz standard provides a 4.1 kHz margin. The margin for the filter (relative to the theoretical filter) is 100*(44.1kHz - 2*20kHz)/(2*20kHz) = 10.25%.

3. Some people argue that we need more than 20 kHz for audio. The decision as to how wide the audio range is should be left to the ears. Say we agree to accept 25 kHz as the audio bandwidth. When using 88.2 kHz sampling (and 25 kHz for the audio bandwidth), the margin is 100*(88.2kHz - 2*25kHz)/(2*25kHz) = 76.4%.

4. At 96 kHz sampling and 25 kHz audio, the margin is 92%. At 96 kHz sampling and 30 kHz audio, the margin is 60%. At 192 kHz sampling and 30 kHz audio, the margin is 220%! For anyone crazy enough to claim they hear or feel 40 kHz, when sampling at 192 kHz the margin is still 140%. At 384 kHz sampling the margin is 380%!

5. Some argue that at 44.1 kHz the margin of 10.25% is tight, and that real filters fail to provide a near-perfect reconstruction. Others argue that 20 kHz audio is too narrow to accommodate some ears. Such arguments support some reasonable increase in sampling rate.
Many argue that the 44.1 kHz rate is good enough. Others disagree. But few will argue with the statement that 44.1 kHz is at least pretty close to acceptable. In order to accommodate those who want improvements, let's increase the margin by a factor of, say, 2. You want more? OK, by a factor of 4. You want more audio bandwidth? OK, let's raise it by a factor of 5… And all of that is more than covered by the use of a 96 kHz sample rate!

6. A few manufacturers are starting to advocate 384 kHz and even 768 kHz sample rates. When audio sampled at 44.1 kHz is considered as being somewhere between "not perfect" and "near perfect," the notion of sampling 8.7 times faster than a CD (for 384 kHz) or even 17.4 times faster (for 768 kHz) makes no sense. I expect even the least competent of designers to be able to design a filter that does not require such huge margins. I would also expect any converter designer to have enough background to know that more samples are not analogous to more pixels! I would expect converter designers to insist that their marketing departments know that, instead of closing their eyes to the crock of steering audio in the wrong direction. I also understand it is not easy when one's job is on the line.

7. It is not wise to keep increasing the sample rate unnecessarily. The files keep growing, and faster sampling yields less accuracy. Yet the marketing of higher sample rates has no basis, other than some spreading of misinformation. The latest claim I saw is that faster sampling yields better stereo location (time resolution). The argument is false. Faster sampling offers the ability to process wider bandwidth, but has no impact whatsoever on stereo location!

8. Faster sampling to capture bandwidth that we do not hear (ultrasonic) is not wise. If we do not hear it (or feel it), we don't need it. If we do hear it (or feel it), it is not ultrasonic; it is audible bandwidth (by definition).
Ultrasonic energy may cause problems by spilling over into the audible range (intermodulation distortion). In the best case, ultrasonic energy adds nothing to audio while requiring faster sampling, thus larger files and slower file transfers. In reality there is another price to pay: the faster one samples, the less accurate the result. Dan Lavry
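[Editor's note: the filter-margin arithmetic in points 2 through 4 above is simple enough to check mechanically. A minimal Python sketch follows; the function name is ours, not from the post.]

```python
def filter_margin_percent(sample_rate_hz: float, audio_bw_hz: float) -> float:
    """Margin left for the reconstruction filter, as a percentage:
    how far the sample rate sits above the theoretical minimum of
    twice the audio bandwidth (the Nyquist rate)."""
    nyquist_rate = 2 * audio_bw_hz
    return 100 * (sample_rate_hz - nyquist_rate) / nyquist_rate

# Reproducing the figures from the post:
print(filter_margin_percent(44_100, 20_000))   # 10.25 (CD rate, 20 kHz audio)
print(filter_margin_percent(88_200, 25_000))   # 76.4
print(filter_margin_percent(96_000, 25_000))   # 92.0
print(filter_margin_percent(192_000, 30_000))  # 220.0
print(filter_margin_percent(192_000, 40_000))  # 140.0
```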
2. I would appreciate something to support your assertion that my "surrounding statements really hurt" my credibility.

1.) "Interested in the facts?" is not a statement; it is a question with the specific goal of generating interest in a rather "dry" subject that has important implications for anyone serious about digital audio. It is relevant to the subject of the paper because the vast majority of "rebuttals" to Dan Lavry's assertion that there is an optimal sample rate for high-quality audio are based on opinion or subjective "test" results. We are not afraid of facts, and would be interested in hearing from the "AES Fellows who contradict much of what Dan says" in their own words. This is not a "new subject," and in the years that have passed since the original Sampling Theory paper was published, no one has yet come forward with credible scientific evidence to the contrary.

2.) Regarding "If you post here on CA in an effort to educate there is no need to tout Dan as 'One of the world's top converter designers...' or to begin your post with 'Interested in the facts?'": One cannot really "educate" anyone else; one can only show them the way and hope they can educate themselves. I find it quite surprising that anyone associated with an online forum would take the perspective that people who are not familiar with a very narrow field of electronics design would also not be interested in this subject. For example, I typed "Optimal sample rate" into Google, and Computer Audiophile was third on the list of results; which is something anyone, anywhere in the world can do.
Personally, I believe that despite the fact that Dan Lavry is well known and respected in the professional audio industry, there are millions of people worldwide who are interested in the subject but are not aware of who Dan Lavry is, or of why his fact-based argument might be more credible than the opinions of people who lack anything even close to the depth of his understanding, or of those who have commercial interests in promoting lower-quality audio as "better." In a world where nothing less than "extreme" even registers with so many who are overwhelmed by the amount of information available to them (useful and otherwise), I felt that a mildly provocative subtitle would help in the effort to bring attention to the subject.

Here is what Dan Lavry had to say: "The industry is exposed to a well-financed campaign by large manufacturers trying to sell the false notion that faster sampling is better. There is a lot of advertising of higher sample rate conversion gear, aimed at benefiting the makers of such gear. A smaller converter manufacturer has a choice. One can join the high sample rate crowd (making high sample rate converters) while riding the advertising hype that is well financed by larger companies. The alternative is to stay true to quality audio. Lavry Engineering stands for quality audio, so we do what we can to steer the industry in the right direction in a manner that is transparent and does not benefit only our interests.

A few years back, I resisted the 192kHz sampling hype. That is when I wrote the paper "Sampling Theory" and refused to make higher sample rate gear. The hype died down and 44.1-96kHz became mainstream again in professional recording and mastering studios. A few years passed by and here we are again, this time with the pushing of 384kHz and even 768kHz. Again there is no credible engineering reason for it, and no supporting objective listening test results. We are trying to do our best to steer audio in the right direction.
I am sorry to see that you seem to be focused on the paper's introduction instead of the paper itself. I agree that the introduction was aimed at getting people interested in reading the paper. I think that the Lavrytech introduction was a drop in the ocean compared to the well-subsidized advertising hype for higher and higher sample rates for audio. I hope that people will concentrate more on the issue (the paper's content) and less on the packaging (the announcement)."
3. Interested in the facts? One of the world's top converter designers, Dan Lavry, has written a new paper in simple language to demystify the subject. http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf See why many professional engineers still work at 96kHz years after 192kHz became available. Find out why "more" is not always "better!"
4. Regarding: "One of [Lavry's] basic points, near the beginning, is that you don't get anywhere near a 24-bit word length due to inherent inaccuracies until you have a sample rate as low as 50-60 Hz. But several people here are totally ignoring this and talking about 24/192. So do you think he is just plain wrong on this?"

There is accurate information and inaccurate information. One can produce 24 bits of information using any number of means; the paper was addressing the issue of accuracy. Yes, it is possible to get good results recording audio at 192kHz; but if it were possible to use the exact same converter in a way that was optimized for 96kHz operation, it would yield more accurate audio information. Part of the problem with making comparisons between recordings made at 192 and 96 kHz is that an AD converter optimized to operate at 192 kHz will by definition have compromised operation when set to 96 kHz output. All contemporary multi-bit AD converters actually sample at frequencies much higher than the output frequency, and that internal rate is independent of the output sample frequency setting in cases such as 192 versus 96 versus 48 kHz.

Regarding: "The point is not that conversion at sample frequencies higher than 96kHz is 'not accurate enough for audio;' it is that conversion at sample frequencies higher than 96kHz will always be less accurate than conversion at 96 kHz (or lower) with the same technology. This applies only to multi-bit PCM…."

No, it does not. First of all, the term "multi-bit PCM" is confusing, as it conflates AD converter architecture (multi-bit versus single-bit, BOTH of which utilize sigma-delta conversion) with "PCM," which is an output format and can be produced from non-sigma-delta as well as sigma-delta AD converters. And, YES, there is a trade-off even with one-bit sigma-delta between bandwidth and accuracy in the audio band.
It is interesting that so many people think that a system with extremely high noise energy just beyond 20 kHz, requiring it to be limited to a bandwidth of ~20kHz, has "more accuracy" because of its very high sampling frequency than a 96kHz multi-bit system with a bandwidth TWICE that of DSD. For those interested in his opinion on DSD, there are a number of posts on the Lavry Forum regarding this matter. Here are two examples:

http://www.lavryengineering.com/lavry_forum/viewtopic.php?f=1&t=916&hilit=DSD
http://www.lavryengineering.com/lavry_forum/viewtopic.php?f=1&t=610&hilit=DSD

Dan Lavry has spent hours in this and other forums responding to individuals who make assertions without any solid scientific basis. He did feel that he would like to make one final response.

Dan Lavry's response: You seem to have dismissed what I said about the speed-accuracy tradeoff altogether, and you counter it with what? That 24MHz one-bit is "good"? Sigma-delta, as well as most modern multi-bit converters, does utilize very high speed in the front-end circuitry. So why not claim that "PCM" has a sample frequency of 24MHz? Because that does not represent the audio sample rate. It is the modulator rate! Conversion is much more than how fast one clocks a modulator.

The concepts of DSD and multi-bit sigma-delta are both based on noise shaping. With a given technology (the basic parameters are modulator clock speed, which you are confusing with converter sample rate, the number of modulator bits, and the loop filter order), one can have a much better result when aiming at the frequency band that the ear hears. When you accommodate, say, 90kHz of usable signal range, you get a lot of range that is not usable by the human ear, and for that you pay a price. It is better to accommodate the usable range. You can take the same basic resources (modulator clock speed, modulator bits and filter order) and design a converter for some industrial use requiring 1MHz of usable signal bandwidth.
It will not have anywhere near the accuracy of a converter aimed at a 50kHz usable signal range. Here is an analogy: a worker can dig 10 cubic feet of sand (this represents some given technology). You can tell the worker to dig a trench 10 feet long, 1 foot wide, and 1 foot deep. Or you can choose to dig a hole 10 feet deep with a 1 square foot area. You have to decide what to do. Deeper is better (audio quality), but the application requires some minimum area (covering the audible range).

So here I have shown you how higher speed (more signal bandwidth) costs accuracy right up front, at the block diagram stage of design. That is BEFORE I even touch on the real limitations of the analog tradeoffs between speed and accuracy, including the sample-and-hold and OP-AMP examples in my previous response. I don't see why it is so difficult to grasp the existence of a tradeoff between speed and accuracy. I can think of many real-life "cases" where such a tradeoff exists. However, I am not making universal statements about life in general. I am restricting my comments to what I know as a professional with 4 decades of hands-on design experience. Anyone saying that there is no compromise between speed and accuracy does not know electronic circuits.

Diverting the conversation into other aspects to avoid reality issues at the most fundamental level is a disservice to those seeking the truth. I have encountered too much stuff like that already in discussions on the internet. Some talked about the advantage of a narrow impulse, ignoring (or being ignorant of) the fact that impulse width is THE SAME THING as signal bandwidth. Others talked about "more samples is better," failing to understand a basic theorem (not a theory; a theorem is PROVEN) called the Nyquist Theorem, one of the most fundamental cornerstones of technology and engineering. Others claimed that the ear hears way up there, into the range of 100kHz…

In "Ignoring vs. Paying Attention": "…Since SACDs and DSD files seem to exist; many people who have an interest in good audio seem to like them; and knowledgeable programmers/designers don't seem to have any conceptual problem with how SACD/DSD recordings work; then my conclusion is Lavry's eight references to problems with 'accuracy' are hogwash…"

I wrote my paper Sampling Theory to dispel the "baloney." I tried my best to keep it simple, and I know it is not easy reading for a novice. I feel that I have done my part, and I cannot reply to every comment on the web, especially when so much of it is based on misinformation. And I do not appreciate the labeling of the knowledge I have chosen to share with others, gained from my 40-plus years of work and experience, as "hogwash." End of response.
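[Editor's note: Dan Lavry's "dig deeper vs. dig wider" analogy has a textbook counterpart. For an ideal order-L noise-shaping modulator with a fixed clock, in-band quantization noise power scales roughly as OSR^-(2L+1), where OSR is the oversampling ratio. The sketch below uses that standard scaling law; the function name and the example numbers are illustrative, not from the post.]

```python
import math

def inband_noise_penalty_db(order: int, bw_factor: float) -> float:
    """Rise in in-band quantization noise (dB) when the usable bandwidth
    of an ideal order-L noise-shaping modulator is widened by bw_factor
    while the modulator clock stays fixed.  Based on the standard result
    that in-band noise power scales as OSR**-(2L+1), where OSR is the
    oversampling ratio (modulator clock / twice the usable bandwidth)."""
    return (2 * order + 1) * 10 * math.log10(bw_factor)

# Fixed clock, 3rd-order loop: doubling the usable band (e.g. 50 kHz -> 100 kHz)
penalty_db = inband_noise_penalty_db(order=3, bw_factor=2.0)
print(round(penalty_db, 1))         # ~21.1 dB more in-band noise
print(round(penalty_db / 6.02, 1))  # ~3.5 bits of accuracy given up
```

With the same "resources" (clock, bits, loop order), every doubling of the usable band costs in-band accuracy; digging wider means digging shallower.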
5. In response to some of the statements in Jud's most recent post, Dan Lavry has asked me to publish his response:

Dan Lavry's response: The Sampling Theory paper does NOT suggest that there is any "permanent" bit depth limitation or any sample rate limitation. When I wrote the paper, I used the then-present-day technology (8 bits at 100MHz or 16 bits at 1MHz) as a tool to point out that as the speed increases, the accuracy (thus the bit depth) decreases. Years ago, getting 8 bits at 1MHz was beyond the state of the art. It would be ridiculous for me to assume that we will (or will not) have 8-bit technology at, say, 1GHz or 10GHz at some future time… However, at any given time, when one looks at conversion speed and accuracy, one finds that the slower the conversion, the more accurate it is. That was true 40 years ago when I was a young design engineer, and it will be true for as long as the basic principles that govern analog design hold true.

The point is that to do an optimal job, one cannot sample too slowly (you need to cover the audio bandwidth in the case of sound), and one cannot sample too fast (you lose accuracy). No one suggests sampling audio at 1Hz. No one suggests sampling audio at 1GHz. So there is an optimal rate! But where is it? First we need to accept the fact that there is some optimal rate. Those who advocate that faster is automatically better are not even accepting that fact!

The optimal rate depends on the application. Video calls for more bandwidth, so we must sample faster; but video conversion is less accurate. Audio needs to accommodate the ear, which does not need video speeds yet is more sensitive in terms of accuracy. The ear does not hear 80kHz; thus sampling too fast reduces accuracy while gaining nothing for it. Think of a camera that can include invisible light at the cost of degradation to the visible spectrum.
If one desires to confirm this relationship, feel free to go to the website of any manufacturer that makes a wide array of conversion products (such as Analog Devices, TI, and more). Check the selection guides of today; check the data from 10, 20, or 30 years ago… You will see that speed always costs accuracy, and accurate conversion demands slower speeds. Again, my examples of 8 bits at 100 MHz and 16 bits at 1MHz were there to show the RELATIVE accuracy as it relates to speed. I never stated that a permanent bit depth limit exists; I used (then) contemporary data to demonstrate a point: that speed compromises accuracy and increased accuracy demands lower speed. Technology improves over time, and still, faster will remain a tradeoff against accuracy, as it always has been. Analog designers will understand that statement very well.

Say one wishes to "take a sample," and to do so, you need to charge a capacitor. The charging curve is an exponential one; the longer you wait, the closer you get to the actual value of the sampled input. If one reduces the capacitance to speed things up, you pay a price:

1. A smaller capacitor does not hold the charge as well. It will partially discharge before the AD conversion can complete, resulting in a lower sample value.
2. A larger capacitor reduces switching transients; switching transients introduce other inaccuracies.
3. Relatively small capacitors (such as those found in sigma-delta switched-capacitor networks) generate more noise, which is the major limitation of that technology today. No analog designer can dispute that!

Another example is an OP-AMP. Converters use OP-AMPs that operate at very high speeds to handle the required fast voltage (or current) "steps." One can look at the settling time of such circuits, and again, the longer you wait after the voltage step occurs, the closer the output of the OP-AMP is to the ideal final value (thus more accurate).
The vocabulary is "settling time" (such as the settling time to reach less than 1% error). If you "look" at the OP-AMP's output voltage too soon after the step occurs, the result of the conversion is less accurate. There are numerous other examples of these tradeoffs, all based on basic electrical principles of physics. This is not the place to lecture about analog design; I was probably too detailed as is. So the assertion that I said there is a "permanent bit depth limit" is not true. Dan Lavry

End of response

We do appreciate the idea that Dan Lavry is cited as an authority on digital audio conversion; however, we would ask that he not be misquoted by any means, including taking parts of his paper out of context or making claims that he "says" something in words other than his own. The point is not that conversion at sample frequencies higher than 96kHz is "not accurate enough for audio;" it is that conversion at sample frequencies higher than 96kHz will always be less accurate than conversion at 96 kHz (or lower) with the same technology. As with many other things, there is a point of diminishing returns; and thus there is an upper limit on sample frequency to achieve the most accurate conversion of audio.

Brad Johnson
Lavry Engineering Technical Support
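[Editor's note: the exponential-settling argument above can be made concrete. If the sample-and-hold error decays as exp(-t/RC), then reaching an error below 1 LSB at a given bit depth requires waiting bits x ln 2 time constants, so each extra bit of accuracy costs more settling time. A small illustrative sketch, with our own framing rather than numbers from the post:]

```python
import math

def settling_time_constants(bits: int) -> float:
    """Number of RC time constants a sample-and-hold must wait for the
    residual charging error exp(-t/RC) to fall below 1 LSB at the given
    bit depth: exp(-t/RC) < 2**-bits  =>  t/RC > bits * ln(2)."""
    return bits * math.log(2)

for bits in (8, 16, 24):
    print(bits, round(settling_time_constants(bits), 1))
# 8 bits  -> ~5.5 time constants
# 16 bits -> ~11.1
# 24 bits -> ~16.6
```

Halving the time available per sample (doubling the rate) with the same RC therefore directly forfeits bits of settled accuracy, which is the speed-accuracy tradeoff the response describes.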