John Dyson Posted February 1, 2020 Share Posted February 1, 2020 5 minutes ago, pkane2001 said: What do you use for FFTs? FFTW is pretty amazing at running large transforms and convolutions in 64 bits, but only if you're willing to abide by the GPL license. I have watched FFTW since it came out back in the '90s, and really like the completeness & apparent quality of their work. I haven't needed an FFT yet on this project, and a previous project just needed something simple -- absolute and complete accuracy/production quality wasn't a goal at the time. I just 'grabbed' a freely usable FFT and used it. (Investigated dynamic range compression/expansion in the Fourier domain. Results worked, but were inconclusive.) Off topic: I almost considered using some FFTs in the DHNRDS DA instead of the other filtering techniques, but *at the time* I felt that I would be delving into another science-project realm. (The DA was already a science project by itself.) Using an FFT for the input bandpass filters would be very interesting. The needed filters are very smooth (Q=0.470 and Q=0.450), with smooth phase, and right now I have an evil brute-force FIR implementation, because IIR filters can't match the requirements, owing to the subtle differences between the DSP world and the analog world. The input filters cost a big part of the CPU for non-MD reduction decodes (like 30% of a Haswell core at 96k -- WAY too much). The important thing is that they must be sample-accurate in timing. N log N would be a speed improvement over what I did (too ugly to admit to.) Link to comment
pkane2001 Posted February 1, 2020 Share Posted February 1, 2020 1 minute ago, John Dyson said: Using an FFT for the input bandpass filters would be very interesting. [...] N log N would be a speed improvement over what I did (too ugly to admit to.) Applying large filters with lots of non-zero coefficients would be a lot faster using a well-optimized FFT library. mansr 1 -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer Link to comment
John Dyson Posted February 1, 2020 Share Posted February 1, 2020 1 minute ago, pkane2001 said: Applying large filters with lots of non-zero coefficients would be a lot faster using a well-optimized FFT library. Off topic: I was possibly planning to use the FFT in a case where I am using LOTS of FIR filters. I did a very ugly, brute-force emulation of the approx. 9 kHz and 3 kHz input filters as implemented by the analog Q=0.450 and Q=0.470 filters. An FFT would be an improvement over the travesty that I hacked under pressure. It is the most embarrassing part of the decoder, but it works so well that I haven't wanted to touch it. (If the frequencies or equivalent Qs are off much at all, then the sound isn't right for one reason or another.) John Link to comment
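To make the speed argument concrete, here is a minimal sketch (Python/scipy, not John's actual decoder code; the tap count and band edges are made-up placeholders) of swapping a brute-force time-domain FIR for FFT-based convolution. The output is mathematically the same; only the cost drops from roughly N*M multiply-adds to N log N.

```python
import numpy as np
from scipy import signal

fs = 96_000
x = np.random.randn(2 * fs)                  # 2 s of test signal at 96 kHz
# Hypothetical long linear-phase band-pass FIR (placeholder, not the DHNRDS filters)
taps = signal.firwin(4001, [200, 9000], pass_zero=False, fs=fs)

y_direct = np.convolve(x, taps)              # brute force: O(N*M) multiply-adds
y_fft = signal.fftconvolve(x, taps)          # FFT-based: O(N log N), same result

print(np.max(np.abs(y_direct - y_fft)))      # tiny (~1e-12), i.e. equal to within round-off
```

For streaming use, an overlap-add variant of the same idea (scipy.signal.oaconvolve, for example) processes the signal in blocks instead of transforming the whole file at once.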
audiobomber Posted February 1, 2020 Share Posted February 1, 2020 6 hours ago, mansr said: What has that got to do with anything? It means that some designers disagree with you. I'm well beyond believing that just because someone is a math-head they know all there is to know about great sound. sandyk 1 Main System: QNAP TS-451+ NAS > Silent Angel Bonn N8 > Sonore opticalModule Deluxe v2 > Corning SMF with Finisar FTLF1318P3BTL SFPs > Uptone EtherREGEN > exaSound PlayPoint and e32 Mk-II DAC > Meitner MTR-101 Plus monoblocks > Bamberg S5-MTM sealed standmount speakers. Crown XLi 1500 powering AV123 Rocket UFW10 stereo subwoofers Upgraded power on all switches, renderer and DAC. Link to comment
mansr Posted February 1, 2020 Share Posted February 1, 2020 Just now, audiobomber said: It means that some designers disagree with you. I'm well beyond believing that just because someone is a math-head they know all there is to know about great sound. Clocking of DAC chips has nothing to do with the question at hand. esldude 1 Link to comment
Ralf11 Posted February 1, 2020 Share Posted February 1, 2020 2 minutes ago, audiobomber said: just because someone is a math-head they know all there is to know about great sound. yes, you need some engineering chops too. He does have that, so try not to get snarky - it just makes you look like an audiobummer. Why not ask a question instead? Here, I'll show you how: mansr, why would a designer use two different clocks for different clock rates, instead of doing the math (to adequate precision)? Is it just for marketing, or is there a functional reason? Link to comment
audiobomber Posted February 1, 2020 Share Posted February 1, 2020 3 minutes ago, Ralf11 said: why not ask a question instead? I tend to respond according to the tone of the post I'm responding to. daverich4 1 Main System: QNAP TS-451+ NAS > Silent Angel Bonn N8 > Sonore opticalModule Deluxe v2 > Corning SMF with Finisar FTLF1318P3BTL SFPs > Uptone EtherREGEN > exaSound PlayPoint and e32 Mk-II DAC > Meitner MTR-101 Plus monoblocks > Bamberg S5-MTM sealed standmount speakers. Crown XLi 1500 powering AV123 Rocket UFW10 stereo subwoofers Upgraded power on all switches, renderer and DAC. Link to comment
Ralf11 Posted February 1, 2020 Share Posted February 1, 2020 1 minute ago, audiobomber said: I tend to respond according to the tone of the post I'm responding to. then it is best to converse with Pacific Islanders, and NZers (kiwis), Mini-sotans, etc. not Aussies, SAers, Israelis, or Texans esldude and daverich4 1 1 Link to comment
Popular Post mansr Posted February 1, 2020 Popular Post Share Posted February 1, 2020 2 minutes ago, Ralf11 said: mansr, why would a designer use two different clocks for different clock rates, instead of doing the math (to adequate precision)? is it just for marketing or is there a functional reason?

I think we can all agree that an oversampling sigma-delta DAC needs a clock to work at all. If this clock is synchronous with the audio data, the design of the DAC chip is simplified since everything can run in lockstep, with a constant number of cycles per sample in which to do computations. For this reason, most chips require a clock (variously designated the system or master clock) at a simple multiple of the sample rate, typically 128, 256, or 512. If a chip supports these multiples, a constant 24.576 MHz clock can be used for sample rates of 48 kHz, 96 kHz, and 192 kHz, while the 44.1 kHz rate family can be handled with a 22.5792 MHz clock. For best jitter performance, designers often use two crystal oscillators at these frequencies and enable one or the other depending on the current sample rate.

To support audio sample rates unrelated to the system clock frequency, an asynchronous sample rate converter (ASRC) is required. This device compares the incoming sample rate to the system clock rate and periodically adjusts the parameters of an interpolation filter to match. The output is a data stream with a sample rate synchronised to the system clock. The rest of the chip works as usual. Downsides of this approach include greater design effort, larger chip area, increased power consumption, and more electrical noise. To work well, a substantially faster system clock is also needed, and this too can add to the challenges. Nonetheless, this is how ESS DACs work, so clearly these issues can be overcome. Then again, those chips are not cheap.

The constraints and trade-offs involved in DAC chip design are almost entirely unrelated to software resampling between two known rates. The maths behind the interpolation filters is of course the same, but that's where the similarities end. Sonicularity, Don Hills and Ralf11 2 1 Link to comment
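For anyone who wants to verify the clock arithmetic above, this small sketch (plain Python, nothing DAC-specific) checks which of the two crystals mentioned divides evenly into a 128x/256x/512x master clock for each sample-rate family:

```python
# Which crystal gives an integer master-clock multiple for each sample rate?
CRYSTALS = {"22.5792 MHz": 22_579_200, "24.576 MHz": 24_576_000}
RATES = [44_100, 88_200, 176_400, 48_000, 96_000, 192_000]

for fs in RATES:
    for name, xtal in CRYSTALS.items():
        if xtal % fs == 0:
            print(f"{fs:>6} Hz: {name} = {xtal // fs}x master clock")
```

Each rate lands on exactly one of the two crystals (at 512x, 256x, or 128x), which is why dual-oscillator designs simply switch between them depending on the incoming rate.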
fas42 Posted February 1, 2020 Share Posted February 1, 2020 1 hour ago, audiobomber said: It means that some designers disagree with you. I'm well beyond believing that just because someone is a math-head they know all there is to know about great sound. Great sound comes about because one worries about the "little things" - heavy duty, excruciatingly precise maths manipulation is only useful if one "likes precision", or if one wishes to try and analyse precisely what is going on, at some arbitrarily low level within a signal. It would be relatively easy to 'wreck' "incredibly precise" audio, by introducing some subtle interference mechanism - causing people to run from the room, with their hands over their ears. All the maths in the world can't save this sound - it needs an ability to see "the bigger picture"; to understand the full set of factors that may be impacting what you hear. Link to comment
sandyk Posted February 1, 2020 Share Posted February 1, 2020 On 1/30/2020 at 10:44 PM, marce said: Will try the files tonight when I find my headphones. And the results were? Are you able to describe any differences that you heard? How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file. PROFILE UPDATED 13-11-2020 Link to comment
lucretius Posted February 2, 2020 Share Posted February 2, 2020 6 hours ago, mansr said: [full explanation of DAC clocking and ASRCs quoted above] Do I understand correctly? (1) All this talk about floating point math relates only to the asynchronous sample rate converter? (2) A DAC with only a 22.5792 MHz clock uses an asynchronous sample rate converter only for the 48K family of sample rates? (3) If an asynchronous sample rate converter is used, extra effort is required to reduce jitter? Thanks. mQa is dead! Link to comment
lucretius Posted February 2, 2020 Share Posted February 2, 2020 14 hours ago, mansr said: To work well, a substantially faster system clock is also needed, and this too can add to the challenges. Nonetheless, this is how ESS DACs work, so clearly these issues can be overcome This would explain the 100 MHz clock in the Brooklyn DAC+. Since this clock is not a simple multiple of any sample rate, does that mean an asynchronous sample rate converter is required for all (i.e. both 44.1K and 48K families) sample rate conversions? mQa is dead! Link to comment
John Dyson Posted February 2, 2020 Share Posted February 2, 2020 8 hours ago, lucretius said: Do I understand correctly? (1) All this talk about floating point math relates only to the asynchronous sample rate converter? (2) A DAC with only a 22.5792 MHz clock uses an asynchronous sample rate converter only for the 48K family of sample rates? (3) If an asynchronous sample rate converter is used, extra effort is required to reduce jitter? Thanks.

Most people reading this already know: Floating point math is a useful option for the internal calculations in audio software, especially beneficial if the calculations are complicated. For precision, 32-bit floating point effectively has similar digits of precision to 24-bit integer, and the intermediate calculations are safer from truncation or overflow errors. Subtle math errors can be semi-automatically handled with CPU floating point techniques; the CPU exception conditions are often disabled for audio software so that they are mostly handled automatically. 32-bit floating point data is often easier to program with than 32-bit integer data, and floating point doesn't have overflow problems. The amount of precision in 32-bit integer is often better than 32-bit floating point, but generally floating point has fewer gotchas. Any integer format can have more scaling issues, overflow problems, and variable relative precision.

If the CPU/memory resources are sufficient, then the best common internal format is 64-bit floating point. It has more resolution than any other common format, and has more dynamic range than 32-bit floating point, which already has much more than enough. For file interchange, in order of quality, best first: 64-bit FP, 32-bit FP, 32-bit INT, 24-bit INT, 16-bit INT. Good quality consumer material is almost always based on 24-bit INT and 16-bit INT, but the floating point formats are better for pro applications because interim recordings don't require finalization or normalization.

All of these matters are independent of the DAC, other than the last conversion to integer. On a real-world DAC there are no precision issues when converting to 32-bit integer when the math is done in 64-bit floating point, and there are no practical precision issues when converting from 32-bit floating point. Internal calculations being done in floating point is more common today because CPUs are now very good at floating point and SIMD FP math. SIMD FP math is a blessing for audio processing. Again, FP math should normally be independent of the DAC. Last I read (and I am >10 yrs out of date on DACs), DACs still use an integer interface, and I wouldn't expect that to change. John lucretius 1 Link to comment
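A quick way to see the "32-bit float is roughly equivalent to 24-bit integer" point is that IEEE-754 single precision carries a 24-bit significand. The sketch below (numpy, purely illustrative) shows a full-scale 24-bit sample surviving the float32 round trip exactly, while a near-full-scale 32-bit integer does not:

```python
import numpy as np

print(np.finfo(np.float32).nmant + 1)      # 24 effective significand bits in float32

x24 = np.int32(2**23 - 1)                   # largest positive 24-bit sample
print(np.int32(np.float32(x24)) == x24)     # True: exactly representable in float32

x32 = np.int32(2**31 - 101)                 # an arbitrary near-full-scale 32-bit value
print(np.int32(np.float32(x32)) == x32)     # False: float32 cannot hold 31 bits, so it rounds
```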
Popular Post mansr Posted February 2, 2020 Popular Post Share Posted February 2, 2020 8 hours ago, lucretius said: Do I understand correctly? (1) All this talk about floating point math relates only to the asynchronous sample rate converter?

I think we need to take a step back. We've been discussing three different situations.

Firstly, there is resampling to an integer multiple of the input rate. This is the simplest case. Doubling (or tripling, etc.) the sample rate can be done by simply inserting one (or two, etc.) zero samples after each input sample, then applying a low-pass filter with a cut-off at the Nyquist frequency of the input (half the input sample rate). As we know, applying a filter means convolving the signal with the impulse response of the filter. Since we've inserted a bunch of zeros into the signal, we know that many of the multiplications involved in the convolution will give a zero result, so we can simplify the calculations by skipping those entirely. We can also skip the step of actually placing zeros into the signal and directly do the multiplications that (might) give a non-zero result. On a rate doubling, half the output samples are coincident in time with the input samples while the other half are positioned midway between two input samples. The computations for the former of these involve half the values in the filter impulse response (let's say the even-numbered ones), and for the latter the other half of the impulse response (the odd-numbered values) are used.

Secondly, we discussed resampling to a non-integer multiple of the input rate. Conceptually, this can be achieved by zero-stuffing the input to yield a sample rate equal to the lowest common multiple of the two rates, low-pass filtering this, and finally discarding samples to leave the desired target rate. For example, to produce 1.5 (3/2) times the input rate, we would first triple the rate by inserting two zeros after each sample and low-pass filtering as discussed above. Then we'd simply discard half of those samples (which is fine since the signal is already properly band-limited), thus halving the sample rate to the desired 1.5x multiple of the input. That's the long way around, and as before, there are some shortcuts to be made. Multiplying by zero is silly, so that can be skipped. It is likewise silly to actually calculate the values of the samples that are then immediately discarded. After these simplifications, we notice that the output samples can be divided into three sets: those coincident with input samples, those positioned one third of the way between input samples, and those at the two-thirds point. As in the rate-doubling case, each of these sets involves a separate subset of values from the filter impulse response. This is what the term polyphase refers to. For a conversion from 44.1 kHz to 96 kHz, the ratio reduces to 320/147, so the impulse response is split into 320 parts or phases. Compared to doubling the rate, we need to store 160 times as many filter coefficients, which may be an issue for a small microcontroller or DAC chip. The computational effort per output sample is, however, the same. For software running on a PC this extra memory requirement is of no consequence.

Thirdly, we have the asynchronous sample rate converter. This is used to convert an input with an unknown or variable sample rate to a (typically higher) fixed rate. Two parts are involved here. First, a digital PLL determines the input rate compared to the chosen output rate. Second, that ratio is used to configure a polyphase resampler as discussed above. The input rate is monitored continuously, and if it drifts, the resampler is adjusted accordingly. An ASRC is typically used only for on-the-fly conversions. In offline processing, the source rate is known (or can be determined), so a fixed-ratio converter is all one needs.

In all the cases above, arithmetic must be performed with sufficient precision that the accumulated error ending up in each output sample is smaller than one LSB of the output format (roughly). If the input and output are both 24-bit integer, the intermediate format used for the convolution must have somewhat higher precision. On a PC, it is often easiest to simply use 64-bit floating point, which has all the precision required, though it is possible to screw things up and amplify small errors into large ones. In a constrained environment, a likelier choice is something like 48-bit fixed point. Fixed point requires a little more design effort to ensure everything stays in range, but once done it is more efficient in terms of silicon utilisation.

8 hours ago, lucretius said: (2) A DAC with only a 22.5792 MHz clock uses an asynchronous sample rate converter only for the 48K family of sample rates?

While such a design is possible, it is definitely unusual. A single-clock design typically uses an ASRC to convert all inputs to a much higher fixed rate. Benchmark DACs, for example, work this way.

8 hours ago, lucretius said: (3) If an asynchronous sample rate converter is used, extra effort is required to reduce jitter?

Actually, an ASRC is a common tool for jitter reduction. Simply put, jitter can be dealt with in two ways: by adjusting the local clock to match the data, or by adjusting (resampling) the data to match the local clock. Since the ASRC uses a digital PLL, it can be made with a very low corner frequency, down to a few Hz, whereas analogue PLL/VCO designs tend to have a much higher corner frequency in order not to lose lock. The best analogue jitter cleaners use a cascade of two or more PLLs, each stage lowering the corner frequency. Needless to say, that can get expensive. That's not to say the ASRC is without issues of its own. The appropriate choice depends on many factors, and neither method can be universally declared superior. pkane2001, lucretius and Sonicularity 2 1 Link to comment
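To make the fixed-ratio (non-ASRC) case concrete, here is a sketch using scipy's stock polyphase resampler rather than any particular chip's implementation: 96000/44100 reduces to 320/147, so the converter conceptually upsamples by 320, low-pass filters, and keeps every 147th sample, with the polyphase structure computing only the samples that are actually kept.

```python
import numpy as np
from math import gcd
from scipy import signal

fs_in, fs_out = 44_100, 96_000
g = gcd(fs_out, fs_in)
up, down = fs_out // g, fs_in // g           # 320, 147

t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)             # 1 s of a 1 kHz tone at 44.1 kHz

y = signal.resample_poly(x, up, down)        # polyphase: only the surviving phases are computed
print(up, down, len(x), len(y))              # 320 147 44100 96000
```

This is the fixed-ratio converter described above; an ASRC adds the missing piece by tracking the incoming rate with a digital PLL and updating the ratio on the fly instead of using a fixed 320/147.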
lucretius Posted February 2, 2020 Share Posted February 2, 2020 3 hours ago, mansr said: [full explanation of integer-ratio, fractional, and asynchronous resampling quoted above] Thank you. This is well written and informative; I was able to follow along but I will need a little time to fully digest (i.e. to connect all the dots together in my mind). mQa is dead! Link to comment
lucretius Posted February 2, 2020 Share Posted February 2, 2020 5 hours ago, lucretius said: This would explain the 100MHz clock in the Brooklyn DAC+. Since this clock is not at a simple multiple of any sample rate, then does that mean an asynchronous sample rate converter is required for all (i.e. both 44.1K and 48K families) sample rate conversions? Already answered by @mansr in another post: "A single-clock design typically uses an ASRC to convert all inputs to a much higher fixed rate." mQa is dead! Link to comment
fas42 Posted February 2, 2020 Share Posted February 2, 2020 I have found the usual trade-off in audio is that complexity is introduced to 'solve' some technical imperfection, but then the extra circuitry introduces other anomalies, by its very presence and operation. A perfect mechanical analogy is the attempts to solve tracking error in turntables by having extra stuff to move the pivot point, or to swivel the cartridge - these have the downside that unless they are engineered to the highest levels they only cause more problems - and they tend to disappear from the scene ... An example, when listening to a, umm, rig further away - an upmarket CDP, with multiple options for changing the filtering of the DAC output - "Which sounds best?" queries the owner, as he flicks through the numerous settings ... "Err, is there a way of switching off, bypassing this extra processing?" ... and indeed there was. Of course, this produced the best SQ - the 'musicality' jumped up quite substantially, and the greyness permeating the sound was now significantly reduced. Link to comment
Popular Post Ajax Posted July 6, 2020 Author Popular Post Share Posted July 6, 2020 Hi Everyone, Results from the listening tests below. However, I don't anticipate it will change many people's thinking, as time and time again human beings will pick the emotional outcome, or what feels good, over what is rational. What is important for the subjectivists amongst us is that Mark Waldrep (Dr. AIX) was originally of the view that hi-res was necessary, i.e. if anything his bias was towards hi-res, not what his findings show. (I'm talking about playback here, not the extra headroom 24 bit affords while recording.) From his email below:

"I was not alone in dismissing the Meyer and Moran study. In 2007, I was convinced that high-resolution recording — real HD-Audio — would be perceptible. I recognized the shortcomings of their research and have written extensively about the importance of “provenance.” If the original master of an album or track was produced prior to the introduction of high-resolution recording equipment, then it is impossible for that album or track to be considered “hi-res audio” in spite of the best marketing efforts of the labels and others. So after carrying out my own research project, I am forced to agree with the conclusion of the Meyer and Moran research."

Researching HD-Audio: The Truth Dr. AIX It's finished. The HD-Audio Challenge II, my sabbatical research project from last fall, has run its course. It's time to start presenting the data and the analysis associated with almost 500 responses. Among those that submitted their results were audiophiles, casual listeners — both young and old, as well as a few professional audio engineers. And while the age of the participants skewed higher than desired and the group was predominantly male, the truth is that audiophiles tend to be older men. We're the target group that is supposed to care about fidelity. But with Amazon Music HD and other "so-called Hi-Res Audio" streaming and download sites marketing to ALL music listeners, HD-Audio is trying to be ubiquitous. But does HD-Audio really sound better, or is it merely a sales gimmick? That's what the study was supposed to help determine.

Here's the question: Would average music listeners be able to pick out a hi-res audio track over a Red Book standard CD version of the same master recordings using their own playback systems? Paul MacGowan of PS Audio said, "Oh God yeah" in one of his videos. My research survey, conducted over these last 8 months, arrives at a different conclusion. Hi-Res Audio or HD-Audio provides no perceptible fidelity improvement over a standard-resolution CD or file. CD-spec and hi-res audio versions sound identical to the vast majority of listeners through systems of all kinds. I'll present the track-by-track breakdown over the next few articles, but the responses present a picture that is undeniable. In fact, over 25% of the listeners that submitted their results indicated "No Choice" when asked to pick the hi-res track. People were honest and acknowledged that they could not tell the two different versions apart. And those that made a selection admitted that it "was virtually impossible" to detect any differences or "they were essentially guessing" which was which. Hi-Res Audio or HD-Audio provides no perceptible fidelity improvement over a standard-resolution CD or file. Outcome of the HD-Audio Challenge II - Mark Waldrep So it's time to face the hard facts IMHO.
Hi-Res Audio or HD-Audio, the much-touted next generation in music fidelity, should NOT be a major determining factor when selecting which music to enjoy. As I've often stated in these articles, it is the production path that establishes the fidelity of the final master: things like how a track was recorded, what processing was applied during recording and mixing, and how the tracks were ultimately mastered. If all of these things are done with maximizing fidelity as the primary goal, a great track will result. However, it's very easy to destroy fidelity at any number of steps in the process.

Since high-resolution digital methods for recording and reproducing audio emerged in the 1980s and practical distribution formats launched in the late 1990s, research has been conducted to determine whether and why “hi-res audio” is better than existing delivery standards. One of the most well-known among them was the 2007 AES paper authored by Meyer and Moran titled “Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback”, which concluded that “… test results show that the CD-quality A/D/A loop was undetectable at normal-to-loud listening levels, by any of the subjects, on any of the playback systems.” Basically, what the researchers did was play a commercially distributed “hi-res audio” SACD (one was a DVD-Audio disc) directly through a very good stereo playback system and then through an A/D/A conversion chain running at Red Book specifications, 44.1 kHz/16 bits. None of the listeners, which included “professional recording engineers, students in a university recording program, and dedicated audiophiles,” could perceive any differences. Sounds pretty convincing, right?

When I first encountered the Meyer and Moran study in the AES Journal, I faulted their process and paid only cursory attention to the conclusion. I believed that it was critically important to point out the fact that the researchers did not verify that the recordings they played during their study exceeded the fidelity of a compact disc! They assumed that the “hi-res audio” SACD albums being released by the record labels possessed greater fidelity than the previous CD versions. But they didn't. They couldn't, since they were made using analog tape technology. So how is anyone supposed to hear a difference if both versions are identical? When released on the new SACD format, the fidelity of the mostly analog-based tracks — analog provenance — was not even up to Red Book standards.

I was not alone in dismissing the Meyer and Moran study. In 2007, I was convinced that high-resolution recording — real HD-Audio — would be perceptible. I recognized the shortcomings of their research and have written extensively about the importance of “provenance.” If the original master of an album or track was produced prior to the introduction of high-resolution recording equipment, then it is impossible for that album or track to be considered “hi-res audio” in spite of the best marketing efforts of the labels and others. So after carrying out my own research project, I am forced to agree with the conclusion of the Meyer and Moran research. I'm sure that I will become the target of similar criticism. Someone will insist that my files weren't typical, weren't properly processed from 96 to 44.1 kHz, or that participants could have cheated when listening or submitting their results.

Additionally, the guru behind MQA — Robert Stuart — wrote in another AES paper, “... there exist audible signals that cannot be encoded transparently by a standard CD; and second, an audio chain used for such experiments must be capable of high-fidelity reproduction.” His position is untenable if the results of my survey are true. If real-world audiophiles cannot hear a difference, then there is no audible difference.

More to follow, but I'll leave you with another short MQA-related item. In a previous article, I explained yet again why MQA is a hoax and not worthy of support by listeners and equipment manufacturers. In fact, like hi-res audio, it is a marketing ploy designed to enrich its stakeholders. You can read the article by clicking here. I received an email from a reader noting that an entire section of my piece was lifted by a member of a FB group and posted as a comment on their group without attribution of any kind. They omitted my name and there was no link to the original article. It did raise a great deal of controversy and there were lots of comments. MQA is a hot topic. In order to read the post, I had to request and receive permission from the administrators to join the group, which was granted. So I wrote to the administrator:

"This is Mark Waldrep. I was informed by one of my blog readers that one of my recent posts about MQA was lifted wholesale from my blog and posted on your FB group...without attribution or a link back to my site. I find this somewhat disturbing and hope that you can ensure that your members respect the work of others. I am happy to contribute to your discussions and even allow quotes from my blog, but this was excessive. Regards, Mark"

I received the following response from ORCHUN CAGLIAN, one of the group's administrators:

"I'm removing you from the group I also removed the post and the member too Dont contact me with an attitude again I dont want people like you in my groups." (This is copied from his response ... the lack of punctuation is his.)

Frankly, I was surprised at the response. It confirmed to me that some FB admins aren't worth supporting. JediJoker, Summit, Teresa and 1 other 1 3 LOUNGE: Mac Mini - Audirvana - Devialet 200 - ATOHM GT1 Speakers OFFICE : Mac Mini - Audirvana - Benchmark DAC1HDR - ADAM A7 Active Monitors TRAVEL : MacBook Air - Dragonfly V1.2 DAC - Sennheiser HD 650 BEACH : iPhone 6 - HRT iStreamer DAC - Akimate Micro + powered speakers Link to comment
Popular Post Superdad Posted July 6, 2020 Popular Post Share Posted July 6, 2020 31 minutes ago, Ajax said: Here's the question: Would average music listeners be able to pick out a hi-res audio track over a Red Book standard CD version of the same master recordings using their own playback systems? Well that’s just it: Audiophiles—many of whom have been critically listening and refining their home music systems for decades—are by definition not “average” music listeners. I could torture a few analogies—to cooking, wine, driving, or X-ray interpretation—but there is no need. Audiophile Neuroscience and sandyk 1 1 UpTone Audio LLC Link to comment
Rexp Posted July 6, 2020 Share Posted July 6, 2020 In this case I preferred the hi-res, but it doesn't mean hi-res is better; the degradation is caused by poor downsampling, IMO. sandyk 1 Link to comment
Audiophile Neuroscience Posted July 6, 2020 Share Posted July 6, 2020 3 hours ago, Ajax said: My research survey, conducted over these last 8 months, arrives at a different conclusion. Hi-Res Audio or HD-Audio provides no perceptible fidelity improvement over a standard-resolution CD or file. Polls are interesting but not conclusive (which is why they can get it wrong in predicting the next president of the USA). Was this a self-selecting (non-random) sample? Sound Minds Mind Sound Link to comment
Audiophile Neuroscience Posted July 6, 2020 Share Posted July 6, 2020 2 hours ago, Rexp said: In this case I preferred the hi-res but it doesn't mean hi-res is better, degradation is caused by poor downsampling IMO. Ah yes, I just read his blog. Apparently all files started out as high res recordings and were down sampled and then up sampled again. Sound Minds Mind Sound Link to comment
Popular Post One and a half Posted July 6, 2020 Popular Post Share Posted July 6, 2020 I think Mr. Waldrep, along with 95% of audio recording/mix/mastering engineers, should follow his own advice that "it is the production path that establishes the fidelity of the final master." I could never engage with AIX recordings: kind of muddled, with little depth or wide soundstage. In contrast, recordings from an analogue source, e.g. TBM recordings from ca. 1970 onward, still outshine modern engineers' abilities to faithfully record music. There is a difference between the art of the recording and the art that is played by musicians. Teresa and Audiophile Neuroscience 1 1 AS Profile Equipment List Say NO to MQA Link to comment
sandyk Posted July 6, 2020 Share Posted July 6, 2020 2 hours ago, Audiophile Neuroscience said: Ah yes, I just read his blog. Apparently all files started out as high res recordings and were down sampled and then up sampled again. IOW, just like FrederickV did with his X and Y files some time back, where I accurately described the differences between them https://audiophilestyle.com/forums/topic/30381-mqa-is-vaporware/page/671/#comments #16768 http://klinktbeter.be/hushhush/x.wav http://klinktbeter.be/hushhush/y.wav How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file. PROFILE UPDATED 13-11-2020 Link to comment