Tony Lauck

  1. The preach level is entirely justified when it comes to the importance of precise level matching whenever subjective sound quality comparisons are performed. People who don't do this are either ignorant or dishonest shysters. This one factor goes back throughout the history of audio, in my case all the way back to the late 1950's when I started getting interested in hi-fi. If you want to measure levels accurately between two formats where a proprietary decode process is bundled into a DAC, you will have to conduct measurements at the analog output of the DAC. You can do this by tapping into the analog output going to your preamplifier and sending it to a high quality analog-to-digital converter running at a high sampling rate and bit resolution. Then you can take the captured waveforms of both signals and process them with an audio editor and any other analysis software you want. If you wish to adjust the levels of the two signals to within a fraction of a dB, that will be a different problem. If one of the formats is straight 24 bit PCM you can make adjustments in level with an audio editor, but this will slightly degrade that format's signal quality due to a reduction in signal-to-noise ratio and the addition of a second dither process.
There are other problems in comparing the formats when you are forced to work with analog circuitry in the loop, as would be the case with MQA DACs. A small level change in the input may make a large change in distortion levels if the DAC circuitry clips digitally, or affect distortion of downstream analog buffers. And level matching becomes an ambiguous concept if there are substantial differences in frequency response between the two signals being compared. But these come into play at a secondary level. You can hear the effect of a slight gain difference by taking a 24 bit PCM file and using an audio editor to change the gain by fractions of a dB and then comparing sound quality.
You can also hear all of the various aspects of lower resolution PCM formats by starting with a high quality (e.g. 192/24 PCM) format and downsampling to lower resolution (e.g. 44/16) and then upconverting back to the original format. Software such as iZotope RX allows for selection of many filter parameters and one can hear all kinds of tradeoffs between filter settings if one is sufficiently patient. You can train your ear to hear these effects, but it will probably take hundreds of hours to become familiar with the effects, which also depend on the particular musical genres and recording techniques used.
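The gain-nudging experiment above can be sketched in a few lines of Python. This is a minimal illustration, not any particular editor's algorithm; the 1 kHz tone and the +0.5 dB figure are arbitrary choices. It also shows why the second dither pass is unavoidable: a fractional-dB gain change turns integer samples into non-integers, forcing a requantization.

```python
import math
import random

def apply_gain_16bit(samples, gain_db, rng=None):
    """Scale 16-bit PCM samples by a fractional-dB gain, then requantize
    with TPDF dither (sum of two uniform values spanning +/-1 LSB)."""
    rng = rng or random.Random(0)
    g = 10 ** (gain_db / 20.0)
    out = []
    for s in samples:
        x = s * g
        x += rng.random() + rng.random() - 1.0   # TPDF dither
        out.append(max(-32768, min(32767, math.floor(x + 0.5))))
    return out

# A 1 kHz tone at 44.1 kHz, then a +0.5 dB gain change
tone = [round(20000 * math.sin(2 * math.pi * 1000 * n / 44100)) for n in range(441)]
louder = apply_gain_16bit(tone, 0.5)
```

Capturing both files, processing them this way, and comparing by ear is exactly the kind of test an audio editor performs internally when you apply a gain change to fixed-point material.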
  2. Subtractive dither is more than a ruse, however. Compared to TPDF dither, subtractive dither provides higher audio quality (removing all correlation between signal and dither noise, not just first and second order correlation). In addition, compared to TPDF dither it provides an approximate 6 dB gain in S/N ratio or, equivalently, saves approximately one bit. I found that subtractive dither used to convert 96/24 audio to 96/8 audio sounded musical, albeit noisy like an old 4 track 7.5 ips pre-recorded tape. At 96/12 the noise was similar to that on high quality analog tape. In this regard, there is a difference between a streaming codec and a recording codec. That's because a non-repetitive noise pattern (the streaming case) is different from a repetitive noise pattern (the non-streaming case, especially where a brief sound clip is used that is shorter than aural short-term memory). I did all of this work ten years ago, at a time when I could still hear up to 15 kHz. All of this work is now irrelevant, first to me personally, as I can now barely hear 12 kHz, and more generally because bits have become vastly cheaper in the past ten years.
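The mechanics can be sketched in pure Python (a toy model where 1 LSB = 1.0 and the 20,000-sample sine is arbitrary). In this simple error-power comparison the measured gap comes out near the theoretical 4.8 dB; the precise figure depends on exactly how the two schemes are framed, but it is of the same order as the roughly-one-bit saving described above.

```python
import math
import random

def quantize(x):
    """Round to the nearest integer step (1 LSB = 1.0)."""
    return math.floor(x + 0.5)

def tpdf_dither(x, rng):
    """Conventional non-subtractive TPDF dither: add triangular noise
    spanning +/-1 LSB, then quantize."""
    return quantize(x + rng.random() + rng.random() - 1.0)

def subtractive_dither(x, rng):
    """Subtractive dither: add uniform +/-0.5 LSB noise, quantize, then
    subtract the *same* noise on the decode side."""
    d = rng.random() - 0.5
    return quantize(x + d) - d

rng1, rng2 = random.Random(1), random.Random(2)
signal = [10.0 * math.sin(0.01 * n) for n in range(20000)]
err_tpdf = [tpdf_dither(s, rng1) - s for s in signal]
err_sub = [subtractive_dither(s, rng2) - s for s in signal]

def rms(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

gain_db = 20 * math.log10(rms(err_tpdf) / rms(err_sub))
```

The catch, of course, is that the decoder must know the exact dither sequence, which is why subtractive dither suits a codec (where encoder and decoder can share a generator) better than a plain distribution format.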
  3. I was discussing the scrambling function needed to whiten the low order bits that represent the folded high frequency information. Since these bits appear in the code space for the undecoded playback they need to appear to the receiver and listener as uncorrelated with the music. That way, they will be heard as random noise, rather than distortion. (It's actually slightly more complicated, because changing the low order bits to random values will still be correlated to the music in the form of noise modulation, but these are details of the dithering algorithms and are unrelated to the method of generating the pseudo-randomness.) There is no way to recover the lost bits if they have been repurposed so that they can encode other information. This is true regardless of the specific algorithm used to generate the pseudo-randomness. This is easily proven by the use of the pigeon-hole principle. However, if the goal of the system is to make some kind of tradeoff between perceived audio quality and bandwidth usage, the pseudo-randomness does not require the use of an actual encryption algorithm which includes hidden (key) values. The use of actual encryption algorithms is necessary to inhibit reverse engineering and to invoke DMCA style legal protection for content, but these relate to the DRM-like aspects of MQA, not the sonic improvement or bandwidth reduction aspects. Note that lossless encoding schemes such as FLAC can not guarantee that they will provide compression for all possible input data. In fact, for any lossless scheme that compresses some inputs, there must be other inputs that are expanded. Similarly, if a CODEC takes a fixed input bandwidth and reduces it to a fixed (lower) output rate, there must be some inputs that can not be encoded and then decoded losslessly. This is another application of the pigeon-hole principle. So we can be absolutely sure that MQA can not losslessly encode higher resolution input formats at a lower data rate.
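The pigeon-hole argument can be made concrete with a deliberately tiny example (the 4-bit-to-3-bit "fold" is a made-up stand-in for any fixed-rate reduction):

```python
from itertools import product

def toy_fold(bits):
    """A toy 'encoder' mapping a 4-bit input to a 3-bit code by
    dropping the last bit -- a stand-in for any fixed-rate reduction."""
    return bits[:-1]

inputs = list(product((0, 1), repeat=4))   # all 16 possible 4-bit inputs
codes = {toy_fold(b) for b in inputs}      # at most 8 distinct 3-bit codes

# Pigeon-hole: 16 inputs into at most 8 codes forces collisions, so no
# decoder can distinguish the colliding inputs -- the scheme is lossy.
collisions_unavoidable = len(codes) < len(inputs)
```

No cleverness in the encoder can escape this counting argument, which is why the lossless claim fails for any fixed-rate reduction of a larger code space.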
  4. Encryption is not a necessary part of the folding from an audio perspective. In the MQA "ecosystem" encryption serves three purposes: a technical purpose associated with the folding, a control purpose which adds difficulty to competing product implementations, and a legal purpose enabling criminal prosecution of control violations. When encoding information that is correlated to the music (and this includes the folded high frequency content) the encoded information must be scrambled or "whitened" into pseudo-noise to convert distortion artifacts into noise artifacts. This can be done using pseudo-random number generators which involve no cryptographic algorithm, it can be done using cryptographic one-way hash functions which accomplish the same effect with more confidence, or it can be done with encryption algorithms which use a key to control access to the encoded information. All three of these methods will achieve the same technical result from an audio quality perspective, albeit with different hardware implementation costs. The control purpose can be effected by keeping algorithms proprietary, but this will be difficult for decoding algorithms which enjoy mass distribution and are subject to reverse engineering. However, by using encryption algorithms it becomes possible to gain much greater control over the process, including potentially identifying individual units that have been reverse engineered. There is a long history of this process, associated primarily with protection of video content, going back to early broadcast satellite video, aborted attempts at CD audio protection, DVD encryption, Blu-ray encryption, etc... Unfortunately from the perspective of the content providers, these methods were easily broken. The criminal prosecution purpose came into play when it became obvious that the control purpose could not be realized without criminalizing the reverse engineering process.
In the USA this was accomplished by the Digital Millennium Copyright Act. As a result of this legislation, threats of criminal arrest were made worldwide, including against European university-based cryptographers who were threatened with arrest if they published technical details of their cryptographic research. One case was the encryption protection used with the HDMI interface. The researchers in question were told that if they published their results they would never be able to enter the US without fear of arrest.
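The hash-based whitening option described above can be sketched as follows. This is emphatically not MQA's actual algorithm (which is proprietary); the seed, the low-8-bit split, and SHA-256 in counter mode are all illustrative choices, chosen only to show that decorrelation needs no secret key:

```python
import hashlib

def keystream(seed, n):
    """Pseudo-noise bytes from a one-way hash (SHA-256 in counter mode).
    Any good PRNG would do for the audio purpose; a hash just gives more
    confidence that the output is uncorrelated with the music."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def whiten_low_byte(samples, seed=b"demo"):
    """XOR the low 8 bits of each 24-bit sample with pseudo-noise, so an
    undecoded playback hears them as random hiss rather than distortion.
    XOR with the same stream is its own inverse, so the decoder just
    applies the identical function."""
    ks = keystream(seed, len(samples))
    return [(s & ~0xFF) | ((s ^ k) & 0xFF) for s, k in zip(samples, ks)]

samples = [0x123456, 0x00FF00, 0xABCDEF]   # hypothetical 24-bit samples
scrambled = whiten_low_byte(samples)
restored = whiten_low_byte(scrambled)      # decoder applies the same stream
```

Replacing the public seed with a secret key is what moves a scheme from the technical purpose into the control and legal purposes.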
  5. This is complete and utter BS. There are many formats that I have in my library and I can play most of these on my DAC, converting them where necessary by software. There are things that I do with dBpoweramp, HQPlayer, SoundForge and iZotope RX that allow me to deal with these formats, and where necessary I can download new CODEC software to access different formats. However, if I were to be forced to use MQA I would not be able to do things that I presently do, such as digital room correction for both PCM and DSD using HQPlayer. I don't want hardware. I don't want children of parasitic middlemen to eat at the expense of starving musicians. I want to choose how I spend my money, and when I buy new hardware it comes out of my music budget. When I want technical information I read research papers and patents. When I want to see how software works I examine the documentation and I test it. MQA has been set up in such a way as to make these tests difficult, if not impossible, to perform. All we get is the typical rigged demonstrations and comparisons which are geared up to confuse non-technical audiophiles. What we get are comparisons of different masters and non-level matched playback of the same masters, and doing this amounts to nothing less than fraud when it is done under the authority of experts, which we must assume includes AES Fellows.
  6. Depends. Unfortunately, many labels have victimized their artists, who are generally innocent of all of this skullduggery. But some labels have been on my do not buy list for decades, generally because they have a track record of poor sound quality. I suspect quite a bit of overlap, here.
  7. All the money spent buying new DAC hardware is going to vultures. Consumers have limited funds, and any money that doesn't go to the musicians and song writers and the engineers who made the original recordings is going to parasites. MQA is nothing but a parasitical scheme involving the legal system to extract rent from music lovers at the expense of everyone else. (Legal aspects include proprietary formats backed up with non-disclosure agreements and patents, and the use of encryption that can't legally be "circumvented" due to fascist legislation such as the US DMCA law.) IMO the invention of a proprietary digital audio format should be treated as a capital crime under Napoleonic law, where the defendant is presumed guilty unless he can prove his innocence. But since I'm not an emperor I will just have to apply what little market power I have. For starters, this means I will not purchase any DAC from a manufacturer who sells MQA products, nor will I purchase any recordings or deal with streaming services that support MQA. To be specific, once I learned that Mytek had signed on to MQA, I removed them from my "acceptable vendor list" despite my being satisfied with the Mytek DAC that I have owned for several years.
  8. DAC designers have to make a tradeoff. Assuming the digital sampling has extra guard bits, they can afford to provide headroom to prevent clipping. However, unless the output of the upsampling is then reduced down to the actual resolution of the converter circuitry there will be clipping. Here's the conflict: if they provide little reduction, then "hot" music will sound distorted, but the measured S/N of the DAC will be good. If they provide more reduction, then "hot" music will be clean, but the measured S/N of the DAC will be inferior. Pro audio equipment often allows for the converter to be set up so that these tradeoffs are under user control. (For example the Mytek Stereo 192-DSD allows for setting the amount of analog headroom over a 4 dB range, as I recall.)
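The tradeoff can be put in numbers with the textbook ideal-quantizer S/N formula. This is a first-order model that ignores the converter's analog noise floor, but it shows the one-for-one cost:

```python
def snr_after_attenuation(dac_bits, headroom_db):
    """Ideal-quantizer S/N (6.02 * bits + 1.76 dB) minus the digital
    attenuation applied ahead of the converter to create headroom."""
    return 6.02 * dac_bits + 1.76 - headroom_db

# Every dB of headroom against inter-sample overs costs a dB of
# measured S/N -- the designer's conflict in a nutshell.
no_headroom = snr_after_attenuation(24, 0.0)    # best bench numbers
with_headroom = snr_after_attenuation(24, 4.0)  # cleaner "hot" material
```

A user-adjustable setting, as on the pro gear mentioned above, simply hands this slider to the person who can hear the result.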
  9. Easily done with audio workstation software such as Audacity, iZotope RX, or SoundForge. There is no way to tell whether or not a DAC will produce intersample clipping on a recording that has no clipping except by listening (or measuring the impulse response). For listening, you can use a digital volume control in the computer, such as HQPlayer, or a digital volume control in the DAC (if it's been done correctly) and see if this eliminates the harshness. (You will need to make corresponding analog volume control changes to get a fair comparison.) If you are a decent recording engineer you will be able to recognize clipping when it occurs on most signal peaks. Example of the theory: an unclipped square wave at 8 kHz has harmonics at 24 kHz, 40 kHz, etc... If these are stripped off as the result of a filter for the 44.1 kHz sampling rate, the result will be a sine wave at 8 kHz. The peak amplitude of the sine wave divided by the peak amplitude of the square wave will be a ratio of 4 / pi, about 2.1 dB. For the worst case, there are pathological waveforms for which the peaks can be arbitrarily large if the theoretical filter (perfect sinc) is used. (I've constructed some where the peaks are more than 10 dB above 0 dBFS, but not something likely to occur in any decent music. Pop music mastered for loudness by the likes of Lucy does not count as decent in my book, neither the music nor the engineering.) It is easy to see these effects when doing any sort of EQ. One would expect to lose headroom when the amplitude response of a filter includes a boost, but it is even possible to lose headroom if the filter has a cut. (Witness the square wave discussion above.) HQPlayer will show clips when upsampling or when doing digital room correction as I do. I use about 4 dB of digital reduction in HQPlayer before sending the signal to my DAC. (I don't lose any resolution at this point, because the DAC takes a 32 bit signal and I run it with digital volume control.)
I have carefully calibrated the analog gain in my system so there is only a few dB of headroom when the digital volume control is set to -0 dB. It is not possible to clip the active monitors at full volume setting, and this is about 10 dB louder than what is good for my ears, while still having completely quiet sound out of the speakers when playing 24 bit dither noise.
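Both figures above are easy to verify numerically. The square-wave case follows directly from the 4/pi ratio, and a second classic case (my choice of example, not from the post) is a full-scale sine at fs/4 whose samples all land at 0.707 of the true peak, giving a 3 dB inter-sample over:

```python
import math

# Case 1: the 8 kHz square wave. Once the 24 kHz and higher harmonics
# are filtered out at a 44.1 kHz rate, only the fundamental survives,
# and its peak is 4/pi times the square wave's sample peaks.
square_overshoot_db = 20 * math.log10(4 / math.pi)   # about 2.1 dB

# Case 2: a sine at fs/4, phased so every sample lands at +/-0.707 of
# the true peak. Normalizing those samples to 0 dBFS pushes the real
# (inter-sample) waveform peak 3 dB over full scale.
FS = 44100
raw = [math.sin(2 * math.pi * (FS / 4) * n / FS + math.pi / 4) for n in range(8)]
peak_sample = max(abs(v) for v in raw)
intersample_over_db = 20 * math.log10(1.0 / peak_sample)  # about 3 dB
```

These are the mild cases; as noted above, pathological waveforms under a perfect sinc filter can overshoot far more.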
  10. No bats. Just a non-linear mechanism, my ears. Not to mention non-linearity in the DAC, amp, speakers and air. Take a non-clipped waveform with energy close to 22 kHz and with peaks close to 0 dBFS. Now put this waveform through a steep filter (or just about any low pass filter, for that matter). You will now have a waveform that has peaks above 0 dBFS if you do the calculations in floating point or otherwise with headroom. Now, when you put the result out in a regular PCM format without a gain reduction you will get clipping, and this will often result in audible distortion. (This is something that mastering engineers know about; it's called "inter-sample peaks".)
  11. This is all BS. The filter used for playback is going to interact with the filter used for recording. In compromised systems, such as 44.1 kHz PCM, these filters have subtle but audible interactions. It is not possible to have a system that has full frequency range (e.g. up to 20 kHz), is free of ringing, and does not create spurious frequencies due to aliasing. The optimum filter for playback of 44.1 recordings is going to vary according to the filter used for recording, the quality of the original recording, and even the type of music. This is not a matter of "proper" or "improper". With appropriate playback software you can tweak the playback filter and observe these effects. However, this is generally a waste of time if one's goal is music rather than sound, especially if the music is available in high resolution format, where the effects of filtering are much less significant. This can be done with software such as HQPlayer, which has a choice of filters, or with pro software such as iZotope RX, which allows setting many different parameters of filters used for downsampling (recording) and upsampling (playback), including a mix of linear vs. minimum phase filters, filter slope, and filter center.
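One of the tradeoffs above, steepness versus ringing, can be shown directly with a windowed-sinc low-pass filter. This is a generic textbook design, not any particular product's filter; the tap counts and the -60 dB "ringing length" threshold are arbitrary illustrative choices:

```python
import math

def lowpass_sinc_taps(cutoff, n_taps):
    """Hann-windowed sinc low-pass FIR; cutoff is a fraction of the
    sample rate. More taps means a steeper transition band, but also a
    longer impulse response."""
    m = n_taps - 1
    taps = []
    for k in range(n_taps):
        t = k - m / 2
        h = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * k / m)   # Hann window
        taps.append(h * w)
    return taps

# Crude "ringing length": taps still above -60 dB of the peak tap.
counts = {}
for n in (31, 255):
    taps = lowpass_sinc_taps(0.25, n)
    peak = max(abs(t) for t in taps)
    counts[n] = sum(1 for t in taps if abs(t) > peak / 1000)
```

The steep 255-tap design rings far longer than the gentle 31-tap one, which is precisely why no single filter choice can be "proper" for all program material.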
  12. One of the problems I had with the bass was room related. The speakers needed to be placed appropriately for decent midrange and highs, but this created some problems with room modes. I tried many different positions from the back walls and adjusted the cross-over controls on the main speakers, and none were satisfactory. The sub allowed another degree of freedom, particularly with respect to the vertical room mode. In addition to gain controls, the sub had a phase control and adjustable cross-over. Unfortunately, all of these knobs required crawling about on the floor and forgetting what the sound had just been. After two weeks of crawling about I got the calibrated microphone and analysis software and used that to get more-or-less flat response in the bass from 30 Hz up. There were still some peaks, which I then took out with a (digital) parametric equalizer. This removed false dynamics on walking acoustic bass lines and made the side and back walls of the listening space disappear. (I am listening facing into a corner.) It took about two days after familiarizing myself with the measurement equipment to sort out the entire system.
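A digital parametric EQ band of the kind used to take out those mode peaks is typically a peaking biquad; the standard recipe is the RBJ "Audio EQ Cookbook" form. The 55 Hz mode frequency, -6 dB depth, and Q of 4 below are hypothetical numbers for illustration, not the actual settings used:

```python
import math

def peaking_eq(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking biquad; a negative gain_db cuts
    a room-mode peak at f0 while leaving the rest of the band alone."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    num = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return [x / den[0] for x in num], [x / den[0] for x in den]

def magnitude_db(b, a, fs, f):
    """Biquad magnitude response at frequency f, in dB."""
    w = 2 * math.pi * f / fs
    def mag(c):
        re = c[0] + c[1] * math.cos(w) + c[2] * math.cos(2 * w)
        im = -c[1] * math.sin(w) - c[2] * math.sin(2 * w)
        return math.hypot(re, im)
    return 20 * math.log10(mag(b) / mag(a))

# Cut a hypothetical 55 Hz room mode by 6 dB with a Q of 4
b, a = peaking_eq(44100, 55, -6.0, 4.0)
mode_db = magnitude_db(b, a, 44100, 55)    # the cut at the mode
far_db = magnitude_db(b, a, 44100, 1000)   # essentially flat elsewhere
```

A few such bands, placed at the peaks a calibrated microphone reveals, accomplish digitally what no amount of crawling behind the sub can.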
  13. Let me guess. You are not a computer hardware engineer familiar with design of state of the art digital equipment, whether it be computer chip design, computer motherboard design, storage device design, or communications equipment. All of these disciplines involve pushing the limits of signal integrity, and all the people working competently in this space are well aware that "bits are bits" is a convenient mathematical abstraction for some purposes, but irrelevant when designing equipment that has to work reliably in the real world. And if you are a programmer, I hope you aren't working on low-level operating system functions such as drivers or network protocol design or any other areas where it is critical to understand that bits aren't just bits and they can and do occasionally flip spontaneously.
  14. I took some of these older piano recordings and digitized them. Also some live concert material, where there won't be "in the room" credibility since there will be venue sonics. These are on a web site: http://www.susanlauck.com/ Enjoy these free downloads. On the studio recordings you may notice that the piano image is excessively large. This will happen if the physical spacing of your speakers is wider than the spacing of the speakers that I used, where the setup had previously been determined as a compromise with many high quality recordings that I used for playback setup at the time. I presently have a small room in which I have two Focal Twin 6 monitors and a single sub woofer. These are all powered, driven directly from a Mytek Stereo 192-DSD DAC which also serves as a preamp for auditioning analog tapes. As set up, and playing one of these recordings, a non-audiophile friend spontaneously observed that she had never heard a piano realistically reproduced. This system will also reproduce Mahler symphonies at row five live concert levels with adequate headroom, and the monitors come with a safety warning of ear damage, with peak sound level capability rated at 118 dB at my 1 m listening position. When I got these powered speakers they sounded like shit. It took many hours of adjustment to location, listening position and crossover settings to realize that this wasn't going to work without adding the sub woofer. And then when I got the sub I discovered it was absolutely impossible to get it balanced until I got a calibrated microphone and measurement software. Once I did this, I was able to turn more knobs (mostly on the sub) and get good sound, but there was still boominess in some room modes. I eventually used a software parametric equalizer to get flat response at the listening position from 30 Hz up to 1000 Hz.
The regular tweeter adjustment provided a suitable high frequency roll off, and I had previously set this on a mixture of about three dozen recordings of acoustic music of various genres. Basically, a fairly standard curve that is flat at 1 kHz and down about 2 dB at 10 kHz did the trick, making the most brilliant recordings listenable (the Mercury Living Presence transfers) without any of these recordings sounding excessively dull. All told, I put several weeks of my time into making this system sound excellent, but no more money once I bought the sub. The alternative would have been to spend endless time trading equipment and never settling on something that provided realistic playback. Setup is the most important part of any system, provided that you start with decent gear.
  15. I have done this several times, with the exception of the curtain. Apart from the different location of the speakers and the piano, no way to tell. However, in addition to obsessing over playback setup, it took quite a bit of work finding the right microphone positioning. The first time I did this was in the mid 1970's, later in the 1980's. Speakers the first time were AR-3a's, the second time Snell AIIIs. Two caveats: the recording and playback were in the same room, and the music did not use the bottom three notes on the keyboard. In both cases a 7.5 IPS 2T Tandberg recorder was used. In the 1980's I tried the same experiment using a Nak CR-7a instead of the Tandberg and the results were unsuccessful. Using Dolby destroyed the dynamics, and without it there was excessive tape hiss, unless the recording level was jacked up, in which case the tape oxide compressed the dynamics. Interestingly, the entire recording equipment came to under $500 in the mid 1970's.