
Blog Comments posted by mitchco

  1. Hi JAVA Alive, welcome to CA!


    Yes, one of the contributing factors is indeed the DAC and ADC not being synced, plus issues with DiffMaker's drift compensation function. However, listening to the analog diff file, there is a point where the music is no longer audible. Other noise factors include D/A and A/D conversion and the associated analog circuitry. But I would say the biggest contributor was my test laptop's analog input/ADC, which is noisy. You can see the RMAA test in the previous blog post (FLAC vs WAV) with a noise level of -85 dBA.


    You also may find Archimago's DiffMaker protocol interesting; his results correlate almost identically with mine. Archimago also tested a number of Windows and Mac software players, including JPlay, with similar results.


    Happy listening!


  2. "Hi Mitch, did you repeat your experiment with the Hilo and if so what was the outcome? I agree with the downsample, upsample, compare with original approach. This seems to me fair, as upsampling to 24/192 and playing the upsampled file is merely another way of playing a 16/44 file; you could do the same with any 16/44 source."


    Happy New Year Tim.


    I have not got around to this yet. I have been sidetracked by several projects, including refinishing my speakers over Xmas and putting them back together today. Probably in a month or so.





  3. Excellent list! Here are a few others that I have come across:


    Room Correction: A Primer, by Nyal Mellor of Acoustic Frontiers

    Sound Correction in the Frequency and Time Domain, by Bernt Ronningsbak of Audiolense

    The Three Acoustical Issues a Room Correction Product Can't Actually Correct, by Nyal Mellor of Acoustic Frontiers

    FIR Filter Basics

    Alan Jordan's Digital Room Correction Designer

    The Subjective and Objective Evaluation of Room Correction Products by Sean Olive

    What speaker and room correction is all about by Bernt Ronningsbak of Audiolense


    Acoustics/Treatment Reference Guide by gearslutz

    Savihost - a standalone VST convolver host

  4. Hey robocop, thanks for your comment. What super tweeters are you using? I am interested.


    I recently upgraded my DAC to a Lynx Hilo and was quite surprised at the SQ difference I heard compared to my Lynx L22 (a ten year old design, but still sounded pretty good), especially at 24/192.


    Have a look at this very interesting paper on the Theory Of Upsampled Digital Audio for possible answers to this dilemma.


    Once I have some time, I intend to re-run my tests here with the Hilo and see if the results are the same.


    Cheers, Mitch

  5. Hi donberry,


    Thank you for your comment. I do understand what you are saying/asking.


    In my ABX listening tests, I was unable to discern any differences between the two players, over speakers or headphones. I have a new DAC on the way, and this is one of the tests I was going to rerun, but my expectation is the result will be the same. However, I am open to being surprised.


    There is a demo of JPlay, why not give it a try? Personally, I am happy with the SQ of JRiver.





  6. Hey Chris, thanks for the kind words. Also, thanks for having a cool site where audiophiles can hang and talk shop.


    In my opinion, there are only 3 acoustic measures required to understand any given speaker/room combo. And each one of those measures has a specification, or target, or preferred operating range to match the measurement to. If any of the measurements are “out of spec”, then comes the hard part.


    The hard part is correctly correlating between the measurement results and what is being heard and vice versa. That’s the art of the science. I did work professionally in the acoustics business, and several hundred rooms later, it’s just a hobby now :-)


    The 3 measures and corresponding specs are:


    1) Frequency response at the listening position: Try to match the preferred spectral responses as defined in B&K (Fig. 5) and by Dr. Sean Olive (slide 24). Aim for a ±1 dB tolerance to the spec. Each dB does make an audible difference in timbre.


    2) Energy Time Curve (ETC): All early room reflections (0 to 50 milliseconds) are -15 dB or more below the main signal.


    3) Waterfall (3D): Identifies room resonances and room decay times. Target is between 0.4 to 0.6 seconds (RT60), evenly across the frequency range (i.e. the same rate of decay at every frequency).


    That’s it. Get these 3 right and you are home free. It’s the 80/20 rule. The rest is fine tuning.


    Part of my acoustic learning was being an early adopter of this breakthrough measurement device in the 80’s. Thanks to Richard C Heyser, “acoustical research would never be the same, as for the first time, complex on-site measurements of systems and spaces were possible from a commercially available unit”.


    Today, we have software like REW that can perform the same type of acoustic measurements. And software like Audiolense that can generate high resolution, digital correction filters to assist in meeting the above specifications.


    Additionally, software like Audiolense can time-align the sound source and further fine tune the timbre in the time domain by matching the target frequency response to each individual’s specific speaker’s natural roll-off at the frequency extremes.


    I have shakers, triangles, guitars, and other musical instruments. When I hit a triangle or the bell of a cymbal, it has a very pure tone or ring to it and sounds crystal clear. With Audiolense time domain correction, I am able to fine tune the arrival of the waveform at the listening position to get the best timbre reproduction, so it sounds as close to real as possible to my live musical instruments.


    This final tuning is done by ear, comparing 1 dB incremental changes to the target, generating the filters, and AB'ing them. With JRiver's convolution engine, it is easy to AB the digital filters while music is playing. The process is like tuning guitar strings: tuned dead on, it produces the purest timbre, but go over (sharp) or under (flat) and it does not produce the "right" timbre.


    If folks on CA take the time and effort to set up REW and take the measures, with time permitting, I would be happy to assist in interpreting the results. Feel free to PM me the REW .mdat file.


    Cheers, Mitch

  7. Hi Bill,


    Thanks for the kind words. The ATS Acoustics panels and bass traps totalled $572 plus shipping. Inexpensive for acoustic treatments, and they work great. I chose this vendor because a) they made it easy to ship to Canada and b) they have a good rep – they have sold over 100,000 panels. You can further reduce the cost by buying their DIY kits.


    Audiolense, the XO version with both frequency and time correction, was $500. The mic and preamp kit was $200, but I consider that a wash as I would require it either way.


    I do have the measurements for DRC without room treatments, along with a binaural recording of the same tune, and would be happy to share both. I decided not to include in this post as it was already way too long.


    However, you can view the DRC results without room treatments here: Hear music the way it was intended to be reproduced - conclusion - Blogs - Computer Audiophile. Scroll down and you will see a waterfall graph with DRC enabled; you can clearly see a) there is still midrange ringing and b) the -3 dB frequency response dip at 2 kHz to try and tone back the midrange ringing (that did not work).


    If you scroll further down into the comments, you will see I included the RT60 reverb times with DRC enabled and no acoustical treatments. Alas, very little difference to the RT60 measurements I made in this post for the untreated room. The RT60 was still .7 seconds long in the midrange and out of spec, even with DRC enabled in the untreated room.


    That’s why I presented the results the way I did. In my specific (2nd worst sounding room) case, the only way for me to reduce the long midrange decay times was to use acoustic treatments that dropped the room gain in half (i.e. 3 dB) – that’s significant, enough to bring the room’s RT60 into spec. There is no other way that I know of to do this.


    Similarly, the only way to get a tighter bass sound was to put bass traps behind the speakers to a) reduce early reflections off the wall and b) reduce room modes, by 5 dB in the 200 Hz range (really significant, as it is almost a 4 times reduction in room gain in this frequency range). This not only affected the frequency response, but also reduced the early reflections significantly, which also contributes to the tighter bass sound. Of course Audiolense contributes greatly by correcting the speaker/room frequency response (in my case from a 14 dB swing to 4 dB, which is really significant) and some early reflections, but without the bass traps, the overall bass tone still sounded muddy, as the issue is room modes and early reflections, not frequency response.


    The ETC, RT60, and Waterfall measures confirm this as does listening to the binaural recordings. Even if you don’t have high-end headphones, I am pretty sure you will hear the differences.


    I personally feel acoustic treatments and DRC are complementary technologies, and while there is some overlap, getting the absolute best timbre out of the speaker/room combo requires both. It's the best $1000 I have spent on audio, and it has made the most improvement to the timbre of my speaker/room combo.


    Hope that answers your questions.


    Cheers, Mitch

  8. Bob Katz, a well-known mastering engineer, has proposed an integrated system of metering and monitoring. If adopted, it would go a long way to do away with the loudness war: http://www.digido.com/level-practices-part-2-includes-the-k-system.html




    To get an idea of the technical material covered, here is the summary:




    "For the last 30 years or so, film mix engineers have enjoyed the liberty and privilege of a controlled monitoring environment with a fixed (calibrated) monitor gain. The result has been a legacy of feature films, many with exciting dynamic range, consistent and natural-sounding dialogue, music and effects levels. In contrast, the broadcast and music recording disciplines have entered a runaway loudness race leading to chaos at the end of the 20th century. I propose an integrated system of metering and monitoring that will encourage more consistent leveling practices among the three disciplines. This system handles the issue of differing dynamic range requirements far more elegantly and ergonomically than in the past. We're on the threshold of the introduction of a new, high-resolution consumer audio format and we have a unique opportunity to implement a 21st Century approach to leveling, that integrates with the concept of Metadata. Let's try to make this a worldwide standard to leave a legacy of better recordings in the 21st Century."




    It is technical in nature as it is a proposed “specification”. It is also fairly complicated as it appears counterintuitive. Effectively, the idea is to turn down the level meters during recording, mixing, and mastering and turn the damn volume (i.e. monitor) up to a calibrated level! We all have volume controls :-) Here are a few practical examples: http://www.digido.com/honor-roll.html




    There are already specifications for calibrating speaker to room interfaces: http://www.computeraudiophile.com/blogs/Speaker-Room-Calibration-Walkthrough Metering and monitoring is the last piece of the puzzle. When I was recording/mixing, I was taught to monitor (i.e. listen) at between 80 to 90 dB SPL. This was the accepted industry standard, based on the Fletcher-Munson equal-loudness curves. So to make it louder, we would just push the levels up (on the VU meters).




    Much of the pro audio gear I used had 26 dB of headroom, as that was the industry standard. During the analog tape days, it was common practice to "hit" the tape hard on some instruments, as a little bit of tape saturation "sounded" like a mild compressor. Drums would sound punchier, for example. Guitars would sound like they were really ripping, etc.




    How loud can you go? Here is an example: I like the Black Keys and a tune called Lonely Boy. Here is its waveform in Audacity. It is so over-leveled that it has been pushed into the red (hard clipping). I wonder if this was a mistake during the recording, mixing, or mastering process? Or is it part of their sound?
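    As a rough illustration of how this kind of hard clipping can be flagged programmatically (a sketch in Python/numpy; `clipped_fraction` is a made-up helper of mine, and real clipping detectors look for runs of consecutive full-scale samples, not just a count):

```python
import numpy as np

def clipped_fraction(x, threshold=0.999):
    # Fraction of samples at or near full scale in a [-1, 1] track;
    # a crude flag for hard clipping
    return float(np.mean(np.abs(x) >= threshold))

# A deliberately over-leveled signal: a sine with 2x gain, hard-limited
t = np.arange(44100) / 44100
x = np.clip(2 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)
print(clipped_fraction(x))  # roughly two-thirds of the samples sit pinned at full scale
```

    Run the same check on a well-mastered track and the fraction is essentially zero.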




    Personally, I am hoping the Audio Engineering Society, or the ITU, or whatever standards body, adopts Bob's proposal and drafts it as a standard, as both professionals and consumers would benefit greatly. Then the hi-res audio format would really mean something. As it stands, highly compressed music material simply won't benefit from any high-resolution file format.




    Other sites of interest on the topic: http://www.justiceforaudio.org/

  9. Hey Paul, you would laugh, as the IBM monitor was thrown into the picture by one of our techs at the time because he thought it was cool, but it was not used at all in the studio :-0




    We were the first studio in Canada to be all digital. We had 48 tracks of digital using 2 x Sony PCM 3324a (with the Apogee filters) and 3 or 4 (I can’t remember) Sony PCM 3202 2 channel machines. The picture here only shows part of the tape room:




    There is a story around the Neve console, that it came from England, and a list of bands that used it. I can't remember the details, but I reached out to another bud (the tech!) that worked with me to see if he remembers.






    Just to set the record straight, I did not design the studio with Chips. I was one of the resident house engineers at the time and we had 3 studios. Myself and a few others got to hang with Chips throughout the build, from the ground up. What was really cool was to see the “designed” frequency response and ETC’s before the studio was built and then when the frequency response (and ETC and all of the other stuff that makes up a certified LEDE room) was measured, it came out to being almost identical. Pretty amazing given it was almost 30 years ago.




    Today? It’s all Digital Audio Workstations (DAW’s) for about 1/1000 of the cost. What’s going to be in another 10 years?




    Thanks tonmeister86 for your comments. I hear you. Remember this was almost 30 years ago, and at that time it was state of the art. I think this comment sums it up: http://www.gearslutz.com/board/5575228-post7.html An LEDE room had several design criteria to be met, not just frequency response, in order to be certified. Since then, a reflection-free zone at the listening position is what most people strive for, with diffusion on the rear wall and some bass traps to tame room modes, while still keeping the room live sounding. I don't like the "dead end" sound of too much absorption, as can be evidenced in the picture of my room acoustics. But I do appreciate Chips' contribution to advancing the science (and sound!) of control room acoustics.




    With respect to the 813B's, what I liked most was the "time alignment" by Ed Long. As Mix Magazine states, "…the UREI 813 was the most successful large format studio monitor ever made." http://tecfoundation.com/hof/06techof.html#10 Surely they must have done something right… Btw, I do like the JBL 4430/4435 (http://www.audioheritage.org/html/profiles/jbl/4430-35.htm) as well, with the Bi-Radial horns.




    These did not measure so well off axis either: http://www.jblpro.com/pub/obsolete/443035.pdf It is too bad that large format, 3-way speakers are not as popular anymore; as the analogy goes, there is no substitute for cubic inches ;-) I would be curious to know what speakers you think are the shizzle.




    Hey Miska, yes, those ribbon tweets do sound pretty amazing, especially the detail and soundstage they throw. I am always amazed at how good my friend's system sounds whenever I hear it.




    Aloha Bob! Not sure if you are aware of this, but with digital FIR filters you have an incredible degree of control over the filter "taps", which determine "…the amount of "filtering" the filter can do; in effect, more taps means more stopband attenuation, less ripple, narrower filters, etc." http://www.dspguru.com/dsp/faqs/fir/basics In Audiolense, you can set the number of taps anywhere from 0 to 65,535. So you have almost infinite control over filter width from 20 Hz to 20 kHz. Of course, you can also limit the correction to whatever frequency range you want to cover, so if you only want to work on the room modes below the room's calculated cutoff frequency, you can. In some respects it is like an infinitely variable parametric EQ, but in the digital domain.
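    The taps-versus-resolution tradeoff is easy to demonstrate (a sketch using scipy's generic windowed-sinc designer, not Audiolense itself; the 201 and 4001 tap counts are arbitrary examples of mine):

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 48000      # sample rate, Hz
cutoff = 1000   # lowpass cutoff, Hz

def transition_width_hz(numtaps):
    # Design a windowed-sinc lowpass and measure how quickly its response
    # falls from -6 dB to -40 dB: more taps -> a narrower transition band
    h = firwin(numtaps, cutoff, fs=fs)
    w, resp = freqz(h, worN=16384, fs=fs)
    mag_db = 20 * np.log10(np.maximum(np.abs(resp), 1e-12))
    f6 = w[np.argmax(mag_db < -6)]
    f40 = w[np.argmax(mag_db < -40)]
    return f40 - f6

print(transition_width_hz(201))   # a few hundred Hz wide
print(transition_width_hz(4001))  # a few tens of Hz or less: much finer control
```

    The same principle is why a long filter can act on one narrow room mode while leaving neighbouring frequencies untouched.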




    With Audiolense DRC, using "just" frequency correction, I have found no need to engage the multi-seat correction, as it sounds good anywhere in the room and excellent seated anywhere on the couch. Using True Time Domain (TTD) does require multi-seat correction, as this mode of operation corrects the time domain in addition to the frequency domain. For me, this is the best sound possible, and it reminds me of the "time aligned" sound that I used to hear through the Urei time-aligned speakers. It is a real treat to hear crystal clear 3D sound while minimizing the impact of the dreaded "small room acoustics."




    With respect to the other DRC solutions, I have no experience with these and encourage readers to try them out, presuming there is a trial version to evaluate. I would love to hear feedback of other DRC software and how it works for folks.




    As a 20-year professional software engineer, I can say from experience that most people underestimate the power of software and computers. As mentioned in my post, digital audio has been around for over 30 years. Over that time, any music recorded using an ADC or played back through a DAC has passed through a digital signal processing (DSP) chain. Every modern-day ADC/DAC employs digital and analog anti-aliasing filters. http://en.wikipedia.org/wiki/Anti-aliasing_filter Without DSP, there would be no computer audio or Computer Audiophile :-)




    Today, given the processing power of computers and that the vast majority of consumer and professional digital audio software programs are under a grand, it will only get better, cheaper, and faster. That multi-million dollar facility I worked in years ago is easily replaced with under $10,000 in computer, software, and a few good mics, and it can sound better! I wonder what it will be like in 10 years…

  10. Hey Rod, thanks for your kind words.




    A calibrated mic is indeed critical. In my speaker to room interface frequency response tests, I have found a few dB up or down makes a substantial impact on the overall tone quality (i.e. timbre) and soundstage.




    Relative to a "target" frequency response curve, like the B&K target or the similar (i.e. almost identical) Harman target (both are in the article above), a few dB of up slope or down slope makes the sound either too bright or too dull. Small differences make big changes.




    I think if you were just tuning the low end, a Rad Shack meter and the corresponding calibration file would work, but it is not ideal. If you are interested in the full-range tonal balance, then a calibrated mic, from 20 Hz to 20 kHz, is the ticket.




    No worries on the calibration file format. There is a "standard" format for this, and REW accepts it; it is as simple as loading the file. Since it is standard, any of the folks doing mic calibration will supply that file to you, no problem.




    Don't worry about the quality of the mic cable and don't spend a fortune. I just used regular mic cable and all good.




    Yes, REW can help with figuring out time alignment, but that is another topic. The first step is to measure the frequency response to see how well your nice Infinitys interface with your room. I have another post coming out shortly that walks through the steps for what you are about to embark on.




    Edit: Here is that walkthrough: http://www.computeraudiophile.com/entries/173-Speaker-to-Room-Calibration-Walkthrough



    Cheers, Mitch

  11. No probs at all. In fact, I encourage poking holes in all of this. Just having some fun and sharing the results :-)




    From the DiffMaker help file:




    The Difference signal that Audio DiffMaker makes is the instantaneous difference between the signals.




    When a significant Difference signal is found, there is no guarantee that it might be audible when played as part of the original program material. The program material could mask the difference signal, or the effect may not be of a kind that people can hear. This second case might result, for example, from a device with a slightly nonlinear group delay, or from inserting some extra samples or dropping some from a signal. Audio DiffMaker will highlight differences whether they are audibly significant or not. "Different" doesn't necessarily mean "audibly different"!




    From a technical perspective:




    Sensitivity of Audio DiffMaker to signal changes




    Signal cancellation depth will usually vary with frequency. The sensitivity of the DiffMaker subtractive process to time and relative amplitude errors is easily analyzed mathematically (for any frequency), yielding the following results:




    Phase or Time Sensitivity




    The achievable depth (drop in Difference track energy, relative to the Reference track energy), at any frequency will be limited by the phase error of "theta" degrees existing between the Reference track and the Compared track at that frequency, and will be no better than




    10*log (2-2*cos(theta)) [dB]




    To appreciate this sensitivity to time error, consider an error of just 1/100th of a sample at 48kHz (equal to 208 nanoseconds). At a frequency of 10kHz this is equivalent to 0.75 degrees phase shift error, and it is also the time it takes sound to travel about 0.003 inch (!). From the formula you can infer that if during an acoustical recording a microphone position changes just 0.003 inch that can limit the achievable "depth" at 10kHz to 37dB. In other words, if we wanted to verify that there is no difference between tracks more than 37dB below the existing Reference track levels in a frequency band around 10kHz, microphone to loudspeaker distance should be held to within at least three thousandths of an inch over the duration of the recording of the Reference or Compared tracks.




    Similarly, Audio DiffMaker has to align Reference and Compared tracks to within a small fraction of a sample to possibly be able to cancel to a deep null at high frequencies.




    Difficulty can occur when the sample clock of the digital recording soundcard is not locked to the clock of the signal source. Changes in relative clock speeds that can occur between the two recordings can result in undesirable residual levels. Even quite small amounts of drift can compromise a setup, so it is best to lock sampling clocks together, or provide the Source sound from the same card as is used for recording.




    Amplitude Sensitivity




    The limit to the Difference level depth at any frequency, due to an amplitude error of "G" dB at that frequency from frequency response error or gain error (and neglecting phase shift contributions), is




    20*log(abs(1-10^(G/20))) [dB]




    For example, a volume control shift during a recorded track, resulting from vibration or temperature drift effects, of just 0.1 dB would limit the Difference signal drop to only 39dB at any frequency.




    Time Drift susceptibility




    Any test in which the signal rate (such as clock speed for a digital source, or tape speed or turntable speed for an analog source) is not constant can result in a large and audible residual level in the Difference track. This is usually heard as a weak version of the Reference track that is present over only a portion of the Difference track, normally dropping into silence midway through the track, then becoming perceptible again toward the end. When severe, it can sound like a "flanging" effect in the high frequencies over the length of the track. For this reason, it is best to allow DiffMaker to compensate for sample rate drift. The default setting is to allow this compensation, with an accuracy level of "4".




    Gain Drift susceptibility




    Usually a lesser problem, Gain Drift is a varying signal gain during the time the recordings are made. The gain drift may result from mechanically or thermally induced changes in circuit components slowly drifting over time, or from variations in voltage references used by the A/D or D/A converters in the soundcard being used.
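    The two depth-limit formulas quoted above are easy to sanity-check numerically. A minimal sketch (plain Python; the function names are mine, not DiffMaker's) reproducing the help file's worked examples:

```python
import math

def depth_limit_phase_db(theta_deg):
    # 10*log10(2 - 2*cos(theta)): best achievable null depth (negative dB)
    # for a phase error of theta degrees between the two tracks
    theta = math.radians(theta_deg)
    return 10 * math.log10(2 - 2 * math.cos(theta))

def depth_limit_gain_db(g_db):
    # 20*log10(|1 - 10^(G/20)|): best achievable null depth for a
    # gain error of G dB
    return 20 * math.log10(abs(1 - 10 ** (g_db / 20)))

# 1/100th of a sample at 48 kHz is 0.01/48000 s; at 10 kHz that is
# 360 * 10000 * 0.01/48000 = 0.75 degrees of phase error
theta = 360 * 10000 * 0.01 / 48000
print(theta)                        # 0.75
print(depth_limit_phase_db(theta))  # about -37.7 dB (the "37 dB" limit)
print(depth_limit_gain_db(0.1))     # about -38.7 dB (the "~39 dB" limit)
```

    Both agree with the help file's numbers, which is a good reality check on how unforgiving the subtraction method is.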

  12. Rod, keep going; you are almost there.




    I am not too familiar with the LIO-8, but it sounds like you will require a DB25 to XLR connector for the mic. This link points to Redco, who should know how to do this: http://www.computeraudiophile.com/content/RCA-out-instead-XLR-LIO-8-possible-or-even-recommended-no-loss-Sound-Quality




    You should have no problem using REW to output pink noise and a swept sine wave while at the same time recording the response from the mic. This should be a routing setup in the LIO-8 mixer. Have a look at: http://www.hometheatershack.com/forums/spl-meters-mics-calibration-sound-cards/10001-rew-cabling-connection-basics.html




    I don't know about the other software you mention; I have had excellent results with REW.




    Let us know how it goes.




    Cheers, Mitch

  13. For the Rad Shack mic, make sure you install the mic calibration file from: http://www.hometheatershack.com/forums/downloads-area/19-downloads-page.html




    From your comment, looks like you already have an ADC, so you could plug the Rad Shack mic directly into your LIO-8 with the right connector.




    However, if you want to measure the overall tonal quality of your system, you would be better off with a Behringer ECM8000 or Dayton EMM-6 or one of the measurement mics I mentioned earlier.




    If your LIO-8 has the mic pre option, then you won't need a separate mic preamp/phantom power.




    Cheers, Mitch

  14. I believe REW runs on the Mac: http://www.hometheatershack.com/roomeq/ Scroll down to the download section near the bottom of the page.




    Just under the download section are the help files, which provide tutorials on how to take measurements.




    There is also the REW forum to ask questions and get assistance. Great group of folks on the forum: http://www.hometheatershack.com/forums/rew-forum/ On the forum is another link with a walkthrough guide: http://www.hometheatershack.com/forums/rew-forum/11707-room-eq-wizard-rew-information-index-links-guides-technical-articles-please-read.html




    REW is free, works fantastically, and is easy to use.




    Aside from a mic stand (a camera tripod works too if you have one) and some cabling, a calibrated measurement microphone is required.




    I use http://www.content.ibf-acoustic.com/catalog/product_info.php?cPath=30&products_id=35 and have had excellent results. You could also try http://www.parts-express.com/pe/showdetl.cfm?Partnumber=390-801 but you will need a mic preamp with phantom power… I like the IBF Acoustic "kit" as both the mic and the mic preamp that comes with it are calibrated.




    With the REW software and calibrated mic, you can now take acoustic measurements. I like to correlate what I hear with what I measure and vice versa. It is great to experiment by moving the speakers/listening position around and listening and taking measurements. It is pretty quick to find the best spots in the room that sound and measure the best. That is if you have the flexibility to move the gear around.




    From there, it is a matter of how far you want to take your hobby with respect to adding acoustic treatment and/or digital room correction to further smooth the frequency response of your speaker to room interface.




    Others may chime in that have specific experience on the Mac.




    Cheers, Mitch

  15. The Scientist and Engineer's Guide to Digital Signal Processing. Free online version: http://www.dspguide.com/




    You can read the reviews on http://www.amazon.com/Scientist-Engineers-Digital-Signal-Processing/product-reviews/0966017633/ref=cm_cr_dp_all_summary?ie=UTF8&showViewpoints=1&sortBy=bySubmissionDateDescending




    I highly recommend this chapter (and the overall book, for that matter), which provides an intro-level science and engineering treatment of how ADCs and DACs work: http://www.dspguide.com/ch3.htm




    Topics include the Nyquist sampling theorem, quantization, dithering, aliasing, impulse trains, the sinc function, antialias filters, single-bit ADC and DAC, delta modulation, etc.




    It includes a mythbuster fact about analog versus digital signals. A few other audio myths are dispelled along the way, as it makes clear what does and does not affect the audio signal during ADC and DAC.




    Other chapters of interest:




    Sound Quality vs Data Rate: http://www.dspguide.com/ch22/3.htm




    “16/44 satisfies even the most picky audiophile. Better than human hearing.”
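    That claim rests on the dynamic range of 16-bit quantization. A quick sketch (Python/numpy; my own illustration, not from the book) comparing the textbook SNR formula with a direct measurement on a quantized full-scale sine:

```python
import numpy as np

def snr_theoretical_db(bits):
    # Textbook SNR of an ideal N-bit quantizer driven by a
    # full-scale sine: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

# Quantize a full-scale sine to 16 bits and measure the actual
# signal-to-quantization-noise ratio
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)      # 997 Hz: not an integer divisor of fs
q = np.round(x * 32767) / 32767      # 16-bit quantization
noise = x - q
snr_measured = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(snr_theoretical_db(16))   # 98.08 dB
print(snr_measured)             # close to the theoretical figure
```

    Roughly 98 dB from loudest sine to the quantization noise floor, before dither and noise shaping extend the perceived resolution further.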




    High Fidelity Audio: http://www.dspguide.com/ch22/4.htm




    “Audiophiles demand the utmost sound quality, and all other factors are treated as secondary. If you had to describe the mindset in one word, it would be: overkill. Rather than just matching the abilities of the human ear, these systems are designed to exceed the limits of hearing. It's the only way to be sure that the reproduced music is pristine. Digital audio was brought to the world by the compact laser disc, or CD. This was a revolution in music; the sound quality of the CD system far exceeds older systems, such as records and tapes. DSP has been at the forefront of this technology.”




    Human Hearing: http://www.dspguide.com/ch22/1.htm




    Timbre: http://www.dspguide.com/ch22/1.htm




    My opinion? I am still gathering data for my experiment plus I need to perform the ABX test as outlined nicely by audiventory and thanks to Barry for supplying state of the art recordings.

  16. Hi Barry,




    Thanks for indulging me. I don't disagree with anything you say. It is just an experiment.




    As to analogies, here is another. Some would argue this is the state of the art in movie production: http://www.imdb.com/title/tt0796366/ Watching the behind-the-scenes footage on how the movie was made is very insightful. No longer can most people tell what was physically shot in the studio or at a site location versus what was digitally enhanced/created by increasingly powerful computers and sophisticated software: http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg




    My experiment will include listening tests much like audiventory performed below. Both objective and subjective data will get equal weighting in the experiment.




    Re: Confluence. Congrats!




  17. Re: An interesting comparison of the data, even if it isn't of the two sources as they are distributed by Soundkeeper.




    I downloaded the files from http://soundkeeperrecordings.com/format.htm Are you saying those are the wrong files to be using? If so, can you provide me with a location that has the right files? Thanks.




    Re: I suggest a different test that might more closely reflect how most folks would experience the two: listening.




    As a recording/mixing engineer, I have access to multi-million dollar studio facilities to listen in here on the West Coast. I also have a calibrated listening environment in my home that I have measured and documented in other parts of my blog. Listening is not the issue at this time. Please indulge me.




    Our ears are wonderfully adaptable; in fact, we are susceptible to auditory illusions: http://en.wikipedia.org/wiki/Auditory_illusion As a recording/mixing engineer, I count on the ability to fool people's ears into believing, for example, that a recording was made in a much bigger space than it actually was. Aside from the multitude of digital processors, I can convolve instruments with the impulse responses of famous halls: http://www.audioease.com/IR/audioeaseirs.html to make a recording sound like it was made in that hall. Even people who know a particular concert hall intimately can be fooled.




    As described at the front of my post, Monty’s article suggests that 16/44 is good enough and 24/192 is not required. So the purpose of my “experiment” is to correlate what we hear with what we measure and vice versa. If we are hearing a difference, then the audio signal must have been altered in some way. If it has been altered, then the difference can be measured.




    I am looking for both objective and subjective data points to support or deny the claim. Not just objective or subjective data, but both. It has been my experience that there is a direct correlation between the two. That is what I am hoping the outcome of my experiment will be. I have no vested interest one way or the other as to the final outcome. It's just an experiment :-)




    I intend to follow-up with the listening tests, but right now I am making some measurements first.




    Barry, if you have preferred files for me to conduct both subjective listening and objective measurement tests on, please let me know where I can download them.









  18. re: "May I suggest you use two of the files from Barry Diamant's site instead?"




    That's where the two files came from... click on the link I provided in the post.




    re: "...Since upsampling doesn't restore any information that may have been lost"




    That's the point of the test! The point is to either agree or disagree with Monty's post that states 24/192 is a waste of time because 16/44 is good enough and we can't hear the difference.




    According to my DiffMaker test, and listening to the attached difference file, the results indicate he may very well be right...

  19. My “null” test above included the digital to analog conversion and analog line output stage, while recording the results in real-time on a different computer. The reason I did this was to capture any computer noise, jitter, and any other possible artifacts that could be a potential cause for any possible audible differences between the two music players.




    As it turned out, even in this worst possible case, including sample rate drift between the playback and recording computers, the Audio DiffMaker result was -90 dB. In other words, an inaudible difference between the two music players relative to the program level.
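    The arithmetic behind a null test is straightforward: subtract one aligned capture from the other and express the power of what remains relative to the program level. Here is a minimal, hypothetical sketch of that calculation (the real Audio DiffMaker also compensates for gain, delay, and sample-rate drift before subtracting, which this sketch assumes has already been done):

```python
import math

def residual_db(reference, test):
    """Level of the difference signal (test minus reference), in dB
    relative to the reference program level. Assumes the two sample
    lists are already time- and gain-aligned."""
    diff_power = sum((t - r) ** 2 for r, t in zip(reference, test))
    ref_power = sum(r ** 2 for r in reference)
    if diff_power == 0:
        return float("-inf")  # perfect null: the two captures are identical
    return 10 * math.log10(diff_power / ref_power)

# A 440 Hz tone, and the same tone with a 1 kHz artifact 60 dB down.
fs = 44100
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
dirty = [s + 0.001 * math.sin(2 * math.pi * 1000 * n / fs)
         for n, s in enumerate(tone)]

print(residual_db(tone, tone))          # -inf: identical signals null completely
print(round(residual_db(tone, dirty)))  # -60: the artifact's level vs program
```

    This is why a -90 dB result is an inaudible difference: the residue sits roughly 90 dB below the music itself.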




    The DiffMaker test below is using the digital loopback feature of my Lynx L22 sound card: http://www.lynxstudio.com/support_faq_result.asp?c=32




    What this means is that I am able to record the digital output of my Lynx L22 directly to the Audio DiffMaker program on the same computer and eliminate any sample rate drift, the digital to analog conversion stage, and the 2nd computer. Here is the result, using the same Tom Petty 24/96 FLAC file that I used for my tests above.








    The result, worst case: -133 dB. Inaudible. Attached is the difference file. It is interesting and educational to hear what is left over. I encourage you to listen to the difference, because this is exactly what should be left over when comparing two bit-perfect music players.




    What does this mean? The two players are identical with respect to bit-perfect playback on my computer, whether I null test at the digital or the analog outputs. And whether I measure or ABX with my ears, I hear (and measure) no difference whatsoever between the two players.




    This is by design: http://en.wikipedia.org/wiki/Bit-perfect “In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough.”
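    That definition can be checked mechanically: "unaltered passthrough" means the captured digital output is byte-for-byte identical to the decoded source, so comparing hashes of the two PCM streams settles it. A minimal sketch, where the byte string stands in for real decoded audio:

```python
import hashlib

def is_bit_perfect(source_pcm: bytes, captured_pcm: bytes) -> bool:
    # Bit-perfect playback means byte-for-byte identity, so the
    # digests of the two PCM streams must match exactly.
    return hashlib.sha256(source_pcm).digest() == hashlib.sha256(captured_pcm).digest()

pcm = bytes(range(256)) * 64                        # stand-in for decoded PCM samples
assert is_bit_perfect(pcm, pcm)                     # unaltered passthrough passes
assert not is_bit_perfect(pcm, pcm[:-1] + b"\x01")  # a single altered byte fails
```

    A digital loopback capture, like the Lynx L22 test above, is exactly what you would feed into such a comparison.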

  20. from 1audio: The next step is to try to validate that the tool will show an audible difference of the sort we are interested in. Its pretty important to verify that it does detect significant differences.




    I think comparing a .wav with a 64K mp3 would be the next clear step to confirm that a well accepted sonic difference is easily detectable and at what level.




    I compared the Tom Petty original FLAC to a LAME-encoded 320 kbps MP3 so I can correlate that with the work I already performed here: http://www.computeraudiophile.com/entries/94-FLAC-vs-WAV-vs-MP3-vs-M4A-Experiment The difference is that that was a null test at the file-format level. I have attached the results of that test (FLAC vs MP3 File Null test.zip) so it can be sonically compared with the results of the Audio DiffMaker test.




    Here is the result of the Audio DiffMaker test:








    The difference is about -40 dB, quite the difference from the -90 dB when comparing FLAC vs WAV in the same test configuration.




    I have attached the MP3 difference file (FLAC vs MP3 Audio DiffMaker test.zip)




    If you listen to both difference files, one from the file null test and the other from the DiffMaker null test (at the analog outs of my gear), they correlate extremely well. You can hear the same annoying high-frequency residue that is left over as a result of comparing a lossless file format to a lossy one (using the highest-quality MP3 encoder and settings).




    How close is the correlation? You can listen with your ears and you can measure it as well.




    Here is the Audacity frequency spectrum of the null "file" test result:








    And frequency spectrum of the "DiffMaker" test result:








    You can hear the correlation with your ears and see the correlation with your eyes. As mentioned before, I like a "balanced view" between objective measures and subjective listening tests.




    Just to call out what is happening: the first "file" null test compared FLAC vs MP3 at the file-format level. The Audio DiffMaker test compares the same, but includes the computer signal chain of music player -> ASIO driver -> Lynx L22 DAC -> Lynx analog outputs, recorded in real time on a different computer running Audio DiffMaker.




    If you think about it, one test is comparing just the file formats, the other is comparing the file formats and the rest of the playback chain to the analog outs. Look how close the waveforms match (and by listening!) even when introducing the playback chain into the measurement equation.




    Given what I hear and what I measure, and the direct correlation to another null test approach, this validates that Audio DiffMaker is a sensitive and accurate measurement tool.

  21. I think comparing a .wav with a 64K mp3 would be the next clear step to confirm that a well accepted sonic difference is easily detectable and at what level. My guess is that the difference will be on the order of 20 dB below program level with a really annoying sounding "residue".




    Agreed. I performed a file null test with FLAC vs MP3 at: http://www.computeraudiophile.com/entries/94-FLAC-vs-WAV-vs-MP3-vs-M4A-Experiment I have attached the difference file to that blog post, and it sounds exactly as you describe. It will be interesting to see if DiffMaker corroborates the result.




    On another note, I was asked to perform the same DiffMaker test on the digital output of the Lynx L22 sound card, but not including the digital to analog conversion and analog output stage.




    The Lynx L22 card supports loopback mode: http://www.lynxstudio.com/support_faq_result.asp?c=32




    I used a music player to play TP Refugee 24/96 FLAC/WAV and recorded the waveforms in real time with Audio DiffMaker on my computer: one pass for FLAC and one for WAV.




    Here is the difference result:
















    "Lossless compression formats enable the original uncompressed data to be recreated exactly."




    With respect to Audio DiffMaker and how it works:




    AES Paper: http://www.libinst.com/AES%20Audio%20Differencing%20Paper.pdf




    Slides: http://www.libinst.com/Detecting%20Differences%20(slides).pdf




