
Are we just kidding ourselves?



It's worth pointing out, regarding the research on human hearing exceeding the Fourier limit, that the best single result by a single human subject was 3 milliseconds of timing discrimination. 16/44 has no problem with such an interval. Further, the reason human hearing can exceed the Fourier limit is thought by the researchers to be non-linearity in the hearing system. This Fourier limit is about jointly discriminating the frequency and time of signals, not pure time or pure frequency.

 

So while a linear system like PCM at any rate would be subject to Fourier limits, that doesn't mean what is being implied: namely, that PCM in the normal 16/44 Redbook format is unable to portray timing equal to or surpassing that of human hearing, with the further implication that higher sample rates help with this issue.

 

I think I want to unpack this a little. It is absolutely true that this Fourier uncertainty limit has to do with a pair of conjugate variables, time and frequency. I don't know if you felt I was the one implying 16/44.1 PCM is unable to portray timing equal to or surpassing that of human hearing by reference to the experiment showing human hearing is capable of time/frequency discrimination exceeding the Fourier limit, but that wasn't what I intended. What I connect that experiment with is filtering, and whether the compromise that must be made in filter design between time domain and frequency domain optimization would in principle be inaudible. My take from this experiment, which may be completely off base, is that one cannot say in principle that this compromise would be inaudible. That would still leave room to determine experimentally how much variation in filter behavior, and in which parameters, is required for the differences to be audible to most people.
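For reference, the limit in question is usually written for RMS widths as

\Delta t \cdot \Delta f \ \ge\ \frac{1}{4\pi}

where \Delta t and \Delta f are the effective widths of a signal in time and in frequency; the exact constant depends on how those widths are defined. The experiment showed subjects beating this joint time-frequency bound, which the researchers attribute to non-linearity in the hearing system rather than to anything a linear spectral analyzer could do.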

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

I've put a lot of thought and experimentation into the subject of this thread over the years in my DAC design endeavors.

 

It started when I built a PCM1704 DAC and listened to it NOS. It sounded very alive, exciting, "pulling you into the music", but "dirty": the infamous aliases were adding "roughness around the edges" to most music.

 

Then I tried a DF1704, which many were touting as the best digital filter to use. Well, it was a massive disappointment: the sound was clean, but it was dull, flat, uninvolving, the exact opposite of "pull you into the music". This started a fairly long quest to find out why the music sounded so dull and lifeless.

 

I won't go into the details of the quest, but the eventual result was that the very common practice of cascading multiple filters seems to be the problem. An FPGA implementing a single SINC filter with the same overall filter characteristics as the DF1704 sounds WAY better. Note that I am not exactly sure WHY cascading multiple filters causes this lifeless sound; that has been a much harder nut to crack. The evidence seems to point at the time domain behavior of the filter on transients, but there is a LOT more work to do in fleshing this out.

 

What this has to do with the subject of this thread is that many DAC chips change their upsampling filter depending on the sample rate. At 44.1/48 most use a 3-stage filter, a 2-stage at 88.2/96, a single-stage filter at 176.4/192, and no filter at 352.8/384. As far as I can tell this is one of the primary reasons higher sample rates sound better to many people: it has nothing to do with hearing ultrasonics; it's that in many cases going with higher rates cuts down on the number of cascaded filters in the DAC chip, and THAT is what makes it sound better.
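To make the cascaded-stages picture concrete, here is a rough Python/numpy/scipy sketch of an 8x oversampler built as three cascaded 2x stages versus a single-stage 8x windowed-sinc filter. The tap counts and cutoffs are invented for illustration only and are not taken from any DAC datasheet.

import numpy as np
from scipy import signal

fs_in = 44100

# Three cascaded 2x stages (hypothetical tap counts per stage)
imp = np.array([1.0])
rate = fs_in
for taps in (63, 31, 15):
    rate *= 2
    h = signal.firwin(taps, 22050, fs=rate)    # lowpass at the original Nyquist
    up = np.zeros(len(imp) * 2)
    up[::2] = imp                               # zero-stuff by 2
    imp = np.convolve(up, h)                    # filter at the new rate
print(f"cascaded 8x: net impulse response of {len(imp)} samples at {8 * fs_in} Hz "
      f"(~{1000 * len(imp) / (8 * fs_in):.2f} ms)")

# Single-stage 8x windowed-sinc filter (hypothetical length)
h_single = signal.firwin(255, 22050, fs=8 * fs_in)
print(f"single-stage 8x: impulse response of {len(h_single)} samples at {8 * fs_in} Hz "
      f"(~{1000 * len(h_single) / (8 * fs_in):.2f} ms)")

The point of the sketch is only the structure - zero-stuff, filter, repeat - and how the net impulse response of the cascade compares with a single kernel; whether one or the other actually sounds better is exactly the open question being raised here.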

 

So if it makes things sound so bad, why do almost all the DAC chips do it that way? As far as I can tell it is a combination of cost and specsmanship. For some reason the stop band attenuation number has become one of the important numbers on a DAC chip spec sheet. ("Our chip has 120 dB attenuation"; "well, OUR chip has 134 dB", etc.) Getting these numbers takes a lot of digital horsepower, which would make chips cost too much, so the designers need to use things like cascaded filters in order to get those numbers for the spec sheets. But if it causes such problems, why haven't they come up with other approaches? Probably because the practice is so pervasive that no DAC chip designer has ever heard it any other way. And it doesn't take a megabuck system to hear the difference.

 

BTW this does NOT mean that all single-stage filters sound the same. As a matter of fact, once you get out of the multistage filter implementations, differences between filters can be quite striking. (This is why I think many people can tell very little difference between the slope settings on DAC chips that offer the option: the effect of the multistage filtering is swamping the differences in slope, etc.) There is still a lot of work to do in producing really good sounding digital filters.

 

There seems to be a strongly held assumption by a lot of people in this "biz" that stop band attenuation is the primary metric for a digital filter. This leads to filters with very large numbers of taps, and my experimentation seems to point to this being a bad thing. Keeping the convolution kernel small seems to have a much greater impact on sound than the ultimate stop band attenuation. Exactly WHY that is so, who knows; it has to do with human perception and how that wonderfully complicated pattern-matching system takes in the sound field and determines "that sounds real" or "that sounds not-real".
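As a rough rule of thumb (assuming a linear-phase FIR, which is the common case for these oversampling filters), an N-tap kernel running at output rate f_s spans

T \approx \frac{N - 1}{f_s}

seconds, with roughly half of that appearing as pre-ringing before a transient and half as post-ringing after it. So, for example, a hypothetical 16,000-tap filter at 352.8 kHz stretches a single-sample event over about 45 ms, while a few hundred taps keep it well under a millisecond.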

 

With some decent attention being given to filter design and how it sounds, not just spec sheet numbers, I think the overall level of sound quality from digital audio is going to increase significantly in the next few years, even if the designers don't know exactly WHY.

 

John S.

Link to comment

I gotta step back and apologize a bit to Jud and the HiRes camp with regard to capturing or recording in the highest resolution possible. My thoughts come after examining yet again the dismal state of transducers....and how far behind speakers are in the context of audio. If and when....hopefully someday soon......a new speaker technology is introduced that can faithfully reproduce this content, it would be sad to have to feed them junk because that's all that's available. I'm with you guys on this one!.....USB cables and power supplies, not so much! Lol

 

What would really make me happy is the death of stereo altogether, replaced with a well-thought-out and developed multichannel format without wacky spatial effects......twelve discrete channels of full range, high definition audio and a data track for processing. If thin membrane transducers ever do appear, we're all in for a real treat!

Link to comment
I've put a lot of thought and experimentation into the subject of this thread over the years in my DAC design endeavors.

 

It started when I built a PCM1704 DAC and listened to it NOS. It sounded very alive, exciting, "pulling you into the music", but "dirty": the infamous aliases were adding "roughness around the edges" to most music.

 

Then I tried a DF1704, which many were touting as the best digital filter to use. Well, it was a massive disappointment: the sound was clean, but it was dull, flat, uninvolving, the exact opposite of "pull you into the music". This started a fairly long quest to find out why the music sounded so dull and lifeless.

 

I won't go into the details of the quest, but the eventual result was that the very common practice of cascading multiple filters seems to be the problem. An FPGA implementing a single SINC filter with the same overall filter characteristics as the DF1704 sounds WAY better. Note that I am not exactly sure WHY cascading multiple filters causes this lifeless sound; that has been a much harder nut to crack. The evidence seems to point at the time domain behavior of the filter on transients, but there is a LOT more work to do in fleshing this out.

 

What this has to do with the subject of this thread is that many DAC chips change their upsampling filter depending on the sample rate. At 44.1/48 most use a 3-stage filter, a 2-stage at 88.2/96, a single-stage filter at 176.4/192, and no filter at 352.8/384. As far as I can tell this is one of the primary reasons higher sample rates sound better to many people: it has nothing to do with hearing ultrasonics; it's that in many cases going with higher rates cuts down on the number of cascaded filters in the DAC chip, and THAT is what makes it sound better.

 

So if it makes things sound so bad, why do almost all the DAC chips do it that way? As far as I can tell it is a combination of cost and specsmanship. For some reason the stop band attenuation number has become one of the important numbers on a DAC chip spec sheet. ("Our chip has 120 dB attenuation"; "well, OUR chip has 134 dB", etc.) Getting these numbers takes a lot of digital horsepower, which would make chips cost too much, so the designers need to use things like cascaded filters in order to get those numbers for the spec sheets. But if it causes such problems, why haven't they come up with other approaches? Probably because the practice is so pervasive that no DAC chip designer has ever heard it any other way. And it doesn't take a megabuck system to hear the difference.

 

BTW this does NOT mean that all single-stage filters sound the same. As a matter of fact, once you get out of the multistage filter implementations, differences between filters can be quite striking. (This is why I think many people can tell very little difference between the slope settings on DAC chips that offer the option: the effect of the multistage filtering is swamping the differences in slope, etc.) There is still a lot of work to do in producing really good sounding digital filters.

 

There seems to be a strongly held assumption by a lot of people in this "biz" that stop band attenuation is the primary metric for a digital filter. This leads to filters with very large numbers of taps, and my experimentation seems to point to this being a bad thing. Keeping the convolution kernel small seems to have a much greater impact on sound than the ultimate stop band attenuation. Exactly WHY that is so, who knows; it has to do with human perception and how that wonderfully complicated pattern-matching system takes in the sound field and determines "that sounds real" or "that sounds not-real".

 

With some decent attention being given to filter design and how it sounds, not just spec sheet numbers, I think the overall level of sound quality from digital audio is going to increase significantly in the next few years, even if the designers don't know exactly WHY.

 

John S.

 

+1, and this summarizes the whole "battle", I think!

C'mon, let's stop the race of specs vs. BS and concentrate on the music as a whole. IMHO, it doesn't matter what sampling rate of files you feed your DAC (at least up to a point); what really matters is the DAC and digital filtering implementation, together with power supply quality, I think - and it does not require the megabuck equipment so often mentioned here.

--

Krzysztof Maj

http://mkrzych.wordpress.com/

"Music is the highest form of art. It is also the most noble. It is human emotion, captured, crystallised, encased… and then passed on to others." - By Ken Ishiwata

Link to comment
What's all the stuff in the various journal papers about the great difficulty of modeling non-sinusoidal sounds?

 

As I mentioned earlier, there is only "difficulty" if you limit the number of Fourier components you are "allowed" to use. It would be analogous to asking why 128 kbit MP3 is worse at reproducing signals than 320 kbit MP3.
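A minimal sketch of that point in Python/numpy (the test signal and the component counts are arbitrary choices): reconstruct a short transient from only the lowest-frequency components and watch the error shrink as more components are allowed.

import numpy as np

n = 1024
x = np.zeros(n)
x[100:108] = 1.0                      # a short rectangular transient

X = np.fft.rfft(x)
for keep in (8, 64, len(X)):          # how many of the lowest-frequency components to keep
    Xk = np.zeros_like(X)
    Xk[:keep] = X[:keep]
    xk = np.fft.irfft(Xk, n)
    print(f"{keep:4d} components kept -> max reconstruction error {np.max(np.abs(xk - x)):.3f}")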

Link to comment

The limit is imposed by the sampled frequency cutoff (22.05 kHz in the case of Redbook) replacing an infinite upper bound. Since the FT is linear, even though the finite limit imposes an approximation, the idea is that we will not be able to resolve the difference between the reconstructed function and the minimal one.
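For reference (this is standard sampling theory, not anything specific to the paper under discussion), the ideal band-limited reconstruction implied by that cutoff is the Whittaker-Shannon interpolation:

x(t) = \sum_{n} x[n] \, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right), \qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u},

with T = 1/44100 s for Redbook. Any content above 1/(2T) = 22.05 kHz in the original signal is simply absent from the reconstruction.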

 

However, if the paper you posted is true (i.e., that the brain doesn't do a Fourier transform), it might be that a reconstructed impulse could be audibly distinguished. I use "could be" here both to indicate I may be mistaken and also it might not be practically discernible.

Link to comment
You are concentrating too much on the wrong part of the analogy, I think. That's possibly my fault, but on the other hand, I'm sure you get my point: We can't see ultraviolet light just as we can't hear ultrasonic frequencies. They both might have an effect on parts of the human body (UV gives one a sun-tan/sun-burn, ultra-sonics may be detectable through skin or bone conduction), but neither of these seem to be detectable by the organs of the body dedicated to seeing light or hearing sound.

 

 

 

What parts are missing? Dynamic range, lack of distortion, flat frequency response, a truly coherent sound-stage, omnidirectional radiation pattern, etc. But the thing that tells me that ultrasonics are NOT one of the primary differences between live and "canned" music is the fact that, in spite of ultrasonics being hyper-directional, one can be walking down the street, pass an open door to a bar or night club, and tell instantly that there is live music playing inside. No signs are needed, no one has to announce it, one just knows! It's very doubtful that any ultrasonic frequencies can turn corners to make it out the front door. I've experienced this phenomenon many times (as I'm sure many here have as well). The best way to dramatically experience it is to walk down Bourbon Street at night in New Orleans: this place has live jazz, this place possibly has live music but I'm hearing sound reinforcement equipment (yeccchh!), and this next place is playing music out of a jukebox. It's often startling how apparent live music is under these circumstances.

 

Ah, yes, we are pretty much in agreement here along these lines. I am a little reluctant to cut out the ultrasonics, not so much because of their possible perception effects, but because using a high sample rate (including the ultrasonics) simply makes for better reproduction. The reasons may be legion, but the effect itself appears to be very clear.

 

I do understand about the difference with live music, but even rather modest systems today can be extremely realistic with simple things - like vocals or simple guitar work. I have, upon occasion, had to stop and ask myself if that music I am hearing is live or recorded.

Again, the part about UV giving a sunburn was a bit of hyperbole on my part, done to make a point and you're right. Our video technology does not produce enough UV to harm us. On the other hand, the part about humans being unable to see UV in the same way (and for the same reason) that we cannot hear ultrasonics (that our sensory apparatus is blind/deaf to these wavelengths) is valid, I think.

 

Sure, we cannot directly hear 30 kHz - no matter what anyone says.

 

But I am not aware of any authoritative studies on the second-order effects of ultrasonic resonance in materials found in common listening rooms either. I know ultrasonic resonance can be and is used to detect faults in materials similar to drywall. It is possible, therefore, that humans may be sensitive to such resonances, perhaps in ways not clearly understood. Also, it raises my hackles a bit when some of the usual crowd puts out their same old stuff. It is the definition of insanity when you do the same thing over and over and over again with exactly the same results. (grin)

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment
As I mentioned earlier, there is only "difficulty" if you limit the number of Fourier components you are "allowed" to use. It would be analogous to asking why 128 kbit MP3 is worse at reproducing signals than 320 kbit MP3.

 

Hmm, so a paper that says "If we limit ourselves to really crappy filtering, the result won't be good" was considered significant enough for publication?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
The limit is imposed by the sampled frequency cutoff (22.05 kHz in the case of Redbook) replacing an infinite upper bound. Since the FT is linear, even though the finite limit imposes an approximation, the idea is that we will not be able to resolve the difference between the reconstructed function and the minimal one.

 

However, if the paper you posted is true (i.e., that the brain doesn't do a Fourier transform), it might be that a reconstructed impulse could be audibly distinguished. I use "could be" here both to indicate I may be mistaken and also it might not be practically discernible.

 

Agreed, and I'd add speculatively only that beyond the question of whether or not the reconstructed impulse could be distinguished, it would be interesting to see what characteristics of a reconstruction result in its being perceived as more or less "real" by the processing actually done in the ear-brain, versus those that are important for better reconstruction via Fourier transform.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
Hmm, so a paper that says "If we limit ourselves to really crappy filtering, the result won't be good" was considered significant enough for publication?

 

That particular paper had nothing to do with filtering. It said (I am way overgeneralizing):

 

"If you have a transient signal, then using only a handful of sinusoidal frequencies to reconstruct doesn't work well. Instead, use a handful of our super-special not-quite-sinusoidal components and it works better"

 

It's basically a paper saying that one compression and decompression algorithm works better than another.

Link to comment

I won't go into the details of the quest, but the eventual result was that the very common practice of cascading multiple filters seems to be the problem. An FPGA implementing a single SINC filter with the same overall filter characteristics as the DF1704 sounds WAY better. Note that I am not exactly sure WHY cascading multiple filters causes this lifeless sound; that has been a much harder nut to crack. The evidence seems to point at the time domain behavior of the filter on transients, but there is a LOT more work to do in fleshing this out.

 

What this has to do with the subject of this thread is that many DAC chips change their upsampling filter depending on the sample rate. At 44.1/48 most use a 3-stage filter, a 2-stage at 88.2/96, a single-stage filter at 176.4/192, and no filter at 352.8/384. As far as I can tell this is one of the primary reasons higher sample rates sound better to many people: it has nothing to do with hearing ultrasonics; it's that in many cases going with higher rates cuts down on the number of cascaded filters in the DAC chip, and THAT is what makes it sound better.

 

BTW this does NOT mean that all single-stage filters sound the same. As a matter of fact, once you get out of the multistage filter implementations, differences between filters can be quite striking. (This is why I think many people can tell very little difference between the slope settings on DAC chips that offer the option: the effect of the multistage filtering is swamping the differences in slope, etc.) There is still a lot of work to do in producing really good sounding digital filters.

 

There seems to be a strongly held assumption by a lot of people in this "biz" that stop band attenuation is the primary metric for a digital filter. This leads to filters with very large numbers of taps, and my experimentation seems to point to this being a bad thing. Keeping the convolution kernel small seems to have a much greater impact on sound than the ultimate stop band attenuation. Exactly WHY that is so, who knows; it has to do with human perception and how that wonderfully complicated pattern-matching system takes in the sound field and determines "that sounds real" or "that sounds not-real".

 

John S.

 

Very interesting, and thank you for what is as usual a very informative contribution, John. I'm going to have a re-look at my Audirvana Plus "filter max length" oversampling setting, which as I (possibly mis-) understand it, represents fewer taps the shorter it is. Meanwhile, this may also affect whether people want to upsample in software if they have DACs that filter in one step rather than using a cascade. (These include the Ayre QB-9 and I believe the Benchmarks, though I could be wrong about the latter.) A question, John: Have you looked at whether and to what degree this cascade effect applies to *offline* filtering? That is, have you tried to determine whether fewer conversions/filters between recording and listening makes for better sound? Should sigma-delta modulation be included among the conversions/filters one considers?
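As a purely illustrative example of what a shorter versus longer kernel means for offline upsampling (this is not Audirvana's actual algorithm; the tap counts and cutoff below are assumptions), here is a Python/scipy sketch of 4x upsampling from 44.1 kHz:

import numpy as np
from scipy import signal

fs_in, up = 44100, 4
x = np.random.randn(fs_in)              # one second of stand-in "audio"

for taps in (127, 4095):                 # a "short" versus a "long" kernel
    h = up * signal.firwin(taps, 20000, fs=fs_in * up)   # lowpass below 22.05 kHz, gain-compensated
    y = signal.upfirdn(h, x, up=up)                       # zero-stuff by 4, then filter
    print(f"{taps} taps: kernel spans {1000 * (taps - 1) / (fs_in * up):.2f} ms at {fs_in * up} Hz")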

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
By the same logic, we should have high definition cellular communication? Where would you like to draw the proverbial line in the sand?

 

What do you have against getting hi res (or hi def video) from your phone? ;D

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
That particular paper had nothing to do with filtering. It said (I am way overgeneralizing):

 

"If you have a transient signal, then using only a handful of sinusoidal frequencies to reconstruct doesn't work well. Instead, use a handful of our super-special not-quite-sinusoidal components and it works better"

 

It's basically a paper saying that one compression and decompression algorithm works better than another.

 

Re your hypothetical quote, I agree. Re "It's basically...," you're still saying this is all about compression and decompression and not about audio?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
Re your hypothetical quote, I agree. Re "It's basically...," you're still saying this is all about compression and decompression and not about audio?

 

It would be pretty analogous to discussing which compression algorithm - MP3, AAC, Vorbis, etc. - sounds better. The results in the paper have little to do with DACs and amplifiers and speakers, since they don't know what the file format originally was.

Link to comment
By the same logic, we should have high definition cellular communication? Where would you like to draw the proverbial line in the sand?

 

That would be luverly! Remember the days of analog mobile phones, and how it would sound as if you were in the same room even if in different countries? Now I can barely understand (especially ATT) a mobile caller even if around the corner.

 

And on that note how about Sirius radio with high def ( or even regular def) as opposed to that unlistenable shite quality.

 

:)

Link to comment
That would be luverly! Remember the days of analog mobile phones, and how it would sound as if you were in the same room even if in different countries? Now I can barely understand (especially ATT) a mobile caller even if around the corner.

 

And on that note how about Sirius radio with high def ( or even regular def) as opposed to that unlistenable shite quality.

 

:)

 

As an ex Principal Telecommunications Officer with Telstra, I couldn't agree more. We went to great lengths to ensure crystal clear phone calls. The rot set in when the national carriers (U.K. too?) were opened up to those out to make a quick buck.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment

Well, you could try this. Take your nicest 192 kHz recording. Duplicate it in Audacity or a similar sound editor. Invert the second copy. EQ the first copy so it has a brickwall digital filter at 20 kHz. Mix the two and see what is left. Of course everything above 20 kHz will be there, along with any of the transient-killing effects of filter ringing. Play the result and see if you can hear anything.

 

You can use a filter with many more taps than is typically found in hardware DACs, so the transient smearing effects should be extreme.

 

Do be careful with the EQ in Audacity; it is easy to slide the line so you get a gain change you didn't intend, which will result in residual signals. There is a small trick to prevent that.

 

All the effects of the filter ringing are in the transition band of 20-24 kHz, and they were just as audible as those in the 80-96 kHz transition band, which is to say I heard nothing.
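For anyone who would rather script the same null test than click through Audacity, here is a rough Python equivalent (the filename, filter length and cutoff are placeholders; it assumes a 192 kHz WAV and the numpy, scipy and soundfile packages):

import numpy as np
import soundfile as sf
from scipy import signal

x, fs = sf.read("track_192k.wav")           # hypothetical 192 kHz source file
h = signal.firwin(4001, 20000, fs=fs)       # steep linear-phase lowpass at 20 kHz
# filtfilt runs the filter forward and backward: zero phase, so the filtered copy
# stays time-aligned with the original (no manual inversion and mixing needed)
lp = signal.filtfilt(h, [1.0], x, axis=0)

residual = x - lp                           # everything the 20 kHz brickwall removed
print(f"residual peak: {20 * np.log10(np.max(np.abs(residual)) + 1e-12):.1f} dBFS")
sf.write("residual.wav", residual, fs)      # listen and see if anything is audible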

And always keep in mind: Cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them, it's nothing to be ashamed about, but it is something that affects our objective evaluation of reality.

Link to comment
Well, you could try this. Take your nicest 192 kHz recording. Duplicate it in Audacity or a similar sound editor. Invert the second copy. EQ the first copy so it has a brickwall digital filter at 20 kHz. Mix the two and see what is left. Of course everything above 20 kHz will be there, along with any of the transient-killing effects of filter ringing. Play the result and see if you can hear anything.

 

You can use a filter with many more taps than is typically found in hardware DACs, so the transient smearing effects should be extreme.

 

Do be careful with the EQ in Audacity; it is easy to slide the line so you get a gain change you didn't intend, which will result in residual signals. There is a small trick to prevent that.

 

All the effects of the filter ringing are in the transition band of 20-24 kHz, and they were just as audible as those in the 80-96 kHz transition band, which is to say I heard nothing.

 

Dennis, I don't think this is a particularly accurate reading of John's comment. He didn't venture even a guess regarding the reason more taps might affect the sound. (I think Miska may have written some comments about the effects of number of taps, and I've also heard offline from someone else in the filter design business that minimizing number of taps might improve sound, at least in certain circumstances.) He said the deleterious impact of cascading filters *seemed* to come from the effect on transients, but he said much more work needed to be done on that. I take it from this last statement that if it were as easy as running a file or two through Audacity the conclusion wouldn't have been provisional and wouldn't have required a lot of work to confirm.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
That would be luverly! Remember the days of analog mobile phones, and how it would sound as if you were in the same room even if in different countries? Now I can barely understand (especially ATT) a mobile caller even if around the corner.

 

And on that note how about Sirius radio with high def ( or even regular def) as opposed to that unlistenable shite quality.

 

:)

 

I should have been clearer......the HD voice signal would be lost on the little micro speaker in the phone. We can't get high end two and three way stand alone systems to do it justice as it is.

Link to comment
Very interesting, and thank you for what is as usual a very informative contribution, John. I'm going to have a re-look at my Audirvana Plus "filter max length" oversampling setting, which as I (possibly mis-) understand it, represents fewer taps the shorter it is. Meanwhile, this may also affect whether people want to upsample in software if they have DACs that filter in one step rather than using a cascade. (These include the Ayre QB-9 and I believe the Benchmarks, though I could be wrong about the latter.) A question, John: Have you looked at whether and to what degree this cascade effect applies to *offline* filtering? That is, have you tried to determine whether fewer conversions/filters between recording and listening makes for better sound? Should sigma-delta modulation be included among the conversions/filters one considers?

 

Back to our discussion about 8X and downsampling :-)

 


Link to comment
Dennis, I don't think this is a particularly accurate reading of John's comment. He didn't venture even a guess regarding the reason more taps might affect the sound. (I think Miska may have written some comments about the effects of number of taps, and I've also heard offline from someone else in the filter design business that minimizing number of taps might improve sound, at least in certain circumstances.) He said the deleterious impact of cascading filters *seemed* to come from the effect on transients, but he said much more work needed to be done on that. I take it from this last statement that if it were as easy as running a file or two through Audacity the conclusion wouldn't have been provisional and wouldn't have required a lot of work to confirm.

 

Well, it wasn't directed at John's comment, just at the topic in general. It seems all these worried-about filter effects are ultrasonic. They are real enough, but inaudible. So my proposed procedure was just seeing what a lower-frequency digital brickwall left vs. one at 4 times the sample rate. I repeated it by running it through a different series of gentler filters 3 times. Same result with none of the residuals being audible.

 

I also generated some digital single-sample impulses - the kind that cause the nasty-looking graphs with pre- and post-echo covering, in extreme cases, a few milliseconds. I subtracted the pure digital impulse from the same signal after it had undergone brickwall filtering, so all that was left was the filter effects. You could see it in an FFT, you could see it in the waveform. You couldn't hear anything. I then slowed down the speed of playback - the same file just played at 50% and then 25% of normal speed. When you do that you can hear the results of the filter as a high-pitched momentary 'tick' sound, again because it appears those effects of filtering are pushed to an ultrasonic region, where in normal playback they are a non-issue.
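A sketch of that impulse experiment in Python (the sample rate, tap count and filenames are arbitrary choices, not the exact files used above):

import numpy as np
import soundfile as sf
from scipy import signal

fs = 44100
x = np.zeros(fs)                          # one second of silence
x[fs // 2] = 0.5                          # a single-sample impulse in the middle

h = signal.firwin(2047, 20000, fs=fs)             # steep linear-phase lowpass near Nyquist
y = signal.fftconvolve(x, h, mode='same')         # 'same' keeps the response centered on the impulse

residual = y - x                                   # only what the filter added or removed
sf.write("ringing_44k.wav", residual, fs)          # at normal speed the ringing sits near 20 kHz
sf.write("ringing_slowed.wav", residual, fs // 4)  # same samples at 1/4 speed: the ringing drops into the clearly audible range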

And always keep in mind: Cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them, it's nothing to be ashamed about, but it is something that affects our objective evaluation of reality.

Link to comment
I repeated it by running it through a different series of gentler filters 3 times. Same result with none of the residuals being audible.

 

Surprise, Surprise.

But you went into this not expecting to hear any difference, didn't you?

If someone else said they heard a difference, you would be likely to put it down to the same reason (expectation bias), wouldn't you?

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
Well, it wasn't directed at John's comment, just at the topic in general. It seems all these worried-about filter effects are ultrasonic. They are real enough, but inaudible. So my proposed procedure was just seeing what a lower-frequency digital brickwall left vs. one at 4 times the sample rate. I repeated it by running it through a different series of gentler filters 3 times. Same result with none of the residuals being audible.

 

But you failed to address the subjective effect of cascading filters noted by John in his post. These threads don't lead anywhere but to theorizing about the theoretical. These differences need to be substantiated conclusively before theorizing about their causes can begin effectively. This is troubleshooting 101.......confirmation first, then appropriate diagnosis and solution, followed again by confirmation.

Link to comment
