
Misleading Measurements



50 minutes ago, The Computer Audiophile said:

I highly encourage people to read with their glass half full

As a flexible and rational objectivist, I focus first and foremost on what’s in the glass. Then I analyze the data to see if the level is stable, rising or falling.  If it’s stable, is it static or in a dynamic equilibrium?  If it’s falling, where’s the leak and where is the loss going?  If it’s rising, what’s the source and do I really want or need more of whatever’s in it?  
 

Etc.

16 minutes ago, The Computer Audiophile said:

I certainly hear you but if there’s harm in unmeasurable USB cables then the same harm is there for unhearable measurements. 
 

I don’t see any harm, but the double standard is blatant. 

Perhaps you missed the animated disclaimer I added at the end, Chris. I love it so much that here it is again to underscore how seriously you should take my response above.

[animated disclaimer GIF]

2 hours ago, fas42 said:

There is absolutely nothing "below the level of audibility" that's important, in a direct sense - what it may tell you is how robust the component is, when faced with interference factors, and similar possibly degrading influences.

If something is inaudible in isolation but has audible effects on other factors, its presence is audible even if it makes no sound of its own.  I think this is directly important.  Look no further than a silent person on a creaky step - you hear the normally silent step because an inaudible person stepped on it.  DC is silent - but a voltage drop that makes its way to an audio signal as a DC offset can affect SQ.

 

Then there’s the question of why it’s “below the level of audibility”.  Is it making sound at an SPL below the threshold of audibility? Is it producing AC outside the frequency range of audibility?  Or is it making otherwise audible sound that’s masked to inaudibility by other sounds in its environment?  Each cause has its own set of direct, audible consequences, e.g., intermodulation or sucking up amplifier power.

2 hours ago, John Dyson said:

This is a corollary of my anti-'Tweak-tweak-tweak' stance.   I don't mean to say: NEVER 'Tweak tweak tweak', but instead why not take advantage of the WONDERFUL tools that we already have.  

You seem to understand yourself well, John. You own up to your difficulty with communication skills and try to make the best of it (even though you seem to communicate fine to me).  So you should understand how it is that many audiophiles believe they have no technical skills and are bewildered or even frightened by technology and by the tools we find helpful and even fun to use.

 

It’s as hard for them to learn how, when and why to use these tools as it was for you to learn efficient, effective verbal & written communication.  I have a similar problem with foreign languages.  Sadly, despite a bachelor’s degree in chemistry, 2 doctoral degrees and an MBA, decades of traveling around the world and trying hard to learn (plus a wife with graduate degrees and a prior career teaching French, Spanish & Italian at the college level), I cannot get beyond basic greetings and questions to which I often don’t understand the responses.  Just as you struggle with communication and I struggle with languages other than my own, many audiophiles have trouble with and are really put off by even a simple FFT.  Nevertheless, they deserve love too :) 
 

Maybe your scientific side could be happier with something like this for those who can’t or don’t want to go beyond playing around:  observe, document a baseline, tweak, observe, document the change, tweak, observe, document the change, then stop and review efforts & results; repeat until happy with SQ.

 

Thanks so much for your contributions! I’ve learned a lot from you and look forward to your posts.  Have a great weekend and stay safe!

 

David

56 minutes ago, John Dyson said:

Wow, that is good.   Definitely modulation distortion on a channel can distort the temporal relationships as a *secondary* effect.  Modulation distortion (as what one gets with fast gain control/agc/compression/expansion) can 'fuzz' spatial relationships along with the compression/expansion itself causing a modification of the 'space'.

 

John

 

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron) to see what’s in the spectrum of their IM products.  I neither hear nor see nor measure the sum and difference frequencies the way I do with pure analog tone generators and amplification chains.  So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched-out digital sine wave on a good high-res analog scope, you can see what looks to me like the effect of sampling: a fine discontinuity that seems to reflect the sampling rate.  
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.
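To make concrete the kind of check I have in mind, here’s a minimal sketch (my own illustration, assuming numpy; the 262/330 Hz values just echo the C and E example above). A purely linear sum of two digital tones should show essentially nothing at the sum and difference frequencies, while even a mild second-order nonlinearity puts them there:

```python
# Minimal sketch (numpy assumed): do two digitally generated tones show
# sum/difference components?  A purely linear sum should not; a simple
# second-order (asymmetric) nonlinearity should.
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # 1 second, so FFT bins land on 1 Hz
f1, f2 = 262.0, 330.0                       # the C and E from the example above

linear = 0.5*np.sin(2*np.pi*f1*t) + 0.5*np.sin(2*np.pi*f2*t)
nonlinear = linear + 0.25*linear**2         # crude second-order nonlinearity

def level_db(x, f):
    """Level of the FFT bin nearest f, relative to the analysis scale, in dB."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    return 20*np.log10(spec[int(round(f))] + 1e-15)

for label, x in (("linear sum", linear), ("with nonlinearity", nonlinear)):
    print(label)
    for f in (f1, f2, f2 - f1, f1 + f2):    # fundamentals, difference, sum
        print(f"  {f:6.1f} Hz: {level_db(x, f):7.1f} dB")
```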

46 minutes ago, pkane2001 said:

 

IMD is just the way a nonlinear transfer function affects multiple tones. Harmonic distortion is a simple case of IMD with a single tone. 


If you want to see what IMD looks or sounds like, try my DISTORT app. You can look at pure tones, multi-tones, or just apply the same simulated IMD to any recorded piece of music and listen to it.

 

When you say that you want to learn about IM in the digital domain, do you mean in the frequency domain? Standard digital processing (such as in a DAC) should produce no IMD, unless specifically designed to introduce a non-linear transformation. It's the analog section that will introduce some level of non-linearity.

But I’m talking about intermodulation, not IM distortion. Intermodulation is a perfectly normal acoustic phenomenon that occurs when two instruments play different notes at the same time.  It’s an integral part of live music, so it’s in the source performance. IM distortion is the addition of intermodulation products that are not in the source.  
 

Not all intermodulation in audio is distortion.  In fact, natural intermodulation among instruments is a large component of the sound of a symphony orchestra. The phenomenon of recreating the fundamental from the harmonic structure of a tone is what lets you hear notes lower than any that were actually played.  The lowest note in the spectral splash of the timpani can be heard below the pitch to which the head is tuned, if that's what the composition demands. Some composers use this effect as part of their music.  As I recall, the oboe’s spectrum is almost entirely harmonics, yet we hear the fundamental being played because we reconstruct it from those overtones and intermodulation products.
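As a side note, the missing-fundamental effect is easy to demonstrate numerically. This small sketch (my own, assuming numpy) builds a tone from harmonics two through six only; the spectrum shows essentially nothing at the fundamental, even though the fundamental is the pitch most listeners report hearing:

```python
# Sketch (numpy assumed): a tone built only from harmonics 2-6 of 110 Hz has
# no energy at 110 Hz, yet the perceived pitch is typically 110 Hz.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f0 = 110.0
x = sum(np.sin(2*np.pi*k*f0*t) / k for k in range(2, 7))   # harmonics 2..6 only

spec = np.abs(np.fft.rfft(x * np.hanning(fs))) / fs
for k in range(1, 7):
    print(f"{k*f0:5.0f} Hz: {20*np.log10(spec[int(k*f0)] + 1e-15):7.1f} dB")
```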
 

You can play two analog sine waves of differing frequencies and find the sum and difference intermodulation products in a spectral analysis of the sound.  The first ones are also easily heard if the fundamentals are audible and their sum and/or difference frequencies are in the audible range, although amplitude drops off precipitously after that. This does not appear to be the case with digitized sine waves, as I don’t see the same spectrum - what should be the biggest sum and difference components seem to be missing.  And I don’t hear those IM products when I play the tones through speakers or headphones.
 

So I’m trying to understand if and how digital tones intermodulate with each other, not how IM distortion products are created.  There’s a big difference between the two.

21 minutes ago, pkane2001 said:


Sorry, thought you were looking at IMD. Digital tones simply add together in the time domain. In the frequency domain there is no intermodulation: the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and exist normally in the analog processing of the signal with nonlinear transfer function.

But spectral analysis shows more than just the 2 digital sine wave frequencies and the electronic floor of noise and distortion (which is quite low when examined without the sine waves).   So something is happening.  I can’t define any relationships among the peaks I see - they’re clearly not natural harmonics or harmonic products.   I’d love to know what this is, where it originates, how it affects audio playback, and if we can use or reduce it.

 

My next experiment is to mix the same two sine waves at multiple sampling rates, to see if the spectrum changes with resolution of the digital waveforms. This may all be nonproductive, except for the educational value.  But I just gotta know what’s under the hood!
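If anyone wants to run the same experiment, something along these lines should do it (a rough sketch, assuming numpy; the tone frequencies and sample rates are placeholders):

```python
# Rough sketch of the same-two-tones-at-several-sample-rates experiment
# (numpy assumed; frequencies and rates are placeholders).
import numpy as np

f1, f2 = 262.0, 330.0
for fs in (44100, 96000, 192000):
    t = np.arange(fs) / fs                              # 1 second at each rate
    x = 0.5*np.sin(2*np.pi*f1*t) + 0.5*np.sin(2*np.pi*f2*t)
    spec = np.abs(np.fft.rfft(x * np.hanning(fs))) / fs
    freqs = np.fft.rfftfreq(fs, 1/fs)
    present = freqs[spec > 1e-6]                        # anything above roughly -100 dB
    print(f"{fs} Hz sample rate: components near", np.unique(np.round(present)), "Hz")
```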

1 hour ago, pkane2001 said:

 

Maybe I missed it, but what are you measuring? Any frequencies that are not in the original signal are distortion. IMD happens in the analog domain, except for some special software or plugins  designed to simulate it, like my DISTORT.

 

Spectral analysis of analog signal can certainly show IMD, but usually not digital.

I expected to find nothing - but it shows content in addition to the two digital "sine waves" being played and the noise + distortion floor of the equipment, which is minimal.  And I get the same spectrum whether I play the tones directly into an analyzer or record them with Audacity and analyze the wav file.  I'll have to find the time to run some and post them.

 

Thanks!

1 hour ago, fas42 said:

The job of the playback chain is to not add IMD, which in real life systems can be difficult - so, there is no automatic "created anew" taking place.

Actually there is, and it has nothing to do with the playback equipment.  Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together.  If you play a 262 Hz C and the 330 Hz E above it at the same time, intermodulation will generate a 68 Hz difference tone and a 592 Hz sum tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are sums and differences among the fundamentals and those first-order tones, and so on.

 

When you play back the program material, the same acoustic intermodulation is generated again from the same two original tones, which are now coming from your speakers along with their recorded intermodulation products.  It is indeed created anew, purely as an acoustic phenomenon, just as it was in the performance.  The electronics add whatever intermodulation distortion products they generate, but that's a separate matter: if a 262 Hz C and a 330 Hz E are coming from your speakers, their intermodulation products are being created anew and added to those created in the original performance (and therefore captured on the recording).

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.
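For what it's worth, the once-per-second throbbing follows directly from the sum-to-product identity sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2): two tones 1 Hz apart are exactly one tone at the average frequency whose amplitude swells and fades once per second. A quick check (my own sketch, assuming numpy, with frequencies chosen to match the tuning example rather than measured from any guitar) confirms the two forms are identical sample for sample:

```python
# Quick check (numpy assumed): two tones 1 Hz apart equal a 110.5 Hz tone
# whose amplitude envelope swells and fades once per second.
import numpy as np

fs = 48000
t = np.arange(2 * fs) / fs                               # two seconds
lhs = np.sin(2*np.pi*110*t) + np.sin(2*np.pi*111*t)      # the two strings
rhs = 2*np.sin(2*np.pi*110.5*t) * np.cos(2*np.pi*0.5*t)  # sum-to-product form
print(np.max(np.abs(lhs - rhs)))                         # essentially zero (rounding only)
```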

3 minutes ago, Audiophile Neuroscience said:

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and difference tones generated acoustically was a good thing, indeed making it more natural and more representative of a real-life performance.

I'm not sure if it's good or bad or a bit of each. Acoustic intermodulation is part of what we hear at a concert, and it clearly helps to create the overall soundscape of the live performance and venue. But as is clear from some of the responses to my posts in this thread, there aren't many people who've even given this any thought let alone come to an understanding of it. I think it's logical to assume that creating the same intermodulation products during playback that were created and recorded at the performance might have a clearly audible effect on how real it sounds. I do not know this for sure, and it'll take a lot of work to begin to figure it out.  But I strongly suspect it's hiding an overlooked opportunity to further improve SQ.

 

Interestingly, recording bands part by part in isolation eliminates this consideration, except for the intermodulation among the fundamental and harmonic tones of the individual instrument being recorded.  So playback is the first opportunity for intermodulation among all the instruments in the ensemble.  Only in a full-scale live performance by multiple instruments can they interact acoustically.

 

It doesn't matter whether the recording and/or playback equipment is analog or digital - the sounds of the instruments coming from your speakers are analog, so they will generate natural intermodulation. Maybe we could use real-time spectral analysis to identify any recorded intermodulation and DSP to cancel it by summing in an out-of-phase copy.  I think this may be important.  It also may be a wild goose chase - but I like goose with the right sauce.....

4 minutes ago, fas42 said:

Yes, I understand the value of creating beating effects to detect frequencies ... at one point I used Audacity to add a slightly different frequency to the harmonics of sine bass frequencies being fed to a small mid/bass driver - made it easy to get a measure of the level of distortion of the actual driver, by listening to the intensity of the beats.

Acoustic intermodulation is not in your head - it's physical and audible.  The beat frequency you hear, and that we use to tune our guitars, is recordable and audible on playback.  But it seems to occur only with analog sources - it doesn't seem to develop when the differing frequencies themselves are digitally generated.  However, even an all-digital record-playback chain starts with analog input from live instruments and ends with purely analog output as sound, so it definitely occurs with what's coming out of your speakers on playback.

9 minutes ago, fas42 said:

 

Yes, but if I record that with an instrumentation microphone, and look at the spectrum of the captured air vibration - is there actually a 60Hz signal in the mix?

Yes.  Here's an example of the kind of research and results found in the scientific literature of acoustics.  The paper (out of Dartmouth) is investigating ways of enhancing the harmonic richness of the sound of musical instruments with the intermodulation products of the instrument's sound and additional injected frequencies.  They include spectral analysis of this phenomenon:

 

"Modulation is often used in sound synthesis to reduce the number of oscillators needed to generate complex timbres by producing additional signal components prior to the output. For example, FM Synthesis is employed to emulate rich timbres of acoustic instruments [5]. Another method of modulation synthesis is via Intermodulation (IM), a form of amplitude modulation acting on the signal harmonics from two or more injected signals....

 

In this paper we have detailed and defined a new approach to nonlinear acoustic synthesis through IM. We have shown that it is possible to produce IM components in a variety of instrumental contexts and have shown that by parametrically increasing modulation depth β, more frequency components can be produced in a continuous, controlled fashion. Control over both the number and frequency of sidebands suggests that IM is a powerful method of producing broad timbral synthesis in modified or newly-designed acoustic instruments, capable of bridging the electronic with the acoustic."
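As a rough illustration of the sideband idea in that passage (my own sketch, assuming numpy; the carrier, modulator, and depth β are placeholder values, not the paper's), simple amplitude modulation at depth β puts components at the carrier frequency plus and minus the modulating frequency:

```python
# Rough illustration (numpy assumed; values are placeholders, not the paper's):
# amplitude modulation at depth beta puts sidebands at f_carrier +/- f_mod.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f_carrier, f_mod, beta = 440.0, 110.0, 0.5

x = (1 + beta*np.sin(2*np.pi*f_mod*t)) * np.sin(2*np.pi*f_carrier*t)

spec = np.abs(np.fft.rfft(x * np.hanning(fs))) / fs
for f in (f_carrier - f_mod, f_carrier, f_carrier + f_mod):
    print(f"{f:5.0f} Hz: {20*np.log10(spec[int(f)] + 1e-15):6.1f} dB")
```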

9 hours ago, Audiophile Neuroscience said:

So even with one ear and one speaker, Intonation will occur from combination of the sound waves in the air.

Intonation is the accuracy of the pitch of a note.  If an instrument is faulty (e.g., a poorly crowned or heavily worn fret on a guitar), not all the notes played on it will be perfectly in tune even though the instrument is tuned/pitched/tempered correctly.  Poor intonation can also be the player’s fault, e.g., a violinist who places his or her fingers imprecisely on the fingerboard.  
 

There must be a better term for whatever you’re trying to describe. I don’t think you mean intonation.

3 hours ago, STC said:

You may want to look at Tartini tones and intermodulation distortion that occurs naturally due to non linearity of the ears. 

From Jeffrey Freed (violinist.com)


"My current thinking is that [Tartini tones] are real (meaning that they actually exist as pressure pulses in the air which conveys the sound from the source to the ear), because these pressure pulses at the Tartini frequency are picked up by a microphone, and can be seen by zooming in far enough on a recorded sound file."

 

He also opines (as do I) that digital processing of the sources may reduce or eliminate these intermodulation tones, and that this may explain why they don’t appear in some spectral displays.  A sampled sine wave is technically not a continuous function, which may be why digitizing alters, reduces, or eliminates acoustic intermodulation.

9 hours ago, pkane2001 said:


Sampled sine is a continuous function within the limits of the sampling frequency. It’s stored as discontinuous samples but reproduced as a continuous waveform by any properly constructed DAC. That’s the result of the infamous Nyquist-Shannon theorem.

I’m not talking about sampled sine waves.  A DAC will interpolate the missing bits of the curve when creating the analog analogue of the digital signal.  What I’ve seen presented as evidence that there’s no such thing as recorded acoustic intermodulation is spectral analysis of pairs of digitally generated sine waves.  Running the output of a digital signal generator directly into a computer for recording requires no DAC and the “sine wave” it generates is made up of discontinuous values.
 

At first, I assumed that there’d be intermodulation between dissimilar notes created by my software signal generator too, and I was puzzled when I couldn’t find the right products in the spectrum.  I now think it’s the fact that those tones are purely digital and simply will not interact because they’re just very fancy little square waves strung rapidly together.

 

It seems that few of us have analog signal generators and recorders any more.  Now I really need my good old relics :) 

20 hours ago, fas42 said:

Again, modulation doesn't occur "in the air" - it may in the driver, but that's a completely separate issue.

 

Using two tones, over two drivers, is in fact how high quality microphones are checked for misbehaviour - microphones are going to be orders better than speakers for distortion, so how do you measure how good the mic actually is? 🙂

 

The answer, https://www.listeninc.com/products/test-sequences/free/microphone-intermodulation-distortion-measurement/

 

That doesn't address what we're talking about at all.  The stated intent of the linked article is clear and concise:  "The purpose of this sequence is to measure the Intermodulation Distortion (IM) of a microphone." 

 

We're not talking about distortion created by a device - we're talking about natural intermodulation products among tones generated by musical instruments, and how well and completely they're captured by microphones and incorporated into recordings.  And these tones do occur "in the air", if by that you mean as audible compression waves.  

 

Here's a wav of simultaneous plucking of an A 220 and a B 246 played on the 3rd and 4th strings of one of my guitars.

220_IM_test.wav 

 

You can clearly hear the ~26 Hz "difference" intermodulation product, so it's "in the air" and on the recording. It's real and not only in our brains.  I didn't bother tuning perfectly to A440, so the exact frequencies are a bit off of 220 and 246.  Here's the spectrum to show the fundamentals and the IM products:

[spectrum image: 220_250_IM.jpg]

 

I know of no way to explain the ~26 Hz tone except intermodulation.  The rest of the spectrum content is from sympathetic resonance of the other strings, which are undamped and resonating.
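If anyone wants to inspect the attached file themselves, something like this should pull out the strongest low-frequency component (a sketch assuming numpy and scipy; the 10-60 Hz search band is just a guess around that region):

```python
# Sketch (numpy + scipy assumed) for anyone who wants to inspect the attached
# file: find the strongest component in the low-frequency region.
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read("220_IM_test.wav")       # the wav attached above
if data.ndim > 1:
    data = data.mean(axis=1)                     # fold stereo to mono
data = data.astype(float)

spec = np.abs(np.fft.rfft(data * np.hanning(len(data))))
freqs = np.fft.rfftfreq(len(data), 1/fs)
band = (freqs > 10) & (freqs < 60)               # search around the ~26 Hz region
print("strongest component between 10 and 60 Hz:",
      round(float(freqs[band][np.argmax(spec[band])]), 1), "Hz")
```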

 

1 hour ago, fas42 said:

If there are intermodulation frequencies created then it's because the analogue circuitry is distorting

No it's not. Natural intermodulation products are not distortion and they're not products of nonlinear or otherwise imperfect circuitry.  See my post #580 for an audio recording that clearly demonstrates natural intermodulation, along with a spectrum analysis that shows its presence in the recording.

2 minutes ago, fas42 said:

The point I was making was that the pure tones, reproduced over two separate speakers, in close proximity, could each only produce harmonics, of the specific frequency fed to each - no intermodulation products would emerge from either speaker driver.

I never said they would.  The natural intermodulation of two tones does not happen at the speakers.  The IM tones are generated by the interaction of the two waves in the air.  They're real, audible airborne compression-rarefaction waves resulting from addition of the two fundamental waves.  It's a very simple version of the summation of multiple instruments into the orchestral waveform - and the IM products among all the instruments are in there too.

9 minutes ago, fas42 said:

 

Yes, it's there, but it was created by the instrument itself - some part of it resonated at that frequency, provoked by the plucking of the strings - it's part of its intrinsic nature, its character - not added by the vibrations in the air interacting.

There is simply no way that a 26 Hz tone was generated by any part of the guitar (which is a solid body of neck-through construction with no resonances within 2 octaves of 30 Hz).  And I get the same spectrum from the same maneuver on 4 other guitars, including a big body hollow flattop, a 17" archtop, a National tricone, and a 3/4 size Kubicki Express.  I just don't see how your explanation could possibly be correct.

17 minutes ago, fas42 said:

 

Therefore, that test procedure I linked to is invalid - if intermodulation occurred in the air, the microphone would register it - and we still wouldn't know how well the microphone performed, as regards internally generated IMD ... does this make sense?

Sure - but distortion in microphones is pretty far afield from acoustic intermodulation among instruments.  I’m not even sure how it got into the discussion.  Then again, it’s a great example of a misleading measurement in the context of acoustic intermodulation. 🙂

