
Misleading Measurements



2 minutes ago, Audiophile Neuroscience said:

 

My view is that it is artificial, which is why I didn't buy it. There was no doubt it could be heard, even a novice listener was struck by the comparison. There was no 'coaching' - he pointed it out to me. My response was, " it is impressive, yes" (code for HiFi sounding, not life like)

 

Interesting response, David ... my take is that the size was indeed correct; but eliciting the detail that allows the mind to hear this also requires that the distortion of every part of the system be under control to an even greater degree - otherwise it will indeed sound HiFi, rather than lifelike.

 

It's a hairy journey getting there - but definitely possible, 🙂.

24 minutes ago, pkane2001 said:

 

Again, why? I don't measure DACs to determine the exact size of the soundstage they'll create. I measure them to determine if the soundstage in the recording will not be disturbed. For that, I use the variables I already listed. 

 

Which is where I agree with Paul. The size of the soundstage is set by the space the recording was done in - if a single person in a sound booth, it will be tiny, 🙂; if recorded in St Peter's, it will indeed sound immense ... and if done by studio manipulation, it can span the Grand Canyon - if desired, 😁.

6 hours ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

 

If there is IM in the recording's acoustic space, then it will be captured by a decent mic. The job of the playback chain is to not add IMD, which in real-life systems can be difficult - so there is no automatic "created anew" taking place.

 

Truly lifelike reproduction is possible when the tune of the replay setup is good enough ... but, yes, at the moment it occurs rarely. It would be quite amazing for someone to experience a rig doing this if they had never come across it before ... it happened to me over three decades ago, 🙂.

35 minutes ago, sandyk said:


Now some extra points:
- listener fatigue is reduced or completely eliminated
- the sound can be turned up higher without any distortion being evident
- the sound can also be turned down lower & the full dynamics are still retained but at a lower volume

 

 

 

Good post, Alex ... now, how many systems does one come across, in real life, that tick those 3 boxes ... ?

23 minutes ago, bluesman said:

Actually there is, and it has nothing to do with the playback equipment.  Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together.  If you play a 260 Hz C and a 320 Hz A at the same time, intermodulation will generate a 60 HZ tone and a 580 Hz tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are addition or subtraction products among the fundamentals and the 60 and 580 tones, etc etc.

 

Okay, as @pkane2001 pointed out, there is a clear difference between acoustic IM and IMD, intermodulation distortion. Now, whether acoustic IM produces a genuinely microphone-recordable 60 Hz tone, or whether it is something that occurs purely in one's head and doesn't actually exist in the music-playing space, should be well understood by now - can someone point to a paper or otherwise which clearly explains which it is?
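To make the distinction concrete, here's a quick numpy sketch (my own illustration, not from any post in this thread; frequencies taken from the discussion, the distortion coefficient is made up): a linear sum of a 260 Hz and a 320 Hz sine contains no 60 Hz component at all, while passing that same sum through even a mild quadratic non-linearity - as a driver, mic, or the ear might apply - immediately creates the 60 Hz difference tone.

```python
import numpy as np

fs = 48000                      # sample rate, Hz
t = np.arange(fs) / fs          # one second of samples
f1, f2 = 260.0, 320.0           # the two tones from the discussion

# In a linear medium the two waves simply superpose - nothing new appears
linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A mild quadratic non-linearity (illustrative coefficient), as might
# occur in a driver, a mic, or the ear - but not in the air itself
nonlinear = linear + 0.1 * linear ** 2

def level_at(signal, freq):
    """Magnitude of the FFT bin nearest `freq`, relative to the f1 peak.

    With a one-second capture the bins are 1 Hz apart, so the bin
    index is just the frequency rounded to an integer.
    """
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return spec[int(round(freq))] / spec[int(round(f1))]

print(level_at(linear, f2 - f1))     # essentially zero: no 60 Hz tone
print(level_at(nonlinear, f2 - f1))  # a clear 60 Hz difference tone
```

The same check run against a recording made with an instrumentation mic would answer the question directly: either the 60 Hz bin is populated, or it isn't.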

 

23 minutes ago, bluesman said:

 

 

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.

 

Yes, I understand the value of creating beating effects to detect frequencies ... at one point I used Audacity to add a slightly different frequency to the harmonics of sine bass frequencies being fed to a small mid/bass driver - made it easy to get a measure of the level of distortion of the actual driver, by listening to the intensity of the beats.
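The guitar-tuning beat is easy to demonstrate numerically - a sketch with illustrative values: two tones 1 Hz apart produce a loudness envelope that throbs once per second, yet the spectrum contains only the two played frequencies, with nothing at 1 Hz.

```python
import numpy as np

fs = 8000                        # sample rate, Hz
t = np.arange(4 * fs) / fs       # four seconds
f1, f2 = 110.0, 111.0            # two "strings" tuned 1 Hz apart
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Short-time RMS tracks the loudness envelope: it swells and fades
# once per second - the throbbing we tune by
frame = fs // 20                 # 50 ms frames
rms = np.sqrt((x.reshape(-1, frame) ** 2).mean(axis=1))

# ...yet the spectrum holds only the two played frequencies; there is
# no separate 1 Hz component in the signal itself
spec = np.abs(np.fft.rfft(x)) / len(x)
def mag(freq):
    return spec[int(round(freq * 4))]   # 0.25 Hz bins over 4 s

print(rms.max() / rms.min())     # large swing: the audible beat
print(mag(f1), mag(f2), mag(1.0))
```

Which is the subtlety in the tuning trick: the beat is an amplitude envelope of the two real tones, not a new tone added to the mix.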


Along these lines, this is why I find most system playback of pipe organ pretty awful - the real-world impact of the live instrument is amazing to hear; the sense of all the harmonics blending is fabulous, and the air of the space "loads" to an enormous degree ... and rigs do this poorly, in general. It's why I use a particular organ CD of mine to check this out - and the other rigs I try it on are, ummm, duds ...

5 minutes ago, bluesman said:

Acoustic intermodulation is not in your head - it's physical and audible.  That beat frequency you hear and we use to tune our guitars is recordable and audible on playback.

 

Yes, but if I record that with an instrumentation microphone and look at the spectrum of the captured air vibration - is there actually a 60 Hz signal in the mix?

30 minutes ago, bluesman said:

Yes.  Here's an example of the kind of research and results found in the scientific literature of acoustics.  The paper (out of Dartmouth) is investigating ways of enhancing the harmonic richness of the sound of musical instruments with the intermodulation products of the instrument's sound and additional injected frequencies.  They include spectral analysis of this phenomenon:

 

 

Okay, had a quick look ... a key bit,

 

Quote

Harmonics arise from nonlinearities intrinsic to electrical or physical systems.

 

IM is found in electrical systems such as amplifiers and effects for musical purposes. For example, "power chords" played on electric guitars are an effect resulting from IM within an over-driven mixing amplifier [2]. IM can produce strong subharmonics by injecting two harmonic-rich signals into a nonlinear electrical amplifier. We propose here a mechanical method that generates intermodulation and parametric acoustic timbres in an acoustic system, such as an augmented musical instrument or effect systems.

 

The important bit is that the non-linearity is deliberately introduced, or occurs naturally in the sound-making of an individual instrument. It doesn't "occur in the air", which is the vital difference - so the playback chain should minimise all non-linearities, and then all is good, 🙂.

21 minutes ago, Audiophile Neuroscience said:

 

But if a 260 Hz C and a 320 Hz A are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording).  This has nothing to do with the electronics - it's purely an acoustic event.

 

No, they're not. Provided the drivers are of a decent standard, those two frequencies are all that will exist when measured close to the drivers - and if you measure the signal being fed to the drivers, again there are only two frequencies. There is no such thing as a purely acoustic event, in the sense you're trying to conceive of it.


Again, modulation doesn't occur "in the air" - it may in the driver, but that's a completely separate issue.

 

Using two tones, over two drivers, is in fact how high-quality microphones are checked for misbehaviour - microphones are orders of magnitude better than speakers for distortion, so how do you measure how good the mic actually is? 🙂

 

The answer, https://www.listeninc.com/products/test-sequences/free/microphone-intermodulation-distortion-measurement/

 

5 minutes ago, bluesman said:

That doesn't address what we're talking about at all.  The stated intent of the linked article is clear and concise:  "The purpose of this sequence is to measure the Intermodulation Distortion (IM) of a microphone." 

 

The point I was making was that the pure tones, reproduced over two separate speakers in close proximity, could each only produce harmonics of the specific frequency fed to each - no intermodulation products would emerge from either speaker driver. And the usefulness of the test rests on the knowledge that intermodulation does not then occur in the air; therefore, if the microphone registered any, it was due to imperfections of the mic.
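As a sketch of the logic behind that test (an illustrative model - the tone frequencies and distortion coefficients are made up, not taken from the linked procedure): give each speaker its own non-linearity acting only on the tone fed to it, then sum the two outputs linearly, as linear air would - harmonics of each tone appear, but no sum or difference products.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs               # one second
f1, f2 = 1000.0, 1100.0              # illustrative test tones, one per speaker

def speaker(tone):
    # Each driver distorts only the signal fed to it (made-up coefficients),
    # so it can add harmonics of its own tone but nothing else
    return tone + 0.05 * tone ** 2 + 0.02 * tone ** 3

# Linear air: the two outputs simply superpose at the mic position
at_mic = speaker(np.sin(2 * np.pi * f1 * t)) + speaker(np.sin(2 * np.pi * f2 * t))

spec = np.abs(np.fft.rfft(at_mic)) / len(at_mic)
def mag(freq):
    return spec[int(round(freq))]    # 1 Hz bins for a one-second capture

print(mag(2 * f1))                   # harmonic of tone 1: present
print(mag(f2 - f1), mag(f1 + f2))    # sum/difference products: absent
```

So any IM the mic reports in such a test can only have been generated inside the mic itself - which is exactly what makes the procedure useful.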

 

5 minutes ago, bluesman said:

 

We're not talking about distortion created by a device - we're talking about natural intermodulation products among tones generated by musical instruments, and how well and completely they're captured by microphones and incorporated into recordings.  And these tones do occur "in the air", if by that you mean as audible compression waves.  

 

Any intermodulation products must be created by some non-linearity in something other than the air, otherwise the above testing technique could not be used.

 

12 minutes ago, bluesman said:

No it's not. Natural intermodulation products are not distortion and they're not products of nonlinear or otherwise imperfect circuitry.  See my post #580 for an audio recording that clearly demonstrates natural intermodulation, along with a spectrum analysis that shows its presence in the recording.

 

Yes, it's there, but it was created by the instrument itself - some part of it resonated at that frequency, provoked by the plucking of the strings - it's part of its intrinsic nature, its character - not added by the vibrations in the air interacting.

1 minute ago, bluesman said:

I never said they would.  The natural intermodulation of two tones does not happen at the speakers.  The IM tones are generated by the interaction of the two waves in the air.  They're real, audible airborne compression-rarefaction waves resulting from addition of the two fundamental waves.  It's a very simple version of the summation of multiple instruments into the orchestral waveform - and the IM products among all the instruments are in there too.

 

Therefore, that test procedure I linked to would be invalid - if intermodulation occurred in the air, the microphone would register it, and we still wouldn't know how well the microphone performed as regards internally generated IMD ... does this make sense?

22 minutes ago, bluesman said:

Sure - but distortion in microphones is pretty far afield from acoustic intermodulation among instruments.  I’m not even sure how it got into the discussion.  Then again, it’s a great example of a misleading measurement in the context of acoustic intermodulation. 🙂

 

It got into the discussion because we're talking about how IMD would be registered by a mic. First of all, the mic itself must not misbehave to any significant degree, because if it did and we got a reading, we wouldn't know whether it was a true picture of what was happening, or the mic intermodulating in its workings. So we need to test for this - which is what the above test is about. Now, you said,

 

Quote

The natural intermodulation of two tones does not happen at the speakers.  The IM tones are generated by the interaction of the two waves in the air.  They're real, audible airborne compression-rarefaction waves resulting from addition of the two fundamental waves

 

Which means that even if the microphone were in fact perfect, it would register IM from those two speakers. So, by your thinking, running this test is useless for the purpose of detecting whether the microphone is misbehaving - adding IM from internal poor quality. Therefore, why do microphone manufacturers use this as a test?

 

3 hours ago, Clockmeister said:

All of these features can and do affect the way the finished products, very easy to produce a dac with wide open staging and cavernous depth, though does it actually have the correct tonal balance, textural rendering and articulation................?

 

 

So, it's possible to build a DAC which, for a recording where the performer is in a tiny sound booth, will make it sound as if he is in a cathedral ... I'm curious whether you can point to some guidelines on how one does this ... ?

11 hours ago, Clockmeister said:

 

Fas42

 

Pretty basic requirements there; if this is what you wish for, then DSP is your friend. I believe it's been on the market in one form or another for many years in many products, whether it accurately does this is another matter 🤔

 

Ohhh ... I thought the magic occurred in how the raw DAC worked - people mention how particular items did this circus trick (I think MSB DACs were mentioned), and that this is faulty reproduction ... a naive chap like me comes to the simple conclusion that this is in fact what the recording actually contains, and that units which don't present this information are the ones that are actually faulty ... sorry to misunderstand, 😉.

4 hours ago, bluesman said:

Maybe this will help clear up the mystery.  It's a piece from a researcher at UConn on intermodulation products that's oriented toward music.  And this is a nice little piece on the physics of intermodulation, beat frequencies, and Tartini tones.  The more I read, the more I believe that I'm probably correct.  I know and have shown in the clip I attached to a prior post that natural acoustic intermodulation among instruments is definitely real, audible, and captured by microphones,  And its creation anew on playback could easily have an effect on SQ. For example, the second linked article suggests that intermodulation products in the sub-60 Hz range may cause muddiness.

 

Consider this: the beating that is audible to our ears also impacts other real materials - parts of the musical instrument, or of the general environment around the music-making - and causes that material to vibrate at the beat frequency, because it's reacting to the ebb and flow of the transmitted energy. Therefore, real, detectable energy at those frequencies will now exist in the air.

 

With regard to Jud's suggestion, consider a thin timber panel placed in the region where the speaker and microphone are set up, versus not being there at all - would readings in the two situations show the same spectrum detail?

 

 

1 hour ago, bluesman said:

A solid body guitar is a slab of wood (solid maple in the case of my little experiment) that weighs about 8 pounds and has no resonant cavities in it other than a small opening about 1" deep, 4" long and 2" wide for the controls. The wavelength of a 40 Hz tone is 28 feet.   The heaviest string on a 5 string bass is tuned to a low B, which is 30 Hz at concert pitch.  The lowest string on a 6 string guitar is an 82 Hz E, so that won't resonate at 26 to 30.  The low A on my 7 string guitars is 55 Hz, so that wouldn't do it either.  So you could be correct if I'd done this on a 5 string bass - but I didn't.

 

There is absolutely no way that a 26 to 30 Hz tone is emanating from a normally strung, normally tuned solid body guitar other than as intermodulation between two of the strings.  And there may be information in there that can be used to develop a new and hopefully meaningful measurement to quantify "primary" (i.e. captured from the program) and "secondary" (i.e. generated during playback) intermodulation products.  Interestingly, if this turns out to be doable, the IMD of every component in a system will have to be mighty low if DSP stands a chance of removing the secondary acoustic IM products.

 

Look at that spectrum you posted - the intermodulation tone is down about 45 dB from the fundamentals of the strings; that's an amplitude ratio of about 0.6%, of the order of 1% of the level of the provoking frequencies. Can you be 100% sure that there is absolutely nothing in the structure of the guitar that vibrates with the energy of the strings, and is ever so slightly non-linear in its vibrational behaviour, such that a component at that frequency emerges? Are you completely confident in the perfection of the construction of the guitar, that this can't happen?
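For reference, the arithmetic behind that level (the standard dB-to-amplitude conversion):

```python
# Convert a component 45 dB below the fundamentals to an amplitude ratio
ratio = 10 ** (-45 / 20)
print(f"{ratio:.4%}")   # roughly 0.56% - i.e. of the order of 1%
```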

 

If I tap the body of the guitar with my fingertip, is that completely inaudible? And if not, what energies occur, at all frequencies in the audible range?


Looking things up a bit further, one discovers nonlinear acoustics - this relates to the situation, as John mentioned, where intense pressures are involved and air as a medium for transmitting sound does become non-linear. This can be exploited for doing interesting things. But otherwise air behaves as a linear system: https://acousticstoday.org/the-world-through-sound-linearity/.

 

I'm still not seeing anything that supports the idea that vibrations in air at normal, everyday amplitudes can create difference tones that will be registered by a microphone.

7 hours ago, bluesman said:

 

This further convinces me that they're real, are captured by the mic, and are not a psychoacoustic phenomenon.  Here's the spectrum from the 18 dB/octave filter showing how little of the fundamentals remains - yet the difference tone is still there and quite clearly so:

 

[attached image: IM_minus_fundamentals.jpg - spectrum after the 18 dB/octave filter]

 

 

 

 

Thanks. Unfortunately, this still does not distinguish whether the difference tones, created as a real phenomenon, are caused by the air or by some other object in the recording space ... IOW, what is the most likely candidate for reacting in a non-linear way to the vibrations of the strings, and how do we ascertain that this is in fact what happened?

