
Misleading Measurements


Recommended Posts

6 hours ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

 

If there is IM in the recording acoustic space, then it will be captured by a decent mic. The job of the playback chain is to not add IMD, which in real life systems can be difficult - so, there is no automatic "created anew" taking place.

 

Truly lifelike reproduction is possible when the tuning of the replay setup is good enough ... but, yes, it occurs rarely, at the moment. It would be quite amazing for someone to experience a rig doing this if they have never come across it before ... happened to me over 3 decades ago, 🙂.

Link to comment
11 hours ago, pkane2001 said:


As discussed, it is not THD but interchannel differences that determine soundstage “quality”. These are caused by differing levels and amounts/types of distortion between channels.

Added HF detail may also result in a larger "billowy" soundstage. Many members are doing this by using daisy-chained LT3045 ultra low noise voltage regulators, which have a considerably lower output impedance at >100kHz.

 

This is a quotation from Bob Katz, a well-known recording and audio mastering engineer, posted here:

www.digido.com/audio-faq/j/jitter-better-sound.html

After an engineer learns to identify the sound of signal-correlated jitter, he or she can move on to recognizing the more subtle forms of jitter and finally be more prepared to subjectively judge whether one source sounds better than another.

Here are some audible symptoms of jitter that allow us to determine that one source sounds "better" than another with a reasonable degree of scientific backing:

It is well known that jitter degrades stereo image, separation, depth, ambience, dynamic range.

Therefore, during a listening comparison of source A versus source B (where both have already been proven to be bit-identical):

The source which exhibits greater stereo ambience and depth is the "better" one.

The source which exhibits more apparent dynamic range is the "better" one.

The source which is less edgy on the high end (most obvious sonic signature of signal correlated jitter) is the "better" one.

And a reply:
The better one, and it is better, is also easier to listen to ... less fatiguing. I would also add to this that the low end just "feels" bigger and more solid. This is perhaps a psychoacoustic effect more than a measurable one. It may be that the combination of a less edgy high end and greater depth and width makes the bass seem better.

All of this makes sense if thought of in terms of timing (that is what we're talking about isn't it ;-]). With minimal jitter nothing is smeared, a note and all its harmonics line up, the sound is more liquid (a term probably from the "audiophile" crowd but one which accurately describes the sound none the less), and images within the soundstage are clearly defined. 
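To make the timing argument concrete, here is a minimal numpy sketch (an editorial illustration, not from the quoted article; the tone frequency, the sinusoidal model of "signal-correlated" jitter and the 100 ns figure are all assumptions, the last deliberately exaggerated so the effect is visible in a short FFT). A tone sampled with a deterministic, signal-correlated timing error grows discrete sidebands around it, while the same amount of random timing error just lifts the noise floor:

```python
import numpy as np

fs = 48_000          # sample rate (Hz)
N = 1 << 16          # samples analysed
f0 = 10_000.0        # test tone (Hz)
fj = 1_000.0         # frequency of the correlated timing error (Hz), arbitrary choice
jit = 100e-9         # 100 ns peak timing error, deliberately exaggerated

t = np.arange(N) / fs
t_corr = t + jit * np.sin(2 * np.pi * fj * t)   # deterministic, signal-correlated timing error
t_rand = t + jit * np.random.randn(N)           # uncorrelated (random) timing error

freqs = np.fft.rfftfreq(N, 1 / fs)
win = np.hanning(N)

for label, ts in (("correlated", t_corr), ("random", t_rand)):
    x = np.sin(2 * np.pi * f0 * ts)             # the tone, sampled at the perturbed instants
    mag = np.abs(np.fft.rfft(x * win))
    db = 20 * np.log10(mag / mag.max() + 1e-20)
    side = [db[np.argmin(np.abs(freqs - f))] for f in (f0 - fj, f0 + fj)]
    print(f"{label:10s} jitter: sidebands at f0±fj = {side[0]:6.1f} / {side[1]:6.1f} dB, "
          f"median floor = {np.median(db):6.1f} dB")
```

With these numbers the correlated case shows sidebands at 9 kHz and 11 kHz roughly 50 dB below the tone, while the random case shows no distinct sidebands, only a raised floor - loosely the distinction between signal-correlated and random jitter that the quote is drawing.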


Now some extra points:
- listener fatigue is reduced or completely eliminated
- the sound can be turned up higher without any distortion being evident
- the sound can also be turned down lower & the full dynamics are still retained but at a lower volume

 

And one from Barrows with which both John Kenny and I agreed.

 

Quote


The thing is, I could imagine some people preferring the sound of higher jitter, in some systems. In other words, a low jitter source, if one is used to hearing higher jitter levels, may point out what I would call problems in a system. It appears to me that some levels/spectrums of jitter may have a euphonic result in some systems. To me, in a good system, the results of lowering jitter are increased detail retrieval (as evidenced by image specificity, decays, more complex harmonic portrayals) accompanied by greater listening ease.

Higher levels of jitter result in a hazy sound, obscuring these same lower level details to some extent, but sometimes higher jitter levels also result in the appearance of a larger, billowing soundstage that can be somewhat impressive at first listening. I can certainly imagine a system that is already on the bright/hard side, where a listener might be very impressed by the sound of a higher jitter source and the little bit of haze it brings, along with a big, billowy soundstage.
 

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
35 minutes ago, sandyk said:


Now some extra points:
- listener fatigue is reduced or completely eliminated
- the sound can be turned up higher without any distortion being evident
- the sound can also be turned down lower & the full dynamics are still retained but at a lower volume

 

 

 

Good post, Alex ... now, how many systems does one come across, in real life, that tick those 3 boxes ... ?

Link to comment
6 minutes ago, fas42 said:

 

Good post, Alex ... now, how many systems does one come across, in real life, that tick those 3 boxes ... ?

Note this bit in particular, as many seem to think that they can correct all this damage with further conversions to a much higher bit rate.

Quote

It is well known that jitter degrades stereo image, separation, depth, ambience, dynamic range.

Therefore, during a listening comparison of source A versus source B (where both have already been proven to be bit-identical):

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
1 hour ago, fas42 said:

The job of the playback chain is to not add IMD, which in real life systems can be difficult - so, there is no automatic "created anew" taking place.

Actually there is, and it has nothing to do with the playback equipment. Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together. If you play a 260 Hz C and a 320 Hz E at the same time, intermodulation will generate a 60 Hz tone and a 580 Hz tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are addition or subtraction products among the fundamentals and the 60 and 580 Hz tones, etc.

 

When you play back the program material, you will generate again the same acoustic intermodulation from the same two original tones, which are now playing from your speakers along with their recorded intermodulation products. It is indeed created anew, and is purely an acoustic phenomenon just as it was in the performance. It has nothing to do with the electronics, which add whatever intermodulation distortion products they generate. But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording). This has nothing to do with the electronics - it's purely an acoustic event.
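For readers who want to see the arithmetic, here is a minimal numpy sketch (an editorial illustration, not bluesman's; the 0.1 coefficient is an arbitrary assumed nonlinearity) of what any quadratic nonlinearity does to a 260 Hz + 320 Hz pair: second-order products appear at the difference (60 Hz) and sum (580 Hz), plus 2f1 and 2f2, while a purely linear sum of the two tones contains nothing but the two tones. Whether the air in the listening room supplies such a nonlinearity at playback levels, as opposed to the instruments, the electronics or the ear, is exactly what is debated further down the thread.

```python
import numpy as np

fs, dur = 48_000, 2.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 260.0, 320.0

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)   # the two tones, summed linearly
y = x + 0.1 * x**2                                            # a mild quadratic nonlinearity (assumed)

def components(sig, floor_db=-40):
    """Frequencies (Hz) of spectral components within floor_db of the strongest one."""
    mag = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    db = 20 * np.log10(mag / mag.max() + 1e-20)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return sorted(set(np.round(freqs[db > floor_db]).astype(int)))

print("linear sum      :", components(x))   # [260, 320]
print("after mild x**2 :", components(y))   # adds 0 (DC), 60, 520, 580 and 640 Hz
```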

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.
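And a minimal sketch of the beating itself (again an editorial illustration, with arbitrary tone choices): two tones 1 Hz apart, summed linearly, swell and fade once per second, yet the spectrum of the summed signal contains only the two original frequencies. Whether a separate difference-frequency component also exists in the air, rather than arising in a nonlinear stage or in the ear, is the question taken up a few posts below.

```python
import numpy as np

fs, dur = 48_000, 4.0
t = np.arange(int(fs * dur)) / fs

# Two "strings" tuned 1 Hz apart, as in the guitar example
x = np.sin(2 * np.pi * 110.0 * t) + np.sin(2 * np.pi * 111.0 * t)

# Envelope: RMS level in 50 ms blocks rises and falls once per second
block = int(0.05 * fs)
rms = np.sqrt(np.mean(x.reshape(-1, block) ** 2, axis=1))
for i in (0, 5, 10, 15, 20):                     # t = 0, 0.25, 0.5, 0.75, 1.0 s
    print(f"t = {i * 0.05:4.2f} s  block RMS = {rms[i]:.2f}")

# Spectrum: only 110 Hz and 111 Hz are present in the summed signal itself
mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print("components:", sorted(set(np.round(freqs[mag > 0.01 * mag.max()]).astype(int))))
```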

Link to comment
7 hours ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance. [bold is mine]

 

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and difference tones generated acoustically was a good thing, indeed making it more natural and more representative of a real-life performance.

 

So I may be misunderstanding your post and also John's @John Dyson explanations that followed your post like "Real world IM as picked up by a mic is cool -- it is when it is a form of signal modifying distortion, that is when it becomes EVIL."...

 

So, is it a good thing or bad?

 

The plot thickens if we also consider illusory phenomena like the "missing" or phantom fundamental and issues of the same pitch but different timbre https://auditoryneuroscience.com/pitch/missing-fundamentals

 

 

A professional symphony trumpet player recommended this book by Christopher Leuba on intonation for those interested https://www.hornguys.com/products/a-study-of-musical-intonation-by-christopher-leuba-pub-cherry

 

I will quote this trumpet player from another website post as I think it teaches a lot about the topic and may be germane to the question

 



For those of us that think that an octave is simply a doubling or halving of frequency, this book is certainly recommended reading. It was written for musicians, but is applicable to anyone wanting to understand what our ears and brains really want!

 

To me, intonation is many things:

 

First: it is the relationship of the tones played by different instruments at the same time. When two trumpets (or other instruments) are playing, sum and difference tones are generated acoustically.

 

When "in tune" those resultant tones line up in frequency and become a constructive part part of the musical presentation. If we play "out of tune", the sum and difference tones become destructive, the end product becomes hard to listen to. If we are talking about playing major chords, the sum and difference tones are all in the same key. A common sense of "intonation" makes large groups of instruments create an "orchestral fabric" that solo instruments can move in and out of.

 

Second: intonation is the relationship of notes to one another in the same instrument. A piano, for instance, is tuned "well tempered" (equal tempered). This means that, moving up the scale, the frequency ratio between adjacent notes is the same.

 

Unfortunately, the sum and difference tones do not line up with well tempered tuning. When a piano plays a C major chord, the resultant tones are NOT in C major. We use well tempered tuning to be able to play in all keys - albeit with equal errors in the resultant tones. The "character" of the key is dramatically diminished. Many harpsichords and organs are NOT tuned well tempered. They are tuned for the job immediately at hand. For a Bach Brandenburg Concerto #2, this would favor F major. Baroque period organs are often tuned similarly as they seldom need a lot of different key signatures.

 

Of course there is much more to discuss here, but this gets us started.

 

What I learned in this book was about the compromises necessary when playing - to keep the entire ensemble "happy". Performers cannot just let their instruments do what they do; we need to influence pitch depending on its context. I also learned about making resultant tones work for me. Sometimes the resultant tones ABOVE the played notes are more prominent, sometimes the resultant tones BELOW are. In any case, playback systems with great low-end extension can make 2 violins sound much more "real" due to difference tones being <50 Hz (A 440 Hz and Bb 466 Hz produce resultant tones of 26 Hz and 906 Hz, for instance).

 

To "hear" the effects of resultant tones in large groups, grab any Bruckner symphony (Vienna Phil, Furtwängler or Berlin, Karajan is a great start) and listen to the brass. Bruckner composed as if the orchestra was an organ. The brass section plays block chords and incredible resultant tones result.

 

In small groups, recorder duets make resultant tones very easy to hear. Telemann, Georg Philipp: Blockflötenduette, Label: Raumklang,

EAN: 4035566200409

Order Nr.: RK MA 20040

 

In relation to perception here, maybe we should take Ravel Bolero. Here Ravel orchestrated instruments together to get a combined sound that was new - unlike the instruments being played. Bach used resultant tones to play notes outside of the range of the organ.

 

Playback systems with good tonal discrimination let more of this "goodness" through.
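A quick numeric check of the resultant-tone arithmetic in the quoted passage (my worked example, not the trumpet player's): the difference tone of a justly tuned major third lands exactly two octaves below the root, while the equal-tempered third of a piano puts it between C2 and C#2, which is the sense in which "the resultant tones are NOT in C major". The A/Bb semitone pair from the quote is included for comparison.

```python
C4 = 261.63                      # equal-tempered middle C (Hz)

# Major third above C4: just intonation (5:4) vs. equal temperament (2**(4/12))
E4_just = C4 * 5 / 4             # ~327.0 Hz
E4_et   = C4 * 2 ** (4 / 12)     # ~329.6 Hz

print("difference tone, just third:", round(E4_just - C4, 2), "Hz  (C2 =", round(C4 / 4, 2), "Hz)")
print("difference tone, ET third  :", round(E4_et - C4, 2), "Hz  (between C2 and C#2)")

# The semitone pair mentioned above: A4 = 440 Hz, Bb4 ~ 466.16 Hz
A4, Bb4 = 440.0, 440.0 * 2 ** (1 / 12)
print("A4/Bb4 difference and sum  :", round(Bb4 - A4, 2), "Hz and", round(Bb4 + A4, 2), "Hz")
```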

 

 

Sound Minds Mind Sound

 

 

Link to comment
15 minutes ago, bluesman said:

But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording). This has nothing to do with the electronics - it's purely an acoustic event.

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.

Our posts crossed, but this I guess is the explanation I was seeking. I am still puzzled as to how this situation is resolved in real-world playback. You want the sum products but not the 'looking into back-to-back mirrors' effect, where the image gets repeated beyond the original.

Sound Minds Mind Sound

 

 

Link to comment
24 minutes ago, Audiophile Neuroscience said:

 

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and difference tones generated acoustically was a good thing, indeed making it more natural and more representative of a real-life performance.

 

So I may be misunderstanding your post and also John's @John Dyson explanations that followed your post like "Real world IM as picked up by a mic is cool -- it is when it is a form of signal modifying distortion, that is when it becomes EVIL."...

 

So, is it a good thing or bad?

 

The plot thickens if we also consider illusory phenomena like the "missing" or phantom fundamental and issues of the same pitch but different timbre https://auditoryneuroscience.com/pitch/missing-fundamentals

 

 

A professional symphony trumpet player recommended this book by Christopher Leuba on intonation for those interested https://www.hornguys.com/products/a-study-of-musical-intonation-by-christopher-leuba-pub-cherry

 

I will quote this trumpet player from another website post as I think it teaches a lot about the topic and may be germane to the question

 

 

 

 

 

I very definitely did not make my point clear when mentioning 'distortion'.  I was intending to say that natural distortion that comes before the mic is cool.   Distortion in the electronics after the microphone is uncool.

 

The natural world, instruments, etc produce intermod and nonlinear distortions from time to time. We want the mic to capture those NATURAL sounds. Anything mucked up by ham-handed electronics is generally bad (unless artfully intentional.)

 

Sorry for the confusion.

 

John

 

Link to comment
23 minutes ago, bluesman said:

Actually there is, and it has nothing to do with the playback equipment. Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together. If you play a 260 Hz C and a 320 Hz E at the same time, intermodulation will generate a 60 Hz tone and a 580 Hz tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are addition or subtraction products among the fundamentals and the 60 and 580 Hz tones, etc.

 

Okay, as @pkane2001 pointed out, there is a clear difference between acoustic IM and IMD, Intermodulation Distortion. Now, whether the acoustic IM is something that produces a genuinely microphone-recordable 60 Hz tone, or whether it is something that occurs purely in one's head and doesn't actually exist in the music playing space, should be well understood by now - can someone point to a paper or otherwise which clearly explains which it is?

 

23 minutes ago, bluesman said:

 

 

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.

 

Yes, I understand the value of creating beating effects to detect frequencies ... at one point I used Audacity to add a slightly different frequency to the harmonics of sine bass frequencies being fed to a small mid/bass driver - made it easy to get a measure of the level of distortion of the actual driver, by listening to the intensity of the beats.
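In the same spirit, here is a minimal sketch of that kind of test signal (my reconstruction of the idea, not the actual Audacity session; file name, frequencies and levels are arbitrary): a bass fundamental plus a quiet probe tone placed 1 Hz away from the driver's 2nd harmonic. If the driver generates a 2nd harmonic acoustically, it beats against the probe at 1 Hz, and the audible depth of that beating gives a rough, ear-based gauge of the distortion level; adjusting the probe level until the beating is deepest puts an approximate number on it.

```python
import numpy as np
from scipy.io import wavfile

fs, dur = 48_000, 10.0
t = np.arange(int(fs * dur)) / fs

f0 = 50.0                     # bass fundamental fed to the driver under test
probe = 2 * f0 + 1.0          # probe 1 Hz away from the 2nd harmonic (101 Hz)
probe_level = 0.03            # about -30 dB re: the fundamental (assumed starting point)

x = np.sin(2 * np.pi * f0 * t) + probe_level * np.sin(2 * np.pi * probe * t)
x *= 0.5 / np.max(np.abs(x))  # leave headroom

wavfile.write("beat_probe_50Hz.wav", fs, (x * 32767).astype(np.int16))
# Play through the driver: audible 1 Hz beating around 100 Hz means the driver's
# own 2nd harmonic is comparable in level to the probe tone.
```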

Link to comment
3 minutes ago, Audiophile Neuroscience said:

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and difference tones generated acoustically was a good thing, indeed making it more natural and more representative of a real-life performance.

I'm not sure if it's good or bad or a bit of each. Acoustic intermodulation is part of what we hear at a concert, and it clearly helps to create the overall soundscape of the live performance and venue. But as is clear from some of the responses to my posts in this thread, there aren't many people who've even given this any thought let alone come to an understanding of it. I think it's logical to assume that creating the same intermodulation products during playback that were created and recorded at the performance might have a clearly audible effect on how real it sounds. I do not know this for sure, and it'll take a lot of work to begin to figure it out.  But I strongly suspect it's hiding an overlooked opportunity to further improve SQ.

 

Interestingly, recording bands part by part in isolation eliminates this consideration except for the intermodulation among fundamental and harmonic tones from the individual instrument being recorded.  So playback is the first opportunity for intermodulation among all the instruments in the ensemble.  It's only for full scale live performances by multiple instruments that they can interact acoustically.

 

It doesn't matter whether the recording and/or playback equipment is analog or digital - the sounds of the instruments coming from your speakers are analog, so they will generate natural intermodulation. Maybe we could use real-time spectral analysis to identify any recorded intermodulation and DSP to cancel it with out-of-phase addition. I think this may be important. It also may be a wild goose chase - but I like goose with the right sauce.....

Link to comment
4 minutes ago, fas42 said:

Yes, I understand the value of creating beating effects to detect frequencies ... at one point I used Audacity to add a slightly different frequency to the harmonics of sine bass frequencies being fed to a small mid/bass driver - made it easy to get a measure of the level of distortion of the actual driver, by listening to the intensity of the beats.

Acoustic intermodulation is not in your head - it's physical and audible.  The beat frequency you hear and we use to tune our guitars is recordable and audible on playback.  But it seems that it only occurs with analog sources - it doesn't seem to develop when the differing frequencies themselves are digitally generated.  However, even an all digital record-playback chain starts with analog input from live instruments and ends up with purely analog output as sound, so it definitely occurs with what's coming out of your speakers on playback.

Link to comment

Along these lines, this is why I find most system playback of pipe organ pretty awful - the real-world impact of a live instrument is amazing to hear; the sense of all the harmonics blending is fabulous, the air of the space "loads" to an enormous degree ... and rigs do this poorly, in general. It's why I use a particular organ CD of mine to check this out - and other rigs I try it on are, ummm, duds ...

Link to comment
5 minutes ago, bluesman said:

Acoustic intermodulation is not in your head - it's physical and audible.  That beat frequency you hear and we use to tune our guitars is recordable and audible on playback.

 

Yes, but if I record that with an instrumentation microphone, and look at the spectrum of the captured air vibration - is there actually a 60 Hz signal in the mix?

Link to comment
9 minutes ago, fas42 said:

 

Yes, but if I record that with an instrumentation microphone, and look at the spectrum of the captured air vibration - is there actually a 60 Hz signal in the mix?

Yes.  Here's an example of the kind of research and results found in the scientific literature of acoustics.  The paper (out of Dartmouth) is investigating ways of enhancing the harmonic richness of the sound of musical instruments with the intermodulation products of the instrument's sound and additional injected frequencies.  They include spectral analysis of this phenomenon:

 

"Modulation is often used in sound synthesis to reduce the number of oscillators needed to generate complex timbres by producing additional signal components prior to the output. For example, FM Synthesis is employed to emulate rich timbres of acoustic instruments [5]. Another method of modulation synthesis is via Intermodulation (IM), a form of amplitude modulation acting on the signal harmonics from two or more injected signals....

 

In this paper we have detailed and defined a new approach to nonlinear acoustic synthesis through IM. We have shown that it is possible to produce IM components in a variety of instrumental contexts and have shown that by parametrically increasing modulation depth β, more frequency components can be produced in a continuous, controlled fashion. Control over both the number and frequency of sidebands suggests that IM is a powerful method of producing broad timbral synthesis in modified or newly-designed acoustic instruments, capable of bridging the electronic with the acoustic."
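The paper's apparatus is mechanical, but the arithmetic behind "more components as β increases" can be sketched with a memoryless nonlinearity (an assumption on my part; this is not the authors' model, and the polynomial below is just a stand-in for their modulation depth): two harmonic-rich injected tones pass through y = x + βx² + β²x³, and the number of spectral lines above a fixed level grows as β is raised.

```python
import numpy as np

fs, dur = 48_000, 2.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 220.0, 330.0

# Two "harmonic-rich" injected signals: fundamental plus two harmonics each
def rich(f):
    return sum(0.5**k * np.sin(2 * np.pi * f * (k + 1) * t) for k in range(3))

x = rich(f1) + rich(f2)

def count_lines(sig, floor_db=-60):
    """Number of distinct spectral lines within floor_db of the strongest component."""
    mag = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    db = 20 * np.log10(mag / mag.max() + 1e-20)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return len(set(np.round(freqs[db > floor_db]).astype(int)))

for beta in (0.0, 0.05, 0.2, 0.5):
    y = x + beta * x**2 + beta**2 * x**3          # "depth" loosely standing in for the paper's beta
    print(f"beta = {beta:4.2f}: {count_lines(y)} spectral lines above -60 dB")
```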

Link to comment
39 minutes ago, bluesman said:

Acoustic intermodulation is not in your head - it's physical and audible.  The beat frequency you hear and we use to tune our guitars is recordable and audible on playback.  But it seems that it only occurs with analog sources - it doesn't seem to develop when the differing frequencies themselves are digitally generated.  However, even an all digital record-playback chain starts with analog input from live instruments and ends up with purely analog output as sound, so it definitely occurs with what's coming out of your speakers on playback.

 

IM is part of the natural (analog) world. Most natural things that produce sound are non-linear and multiple fundamental frequencies (most natural sounds contain a ton of these) will create IM components due to these non-linearities. Nothing to do about that, but also nothing to worry about: our ears and brains figured out how to process and not to get confused by all these "extra" frequencies in the natural world. To us, IM is part and parcel of the recognizable sounds. I suspect that if it was possible to completely remove IM from, say, a guitar or piano or human voice, it would sound completely unnatural to us.

Link to comment
30 minutes ago, bluesman said:

Yes.  Here's an example of the kind of research and results found in the scientific literature of acoustics.  The paper (out of Dartmouth) is investigating ways of enhancing the harmonic richness of the sound of musical instruments with the intermodulation products of the instrument's sound and additional injected frequencies.  They include spectral analysis of this phenomenon:

 

 

Okay, had a quick look ... a key bit,

 

Quote

Harmonics arise from nonlinearities intrinsic to electrical or physical systems.

 

IM is found in electrical systems such as amplifiers and effects for musical purposes. For example, "power chords" played on electric guitars are an effect resulting from IM within an over-driven mixing amplifier [2]. IM can produce strong subharmonics by injecting two harmonic-rich signals into a nonlinear electrical amplifier. We propose here a mechanical method that generates intermodulation and parametric acoustic timbres in an acoustic system, such as an augmented musical instrument or effect systems.

 

The important bit is that non-linearity is deliberately introduced, or occurs naturally in the sound making of an individual instrument. It doesn't "occur in the air", which is the vital difference - so, the playback chain should minimise all non-linearities, and then, all is good, 🙂.

Link to comment
 1 hour ago, bluesman said:

Interestingly, recording bands part by part in isolation eliminates this consideration except for the intermodulation among fundamental and harmonic tones from the individual instrument being recorded.  So playback is the first opportunity for intermodulation among all the instruments in the ensemble.  It's only for full scale live performances by multiple instruments that they can interact acoustically.

 

and from another thread

3 hours ago, gmgraves said:

Of course, large works, symphony orchestras, big bands, etc., aren’t there to the extent that smaller works are - the illusion doesn’t scale well at all - but that doesn’t mean that they can’t sound excellent anyway. They just can’t be as palpably in the room as small, intimate performances can be, but, of course, it’s dreaming to think that they could. Just as one can’t fit an 80 piece symphony orchestra in one’s living room (well, most people can’t, anyway), one can’t realistically fit the sound of an 80 piece symphony orchestra in one’s living room either.

 

1 hour ago, bluesman said:

But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording). This has nothing to do with the electronics - it's purely an acoustic event.

-----------

The interesting thing for me is that I have previously commented that recordings of live jazz can lose something in translation for me. I also agree that symphony orchestras are hard to recreate in one's living room. I consider I have a SOTA system, and smaller-scale, more intimate works can sound startlingly real AND well-done studio-produced works can also sound very real (some have said better than live in some ways, but I would say different, not better).

 

SO, could it be possible that this 'double down' acoustic intermodulation of tones is in some way partly responsible for the inherent difficulty playback systems have in getting larger-scale works right? It simply doesn't matter how good the recording or playback is; there will be artifacts created by the added intonation, which color the sound.

 

Descriptions of, say, "congestion" may have nothing to do with the system's lack of resolution or the room acoustics.

Could some people be very sensitive to this effect, and could the preference of those like @STC for reproductions with less crosstalk, such as ambiophonics, be because there is less acoustic intermodulation?

Sound Minds Mind Sound

 

 

Link to comment
1 hour ago, bluesman said:

But as is clear from some of the responses to my posts in this thread, there aren't many people who've even given this any thought let alone come to an understanding of it. I think it's logical to assume that creating the same intermodulation products during playback that were created and recorded at the performance might have a clearly audible effect on how real it sounds.

1+

I think there can be too much focus on the audio signal without always thinking about what comes before and after. If you don't understand both of these things, it likely makes it difficult to know what to look for in the audio signal and leads to missed opportunities.

Sound Minds Mind Sound

 

 

Link to comment
21 minutes ago, Audiophile Neuroscience said:

 

But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording). This has nothing to do with the electronics - it's purely an acoustic event.

 

No, they're not. Provided the drivers are of a decent standard, those 2 frequencies are all that will exist, measured close to the drivers - measure the signal being fed to the drivers and, again, there are only 2 frequencies. There is no such thing as a purely acoustic event, in the sense you're trying to conceive of it.

Link to comment
13 minutes ago, Audiophile Neuroscience said:

Could some people be very sensitive to this effect, and could the preference of those like @STC for reproductions with less crosstalk, such as ambiophonics, be because there is less acoustic intermodulation?


I did not read all the posts. I only read the first post and your post quoting me, so my reply is confined to these two posts. I will be emphasizing the word ONE in my reply here.

In audio, we measure only ONE signal at a time. A reproduction of ONE signal (channel) always correlates with measurements for assessment of sound quality.

All sound that reaches our ears is a mix of direct sound and reverberation. In a live concert, the indirect sound of the reverberation makes up more than 80% of the sound we hear. This indirect sound hardly represents the exact sound waves of the original direct sound. Your perfect measurements are no longer relevant now, as the sound waves are already altered by the time they reach the ears.

So far we are dealing with only ONE sound. In nature there is no stereo sound. In stereo reproduction, what we hear is two sounds representing the original single sound. The effect of soundstage is a creation of our mind based on level difference cues. This again has nothing to do with the measurements, as we do not all utilize ILD and ITD differences in the same way. For some, the ITD difference can matter from around 900 Hz to 1500 Hz, and for others over a different range. This again makes all the measurements useless, as we simply do not know how the brain is going to process the stereo sound for localization.
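To put rough numbers on the ITD side of this (my own back-of-envelope sketch using the common Woodworth spherical-head approximation, not anything from the post itself; the head radius and the "fully wraps above 1/ITD" rule of thumb are assumptions): the maximum interaural time difference for an average head is around 0.65 ms, and a pure tone's interaural phase wraps around somewhere in the high hundreds of Hz to roughly 1.5 kHz, the same general region mentioned above.

```python
import numpy as np

c = 343.0      # speed of sound, m/s
r = 0.0875     # assumed average head radius, m

def itd_woodworth(azimuth_deg):
    """Woodworth spherical-head estimate of interaural time difference (seconds)."""
    th = np.radians(azimuth_deg)
    return (r / c) * (th + np.sin(th))

for az in (15, 30, 60, 90):
    itd = itd_woodworth(az)
    print(f"azimuth {az:2d} deg: ITD = {itd * 1e6:4.0f} µs, "
          f"interaural phase fully wraps above ~{1 / itd:5.0f} Hz")
```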
 

In Ambiophonics (which is actually listening to stereo via speakers without crosstalk), you provide the signal to the ears that corresponds to the real sound, which is only ONE sound. This has nothing to do with acoustic intermodulation. It simply reduces the confusion of the brain grappling with the conflicting ITD caused by stereo phantom image(s). It also reduces the comb-filter effect caused by the two signals of the same sound....

 

If the post gets deleted, I will email you, AN. 
 

 

Link to comment
1 hour ago, Audiophile Neuroscience said:

SO, could it be possible that this 'double down' acoustic intermodulation of tones is in some way partly responsible for the inherent difficulty playback systems have in getting larger-scale works right? It simply doesn't matter how good the recording or playback is; there will be artifacts created by the added intonation, which color the sound.

 

45 minutes ago, bluesman said:

That’s exactly my belief.

 

@esldude and I discussed this quite some time back.

 

I wondered if reproduction of ultrasonics helped reproduce some of the natural harmonics of musical instruments. He maintained that any intermodulation of that type from home speakers, over and above the audible frequencies captured at the mics, would be distortion. I'm not exactly sure whether Paul is arguing the opposite.

 

I'm quite sure at least some speaker designers are well aware of the possibility such intermodulation may occur and take steps to minimize it insofar as possible. I'd guess it would show up pretty clearly with some speaker tests.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
53 minutes ago, STC said:


I did not read all the posts. I only read the first post and your post quoting me, so my reply is confined to these two posts. I will be emphasizing the word ONE in my reply here.

In audio, we measure only ONE signal at a time. A reproduction of ONE signal (channel) always correlates with measurements for assessment of sound quality.

All sound that reaches our ears is a mix of direct sound and reverberation. In a live concert, the indirect sound of the reverberation makes up more than 80% of the sound we hear. This indirect sound hardly represents the exact sound waves of the original direct sound. Your perfect measurements are no longer relevant now, as the sound waves are already altered by the time they reach the ears.

So far we are dealing with only ONE sound. In nature there is no stereo sound. In stereo reproduction, what we hear is two sounds representing the original single sound. The effect of soundstage is a creation of our mind based on level difference cues. This again has nothing to do with the measurements, as we do not all utilize ILD and ITD differences in the same way. For some, the ITD difference can matter from around 900 Hz to 1500 Hz, and for others over a different range. This again makes all the measurements useless, as we simply do not know how the brain is going to process the stereo sound for localization.

In Ambiophonics (which is actually listening to stereo via speakers without crosstalk), you provide the signal to the ears that corresponds to the real sound, which is only ONE sound. This has nothing to do with acoustic intermodulation. It simply reduces the confusion of the brain grappling with the conflicting ITD caused by stereo phantom image(s). It also reduces the comb-filter effect caused by the two signals of the same sound....

 

If the post gets deleted, I will email you, AN. 
 

 

Hi ST

thanks for that and I understand how ambiophonics would present one sound to two ears as in real life.

 

The issue here is acoustic intermodulation aka intonation as discussed in the quotes below.

 

So, intonation / acoustic intermodulation obviously exists in real life acoustic performances as musicians and physicists would agree.

 

Those real-life tones are captured in the recording and, even with mono reproduction, would be reproduced, creating a second generation of new acoustic intermodulation - a 'double down' effect. It is this that may color the sound.

So I thought of you here. Surely ambiophonics would tend to mitigate this effect, quite apart from any other benefits. I don't think it could eliminate it, just ameliorate the effect, or at least not exacerbate it like stereo crosstalk. It would be another plus in arguing for ambiophonics, if I am right.

 

 

 

 

3 hours ago, Audiophile Neuroscience said:

 

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and difference tones generated acoustically was a good thing, indeed making it more natural and more representative of a real-life performance.

 

So I may be misunderstanding your post and also John's @John Dyson explanations that followed your post like "Real world IM as picked up by a mic is cool -- it is when it is a form of signal modifying distortion, that is when it becomes EVIL."...

 

So, is it a good thing or bad?

 

The plot thickens if we also consider illusory phenomena like the "missing" or phantom fundamental and issues of the same pitch but different timbre https://auditoryneuroscience.com/pitch/missing-fundamentals

 

 

A professional symphony trumpet player recommended this book by Christopher Leuba on intonation for those interested https://www.hornguys.com/products/a-study-of-musical-intonation-by-christopher-leuba-pub-cherry

 

I will quote this trumpet player from another website post as I think it teaches a lot about the topic and may be germane to the question

EDIT -added quote

<Quote>



For those of us that think that an octave is simply a doubling or halving of frequency, this book is certainly recommended reading. It was written for musicians, but is applicable to anyone wanting to understand what our ears and brains really want!

 

To me, intonation is many things:

 

First: it is the relationship of the tones played by different instruments at the same time. When two trumpets (or other instruments) are playing, sum and difference tones are generated acoustically.

 

When "in tune" those resultant tones line up in frequency and become a constructive part part of the musical presentation. If we play "out of tune", the sum and difference tones become destructive, the end product becomes hard to listen to. If we are talking about playing major chords, the sum and difference tones are all in the same key. A common sense of "intonation" makes large groups of instruments create an "orchestral fabric" that solo instruments can move in and out of.

 

Second: intonation is the relationship of notes to one another in the same instrument. A piano, for instance, is tuned "well tempered" (equal tempered). This means that, moving up the scale, the frequency ratio between adjacent notes is the same.

 

Unfortunately, the sum and difference tones do not line up with well tempered tuning. When a piano plays a C major chord, the resultant tones are NOT in C major. We use well tempered tuning to be able to play in all keys - albeit with equal errors in the resultant tones. The "character" of the key is dramatically diminished. Many harpsichords and organs are NOT tuned well tempered. They are tuned for the job immediately at hand. For a Bach Brandenburg Concerto #2, this would favor F major. Baroque period organs are often tuned similarly as they seldom need a lot of different key signatures.

 

Of course there is much more to discuss here, but this gets us started.

 

What I learned in this book was about the compromises necessary when playing - to keep the entire ensemble "happy". Performers cannot just let their instruments do what they do; we need to influence pitch depending on its context. I also learned about making resultant tones work for me. Sometimes the resultant tones ABOVE the played notes are more prominent, sometimes the resultant tones BELOW are. In any case, playback systems with great low-end extension can make 2 violins sound much more "real" due to difference tones being <50 Hz (A 440 Hz and Bb 466 Hz produce resultant tones of 26 Hz and 906 Hz, for instance).

 

To "hear" the effects of resultant tones in large groups, grab any Bruckner symphony (Vienna Phil, Furtwängler or Berlin, Karajan is a great start) and listen to the brass. Bruckner composed as if the orchestra was an organ. The brass section plays block chords and incredible resultant tones result.

 

In small groups, recorder duets make resultant tones very easy to hear. Telemann, Georg Philipp: Blockflötenduette, Label: Raumklang,

EAN: 4035566200409

Order Nr.: RK MA 20040

 

In relation to perception here, maybe we should take Ravel Bolero. Here Ravel orchestrated instruments together to get a combined sound that was new - unlike the instruments being played. Bach used resultant tones to play notes outside of the range of the organ.

 

Playback systems with good tonal discrimination let more of this "goodness" through.

 

<end quote>

 

 

3 hours ago, bluesman said:

Actually there is, and it has nothing to do with the playback equipment. Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together. If you play a 260 Hz C and a 320 Hz E at the same time, intermodulation will generate a 60 Hz tone and a 580 Hz tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are addition or subtraction products among the fundamentals and the 60 and 580 Hz tones, etc.

 

When you play back the program material, you will generate again the same acoustic intermodulation from the same two original tones, which are now playing from your speakers along with their recorded intermodulation products. It is indeed created anew, and is purely an acoustic phenomenon just as it was in the performance. It has nothing to do with the electronics, which add whatever intermodulation distortion products they generate. But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore on the recording). This has nothing to do with the electronics - it's purely an acoustic event.

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.

 

Sound Minds Mind Sound

 

 

Link to comment
3 minutes ago, Audiophile Neuroscience said:

Hi ST

thanks for that and I understand how ambiophonics would present one sound to two ears as in real life.

 

The issue here is acoustic intermodulation aka intonation as discussed in the quotes below.

 

So, intonation / acoustic intermodulation obviously exists in real life acoustic performances as musicians and physicists would agree.

 

Those real-life tones are captured in the recording and, even with mono reproduction, would be reproduced, creating a second generation of new acoustic intermodulation - a 'double down' effect. It is this that may color the sound.

So I thought of you here. Surely ambiophonics would tend to mitigate this effect, quite apart from any other benefits. I don't think it could eliminate it, just ameliorate the effect, or at least not exacerbate it like stereo crosstalk. It would be another plus in arguing for ambiophonics, if I am right.

 

 

 

 

 

 


Hi AN,

 

It will always exist because we are receiving two inputs via two ears. All sound reaching the ear drums is already altered, even for ONE mono signal, due to the difference in the pinna between the left and right ears. You are always listening to two inputs at all times, which cannot represent the exact measurements (sound waves) of the original, even after taking into consideration the frequency shaping by our ears.

Furthermore, our ears are not fixed in space, as we continually move even when we try to hold our breath. Even the slightest movements alter the FR that reaches your ear drums. A perfect 440 Hz will never be perfect by the time it reaches the ears, because we are constantly moving, even if it is just a few millimeters.

Link to comment
19 minutes ago, Jud said:

I'd guess it would show up pretty clearly with some speaker tests.

 

In fact, free software like REW used with a calibrated mic should show it with test tones at various frequencies, so you can check whether this might be a concern with your speakers.
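REW has its own generator and distortion readouts, but the underlying two-tone idea can be sketched standalone (an editorial illustration with assumed tone choices; the polynomial below merely simulates a slightly nonlinear speaker-plus-mic path, and in a real measurement you would substitute the calibrated-mic capture for it): play a 60 Hz + 7 kHz pair through the speaker and sum the energy of the sidebands that appear around 7 kHz at multiples of 60 Hz.

```python
import numpy as np

fs, dur = 48_000, 5.0
t = np.arange(int(fs * dur)) / fs
f_lo, f_hi = 60.0, 7000.0                        # SMPTE-style two-tone pair, 4:1 level ratio

stimulus = 0.8 * np.sin(2 * np.pi * f_lo * t) + 0.2 * np.sin(2 * np.pi * f_hi * t)

# Stand-in for the speaker + mic path: a mildly nonlinear transfer (assumed).
# In practice, replace 'captured' with the recording made via the calibrated mic.
captured = stimulus + 0.02 * stimulus**2 + 0.01 * stimulus**3

freqs = np.fft.rfftfreq(len(t), 1 / fs)
mag = np.abs(np.fft.rfft(captured * np.hanning(len(t))))

def amp_near(f, tol=2.0):
    """Peak spectral magnitude within +/- tol Hz of frequency f."""
    band = (freqs > f - tol) & (freqs < f + tol)
    return mag[band].max()

carrier = amp_near(f_hi)
sidebands = [amp_near(f_hi + n * f_lo) for n in (-3, -2, -1, 1, 2, 3)]
imd = np.sqrt(sum(a**2 for a in sidebands)) / carrier
print(f"two-tone IMD around {f_hi:.0f} Hz: {100 * imd:.3f} % ({20 * np.log10(imd):.1f} dB)")
```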

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
