
Misleading Measurements



24 minutes ago, pkane2001 said:

 

Again, why? I don't measure DACs to determine the exact size of the soundstage they'll create. I measure them to confirm that the soundstage in the recording will not be disturbed. For that, I use the variables I already listed.

 

Which is where I agree with Paul. The size of the soundstage is set by the space the recording was made in - if it's a single person in a sound booth, it will be tiny, 🙂; if it was recorded in St Peter's, it will indeed sound immense ... and if it's done by studio manipulation, it can span the Grand Canyon - if desired, 😁.

10 minutes ago, pkane2001 said:

 

I'm pretty sure that this differs from person to person. You can try some of the easy-to-configure cross-feed plugins with headphones to get a sense of what is audible to you.

 

I haven't yet figured out how to judge soundstage depth on headphones. Maybe there's a correlation with some parameter yet to be determined, but in general I don't get a perceived image far enough 'out of my head' to hear where the soundstage ends.
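For readers unfamiliar with cross-feed, here is a minimal sketch of the idea, assuming nothing about any particular plugin: each output channel gets a slightly delayed, attenuated copy of the opposite channel, loosely mimicking how each ear also hears the far speaker. The gain and delay defaults below are invented placeholders.

```python
import numpy as np

def simple_crossfeed(left, right, sample_rate, gain_db=-8.0, delay_us=300.0):
    """Mix a delayed, attenuated copy of each channel into the other.

    A bare-bones sketch of the cross-feed idea; real plugins also apply
    frequency shaping. The gain and delay defaults are invented placeholders.
    """
    gain = 10 ** (gain_db / 20.0)
    delay = int(round(delay_us * 1e-6 * sample_rate))
    pad = np.zeros(delay)
    delayed_left = np.concatenate([pad, left])[: len(left)]
    delayed_right = np.concatenate([pad, right])[: len(right)]
    return left + gain * delayed_right, right + gain * delayed_left
```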


 

35 minutes ago, pkane2001 said:

 

Again, why?

 

So the loop begins. It started with your challenge to my statement

3 hours ago, Audiophile Neuroscience said:

 

measurements do not show and do not correlate with what is heard.

I later refined this to direct, measurable correlates of hearing soundstage height, width and depth parameters,

 

and now you say:

Quote

I don't measure DACs to determine the exact size of the soundstage they'll create.

 

so there is no correlation if you do not determine the size of the soundstage that is heard - clearly it is not a "perfect" or "exact" correlation as claimed, and the association or concordance is non-quantifiable. You depend on soundstage "non-disturbance", presuming multiple measures are all OK, with no quantifiable correlation other than "all good" or "it is not".

Sound Minds Mind Sound

 

 

1 minute ago, Audiophile Neuroscience said:

 

 

So the loop begins. It started with your challenge to my statement

 

and now you say:

 

so there is no correlation if you do not determine the size of the soundstage that is heard - clearly it is not a "perfect" or "exact" correlation as claimed, and the association or concordance is non-quantifiable. You depend on soundstage "non-disturbance", presuming multiple measures are all OK, with no quantifiable correlation other than "all good" or "it is not".

 

I'm not sure I can explain it any better, but I'll try one more time. If I know how to place any instrument in a multi-track recording at a specific soundstage position (I do, and so do all the mastering engineers), and this is done using these three variables, then we know the correlation between these three variables and soundstage position, and therefore size.

 

Whether I personally measure "soundstage size" or not is irrelevant, and I'm not interested in such a measurement. But nevertheless, correlation exists, and it's a mathematical function performed by most DAWs and plugins that generate stereo position from these three variables. If you claim that there's no correlation between these three measurements and soundstage, then everybody has been doing it wrong all this time.
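A bare-bones sketch of the "mathematical function performed by most DAWs" being described here: the three variables aren't restated in this exchange, so the code below assumes two of the usual ones, inter-channel level difference (a constant-power pan law) and inter-channel time difference. It is an illustration of the principle, not any specific DAW's panner.

```python
import numpy as np

def place_source(mono, pan=0.0, itd_us=0.0, sample_rate=48000):
    """Place a mono source on the stereo stage using an inter-channel level
    difference (constant-power pan law) plus an inter-channel time difference.

    pan runs from -1.0 (hard left) to +1.0 (hard right); itd_us is the
    inter-channel delay in microseconds. This is an illustration of the
    principle, not any specific DAW's panner.
    """
    theta = (pan + 1.0) * np.pi / 4.0            # 0 .. pi/2
    left_gain, right_gain = np.cos(theta), np.sin(theta)

    delay = int(round(abs(itd_us) * 1e-6 * sample_rate))
    pad = np.zeros(delay)
    if itd_us > 0:                               # source toward the right: left ear lags
        left = np.concatenate([pad, left_gain * mono])
        right = np.concatenate([right_gain * mono, pad])
    else:                                        # source toward the left (or centred)
        left = np.concatenate([left_gain * mono, pad])
        right = np.concatenate([pad, right_gain * mono])
    return left, right
```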

 

 

5 minutes ago, pkane2001 said:

 

I'm not sure I can explain it any better, but I'll try one more time. If I know how to place any instrument in a multi-track recording at a specific soundstage position (I do, and so do all the mastering engineers), and this is done using these three variables, then we know the correlation between these three variables and soundstage position, and therefore size.

 

Whether I personally measure "soundstage size" or not is irrelevant, and I'm not interested in such a measurement. But nevertheless, correlation exists, and it's a mathematical function performed by most DAWs and plugins that generate stereo position from these three variables. If you claim that there's no correlation between these three measurements and soundstage, then everybody has been doing it wrong all this time.

 

 

You've looped back to talking about artificial recordings, which have already been discussed, and the scope is not limited to them. Even then you have not offered a quantifiable correlation, and that is what has been asked for - an objective measure that listeners can rely upon.

Sound Minds Mind Sound

 

 

4 minutes ago, Audiophile Neuroscience said:

You've looped back to talking about artificial recordings, which have already been discussed, and the scope is not limited to them. Even then you have not offered a quantifiable correlation, and that is what has been asked for - an objective measure that listeners can rely upon.

 

The answer was already given for what determines the accuracy of soundstage reproduction. If you want something different, like soundstage dimensions in inches, you'll have to ask someone else, as that was never something I wanted to know. The math exists; you'll just have to do some homework.

 

2 minutes ago, pkane2001 said:

 

The answer was already given for what determines the accuracy of soundstage reproduction. If you want something different, like soundstage dimensions in inches, you'll have to ask someone else, as that was never something I wanted to know. The math exists; you'll just have to do some homework.

 

No, a quantifiable, objective measure of correlation that listeners can rely upon was not given, and now there's a suggestion to search for what was not asked for - dimensions in inches - another loop back to what I previously said was not being argued = strawman.

Best to end the loop here😉

 

Sound Minds Mind Sound

 

 

6 minutes ago, Audiophile Neuroscience said:

No, a quantifiable, objective measure of correlation that listeners can rely upon was not given, and now there's a suggestion to search for what was not asked for - dimensions in inches - another loop back to what I previously said was not being argued = strawman.

Best to end the loop here😉

 

 

You did ask for a way to determine if: "soundstage was measured to be excellent, width was x, depth was y and height an exceptional 97.4".  I assume X and Y would have to be in some units of length, so why not inches? Do you have something against inches? 

 

Anyway, I agree, let's stop here.

3 minutes ago, pkane2001 said:

 

You did ask for a way to determine if: "soundstage was measured to be excellent, width was x, depth was y and height an exceptional 97.4".  I assume X and Y would have to be in some units of length, so why not inches? Do you have something against inches? 

 

Anyway, I agree, let's stop here.

It can be in any measurement unit you prefer, even percent correlation or concordance. This was answered previously. /loop

Sound Minds Mind Sound

 

 

5 hours ago, pkane2001 said:

 

A DAC doesn't have soundstage height, width and depth.

 

I believe it’s safe to say that different DACs are not equally good at presenting a realistic sound-stage, just as they differ in other SQ characteristics.

 

If lower THD or distortion actually correlated with a bigger 3D sound-stage, people would not choose tube gear for that exact reason.

2 hours ago, Summit said:

I believe it’s safe to say that different DACs are not equally good at presenting a realistic sound-stage, just as they differ in other SQ characteristics.

 

If lower THD or distortion actually correlated with a bigger 3D sound-stage, people would not choose tube gear for that exact reason.


As discussed, it is not THD but interchannel differences that determine soundstage “quality”. These are caused by differing levels and amounts/types of distortion between channels.
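As a rough illustration of what "inter-channel differences" can mean in measurable terms, here is a sketch that estimates only a broadband level difference and a time offset between the two channels. The function name is made up, and real soundstage analysis would look at these quantities per frequency band and over time.

```python
import numpy as np

def interchannel_differences(left, right, sample_rate):
    """Estimate the broadband inter-channel level difference (dB) and the
    time offset (seconds) between two channels via cross-correlation.

    A sketch for test signals only; real soundstage analysis would look at
    these quantities per frequency band and over time.
    """
    level_db = 20 * np.log10(np.std(left) / np.std(right))
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples the left channel lags the right
    return level_db, lag / sample_rate
```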

13 minutes ago, Miska said:

 

Overall phase behavior, and also to some extent modulator behavior. Not just inter-channel differences, but differences that apply to both channels. And for the inter-channel part, also things like channel cross-talk.

 

Wow, that is good. Definitely, modulation distortion on a channel can distort the temporal relationships as a *secondary* effect. Modulation distortion (such as what one gets with fast gain control/AGC/compression/expansion) can 'fuzz' spatial relationships, along with the compression/expansion itself causing a modification of the 'space'.

 

John

 

56 minutes ago, John Dyson said:

Wow, that is good. Definitely, modulation distortion on a channel can distort the temporal relationships as a *secondary* effect. Modulation distortion (such as what one gets with fast gain control/AGC/compression/expansion) can 'fuzz' spatial relationships, along with the compression/expansion itself causing a modification of the 'space'.

 

John

 

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron), to see what’s in the spectrum of their IM products.  I neither hear nor see nor measure the sum and difference frequencies like I do with pure analog tone generators and amplification chains.  So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched out digital sine wave on a good high res analog scope, you can see what looks to me to be the effect of sampling as a fine discontinuity that seems to reflect sampling rate.  
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

44 minutes ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron), to see what’s in the spectrum of their IM products.  I neither hear nor see nor measure the sum and difference frequencies like I do with pure analog tone generators and amplification chains.  So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched out digital sine wave on a good high res analog scope, you can see what looks to me to be the effect of sampling as a fine discontinuity that seems to reflect sampling rate.  
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

 

IMD is just the way a nonlinear transfer function affects multiple tones. Harmonic distortion is a simple case of IMD with a single tone. 


If you want to see what IMD looks or sounds like, try my DISTORT app. You can look at pure tones, multi-tones, or just apply the same simulated IMD to any recorded piece of music and listen to it.

 

When you say that you want to learn about IM in the digital domain, do you mean in the frequency domain? Standard digital processing (such as in a DAC) should produce no IMD, unless specifically designed to introduce a non-linear transformation. It's the analog section that will introduce some level of non-linearity.
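This is not the DISTORT app itself, just a few lines of NumPy making the same point: push two clean tones through an invented nonlinear transfer function and the sum/difference (and higher-order) products appear; remove the nonlinearity and they don't.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                                   # 1 second, 1 Hz FFT bins
x = 0.4 * np.sin(2 * np.pi * 262 * t) + 0.4 * np.sin(2 * np.pi * 330 * t)

# Invented, mildly nonlinear transfer function (illustration only).
y = x + 0.05 * x**2 + 0.02 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
spectrum[0] = 0.0                                        # ignore the DC term the x**2 part creates

# Strongest components: the 262/330 Hz tones plus IMD products such as
# 330-262=68 Hz, 330+262=592 Hz, 2*262=524 Hz, 2*330=660 Hz, 2f1+-f2, 2f2+-f1 ...
top = np.argsort(spectrum)[-12:]
print(sorted(int(round(f)) for f in freqs[top]))
```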

46 minutes ago, pkane2001 said:

 

IMD is just the way a nonlinear transfer function affects multiple tones. Harmonic distortion is a simple case of IMD with a single tone. 


If you want to see what IMD looks or sounds like, try my DISTORT app. You can look at pure tones, multi-tones, or just apply the same simulated IMD to any recorded piece of music and listen to it.

 

When you say that you want to learn about IM in the digital domain, do you mean in the frequency domain? Standard digital processing (such as in a DAC) should produce no IMD, unless specifically designed to introduce a non-linear transformation. It's the analog section that will introduce some level of non-linearity.

But I’m talking about intermodulation, not IM distortion. Intermodulation is a perfectly normal acoustic phenomenon that occurs when two instruments play different notes at the same time.  It’s an integral part of live music, so it’s in the source performance. IM distortion is the addition of intermodulation products that are not in the source.  
 

Not all intermodulation in audio is distortion.  In fact, natural intermodulation among instruments is a large component of the sound of a symphony orchestra. The phenomenon of recreation of the fundamental from the harmonic structure of a tone is what lets you hear lower notes in a musical performance than were played.  The lowest note in the spectral splash of the tympani can be heard below the pitch to which the head is tuned, if this is what the composition demands. Some composers use this effect as part of their music.  As I recall, the oboe’s spectrum is almost entirely harmonics, yet we hear the fundamental being played because we reconstruct it from those overtones and intermodulation products.
 

You can play two analog sine waves of differing frequencies and find the sum and difference intermodulation products in a spectral analysis of the sound.  The first ones are also easily heard if the fundamentals are audible and their sum and/or difference frequencies are in the audible range, although amplitude drops off precipitously after that. This does not appear to be the case with digitized sine waves, as I don’t see the same spectrum - what should be the biggest sum and difference components seem to be missing.  And I don’t hear those IM products when I play the tones through speakers or headphones.
 

So I’m trying to understand if and how digital tones intermodulate with each other, not how IM distortion products are created.  There’s a big difference between the two.

1 hour ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron), to see what’s in the spectrum of their IM products.  I neither hear nor see nor measure the sum and difference frequencies like I do with pure analog tone generators and amplification chains.  So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched out digital sine wave on a good high res analog scope, you can see what looks to me to be the effect of sampling as a fine discontinuity that seems to reflect sampling rate.  
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

Real-world IM as picked up by a mic is cool -- it is when it is a form of signal-modifying distortion that it becomes EVIL.

 

I have a super simple example of IM, but it is hard to quantify the audible damage, because it all depends on gain control slew, rate of gain control changes, and the periodic nature.  There is a form of AM that is totally analogous to AM radio modulation -- it is created by dynamic range compression, expansion, limiting or traditional NR systems.

 

Any time you grab a signal and multiply it by a varying gain, that is EXACTLY the same as AM modulating a carrier, except the carrier is the recording/music signal. The goal of 'gain control' is usually just to dynamically modify the signal level. However, when simple jFET, opto, or THAT Corp chips do the gain control, they MATHEMATICALLY create sidebands in the signal. These sidebands spread the signal both frequency-wise and temporally. There are super-special mathematical tricks that can push the sidebands down to inaudibility, and I created such a technique, but the math is beyond BSc-level understanding.

 

There are other strange side effects to the modulation: DURING the gain control slew, it weirdly opens up windows for the audio components to modulate each other, but that is mostly because the gain control signal is not totally 'pure' WRT the desired gain.

 

There are two major kinds of modulation distortions avoided by the DHNRDS -- one form results from the gain control signal itself 'wobbling' based upon the signal waveform, and the other kind of modulation distortion comes from the gain control being applied to the signal.  Both of these evil behaviors mix with each other, making the result even worse.

 

Micro-level forms of modulation can also happen to a digital signal; these come from the final clock rate moving around, which also causes sidebands and, as a side effect of filtering further down the chain, can even amplitude modulate the signal.

 

Any REAL source of modulation distortions in a recorded signal should be avoided, unless used for artful purposes (e.g. FM synth effects), or maybe even other purposeful distortions.  Artful distortion isn't bad, unless it is bad art :-).

 

John
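Taking the AM analogy literally, a toy example with invented numbers (a 1 kHz "music" tone and a gain wobbling at 20 Hz with 10% depth) shows the sidebands landing exactly where AM theory predicts, at the carrier frequency plus and minus the modulation rate:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                          # 1 second, 1 Hz FFT bins
carrier = np.sin(2 * np.pi * 1000 * t)          # stand-in for the music signal

# A gain wobbling at 20 Hz around unity, e.g. a fast compressor breathing
# between transients; rate and depth here are made-up round numbers.
gain = 1.0 + 0.1 * np.sin(2 * np.pi * 20 * t)
modulated = gain * carrier

spectrum = np.abs(np.fft.rfft(modulated)) / len(modulated)
for f in (980, 1000, 1020):                     # bin index == frequency in Hz
    print(f, "Hz:", round(20 * np.log10(2 * spectrum[f]), 1), "dBFS")
# The 20 Hz gain wobble creates sidebands at 1000 +/- 20 Hz, about 26 dB
# below the carrier (10% modulation depth -> sideband amplitude = depth/2).
```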

 

1 hour ago, bluesman said:

But I’m talking about intermodulation, not IM distortion. Intermodulation is a perfectly normal acoustic phenomenon that occurs when two instruments play different notes at the same time.  It’s an integral part of live music, so it’s in the source performance. IM distortion is the addition of intermodulation products that are not in the source.  
 

Not all intermodulation in audio is distortion.  In fact, natural intermodulation among instruments is a large component of the sound of a symphony orchestra. The phenomenon of recreation of the fundamental from the harmonic structure of a tone is what lets you hear lower notes in a musical performance than were played.  The lowest note in the spectral splash of the tympani can be heard below the pitch to which the head is tuned, if this is what the composition demands. Some composers use this effect as part of their music.  As I recall, the oboe’s spectrum is almost entirely harmonics, yet we hear the fundamental being played because we reconstruct it from those overtones and intermodulation products.
 

You can play two analog sine waves of differing frequencies and find the sum and difference intermodulation products in a spectral analysis of the sound.  The first ones are also easily heard if the fundamentals are audible and their sum and/or difference frequencies are in the audible range, although amplitude drops off precipitously after that. This does not appear to be the case with digitized sine waves, as I don’t see the same spectrum - what should be the biggest sum and difference components seem to be missing.  And I don’t hear those IM products when I play the tones through speakers or headphones.
 

So I’m trying to understand if and how digital tones intermodulate with each other, not how IM distortion products are created.  There’s a big difference between the two.


Sorry, I thought you were looking at IMD. Digital tones simply add together in the time domain. In the frequency domain there is no intermodulation; the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and arise in the normal analog processing of the signal by a nonlinear transfer function.
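In contrast to the nonlinearity sketch a few posts up, here is the purely linear case: adding two digital tones produces a spectrum containing nothing but the two tones, down to numerical noise.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                          # 1 second, 1 Hz FFT bins
mix = 0.4 * np.sin(2 * np.pi * 262 * t) + 0.4 * np.sin(2 * np.pi * 330 * t)

spectrum = np.abs(np.fft.rfft(mix)) / len(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / fs)

# Anything more than 120 dB below the strongest bin is numerical noise.
threshold = spectrum.max() / 10 ** (120 / 20)
print(freqs[spectrum > threshold])              # expect only [262., 330.]
```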

21 minutes ago, pkane2001 said:


Sorry, I thought you were looking at IMD. Digital tones simply add together in the time domain. In the frequency domain there is no intermodulation; the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and arise in the normal analog processing of the signal by a nonlinear transfer function.

But spectral analysis shows more than just the 2 digital sine wave frequencies and the electronic floor of noise and distortion (which is quite low when examined without the sine waves).   So something is happening.  I can’t define any relationships among the peaks I see - they’re clearly not natural harmonics or harmonic products.   I’d love to know what this is, where it originates, how it affects audio playback, and if we can use or reduce it.

 

My next experiment is to mix the same two sine waves at multiple sampling rates, to see if the spectrum changes with resolution of the digital waveforms. This may all be nonproductive, except for the educational value.  But I just gotta know what’s under the hood!

2 minutes ago, bluesman said:

But spectral analysis shows more than just the 2 digital sine wave frequencies and the electronic floor of noise and distortion (which is quite low when examined without the sine waves).   So something is happening.  I can’t define any relationships among the peaks I see - they’re clearly not natural harmonics or harmonic products.   I’d love to know what this is, where it originates, how it affects audio playback, and if we can use or reduce it.

 

Maybe I missed it, but what are you measuring? Any frequencies that are not in the original signal are distortion. IMD happens in the analog domain, except for some special software or plugins  designed to simulate it, like my DISTORT.

 

Spectral analysis of an analog signal can certainly show IMD, but usually not of a digital one.

32 minutes ago, pkane2001 said:


Sorry, I thought you were looking at IMD. Digital tones simply add together in the time domain. In the frequency domain there is no intermodulation; the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and arise in the normal analog processing of the signal by a nonlinear transfer function.

Instruments themselves can intermodulate because of natural nonlinearities.  I wouldn't be surprised if closely spaced traditional instruments intermodulate with each other as well.  Of course, it requires significant (live) volumes.

 

No matter: as long as the natural performance is mic'ed and the preamp is good, the electronic equipment itself shouldn't produce a lot of modulation components (of whatever type). The transducer (mic) might intermodulate to some extent -- one reason to use small diaphragms at high levels. The actual performance (sources) certainly can intermodulate to one extent or another. I truly don't know how much -- it is for those who work with live music nowadays to measure the natural modulations (if they are interested). I am interested in real-world information on the matter -- cool stuff.

 

An example of something that naturally creates intermod distortions (Doppler/FM distortion) is a single-cone speaker trying to reproduce the entire frequency range...   The long excursions of the lows will certainly Doppler-modulate the highs.   (That is one reason for the early development of coaxial and triaxial speakers.)   Geesh, a poorly constructed speaker box, with lots of bass, buzzing and buzzing, is also a form of IMD :-).

 

John
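A toy model of the Doppler/FM effect described here: an 8 kHz tone riding on a cone whose excursion follows a 40 Hz tone gets phase-modulated, and sidebands show up at 8 kHz plus and minus multiples of 40 Hz, frequencies that were never in the input. The frequencies and modulation depth are invented round numbers, not measurements of any real driver.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                          # 1 second, 1 Hz FFT bins

# Toy model: the cone excursion follows a 40 Hz tone and phase-modulates
# an 8 kHz tone reproduced by the same cone. beta (peak phase deviation,
# in radians) is an invented number, not a measured excursion.
beta = 0.3
signal = np.sin(2 * np.pi * 8000 * t + beta * np.sin(2 * np.pi * 40 * t))

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
for f in (7920, 7960, 8000, 8040, 8080):        # bin index == frequency in Hz
    print(f, "Hz:", round(20 * np.log10(2 * spectrum[f] + 1e-15), 1), "dBFS")
# Sidebands appear at 8000 +/- n*40 Hz, frequencies that were never in the
# input: intermodulation between the low note and the high note that the
# single driver is reproducing at the same time.
```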

 

2 minutes ago, John Dyson said:

Instruments themselves can intermodulate because of natural nonlinearities.  I wouldn't be surprised if closely spaced traditional instruments intermodulate with each other as well.  Of course, it requires significant (live) volumes.

 

No matter: as long as the natural performance is mic'ed and the preamp is good, the electronic equipment itself shouldn't produce a lot of modulation components (of whatever type). The transducer (mic) might intermodulate to some extent -- one reason to use small diaphragms at high levels. The actual performance (sources) certainly can intermodulate to one extent or another. I truly don't know how much -- it is for those who work with live music nowadays to measure the natural modulations (if they are interested). I am interested in real-world information on the matter -- cool stuff.

 

An example of something that naturally creates intermod distortions (Doppler/FM distortion) is a single-cone speaker trying to reproduce the entire frequency range...   The long excursions of the lows will certainly Doppler-modulate the highs.   (That is one reason for the early development of coaxial and triaxial speakers.)   Geesh, a poorly constructed speaker box, with lots of bass, buzzing and buzzing, is also a form of IMD :-).

 

John

 


Air itself can become nonlinear and result in IMD at very loud levels.

1 hour ago, pkane2001 said:

 

Maybe I missed it, but what are you measuring? Any frequencies that are not in the original signal are distortion. IMD happens in the analog domain, except for some special software or plugins  designed to simulate it, like my DISTORT.

 

Spectral analysis of an analog signal can certainly show IMD, but usually not of a digital one.

I expected to find nothing - but it shows content in addition to the two digital "sine waves" being played and the noise + distortion floor of the equipment, which is minimal.  And I get the same spectrum whether I play the tones directly into an analyzer or record them with Audacity and analyze the wav file.  I'll have to find the time to run some and post them.

 

Thanks!

