
Misleading Measurements



1 minute ago, Audiophile Neuroscience said:

 

How can I point to a measurement that doesn't exist... but let's take soundstage: what measurement correlates with that? Amir did a whole thing about it (I really must start keeping records... not) which I described as no more than an opinion piece: an assumption that lots of measures taken collectively would somehow assure us of accurately hearing soundstage.


Wait, there are perfectly reasonable findings by research scientists correlating measurements to preferences. So your claims are certainly not true in general. Do you have specifics?
 

Soundstage is perfectly defined by phase and level differences between the left and the right channel, plus reverb. In fact, most soundstage in recordings is artificially created using these tools in mastering, so we know exactly how it’s produced and therefore how to measure it.
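To make that concrete, here is a minimal Python/numpy sketch (illustrative only, not any real DAW's code) of how a mono source is placed in the stereo field using two of those variables: a constant-power level pan plus a small interchannel delay. Reverb, the third variable, would be layered on top and is omitted here for brevity.

import numpy as np

def pan_mono_source(x, azimuth, fs=44100, max_itd_ms=0.6):
    # Place mono signal x at azimuth in [-1 = hard left, +1 = hard right].
    theta = (azimuth + 1) * np.pi / 4              # map [-1, 1] -> [0, pi/2]
    gain_l, gain_r = np.cos(theta), np.sin(theta)  # constant-power pan law
    left, right = gain_l * x, gain_r * x

    # Delay the far channel a fraction of a millisecond to mimic the
    # time-of-arrival cue (crudely rounded to whole samples).
    d = int(abs(azimuth) * max_itd_ms * 1e-3 * fs)
    if d and azimuth > 0:    # source on the right: left ear hears it later
        left = np.concatenate([np.zeros(d), left[:-d]])
    elif d and azimuth < 0:  # source on the left: right ear hears it later
        right = np.concatenate([np.zeros(d), right[:-d]])
    return np.stack([left, right], axis=1)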

Just now, fas42 said:

 

Incorrect. At a simplistic level, yes, that's true. But a high-performing system reproduces very low-level detail far more clearly, allowing the ear/brain to decode the meaning of this extra, usable information. Anyone who has had a rig evolve into a state of tune that allows this fuller appreciation of the soundstage capture understands what is possible, and at least has an intuitive idea of how to move his setup towards this capability.


I’ll ask you the same question: do you have any objective evidence for this claim aside from your personal experience 30 years ago?
 

Because I can certainly cite books and studies that support mine; I've been looking into spatial hearing and how to reproduce realistic 3D sound for the past few years.

12 minutes ago, Audiophile Neuroscience said:

 

Wait, let's avoid that loop.

Level can certainly influence left/right imaging, and I agree phase manipulates certain aspects, but can you provide evidence that "Soundstage is perfectly defined" by such measures? Presumably it must be reported in just about all measurement reviews, like "soundstage was measured to be excellent, width was x, depth was y, and height an exceptional 97.4" 😁


Channel level differences are measured all the time. I always measure phase aberrations. Reverb is not something introduced by DACs or amps, so there's no need to measure it, but it can be measured, even with free tools like REW.
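Here's roughly what those two checks amount to, as a bare-bones numpy sketch (not REW's or anyone else's actual code; `capture` is assumed to be an (N, 2) float array recorded at the device output):

import numpy as np

def rms_db(x):
    # RMS level in dB (tiny offset avoids log of zero)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

def channel_imbalance_db(capture):
    # Level difference, left minus right, with the same tone on both channels
    return rms_db(capture[:, 0]) - rms_db(capture[:, 1])

def crosstalk_db(capture_left_only):
    # Leakage into the right channel when only the left channel is driven;
    # more negative means better channel separation
    return rms_db(capture_left_only[:, 1]) - rms_db(capture_left_only[:, 0])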

 

But please, provide SOME objective evidence for anything you claim. Have some respect for the rules of this forum, and share something more than a personal opinion.


 

3 minutes ago, Audiophile Neuroscience said:

Where did I say that?

 

Here you go:

22 minutes ago, Audiophile Neuroscience said:

Please show measurement values where "Soundstage is perfectly defined"  like "soundstage was measured to be width was x, depth was y and height an exceptional 97.4"

 

4 minutes ago, Audiophile Neuroscience said:

Give me either where "Soundstage is perfectly defined"

 

As I said, in most recordings the soundstage is artificially generated, with instruments and voices placed at desired positions using level, phase, and reverb. The soundstage you hear is defined by those parameters. Perfectly.

 

2 minutes ago, opus101 said:

 

Which particular measurements of a DAC are going to indicate whether that DAC flattens the soundstage? Assume here a truly acoustic recording made in a concert hall, not an artificially synthesized one.

 

Distortions between channels: crosstalk, level imbalance/nonlinearity, phase.

5 minutes ago, opus101 said:

 

You've lost me, so let me backtrack.

 

Let's begin with your first one, which was 'distortions between channels'. I'm unclear what that means, even though I'm pretty clear on distortion itself. Soundstage is going to get flatter if one channel has more distortion than the other?

 

Differences in time of arrival and in amplitude between the two channels will change the soundstage, as will crosstalk.

10 minutes ago, opus101 said:

 

You've quantified this (the former) with your software? Does Amir make measurements of differences in time of arrival between channels for DACs? I must confess I've not seen any, but I'm an infrequent visitor to ASR.

 

Amir usually measures channel imbalance and crosstalk. I don't think he's measured phase differences before, but I could be wrong. DeltaWave allows for direct computation of phase differences between two related waveforms, including music or test tones. It computes RMS phase-difference numbers by frequency range, but also plots the phase difference, as well as correcting for it. Below is one example comparing an analog capture to the digital original, but you could just as easily compare the left channel to the right in an analog capture, say at the output of a DAC (the blue line is the original, uncorrected phase difference; red is after DW corrected it):

 

[Plot: phase difference vs. frequency between an analog capture and the digital original; blue: uncorrected, red: after DeltaWave's correction]
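For the curious, the core of such a measurement is only a few lines. This is not DeltaWave's actual algorithm, just a numpy sketch of the idea: compare the phase of one captured waveform against another, per FFT bin.

import numpy as np

def phase_difference_deg(ref, test, fs):
    # Phase of `test` relative to `ref`, per FFT bin, in degrees.
    n = min(len(ref), len(test))
    w = np.hanning(n)                        # window to reduce leakage
    spec_ref = np.fft.rfft(ref[:n] * w)
    spec_test = np.fft.rfft(test[:n] * w)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Cross-spectrum angle = phase(test) - phase(ref); skip near-empty bins
    keep = np.abs(spec_ref) > 1e-6 * np.abs(spec_ref).max()
    phase = np.degrees(np.angle(spec_test[keep] * np.conj(spec_ref[keep])))
    return freqs[keep], phase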

 

1 minute ago, opus101 said:

 

So on your first cited measurement, it's not one that's currently at all popular? If that's true, then it won't help me pick a DAC which doesn't flatten the soundstage, ISTM.

 

It's not done by Amir, but that doesn't mean it doesn't exist or can't be measured. That was the claim made by AN.

2 minutes ago, Audiophile Neuroscience said:

 

Incorrect. Show me where I said that channel imbalance, crosstalk, or phase could not be measured.

I am asking for the actual measurements (numbers) that tell me "perfectly" how I will hear the soundstage height, width, and depth. The actual numbers that correlate to what is heard in some real-world example.

 

Again, why? I don't measure DACs to determine the exact size of the soundstage they'll create. I measure them to verify that the soundstage in the recording will not be disturbed. For that, I use the variables I already listed.

24 minutes ago, opus101 said:

Shall we move on to the next one, which was crosstalk? How much can I allow before soundstage depth is compromised?

 

I'm pretty sure this differs from person to person. You can try some of the easy-to-configure cross-feed plugins with headphones to get a sense of what is audible to you; the sketch below shows the basic idea. Phase and crosstalk are two things I want to add to the DISTORT app when I get some free time. I'm pretty sensitive to crosstalk, and cross-feed never sounded right to me, so I don't use it.
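This is a bare-bones crossfeed, not any shipping plugin's algorithm: each channel gets an attenuated, slightly delayed copy of the other mixed in. Real cross-feed also filters the leaked signal, which is skipped here, and the parameter values are only illustrative.

import numpy as np

def crossfeed(stereo, fs, leak_db=-20.0, delay_ms=0.3):
    # stereo: (N, 2) array; mix a delayed, attenuated copy of each
    # channel into the opposite one
    g = 10 ** (leak_db / 20)
    d = int(delay_ms * 1e-3 * fs)
    delayed = lambda x: np.concatenate([np.zeros(d), x[:len(x) - d]])
    left, right = stereo[:, 0], stereo[:, 1]
    out = np.stack([left + g * delayed(right),
                    right + g * delayed(left)], axis=1)
    return out / max(np.max(np.abs(out)), 1.0)  # keep peaks below clipping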

1 minute ago, Audiophile Neuroscience said:

 

 

So the loop begins. It started with your challenge to my statement

 

and now you say:

 

So there is no correlation if you do not determine the size of the soundstage that is heard. Clearly it is not a "perfect" or "exact" correlation as claimed, and the association or concordance is non-quantifiable. You depend on soundstage "non-disturbance", presuming multiple measures are all OK, with no quantifiable correlation other than all good or not.

 

I'm not sure I can explain it any better, but I'll try one more time. If I know how to place any instrument in a multi-track recording at a specific soundstage position (I do, and so do all the mastering engineers), and this is done using these three variables, then we know the correlation between these three variables and soundstage position, and therefore size.

 

Whether I personally measure "soundstage size" or not is irrelevant, and I'm not interested in such a measurement. But nevertheless, correlation exists, and it's a mathematical function performed by most DAWs and plugins that generate stereo position from these three variables. If you claim that there's no correlation between these three measurements and soundstage, then everybody has been doing it wrong all this time.
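To make the "mathematical function" point concrete: here's the level part of that correlation run in reverse, as a sketch. It assumes nothing more than the constant-power pan law sketched earlier in the thread; given the measured level ratio between channels, you get the pan position back.

import numpy as np

def azimuth_from_levels(rms_left, rms_right):
    # Invert the constant-power pan law: gain_R / gain_L = tan(theta)
    theta = np.arctan2(rms_right, rms_left)   # in [0, pi/2]
    return 4 * theta / np.pi - 1              # back to [-1 (left), +1 (right)]

A source panned halfway right comes back as 0.5 through a transparent chain; channel imbalance or crosstalk shifts the estimate, which is exactly why those are the things worth measuring.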

 

 

4 minutes ago, Audiophile Neuroscience said:

You've looped back to talking about artificial recordings, which have already been discussed, and the scope is not limited to this. Even then you have not offered a quantifiable correlation, and that is what has been asked for: an objective measure that listeners can rely upon.

 

The answer was already given for what determines the accuracy of soundstage reproduction. If you want something different, like soundstage dimensions in inches, you'll have to ask someone else, as that was never something I wanted to know. The math exists; you'll just have to do some homework.

 

6 minutes ago, Audiophile Neuroscience said:

No, a quantifiable objective measure of correlation that listeners can rely upon was not given, and now there's a suggestion to search for what was not asked for, dimensions in inches. Another loop back to what I previously said was not being argued: a strawman.

Best to end the loop here😉

 

 

You did ask for a way to determine if "soundstage was measured to be excellent, width was x, depth was y and height an exceptional 97.4". I assume x and y would have to be in some units of length, so why not inches? Do you have something against inches?

 

Anyway, I agree, let's stop here.

2 hours ago, Summit said:

I believe it’s safe to say that different DACs are not equally good at presenting a realistic sound-stage, just as they differ in other SQ characteristics.

 

If lower THD or distortion actually correlated with a bigger 3D sound-stage, people would not choose tube gear for that exact reason.


As discussed, it is not THD but interchannel differences that determine soundstage “quality”. These are caused by differing levels and amounts/types of distortion between channels.

44 minutes ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM. Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals (592 Hz and 68 Hz), plus those of all the harmonics from each instrument, are also generated at an audible but far lower level. This IM is captured by the mic and, in an all-analog recording chain, is preserved just like the notes that are played. It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction. Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron) to see what’s in the spectrum of their IM products. I neither hear nor see nor measure the sum and difference frequencies like I do with pure analog tone generators and amplification chains. So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched-out digital sine wave on a good high-res analog scope, you can see what looks to me like the effect of sampling: a fine discontinuity that seems to reflect the sampling rate.
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

 

IMD is just the way a nonlinear transfer function affects multiple tones. Harmonic distortion is a simple case of IMD with a single tone. 


If you want to see what IMD looks or sounds like, try my DISTORT app. You can look at pure tones, multi-tones, or just apply the same simulated IMD to any recorded piece of music and listen to it.

 

When you say that you want to learn about IM in the digital domain, do you mean in the frequency domain? Standard digital processing (such as in a DAC) should produce no IMD, unless specifically designed to introduce a non-linear transformation. It's the analog section that will introduce some level of non-linearity.
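This is easy to check numerically. In the sketch below, two digital tones (262 and 330 Hz, as in your violin/trumpet example) are summed, and one copy is passed through a mild second-order nonlinearity standing in for a generic analog stage (the 0.2*x^2 term is just an illustrative stand-in, not a model of any particular device). The linear sum shows nothing at the sum and difference frequencies; the nonlinear copy shows both.

import numpy as np

fs, n = 48000, 1 << 16
t = np.arange(n) / fs
tones = 0.4 * np.sin(2 * np.pi * 262 * t) + 0.4 * np.sin(2 * np.pi * 330 * t)

linear = tones                       # digital addition: no new frequencies
nonlinear = tones + 0.2 * tones**2   # nonlinear transfer function: IMD

def spectrum_db(x):
    # magnitude spectrum in dB relative to the largest peak
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(s / s.max() + 1e-20)

freqs = np.fft.rfftfreq(n, 1 / fs)
lin_db, nl_db = spectrum_db(linear), spectrum_db(nonlinear)
for f in (330 - 262, 330 + 262):     # difference (68 Hz) and sum (592 Hz)
    b = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: linear {lin_db[b]:7.1f} dB, nonlinear {nl_db[b]:7.1f} dB")

If a spectrum of a purely digital file shows sum and difference products, they were either already in the recording or added somewhere in the capture or playback chain; the addition of the tones itself contributes nothing.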

1 hour ago, bluesman said:

But I’m talking about intermodulation, not IM distortion. Intermodulation is a perfectly normal acoustic phenomenon that occurs when two instruments play different notes at the same time.  It’s an integral part of live music, so it’s in the source performance. IM distortion is the addition of intermodulation products that are not in the source.  
 

Not all intermodulation in audio is distortion.  In fact, natural intermodulation among instruments is a large component of the sound of a symphony orchestra. The phenomenon of recreation of the fundamental from the harmonic structure of a tone is what lets you hear lower notes in a musical performance than were played.  The lowest note in the spectral splash of the tympani can be heard below the pitch to which the head is tuned, if this is what the composition demands. Some composers use this effect as part of their music.  As I recall, the oboe’s spectrum is almost entirely harmonics, yet we hear the fundamental being played because we reconstruct it from those overtones and intermodulation products.
 

You can play two analog sine waves of differing frequencies and find the sum and difference intermodulation products in a spectral analysis of the sound.  The first ones are also easily heard if the fundamentals are audible and their sum and/or difference frequencies are in the audible range, although amplitude drops off precipitously after that. This does not appear to be the case with digitized sine waves, as I don’t see the same spectrum - what should be the biggest sum and difference components seem to be missing.  And I don’t hear those IM products when I play the tones through speakers or headphones.
 

So I’m trying to understand if and how digital tones intermodulate with each other, not how IM distortion products are created.  There’s a big difference between the two.


Sorry, I thought you were looking at IMD. Digital tones simply add together in the time domain; in the frequency domain there is no intermodulation, and the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and arise normally in the analog processing of the signal through a nonlinear transfer function.

2 minutes ago, bluesman said:

But spectral analysis shows more than just the two digital sine-wave frequencies and the electronic floor of noise and distortion (which is quite low when examined without the sine waves). So something is happening. I can’t define any relationships among the peaks I see; they’re clearly not natural harmonics or harmonic products. I’d love to know what this is, where it originates, how it affects audio playback, and whether we can use or reduce it.

 

Maybe I missed it, but what are you measuring? Any frequencies that are not in the original signal are distortion. IMD happens in the analog domain, except for some special software or plugins designed to simulate it, like my DISTORT.

 

Spectral analysis of an analog signal can certainly show IMD, but usually not of a digital one.

