
Misleading Measurements



2 minutes ago, John Dyson said:

Instruments themselves can intermodulate because of natural nonlinearities.  I wouldn't be surprised if closely spaced traditional instruments wouldn't intermodulate also.  Of course, it requires significant (live) volumes.

 

No matter; as long as the natural performance is mic'ed and the preamp is good, the electronic equipment itself shouldn't produce a lot of modulation components (of whatever type). The transducer (mic) might intermodulate to some extent -- one reason to use small diaphragms at high levels. The actual performance (the sources) certainly can intermodulate to one extent or another. I truly don't know how much -- it is for those who work with live music nowadays to measure the natural modulations (if they are interested). I am interested in real-world information on the matter -- cool stuff.

 

An example of something that naturally creates intermod distortion (Doppler/FM distortion) is a single cone speaker trying to reproduce the entire frequency range. The long excursions of the lows will certainly Doppler-modulate the highs. (That is one reason for the early development of coaxial and triaxial speakers.) Geesh, a poorly constructed speaker box, with lots of bass, buzzing and buzzing is also a form of IMD :-).
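For anyone curious, here is a minimal Python sketch of that Doppler mechanism; the 50 Hz and 3 kHz tones, the 5 mm excursion and the sample rate are illustrative assumptions, not measurements of any particular driver:

```python
# Minimal sketch: Doppler/FM distortion of a single cone reproducing a 50 Hz
# bass tone and a 3 kHz tone at the same time. The bass excursion moves the
# effective source position, phase-modulating the high tone and creating
# sidebands at 3000 +/- n*50 Hz. All values are illustrative assumptions.
import numpy as np

fs = 48000                 # sample rate, Hz
t = np.arange(fs) / fs     # 1 second
f_low, f_high = 50.0, 3000.0
excursion = 0.005          # assumed peak cone excursion, 5 mm
c = 343.0                  # speed of sound, m/s

x_low = excursion * np.sin(2 * np.pi * f_low * t)       # cone displacement from the bass tone
doppler = np.sin(2 * np.pi * f_high * (t - x_low / c))  # high tone, Doppler-modulated

spec = np.abs(np.fft.rfft(doppler * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Levels at the carrier and the first/second sidebands around 3 kHz
for f in (2900, 2950, 3000, 3050, 3100):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {20 * np.log10(spec[k] / spec.max()):6.1f} dB")
```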

 

John

 


Air itself can become nonlinear and result in IMD at very loud levels.

39 minutes ago, bluesman said:

Acoustic intermodulation is not in your head - it's physical and audible.  The beat frequency you hear and we use to tune our guitars is recordable and audible on playback.  But it seems that it only occurs with analog sources - it doesn't seem to develop when the differing frequencies themselves are digitally generated.  However, even an all digital record-playback chain starts with analog input from live instruments and ends up with purely analog output as sound, so it definitely occurs with what's coming out of your speakers on playback.

 

IM is part of the natural (analog) world. Most natural things that produce sound are non-linear, and multiple fundamental frequencies (most natural sounds contain a ton of these) will create IM components due to those non-linearities. Nothing to be done about that, but also nothing to worry about: our ears and brains have figured out how to process all these "extra" frequencies in the natural world without getting confused. To us, IM is part and parcel of recognizable sounds. I suspect that if it were possible to completely remove IM from, say, a guitar or piano or human voice, it would sound completely unnatural to us.

2 hours ago, Summit said:

 

All “things” have to be very good to record and reproduce a realistic 3D sound-stage, and the measurements that correlate with a realistic 3D sound-stage are not simply the level of inter-channel differences or THD. In fact, a big, flat left-right sound-stage is easy to record and reproduce, while I am talking about a realistic 3D sound-stage with good imaging.


“Everything matters” is a possibility, but the question is why?  
 

Left-to-right position is determined by differences between the channels (in level and arrival time).

 

Depth is detected primarily through reverb. It’s possible that some distortions will destroy very low-level reverb that our ears may find useful, but that should be measurable, as they will affect any low-level signal, not just reverb. The question is then: at what level can we still hear reverb, and at what level does it still help the brain determine distance?

9 minutes ago, bluesman said:

A sampled sine wave is technically not a continuous function, which may be why digitizing alters, reduces, or eliminates acoustic intermodulation.


A sampled sine wave is reconstructed as a continuous function, as long as it stays within the limits set by the sampling frequency. It’s stored as discrete samples but reproduced as a continuous waveform by any properly constructed DAC. That’s the result of the infamous Nyquist-Shannon sampling theorem.
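For anyone who wants to see the reconstruction at work, here is a minimal Python sketch of Whittaker-Shannon (sinc) interpolation, evaluated between the stored sample instants; the 1 kHz tone, 48 kHz rate and block length are arbitrary illustrative choices, and the finite sum only approximates the ideal (infinite) reconstruction:

```python
# Minimal sketch of Whittaker-Shannon (sinc) reconstruction: a band-limited sine,
# stored as discrete samples, is recovered as a continuous function of time.
# Frequencies, rate and block length are arbitrary illustrative choices.
import numpy as np

fs = 48000              # sample rate, Hz
f0 = 1000.0             # tone well below fs/2
n = np.arange(64)       # a short block of samples
x = np.sin(2 * np.pi * f0 * n / fs)   # the stored, discrete samples

def reconstruct(t, samples, fs):
    """Evaluate the reconstructed waveform at arbitrary (continuous) times t."""
    k = np.arange(len(samples))
    # sum_k x[k] * sinc((t - k/fs) * fs); np.sinc is the normalized sinc
    return np.array([np.sum(samples * np.sinc((ti - k / fs) * fs)) for ti in t])

# Evaluate between the original sample instants, in the middle of the block,
# and compare with the ideal continuous sine. The residual error comes from
# truncating the (ideally infinite) sum to 64 samples.
t = 20 / fs + (np.arange(200) / 200) * (20 / fs)
err = reconstruct(t, x, fs) - np.sin(2 * np.pi * f0 * t)
print("max reconstruction error:", np.max(np.abs(err)))
```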

5 minutes ago, sandyk said:

 

Even if it is correct, it's a waste of time, as non-linear human hearing is involved, and in any event, no matter how high the initial bit rate, it still ends up converted to 24 bit at best, which virtually ALL DACs are unable to fully resolve, with many DACs not properly resolving more than 21 bits.


What exactly is a waste of time? Understanding what causes IMD and what doesn’t? That’s the only thing that Frank stated; nothing related to the number of bits a DAC can resolve.

9 minutes ago, sandyk said:

And dumbing it down to playable bit rates again increases it again.


Maybe. But that’s still outside of what was being discussed. The error in digital audio is due to quantization in the last bit. It’s small to begin with, and dithering and noise shaping render it even less audible. Now compare that to the IMD levels of an audiophile amplifier, say an SET.
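As a rough illustration of why dither helps, here is a minimal Python sketch (my own, not from the thread) that quantizes a low-level tone to 16 bits with and without TPDF dither and reports the worst distortion spur; the tone level, frequency and FFT length are arbitrary assumptions:

```python
# Minimal sketch: quantization of a low-level sine at 16 bits, with and without
# TPDF dither. Without dither the error is correlated with the signal and shows
# up as distortion spurs; with dither it becomes a benign noise floor.
# Tone frequency/level, sample rate and FFT length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs, f0, N = 48000, 1000.0, 1 << 16
t = np.arange(N) / fs
x = 10 ** (-80 / 20) * np.sin(2 * np.pi * f0 * t)   # -80 dBFS tone, near the 16-bit floor
q = 1.0 / (2 ** 15)                                  # one 16-bit LSB (full scale = +/-1)

def spur_level(y):
    """Largest spectral line excluding the fundamental, in dB relative to the fundamental."""
    spec = np.abs(np.fft.rfft(y * np.blackman(N)))
    k0 = int(round(f0 * N / fs))
    fund = spec[k0 - 3:k0 + 4].max()
    spec[k0 - 5:k0 + 6] = 0                          # notch out the fundamental
    return 20 * np.log10(spec[1:].max() / fund)

plain = np.round(x / q) * q                                           # straight rounding
dith  = np.round((x + (rng.random(N) - rng.random(N)) * q) / q) * q   # +/-1 LSB TPDF dither

print("worst spur, undithered:", round(spur_level(plain), 1), "dB re fundamental")
print("worst spur, dithered:  ", round(spur_level(dith), 1), "dB re fundamental")
```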

3 hours ago, Audiophile Neuroscience said:

The question is, can superposition of sound waves do this? There can be nodes and anti-nodes and beats, but I am not sure about continuous separate modulated tones.

 

This is not the first time it's been mentioned here. Beating and the "superposition" of sound waves are not intermodulation. Intermodulation has a very specific definition in audio and signal theory, and the addition of sine waves causing troughs and peaks, or even completely canceling each other, is not it. I suspect that a lot of this discussion is going in orthogonal directions simply because the wrong terms are being used.

43 minutes ago, Jud said:

 

 

Thanks for the explanations. In this formulation, speakers are analog and non-linear, so the concerns regarding a "second round" of intermodulation, as evidenced by @bluesman's listening/measuring, appear to have some support, though I believe @pkane2001 is attributing most or all of this effect to mics rather than speakers. Of course this narrows down the concern to recordings made with microphones from analog sources, which isn't much narrowing down at all. The same will happen when listening to these sources "live," but whether it's an accurate representation of what one would hear live, who knows? @bluesman is probably as well equipped as anyone to answer that, having played live music professionally for decades.


No, all analog devices are guilty. Speakers too. I didn’t mention speakers because I assumed one was not involved in capturing from a mic in @bluesman’s scenario. 
 

 

6 hours ago, Jud said:

 

What speakers did you use for this, BTW?

 

Speaking of speakers 😎 Remember, I claimed that they are non-linear analog devices, right? Here's an example of this non-linearity causing harmonic distortion (which has the same cause as IMD, except that it's measured with a single tone, as we discussed already). This is from a very detailed and well-done speaker review by Erin from Erin's Corner. This little speaker produces THD exceeding 1% around 600Hz at normal listening levels, for example:

 

[Image: harmonic distortion vs. frequency plot from Erin's speaker review]

 

40 minutes ago, bluesman said:

I don't think that's correct.  From the Linear Circuit Design Handbook (2008) by Hank Zumbahlen, harmonic distortion is defined as the ratio of harmonics to fundamental when a (theoretically) pure sinewave is reconstructed.  Intermodulation products are not harmonics of the tones that produce them, but harmonic distortion products are.  So HD tones can be identified as multiples of the fundamental frequency producing them, while IM tones can be identified by their relationships to the tones that produce them (sum & difference frequencies).

 

I'll repeat this one more time: HD and IMD are both the result of exactly the same non-linearity. HD is measured with a single tone, IMD with multiple tones. The cause of both is the same.

 

Here's an example: the exact same non-linearity applied to single-tone and two-tone signals using the DISTORT app. This is HD:

[Image: DISTORT spectrum of a single tone through the non-linearity, showing harmonic distortion products]

 

A two-tone signal (5kHz and 5.5kHz) going through the exact same non-linearity. Notice the IMD components:

[Image: DISTORT spectrum of the 5kHz + 5.5kHz two-tone signal through the same non-linearity, showing IMD components]

 

Here are the two tones without non-linearity:

[Image: spectrum of the two tones with no non-linearity applied]
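For anyone who wants to reproduce the idea without DISTORT, here is a minimal Python sketch that pushes a single tone and the 5 kHz + 5.5 kHz pair through one and the same non-linearity; the polynomial below is only an example transfer function, not the one used in the screenshots, and the levels and sample rate are illustrative assumptions:

```python
# Minimal sketch: one and the same non-linearity applied to a single 5 kHz tone
# (producing harmonic distortion) and to a 5 kHz + 5.5 kHz pair (producing IMD).
import numpy as np

fs, N = 96000, 1 << 16
t = np.arange(N) / fs

def nonlin(x):
    # example non-linear transfer function f(x): 2nd- and 3rd-order terms
    return x + 0.05 * x**2 - 0.1 * x**3

def report(y, check_freqs):
    spec = np.abs(np.fft.rfft(y * np.blackman(N)))
    freqs = np.fft.rfftfreq(N, 1 / fs)
    ref = spec.max()
    for f in check_freqs:
        k = np.argmin(np.abs(freqs - f))
        print(f"  {f:7.0f} Hz: {20 * np.log10(spec[k - 3:k + 4].max() / ref):6.1f} dB")

single = nonlin(0.8 * np.sin(2 * np.pi * 5000 * t))
double = nonlin(0.4 * np.sin(2 * np.pi * 5000 * t) + 0.4 * np.sin(2 * np.pi * 5500 * t))

print("single tone through f(x): harmonics of 5 kHz")
report(single, [5000, 10000, 15000])
print("two tones through f(x): sum/difference (IM) products appear as well")
report(double, [500, 4500, 5000, 5500, 6000, 10500, 15500, 16000])
```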

 

 

 

39 minutes ago, bluesman said:

I cannot find any specifics on your website or your forum (or thread or whatever it is) on ASR.  What exactly is this magical nonlinearity on which you’re hanging all your hats?  Where and how is it applied to the signal?  How have you validated the stated effects of your app?
 

Without validation, there’s no way for us to know that it does what you say it does.  I can’t identify any mathematical model for production of a set of true harmonic frequencies from a single tone but a totally different set of nonharmonic frequencies from two.

 

DISTORT applies a non-linear transfer function to any desired signal. This is a simple mathematical operation. The result is measured using a spectral plot. You can confirm that it generates the correct IMD frequencies with two or more tones: specify the same frequencies and amplitudes in DISTORT and compare the result with any text on IMD, or calculate the various sums and differences yourself.
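If you'd rather do the sums and differences yourself, a small helper along these lines (my own sketch, not part of DISTORT) enumerates the expected product frequencies up to a given order:

```python
# Minimal helper (not part of DISTORT): enumerate the expected 2nd- and 3rd-order
# intermodulation and harmonic product frequencies for two input tones, so the
# peaks in a measured spectrum can be checked against them.
from itertools import product

def im_products(f1, f2, max_order=3):
    freqs = set()
    for m, n in product(range(-max_order, max_order + 1), repeat=2):
        order = abs(m) + abs(n)
        if 0 < order <= max_order:
            f = abs(m * f1 + n * f2)
            if f > 0:
                freqs.add((order, f))
    return sorted(freqs)

# Order 1 entries are the fundamentals themselves; higher orders are the
# harmonics and the sum/difference (IM) products.
for order, f in im_products(5000, 5500):
    print(f"order {order}: {f:7.1f} Hz")
```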

 

I suggest you do some searching and reading before posting more theories on what IM is or isn't, as this topic does not require speculation. Here's the first thing I found when searching: an Analog Devices op-amp tutorial that discusses IMD. To quote a couple of relevant paragraphs (highlights are mine):

 

Quote

INTERMODULATION DISTORTION (IMD)

When a spectrally pure sinewave passes through an amplifier (or other active device), various harmonic distortion products are produced depending upon the nature and the severity of the non-linearity. However, simply measuring harmonic distortion produced by single tone sinewaves of various frequencies does not give all the information required to evaluate the amplifier's potential performance in a communications application. [...] It is often required that an amplifier be rated in terms of the intermodulation distortion (IMD) produced with two or more specified tones applied.

 

Intermodulation distortion products are of special interest in the IF and RF area, and a major concern in the design of radio receivers. Rather than simply examining the harmonic distortion or total harmonic distortion (THD) produced by a single tone sinewave input, it is often required to look at the distortion products produced by two tones.

 

 

 

1 hour ago, bluesman said:

I cannot find any specifics on your website or your forum (or thread or whatever it is) on ASR.  What exactly is this magical nonlinearity on which you’re hanging all your hats?  Where and how is it applied to the signal?  How have you validated the stated effects of your app?
 

Without validation, there’s no way for us to know that it does what you say it does.  I can’t identify any mathematical model for production of a set of true harmonic frequencies from a single tone but a totally different set of nonharmonic frequencies from two.

 

Here's a DISTORT simulation of IMD with two tones:

[Image: DISTORT simulation of IMD with two tones]

 

Here's a published plot of what IMD might look like with two tones and some (unknown) non-linearity and how to compute corresponding frequencies. Notice any similarity? 

[Image: published IMD example plot (IMD-Graph-1.jpg) showing two tones and the formulas for computing the intermodulation product frequencies]

 

And since you wanted to validate what DISTORT does, here are the frequencies. Please verify against the above calculations. Note that 507Hz on the very left is actually 500Hz, but the screen resolution is too low in the lower frequencies to point to the exact Hz. The rest of the frequencies are spot-on, as far as I can tell.

[Image: DISTORT readout listing the measured IMD product frequencies]

 

 

3 hours ago, Audiophile Neuroscience said:

 

The issue of validating the tool being used is a legitimate question, just as it is for blind listening tests or any other measuring tool: the test of the test parameters, e.g. published sensitivity, specificity, reliability, etc. If you cannot produce these numbers in relation to a gold standard reference (and I am not saying you can't), then your tool is not validated.

 

The comparison images you present are IMO hardly convincing without knowing more about the legitimacy of the reference on which you rely. It should be an established gold standard. It does not inspire me thus far to know the reference source is some guy on a blog selling stuff who lists his credentials as:

 

"I began posting training videos on YouTube to help teach the volunteer sound team at my church how to learn the sound board. What I didn’t see coming was that God had a much larger plan!

Videos that were intended for only a handful of volunteers have been viewed roughly 4 million times!

I want to do everything in my power to help churches succeed and their volunteers find confidence. Here are 6 ways you can access my knowledge and experience. I have arranged these in order, from those that require the least investment to those that require the most.

  1. Search my site. I am continually putting out new posts and content on how to master your audio console. Simply use the search bar in the upper right corner to get started.

  2. Subscribe to my updates. By subscribing, you’ll get a notification when I post something new and additional content that I haven’t posted publicly. Learn more…

  3. Subscribe on YouTube. If you’re a visual learner, you’ll want to subscribe on Youtube for access to over 100 videos and updates on new training. Learn more…

  4. Buy one of my products. You can find them in the store. I created these products and resources to simplify and help speed up your workflows. I have several other products in development, so stay tuned! Learn more…

  5. Work with me on Skype. I have limited availability for 1-on-1 online training & consulting. This is a great option if you have questions on a setup that is complicated and specific. Skype trainings are booked on a per-hour basis. Contact my team to get it set up.

  6. Hire me for in-person training. I love working with large teams, face to face, setting up the board and training everyone all at once! Due to the time required, the costs associated with this option are typically very high. If you feel that your team can benefit from a 2 day training, contact my team for more information."


DISTORT has been validated, and not just by me. It’s impossible to validate a tool without first understanding what it does. Once you do understand, the validation becomes extremely obvious.
 

So, what would you like to see validated?

 

 

21 minutes ago, Audiophile Neuroscience said:

 

A good start would be offering what validation you have. I don't buy 'you wouldn't understand it'. I don't necessarily have to understand all the technical details, depending on how the validation is offered. If a recognized certifying body says it tested the tool and it does what it says on the can, that is one way. Independent publications where the tool is calibrated against a known gold standard are another. If you know, for example, what the sensitivity of the tool is, quote it; most everyone understands this.

The point is, if you are offering a tool to persuade someone of something, one likes to know if it does what it purports to do. I am not saying your app does not do that. @bluesman asked a question about validation which I think deserves a reply.


Again, easy to do for anyone interested, and I already replied.
 

It’s up to you or bluesman to understand why my reply was a proper way to validate the tool. You can find an explanation of what IMD is and what frequencies it produces in any basic text on distortion. The chart I linked was a proper example of IMD frequencies, regardless of who posted it, showing the math details. It’s curious that you’d attack the person who shared it without first asking whether the information he shared was accurate.
 

I’m not here to teach signal distortion analysis; it’s been done a million times, so look it up if interested. Many tutorials, textbooks, papers, and blogs have been published on the subject.
 

DISTORT is easy to validate, like any other device that produces distortion: feed it a desired test signal, measure the output, and confirm through analysis that the expected distortion was added. No magic, and you can choose your own tools to do the analysis, since you don’t trust mine.
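As a concrete example of that kind of check (a sketch of the workflow, not an official DISTORT test), take a non-linearity whose distortion can be predicted on paper, apply it to a tone, and compare the measured harmonic level against the prediction; the tone, level and coefficient below are arbitrary choices:

```python
# Minimal validation sketch: apply a known non-linearity f(x) = x + a*x^2 to a
# sine and check that the measured 2nd harmonic matches the level predicted on
# paper, i.e. a*A/2 relative to the fundamental.
import numpy as np

fs, N = 96000, 1 << 16
a, A, f0 = 0.02, 0.5, 1000.0
t = np.arange(N) / fs
x = A * np.sin(2 * np.pi * f0 * t)
y = x + a * x**2                      # the known non-linearity

spec = np.abs(np.fft.rfft(y * np.blackman(N)))
freqs = np.fft.rfftfreq(N, 1 / fs)

def amp(f):                           # peak magnitude near frequency f
    k = np.argmin(np.abs(freqs - f))
    return spec[k - 3:k + 4].max()

measured  = 20 * np.log10(amp(2 * f0) / amp(f0))
predicted = 20 * np.log10(a * A / 2)  # from x^2 = A^2/2 - (A^2/2) cos(2wt)
print(f"2nd harmonic: measured {measured:.2f} dB, predicted {predicted:.2f} dB")
```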

1 minute ago, bluesman said:

I’m probably more knowledgeable about this than you think I am. And I think you may have bolded the wrong content, so I fixed it for you. The last phrase is the exact reason I asked you to tell us what you’re doing to the signal, along with how and where in the path between input and output. The nature of the signal manipulation is critical to knowing exactly what effect it’s likely to have.
 

I also note with interest that I got the same IM products just by mixing two tones that you got by “treating” two tones with your app. So how would we know that your app did anything, especially if you won’t tell us what it does?

 

Absolutely. IM products can and do directly affect radio reception, e.g. by creating sidebands that suck bandwidth and even affect the SQ of reception. And it’s not only reasonable but highly likely that they do the same thing in the audio band. It’s the source of that IM that you continue to dispute with me.

 

I did not realize that you were using your own “black box” app to support your assertions until last night.  As we’re clearly not communicating well about this, and you won’t even consider that some of your arguments might be less than 100% correct, I’ve said my piece.

 

If I’m wrong, so be it.  You simply haven’t offered a single piece of evidence or data that I am.

 

Between the input and the output is f(x), where f is a non-linear function. Is that clear enough? That's the definition of a non-linearity: x is the signal in, f(x) is the signal out.

 

There are no additional tones being individually added, and no frequency-domain manipulation in DISTORT when applying a non-linearity. It works exactly like any non-linear analog device would, by changing the transfer function, except that it's done with calculations rather than hardware. f(x) is then measured using spectrum/FFT and other analysis tools. You can output the f(x) result to a digital file or to a sound device driver and then analyze it using whatever other tools you want. REW is the one I often use. The APx500 software that Chris is looking into would also work. Or even my own DeltaWave. As I said, no magic. Simple non-linear distortion causing HD and IMD.
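Here is what that workflow looks like in a minimal Python sketch (again, not DISTORT itself): generate a two-tone test signal, apply an example non-linear f(x), and write the result to a WAV file for analysis in REW or any other analyzer; the file name, tones, levels and sample rate are arbitrary choices:

```python
# Minimal sketch of the "measure it with your own tools" workflow (not DISTORT
# itself): generate a two-tone test signal, apply a non-linear f(x), and write
# the result to a WAV file for analysis in an external analyzer.
import numpy as np
from scipy.io import wavfile

fs = 96000
t = np.arange(10 * fs) / fs                                 # 10 seconds
x = 0.4 * np.sin(2 * np.pi * 5000 * t) + 0.4 * np.sin(2 * np.pi * 5500 * t)

def f(x):                                                   # example non-linear transfer function
    return np.tanh(1.5 * x) / np.tanh(1.5)                  # normalized soft clipper

y = f(x)
wavfile.write("two_tone_through_fx.wav", fs, (y * 32767).astype(np.int16))
print("wrote two_tone_through_fx.wav; open it in your analyzer of choice")
```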

9 minutes ago, bluesman said:

If, as you seem to be suggesting with your “demonstration” above, the same nonlinearity produces both harmonic and intermodulation products, how in the world could you use it to add only one or the other (as you suggest can be done with your app)?

 

Sorry, but if you keep ignoring what I write, then this whole conversation is pointless. HD is generated with a single sine-wave test signal, the x in f(x); IMD with multiple tones. DISTORT lets you pick your test signal, or even feed your own, so you don't have to depend on it to generate one. You can read the test signal setting in the screen captures: all the settings, including the test signal used, are listed in the panel on the left.

 

Here's that same non-linearity applied to a 32-tone signal:

[Image: DISTORT spectrum of a 32-tone signal through the same non-linearity]

 

 

 

1 minute ago, bluesman said:

It’s not nearly enough.  Y=X squared is a nonlinear function. Maybe that’s what your app does.

 

It really doesn't matter. Non-linearity of all kinds causes the same effect. My app allows you to create an infinite number of non-linearities and, in fact, even simulate those of real devices. And you can verify that it's doing it by applying a test signal to the input and measuring the output.
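For the record, bluesman's own y = x² example already shows both effects; this is a standard derivation using the product-to-sum identities, reproduced here for convenience:

```latex
% One tone, x = A\sin(\omega_1 t), through y = x^2: only DC and the 2nd harmonic.
y = A^2 \sin^2(\omega_1 t) = \frac{A^2}{2}\left[1 - \cos(2\omega_1 t)\right]

% Two tones, x = A\sin(\omega_1 t) + B\sin(\omega_2 t), through the same y = x^2:
y = \frac{A^2}{2}\left[1 - \cos(2\omega_1 t)\right]
  + \frac{B^2}{2}\left[1 - \cos(2\omega_2 t)\right]
  + AB\left[\cos\bigl((\omega_1 - \omega_2)t\bigr) - \cos\bigl((\omega_1 + \omega_2)t\bigr)\right]
```

The same transfer function produces only harmonics for the single tone, but adds the sum and difference frequencies, the intermodulation products, as soon as a second tone is present.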

2 minutes ago, bluesman said:

That’s just babble.  After this entire discussion, you still think that I don’t know that harmonics are generated by a single tone and intermodulation comes from interaction between two or more?

 

One other question for you: if the same nonlinearity generates both IMD and HD, why don’t we see the harmonics of your two test tones in your IMD display?  We seem to be seeing only IM products?

 

OMG! Because IMD IS NOT HD, BUT IT IS CAUSED BY THE SAME NON-LINEARITY; you even repeated this! Two or more tones intermodulate due to the non-linearity, and this doesn't look the same in the frequency domain as HD because it results in amplitude modulation. BUT IT IS THE SAME CAUSE. The result is exactly like what I posted with the two tones.

 

If you're here to just argue with me, then sorry, I'm out.

6 minutes ago, bluesman said:

If your app is to be of value as other than a carnival toy, it can’t just randomly manipulate the input.  It has to be applying the same nonlinearities that cause distortion in our equipment.

 

Totally disagree. Since it allows you to construct any non-linearity, it is a tool to simulate whatever you like. 

 

Look, you've obviously not looked at it, used it, or tried to understand what it does, and yet you keep arguing against it. You're arguing in bad faith and there's no reason for it.

 

1 minute ago, bluesman said:

What I said is that you are using the same unknown nonlinear manipulation to artificially generate both.  You alternate between saying there’s only one (“the same non-linearity”) and saying that it doesn’t matter which of many it is.
 

“It” are both caused by many different nonlinearities in audio equipment, although you strongly believe that it doesn’t matter at all which it is, where in the signal chain it is, or what it does to affect the signal.  If this were true, wouldn’t all audio equipment sound very similar?

 

Whaat?? Where did I say that? I said IMD and HD are generated by the same non-linearity, and ANY NON-LINEARITY will result in HD and IMD. So it doesn't matter for the discussion of HD vs IMD which actual transfer function you use, as long as it's non-linear. Did I say anything about sounding the same? Different transfer functions can sound completely different. That's what DISTORT lets you play with: design or simulate any non-linearity and see what it does to sound.

 

But, Wow. Just wow. I'll stop here. You either don't get it at all or you intentionally ignore and misinterpret what I say. Either way, I'm done.

