
Misleading Measurements



6 hours ago, The Computer Audiophile said:

FWIW, let’s not let the tail wag the dog. 

No matter what, some people hearing out to 23kHz does argue, somewhat, for a 48kHz sample rate.   Most 'music' doesn't have fundamentals above a few kHz, and by the time the harmonics on normal material have trailed off -- the actual 'audio' above 12kHz is usually much attenuated by the time it reaches the 'brain'.

 

Detectability is very different from usefulness.  Audio-based security systems and traffic light controllers in the olden days used to drive me totally nuts -- now I hear the buzzing all of the time :-).  On the other hand, I never heard music in the frequency range of those irritating super-high-frequency audio sensors.

 

If someone can actually hear the 23kHz harmonics of real music audio, then I wouldn't want to be anywhere near the source.  I treasure my bones and don't want them to be pulverized.  You know, they become more and more brittle as we get older :-).

 

John

 

4 hours ago, Audiophile Neuroscience said:

I could weigh in on audibility thresholds, differences for absolute vs. differential, and interesting results for non-periodic steady-state noise, but as I see it, what the thresholds actually are is not the point.

 

The point is: *if* someone declares a threshold for inaudibility (whether they be right or wrong), then uses it to further their opinion on one topic, but elsewhere appears to use the very same data in a contradictory manner, then a double standard exists, or at least is implied.

I am all for a frequency response of a system that is as wide as reasonably possible.  44.1k is indeed cutting it close, but is okay for REAL music and not special effects.  I'd prefer the wiggle room of 48kHz, and that is what I use for down-sampled material, even if starting with 88.2kHz.   I watch my spectrograms very carefully -- MOST of the time, older pop music appears to contain modulated noise above 20kHz, not so much actual information.  On the other hand, there ARE good recordings out there with short-term transients above 20kHz.  If there were significant, provable information above 20kHz on the material that I played with, I'd use higher sample rates.   I vote for 66.15 and 72 as a compromise (that is what the DHNRDS uses for 44.1k and 48k material, BTW -- then does a further upconversion to 88.2k/96k.)    Nowadays, space is cheap, so it is kind of silly to consider 66.15k/72k except for very special purposes.
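As a rough illustration of the kind of check I do with spectrograms, here is a minimal Python sketch that estimates how much of a signal's energy actually sits above 20kHz. The signal here is synthetic (a 1kHz tone plus low-level hiss -- assumed, not a real recording); with real material you would load decoded samples instead:

```python
# Sketch: estimate what fraction of a track's spectral energy sits above
# 20 kHz, to judge whether a higher delivery rate buys anything.
# The 96 kHz test signal is synthetic: a 1 kHz tone plus low-level hiss.
import numpy as np

def energy_fraction_above(signal, fs, cutoff_hz=20_000.0):
    """Fraction of total spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[freqs > cutoff_hz].sum() / spectrum.sum()

fs = 96_000
t = np.arange(fs) / fs                          # one second of signal
tone = np.sin(2 * np.pi * 1_000 * t)            # audible content
hiss = 1e-3 * np.random.default_rng(0).standard_normal(len(t))
frac = energy_fraction_above(tone + hiss, fs)
print(f"energy above 20 kHz: {frac:.2e}")
```

If that fraction stays down in the noise, as it usually does for older pop material, a 48kHz delivery rate loses essentially nothing.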

 

John

 

10 minutes ago, John Dyson said:

I am all for a frequency response of a system that is as wide as reasonably possible.  44.1k is indeed cutting it close, but is okay for REAL music and not special effects.  I'd prefer the wiggle room of 48kHz, and that is what I use for down-sampled material, even if starting with 88.2kHz.   I watch my spectrograms very carefully -- MOST of the time, older pop music appears to contain modulated noise above 20kHz, not so much actual information.  On the other hand, there ARE good recordings out there with short-term transients above 20kHz.  If there were significant, provable information above 20kHz on the material that I played with, I'd use higher sample rates.   I vote for 66.15 and 72 as a compromise (that is what the DHNRDS uses for 44.1k and 48k material, BTW -- then does a further upconversion to 88.2k/96k.)    Nowadays, space is cheap, so it is kind of silly to consider 66.15k/72k except for very special purposes.

 

John

 

I want to explain the modulated noise comment.  It comes from the NR systems that they used to use: when the HF bands change gain, an envelope is created around the hiss left over from the tape recorder.  It is as simple as that.  When there are transients, they are at such a low level that they cannot be heard -- except by VERY SPECIAL PEOPLE.  This is especially true when decoding with DolbyA units -- the long propagation delay in the decoding process misses transients.   The mics that used to be used (esp. the likes of the U47) don't give you much above 20kHz, but they do peak a little between 6k-9k.  Ribbons were worse.   I had some ancient Altec condensers that did pretty well to 20kHz, but they were omnis and very simple designs.   Earthworks mics didn't exist a long time ago, and DPA mics weren't all that common.   Normal condensers didn't appear to be designed for maximum response.

 

There are now Earthworks mics, wider-band RF condensers from Sennheiser, and always the DPA specialty mics -- but they are not likely used on common recordings.  The Sennheiser RF condensers are interesting though -- wideband AND low noise.  Earthworks, not so much -- just wideband.

 

John

 

6 hours ago, sandyk said:

 

 FWIW, Barry Diament's wife was able to hear 23kHz, which is above CD's limitations.

But, when the levels from actual music material are below the hearing threshold anyway, who cares?  Super-intense 23kHz isn't coming from any normal music, unless skin is being heated, nonlinear effects are happening in the hearing system, or bones are being pulverized.   I used to hear security systems all of the time, probably in the low-20kHz range -- but music didn't happen up there (I could tell the difference, and always used high-quality (but beamy) super-tweeters.)

 

What you usually see on older recordings done on nice, wideband tape is noise modulation from the NR system...  That is it -- and, frankly, I don't care about noise modulation.  I worry more about the weakened/missed AUDIBLE & INAUDIBLE transients from DolbyA/SR/DBX material.   Yes -- there is a VERY LONG delay through the decoding loop on all of those systems (not so much C4), where transients are weakened/missed, and where most of the 20+kHz energy resides.   This is one of the major reasons why the sound of cymbals is so attenuated on even properly mastered material.

 

John

 

29 minutes ago, pkane2001 said:

 

I didn't score well on the HD-Audio Challenge by Mark Waldrep, as an example. But I know a few others who scored perfectly, so there's something related to hi-res encoding or its playback that can possibly make these audible. I doubt that it has anything to do with the frequency response caused by the increased sampling rate between 44.1kHz and 96kHz. But then, the question is what is it? Is it the filter? The resampling in the DAC or the reconstruction filter? IMD with higher-frequency signals? Or are some ears just more sensitive to it than mine (I've no doubt that's true)? The reports of someone having useful hearing at 23kHz as an adult are possible, but very unlikely.

 

Here is a study result based on 384 test subjects of various ages. You'll note that the level of sound at 20kHz needs to be over 90dB to be detectable for age group 22-35 (and no, I'm not even remotely close to that age group!) The error bands go down to about 85dB level, so not much variation. I don't think I'd ever want to listen to a recording that had 90dB content above 20kHz:

[chart: sound detection threshold (dB) vs. frequency, by age group]

 

One thing that I'll do when I get a chance is to actually create a table of delay vs. frequency for various minimum phase filters -- I am truly not sure of the effect, but the difference between DIFFERENT minimum phase filters can create audible differences (theoretically.)
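As a sketch of what such a table could look like -- the filter choices here are my own assumption (Butterworth IIR low-pass designs, which are minimum phase), not anyone's actual playback filter:

```python
# Sketch: tabulate group delay vs. frequency for two minimum-phase
# low-pass filters (Butterworth IIR designs are minimum phase), to see
# how the delay varies across the audible band.
import numpy as np
from scipy.signal import butter, group_delay

fs = 44_100
freqs = [100, 1_000, 5_000, 10_000, 15_000, 20_000]   # Hz

for order in (4, 8):
    b, a = butter(order, 20_000, btype="low", fs=fs)
    w, gd = group_delay((b, a), w=np.array(freqs, dtype=float), fs=fs)
    delays_ms = 1_000 * gd / fs   # group delay in samples -> milliseconds
    print(f"order {order}: " +
          ", ".join(f"{f} Hz: {d:.3f} ms" for f, d in zip(freqs, delays_ms)))
```

Even though both filters are minimum phase, their delay-vs-frequency curves differ -- exactly the kind of difference that could bias a listening comparison.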

 

My guess is that the differences people hear, and whether or not they hear differences, is based upon the time resolution of hearing at audible frequencies, not so much the higher sample rate/Nyquist frequency per se.  Also, don't mistake my comment about sample rate -- it has little to do with the time resolution, as long as there is enough bit resolution.  (It is all about maximum information content.)

 

There is NOTHING wrong with using minimum phase filters if that is what 'sounds good', but for an experiment, I believe that filter delay vs. frequency will easily bias the results.   I would ONLY use carefully crafted software for testing purposes as well -- not something 'off the shelf'.  The reason isn't that other DSP software developers aren't competent, but the design might not have considered things that are important in the experiment.

 

I remember a kid when I was in high school who was passing hearing tests at impossibly low levels because he was hearing the hiss in the electronics as it gated on and off.  These measurements must be done with a scientific mentality, not just scientific/good experimental discipline.   As I tell everyone on some of these things: 'There be dragons'.

 

John

 

8 hours ago, bluesman said:

But it’s an excellent way for many to get critical education and experience.  Those who just want to listen need easy, efficient, transparent aids to optimize their systems.  But to those for whom knowledge about what’s in the boxes and how it all works enhances their audiophilic enjoyment, tweaking can be a valuable and pleasurable learning experience.

 

If you’re learning from the effort (or just enjoying it), it’s productive time well spent.

A few tweaks are okay, but tweak-tweak-tweak isn't instructive.  Studying a bit of technical background, and learning why the attempted 'design' requires so many tweaks, is MUCH MUCH more important.

 

Tweaking doesn't create learning -- ask Mr Edison about that.  I doubt that 100 years of Mr Edison tweaking would have created Tesla's new ideas.   Tweaking is an intuitive physical activity -- it is only an adjunct to the more important learning.   Or, most ideally, a finishing touch.

 

John

 

13 minutes ago, fas42 said:

 

How I detect noise is by listening ... it's trivially easy for me to make a tiny adjustment to the electrical environment of the home in which one is listening, and hear the variation in SQ. Whether one wishes to call this noise is up to the individual, but what it really says is that the playback chain is not sufficiently robust to reject this input - why you should want to measure such I don't quite see; but if you want to make it really obvious, in some numbers, just hook up, say, a working arc welder into a nearby socket - that will give you plenty of juicy data to work with, 😁.

Since my hearing has never worked beyond perhaps 22kHz, and currently isn't even that good, I find that using only my hearing to detect noise problems (such problems not always being in the audible frequency range) isn't adequate in a lot of cases.   Being able to use an objective measurement of some kind -- often the direct measurement further processed so that the details are clearer -- seems to be more effective when available.  Properly presented details about the 'noise' can help pinpoint even unforeseen problems.   Sure, there are cases where a direct measurement with technology might be too difficult, but having objective/technologically aided measurements available as a primary means can avoid missing lots of potentially non-audible impairments.   Out-of-band impairments can indirectly cause in-band audible problems.

 

I am definitely not against listening for problems, but depending primarily on listening might miss some unforeseen issues.

 

John

 

4 hours ago, Audiophile Neuroscience said:

 

I think you may be being a bit bold about this John ! 🤣🙄 ^^^

The reason why I do that is that I tend to blather, so I try to bring out the more important notes.  Perhaps not needed in short messages.  I worry about being too wordy, but my language skills are VERY VERY VERY poor...   I am just desperately trying to communicate information, not so much to be pretty.   I have a lot of respect for people's time.  (The only thing that EVER really held me back in my career was my ability to communicate, not so much technical and practical competency -- which was usually the strongest among my peers of ALL education levels.  Perhaps 1/3 of my co-workers were PhDs or very advanced experts, and I was very often the 'answer person' on advanced issues.  But, until perhaps 15 years ago, I could JUST BARELY write a coherent sentence, and I am still stuck at that level.)

 

John

 

10 hours ago, sandyk said:

 During the development  of my DIY Class A Preamp I used a CRO at maximum sensitivity with an inline very low noise battery powered 10 x Preamp to highlight potential noise problems, which in this case appeared to be related mainly to nearby SMPS powered devices. Attending to these very low residual noise levels did result in an apparent Subjective improvement.

I don't disagree with your abilities/skill, but here is my position:

 

The problem is that 'scopes used in the traditional ways are not very selective in providing noise information.   They are okay for general information, but it takes spectral and other presentations to study what is going on.  I am NOT claiming that *aided* measurements are the only way to find problems, but nowadays we have so many easy-to-use information-sorting aids that there is no reason not to use them.  Here is one of my long, blathering anecdotes (off topic, but an exemplar):

 

Do you know how to make an amazingly good AM/MW/SW receiver, very simply, if you know what you are doing?   A very simple, stable oscillator at approx 4X the receiving frequency, a fancy analog switch circuit with a few specially chosen *almost commodity* analog switches, a very simple, carefully laid out circuit, and a good, wideband (at least 96k, but 192k is better) 24-bit stereo audio interface, connected to your computer.  Maybe a slight amount of input selectivity is a little helpful, plus minimal input gain -- any analog RF amp is a tricky design with low enough distortion not to make the receiver worse than the raw RF switching converter.  Perhaps use the RF amp as a buffer against radiation, and to impedance-match from a short wire antenna.   (The switching device likes to work in the 50-150 ohm range.)  (This is a direct-conversion SW receiver, using an audio interface, with full demod capabilities: AM/FM/SSB/digital/etc.)   The only real limitation is the baseband bandwidth of the composite signal, limited to 1/2 the sample rate.
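The core math of that direct-conversion scheme can be sketched in a few lines of Python -- idealized, with the local oscillator exactly on frequency and the switching mixer replaced by clean multiplies, and all numbers made up for illustration:

```python
# Sketch: the core math of a direct-conversion AM receiver. Idealized:
# the local oscillator is exactly on frequency and the switching mixer
# is replaced by clean multiplies; all parameter values are assumed.
import numpy as np

fs = 1_000_000                  # pretend ADC rate, Hz
f_rf = 100_000                  # carrier frequency
f_mod = 1_000                   # audio tone AM-modulated onto the carrier
t = np.arange(20_000) / fs

rf = (1 + 0.5 * np.sin(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_rf * t)

# Quadrature mixing: multiply by the two LO phases (a 4x-clock switching
# mixer approximates exactly this).
i = rf * np.cos(2 * np.pi * f_rf * t)
q = rf * -np.sin(2 * np.pi * f_rf * t)

# Crude low-pass (moving average) to remove the 2*f_rf mixing products.
kernel = np.ones(50) / 50
i_bb = np.convolve(i, kernel, mode="same")
q_bb = np.convolve(q, kernel, mode="same")

envelope = 2 * np.abs(i_bb + 1j * q_bb)   # recovered AM envelope
```

Away from the edges, `envelope` tracks the original 1kHz modulation; a real front end just approximates the cos/sin multiplies with switches clocked at 4x the receive frequency, and the audio interface does all the heavy lifting.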

 

This VERY VERY simple design will very often out-perform a very highly engineered, very fancy analog SW receiver.  With a little more work, it can easily blow one away.   There is a whole series of very new (pretty much in the last 20 years), innovative designs, both for the switching RF converters and even for traditional RF mixers.  (A guy named Trask did some good papers on his super-innovative improvements to RF mixer designs at lower frequencies, and I forget the name of the person who designed this crazy-good, but simple, SW-level receiver concept.)   I think that Trask did both a brilliant switching mixer and a lossless-feedback derivative of the traditional MC1496-type scheme.  Both methods used real scientific and innovative thinking.

 

With the anecdote above -- which is wisest?  To do a lot of hard-core, grating engineering/design to develop a retrogressive SW receiver, whose biggest advantage is perhaps that it 'looks and feels' like a traditional SW receiver?  Or a relatively 'smart' design using an ingenious RF receiver that takes advantage of current technology and doesn't have any of the IMD effects of a superhet design?   The question is rhetorical, because there is no real answer -- but the direct-conversion approach tends to be very common nowadays, eliminating huge chains of complex, hard-earned circuitry.

 

I am not claiming that the new way is the only way to do it, but USING new technology opens up opportunities and gives more information for understanding what is going on.   Clear away the weeds, rather than deal with all of the weeds left over from the past.   Sometimes the old way is okay, but an 'oscilloscope' -- still a super useful tool -- just measuring gross noise levels (without some kind of information processing, like spectrum analysis) is doing it the hard way.   There be dragons, otherwise.

 

Using my current project as an example...  I am pretty sure that if R Dolby had had the technology commonly available today when designing the DolbyA, he would have made significant improvements over the eventual design...  He was a genius, and did a wonderful job -- much better than I would have -- but he would have also caught some audibly noticeable flaws in the design (noticeable because of the demands of current recordings).   What he DID do was amazingly good -- for an easily understandable example, I made the mistake of criticizing his FET/transistor combination gain blocks -- until I analyzed them.  They work very well -- but he was a special genius; some of his design parameters are very counter-intuitive relative to what a hobby person would probably do today.   I was super surprised by the performance of his little gain blocks.

 

I doubt that anyone corresponding here today is at the innovative level of Mr Dolby, but the available technology aids tend to level the playing field for us more-average innovative pseudo-geniuses :-).

 

John

 

2 minutes ago, sandyk said:

John

 You are overstating the problems with using established design techniques, such as analogue preamplifiers and power amplifiers based on the research of highly qualified designers such as Nelson Pass, Bob Cordell, Douglas Self, etc.  Neither are most E.E.s across most areas of electronics like you appear to be suggesting.  Most E.E.s these days appear to specialise in a selected area.  Very few E.E.s these days are even capable of marrying together both the digital and analogue areas of a DAC in order to create an exceptionally performing DAC, rather than one having perhaps good measurements that sounds O.K.  Very few complete consumer-type products, other than from a small company, are likely to be the result of just one person.  I doubt that Mansr could achieve that either, despite all the claims he used to make, as well as the vast majority of the members of that other forum that several members here keep quoting measurements from as being definitive.

 

 I do however have a couple of modest E.E. friends who do know their own limitations outside the areas they mainly specialise in, and would not have the know-how to make an amazingly good AM/MW/SW receiver without a great deal of research and assistance.

 

 BTW, you are putting Ray Dolby on a pedestal that very few other designers can aspire to.

Alex

I think that my diversions-as-examples might diffuse the importance of what I am saying.   Let me try to translate: we have so many great tools nowadays, IF we use them, and don't fall into biases like 'my hearing is so good that I don't need to measure' or 'the old tools are good enough', maybe we can do things better, quicker, more optimally, etc.  Maybe even have more fun doing other things as well -- more time to do new things because of less time wasted.

 

Maybe each of these wonderful claims of sufficiency is true, but sometimes we over-estimate our own capabilities.  Sometimes we even totally miss the mark.

 

More succinctly:   I don't believe in an objective-vs-subjective view of things.  However, relying too much on the subjective, and not taking FULL advantage of the available objective measurement tools, is a bit anachronistic and often a waste of time.  This is a corollary of my anti-'tweak-tweak-tweak' stance.   I don't mean to say NEVER 'tweak tweak tweak', but instead: why not take advantage of the WONDERFUL tools that we already have, and aggressively avoid 'tweak tweak tweak' instead?

 

It isn't either-or -- the reason for the Dolby diversion is simply that he did SUPER well considering the limited tools available.  He could have done better with the WONDERFUL stuff that we have today.*   I doubt that, given the time frame of his early work, he would have had the chance to do a computer simulation for a first-cut optimization.  I could do a wonderful low-noise preamp without computer optimization and without careful spectral distortion analysis of the results -- but why not take *full* advantage of current tools?   It isn't difficult to do so.

 

I try to be self-critical instead of being totally self-sufficient, 'knowing' that my hearing is 'good enough'.   I know that EVERYONE HERE has human hearing and human intelligence -- and I have known some of the most brilliant people there are -- but sometimes being biased towards one technique or the other might make someone a little less productive and innovative than they could have been.  Some of the brightest people that I have known (and certainly one degree of separation from some of the VERY brightest) have also been stunted by needlessly set-in-stone opinions.

 

  * Today, R Dolby wouldn't have even needed to do his 'DolbyA NR', but just using him and his situation as an example.

 

John

 

1 hour ago, The Computer Audiophile said:

It means that I'm OK with whatever the objective crowd presents. It's more about the people conducting the measurements. If they'll fudge text that says pass or fail, they'll fudge measurements. I trust they won't do either, so I'm OK with a pass / fail. 

I think that these matters of specsmanship would be a little more honest if there were open explanations of when/if some initially extreme value for a measurement might be helpful.

Just saying 'my circuit has 0.0001% distortion and is so much better than Sam's circuit with 0.0002% distortion' is a rather useless argument or comparison.  Those are blind comparisons with no context.  I have this wonderful little paper that someone wrote on op-amps, creating his own specs with LOTS more detail than normal manufacturers' specs.   That 'wonderful' paper helps to show the behavior with much more circuit context involved.  It provides much more helpful behavior information for more realistic circuit configurations (of course, not perfect.)   There are so many choices of op-amps that good, understandable objective measurements are very helpful to get started.

 

I can agree that a blind spec without explanation of context is just a little better than irrelevant.  This reminds me of the old 'lines of resolution' spec for SVHS and VHS decks.  The number was meaningless, but we consumers always know that 'bigger is better', right?   Heh -- the way those 'lines of resolution' were measured was almost meaningless WRT actual quality of video reproduction.  They were especially meaningless when comparing consumer vs. pro video equipment.  (Nowadays, such matters are anachronistic -- we are so spoilt with almost flat and more linear video response in comparison.)

 

This would be similar to the 0.001% vs. 0.00001% distortion...   For example (another one of my diversions): how many such 'wonderful' preamps' characteristics are measured with a source that truly emulates (for example) an MM cartridge?   Such transducers are well known to have very wide-ranging characteristics and high impedance in certain frequency bands.   Such a high transducer impedance can certainly cause negative effects WRT modulation of input impedance vs. signal waveform/frequency/level.   The noise matter bothers me less, because it is almost impossible for a competent designer to create much more preamp noise than an MM cartridge at frequencies where the ear is sensitive.   However, I seldom see a real distortion measurement where the source is at least a model of a cartridge (or, perhaps, a low-impedance signal feeding through a cartridge of choice, so that an actual distortion measurement can be done.)
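To see why the source matters, here is a tiny sketch of the cartridge's source impedance, modeled as coil resistance in series with coil inductance. The values are typical textbook assumptions, not measurements of any particular cartridge:

```python
# Sketch: source impedance of a moving-magnet cartridge modeled as a
# coil resistance in series with a coil inductance. The values below
# are assumed ballpark figures, not from any specific cartridge.
import numpy as np

R_COIL = 1_000.0      # ohms (assumed)
L_COIL = 0.5          # henries (assumed)

def source_impedance(f_hz):
    """|Z| of the series R-L cartridge model at frequency f_hz."""
    return abs(R_COIL + 1j * 2 * np.pi * f_hz * L_COIL)

for f in (1_000, 10_000, 20_000):
    print(f"{f:>6} Hz: |Z| ~ {source_impedance(f):,.0f} ohms")
```

The source impedance rises steeply with frequency, so a distortion measurement taken from a fixed low-impedance generator can badly understate what the preamp sees in real use.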

 

Big, fat low-noise JFETs or medium-geometry BJTs can have noticeably changing input capacitance in a normal amplifier circuit.  This changing capacitance acts superficially like a changing resistance, and in certain cases can produce significant distortion.
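A tiny numeric sketch of that capacitance mechanism, using the standard junction-capacitance model -- all parameter values here are assumed for illustration, not taken from any datasheet:

```python
# Sketch: junction capacitance vs. bias for a large-geometry device,
# using the standard abrupt-junction model. All parameter values are
# assumed for illustration, not taken from a datasheet.
C0 = 20e-12    # zero-bias junction capacitance, farads (assumed)
PHI = 0.7      # built-in potential, volts (assumed)
M = 0.5        # abrupt-junction grading exponent

def c_junction(v_reverse):
    """Junction capacitance at a given reverse-bias voltage."""
    return C0 / (1 + v_reverse / PHI) ** M

# As the signal swings the bias point, the input capacitance moves too:
for v in (1.0, 1.5, 2.0):
    print(f"{v:.1f} V: {c_junction(v) * 1e12:.2f} pF")
```

Since the input pole moves with the instantaneous signal voltage, a measurement against an ideal low-impedance source never exercises this mechanism at all.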

 

It WOULD be nice if the measurements actually measured in-circuit/in-situ behavior, and comparisons were made with the priorities openly explained.   At least, when someone says that (for example) being 0.25dB down at 20kHz and 1dB down at 30kHz has the effect of significantly changing the sound of cymbals crashing, the judgement can then be prioritized by the person reading the spec.   At least I know how I would prioritize that interpretation of that raw frequency response spec, esp. at 30kHz.  (BTW, the change in cymbal crashes might be caused more by dynamic input impedance effects, say, on MM preamps.)

 

The general categories of objective & subjective can matter -- the problem with relevance has to do with the priority/usefulness of the actual spec, and the measurement/usability situation for that spec.

 

John

 

1 hour ago, pkane2001 said:

 

That's why there are more complete measurements being published. Nobody (except for some manufacturers) publishes just the THD % -- and in those cases, I fully agree with you -- that's mostly meaningless, even when stated as 1kHz @ 0dBFS.

 

Here's an example of the types of evaluation I find useful, having performed these myself. I look for these when published by others, as they do provide a lot more detail about a device than just a single number.

 

Distortion vs. level with levels of individual harmonics, noise, etc:


 

Distortion vs Frequency:


 

And yes, Chris, these are mostly below threshold of audibility, although the noise floor is higher than I'd like to see.

 

Those are the good, minimum specs needed for reasonable judgements.  Hobby claims need that kind of characterization also (perhaps to a bit less detail, just for practicality.)

 

I am also skeptical of the effects of the input transducers.   I haven't seen full specs recently -- but for those still into vinyl, the impedance thing, esp. for MM, is important.   Since MM/MC preamps are common in the hobby realm, claims about distortion really need to include the impedance of the transducer.

 

But yep -- this is getting into the realm of relatively meaningful.

 

It WOULD be nice if a transparent explanation were given for non-techie audiophiles also -- what does that 0.001% mean if it increases to 0.01% at 30kHz?    You know what I mean...   IMD is also important, and THD vs. IMD have different importance at different frequencies.   Doing the specs in full detail can be onerous.  But it isn't about the 'specs' per se...  Like, how bad is the IMD?  Is IMD even an issue in the design?  (It can be an issue, but hopefully a good design won't make it worse than it should be.)


I could keep rambling on about this without communicating more of what I intended -- but as we all know (both people who prefer the objective and those who prefer the subjective), raw numbers are meaningless on their own; the *effect* of those 'numbers' (however diminishingly small or big) on the sound is the important thing.

 

I'll sign off on this subject because my point is made -- I only feel uncomfortable when too much or too little focus is put on the objective specs, and also uncomfortable if the subjective effects aren't verified/tested.   When I suggest 'testing', I mean using experiments with controls.  These true subjective tests, with some reasonable scientific/statistical/blind method, are very inconvenient, but can be amazingly beneficial to both the consumer and the engineer doing a design.  For the design, it is more about verification, but for the consumer it can be about validation or choice.  (Probably other reasons, too.)
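For the statistics side, scoring such a controlled (e.g. ABX-style) test is just a binomial tail -- a minimal sketch:

```python
# Sketch: scoring a blind ABX-style listening test. Under the null
# hypothesis (pure guessing) each trial is a fair coin flip; the p-value
# is the chance of doing at least this well by luck alone.
from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` right out of `trials`) under pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

p = abx_p_value(12, 16)          # e.g. 12 of 16 trials correct
print(f"p-value: {p:.4f}")       # prints p-value: 0.0384
```

So 12 of 16 correct is already below the usual 0.05 bar, while anything around 10 of 16 is entirely consistent with guessing -- which is why the trial count and controls matter so much.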

 

I guess what I am trying to say is: don't discount ANY information source, and use whatever tools you can make available.   Doing things 'right' can be incredibly inconvenient; happily, much of the time we are lucky -- and HOPEFULLY whatever mistakes we make in design/testing, and as customers, end up being inconsequential.

(To be polite, unless someone directly replies and effectively asks for response, I'll just lurk again :-)).

 

John

 

13 minutes ago, Miska said:

 

Overall phase behavior, and also to some extent modulator behavior. Not just inter-channel differences, but differences that apply to both channels. And for inter-channel also things over channel cross-talk.

 

Wow, that is good.   Definitely, modulation distortion on a channel can distort the temporal relationships as a *secondary* effect.  Modulation distortion (as one gets with fast gain control/AGC/compression/expansion) can 'fuzz' spatial relationships, along with the compression/expansion itself causing a modification of the 'space'.

 

John

 

1 hour ago, bluesman said:

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I’ve played a bit with digital pure tones (which is truly an oxymoron), to see what’s in the spectrum of their IM products.  I neither hear nor see nor measure the sum and difference frequencies like I do with pure analog tone generators and amplification chains.  So either my equipment is faulty, I don’t know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched out digital sine wave on a good high res analog scope, you can see what looks to me to be the effect of sampling as a fine discontinuity that seems to reflect sampling rate.  
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

Real-world IM as picked up by a mic is cool -- it is when IM is a form of signal-modifying distortion that it becomes EVIL.

 

I have a super simple example of IM, but it is hard to quantify the audible damage, because it all depends on the gain control slew, the rate of gain control changes, and the periodic nature.  There is a form of AM that is totally analogous to AM radio modulation -- it is created by dynamic range compression, expansion, limiting, or traditional NR systems.

 

Any time you grab a signal and multiply it by a varying gain, that is EXACTLY the same as AM-modulating a carrier, except the carrier is the recording/music signal.   The goal of 'gain control' is usually to dynamically modify the signal level, and that is the actual goal.  However, when simple jFET, opto, or THATCorp chips do the gain control, they MATHEMATICALLY create sidebands in the signal.  These sidebands spread the signal both frequency-wise and temporally.   There are super special mathematical tricks that can modify the sidebands to inaudibility, and I created such a technique, but the math is beyond BSc-level understanding.
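The sideband effect is easy to demonstrate numerically -- a minimal sketch with made-up numbers (a 1kHz tone whose gain wobbles at 50Hz):

```python
# Sketch: time-varying gain IS amplitude modulation. Wobbling the gain
# of a 1 kHz tone at 50 Hz creates sidebands at 950 and 1050 Hz, exactly
# as in AM radio, with the music standing in for the carrier.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                           # one second
tone = np.sin(2 * np.pi * 1_000 * t)
gain = 1.0 + 0.1 * np.sin(2 * np.pi * 50 * t)    # gain wobbling at 50 Hz
out = gain * tone

spectrum = np.abs(np.fft.rfft(out)) / (fs / 2)   # normalized amplitudes
freqs = np.fft.rfftfreq(fs, 1 / fs)
for f in (950, 1_000, 1_050):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spectrum[idx]:.3f}")
```

The input had energy only at 1kHz; after the gain wobble there are two new components at 950 and 1050Hz, each at half the modulation depth -- energy that was never in the original signal.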

 

There are other strange side effects to the modulation: DURING the gain control slew, it weirdly opens up windows for the audio components to modulate each other, but that is mostly because of the gain control signal not being totally 'pure' WRT the desired gain.

 

There are two major kinds of modulation distortion avoided by the DHNRDS -- one form results from the gain control signal itself 'wobbling' based upon the signal waveform, and the other comes from the gain control being applied to the signal.  These two evil behaviors mix with each other, making the result even worse.

 

Micro-level forms of modulation can also happen to a digital signal: if the final clock rate moves around, that too creates sidebands, and as a side effect of filtering further down the chain it can even amplitude-modulate the signal.
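A wobbling sample clock is equivalent to phase-modulating the signal, and narrowband PM produces sidebands just like AM does.  A minimal sketch (my own illustration, assuming a purely sinusoidal 100 ns timing wobble at 60 Hz, which real jitter never is):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# Sinusoidal clock wobble: +/-100 ns of timing error at a 60 Hz rate
jitter = 100e-9 * np.sin(2 * np.pi * 60 * t)
# Sampling a 10 kHz tone with a jittered clock = phase modulation
out = np.sin(2 * np.pi * 10000 * (t + jitter))

spectrum = np.abs(np.fft.rfft(out)) / len(out)
freqs = np.fft.rfftfreq(len(out), 1 / fs)

# Small sidebands appear at 10 kHz +/- 60 Hz
for f in (9940, 10000, 10060):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: {spectrum[idx]:.2e}")
```

The peak phase deviation here is 2*pi*10000*100e-9, about 0.006 radian, so the sidebands sit roughly 50 dB below the carrier -- tiny, but nonzero, which is the point.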

 

Any REAL source of modulation distortion in a recorded signal should be avoided, unless it is used for artful purposes (e.g. FM synth effects) or some other purposeful distortion.  Artful distortion isn't bad, unless it is bad art :-).

 

John

 

32 minutes ago, pkane2001 said:


Sorry, thought you’re looking at IMD. Digital tones simply add together in the time domain. In the frequency domain, there is no intermodulation, the two tones remain at their separate frequencies.

 

The frequency sums and differences that you are describing are a product of IMD, and exist normally in the analog processing of the signal with nonlinear transfer function.

Instruments themselves can intermodulate because of natural nonlinearities.  I wouldn't be surprised if closely spaced traditional instruments intermodulate as well.  Of course, it requires significant (live) volume.

 

No matter: as long as the natural performance is mic'ed well and the preamp is good, the electronics themselves shouldn't produce much in the way of modulation components (of whatever type.)   The transducer (mic) might intermodulate to some extent -- one reason to use small diaphragms at high levels.   The actual performance (the sources) certainly can intermodulate to one extent or another.  I truly don't know how much -- it is for those who work with live music nowadays to measure the natural modulations (if they are interested.)  I am interested in real-world information on the matter -- cool stuff.

 

An example of something that naturally creates intermod distortion (Doppler/FM distortion) is a single-cone speaker trying to reproduce the entire frequency range...   The long excursions of the lows will certainly Doppler-modulate the highs.   (That is one reason for the early development of coaxial and triaxial speakers.)   Geesh, a poorly constructed speaker box with lots of bass, buzzing and buzzing, is also a form of IMD :-).
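Doppler distortion is easy to simulate: the moving cone advances and delays the arrival of the high tone, which is frequency modulation.  A sketch (my own illustration; the +/-5 mm excursion at 40 Hz and the 5 kHz tone are assumed numbers, not measurements of any real driver):

```python
import numpy as np

fs, c = 48000, 343.0                 # sample rate, speed of sound (m/s)
t = np.arange(fs) / fs
# Cone excursion: +/-5 mm at 40 Hz while also reproducing a 5 kHz tone
excursion = 0.005 * np.sin(2 * np.pi * 40 * t)
# The moving cone delays/advances the high tone: classic Doppler FM
out = np.sin(2 * np.pi * 5000 * (t - excursion / c))

spectrum = np.abs(np.fft.rfft(out)) / len(out)
freqs = np.fft.rfftfreq(len(out), 1 / fs)

# FM sidebands show up at 5 kHz +/- multiples of 40 Hz
for f in (4960, 5000, 5040):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: {spectrum[idx]:.4f}")
```

With these numbers the peak phase deviation is about 0.46 radian, so the first sidebands are only about 13 dB below the carrier -- far from subtle, which is why full-range single cones struggle.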

 

John

 

24 minutes ago, Audiophile Neuroscience said:

 

Fascinating and it brings up subjects of musical intonation and temperament. I would have initially or intuitively thought that capturing sum and different tones generated acoustically was a good thing, indeed making it more natural and more representative of a real- life performance.

 

So I may be misunderstanding your post and also John's @John Dyson explanations that followed your post like "Real world IM as picked up by a mic is cool -- it is when it is a form of signal modifying distortion, that is when it becomes EVIL."...

 

So, is it a good thing or bad?

 

The plot thickens if we also consider illusory phenomenon like the "missing" or phantom fundamental and issues of the same pitch but different timbre https://auditoryneuroscience.com/pitch/missing-fundamentals

 

 

A professional symphony trumpet player recommended this book by Christopher Leuba on intonation for those interested https://www.hornguys.com/products/a-study-of-musical-intonation-by-christopher-leuba-pub-cherry

 

I will quote this trumpet player from another website post as I think it teaches a lot about the topic and may be germane to the question

 

 

 

 

 

I very definitely did not make my point clear when mentioning 'distortion'.  I was intending to say that natural distortion arising before the mic is cool.   Distortion in the electronics after the microphone is uncool.

 

The natural world, instruments, etc., produce intermod and nonlinear distortion from time to time.  We want the mic to capture those NATURAL sounds.  Anything mucked up by ham-handed electronics is generally bad (unless artfully intentional.)

 

Sorry for the confusion.

 

John

 

59 minutes ago, bluesman said:

There are many pieces of audio gear that sound grossly different despite identical distortion measurements.  Could those measurements possibly be misleading? Or maybe there’s a lot more to that “wire with gain” than how much IMD and THD it generates.

 

Have you considered the possibility that your interventions have more effects on the signal than those you’re focused on and measuring?  Is it at all possible that distortion measurements alone could be misleading you?  Might different nonlinearities lead to different effects in addition to the same harmonic distortions?

 

You obviously don’t consider the efforts and results I posted earlier in this thread to be measurement, experimentation, testing or validation.  I’ll just have to pull myself together once I finish grieving over your disapproval.

 

PS: nice catch on your “IMO”. You were on the verge of contradicting yourself.
 

 

Electronics equipment and processors are not designed with musical instruments as the sole test sources.  Most often, the sources will be test recordings of all kinds, including recordings of instruments and the recorded/direct output of signal generators.  @pkane2001's piece of software is effectively the same as the oh-so-typical piece of test equipment.

 

Even though a lot of audiophiles are not full EE/DSP/audio engineers/technologists, don't underestimate their competency and what they know.  It is actually *better* to have the correct already-designed tool than to cobble together a piece of software.

 

Maybe many non-programmers don't really know how tedious it is to write even trivially useful, file-compatible audio software -- just try to figure out how to reliably read/write .wav files.   Remember, there are at least three common data variants, many sample rates, and all kinds of metadata.  Then there are semi-compliant/semi-nonstandard .wav variants.  Then, is the tool going to have to be compatible with RF64?  (Probably not, but it still ends up being an issue for application software.)  Okay, go find a software library that does that work -- but then watch the licensing...  SW licensing is yet another issue to worry about (not so much on test software, but on redistributed test software, it is.)
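As a taste of the problem, here is a minimal sketch using Python's stdlib `wave` module (the filename and helper are my own illustration).  Note its limits: it handles only integer PCM -- no float32 data, no WAVE_FORMAT_EXTENSIBLE metadata, no RF64 -- which is exactly why robust .wav tooling is tedious to write:

```python
import struct
import wave

def describe_wav(path):
    # Report the basic format parameters of a PCM .wav file
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "bits": 8 * w.getsampwidth(),
            "frames": w.getnframes(),
        }

# Write a tiny 16-bit mono file (four samples), then read it back
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 2 bytes = 16-bit PCM
    w.setframerate(44100)
    w.writeframes(struct.pack("<4h", 0, 1000, 0, -1000))

print(describe_wav("tone.wav"))
```

Handling the other data variants (24-bit packed, 32-bit float, extensible headers) means parsing the RIFF chunks yourself or pulling in a third-party library -- with the licensing questions that brings.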

 

When a test tool generates audio files with the appropriate/desired/needed data, that is potentially a nice shortcut for the developer or the moderately sophisticated (or more) audiophile.

 

Sure, I have a multi-threading framework where I can write a processing program in minutes, with all of the .wav file work already done, but it still takes a few hours to create a single-purpose test tool.   If @pkane2001 has already done the work to effectively create the equivalent of multiple single-purpose test tools, that saves the user/developer time.  (Even properly controlled/standardized 'noise' isn't trivially simple to generate.)

 

An already-made test tool is really helpful.

Whether the tool is used for 'listening' or for SW test purposes, it serves a similar role.  (Even if I could play an instrument, I doubt that I'd get the sax or clarinet out and use it as a software test input.)   Even controlling an audio test with real instruments, one would have to be VERY careful.

 

John

 

10 minutes ago, bluesman said:

And it isn't.  Go back to posts 580 and 620 for a start.  You (and others - you're not alone) keep pushing the fact that all intermodulation is distortion and is generated by nonlinearity.  You're the one who introduced your app in support of this belief - I couldn't care less about it.  I'm suggesting and supporting the belief that not all intermodulation is distortion caused by nonlinearity.  Musical notes intermodulate naturally in the air because their compression and rarefaction waves collide with the molecules in the air around them, pushing those that are randomly minding their own business into sum and difference waves that are heard and recorded as part of the program material.

 

Those intermodulations (which are NOT distortion in the true sense, as they arise from and are part of the source signal) are then recreated again from the reproduced program material being thrust into the air by speakers.  This is in addition to and apart from any intermodulation distortion caused by nonlinearities in the signal chain.  Your repeated assertion that your app proves that all IM and harmonic distortion stems from nonlinearities is simply not correct, in my opinion.  I support my opinion by having recorded pure tones with their natural intermodulation products, and showing that those intermodulation products persist in the recorded waveform even after filtering out the fundamentals that generated them.

 

Another contributor believes that these natural IM products are coming from resonances in the instruments themselves.  I cited well done research by others showing that there are no resonances in a solid body guitar anywhere near the sum and difference tones in my demo, which I believe also refutes this belief.  Similar research shows a lack of resonance anywhere near the intermodulation products found in the playing of flutes, oboes, and many other instruments.

 

Because of my belief, I'm suggesting that distortion measurements alone are misleading.  I believe that it's possible to separate the natural IM in the program from the IM created by playback of the program (entirely apart from any IMD introduced by the system).  I don't know how yet - it may be phase differences, amplitude differences, or perhaps use of real time FFT to differentiate natural IM in the source program from the products of intermodulation between the recorded natural IM and that generated by its playback. 

 

I think this added layer of intermodulation is at least part of that famous veil that we all want removed from our music on playback.  But "IMO", believing that all IM is the product of nonlinearities and is distortion is counterproductive. And if I'm correct, inducing distortion as you advocate for evaluation, testing and development would be the wrong way to approach the problem.

 


We can get into defining what constitutes a 'nonlinearity', but usually it means something like a non-fixed gain at a single frequency -- a gain that changes at a given frequency as a function of some parameter.  The controlling value for that 'gain' can be the instantaneous signal 'voltage' or 'pressure'.  A waveform can be distorted by a simple filter, for example, but that filter is NOT nonlinear, because no new sine-wave frequencies are generated (a simple filter doesn't vary the gain at a specific frequency vs. time.) 

 

Normally, new frequency components cannot be generated from a signal without a nonlinearity.  Not all 'nonlinearities' are 'fixed' bends in a gain curve.   Any time you multiply a sine wave by a fixed value (other than zero), you get a sine wave.  But the gain curve can be bent in insidious (parametric) ways, and there is circuitry whose behavior depends entirely on those parametric effects.

 

You are probably thinking of a fixed nonlinear gain curve vs. time, but a lot of audio doesn't work that way -- distortion happens because of a changing gain curve.  That is just another kind of nonlinearity: parametric, rather than a 'dc' (constant) kind.

 

Distortion is all 'relative', too -- the argument can end up mired in sophistry.   However, traditionally, if you get new frequencies in a spectrum after sending a signal through a device, there be 'nonlinearities' in there.
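That traditional test is easy to demonstrate (my own numpy sketch; the 900/1100 Hz two-tone and the tanh clipper are arbitrary choices).  A fixed linear gain creates no new frequencies, while a static nonlinearity sprays odd-order IMD products at 2*f1 - f2 = 700 Hz and 2*f2 - f1 = 1300 Hz:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
two_tone = 0.4 * np.sin(2 * np.pi * 900 * t) + 0.4 * np.sin(2 * np.pi * 1100 * t)

def spectrum_peak(x, f):
    # Magnitude at frequency f; 1 Hz bins with a one-second buffer
    s = np.abs(np.fft.rfft(x)) / len(x)
    return s[int(round(f))]

linear = 2.0 * two_tone              # fixed gain: no new frequencies
clipped = np.tanh(3.0 * two_tone)    # static nonlinearity: IMD appears

# The 700/1300 Hz products exist only after the nonlinearity
for f in (700, 900, 1100, 1300):
    print(f, spectrum_peak(linear, f), spectrum_peak(clipped, f))
```

(Note that tanh is an odd function, so it produces only odd-order products; an asymmetric transfer curve would also add the second-order sum and difference tones at 200 Hz and 2000 Hz.)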

 

I can agree with one sentiment -- not all nonlinearities are distortion.  It is all about the goal of the circuit/instrument/etc.   The output of an RF mixer that results from its nonlinearities isn't deemed 'distortion', even though associated distortion components are involved.

 

John

 

