
Misleading Measurements



 

9 minutes ago, STC said:


Hi AN,

 

It will always exist because we are receiving two inputs via two ears. All sound reaching the eardrums has already been altered, even for ONE mono signal, due to the differences in the pinnae between the left and right ears. You are always listening to two inputs at all times, which cannot represent the exact measurements (sound waves) of the original, even after taking into consideration the frequency shaping by our ears.
 

Furthermore, our ears are not fixed in space; we continually move, even when trying to hold our breath. Even the slightest movement alters the FR that reaches your eardrums. A perfect 440 Hz will never be perfect by the time it reaches the ears, because we are constantly moving, even if only by a few millimetres.

 

Hi ST

Sorry, I just edited my previous post to add the quote I wanted you to see about intonation.

 

So even with one ear and one speaker, Intonation will occur from the combination of the sound waves in the air.

 

I'm just thinking it's worse with stereo, and Ambiophonics may help mitigate that effect.

 

Sound Minds Mind Sound

 

 

8 minutes ago, Audiophile Neuroscience said:

 

 

Hi ST

Sorry, I just edited my previous post to add the quote I wanted you to see about intonation.

 

So even with one ear and one speaker, Intonation will occur from the combination of the sound waves in the air.

 

I'm just thinking it's worse with stereo, and Ambiophonics may help mitigate that effect.

 


Actually I think Ambiophonics will make it worse, because of the recursive nature of the signal, feeding more than the normal two signals of stereo. OTOH, it also helps to smooth and average out the differences. The only thing we noticed is that the room's influence is lessened, and we really do not know why, as this involves how humans perceive sound rather than measurements.
 

With physical barriers to prevent the crosstalk, it is possible that Ambiophonics might mitigate it.
 

 

42 minutes ago, Jud said:

 

 

@esldude and I discussed this quite some time back.

 

I wondered whether reproduction of ultrasonics helped reproduce some of the natural harmonics of musical instruments. He maintained that any intermodulation of that type from home speakers, over and above the audible frequencies captured at the mics, would be distortion. I'm not exactly sure whether Paul is arguing the opposite.

 

I'm quite sure at least some speaker designers are well aware of the possibility that such intermodulation may occur, and take steps to minimize it insofar as possible. I'd guess it would show up pretty clearly in some speaker tests.

 

24 minutes ago, Jud said:

 

In fact free software like REW used with a calibrated mic should show it with test tones at various frequencies, so you can check whether this might be a concern with your speakers.

 

Hi Jud,

It may well be that I am not understanding, but if you read my edited post #548, it seems to me (and apparently to @bluesman) that, following on from this, speakers must reproduce what they are fed, which includes the captured intonation products that instruments and orchestras produce. Those acoustic intermodulations are recreated anew in the air on playback, which in effect doubles down on the intermodulations already captured and faithfully reproduced.

 

EDIT - how might REW assess that, and how would a speaker manufacturer control it?

Sound Minds Mind Sound

 

 

10 minutes ago, Audiophile Neuroscience said:

Those acoustic intermodulations are recreated anew in the air on playback, which in effect doubles down on the intermodulations already captured and faithfully reproduced.

 

I'm saying why theorize about this? Run test tones through your speakers (Audacity or other free software should work), and use REW or other free software with a calibrated mic to see if you're getting intermodulation products at what could be audible levels.
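A minimal sketch of how one might do this, assuming Python with numpy and scipy; the tone frequencies and the file names (two_tone_test.wav, capture.wav) are arbitrary placeholders, and the capture would be whatever you record at the listening position with REW or any other recorder:

```python
import numpy as np
from scipy.io import wavfile

fs = 48000                     # sample rate (Hz)
f1, f2 = 1000.0, 1300.0        # two test tones; IM products would sit at 300, 700, 1600, 2300 Hz
t = np.arange(fs * 10) / fs    # 10 seconds of signal
x = 0.4 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)
wavfile.write("two_tone_test.wav", fs, (x * 32767).astype(np.int16))

# After playing the file and recording it at the listening position,
# inspect the spectrum of the (mono) capture for sum/difference lines:
fs_rec, y = wavfile.read("capture.wav")      # hypothetical recording of the playback
y = y.astype(float)
spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1.0 / fs_rec)
for f in (f2 - f1, 2 * f1 - f2, f1, f2, 2 * f2 - f1, f1 + f2):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:7.1f} Hz : {20 * np.log10(spec[idx] / spec.max() + 1e-12):7.1f} dB re strongest line")
```

If the lines at the sum and difference frequencies sit well below audibility, that would be the "ugly fact" answering the theory either way.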

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.


Again, modulation doesn't occur "in the air" - it may occur in the driver, but that's a completely separate issue.

 

Using two tones, over two drivers, is in fact how high-quality microphones are checked for misbehaviour - microphones are going to be orders of magnitude better than speakers for distortion, so how do you measure how good the mic actually is? 🙂

 

The answer: https://www.listeninc.com/products/test-sequences/free/microphone-intermodulation-distortion-measurement/
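For what it's worth, a rough sketch of the underlying arithmetic, assuming Python with numpy/scipy and a hypothetical mono recording named mic_capture.wav; this is not the Listen, Inc. test sequence itself, just the generic two-tone difference-frequency IMD figure. Because each tone comes from a separate driver, any sum/difference components at the mic point to non-linearity in the mic (or downstream), not in a single speaker:

```python
import numpy as np
from scipy.io import wavfile

f1, f2 = 1000.0, 1100.0                  # tones fed to driver 1 and driver 2 (example values)
fs, y = wavfile.read("mic_capture.wav")  # hypothetical mono capture of the mic under test
y = y.astype(float)
spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)

def line(f):
    """Magnitude of the spectral line nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Second- and third-order intermodulation products relative to the two fundamentals
products = np.sqrt(line(f2 - f1) ** 2 + line(f1 + f2) ** 2 +
                   line(2 * f1 - f2) ** 2 + line(2 * f2 - f1) ** 2)
fundamentals = np.sqrt(line(f1) ** 2 + line(f2) ** 2)
print(f"two-tone IMD ~ {100.0 * products / fundamentals:.3f} %")
```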

 

8 minutes ago, fas42 said:

 

Again, modulation doesn't occur "in the air" - it may occur in the driver, but that's a completely separate issue.

 

 

Another proposition that's simply tested: measure with the mic at the speaker(s), then measure with the mic at the listening position.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

1 hour ago, Jud said:

 

I'm saying why theorize about this? Run test tones through your speakers (Audacity or other free software should work), and use REW or other free software with a calibrated mic to see if you're getting intermodulation products at what could be audible levels.

 

Okay, I am clearly in over my head here. I will wait till @bluesman comes back online. I wouldn't know what to look for using spectral analysis, but David (bluesman) apparently has done so.

 

Conceptually, and maybe ill-conceived, based on previous quoted material and that quoted below: Intonation is acoustic sum and difference tones from constructive and destructive interference patterns of sound waves in air. It happens in the real world as a natural, inevitable phenomenon and would inevitably happen anew on playback. It's not a fault or distortion. It's not IMD. If correct, every speaker on the planet would faithfully create the effect, just as real instruments do in a real orchestra. The speaker would be 'flawed' if it didn't produce the notes that then mingle in the air to create sum and difference tones. There's no way around it, apart from turning off the sound or maybe some sort of fancy DSP algorithm which I am totally unaware of.

 

The 'double down' effect (my words) would occur because the mic captured the sum and difference tones of the orchestra (as well as the fundamental component notes); the speaker must then play back everything, including the captured sum and difference tones and the fundamental component tones. The latter must create a new set of sum and difference tones in your living room which add to the original captured sum and difference tones.

 

This process would be expected, normal and inevitable.
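Purely as toy arithmetic (a hypothetical Python sketch, not a claim about what actually appears acoustically), the 'double down' idea can be written out as two generations of sum and difference frequencies, using the 262/330 Hz pair from the quote below:

```python
from itertools import combinations

def combo_tones(freqs):
    """First-order sum and difference frequencies of every pair in freqs."""
    out = set()
    for a, b in combinations(sorted(freqs), 2):
        out.add(round(b - a, 1))
        out.add(round(a + b, 1))
    return sorted(f for f in out if f > 0)

fundamentals = [262.0, 330.0]                        # the violin C / trumpet E pair quoted below
first_gen = combo_tones(fundamentals)                # what the mic could have captured: [68.0, 592.0]
second_gen = combo_tones(fundamentals + first_gen)   # what playback of everything could add on top
print("first generation :", first_gen)
print("new in second gen:", sorted(set(second_gen) - set(first_gen)))
```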

 

I probably have it all cocked up 😃🤷‍♂️

 

Quote

I think it’s important to differentiate the intermodulation that occurs among notes in the performance from all other IM.  Every time a violin plays a C (262 Hz at concert pitch) and a trumpet plays the E above it (330 Hz), the sum and difference frequencies of those fundamentals plus all the harmonics from each instrument are also generated at an audible but far lower level.  This IM is captured by the mic and, in an all analog recording chain, is preserved just like the notes that are played.  It’s created anew in playback, which I believe is a barrier to truly lifelike reproduction.  Because the same IM products are now louder and interacting to create even more, they place a true sonic veil over the original performance.

 

I've played a bit with digital pure tones (which is truly an oxymoron) to see what's in the spectrum of their IM products. I neither hear nor see nor measure the sum and difference frequencies as I do with pure analog tone generators and amplification chains. So either my equipment is faulty, I don't know how to use it, or digitized sine waves do not interact the same way real ones do. When you look at a stretched-out digital sine wave on a good high-res analog scope, you can see what looks to me like the effect of sampling: a fine discontinuity that seems to reflect the sampling rate.
 

I’m trying to learn as much as I can about IM in the digital domain, so all input is welcome.  I don’t think that it’s the same phenomenon as it is in the analog world, which may account for a lot of the differences we hear both between versions of the same source and among listeners to the same playback.  Capturing and controlling IM seems to me to be a key frontier for advancement of SQ in recording and in playback.

 

11 hours ago, bluesman said:

But I’m talking about intermodulation, not IM distortion. Intermodulation is a perfectly normal acoustic phenomenon that occurs when two instruments play different notes at the same time.  It’s an integral part of live music, so it’s in the source performance. IM distortion is the addition of intermodulation products that are not in the source.  
 

Not all intermodulation in audio is distortion.  In fact, natural intermodulation among instruments is a large component of the sound of a symphony orchestra. The phenomenon of recreation of the fundamental from the harmonic structure of a tone is what lets you hear lower notes in a musical performance than were played.  The lowest note in the spectral splash of the tympani can be heard below the pitch to which the head is tuned, if this is what the composition demands. Some composers use this effect as part of their music.  As I recall, the oboe’s spectrum is almost entirely harmonics, yet we hear the fundamental being played because we reconstruct it from those overtones and intermodulation products.
 

You can play two analog sine waves of differing frequencies and find the sum and difference intermodulation products in a spectral analysis of the sound.  The first ones are also easily heard if the fundamentals are audible and their sum and/or difference frequencies are in the audible range, although amplitude drops off precipitously after that. This does not appear to be the case with digitized sine waves, as I don’t see the same spectrum - what should be the biggest sum and difference components seem to be missing.  And I don’t hear those IM products when I play the tones through speakers or headphones.
 

So I’m trying to understand if and how digital tones intermodulate with each other, not how IM distortion products are created.  There’s a big difference between the two.

 

5 hours ago, bluesman said:

Actually there is, and it has nothing to do with the playback equipment. Acoustic intermodulation takes place whenever two tones of differing frequencies are sounded together. If you play a 260 Hz C and a 320 Hz E at the same time, intermodulation will generate a 60 Hz tone and a 580 Hz tone that are much lower in SPL than the fundamentals but clearly audible, along with a host of even quieter products that are addition or subtraction products among the fundamentals and the 60 and 580 Hz tones, etc.

 

When you play back the program material, you will again generate the same acoustic intermodulation from the same two original tones, which are now playing from your speakers along with their recorded intermodulation products. It is indeed created anew, and it is purely an acoustic phenomenon, just as it was in the performance. It has nothing to do with the electronics, which add whatever intermodulation distortion products they generate. But if a 260 Hz C and a 320 Hz E are coming from your speakers, their intermodulation products are being created anew and added to the same tones that were created in the original performance (and therefore are on the recording). This has nothing to do with the electronics - it's purely an acoustic event.

 

If you don't understand acoustic intermodulation, you can easily and dramatically experience it with a guitar.  The 5th guitar string is a 110 Hz A. If the strings are correctly in tune, the 6th string (an 82.4 Hz E) will produce a 110 Hz note when pressed onto the 5th fret.  So if you finger the E string at the 5th fret and strike both the E and A strings together, you should hear nothing but a 110 Hz tone.  If one of the strings is a tiny bit sharp or flat, you'll hear both the two pitches created by plucking them and the intermodulation of the two different frequencies.  If the strings are 1 Hz apart in tuning, you'll hear a throbbing in the notes at exactly 1 beat per second.  This is how we used to tune before electronic tuners were invented.  And if you record this and play it back with analog equipment, you'll hear twice as much throbbing.
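A small sketch (assuming Python/numpy) of the tuning-beat example above, which anyone can run to see what the linear sum of two tones 1 Hz apart actually contains; frequencies are chosen to match the 5th-fret example, and whether any extra spectral line appears in a real recording is exactly what the measurements discussed in this thread would have to show:

```python
import numpy as np

fs = 48000
t = np.arange(fs * 4) / fs                                           # 4 seconds of signal
x = np.sin(2 * np.pi * 110.0 * t) + np.sin(2 * np.pi * 111.0 * t)    # two strings 1 Hz apart

# The 1 Hz throb is the amplitude envelope of the summed signal; the spectrum
# below shows which frequency lines are present in the signal itself.
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
for f in (1.0, 110.0, 111.0, 221.0):        # beat rate, the two strings, and their sum
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:6.1f} Hz : {20 * np.log10(spec[idx] / spec.max() + 1e-12):7.1f} dB re peak")
```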

 

4 hours ago, bluesman said:

I'm not sure if it's good or bad or a bit of each. Acoustic intermodulation is part of what we hear at a concert, and it clearly helps to create the overall soundscape of the live performance and venue. But as is clear from some of the responses to my posts in this thread, there aren't many people who've even given this any thought let alone come to an understanding of it. I think it's logical to assume that creating the same intermodulation products during playback that were created and recorded at the performance might have a clearly audible effect on how real it sounds. I do not know this for sure, and it'll take a lot of work to begin to figure it out.  But I strongly suspect it's hiding an overlooked opportunity to further improve SQ.

 

Interestingly, recording bands part by part in isolation eliminates this consideration, except for the intermodulation among the fundamental and harmonic tones of the individual instrument being recorded. So playback is the first opportunity for intermodulation among all the instruments in the ensemble. It's only in full-scale live performances by multiple instruments that they can interact acoustically.

 

It doesn't matter whether the recording and/or playback equipment is analog or digital - the sounds of the instruments coming from your speakers are analog, so they will generate natural intermodulation. Maybe we could use real time spectral analysis to identify any recorded intermodulation and DSP to cancel it with out of phase addition and summation.  I think this may be important.  It also may be a wild goose chase - but I like goose with the right sauce.....

 

4 hours ago, bluesman said:

Acoustic intermodulation is not in your head - it's physical and audible.  The beat frequency you hear and we use to tune our guitars is recordable and audible on playback.  But it seems that it only occurs with analog sources - it doesn't seem to develop when the differing frequencies themselves are digitally generated.  However, even an all digital record-playback chain starts with analog input from live instruments and ends up with purely analog output as sound, so it definitely occurs with what's coming out of your speakers on playback.

 

Sound Minds Mind Sound

 

 

26 minutes ago, Audiophile Neuroscience said:

Conceptually, and maybe ill-conceived, based on previous quoted material and that quoted below: Intonation is acoustic sum and difference tones from constructive and destructive interference patterns of sound waves in air. It happens in the real world as a natural, inevitable phenomenon and would inevitably happen anew on playback. It's not a fault or distortion. It's not IMD. If correct, every speaker on the planet would faithfully create the effect, just as real instruments do in a real orchestra. The speaker would be 'flawed' if it didn't produce the notes that then mingle in the air to create sum and difference tones. There's no way around it, apart from turning off the sound or maybe some sort of fancy DSP algorithm which I am totally unaware of.

 

The 'double down' effect (my words) would occur because the mic captured the sum and difference tones of the orchestra (as well as the fundamental component notes); the speaker must then play back everything, including the captured sum and difference tones and the fundamental component tones. The latter must create a new set of sum and difference tones in your living room which add to the original captured sum and difference tones.

 

This process would be expected, normal and inevitable.

 

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

22 hours ago, pkane2001 said:


As discussed, it is not THD but interchannel differences that determine soundstage “quality”. These are caused by differing levels and amounts/types of distortion between channels.

 

All “things” have to be very good to record and reproduce a realistic 3D sound-stage, and the measurements that correlate with a realistic 3D sound-stage are not simply the level of inter-channel differences or THD. In fact, a big, flat left-right sound-stage is easy to record and reproduce, whereas I am talking about a realistic 3D sound-stage with a good image.

2 hours ago, Summit said:

 

All “things” have to be very good to record and reproduce a realistic 3D sound-stage, and the measurements that correlate with a realistic 3D sound-stage are not simply the level of inter-channel differences or THD. In fact, a big, flat left-right sound-stage is easy to record and reproduce, whereas I am talking about a realistic 3D sound-stage with a good image.


“Everything matters” is a possibility, but the question is why?  
 

Left to right position is defined by differences between channels.

 

Depth is detected primarily through  reverb. It’s possible that some distortions will destroy very low level reverb that our ears may find useful, but that should be measurable, as these will affect any low level signal, not just reverb. The question is then, at what level can we still hear reverb, and at what level does it still help the brain determine distance?

9 hours ago, Audiophile Neuroscience said:

So even with one ear and one speaker, Intonation will occur from the combination of the sound waves in the air.

Intonation is the accuracy of the pitch of a note.  If an instrument is faulty (eg a poorly crowned or heavily worn fret on a guitar), not all the notes played on it will be perfectly in tune even though the instrument is tuned / pitched / tempered correctly.  Poor intonation can also be the player’s fault, eg a violinist who places his or her fingers imprecisely on the fingerboard.  
 

There must be a better term for whatever you’re trying to describe. I don’t think you mean intonation.

3 hours ago, STC said:

You may want to look at Tartini tones and the intermodulation distortion that occurs naturally due to the non-linearity of the ears.

From Jeffrey Freed (violinist.com)


“My current thinking is that [Tartini tones] are real (meaning that they actually exist as pressure pulses in the air which conveys the sound from the source to the ear), because these pressure pulses at the Tartini frequency are picked up by a microphone, and can be seen by zooming in far enough on a recorded sound file.”

 

He also opines (as do I) that digital processing of the sources may reduce or eliminate these intermodulation tones, and that this may explain why they don’t appear in some spectral displays.  A sampled sine wave is technically not a continuous function, which may be why digitizing alters, reduces, or eliminates acoustic intermodulation.

9 minutes ago, bluesman said:

A sampled sine wave is technically not a continuous function, which may be why digitizing alters, reduces, or eliminates acoustic intermodulation.


A sampled sine is a continuous function within the limits of the sampling frequency. It's stored as discontinuous samples but is reproduced as a continuous waveform by any properly constructed DAC. That's the result of the infamous Nyquist-Shannon theorem.
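A minimal sketch of that point, assuming Python/numpy: reconstructing a sampled sine between its stored sample instants with Whittaker-Shannon (sinc) interpolation, which is what an ideal DAC reconstruction filter approximates. The sample rate and tone frequency are arbitrary:

```python
import numpy as np

fs = 8000                                   # sample rate
f0 = 440.0                                  # tone well below fs/2
n = np.arange(512)                          # the stored, discrete samples
samples = np.sin(2 * np.pi * f0 * n / fs)

def reconstruct(t_sec):
    """Continuous-time value at t_sec via Whittaker-Shannon interpolation of the samples."""
    return np.sum(samples * np.sinc(fs * t_sec - n))

# Evaluate deliberately *between* sample instants, away from the edges of the finite record
t_test = (np.arange(200, 312) + 0.37) / fs
recon = np.array([reconstruct(t) for t in t_test])
ideal = np.sin(2 * np.pi * f0 * t_test)
# The residual is limited only by truncating the sinc sum to 512 samples (an ideal
# reconstruction uses an infinitely long filter); it shrinks as more samples are included.
print("max deviation from the continuous sine:", np.max(np.abs(recon - ideal)))
```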

1 hour ago, pkane2001 said:


“Everything matters” is a possibility, but the question is why?  
 

Left to right position is defined by differences between channels.

 

Depth is detected primarily through  reverb. It’s possible that some distortions will destroy very low level reverb that our ears may find useful, but that should be measurable, as these will affect any low level signal, not just reverb. The question is then, at what level can we still hear reverb, and at what level does it still help the brain determine distance?

 

I didn't mean to say that “everything matters” - quite the opposite. I do not think that low distortion and THD correlate with a realistic 3D sound-stage. If low distortion and THD really showed such a relationship, tube gear wouldn't be selected for its big, 3D, lifelike sound-stage.

 

Reverb is made at the site and recorded. It is not more difficult to reproduce than any other SQ aspect. Some distortions can create an illusion of more reverb than is recorded, though.

 

I don't believe for a second that the minimal measured inter-channel differences in modern DACs and amps make much difference when it comes to left and right positioning. Vinyl measures much, much worse and can still do left-right and 3D sound-stage as well as a lot of digital gear.

 

Many people use sound-stage as an example of a characteristic where measurements of electronics won't tell you anything of value. Recordings, speakers and room acoustics are another matter, and measurements of those matter a lot more.

10 hours ago, Audiophile Neuroscience said:

The latter must create a new set of sum and difference tones in your living room which add to the original captured sum and difference tones.

 

The question is, must it indeed and if so to what degree? It seems like something that might make sense, but:

 

- To what degree are speaker designers aware of this and design speakers to minimize the effect?

 

- I've seen science defined as "A beautiful theory slain by an ugly fact." 🙂 So the theory sounds at least somewhat sensible, but let's do some measurements and see if we discover any contrary ugly facts.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

9 hours ago, pkane2001 said:


A sampled sine is a continuous function within the limits of the sampling frequency. It's stored as discontinuous samples but is reproduced as a continuous waveform by any properly constructed DAC. That's the result of the infamous Nyquist-Shannon theorem.

I’m not talking about sampled sine waves.  A DAC will interpolate the missing bits of the curve when creating the analog analogue of the digital signal.  What I’ve seen presented as evidence that there’s no such thing as recorded acoustic intermodulation is spectral analysis of pairs of digitally generated sine waves.  Running the output of a digital signal generator directly into a computer for recording requires no DAC and the “sine wave” it generates is made up of discontinuous values.
 

At first, I assumed that there’d be intermodulation between dissimilar notes created by my software signal generator too, and I was puzzled when I couldn’t find the right products in the spectrum.  I now think it’s the fact that those tones are purely digital and simply will not interact because they’re just very fancy little square waves strung rapidly together.
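One way to test that idea directly (a hypothetical Python/numpy sketch, not anyone's published method): generate the two tones digitally, look at the spectrum of their plain linear sum, then pass the same sum through a small non-linearity and look again. Any sum and difference lines in the second case come from the non-linearity, not from the tones themselves:

```python
import numpy as np

fs = 48000
t = np.arange(fs * 2) / fs                  # 2 seconds; 0.5 Hz FFT resolution
f1, f2 = 260.0, 320.0
linear = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
nonlinear = linear + 0.1 * linear ** 2      # crude 2nd-order non-linearity for comparison

def line_db(x, f):
    """Level of the spectral line nearest f, in dB relative to the strongest line."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f))] / spec.max() + 1e-12)

for f in (f2 - f1, f1, f2, f1 + f2):
    print(f"{f:6.1f} Hz   linear sum: {line_db(linear, f):7.1f} dB   "
          f"with non-linearity: {line_db(nonlinear, f):7.1f} dB")
```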

 

It seems that few of us have analog signal generators and recorders any more.  Now I really need my good old relics :) 

16 hours ago, sandyk said:

...Binaural beats

 

11 hours ago, STC said:


David,

 

You may want to look at Tartini tones and the intermodulation distortion that occurs naturally due to the non-linearity of the ears.
 

Cheers,

ST

 

8 hours ago, bluesman said:

Intonation is the accuracy of the pitch of a note.  If an instrument is faulty (eg a poorly crowned or heavily worn fret on a guitar), not all the notes played on it will be perfectly in tune even though the instrument is tuned / pitched / tempered correctly.  Poor intonation can also be the player’s fault, eg a violinist who places his or her fingers imprecisely on the fingerboard.  
 

There must be a better term for whatever you’re trying to describe. I don’t think you mean intonation.

 

8 hours ago, bluesman said:

From Jeffrey Freed (violinist.com)


“My current thinking is that [Tartini tones] are real (meaning that they actually exist as pressure pulses in the air which conveys the sound from the source to the ear), because these pressure pulses at the Tartini frequency are picked up by a microphone, and can be seen by zooming in far enough on a recorded sound file.”

 

He also opines (as do I) that digital processing of the sources may reduce or eliminate these intermodulation tones, and that this may explain why they don’t appear in some spectral displays.  A sampled sine wave is technically not a continuous function, which may be why digitizing alters, reduces, or eliminates acoustic intermodulation.

 

6 hours ago, Jud said:

 

The question is, must it indeed and if so to what degree? It seems like something that might make sense, but:

 

- To what degree are speaker designers aware of this and design speakers to minimize the effect?

 

- I've seen science defined as "A beautiful theory slain by an ugly fact." 🙂 So the theory sounds at least somewhat sensible, but let's do some measurements and see if we discover any contrary ugly facts.

 

So, in a nutshell, my confusion here was born of the theory (known science) relating to acoustics - mechanical waves, in this case sound waves in air, with their known properties of constructive and destructive interference. My confusion, specifically, was that I thought musicians were stating or implying this as the mechanism for the part of music theory (with which I am far less familiar, not being a musician) covering "sum and difference" tones.

 

I mentioned in an earlier post that "the plot thickens" when considering things like "the missing fundamental", and I could have added other normal perceptual creations that occur in the cochlea/brain. They would include "binaural beats" for two ears, and "subjective combination tones" (which should in theory appear even for one ear). In either case these "illusions" are not acoustic phenomena and will not appear in the signal or be captured by a mic. They will presumably be recreated anew from the fundamental component tones on the recording, but there is no 'double down'.

 

Re Intonation: it can mean a number of different things depending on context, e.g. speech vs music. I always just thought of intonation in music as being in or out of tune, that's it. As said, I am no expert on music notation or theory, so I believe I read more into a quote from a professional symphony musician - intonation can mean "the relationship of the tones played by different instruments at the same time. When two trumpets (or other instruments) are playing, sum and difference tones are generated acoustically. When 'in tune', those resultant tones line up in frequency and become a constructive part of the musical presentation. If we play 'out of tune', the sum and difference tones become destructive, and the end product becomes hard to listen to. If we are talking about playing major chords, the sum and difference tones are all in the same key." In my mind this conflated, rightly or wrongly, being 'in tune' with sum and difference tones physically present in the air, and obviously created some confusion on my part.
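As a toy illustration of that quote (a hypothetical Python sketch, with rounded frequencies), the difference tones of a C major triad land exactly on octaves of C in just intonation, and drift slightly off in equal temperament - which is one reading of "constructive" versus "destructive" in the musician's description:

```python
from itertools import combinations

C4 = 261.6                                                     # rounded concert-pitch C4
just = {"C4": C4, "E4": C4 * 5 / 4, "G4": C4 * 3 / 2}          # just-intonation major triad
equal = {"C4": 261.63, "E4": 329.63, "G4": 392.00}             # 12-TET values

for name, triad in (("just intonation", just), ("equal temperament", equal)):
    print(name)
    notes = sorted(triad.items(), key=lambda kv: kv[1])
    for (na, fa), (nb, fb) in combinations(notes, 2):
        print(f"  {nb} - {na}: difference tone {fb - fa:6.2f} Hz")
```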

 

So it turns out that there is a hypothesis, not accepted theory, that combination (Tartini) tones may actually have a basis in acoustics, i.e. mechanical waves; but to paraphrase Jud, if it disagrees with experiment, it is wrong. As Jud also posted, shouldn't this be easy to explore by taking measurements at the speaker compared to the listening position? I do still 'like' the hypothesis. It would explain a lot.

 

 

Sound Minds Mind Sound

 

 

15 minutes ago, fas42 said:

 

If there are intermodulation frequencies created then it's because the analogue circuitry is distorting, in the classic manner, from non-linearity in the signal path. Digital can be made arbitrarily precise, and so any distortion frequencies at levels that are meaningful can be avoided by simply using more bits to do the maths.

And of course your hearing, and Paul's hearing, is perfectly linear, without distortion?

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

22 minutes ago, pkane2001 said:


What does this have to do with anything Frank said, which happens to be correct?

 

Even if it is correct, it's a waste of time, as non-linear human hearing is involved; and in any event, no matter how high the initial bit rate, it still ends up converted to 24 bits at best, which virtually ALL DACs are unable to fully resolve, with many DACs not resolving more than 21 bits properly.

Neither have I seen any S/W signal generator that has distortion as low as you would require - not even the best of the analogue generators, such as Wien bridge oscillators.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

5 minutes ago, sandyk said:

 

Even if it is correct, it's a waste of time, as non-linear human hearing is involved; and in any event, no matter how high the initial bit rate, it still ends up converted to 24 bits at best, which virtually ALL DACs are unable to fully resolve, with many DACs not resolving more than 21 bits properly.


What exactly is a waste of time? Understanding what causes IMD and what doesn’t? That’s the only thing that Frank stated, nothing related to the number of bits a DAC  can resolve.

12 minutes ago, pkane2001 said:


What exactly is a waste of time? Understanding what causes IMD and what doesn’t? That’s the only thing that Frank stated, nothing related to the number of bits a DAC  can resolve.

I repeat:

Neither have I seen any S/W signal generator that has distortion as low as you would require - not even the best of the analogue generators, such as Wien bridge oscillators.

Anyway, what has any of this to do with the topic of the thread, Misleading Measurements?

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

