
Amir at ASR claims Uptone won't sell the ISO regen to him...


Recommended Posts

22 minutes ago, Speed Racer said:

 

Did Alex run over your dog or something? He doesn't deserve these kinds of comments. He's in business to make money, but in my dealings with him, money has never seemed to be the overriding concern. The concern has always been with the quality of the products and customer satisfaction.

Stating the obvious. I don't have a problem with someone making money; everyone should. He did renege on doing a blind test when someone took him up on it. Again, he has nothing to gain from that and could only lose. Any smart business person would turn it down. He did.

 

The important measures are dollars; audio measurements are unimportant to his business venture or to satisfying his customers. Those are not in conflict in a case like this, so what I stated is not in conflict with his concern being customer satisfaction. They go together.

And always keep in mind: Cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them; it's nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

10 hours ago, jabbr said:

 

 

Ok so there are two different measurements that we've been discussing and I need to clarify what is what:

 

1) Measure "jitter", or more properly phase error, in the DAC clock: This is done with phase error measurement equipment. There are several ways to measure phase error using both analog and digital techniques: so-called "vector network analyzers", the venerable HP 3048A, and newer equipment such as the John Miles TimePod, as well as offerings from Keysight (née Agilent, née HP).

 

Phase error is provided as a plot: http://www.crystek.com/crystal/spec-sheets/clock/CCHD-575.pdf -- in this case the Agilent E5052A was used to take the measurements. See how the phase error rises as the offset frequency goes down? That's called "close-in" phase noise. Yes, this phase error goes down to an offset frequency of zero. But phase error is always measured with respect to the clock frequency, e.g. 11 MHz or 22 MHz. "Jitter" is 11 MHz +/- offset, where the offset varies from 0 to ...

 

2) Measure the effects of jitter at the analogue output of the DAC: I've been discussing "line width" and widening of the peak in a number of recent posts. Take a pure tone -- for the sake of our discussion, a 1 kHz sine wave generated at 24 bits / 352 kHz, or DSD256 or DSD512 -- and send this digital stream to the DAC; in an ideal situation the DAC will emit a pure 1 kHz sine wave.

 

Phase noise, specifically close-in phase noise, will cause a distortion of the pure sine wave at the output. What does this look like? It causes the "peak" in the spectrum to widen. That's called "linewidth". It goes up with increasing phase error. If we take the DAC output and run it through a spectrum analyzer such as the Audio Precision that "Amir" uses (quotes because I've never met him and he's not here ;)), then what would be a thin line at the 1 kHz frequency will spread out. This is caused by applying the phase error curve, as above, to the pure tone.
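To make the peak-widening idea concrete, here is a minimal sketch (assuming NumPy; the 2 Hz / 3 rad sinusoidal phase wobble is an arbitrary, deterministic stand-in for close-in phase noise, not a model of any real clock):

```python
import numpy as np

fs = 48_000                    # sample rate, Hz
n = 1 << 18                    # ~5.5 s capture -> FFT bin width ~0.18 Hz
t = np.arange(n) / fs
f0 = 1_000.0                   # test-tone frequency, Hz

# Low-frequency phase error applied to the pure tone (illustrative values):
phase_err = 3.0 * np.sin(2 * np.pi * 2.0 * t)

clean = np.sin(2 * np.pi * f0 * t)
wobbly = np.sin(2 * np.pi * f0 * t + phase_err)

def width_bins(x, thresh_db=-20.0):
    """Count FFT bins within thresh_db of the spectral peak (a crude 'linewidth')."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    db = 20 * np.log10(spec / spec.max() + 1e-300)
    return int((db > thresh_db).sum())

print(width_bins(clean), width_bins(wobbly))  # the phase-modulated tone's peak is much wider
```

The clean tone occupies only the window's mainlobe (a few bins); once low-frequency phase error is added, energy spreads into sidebands around 1 kHz, exactly the spreading described above.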

 

Hopefully this makes more sense; (2) is proposed as a way for folks who have a spectrum analyzer to measure the effects of clock jitter on the analogue output.

 

3) I'm not being coy here: if the linewidth of a pure tone does not measurably widen with a high-quality measurement -- e.g. not single-Hz increments but, let's say, 0.1 Hz or better -- then jitter at the DAC clock is not having an appreciable effect on the analogue output. I don't know that 0.1 Hz increments are the specific resolution needed, but basically, if the 0.1 Hz phase error is not significant, your clock is doing a great job ;)

 

I think this is very misguided without some reason to believe such close-in differences are a problem.

 

Lots of ways to think about this. A one degree C difference in temperature changes the speed of sound enough that it alters the relationship between direct and reflected sound to a level many times greater than a small perturbation of clock jitter at a 0.1 Hz offset, or even 1 Hz. This is just barking up the wrong tree.

 

You don't know what you are trying to achieve. Is it -100 dBc/Hz at a 10 Hz offset, or -100 dBc per tenth-Hz at a 0.1 Hz offset? And why, other than "better must be better"?

9 hours ago, jabbr said:

 

Yeah, ok, that distills a lot of math and physics down -- the clock speeding up and slowing down isn't "white noise" random, but rather "pink" or 1/f random; that is to say, the speeding up and slowing down is greater for lower frequencies (at the same base clock frequency).
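The 1/f character can be sketched by shaping white noise in the frequency domain (a minimal illustration assuming NumPy; the band choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16

# Shape white noise into 1/f ("pink") noise: scale each frequency component
# so that power falls as 1/f.
white = rng.standard_normal(n)
spec = np.fft.rfft(white)
f = np.fft.rfftfreq(n)
spec[1:] /= np.sqrt(f[1:])          # amplitude ~ 1/sqrt(f)  ->  power ~ 1/f
spec[0] = 0.0                       # drop the DC term
pink = np.fft.irfft(spec, n)

# Compare average power in a low band vs an equally wide high band:
p = np.abs(np.fft.rfft(pink)) ** 2
low = p[1:101].mean()
high = p[10_001:10_101].mean()
print(10 * np.log10(low / high))    # strongly positive: far more low-frequency power
```

This is the same qualitative shape as the close-in region of a clock's phase noise plot: the lower the offset frequency, the larger the fluctuation.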

 

 

People have focused on the baseline phase noise, which is indeed low for "competent DACs". If you look at the close-in phase noise it is drastically higher -- how many DACs report their 0.1 Hz phase error? What dB level would you expect the limit of audibility to be? Hint: I bet the vast majority of DACs would exceed any reasonable number you might propose.

 

 

This is slightly different: if you were to plot phase error against offset frequency, a turntable and a clock oscillator would have different curves -- and what I think we are hearing when there is "wow and flutter" is a particular resonance due to the physics of the turntable. Where they are the same is that both will generally have increasing phase error at close-in offsets! Some clocks can have resonances, etc.; I am describing the general appearance of phase error plots of clock oscillators (as demonstrated by Crystek and others).

 

In any case, yes, the degree to which this is audible, or the limits of audibility, are not yet known (at least by me).

Here is the type of plot you are discussing, from a couple of pedestrian pieces of gear: a 64k FFT of a 12 kHz tone with 48 kHz sampling. First, because the FFT bin width is not 1 Hz, you adjust by reducing the number by 1.3 dB for 1 Hz equivalence. This is also a voltage measurement, so for power relative to the carrier you divide these numbers in half. The dotted line at a 10.3 Hz offset would give just short of -45 dBc/Hz. The inserted result at 1.5 Hz is just short of -40 dBc/Hz. Now, the measuring ADC is not an AP unit. Its specs say it has about half the jitter of the DAC. Suggestions are that your measuring device be 10 dB better, which this one isn't, so some of that is the ADC.
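The bin-width correction mentioned above is simple arithmetic; a quick sketch (ignoring any window equivalent-noise-bandwidth correction, which the post does not mention):

```python
import math

fs = 48_000          # sample rate, Hz
n_fft = 65_536       # 64k FFT, as described

bin_width = fs / n_fft                        # Hz per FFT bin
correction_db = 10 * math.log10(bin_width)    # normalize bin power to a 1 Hz bandwidth

print(f"bin width = {bin_width:.4f} Hz")                      # 0.7324 Hz
print(f"1 Hz-equivalence correction = {correction_db:+.2f} dB")  # about -1.35 dB
```

Each FFT bin is ~0.73 Hz wide, so the per-bin level is about 1.35 dB below the equivalent 1 Hz-bandwidth level, matching the ~1.3 dB adjustment in the post.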

 

Does the Regen make a significant audible difference that somehow results in the same measurement on such a device, or slightly worse? So the improvements are all at something less than 1.5 Hz and yet very audible? Seems like a real reach to me.

[Attached image: phasenoise1.png -- phase noise plot of the 12 kHz tone]

 

24 minutes ago, jabbr said:

 

Are you kidding? First, what is your x-axis and what is your y-axis? You aren't even doing the test I proposed, aside from the ~40 dBc -- that's terrible. I certainly hope it's the ADC, otherwise the DAC is really bad. The test I proposed, assuming you are doing the second test, would have a single peak at 12 kHz with a measurable width. I've suggested that you do 0.1 Hz increments, but good spectrum analyzers from the 1980s do 640 microHz increments ...

 

Do you see the specs of the Crystek clock? That's a $20 part. You have no way to get close to measuring it.

This is what I described. And no, the Crystek graph has no single peak; it actually starts at 10 Hz from the carrier. I lined my graph up so the edge is aligned with the peak, so you are seeing the upper sideband. I am not measuring a 100 MHz clock; I am measuring the 12 kHz tone of a DAC. No, I don't have 1 microhertz resolution. If I did, the 1 Hz-wide level would measure somewhere close to the same.

 

The axes are clearly there. The y-axis is 0 to -180 dB (voltage). The x-axis is in hertz, in this case spanning 12 kHz to 12.5 kHz.

 

Now, part of the reason I posted this is to show you are comparing apples and oranges in one sense. Yet if you wished to do comparisons this way, it would be valid as far as it goes. Same idea, different frequencies. Lower jitter will net better results on such a graph -- though not all types of jitter, mind you, which is another problem with the assumption that close-in jitter is of large importance.

7 minutes ago, mansr said:

I was fooling around with an old Cambridge Audio AVR recently. As it turned out, feeding it an unexpected, but in-spec, S/PDIF signal made it emit smoke.

I have had that happen with tweeters before in some of my test-to-destruction activities. Unintentional destruction, but the same result.

5 minutes ago, jabbr said:

 

Ah ok ;) I misinterpreted what you were doing ...

 

Yes, lower close-in phase error will give a narrower peak. This is math. I know this is a new concept that doesn't appear to be widely described, but I am very confident of the underlying math.

 

 

I don't know that it is new.  Perhaps the importance of how narrow the central peak is has been overlooked or considered unimportant. 

 

On a graph like this, where would you wish to measure it: 1 Hz either side of the peak, 10 Hz? What level is telling, in your opinion? With MATLAB one can measure with about any size FFT you desire. You have to be really careful with relative clock timing to even get what I have shown you here.

 

[Attached image: phasenoise2.png -- second phase noise plot]

3 minutes ago, firedog said:

No, according to him it was a blind test

He said it was marked G and M or something like that. While it's true he didn't know what those markings meant, it wasn't really even single-blind. I have seen this too many times. Even the color of a cable, as long as you can see it, causes people to latch onto perceived differences and place them on one versus the other. Then you simply can't get those out of your head. No human can.

 

From what I have seen of other people and myself, you could mark two completely identical boxes that way and pretty much could not prevent yourself from hearing a difference. Most particularly when you are auditioning to see if they sound different.

Just now, firedog said:

Go back and read his longer explanation in a different post and I think you will see it was blind. He also has repeatedly referred to it as blind, and I don't think he would do that if it wasn't. 

Awfully long thread at this point. Can you point to it? Or perhaps more simply, Jud can tell us how it was done. I trust he is telling the truth, and I may have misremembered his description.


You fellows measuring this:

 

What do you do about differential clock speeds between the DUT and measuring device?

 

It makes a difference, more so as you get closer in to the target frequency. What I have been doing to get cleaner, more consistent results is, rather than use a 12 kHz tone, use an 11,999.85 Hz to 12,000.15 Hz sweep over 4 minutes. Then do no averaging on the FFT. I find the spot where the measuring device reads exactly 12 kHz, or as close as possible. I have been using 128k FFTs. I suppose with 1 million, or certainly 32 million, this makes less difference.
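The bin-alignment issue the sweep works around can be sketched numerically (assumptions: NumPy, a rectangular window, no averaging; the 14-bin offset is just an illustrative close-in point):

```python
import numpy as np

fs = 48_000
n = 1 << 17                    # 128k FFT, as in the post
bin_width = fs / n             # ~0.366 Hz per bin
t = np.arange(n) / fs

def level_near_peak(f0, offset_bins=14):
    """Spectrum level (dB relative to the peak) a few bins above the tone,
    rectangular window, single FFT."""
    x = np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(x))
    db = 20 * np.log10(spec / spec.max() + 1e-300)
    k = int(round(f0 / bin_width))
    return float(db[k + offset_bins])

on_bin = level_near_peak(12_000.0)                     # 12 kHz lands exactly on a bin here
off_bin = level_near_peak(12_000.0 + bin_width / 2)    # tone splits two bins evenly

print(on_bin, off_bin)  # the off-bin tone leaks far more energy near the carrier
```

When the recorded tone splits two bins, spectral leakage raises the apparent level tens of dB close to the carrier, which is exactly why a small DUT/ADC clock difference can obscure close-in comparisons.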

 

As an example, the red trace is a 12 kHz tone, but a clock difference causes the ADC to record it at a higher frequency, which happens to split the FFT bins almost evenly. In the green trace I used a very slow sweep and found the spot that was closest to exactly 12 kHz for the ADC. This results in a more than 20 dB difference at a 5.1 Hz offset. This detail will obscure close-in differences.

[Attached image: dualclosein.png -- close-in comparison of red (off-bin) and green (aligned) traces]

 

The second image is zoomed out where they look virtually identical. 

[Attached image: dualclosein2.png -- same traces, zoomed out]

6 hours ago, jabbr said:

 

 

Not saying that it is ... I am saying that regardless of the phase error of the clock, the linewidth is measurable. If the linewidth is 0.8 Hz then you need 0.1 Hz increments to measure it reasonably. Should we try to improve on that? I have no idea. Now suppose I used 3 different power supplies and got 0.8, 5 and 10 Hz; would that be audible?

 

Note the linewidth is being defined as the width at 50% height. There's also the skirt which IIRC is a combination of "flicker" and "shot" noise.

It seems to me that 50% height in these audio plots is problematic: what that height is can be affected by the baseline noise floor. A better suggestion would be to compare levels relative to the carrier tone at some standard offset, like say 5 Hz, or to compare width in Hz at some arbitrary level down from the carrier tone, like maybe -80 dB.

 

I will note that my opinion, based on masking effects, is that these concerns are overblown. I think that, this close in, with current gear, none of this is making an audible difference. While I don't oppose better performance, I don't think improving this will net a change in sound quality.

Just now, jabbr said:

50% height is standard. With a Gaussian peak it is the point of maximum slope. That means that slight errors in placing the exact vertical level will have the least effect on the measured horizontal width.

 

In any case this is only an approximation of the "jitter" because other factors such as baseline noise will (slightly) affect the results.

 

I know folks here are still digesting this because it doesn't seem to be widely discussed in the audio field but this is how it's done in the other signals fields where this comes from. 

 

The unique situation we have is that the test signal is generated in the digital domain and the output measured in the analogue domain, so we have a true end-to-end measurement of the DAC + transport/interface.

In the context of the audio signals here, how are you defining 50% height?

1 minute ago, jabbr said:

Peak to baseline. This tends to underestimate the medium-range phase error (shot noise), and there are many other ways to measure curves, e.g. area under the curve, etc.

Sorry, I should have been more specific in my question. So if the peak-to-baseline noise is, say, -140 dB, you measure at -70 dB? Or are you referring to 50% signal strength, which is the point where the sidebands are -6 dB from peak? I would think the latter.

1 hour ago, pkane2001 said:

This measure is called full width at half maximum (FWHM) and is a very convenient way to measure the width of a Gaussian-like function, independent of the peak value. For a Gaussian, this measure is proportional to the standard deviation. There are some other functions that might be more applicable for this particular measurement, for example some with much longer tails.
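The proportionality mentioned above is FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma for a Gaussian; a quick numerical check (assuming NumPy, with an arbitrary sigma of 2):

```python
import math
import numpy as np

sigma = 2.0
x = np.linspace(-20, 20, 400_001)           # fine grid, spacing 1e-4
g = np.exp(-x**2 / (2 * sigma**2))          # unit-height Gaussian

# Full width at half maximum, measured directly from the curve:
above = x[g >= 0.5]
fwhm = above[-1] - above[0]

print(fwhm, 2 * math.sqrt(2 * math.log(2)) * sigma)  # both ~4.7096
```

So for a Gaussian-shaped spectral peak, measuring the width at -6 dB (half voltage) or at half power pins down the underlying spread up to a known constant.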

Ah, that term I recognize from astronomy. You might use it to characterize 'seeing' conditions based on how jittery the atmosphere is making images.

 

To use it similarly for timing jitter you would want the width at -6 dB.

 

I think jabbr would find it more useful in these cases to pick a standard offset like 5 Hz, or a width at a standard level like -60 or -80 dB from peak, for his purposes.

17 minutes ago, firedog said:

Terrible analogy. Leeches are used today in some of the most advanced medical centers in the world. Several therapies using leeches are accepted today:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3757849/

 

Try again.

All medical centers in the world use USB.  :)

This topic is now closed to further replies.


