
Looking for a treatise on jitter relationship to stereo imaging



 

This is an interesting statement in the article, not specific to stereo imaging but on what causes the worst perceived jitter.

 

" Recently phase noise close to the carrier (‘close in noise’) below about 100 Hz turned out to be the most miserable!"

 

Would I be interpreting this correctly that if the D-to-A clock is in a less-than-perfect environment, causing variation of the clock frequency in the range of 0–100 Hz, that will cause the most objectionable jitter?

Regards,

Dave

 

Audio system

This is an interesting statement in the article, not specific to stereo imaging but on what causes the worst perceived jitter.

 

" Recently phase noise close to the carrier (‘close in noise’) below about 100 Hz turned out to be the most miserable!"

 

Would I be interpreting this correctly that if the D-to-A clock is in a less-than-perfect environment, causing variation of the clock frequency in the range of 0–100 Hz, that will cause the most objectionable jitter?

 

Here's another description: http://www.vectron.com/products/literature_library/phase_noise.pdf

 

Close-in phase noise means noise at small offsets from the carrier or clock frequency, but yes, power-supply fluctuations and vibrations are important to minimize.

Custom room treatments for headphone users.

This is an interesting statement in the article, not specific to stereo imaging but on what causes the worst perceived jitter.

 

" Recently phase noise close to the carrier (‘close in noise’) below about 100 Hz turned out to be the most miserable!"

 

Would I be interpreting this correctly that if the D-to-A clock is in a less-than-perfect environment, causing variation of the clock frequency in the range of 0–100 Hz, that will cause the most objectionable jitter?

 

Well that is a perceptive question on your part.

 

If we were doing a jitter test at a 48 kHz sample rate, we might use a 12 kHz tone. Were the clock perfect, you would get the 12 kHz tone and nothing else. However, suppose it is being jittered at 100 Hz. This means samples are taken too soon, too soon, too soon... followed by too late, too late, too late. The frequency at which the sample clock is modulated is 100 Hz. The resulting signal would have the 12 kHz tone plus spurs at 12,100 Hz and 11,900 Hz. Being so close to the 12 kHz tone, they would be masked by it unless the jitter was at a very high level.

 

Now imagine our 12 kHz tone is being jittered at a 3 kHz rate, rather far-out jitter. You will get the 12 kHz tone plus spurs of energy at 15 kHz and 9 kHz. Being further from the tone, these will be less masked by it, and will become audible at levels much lower than the 100 Hz jitter.
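The sideband arithmetic above can be checked numerically. Here is a minimal sketch that phase-modulates a 12 kHz tone with a 100 Hz sinusoidal timing error and inspects the FFT; the 10 ns peak timing error is a deliberately exaggerated assumption so the spurs are easy to see, orders of magnitude worse than any real clock:

```python
import numpy as np

fs = 48_000        # sample rate (Hz)
f0 = 12_000        # test tone (Hz)
fj = 100           # jitter (clock-modulation) frequency (Hz)
tj = 10e-9         # 10 ns peak timing error -- wildly exaggerated for visibility

n = np.arange(fs)  # one second of samples, so FFT bins are spaced 1 Hz apart
# each sample instant is displaced by a 100 Hz sinusoidal timing error
t = n / fs + tj * np.sin(2 * np.pi * fj * n / fs)
x = np.sin(2 * np.pi * f0 * t)

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12)
spec -= spec[f0]   # normalise the 12 kHz tone to 0 dB

# spurs show up exactly at f0 - fj and f0 + fj, well below the carrier
for f in (f0 - fj, f0, f0 + fj):
    print(f"{f} Hz: {spec[f]:.1f} dB")
```

The spurs land at 11,900 and 12,100 Hz as described; shrink `tj` toward realistic picosecond or femtosecond values and they sink far below any plausible noise floor.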

 

So close-in jitter is the hardest to eliminate from a clock, but also, in general, the least audible.

 

Yes, I know I simplified things somewhat, but it makes the proper point in my opinion.

 

Here is an example of a low close-in jitter result. The other tones you see may be jitter as well, but notice that the central spike is narrow and sharp on the central tone. That is because the close-in jitter is relatively low.

[Attachment: jitter spectrum of a TC Impact Twin FireWire S/PDIF interface, showing a narrow, sharp central tone]

 

And here is a higher close-in jitter result. The far-out jitter tones are still there, but notice how the central tone is not so sharp; it has widened considerably in the few hundred hertz on each side of the test tone. This is from poor control of close-in jitter. In this case it was the same piece of FireWire-connected gear with two different power supplies.

[Attachment: zoomed jitter spectrum of the same interface with the other power supply, showing a broadened central tone]

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 


So close-in jitter is the hardest to eliminate from a clock, but also, in general, the least audible.

 

Why do you say "least audible"?

 

Rutgers, a widely acknowledged expert who publishes not only his data but also the circuits used to measure it, says the opposite: that close-in jitter is the most audible. (It certainly is the hardest to eliminate.)

Why do you say "least audible"?

 

Rutgers, a widely acknowledged expert who publishes not only his data but also the circuits used to measure it, says the opposite: that close-in jitter is the most audible. (It certainly is the hardest to eliminate.)

 

Because of masking effects. Does he say it is most audible because it is most in evidence in most clocks? I might agree with that, though there are still questions about what level is audible. There is little evidence beyond the anecdotal that the levels typical of good gear are easily heard, or heard at all.


Regrettably my hearing ends at about 12 kHz, and most of the content in music is in the 50 Hz to 3 kHz range. So let's say I'm listening to an instrument whose fundamental tone is 200 Hz, with simplified overtones at 400 and 800 Hz. Does that 100 Hz phase jitter mean playback will sound like my instrument has two faint, out-of-tune accompaniments with fundamentals of 100 and 300 Hz?

Regards,

Dave

 


I've read all the posts in this thread to better understand the mechanisms at play when jitter deteriorates a signal. I am not an engineer, just an enthusiast who likes to understand nature, and I'm not sure I know any more than I did before.

 

Dave Hill of Crane Song Ltd. has some discussion of jitter, with test tracks, at cranesong.com. Please go to the site and click 'jitter page'.

 

Is this relevant to the discussion here?

 

Best,

Andrew Bacon

'if it aint broke take it apart and find out why'

The Crystek CCHD-575 is a very nice part for the price (~$20). I wouldn't describe it as state of the art for a DAC, but expect to see it in decent boards.

 

Actually, at 500 pieces they are just $9.60 each. :) And Crystek confirms that they are better in all respects than their physically larger, earlier-generation CCHD-957, which you do see in a lot of top DACs.

 

As for spending a lot more on really custom clocks, I think that until the many other sources of significant jitter in most designs are addressed (all that lovely CMOS switching), nobody is really going to get much benefit from overspending on uber-fancy clocks. It is all a balance. But you knew that already. :)

Regrettably my hearing ends at about 12 kHz, and most of the content in music is in the 50 Hz to 3 kHz range. So let's say I'm listening to an instrument whose fundamental tone is 200 Hz, with simplified overtones at 400 and 800 Hz. Does that 100 Hz phase jitter mean playback will sound like my instrument has two faint, out-of-tune accompaniments with fundamentals of 100 and 300 Hz?

 

Yes, it would. However, a given amount of jitter causes higher-level sidebands at higher signal frequencies. So at 12 kHz that side tone, with decent gear, might be at -100 dB; it will be far lower in level at frequencies below 1 kHz.

 

So could you hear a difference between a clean 200 Hz tone at a moderate listening level and one with 100 and 300 Hz sidebands at -140 dB? The obvious answer is that you would never know they were there. That is below the basic noise floor of your gear, not to mention your listening room, and below the basic threshold of human hearing.
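The frequency scaling being described can be made concrete with the narrowband phase-modulation approximation: each sideband sits at roughly 20·log10(β/2) relative to the carrier, where β = 2π·f·tj is the peak phase deviation. A quick sketch, where the 1 ns peak jitter is an assumed, deliberately poor figure rather than anything measured:

```python
import math

def sideband_dbc(f_signal_hz, tj_peak_s):
    """Each jitter sideband relative to the carrier (dBc), for sinusoidal
    jitter with peak timing error tj_peak_s on a tone at f_signal_hz,
    using the small-angle (narrowband PM) approximation."""
    beta = 2 * math.pi * f_signal_hz * tj_peak_s  # peak phase deviation (rad)
    return 20 * math.log10(beta / 2)

tj = 1e-9  # assume 1 ns peak jitter -- deliberately poor, for illustration
print(f"12 kHz tone: {sideband_dbc(12_000, tj):.1f} dBc")   # ≈ -88.5 dBc
print(f"200 Hz tone: {sideband_dbc(200, tj):.1f} dBc")      # ≈ -124.0 dBc
```

The same timing error lands about 36 dB lower (20·log10(12000/200)) on a 200 Hz tone than on a 12 kHz tone, which is the scaling being described above.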

Actually, at 500 pieces they are just $9.60 each. :) And Crystek confirms that they are better in all respects than their physically larger, earlier-generation CCHD-957, which you do see in a lot of top DACs.

 

As for spending a lot more on really custom clocks, I think that until the many other sources of significant jitter in most designs are addressed (all that lovely CMOS switching), nobody is really going to get much benefit from overspending on uber-fancy clocks. It is all a balance. But you knew that already. :)

 

At those prices, an even better reason to use it then. :) I'd say uber-expensive clocks are in the $5k+ range; MSB etc. seem to favor these.

 

I think it's great that we are having this conversation. I do think having a really good clock right next to the DAC is not unreasonable. Kind of interesting that more folks are concerned with what type of cables they have than with what type of clock. At the same time we can focus on firewalling the DAC from external sources of noise.

 

With DSD, for example, the BCLK phase noise can be measured to see exactly how much the external circuitry is affecting it. That would be a great measurement for John to do with and without your circuitry, for example. You can look at it with various USB-to-I2S boards, with and without the @iancanada isolation chain, etc.

At those prices, an even better reason to use it then. :) I'd say uber-expensive clocks are in the $5k+ range; MSB etc. seem to favor these.

 

I think it's great that we are having this conversation. I do think having a really good clock right next to the DAC is not unreasonable. Kind of interesting that more folks are concerned with what type of cables they have than with what type of clock. At the same time we can focus on firewalling the DAC from external sources of noise.

 

With DSD, for example, the BCLK phase noise can be measured to see exactly how much the external circuitry is affecting it. That would be a great measurement for John to do with and without your circuitry, for example. You can look at it with various USB-to-I2S boards, with and without the @iancanada isolation chain, etc.

 

I looked at those articles and I did not see a detailed implementation of a phase-noise measuring circuit, just a simple block diagram of a standard analog sideband system. You can do much better than this by using the cross-correlation variant, but those are difficult to make work right and a REAL pain in the neck to use.

 

I'm working on my own phase-noise measuring system that will be MUCH easier to use and gives very good results for not too much money. I did not invent it; it's based on a design from someone else that has been released into the public domain. It should easily be able to reach a noise floor of -140 dBc at 10 Hz, which should be good enough to do some real measuring on good audio circuits.

 

I don't want to discuss the details until I get it up and running. I have to squeeze the work in between everything else I am doing, so it might be a little while before I get it working.

 

Once this is up and running I certainly DO plan on doing quite a bit of testing.

 

One very important aspect of digital audio clocking is that the jitter that matters is NOT the jitter at the oscillator, but the jitter INSIDE the DAC chip. This is a combination of the jitter at the oscillator, the jitter at the clock receiver in the DAC chip (which can be significantly affected by what is happening on the board between oscillator and DAC chip), AND the clock network inside the chip.

 

My experience so far has been that getting the OSCILLATOR down to extremely low jitter is far less important than decreasing the jitter added by the board and the DAC chip itself. This is why I spend a large amount of effort making sure the signal from the oscillator is degraded as little as possible by the board. We can't change the clock network inside the DAC chip, but we can decrease the external factors that contribute to the jitter increase inside the chip.

 

My experimentation has shown that these things are more important than getting that last little bit of performance out of the oscillator itself.

 

John S.

I looked at those articles and I did not see a detailed implementation of a phase noise measuring circuit,

 

The measuring circuit isn't in those articles; it's here: https://www.by-rutgers.nl/PDFiles/DC-receiver.pdf

 

 

 

 

One very important aspect of digital audio clocking is that the jitter that matters is NOT the jitter at the oscillator, but the jitter INSIDE the DAC chip. This is a combination of the jitter at the oscillator, the jitter at the clock receiver in the DAC chip (which can be significantly affected by what is happening on the board between oscillator and DAC chip), AND the clock network inside the chip.

 

My experience so far has been that getting the OSCILLATOR down to extremely low jitter is far less important than decreasing the jitter added by the board and the DAC chip itself. This is why I spend a large amount of effort making sure the signal from the oscillator is degraded as little as possible by the board. We can't change the clock network inside the DAC chip, but we can decrease the external factors that contribute to the jitter increase inside the chip.

 

My experimentation has shown that these things are more important than getting that last little bit of performance out of the oscillator itself.

 

Certainly, and that's why I suggested measuring the DSD BCLK signal. In discrete DSD DACs there isn't a 'DAC chip' to worry about, but the circuit itself can be analyzed. If you look at Jussi's DSC1, there are ways to improve the jitter specs, such as implementing a clock-distribution scheme and adjusting the clock path lengths between the shift-register chips.

 

W.r.t. the oscillator: that sets the achievable noise floor, and using a back-of-the-cuff 100 fs = -130 dB @ 10 Hz, it should be obvious that pin-to-pin skew errors, impedance mismatches, etc. can have a real effect. So, as you are saying: you can look at the entire circuit and select an appropriate clock.
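For anyone wanting to connect a phase-noise plot to an RMS jitter figure, the usual approach is to integrate the single-sideband density L(f) over the offset range of interest. A rough sketch with made-up plot values; the clock frequency and dBc/Hz levels here are assumptions for illustration, not measurements of any product:

```python
import math

def rms_jitter_s(f_offsets_hz, l_dbc_hz, f_clock_hz):
    """Approximate RMS jitter from a single-sideband phase-noise plot
    L(f) (in dBc/Hz) by trapezoidal integration over the given offsets."""
    s = [10 ** (l / 10) for l in l_dbc_hz]           # dBc/Hz -> linear density
    area = sum((s[i] + s[i + 1]) / 2 * (f_offsets_hz[i + 1] - f_offsets_hz[i])
               for i in range(len(s) - 1))
    phi_rms = math.sqrt(2 * area)                    # rad; factor 2 for both sidebands
    return phi_rms / (2 * math.pi * f_clock_hz)

# hypothetical plot for a 24.576 MHz audio clock -- made-up numbers, not a datasheet
offsets = [10, 100, 1_000, 10_000, 100_000]          # Hz offset from carrier
levels = [-100, -130, -145, -155, -160]              # dBc/Hz

print(f"{rms_jitter_s(offsets, levels, 24.576e6) * 1e15:.0f} fs")
```

With these made-up numbers the 10-100 Hz segment contributes nearly all of the integrated jitter, which illustrates why the close-in part of the plot gets so much attention.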

Yes, it would. However, a given amount of jitter causes higher-level sidebands at higher signal frequencies. So at 12 kHz that side tone, with decent gear, might be at -100 dB; it will be far lower in level at frequencies below 1 kHz.

 

So could you hear a difference between a clean 200 Hz tone at a moderate listening level and one with 100 and 300 Hz sidebands at -140 dB? The obvious answer is that you would never know they were there. That is below the basic noise floor of your gear, not to mention your listening room, and below the basic threshold of human hearing.

 

If you look at the phase-noise curves of pretty much every good clock, the noise rises with decreasing frequency, i.e. it is higher at 1 Hz than at 10 Hz, higher at 10 Hz than at 100 Hz, and may level off at, say, 1 kHz. So looking at 1 Hz, which will have the highest noise as well as being the most 'close-in': do you think the ear can distinguish a separate tone there? No. A simple linear analysis just won't work; this is not simply about human hearing thresholds for tones.

 

I don't know what the actual threshold for phase error at 1 Hz is. Do you? Can you show me a paper? If not, there is no 'obvious answer'.

 

That said, 100 dB down at 1 Hz is very good; I'd be very, very happy with 120 dB down at 1 Hz. It would be great to actually know what the empirically derived limit is.


Two things would be quite helpful, to me at least: (1) "Learning tracks," a series of the same song (preferably something sparsely produced, acoustic guitar and vocal) with increasing amounts of (perhaps simulated) jitter, clearly labeled; and (2) accompanying these, a tech explanation of what types of distortion it causes, along with an English language explanation of what to listen for.

 

I'd really like to train myself to recognize jitter, and this seems like a good way to do it. Based on papers I've read, it will likely take at least several weeks if not longer, but I'd like to give it a try.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

If you look at the phase-noise curves of pretty much every good clock, the noise rises with decreasing frequency, i.e. it is higher at 1 Hz than at 10 Hz, higher at 10 Hz than at 100 Hz, and may level off at, say, 1 kHz. So looking at 1 Hz, which will have the highest noise as well as being the most 'close-in': do you think the ear can distinguish a separate tone there? No. A simple linear analysis just won't work; this is not simply about human hearing thresholds for tones.

 

I don't know what the actual threshold for phase error at 1 Hz is. Do you? Can you show me a paper? If not, there is no 'obvious answer'.

 

That said, 100 dB down at 1 Hz is very good; I'd be very, very happy with 120 dB down at 1 Hz. It would be great to actually know what the empirically derived limit is.

 

Well, this doesn't really contradict what I said. A given amount of jitter is less audible at lower frequencies. Lucky, then, that it is at lower frequencies where jitter is worst.

 

As there are a few variations on jitter, no one answer fits all. However, you ask about 1 Hz jitter. All indications from the few tests done are that jitter is perceived at a lower threshold on test tones than on music, just like virtually all other artefacts, contrary to audiophile suppositions.

 

We could simulate your 1 Hz jitter with a tone easily enough, then see how much is audible. Of course, we don't have to do that, because yes, masking curves are directly pertinent to that situation.

 

The multi-generation test files I posted would seem somewhat pertinent as well. Given the results, they don't appear to be very audible, if audible at all. For that test, frequency response was really no issue. Cascading distortion would increase, as would noise (some of which was probably jitter-based) and jitter itself. Some types of jitter would add, so 8 times as much would have been there; other random or noise-like jitter would have added like noise, being some 9 dB higher in level than the original. Still, whatever jitter was present was a good deal higher than in the original file when people listened to it over their own gear. Yet it doesn't seem to cross whatever threshold there is to become audible.
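The "8 times as much" versus "added like noise, some 9 dB higher" distinction above is just the difference between amplitude-wise and power-wise summation. A two-line sanity check:

```python
import math

generations = 8

# perfectly correlated (deterministic) jitter adds in amplitude:
correlated_db = 20 * math.log10(generations)   # +18.1 dB after 8 passes

# uncorrelated (random) jitter adds in power, like independent noise sources:
random_db = 10 * math.log10(generations)       # +9.0 dB after 8 passes

print(f"correlated: +{correlated_db:.1f} dB, random: +{random_db:.1f} dB")
```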

Well, this doesn't really contradict what I said. A given amount of jitter is less audible at lower frequencies. Lucky, then, that it is at lower frequencies where jitter is worst.

 

As there are a few variations on jitter, no one answer fits all. However, you ask about 1 Hz jitter. All indications from the few tests done are that jitter is perceived at a lower threshold on test tones than on music, just like virtually all other artefacts, contrary to audiophile suppositions.

 

We could simulate your 1 Hz jitter with a tone easily enough, then see how much is audible. Of course, we don't have to do that, because yes, masking curves are directly pertinent to that situation.

 

The multi-generation test files I posted would seem somewhat pertinent as well. Given the results, they don't appear to be very audible, if audible at all. For that test, frequency response was really no issue. Cascading distortion would increase, as would noise (some of which was probably jitter-based) and jitter itself. Some types of jitter would add, so 8 times as much would have been there; other random or noise-like jitter would have added like noise, being some 9 dB higher in level than the original. Still, whatever jitter was present was a good deal higher than in the original file when people listened to it over their own gear. Yet it doesn't seem to cross whatever threshold there is to become audible.

 

I don't really see how what you say addresses the issue.

 

1) Have you actually measured the level of audibility of close-in phase noise?

2) Have you measured whether your files adequately model this, and at what levels?

3) Have you validated your file based model?

 

I understand when you say that the files don't sound very different, but how does that correlate with actually different clocks? Is this more than a thought experiment?

 

Again, you keep suggesting that close-in phase error is less audible, but that goes against what real experts say, so we need more.

I don't really see how what you say addresses the issue.

 

1) Have you actually measured the level of audibility of close-in phase noise?

2) Have you measured whether your files adequately model this, and at what levels?

3) Have you validated your file based model?

 

I understand when you say that the files don't sound very different, but how does that correlate with actually different clocks? Is this more than a thought experiment?

 

Again, you keep suggesting that close-in phase error is less audible, but that goes against what real experts say, so we need more.

 

Any kind of noise is less audible close to the signal. Noise from phase/timing issues is no different, unless you can bring new data to the issue.

 

My files are not a model. They are real files and must, by necessity, pick up additional jitter with each generation. If it were audible on playback of the original, it would be more so with additional generations, which will have increased jitter. If by some strange manner additional generations reduced jitter, then we would have a new, simple method to combat the issue, though one that I can't see as being possible.

 

So it isn't a thought experiment. It was actually carried out, and the files are available to be listened to by anyone, at least for the time being. If you missed it, here it is:

 

http://www.computeraudiophile.com/f8-general-forum/can-you-hear-16-times-distortion-time-domain-digital-sound-signal-human-hearing-very-sensitive-jitter-28433/

 

Which experts say close-in is more audible? It is usually a larger level of jitter close in with most clocks, but that does not automatically make it more audible.

 

So yes, I suggest phase noise, or any other noise, is less audible when it is very near in frequency to a larger, louder signal. That is what masking is, and it doesn't require new and special ideas. If you can show in testing that it is more audible than expected, then that would be interesting, and news.

So it isn't a thought experiment. It was actually carried out and the files are available to be listened to by anyone at least for the time being. If you missed it here it is:.......

Still whatever jitter was present was a good deal higher than the original file when people listened to it over their own gear.

Those files went through several stages of processing beforehand. None were pristine rips to start with, so you didn't really have an original file.

As I said originally, you were providing files of differing levels of mediocrity.

The fact remains that there were only 7 votes, and no definite conclusions were reached.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Any kind of noise is less audible close to the signal. Noise from phase/timing issues is no different, unless you can bring new data to the issue.

 

My files are not a model. They are real files and must, by necessity, pick up additional jitter with each generation. If it were audible on playback of the original, it would be more so with additional generations, which will have increased jitter. If by some strange manner additional generations reduced jitter, then we would have a new, simple method to combat the issue, though one that I can't see as being possible.

 

When did you start thinking that digital files contain "jitter"? Do you think you can recover the phase-noise plot from a file? Hint: of course you can't. On what basis do you say that all noise is the same, even when the different measurements have entirely different units? Phase noise is not measured in voltage, so you are conflating several issues.

When did you start thinking that digital files contain "jitter"? Do you think you can recover the phase-noise plot from a file? Hint: of course you can't. On what basis do you say that all noise is the same, even when the different measurements have entirely different units? Phase noise is not measured in voltage, so you are conflating several issues.

 

Of course digital files contain jitter. An ADC has some jitter, and that jitter gets embedded in the file. Now, copying digital to digital adds no jitter, but that wasn't what was done. A DAC, which has jitter, plays a file; it gets recorded with an ADC, which has jitter, and the jitter from both is embedded in the copy of the file. This copy, with now increased embedded jitter, gets played a second time; the DAC adds yet more jitter to the analog result, which gets recorded by an ADC that adds more jitter, and the jitter of these operations gets embedded in the resulting file. Wash, rinse, repeat.

 

Jitter is actually more similar to flutter from tape machines. The available info on flutter also shows less audibility close in versus further out; the curves for that and for plain masking are very similar. One reason I used a solo piano recording in the 8th-generation files is that piano is known for making flutter (and presumably jitter) more easily audible than most musical instruments.

Those files went through several stages of processing beforehand. None were pristine rips to start with, so you didn't really have an original file.

As I said originally, you were providing files of differing levels of mediocrity.

The fact remains that there were only 7 votes, and no definite conclusions were reached.

 

To be clear, for anyone who doesn't know what SandyK is talking about when he says several stages of processing: SandyK believes digitally moving or copying a file is a processing stage which changes the sound even when nothing else is changed, including none of the bits. That is something the great majority of the world would not agree with.

 

I did have an original bit-perfect portion of the file. I removed a few bits, as there were actually two files: I simply chopped off the first tiny fraction of a second so they couldn't be compared on file size alone. The bits of the signal were unaltered thereafter. SandyK also thinks that changes the sound of the remaining 1.3 million bits in that file, though they were unchanged.

 

There were only 7 votes, and about that many people who communicated that they heard no difference. While no definite conclusions could be reached, one that doesn't fit is that an 8th-generation copy is so degraded it sounds obviously inferior. Were that the case, all votes would have been correct, and they weren't.

Let's be real simple and look at the Wikipedia definition: https://en.m.wikipedia.org/wiki/Jitter

 

Files simply don't have jitter by any reasonable definition of jitter. Files contain a sequence of digital numbers; jitter involves time.

 

Jabbr, you are being intentionally obtuse; otherwise there would be no such thing as jitter in an ADC. Does the jitter occur in the file? No. But files that are recordings certainly have the recorded waveform altered by the effects of jitter, whether that jitter altered the waveform when it was captured by the ADC or the waveform is altered by the DAC upon playback. The result is the effects of jitter.

 

Otherwise you couldn't use a J-test for some measures of jitter. Typically you record and then analyze the recorded file: the changes from jitter upon playback have been recorded and can be analyzed.

 

Serially playing and recording the files will embed the effects of jitter, in increasing amounts, in the signal; those effects alter the waveform, and the altered waveform ends up recorded in the file, even though files do not record time directly. Do you disagree with this?

There were only 7 votes, and about that many people who communicated that they heard no difference. While no definite conclusions could be reached, one that doesn't fit is that an 8th-generation copy is so degraded it sounds obviously inferior. Were that the case, all votes would have been correct, and they weren't.

 

Given the remarks about the SQ of even the supposedly original file, G.I. = G.O.

You should have given recipients a totally unmodified .wav file as the original.

You are assuming that further processing can't cause further degradation, which is FAR from a scientific approach!!

 

Jabbr, you are being intentionally obtuse; otherwise there would be no such thing as jitter in an ADC. Does the jitter occur in the file? No. But files that are recordings certainly have the recorded waveform altered by the effects of jitter, whether that jitter altered the waveform when it was captured by the ADC or the waveform is altered by the DAC upon playback. The result is the effects of jitter.

 

Otherwise you couldn't use a J-test for some measures of jitter. Typically you record and then analyze the recorded file: the changes from jitter upon playback have been recorded and can be analyzed.

 

Serially playing and recording the files will embed the effects of jitter, in increasing amounts, in the signal; those effects alter the waveform, and the altered waveform ends up recorded in the file, even though files do not record time directly. Do you disagree with this?

 

Not trying to be obtuse, rather trying to be precise, and trying not to make, nor accept, assumptions. By "model" I mean that you are not directly measuring jitter, but rather something else that is affected by jitter. What types of changes in the file are caused by what types of jitter? I don't want people to get the idea that jitter is a property of a file or filesystem. Jitter as measured by an eye pattern in Ethernet transmission is *entirely different* from jitter in an ADC, the effects of which, yes, can be obvious in a recording.

 

Clearly there are many types of jitter, and I'm not clear that this measurement technique is capable of differentiating them. That's why I asked how the technique was validated, and for what types of data/jitter; I think that is a very reasonable question. The farther you get from the exact thing you are trying to measure, the more assumptions you make, and the more other factors you dismiss, the less confidence I have that your measurements are valid. But yes, I agree that, broadly, "jitter" in the ADC results in changes in the recording, and "jitter" in the DAC results in changes in the analog output. Serially playing and recording has the possibility of introducing interactions between the ADC and DAC clocks that are rather unlikely to be audible when just recording and playing back once. For example, the two clocks could have absolute frequency accuracies that differ by 1%, and this would be inaudible. I personally focus on minimizing close-in phase noise. YMMV.
