
Why I never got that perfect null in difference testing, part three


esldude

Timing in my compared files: how do I compare the timing of the files?

 

Clearly none were as much as one single sample off. I had figured out ways to align all the files to the single sample. I put one second of silence at the beginning and end of the test files, with a single full-scale sample in the middle of each stretch of silence. So just record a few seconds before and after, then align the signals in Audacity and chop off the ends with single-sample precision. When the clocks were locked no other obvious artifacts occurred; the files seemed perfectly aligned.
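For anyone wanting to automate that alignment step, here is a minimal Python sketch of the idea (not the exact steps I did by hand in Audacity). It finds the full-scale marker sample in the leading silence of each file and trims both so the markers line up. The function name and the assumption that the files are already loaded as mono numpy arrays are mine, not part of any particular tool.

```python
import numpy as np

def align_by_marker(a, b, fs=48000, search_s=5.0):
    """Trim two mono recordings so their full-scale marker samples line up.
    Assumes each recording has a single full-scale 'spike' sample somewhere
    in its leading silence (hypothetical helper, not the Audacity steps)."""
    n_search = int(fs * search_s)
    # The largest-magnitude sample in the leading stretch is the marker,
    # since everything around it is (near-)silence.
    ia = int(np.argmax(np.abs(a[:n_search])))
    ib = int(np.argmax(np.abs(b[:n_search])))
    a, b = a[ia:], b[ib:]              # start both files at their markers
    n = min(len(a), len(b))            # and cut them to a common length
    return a[:n], b[:n]

# The difference (null) signal is then simply aligned_a - aligned_b.
```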

 

But I remembered that inter-channel timing at 44.1 kHz/16 bit was good to microsecond discrimination, and 24 bit would resolve timing differences even smaller. That is far less than one sample period, which is more than 22 microseconds at 44.1 kHz. The waveform will be properly recovered, but it does differ if the timing shifts by even less than one sample period. Could this be preventing the perfect null? Sub-sample timing differences?

 

I then remembered something I noticed in my null tests that seemed odd. When using three tones, each one an octave higher than the one before, once I matched level precisely the residual nulls of those three tones differed by 6 dB. They had been the same original level, but in the difference files with levels matched they were an even 6 dB per octave apart. The lowest tone would null deepest, the one an octave up was 6 dB higher in level, and the one an octave above that another 6 dB higher. Could this be due to timing shifts?

 

I took the digital version of my three-tone signal, duplicated it, and inverted the second track. I zoomed in and deleted one sample at the beginning of only the second track; a single sample snipped off. I then mixed the two, which with the inversion subtracts one from the other, and got a fairly high residual. It was a null of only 16 dB depth, and that for the lowest tone; the next octave up was 6 dB higher and the third was another 6 dB higher. Just like I was seeing in the low-level residuals with matched levels. An interesting coincidence, I was thinking.
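Here is a small Python sketch of that experiment; the tone frequencies and the 48 kHz sample rate are just illustrative choices, not necessarily the exact ones I used. Each tone is subtracted from a copy of itself with one sample snipped off the front, and the residual climbs by roughly 6 dB per octave.

```python
import numpy as np

fs = 48000                      # sample rate (an assumption for this sketch)
t = np.arange(fs * 2) / fs      # two seconds of each tone

for f in (1000.0, 2000.0, 4000.0):      # octave-spaced tones, illustrative values
    tone = np.sin(2 * np.pi * f * t)
    shifted = tone[1:]                  # the copy with one sample snipped off the front
    diff = tone[:-1] - shifted          # original minus shifted copy
    depth = -20 * np.log10(np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(tone ** 2)))
    print(f"{f:6.0f} Hz  null depth {depth:5.1f} dB")
```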

 

So I created a 4th tone an octave lower. It nulled to 22 dB deep (6 dB deeper than the tone an octave above it), and the remaining tones nulled 6 dB less deeply with each octave up. Seemed like a steady relationship.

 

Next I oversampled to twice the sampling rate. Snipping off one sample of this signal would then be a timing shift of half as much. Using my original three-tone signal, when I snipped off one sample the tones nulled to 22 dB deep, and each octave higher was 6 dB less. I wondered how small a timing shift would still show the 6 dB per octave progression. I stopped at a software-based sample rate of 6.144 MHz, 128 times 48 kHz. That tiny slice of one sample still showed the 6 dB progression between octaves, with the null of the fundamental 42 dB deeper than the original for a total null of 58 dB depth. This would be a 163 nanosecond timing shift.
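The same behavior can be checked without actually resampling, by generating the second copy with a small time offset directly. A sketch, again with illustrative tones at an assumed 48 kHz, stepping the shift from a whole sample down to 1/128 of a sample:

```python
import numpy as np

fs = 48000
t = np.arange(fs * 2) / fs
tones = (1000.0, 2000.0, 4000.0)        # illustrative octave-spaced tones

# Shift the copy by 1, 1/2, and 1/128 of a sample period and watch the nulls deepen.
for divisor in (1, 2, 128):
    dt = 1.0 / (fs * divisor)           # timing shift in seconds
    depths = []
    for f in tones:
        a = np.sin(2 * np.pi * f * t)
        b = np.sin(2 * np.pi * f * (t + dt))     # same tone, shifted by dt
        diff = a - b
        depths.append(-20 * np.log10(np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(a ** 2))))
    print(f"shift 1/{divisor:<3} sample ({dt * 1e9:9.2f} ns): " +
          "  ".join(f"{d:5.1f} dB" for d in depths))
```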

 

It looked to me like this allowed one to detect timing shifts, if levels were matched, and to determine from the null depth what fraction of a sample period the timing shift amounted to. It appears to work. What were the timing shifts determined this way? In the couple-hundred-picosecond range up to a couple of nanoseconds, though it did vary from test run to test run.
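Working backwards is just as simple. Here is a sketch of estimating the timing shift from a measured null depth at a known tone frequency, assuming levels are matched and the residual is due to timing alone; the 1 kHz tone and 120 dB figure are only an example, and the function name is mine.

```python
import numpy as np

def shift_from_null(depth_db, freq_hz):
    """Estimate the timing shift (seconds) implied by a null depth_db dB deep
    at a tone of freq_hz, assuming levels are matched and the residual is
    caused by timing alone."""
    r = 10 ** (-depth_db / 20)              # residual amplitude relative to the tone
    return np.arcsin(r / 2) / (np.pi * freq_hz)

# Example: a 120 dB null on a 1 kHz tone implies a shift of roughly 160 picoseconds.
print(shift_from_null(120, 1000))
```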

 

In one of my test runs I did get excellent timing. It was a 60 second pair of files, and their timing happened to line up very well. Once levels were matched the difference ran within a few dB of the noise floor. When I checked parts of it there was a 5-6 second stretch that lined up near perfectly, to the resolution of my equipment. Nearly all that was visible was the noise floor, and amplified greatly there was only the whoosh of noise. Listening to the entire 60 second file amplified, you very, very faintly hear the signal, hear it fade into nothingness for a few seconds, and then return very faintly. As it happened, this was also two signals made with two different interconnects, not the same one. One was the Audioquest Diamond; the other was something I made using RG142 coax, which features pure silver shielding, Teflon insulation, and silver-plated steel alloy center conductors. The null depth of this run was more than 135 dB. Pretty close to practical perfection. Not quite, but awfully, awfully close.

 

Now, does this prove the interconnect matters not at all? No, but it provides good evidence in that direction. I don't have a way to vary or control timing more precisely. I am surprised this stepped test signal seems to work so well for detecting timing shifts of such a small amount. And the lower the noise floor, the smaller the shifts you could detect.

 

It does appear to answer my question of why I didn't get perfect nulls. The gain/sensitivity of DAC and ADC units varies a tiny bit over time, enough to prevent complete nulls. Then, even once you have adjusted for those tiny level shifts, very small timing shifts prevent quite getting the complete, practically perfect null of having only low-level noise show up. This appears to come very close to a complete explanation, without the wire used making a difference, at least in 1 to 1.5 meter lengths.

 

A few caveats, determined purely empirically. These relationships seem to hold only for frequencies one sixteenth of the sample rate or lower; above that the 6 dB steps are not quite an even 6 dB. That applies to timing shifts of one whole sample. For smaller shifts of one sixteenth of a sample period or less, it works for residuals at any frequency recorded. Also, if you have remaining level differences, the result depends on how their nulls compare with the timing-shift nulls. When the null levels from timing get close to those from level differences you still see some level difference between the octave-spaced tones, but it will be less than 6 dB. If both timing and level shifts are contributing equally, the difference per octave is about 3 dB. That is the tip-off that the residuals aren't purely due to timing.
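A quick way to see that last effect is to add a small level error on top of a timing shift and watch the octave steps shrink below 6 dB. A sketch with arbitrary example values of my own choosing (20 ns of shift and a gain mismatch picked to be comparable in size):

```python
import numpy as np

fs = 48000
t = np.arange(fs * 2) / fs
dt = 20e-9              # example timing shift of 20 ns
gain = 1.00025          # example leftover level mismatch, comparable to the timing residual

depths = []
for f in (1000.0, 2000.0, 4000.0):                # illustrative octave-spaced tones
    a = np.sin(2 * np.pi * f * t)
    b = gain * np.sin(2 * np.pi * f * (t + dt))   # shifted AND slightly louder copy
    diff = a - b
    depths.append(-20 * np.log10(np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(a ** 2))))

print("null depths:", [round(d, 1) for d in depths])
# With timing alone the steps would be a steady 6 dB; the level error flattens them.
print("octave steps:", [round(depths[i] - depths[i + 1], 1) for i in range(2)])
```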

 

I look forward to comments on this. It seems the ability to determine timing shifts this way should be useful in some other ways. Even if not, it seems pretty useful for difference testing if nothing else. Now, serial recording from DAC to ADC isn't the only form of difference testing, but it is rather more convenient and more amenable to adjustment to get additional precision and results of more use in many cases.

 

ADDENDUM:

 

I decided to leave the above as it was, since it shows how I was thinking through this. Tips from others of course made clear what should have been quite obvious: phase shifts between two otherwise identical or very nearly identical files will leave a residual signal, and the amount of the residual varies with frequency. For any given frequency the following formula works. It is copied from the DiffMaker help files, as suggested by Mitchco. I had also done this with vector addition, which works and gives the same answers; this formula is simpler to work through, however.

 

Once levels are matched, the level of the residual I got in my testing indicates that the phase shifts at different times varied between 200 picoseconds and a nanosecond, as I stated elsewhere. Pretty small timing shifts.

 

Phase or Time Sensitivity

 

The achievable depth (drop in Difference track energy, relative to the Reference track energy), at any frequency will be limited by the phase error of "theta" degrees existing between the Reference track and the Compared track at that frequency, and will be no better than

 

10 * log10(2 - 2*cos(theta))   [dB]
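In Python form, with theta computed in radians from the timing shift and frequency (the DiffMaker text states it in degrees), and the result expressed as a positive depth below the reference; the function name and the example frequencies are mine:

```python
import numpy as np

def best_null_db(freq_hz, shift_s):
    """Best achievable null for a tone of freq_hz when the two tracks differ
    by a time shift of shift_s seconds, returned as a positive depth in dB
    below the reference (the DiffMaker formula gives the same value with the
    opposite sign)."""
    theta = 2 * np.pi * freq_hz * shift_s       # phase error in radians
    return -10 * np.log10(2 - 2 * np.cos(theta))

# Each octave doubles theta, so for small shifts the depth falls about 6 dB per octave;
# 163 ns is the 1/128-sample shift mentioned above.
for f in (500, 1000, 2000, 4000):
    print(f, round(best_null_db(f, 163e-9), 1))
```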
