
Time resolution of digital sampling


Don Hills


On 10/13/2020 at 7:18 PM, Miska said:

Intersample overs happen, at least, when someone runs a "normalize" function on music at a 44.1k sampling rate. Since the actual sampling points rarely coincide with the waveform peaks, this results in values higher than 0 dBFS when the actual waveform is reconstructed. The higher the digital filter's oversampling factor, the more likely the reconstructed values are to reach the true peak.

 

Another common reason is RedBook content driven into clipping, which also seems to describe 90+% of modern content.
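The intersample-over mechanism described above can be reproduced numerically. A minimal sketch, assuming NumPy and SciPy are available; the fs/4 sine with a 45-degree phase offset is the textbook worst case, because every sample lands halfway between peaks:

```python
import numpy as np
from scipy.signal import resample

fs = 44100
n = 1024
t = np.arange(n) / fs
# fs/4 sine with a 45-degree phase offset: the sample points straddle every peak
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
x /= np.max(np.abs(x))           # "normalize": the highest *sample* is now 0 dBFS
x8 = resample(x, 8 * n)          # 8x FFT oversampling approximates reconstruction
peak_db = 20 * np.log10(np.max(np.abs(x8)))
print(f"reconstructed peak: +{peak_db:.2f} dBFS")
```

Every stored sample sits at exactly 0 dBFS, yet the reconstructed waveform peaks about +3 dB above full scale — the "+3 dB" figure discussed later in the thread.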

 

I haven't thought about this -- but what if you immediately convert to floating point in your math? I always use floating point, even floating-point .wav files. Therefore, worrying about clipping only happens when dropping to the ±1-range integer formats. Then you can worry about the -0.8 dB or -3 dB or whatever the fashion is today.

 

Am I right? (Truly, I haven't thought about it. Normally I know things.)

 

John

3 hours ago, John Dyson said:

I haven't thought about this -- but what if you immediately convert to floating point in your math? I always use floating point, even floating-point .wav files. Therefore, worrying about clipping only happens when dropping to the ±1-range integer formats. Then you can worry about the -0.8 dB or -3 dB or whatever the fashion is today.

 

DACs accept only integer-format data, and so do delivery containers such as RedBook or FLAC. You have strict and clear value-range boundaries.
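As a sketch of that boundary (hypothetical sample values, NumPy assumed): a float pipeline happily carries samples beyond ±1.0, but converting to a 16-bit integer container imposes the hard range.

```python
import numpy as np

# A float pipeline happily carries samples beyond +-1.0 ...
x = np.array([0.5, 0.99, 1.3, -1.2], dtype=np.float32)   # hypothetical values
# ... but 16-bit integer PCM imposes the hard [-32768, 32767] range:
pcm = (np.clip(x, -1.0, 32767 / 32768) * 32768).astype(np.int16)
print(pcm)   # out-of-range values are pinned at the integer limits
```

This is why the clipping decision can be deferred in a float workflow, but never avoided: it simply moves to the final integer conversion.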

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

9 minutes ago, Miska said:

 

DACs accept only integer-format data, and so do delivery containers such as RedBook or FLAC. You have strict and clear value-range boundaries.

 

Okay -- I wanted to make sure that was the extent of the problem. I thought so -- but my intuition (incorrect) was that it wasn't a really bad problem.

I knew about keeping levels below about -0.8 dBFS to avoid Gibbs overshoot from any subsequent LPF causing clipping, but I didn't realize that the interpolation effects could add +3 dB!!! Didn't even think about it...

 

Lots of stuff to know, not enough brain & time to know it.

 

John



From observing zero-crossing timings between two adjacent sample points: when the bit depth is increased by 1, the number of possible zero-crossing positions increases by more than 2x, because distant sample points also affect the zero-crossing timing. And the interval between possible zero-crossing timings is not regular; it is somewhat unpredictable.
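This counting can be sketched with a toy model (my own simplification: linear interpolation between just two adjacent samples, rather than the full sinc reconstruction the post refers to, so it understates the effect):

```python
from fractions import Fraction

def crossing_positions(bits):
    """Distinct zero-crossing times, under linear interpolation, between a
    positive sample a and a negative sample -c, for signed `bits`-bit values."""
    n = 2 ** (bits - 1)                    # sample magnitudes 1..n
    return {Fraction(a, a + c)             # crossing at fraction a/(a+c)
            for a in range(1, n + 1) for c in range(1, n + 1)}

for bits in range(2, 7):
    print(bits, len(crossing_positions(bits)))
```

Even in this two-sample toy model, each extra bit roughly quadruples the count (already more than 2x), and the resulting set of timings is an irregular, Farey-like sequence rather than a uniform grid.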

 

When the bit depth becomes ℵ₀, does the time resolution become ℵ₁? 🤔

 

Sunday programmer since 1985

Developer of PlayPcmWin

34 minutes ago, Speedskater said:

Remember to low pass filter your square wave.

 

It is properly low-pass filtered. The square wave generator does not generate frequency components at or above Nyquist, and the analog waveform reconstruction uses a brick-wall low-pass filter.

Sunday programmer since 1985

Developer of PlayPcmWin


Thank you, now I understand a bit more about the following phrase you wrote in the OP:

 

On 2/21/2020 at 4:56 AM, Don Hills said:

Shannon and Nyquist showed that as long as you keep all components of the input signal below half the sampling frequency, you can reconstruct the original signal perfectly - not just in terms of amplitude, but in terms of temporal relationships too. They only addressed sampling, and assumed infinite resolution in amplitude.

 


Sunday programmer since 1985

Developer of PlayPcmWin

  • 1 year later...
On 11/5/2020 at 6:56 AM, Don Hills said:

 

Yes. See the equation in the OP... ☺️

 

It seems the same idea appears in Lebesgue integration: a series of simple functions of countable cardinality is fitted to a continuous function, because the cardinality of the vertical axis is 2^ℵ₀ = ℵ₁ (assuming the continuum hypothesis).

Sunday programmer since 1985

Developer of PlayPcmWin

  • 2 months later...
On 2/20/2020 at 7:56 PM, Don Hills said:

If you want to see a real world demonstration of a single event (the edge of a square wave) being accurately sampled between sample points, check out Monty's show and tell at the 20:55 mark.

There’s a demo here https://github.com/plext/cdtr that’s easy to reproduce. They take a square wave sampled at 1 GHz, convert it to RedBook and back again, and with saturation the result is identical to the original.

 

I tried it on an Ubuntu machine and it works. So 1 nanosecond rather than 55 picoseconds, but impressive nevertheless.

7 hours ago, Hifi Bob said:

There’s a demo here https://github.com/plext/cdtr that’s easy to reproduce. They take a square wave sampled at 1 GHz, convert it to RedBook and back again, and with saturation the result is identical to the original.

 

I tried it on an Ubuntu machine and it works. So 1 nanosecond rather than 55 picoseconds, but impressive nevertheless.

 

Thank you for sharing. Interesting experiment.

 

I ran the 1 GHz test (1.0 ns temporal resolution with 44.1 kHz / 16-bit PCM) and it worked. 0.9 GHz (1.11 ns) and 1.1 GHz (0.91 ns) succeeded, while 1.2 GHz (0.83 ns) failed. This is more a software test of sox than a test of the actual temporal resolution limit of PCM.
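The underlying claim — that an event's timing can be encoded far more finely than the sample period — can also be checked without sox. A sketch (NumPy only; the block length, oversampling factor, and impulse position are arbitrary choices of mine): place a bandlimited impulse at a fractional sample position, quantize to 16 bits, then recover the position by FFT oversampling.

```python
import numpy as np

n, up = 4096, 512                      # 512x oversampling: ~1/512-sample grid
true_pos = 2048.3                      # impulse centre, in samples (fractional)
k = np.arange(n)
x = np.sinc(k - true_pos)              # bandlimited impulse at a fractional time
q = np.round(x * 32767) / 32767        # quantize amplitudes to 16 bits
# FFT zero-padding = ideal sinc interpolation for a periodic block
X = np.fft.rfft(q)
Xp = np.zeros(n * up // 2 + 1, dtype=complex)
Xp[: len(X)] = X
y = np.fft.irfft(Xp, n * up) * up
found = np.argmax(y) / up              # recovered position, in original samples
print(found)
```

The recovered peak lands within a small fraction of a sample of 2048.3, despite 16-bit quantization: timing precision far below one sample period survives the integer format.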

 

I also tried 2 GHz (0.5 ns) and sox crashed (as mentioned under "Further Testing" on the demo page); it seems the lsx_save_samples function tried to read memory address 0 and got a SEGV. There appear to be other problems to fix before it runs correctly.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  lrint32 (input=<error reading variable: Cannot access memory at address 0x0>) at effects_i_dsp.c:607
(gdb) bt
#0  lrint32 (input=<error reading variable: Cannot access memory at address 0x0>) at effects_i_dsp.c:607
#1  lsx_save_samples (dest=0x55cf341b2e70, src=0x0, n=n@entry=8192, clips=0x55cf34188948) at effects_i_dsp.c:607
#2  0x0000152389781c5b in flow (effp=<optimized out>, ibuf=0x55cf341aae60, obuf=<optimized out>, isamp=0x7ffe2f66fd10, osamp=0x7ffe2f66fd18)
    at rate.c:660
#3  0x000015238976bee1 in flow_effect (n=1, chain=0x55cf34187aa0) at effects.c:257
#4  sox_flow_effects (chain=<optimized out>, callback=0x55cf32c51760 <update_status>, client_data=0x0) at effects.c:449
#5  0x000055cf32c54462 in process () at sox.c:1780
#6  0x000055cf32c4f695 in main (argc=10, argv=0x7ffe2f6700c8) at sox.c:2988
(gdb) quit

 

Sunday programmer since 1985

Developer of PlayPcmWin

On 11/3/2020 at 4:27 PM, yamamoto2002 said:

I created an animated graph of a square wave, its wavefront moving toward the left.

I did an impulse some time ago: https://imgur.com/a/KVFOJU1

Quote

A 16-bit, 44.1 kHz file with 33 impulses. Impulses in the right channel (bottom) are exactly 0.5 second apart, while the distance between impulses in the left channel (top) increases by 1.4 microseconds.

The animation skips to each impulse, as evidenced by the time bar at the top. The grey waveform is this 16/44 file upsampled 16x. The highlighted area in the middle is 2 samples wide and centered on the zero crossing of the right ("stationary") channel.

(animated GIF: time.resolution.gif)

imp.all.44.flac.zip

  • 3 months later...
On 5/8/2022 at 7:42 AM, danadam said:

 

Yeah, Nyquist–Shannon says the waveform peak timing can be positioned at any real value (any rational or irrational value, with mathematically exact accuracy) when the bit depth is a countably infinite number of integer bits, so the time resolution of digital sampling can be increased up to the cardinality of the continuum.
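For reference, the reconstruction behind that claim is the Whittaker–Shannon interpolation formula; with sample period T = 1/fs, the continuous-time output is

```latex
x(t) = \sum_{n=-\infty}^{\infty} x[n]\, \operatorname{sinc}\!\left(\frac{t - nT}{T}\right)
```

Since t is a continuous variable, nothing in the formula quantizes time; only the amplitudes x[n] are quantized, which is why finer amplitude resolution translates into finer resolvable event timing.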

Sunday programmer since 1985

Developer of PlayPcmWin

