
StreamFidelity

  • Content Count

    569
  • Joined

  • Last visited

  • Country

    Germany

7 Followers

About StreamFidelity

  • Rank
    Junior Member

Personal Information

  • Location
    Berlin


Bookmarks

  1. A toast to PGGB, a heady brew of math and magic
    6 hours ago, Fourlegs said:

    @Zaphod Beeblebrox Rob Watts has today posted some comments on Head-Fi regarding PGGB, and it would be useful to hear what you think is important and whether you have any thoughts on the points he raises.

     

    https://www.head-fi.org/threads/hugo-tt-2-by-chord-electronics-the-official-thread.879425/page-942#post-16390263

    I will be happy to answer, and sorry in advance for a very long post:

     

    Inspiration:

    MScalar was the inspiration for PGGB. MScalar showed that a well-implemented sinc interpolation results in a more natural and transparent sound. It is an iconoclast of a product in many ways, in spite of a lot of misinformation about linear-phase filters, pre-ringing, etc.

     

    PGGB started as an experiment to see whether it was possible to do software-based multi-M-scaling using sinc interpolation, to go beyond 1M taps, and to see whether that resulted in any improvement. Through Rob's many posts I was already aware that the number of taps alone does not matter; the quality of the taps, specifically how close they are to the true sinc coefficients, matters more. I was also aware of the significance of noise shaping in reproducing small signals accurately and in decreasing noise-floor modulation due to quantization errors. What I was skeptical about was the significance of both the quality of the taps and the quantization noise.
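
    For readers who want to see what sinc interpolation means concretely: every output sample is a weighted sum of all input samples, with weights drawn from the sinc function (the Whittaker-Shannon coefficients). A brute-force sketch of the idea in Python (my own illustration, not PGGB or MScalar code; a real implementation truncates and windows the kernel):

    import numpy as np

    def sinc_upsample(x, ratio):
        """Upsample x by an integer ratio with raw Whittaker-Shannon sinc taps."""
        n = np.arange(len(x))                    # input sample indices
        m = np.arange(len(x) * ratio) / ratio    # output positions, in input units
        # True sinc coefficients, no window: O(N^2), feasible only for
        # short excerpts, but it is the ideal the windowed filters approximate.
        return np.sinc(m[:, None] - n[None, :]) @ x

    fs = 44100
    x = np.sin(2 * np.pi * 1000 * np.arange(64) / fs)   # 1 kHz test tone
    y = sinc_upsample(x, 16)                            # 16x oversampled output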

     

    Windowing experiments:

    I started the experiments with off-the-shelf window functions like Kaiser. As I increased the number of taps and conducted listening tests (myself and a group of friends), it became clear that to come close to MScalar we needed 32 million taps or more. That meant the taps produced with a Kaiser window or similar were simply not good enough, and not efficient enough: the Kaiser window tapers from the get-go, and very few taps are truly sinc. Having verified what Rob has said many times before, the next step was to go in a different direction and find a window that keeps more of the sinc coefficients as-is, then tapers to zero faster. While our listening tests showed this window seemed to improve transient accuracy in some ways, its frequency-domain performance was not as good as the Kaiser window's in out-of-band rejection or transition-band width, and we felt there was room for improvement.
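
    To make the "tapers from the get-go" point concrete, here is a rough check (my own construction, with an arbitrary 0.1% closeness threshold, not PGGB's criterion) of how few taps of a Kaiser-windowed sinc remain effectively true sinc:

    import numpy as np

    N = 1_000_001                        # a 1M-tap filter, odd length
    w = np.kaiser(N, beta=20.0)          # off-the-shelf Kaiser window
    # The window multiplies each sinc coefficient, so a tap stays within
    # 0.1% of the true sinc value only where w > 0.999: a tiny central region.
    near_true = w > 0.999
    print(f"{near_true.mean():.2%} of taps remain near-true sinc")   # ~1%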

     

    I needed a windowing function that was closer to the Kaiser window in frequency-domain performance but retained as many sinc coefficients as-is. I needed a windowing function I could parameterize, optimize, and easily adjust to obtain the right trade-off between reconstruction accuracy and frequency-domain performance, with the constraint that it retain 50% or more of the sinc coefficients without any change and still achieve close to Kaiser-like performance in the frequency domain. It was indeed an uphill task. With more research, I arrived at the window function I use now. It is not an off-the-shelf window; it is a custom window parameterized to control not only the percentage of taps that are true sinc, but also when, how fast, and with what shape the window tapers from 100% sinc coefficients to zero. While the percentage of true sinc coefficients had a direct correlation with transient accuracy, the shape of the taper affected the tonal qualities more. Instead of a one-size-fits-all approach, PGGB lets the user control the trade-off between reconstruction accuracy (by allowing more true sinc coefficients) and frequency-domain performance via the transparency and presentation options. The windowing is adaptive: with longer tracks, even more taps are true sinc (i.e., well beyond 50%) and reconstruction accuracy increases.
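
    The actual PGGB window is proprietary, but a toy window of the same general shape can illustrate the idea: exactly 1.0 over the central half of the taps (so those coefficients stay true sinc), then a smooth taper to zero. Here keep_frac and the raised-cosine taper are stand-ins for the parameters described above:

    import numpy as np

    def flat_top_window(N, keep_frac=0.5):
        """Window that is 1.0 for the central keep_frac of taps, then tapers."""
        center = N // 2
        flat = int(N * keep_frac) // 2             # half-width of the flat region
        d = np.abs(np.arange(N) - center)          # distance from the center tap
        w = np.ones(N)
        t = d > flat                               # taps inside the taper region
        frac = (d[t] - flat) / (center - flat)     # 0..1 across the taper
        w[t] = 0.5 * (1.0 + np.cos(np.pi * frac))  # raised-cosine roll-off to zero
        return w

    w = flat_top_window(1_000_001, keep_frac=0.5)  # 50% of the taps stay true sinc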

     

    Noise shaping:

    We were quite happy with these results based on listening tests, so I started focusing on designing noise shapers. Here too, I did not want a one-size-fits-all approach. Though PGGB was inspired by the MScalar attached to my Chord DAVE, I was aware that other DACs can benefit too, and DACs operate at rates all the way up to 32FS and bit depths between 16 and 32. PGGB applies different noise shaping based on the output rate and bit depth, but I had two design goals: to reproduce small signals accurately and to keep the quantization noise floor very low. As an example, the noise shaper PGGB uses to shape a 16FS signal to 32-bit output has a noise floor below -350 dB in the audible range and can easily reproduce a tone at -200 dB anywhere in the audible range.
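
    PGGB's shapers are proprietary and of far higher order, but the error-feedback principle behind any noise shaper fits in a few lines; the TPDF dither and first-order feedback below are generic textbook choices, shown only to illustrate how quantization error gets pushed out of the audible band:

    import numpy as np

    def noise_shaped_quantize(x, bits=16, seed=0):
        """First-order noise-shaped quantization of x (full scale +/-1.0)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        q = 2.0 ** -(bits - 1)                      # one LSB at the target depth
        y = np.empty_like(x)
        e = 0.0                                     # previous quantization error
        for i, s in enumerate(x):
            v = s - e                               # feed the error back (H(z) = z^-1)
            d = (rng.random() - rng.random()) * q   # TPDF dither, +/-1 LSB
            y[i] = np.round((v + d) / q) * q
            e = y[i] - v                            # error spectrum is shaped to HF
        return y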

     

     

    While the noise shaping improved depth, detail, and texture, and overall sounded cleaner, it felt more like icing on the cake; the biggest improvement still came from the multi-million taps and the windowing function. With that background, I would like to address the questions that were raised:

     

     

    Answers:

    Quote

    But - the key here is the number of taps that are Whittaker-Shannon - NOT the number of taps. If you were to use a filter to filter above 20kHz - call it apodizing or slow roll off or whatever, like the PGGB for 44.1kHz then NO taps would be Whittaker-Shannon - and that would mean a filter using billions of the wrong coefficients will simply return the wrong result, so the long taps are completely useless.

    Let me answer the question by asking a question. Whittaker-Shannon interpolation also requires the signal to be band-limited, and it is well known that CD audio is not perfect: aliasing distortion is common. If the original CD audio signal already contains aliasing distortion, does retaining the signal as-is and applying sinc interpolation produce better timing information than removing the aliased portion and then applying the sinc interpolation? How do the aliased components affect timing?

     

    The above is really moot, because PGGB in beta had the option to apply Whittaker-Shannon interpolation without modifying the original signal in any way (i.e., no apodization), but there was no interest in using it. For the sake of science, I will reinstate this option in the next release of PGGB so it does not appear 'heavy-handed' on my part; there will be a non-apodizing option for 44.1 kHz, and everyone can be happy.

     

    Quote

    If it is a windowed sinc function it is no longer true sinc following Whittaker-Shannon interpolation filter - the only filter that will reconstruct the original bandwidth signal perfectly.

     

    I will respond to this with another question. That is true: a windowed sinc function is not a true sinc following the Whittaker-Shannon interpolation filter. That is true of any windowed sinc function, including WTA and the window function PGGB uses. But is it not true that a windowed sinc function with 256M or more taps that retains more than 50% of its Whittaker-Shannon coefficients is closer to a true Whittaker-Shannon interpolation filter than a 1M-tap windowed sinc that retains 50% or more of them? (256M taps at 50% is 128M true coefficients, versus at most 0.5M for the 1M-tap filter.) And is it not true that one can reap the benefits of the Whittaker-Shannon filter by using more and more taps that are true Whittaker-Shannon coefficients?

     

    Quote

    But what I don't understand is why they are using windowed sinc at all for HD recordings as they claim this is not being apodized. Doing it off-line with unlimited time, and doing the sinc function appropriately (you must pre-process and post process the file correctly) means you can in practice do a pretty much good approximation to true Whittaker-Shannon interpolation 

    The answer is quite simple: it is overkill. Based on our listening tests, the benefits of using tap lengths longer than the track length at the output sample rate diminished, while the additional time and processing resources required are both finite quantities with a cost attached. There are added benefits to being able to process tracks relatively quickly with limited resources and RAM; not everyone is ready to buy a monster machine for upsampling. Though PGGB is currently used primarily for off-line resampling, it was designed to be usable for real-time resampling, even with streamed audio. I already have it in SDK form, benchmarked on multiple platforms: with the right pipelining/threading it can do up to 256M taps with a few seconds of startup delay and no delay after that. Multi-million-tap, close-to-Whittaker-Shannon upsampling is possible while retaining more than 50% true sinc coefficients, along with real-time digital volume control with noise shaping, and EQ too.
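
    For context on why real-time operation with very long filters is plausible at all: convolution with a multi-million-tap FIR is normally done in the frequency domain, block by block. A bare-bones overlap-save sketch (my own illustration, not PGGB's SDK; a production engine would additionally partition the filter and pipeline the FFTs across threads):

    import numpy as np

    def overlap_save(x, h, block=1 << 16):
        """Convolve signal x with a long FIR h using overlap-save blocks."""
        M = len(h)
        N = block + M - 1                        # FFT length per block
        H = np.fft.rfft(h, N)                    # filter spectrum, computed once
        x = np.concatenate([np.zeros(M - 1), x]) # prime the first overlap
        out = []
        for i in range(0, len(x) - (M - 1), block):
            seg = x[i:i + N]
            if len(seg) < N:
                seg = np.pad(seg, (0, N - len(seg)))
            y = np.fft.irfft(np.fft.rfft(seg) * H, N)
            out.append(y[M - 1:M - 1 + block])   # keep only the valid samples
        return np.concatenate(out)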

     

    Regarding the accuracy of 64-bit floats and quantization noise: while PGGB uses 64-bit floats for the sinc coefficients, I apply optimal scaling to reduce the effect of quantization noise on accuracy. Internal computations are done in 80-bit extended precision; the output is still 64-bit floats, but I too noise-shape the output to the desired bit depth.

     

    End

    How does the cumulative effect of the approach I have outlined compare to MScalar (or any other upsampling software or hardware)? That is subjective, and it is not for me to say; those curious to know can find out for themselves.

     

     

     


  2. Differences in sound: DAC vs. DAC + Pre-amplifier
    34 minutes ago, Jean Paul D said:

    I wish I were doing something wrong so I could get back a few dB, but... it's quite the other way round here. I use a Meyer CP10 parametric EQ, and I could do without gain for vinyl.

     

    At least an MC cartridge has such low output that you certainly need gain, so a purely passive RIAA network without any gain wouldn't work.
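
    As a rough worked example of why (typical catalogue figures, not measurements of any particular cartridge):

    import math

    mc_output = 0.0005   # typical MC cartridge output: ~0.5 mV at 5 cm/s
    line_level = 2.0     # typical line level from a digital source: 2 V
    print(f"{20 * math.log10(line_level / mc_output):.0f} dB")   # ~72 dB of gain needed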

     

    34 minutes ago, Jean Paul D said:

    But when experimenting with digital EQ (convolution), especially using your Mch-to-stereo mixdown, I found myself needing to add gain (12 dB and sometimes even more) via preamp settings, and I have a 2 x 500 W amplifier... And I'm not sure I'm doing things wrong: -8 dB seems a recommended minimum for convolution, then another almost -8 dB for the Mch mixdown, then -3 dB for HQP headroom, then maybe another effective -3 dB for DSD if I buy a Holo. It seems normal that I have to add gain on the preamp to compensate.

     

    Does your preamp have a dB display? Have you measured the voltage output from your preamp at your normal setting; how high is the peak value? Typical sensitivity for a power amp is something like 500 mV, while the typical output level from a digital source is 2 V. So at full output level, the digital source would exceed the power amp's maximum input level by 4x. These are unbalanced values; for balanced, you can double them.
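
    The same arithmetic in two lines, using the typical figures above:

    import math

    ratio = 2.0 / 0.5                                  # 2 V source into 0.5 V sensitivity
    print(ratio, f"{20 * math.log10(ratio):.1f} dB")   # 4.0x, i.e. ~12 dB of excess level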

     

    If you have EQ boost and you end up adding gain on a 500 W amp, then you are risking both clipping the amp and burning your speakers. It may sound quiet, but those boosted frequencies can end up at the output at 500 W or so! You always need to be very careful with any boosts in EQ. Never try to fill nulls with peak boosts: you can throw an unlimited amount of power at a null and it still just won't fill in.
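
    A quick power calculation shows why boosts are so dangerous (illustrative numbers, not a measurement):

    import math

    boost_db = 12.0
    power_ratio = 10 ** (boost_db / 10)   # dB boost -> power multiplier
    print(f"{power_ratio:.1f}x")          # ~15.8x: a 30 W average becomes ~475 W
                                          # at the boosted frequencies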

     


  3. HQ Player
    5 minutes ago, Quadman said:

    2-3 dB down at 44.1K; a lot of speakers roll off similarly as well.

     

    No, at 22.05 kHz, and it starts already around 1 kHz (gray-green plot):

    [image: 820HoMayfig07.jpg]

     

    The whole point of HQPlayer is to be an advanced DSP engine. If you want a player that just shovels data from place A to place B, then there's no shortage of options.

     

     


  4. Optical Network Configurations

    About jitter requirements. 
    https://grouper.ieee.org/groups/802/3/ae/public/may01/ewen_1_0501.pdf

    It's an old paper, but it is the requirements document I was able to find.

     

    Just google “jitter 802.3ae requirements” if you'd like to dive into this.
     

    It also seems the specification varies a bit depending on use.


    [attached image: jitter specification table]

     

    I expect the RIN (Relative Intensity Noise) number is important too. This is where the FTLX1475D3BTL has the better numbers; the RIN of the 1420 is 120.

    [attached image: transceiver datasheet excerpt]

     

    [attached image: transceiver datasheet excerpt]
     

    With such close numbers in the datasheets for the FTLX1475D3BTL and the FTLF1421P1BCL, one can assume you won't hear any difference, as jabbr indicates.
     


  5. Optical Network Configurations
    3 hours ago, R1200CL said:

    I think we all have the expectation that, given the lower jitter and phase-noise requirements, we ought to obtain the best possible SQ using 10Gb equipment that complies with these strict standards. It may be overkill.
     

    These things are proven by eye-pattern stress tests, which are the only way to document compliance with the standards. That's my understanding; I hope I'm correct.
     

    The choice of favoring single-mode fiber and a DFB laser is also based on this technology probably being the best to use.

    We could even state that APC connectors should be used instead of UPC. I stick with blue connectors (UPC) and yellow cables (that's most common). That will work at any speed.

     

     

    This thinking is not wrong. It's easy to do.

     

    I use the Finisar FTLX1475. This is an SFP+ single-mode module, and it comes in two versions:

    FTLX1475D3BCV ... the "D3BCV" version is the dual-rate 10G/1G module for both 10GBase-LR and 1000Base-LX

    FTLX1475D3BTL ... the "D3BTL" version is the single-rate 10G module for 10GBase-LR

     

    Electrically, as @JohnSwenson has discussed, you can use these modules in an SFP port and they work. The caveat is that Finisar states they are not for use in an SFP port, but I have tested this to work in the Clearfog Base, and apparently he has tested it to work in the EtherREGEN. YMMV.

     

    So what is the difference between the two versions of the FTLX1475?

     

    The dual-rate "D3BCV" version uses the RS0 (rate select) line. The rate-select lines interact with the switch. According to the datasheet:

    Transceiver data rate selected through the 2-wire bus in accordance with SFF-8472 Rev. 10.3. Soft RS0 is set at Bit 3, Byte 110, Address A2h. Soft RS0 default state on power up is ‘0’ LOW, and the state is reset following a power cycle. Writing ‘1’ HIGH selects max. data rate operation. Transceiver data rate is the logic OR of the input state of the RS0 pin and soft RS0 bit. Thus, if either the RS0 pin OR the soft RS0 bit is HIGH then the selected data rate will be 9.95 and 10.3 Gb/s. Conversely, to select data rate 1.25 Gb/s both the RS0 pin and the soft RS0 bit are set LOW.

    The single-rate "D3BTL" version does not use the RS0 line and operates only at the 10GBase rate.
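
    Restating the datasheet's rate-select rule as a tiny truth table (a hypothetical helper, just for clarity):

    def selected_rate_gbps(rs0_pin_high: bool, soft_rs0_high: bool) -> float:
        """Data rate is the logic OR of the RS0 pin and the soft RS0 bit."""
        # Either input HIGH -> 10G operation (9.95/10.3 Gb/s line rates);
        # both LOW -> 1.25 Gb/s (1G) operation.
        return 10.3 if (rs0_pin_high or soft_rs0_high) else 1.25

    assert selected_rate_gbps(False, False) == 1.25   # both LOW   -> 1G mode
    assert selected_rate_gbps(True, False) == 10.3    # either HIGH -> 10G mode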

     

    The equivalent multimode module, the FTLX8574, uses the same suffixes to denote the 10G and dual 10G/1G versions. The 8574 is the module I reported on in the first post of this thread, and yes, at that time this SFP+ module worked in an SFP port (I suspect this was dumb luck on my part rather than some grand design).

