
A toast to PGGB, a heady brew of math and magic



2 hours ago, Zaphod Beeblebrox said:

Every algorithm is a compromise in one sense or another; if the focus is on using a slower transition band, then time-domain reconstruction is going to suffer, as such filters are not ideal for depth, realism and transient reproduction.

Again, I applaud you for posting charts.  Given that "every algorithm is a compromise", transparency brings some risks, as there will likely always be nitpicking opportunities - and unfortunately some may try to unfairly turn molehills into mountains.  I hope that doesn't discourage you from continuing to share objective data.  I have learned a lot from your graphs.

Digital:  Sonore opticalModule > Uptone EtherRegen > Shunyata Sigma Ethernet > Antipodes K30 > Shunyata Omega USB > Gustard X26pro DAC < Mutec REF10 SE120

Amp & Speakers:  Spectral DMA-150mk2 > Aerial 10T

Foundation: Stillpoints Ultra, Shunyata Denali v1 and Typhon x1 power conditioners, Shunyata Delta v2 and QSA Lanedri Gamma Revelation and Infinity power cords, QSA Lanedri Gamma Revelation XLR interconnect, Shunyata Sigma Ethernet, MIT Matrix HD 60 speaker cables, GIK bass traps, ASC Isothermal tube traps, Stillpoints Aperture panels, Quadraspire SVT rack, PGGB 256

1 hour ago, Zaphod Beeblebrox said:

With digital audio, be it PCM or DSD, we are stuck with band-limited systems, and they are not going away unless everyone decides to switch to vinyl, so we have to make the best use of what we have.

 

DSD is not band-limited as such. The ADC needs practically no anti-alias filter, because Nyquist is up in the MHz range. And the DAC reconstruction filter can be very low order and totally non-ringing, like my DSC1 design for example. It can produce a perfect square wave without any Gibbs phenomenon or overshoot.

 

1 hour ago, Zaphod Beeblebrox said:

An ideal, infinitely long sinc is neither necessary nor practical; a very good approximation is possible for a finite-length signal. What matters is whether the quality of reconstruction is better despite any errors the approximation may introduce, and whether it is better than other techniques, such as those using shorter apodizing filters.

 

Apodizing filters can of course be any length. I offer a wide range of apodizing filters, from very short to very long.

 

1 hour ago, Zaphod Beeblebrox said:

It is very easy to demonstrate near-perfect reconstruction of even very small signals (and these are not infinite length) in the time domain, and it is also very easy to demonstrate extremely high stop-band attenuation. This was the reason the tools are now built right into PGGB.

 

It is also possible to squeeze near-perfect reconstruction into a filter only a few milliseconds long. That is of course much more demanding than simply increasing the length by a huge factor, but it avoids transient smear (ringing). Length mostly affects just the roll-off steepness.
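As a rough illustration of that length/steepness relationship, here is a small Python sketch (my own construction with numpy/scipy, not HQPlayer code; the rates and tap counts are made up): for a plain windowed-sinc low-pass, the transition band shrinks roughly in proportion as the tap count grows.

```python
import numpy as np
from scipy import signal

fs = 44100.0      # sample rate, Hz (illustrative)
cutoff = 20000.0  # pass-band edge, Hz

for ntaps in (127, 1023, 8191):
    h = signal.firwin(ntaps, cutoff, fs=fs)    # windowed-sinc low-pass FIR
    w, H = signal.freqz(h, worN=1 << 16, fs=fs)
    mag = 20 * np.log10(np.abs(H) + 1e-300)
    f_lo = w[np.argmax(mag < -1.0)]            # first point 1 dB down
    f_hi = w[np.argmax(mag < -50.0)]           # first point 50 dB down
    print(f"{ntaps:5d} taps: transition ~ {f_hi - f_lo:7.1f} Hz wide")
```

Each 8x increase in length buys roughly an 8x narrower transition; the in-band behaviour is a separate design question.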

 

1 hour ago, Zaphod Beeblebrox said:

Reconstruction of small signals

 

This has nothing to do with filter length. I'm not into these Chord-style number games (Rob will certainly attempt to beat you on stating numbers). I'm more interested in what comes out of the DAC's analog output; that is the basis of the work I do. I would be curious to see your measurements show the analog output reproducing a -651 dB tone in the real physical world. 😉

 

Using arbitrary-precision math you can produce any kind of numbers you like, thousands of dB, if you are willing to burn enough CPU cycles and memory. Next step, maybe 4096-bit precision? Completely doable.

 

1 hour ago, Zaphod Beeblebrox said:

If the goal of upsampling is to do the best job of accurately reconstructing a signal at a higher rate, should that not be the primary focus?

 

Yes, that's my primary focus. That's why I have also invested in measurement equipment and collected a lot of different DACs with different chips and discrete implementations, and I continue to collect more DACs from the market so that I can squeeze the best possible performance out of them.

 

Because it matters how the equipment reacts to the data being fed to it.

 

And one thing I see is that in many cases the conversion ratio is insufficient, leaving images in the output (incomplete reconstruction). That's why I went first to 128x and then to 256x, 512x and 1024x filters. Perfect reconstruction would require an infinite sampling rate. Thanks to modern computing power, you can hammer away with over-one-second-long filters at 50 MHz sampling rates in real time, if you want to.
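For anyone who wants to see those images directly, here is a small numpy sketch (my own; the parameters are invented): zero-stuffing by a factor L repeats the baseband spectrum around every multiple of the original rate, and anything the interpolation filter does not remove stays in the output.

```python
import numpy as np

fs, L = 44100, 8                          # original rate, upsampling ratio
n = np.arange(4096)
x = np.sin(2 * np.pi * 19000 * n / fs)    # 19 kHz tone near Nyquist

up = np.zeros(len(x) * L)
up[::L] = x                               # zero-stuffed to L*fs, no filter yet

spec = np.abs(np.fft.rfft(up * np.hanning(len(up))))
freqs = np.fft.rfftfreq(len(up), d=1.0 / (L * fs))
for k in range(3):                        # the tone plus its first images
    for f in (k * fs - 19000, k * fs + 19000):
        if 0 < f < L * fs / 2:
            i = np.argmin(np.abs(freqs - f))
            level = 20 * np.log10(spec[i - 2:i + 3].max() / spec.max())
            print(f"{f / 1000:6.1f} kHz: {level:6.1f} dB")
```

All of the listed frequencies come back at essentially full level until a low-pass actually removes them.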

 

1 hour ago, Zaphod Beeblebrox said:

Are any of the methods using apodizing filters demonstrably better in that sense, and can they be shown to be so?

 

Yes, because there are many, many other things that affect how correct a filter or noise shaper is beyond just stop-band attenuation or how much noise the noise shaper shovels around.

 

Apodizing filters are one thing. I offer a number of apodizing filters of various lengths, and many with the same length and similar stop-band attenuation, yet sounding very different for other reasons. And of course a number of non-apodizing filters as well, in all kinds of lengths.

 

1 hour ago, Zaphod Beeblebrox said:

An apodizing counter cannot possibly be the only metric used to judge the quality of a filter.

 

No, of course not; it is just one of the important aspects. The counter is there to indicate issues in the source content that can be fixed with an apodizing filter, so you can choose one when necessary.

 

1 hour ago, Zaphod Beeblebrox said:

Upsampling algorithms need not be built with a focus mainly on trying to correct for flaws in ADC or mastering.

 

Of course not, there are many focus areas.

 

1 hour ago, Zaphod Beeblebrox said:

Yes, their existence (short apodizing filters)

 

Why are you bundling short and apodizing together?

 

1 hour ago, Zaphod Beeblebrox said:

I am personally irritated by the lack of realism in digital reproduction, and realism, depth and quality of transients are lacking in every other method I have heard.

 

Yes, same here; luckily I have a fix for it. 🙂

 

1 hour ago, Zaphod Beeblebrox said:

Every algorithm is a compromise in one sense or another; if the focus is on using a slower transition band, then time-domain reconstruction is going to suffer, as such filters are not ideal for depth, realism and transient reproduction.

 

Very fast transitions can also sound unnatural. You can try it by placing your very long filter's corner in the 1 to 4 kHz area and listening to the result. That's one good way to test your filters: put the transition where your hearing is most sensitive.
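That listening test is easy to script. Below is a hedged sketch (Python with scipy plus the soundfile package; the file names are placeholders, and this is not how any particular player implements it):

```python
import numpy as np
import soundfile as sf           # assumed dependency: pip install soundfile
from scipy import signal

x, fs = sf.read("track.wav")                 # mono or stereo input
h = signal.firwin(32767, 2000, fs=fs)        # long linear-phase FIR, corner at 2 kHz
if x.ndim == 1:
    y = signal.fftconvolve(x, h, mode="same")
else:
    y = np.column_stack([signal.fftconvolve(x[:, c], h, mode="same")
                         for c in range(x.shape[1])])
sf.write("track_corner_2k.wav", y, fs)       # audition the transition region
```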

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 hour ago, Miska said:

DSD is not band limited as such. ADC doesn't need practically any anti-alias filter, because Nyquist is up in MHz range

Technically it is still band-limited, except that Nyquist is way up high. But single-bit DSD as output from ADCs is not without constraints, is it? The quantization noise in the audible range can be quite high, as you would need infinite bandwidth to push all the noise out of the audible range. Not many DAWs can handle DSD as is, and releases go through multiple steps, some of which involve conversion to PCM and back, before the final release comes out on DSD. After mastering and mixing, it is very hard to claim it is DSD in its original form. There is no free lunch.

 

1 hour ago, Miska said:

It is also possible to squeeze near perfect reconstruction into few ms long filter. Of course much more demanding than by increasing the length by huge factor. And without causing the transient smear (ringing). Length mostly affects just the roll-off steepness.

Where is the data to support this? What is your definition of 'near perfect reconstruction', and how do you measure it?

 

There is only one way to truly accurately reconstruct signals and that is using sinc interpolation, but since that is impractical, the next best thing is as close an approximation as possible. Of course, there are infinite ways to design filters, and all of them can claim near-perfect reconstruction; all of them can look similar in the frequency domain and yet sound very different (per your own admission). How can all the variations be near-perfect reconstruction and yet sound different? Where is the theory to support perfect reconstruction of band-limited signals by these methods? Yes, length mostly affects just the roll-off steepness, if the filter is not designed with an eye for time-domain reconstruction accuracy.
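For reference, here is a minimal numpy sketch (my own, not from PGGB) of the textbook Whittaker-Shannon reconstruction under discussion; note that even here the sum must be truncated to the available samples, so the result is already an approximation:

```python
import numpy as np

def sinc_interp(x, t, fs):
    """Band-limited reconstruction of samples x, evaluated at times t."""
    n = np.arange(len(x))
    # each output point sums contributions from every available sample
    return np.array([np.sum(x * np.sinc(ti * fs - n)) for ti in t])

fs = 44100
n = np.arange(256)
x = np.sin(2 * np.pi * 1000 * n / fs)   # 1 kHz test tone
t = (n[64:192] + 0.5) / fs              # mid-sample points, away from the edges
err = sinc_interp(x, t, fs) - np.sin(2 * np.pi * 1000 * t)
print(f"max mid-sample error: {np.abs(err).max():.1e}")  # small but non-zero,
# because the sinc sum is truncated to 256 terms
```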

 

I have done my own analysis of the reconstruction accuracy of software upsamplers: if a method is more accurate than another before the signal is fed to the DAC, it cannot possibly be less accurate when it comes out of the DAC. And if a method is less accurate at reconstructing simple stationary signals, it cannot possibly be more accurate at reconstructing more complex signals, and a DAC measurement is not going to magically alter the results.

 

Which should be given more weight: the possibility of ringing outside the audible range when the ADC or mastering caused the signal to hit Nyquist, or better reconstruction accuracy within the audible range? There is no ringing or smearing of transients in the audible range with properly designed sinc-based reconstruction. Apodizing and near-perfect reconstruction are self-contradictory; you cannot throw out information that is in-band and then claim to reconstruct accurately. Also, I see the out-of-band image rejection and steep transition as by-products of near-perfect reconstruction rather than the goal of the filter design.

 

1 hour ago, Miska said:

This has nothing to do with filter length. I'm not into these Chord style number games (Rob will certainly attempt to beat you on stating numbers). I'm more interested into what comes out of DAC's analog output. That is basis of the work I do. Would be curious to see your measurements show the analog output reproduce -651 dB tone, in real physical world. 😉

No, the point is to demonstrate that the upsampling and noise-shaping process does not add noise of its own and can near-perfectly reconstruct very small signals (or large signals) in the audible range. Better reconstruction cannot possibly hurt. The output, as the graph shows, is still 32 bits, not 256 bits or 4096 bits.

 

1 hour ago, Miska said:

Using arbitrary precision math you can produce any kind of numbers you like, thousands of dB if you like to burn enough CPU cycles and memory. Next step maybe 4096-bit precision? Completely doable.

 

No, arbitrary-precision arithmetic does not automatically decrease the noise floor, so I cannot just pull out some arbitrarily small number. A lot of time and research went into the use of arbitrary-precision arithmetic. It can be used to improve the accuracy of the reconstructed signal: in the case of upsampling PCM, it keeps the original samples unaltered and then reconstructs the intermediate samples at high precision (128 or 256 bits). The noise shapers also need to operate at high precision to be able to shape these samples, reconstructed at 256-bit precision, into lower bit depths. And high precision makes an audible difference; there is a reason a majority of PGGB users ultimately prefer 256-bit to 128-bit or 64-bit. Just throwing any upsampling algorithm into arbitrary-precision arithmetic will not magically yield these numbers. You can try it yourself and post graphs too. Also, there is no point in using higher precision unless the noise shapers can be designed to make use of it; at some point bandwidth becomes a limitation and out-of-band noise from the noise shaper becomes too high.
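To illustrate only the arithmetic side of this (emphatically not PGGB's actual algorithm, just a toy), Python's mpmath can evaluate an inserted midpoint sample with 256-bit mantissas while the original samples stay untouched:

```python
from mpmath import mp, mpf, sinc, pi

mp.prec = 256                          # 256-bit mantissas (~77 decimal digits)

def midpoint(x):
    """Sinc-interpolate the value halfway between the two center samples."""
    t = mpf(len(x)) / 2 - mpf("0.5")   # midpoint position, in samples
    # mpmath's sinc is unnormalized, so scale the argument by pi
    return sum(mpf(v) * sinc(pi * (t - k)) for k, v in enumerate(x))

samples = [0.0, 0.25, 0.5, 0.25, 0.0, -0.25, -0.5, -0.25]
print(midpoint(samples))               # every operation carried at 256 bits
```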

 

1 hour ago, Miska said:
4 hours ago, Zaphod Beeblebrox said:

I am personally irritated by the lack of realism in digital reproduction, and realism, depth and quality of transients are lacking in every other method I have heard.

 

Yes, same here; luckily I have a fix for it. 🙂

Ironically, that is exactly what I have been saying too, just with a different approach.

 

 

1 hour ago, Miska said:

Very fast transitions can also sound unnatural. You can try it by placing your very long filter's corner in the 1 to 4 kHz area and listening to the result. That's one good way to test your filters: put the transition where your hearing is most sensitive.

Even more reason for the transition to be placed outside the audible range.

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

1 minute ago, Zaphod Beeblebrox said:

Technically it is still band-limited, except that Nyquist is way up high. But single-bit DSD as output from ADCs is not without constraints, is it?

 

Not really. I have an RME ADI-2 Pro and a Merging Hapi, and I also know the Grimm AD1 pretty well. The ADI-2 Pro, for example, is certainly not constrained by DSD. The Hapi also performs better at DSD than at DXD.

 

1 minute ago, Zaphod Beeblebrox said:

The quantization noise in the audible range can be quite high, as you would need infinite bandwidth to push all the noise out of the audible range.

 

It depends on the modulator. But DSD256, for example, can easily exceed the resolution of 192/24 PCM, and also 192/32 PCM if you want.

 

So the noise is certainly less than what you get from microphone feeds.

 

Theoretically DSD64 can have equivalent resolution to 44.1/64 PCM.

 

1 minute ago, Zaphod Beeblebrox said:

Not many DAWs can handle DSD as is, and releases go through multiple steps, some of which involve conversion to PCM and back, before the final release comes out on DSD. After mastering and mixing, it is very hard to claim it is DSD in its original form.

 

It takes some effort to do, but I have created tooling for this, so you can combine Pyramix with HQPlayer Pro to do it without conversion to PCM. Of course, Sonoma workstations have existed since the beginning of SACD, but they are DSD64 only. And there were also SADiE DSD workstations.

 

And if you use the again very common approach of mixing and mastering in analog, the workflows are the same for both PCM and DSD; digital multitrack recording just replaces old-school multitrack analog tape.

 

1 minute ago, Zaphod Beeblebrox said:

Where is the data to support this? What is your definition of 'near perfect reconstruction', and how do you measure it?

 

A hefty safety margin, relative to hearing, on what is detectable in the digital data going to the DAC, or, as a second layer, at the DAC's analog outputs.

 

1 minute ago, Zaphod Beeblebrox said:

There is only one way to truly accurately reconstruct signals and that is using sinc interpolation

 

For perfectly band-limited, infinitely long signals... which don't exist in any music content.

 

1 minute ago, Zaphod Beeblebrox said:

How can all the variations be near-perfect reconstruction and yet sound different?

 

That is something for you to figure out. I think I know why. 😁

 

1 minute ago, Zaphod Beeblebrox said:

Where is the theory to support perfect reconstruction of band-limited signals by these methods? Yes, length mostly affects just the roll-off steepness, if the filter is not designed with an eye for time-domain reconstruction accuracy.

 

Yes, as the filter gets steeper in the frequency domain it gets worse in the time domain, spreading a single event over a longer and longer period of time, and especially producing ripple before the event ever happened. A time machine...

 

1 minute ago, Zaphod Beeblebrox said:

I have done my own analysis of the reconstruction accuracy of software upsamplers: if a method is more accurate than another before the signal is fed to the DAC, it cannot possibly be less accurate when it comes out of the DAC.

 

Assumptions... I can show you some results when I have spare time. If you push the noise shaper too hard, you just lift the high-frequency noise up without any apparent benefit, because the lower frequencies are already way below the analog noise floor. Physics will enforce noise limits on you in the form of thermal Johnson-Nyquist noise, not to mention the DAC's resolution.

 

1 minute ago, Zaphod Beeblebrox said:

And if a method is less accurate at reconstructing simple stationary signals, it cannot possibly be more accurate at reconstructing more complex signals, and a DAC measurement is not going to magically alter the results.

 

That's the dilemma, because time and frequency are inversely proportional.

 

If an event spans a couple of sample cycles, a filter that combines data around that event from several seconds before and after it will just cause all events to smear all over.

 

A one-second-long filter is already long enough to see an entire cycle of a 1 Hz waveform. Going longer, you are talking about millihertz, or DC.

 

1 minute ago, Zaphod Beeblebrox said:

Which should be given more weight: the possibility of ringing outside the audible range when the ADC or mastering caused the signal to hit Nyquist, or better reconstruction accuracy within the audible range?

 

Let's say you start with a TPDF-dithered RedBook 1 kHz tone, or a TPDF-dithered 192/24 1 kHz tone for that matter. What kind of digital oversampling filter would affect this within the audible range?
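Generating that test signal takes only a few lines; here is a sketch (numpy, my construction) of a 1 kHz tone quantized to 16 bits with TPDF dither, i.e. RedBook-style source data whose in-band floor is set by the dither itself:

```python
import numpy as np

fs, f0, n = 44100, 1000.0, 1 << 16
rng = np.random.default_rng(0)
x = 0.5 * np.sin(2 * np.pi * f0 * np.arange(n) / fs)

q = 2.0 ** -15                               # 16-bit quantization step
tpdf = (rng.random(n) - rng.random(n)) * q   # triangular-PDF dither, ±1 LSB
x16 = np.round((x + tpdf) / q) * q           # dithered 16-bit samples

spec = np.abs(np.fft.rfft(x16 * np.hanning(n)))
floor_db = 20 * np.log10(np.median(spec) / spec.max())
print(f"median noise bin relative to the tone: {floor_db:.0f} dB")
```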

 

1 minute ago, Zaphod Beeblebrox said:

Apodizing and near-perfect reconstruction are self-contradictory; you cannot throw out information that is in-band and then claim to reconstruct accurately.

 

The question is: do you want to accurately reconstruct the analog signal that once entered the ADC, or do you want to accurately reconstruct the ADC's more or less erroneous interpretation of it? You cannot simply assume that the source data is perfect, that the ADC is perfect, that there is no heavy digital clipping, and so on.

 

1 minute ago, Zaphod Beeblebrox said:

Also, I see the out-of-band image rejection and steep transition as by-products of near-perfect reconstruction rather than the goal of the filter design.

 

The goal of oversampling filters is precisely to remove out-of-band images. By perfectly removing the images you gain perfect reconstruction. If there is any image left, you know there is digital stair-stepping left, and it is not the accurate band-limited analog waveform.

 

1 minute ago, Zaphod Beeblebrox said:

No, the point is to demonstrate that the upsampling and noise-shaping process does not add noise of its own and can near-perfectly reconstruct very small signals (or large signals) in the audible range. Better reconstruction cannot possibly hurt. The output, as the graph shows, is still 32 bits, not 256 bits or 4096 bits.

 

Now get started with regular TPDF-dithered RedBook as source data.

 

1 minute ago, Zaphod Beeblebrox said:

A lot of time and research went into the use of arbitrary-precision arithmetic.

 

From my perspective, that is rather relative, as for me "a lot of time and research" is measured in decades... 😅

 

1 minute ago, Zaphod Beeblebrox said:

It can be used to improve the accuracy of the reconstructed signal: in the case of upsampling PCM, it keeps the original samples unaltered and then reconstructs the intermediate samples at high precision (128 or 256 bits).

 

The problem is just that those samples already have embedded errors.

 

1 minute ago, Zaphod Beeblebrox said:

Also, there is no point in using higher precision unless the noise shapers can be designed to make use of it; at some point bandwidth becomes a limitation and out-of-band noise from the noise shaper becomes too high.

 

Yes, there is no point in using noise shapers to push out more noise than what your audio-band analog noise floor otherwise allows. Going beyond that will just lift up the out-of-band noise without benefit.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 hour ago, Zaphod Beeblebrox said:

I have done my own analysis of the reconstruction accuracy of software upsamplers: if a method is more accurate than another before the signal is fed to the DAC, it cannot possibly be less accurate when it comes out of the DAC. And if a method is less accurate at reconstructing simple stationary signals, it cannot possibly be more accurate at reconstructing more complex signals, and a DAC measurement is not going to magically alter the results.


Really well said, but with one caveat: the DAC itself could make it less accurate at the output, but of course that is not the fault of what was fed to the DAC.
 

If an algorithm designer is really proud of what he’s accomplished, I think he would want to focus eyeballs on how good the signal looks just after upscaling.  
 

If it's more a matter of optimizing for a particular DAC's topology, then certainly the DAC's output can be introduced into the conversion. But if the user can't easily find advice on how to capitalize on this for their DAC, I'm not sure I see the point of focusing on that data.

Digital:  Sonore opticalModule > Uptone EtherRegen > Shunyata Sigma Ethernet > Antipodes K30 > Shunyata Omega USB > Gustard X26pro DAC < Mutec REF10 SE120

Amp & Speakers:  Spectral DMA-150mk2 > Aerial 10T

Foundation: Stillpoints Ultra, Shunyata Denali v1 and Typhon x1 power conditioners, Shunyata Delta v2 and QSA Lanedri Gamma Revelation and Infinity power cords, QSA Lanedri Gamma Revelation XLR interconnect, Shunyata Sigma Ethernet, MIT Matrix HD 60 speaker cables, GIK bass traps, ASC Isothermal tube traps, Stillpoints Aperture panels, Quadraspire SVT rack, PGGB 256

39 minutes ago, Miska said:

Now get started with regular TPDF dithered RedBook as source data.

Yes, I do that all the time, be it CD or hi-res or DSD; the tools to do that are built right into PGGB so anyone can check, and it can be done on any real music track! It is possible to preview the frequency content of the input vs the upsampled version; the source and upsampled versions are indistinguishable with PGGB (in band). I also released the RASA app for the same reason, to make it easy to analyze any track and compare it to the upsampled version.
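For readers without RASA, the same kind of in-band comparison can be sketched generically (scipy, not RASA itself; file names are placeholders, and an integer rate ratio is assumed):

```python
import numpy as np
import soundfile as sf
from scipy import signal
import matplotlib.pyplot as plt

src, fs1 = sf.read("track_source.wav")
ups, fs2 = sf.read("track_upsampled.wav")    # assumes fs2 is a multiple of fs1
if src.ndim > 1: src = src[:, 0]
if ups.ndim > 1: ups = ups[:, 0]

# match the analysis bin width by scaling the segment length with the rate
f1, p1 = signal.welch(src, fs1, nperseg=1 << 14)
f2, p2 = signal.welch(ups, fs2, nperseg=(1 << 14) * (fs2 // fs1))

plt.semilogy(f1, p1, label="source")
plt.semilogy(f2[f2 <= fs1 / 2], p2[f2 <= fs1 / 2], label="upsampled (in band)")
plt.xlabel("Hz"); plt.legend(); plt.show()
```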

 

Unfortunately, the same cannot be said about other methods... Below is an example where band-limited white noise was upsampled by an apodizing filter and the reconstruction error is visible in the audible range. One does not need extremely small numbers to show inaccurate reconstruction; it happens all the time with most upsamplers that do not pay as much attention to reconstruction accuracy.

[attached image: zoom_tool.jpg]

 

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

21 minutes ago, Zaphod Beeblebrox said:

Unfortunately, the same cannot be said about other methods... Below is an example where band-limited white noise was upsampled by an apodizing filter and the reconstruction error is visible in the audible range. One does not need extremely small numbers to show inaccurate reconstruction; it happens all the time with most upsamplers that do not pay as much attention to reconstruction accuracy.

 

I get the immediate feeling that you didn't account for delay differences. Remember that the delay can even be sub-sample. Be careful not to fool yourself; it is not trivial to align time windows (you can look at DeltaWave and some other tools regarding that).
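The alignment step described here can be sketched like this (my own construction; real tools such as DeltaWave do considerably more): estimate the relative delay via FFT cross-correlation, with a parabolic fit around the peak for the sub-sample part, before ever computing a difference signal.

```python
import numpy as np

def estimate_delay(ref, test):
    """Estimate how many samples `test` lags `ref` (can be fractional)."""
    n = 1 << int(np.ceil(np.log2(len(ref) + len(test))))
    R = np.fft.rfft(test, n) * np.conj(np.fft.rfft(ref, n))
    c = np.fft.irfft(R, n)
    c = np.roll(c, n // 2)                  # put zero lag at the center
    k = int(np.argmax(np.abs(c)))
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]   # parabolic sub-sample refinement
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - n // 2) + frac

t = np.arange(8192)
ref = np.sin(0.05 * t)
test = np.sin(0.05 * (t - 3.3))             # synthetic 3.3-sample lag
print(estimate_delay(ref, test))            # ≈ 3.3
```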

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

On 4/9/2023 at 9:59 AM, Kalpesh said:

Houston, we have a problem, a big one IMO:

 

Same album, two tracks in album order. The first one is mainly spoken word with sparse noises; the second one has heavy bass and drums. I applied a fixed gain of -20.1 to both. The first one is extremely loud, and if I listen to it at an OK level then the second track is much, much too low. With HQP, if I listen to the first track at an OK level then the second track explodes, just as obviously intended.

 

 

[attached file: pggb_album_analysis_256_v1.csv]

For those of you using PGGB EQ with different EQs for the left and right channels: there is a bug that has gone under the radar in the past few releases where the left and right channel EQs may be swapped (this will not affect those who use the same EQ for both channels). I am working on fixing it and hope to release the patch this weekend. Just letting you know so that you do not process a bunch of tracks in the meantime.

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

3 hours ago, Zaphod Beeblebrox said:

For those of you using PGGB EQ with different EQs for the left and right channels: there is a bug that has gone under the radar in the past few releases where the left and right channel EQs may be swapped (this will not affect those who use the same EQ for both channels). I am working on fixing it and hope to release the patch this weekend. Just letting you know so that you do not process a bunch of tracks in the meantime.


 

Pardon me, I'm a novice and new to all this.

I use HQPlayer with my ΔΣ DACs, with a clear improvement in sound.

I understand the improvement and the emphasis on the upsampling method's contribution to sound. But what about the analog part of the DAC?

I've tried Chord DACs and they don't sound 'right' to me, mostly timbre-wise. I also have an ADI Pro Black Edition sitting here next to me and it doesn't sound 'right' either.

I admire your and Miska's efforts and what you have contributed. It makes my listening more enjoyable.

By the way, I haven't tried PGGB yet.

The DACs I love both go up to 192 kHz only, but they sound great.

That is why I ask about the other aspects of DACs. OK, the math is wonderful, but I am confused about the analog output stages and whatever else contributes to the sound. For example, I totally disagree with ASR's view that DACs which measure about the same, or beyond a certain point, should sound the same. Complete crock.

So having said all that, now that you have (and have given us users) better math, what about the rest of the DAC's 'sound'?

I got the RME Pro to see how well it sounds with HQPlayer, but as a DAC on its own it doesn't sound good, IMHO of course. We all have different tastes. I've tried the Venus II and didn't like it at all.

I hope all this babble makes sense.

I'm basically asking: what about the other analog stages in the DAC and the parts which contribute to the sound?

These are legit questions; I did not mean any of that in a snarky manner. As a 40-year musician I'm trying to learn about this stuff to make my listening more enjoyable, and besides, it's fun and satisfies my curiosity. DACs are pretty fascinating; how can one sound so much different from another?

Thanks in advance!
 

 


Thanks @Kalpesh for your patience in helping me track down the issue. 

 

Patch version of PGGB 256 Reference Edition:

I just released a patch - PGGB 256 v5.2.53 for the Reference Edition.

 

Release notes

Version 5.2.53: The Reference Edition (PGGB 256)
    SQ Changes: (None)
    Bug Fixes:
        * Fixes a bug that caused the L and R channel EQs to be swapped.

 

Who should upgrade: anyone who is using or planning to use PGGB EQ. If you use the same EQ for the left and right channels, you are not affected by the bug.

 

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

1 hour ago, Atriya said:

@Zaphod Beeblebrox is there any chance that PGGB will implement a headphone crossfeed? I sometimes use one, and it would be nice (and probably more accurate) to have it done in PGGB itself instead of after.

I will consider it for a future release if there is enough demand. Any specific implementation you are looking at?

ps: I remember you enquired at some point about multi-channel support. PGGB-IT! does that now.

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

18 minutes ago, Zaphod Beeblebrox said:

I will consider it for a future release if there is enough demand. Any specific implementation you are looking at?

ps: I remember you enquired at some point about multi-channel support. PGGB-IT! does that now.

I don't have a specific implementation in mind; flexibility is usually helpful.

 

Thanks for the tip on PGGB-IT! 

48 minutes ago, Zaphod Beeblebrox said:

What is with the insanely small numbers related to small-signal accuracy and noise shaping?

When I post a graph like below, it is often met with ridicule and skepticism.

 

[attached image: pggb_plot_preview_ns.jpg]

 

I would like to explain why we can still perceive the benefits of noise shaping at extremely low levels, like -600 dB, even though critics argue otherwise using these three statements:

  1. It's illogical, because the original CD content cannot represent such tiny signals.
  2. It's pointless, because the DAC's noise floor is much higher than -600 dB, closer to -150 dB.
  3. Nobody can hear that low, given that the dynamic range of human hearing is between 120 and 140 dB.

Let's address the first point:

In simple terms, upsampling involves inserting additional samples to increase the sampling rate. To double the sampling rate, one needs to introduce a single sample between two consecutive samples.

[attached image]

 

For example, consider upsampling a 44.1 kHz 16-bit CD to 88.2 kHz 24 bits. Although the original CD samples have 16-bit resolution (the smallest non-zero value they can store is at -96 dB), we can calculate the intermediate samples for the upsampled signal at any resolution, since it's done in software (typically 64 bits). We're not limited by the 16-bit constraint, as -96 dB isn't the lowest non-zero value we can create; the intermediate samples in the original analog signal we're trying to reconstruct have infinite resolution. Using 24-bit resolution for the intermediate samples is better than CD quality, for example, and it's easy to demonstrate that the smallest reproducible signal is then at -144 dB. At this point, arguing that "the original CD couldn't represent -144 dB" doesn't make sense, as it misses the point. What we're showing is that intermediate samples can be reconstructed with at least -144 dB resolution (or 24 bits of accuracy); this is not constrained by the source data. And even at 24 bits, one cannot claim near-perfect reconstruction.

 

In the case of PGGB with the Optimal noise shaper, 16fS rate, and 256-bit precision, the noise floor in the audible range is near -740 dB. That means the intermediate samples can be reconstructed with a resolution of about 123 bits, allowing for more accurate reconstruction without being limited to 24-bit bins. The -650 dB value is a more conservative estimate based on the analysis of small signals. 123 bits of resolution is a lot closer to near-perfect reconstruction than 24 bits.
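The bit-depth arithmetic behind those numbers is simple; each bit is worth about 6.02 dB (20·log10 2):

```python
import math

db_per_bit = 20 * math.log10(2)   # ≈ 6.0206 dB per bit
print(96 / db_per_bit)            # ≈ 15.9 bits  (CD's -96 dB floor)
print(144 / db_per_bit)           # ≈ 23.9 bits  (-144 dB)
print(740 / db_per_bit)           # ≈ 122.9 bits (-740 dB noise floor)
```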

 

To achieve perfect reconstruction accuracy, one would need infinite processing precision and resolution. It's easy for our brains to differentiate between a real instrument and a recording being played back, but it's difficult to determine the resolution beyond which our ears can't differentiate between reality and playback; we do not know where that limit is.

 

Regarding the second point: critics argue that there's no benefit to having -650 dB accuracy in the audible range because no DAC has such a low noise floor. However, this overlooks the fundamental concept behind noise shaping. If a noise shaper encodes 256-bit-precision information into 24 bits, and if the DAC's noise floor is less than -144 dB, the idea is that it will accurately reproduce the information in the 24 bits of noise-shaped data. Even though the DAC's physical dynamic range cannot be changed, the perceived dynamic range in the audible range is increased, which is what happens whenever the noise floor in the audible range is decreased.


If single-bit DSD64 can encode near-CD-quality audio using just one bit, it shouldn't be surprising that we can encode 128 bits of precision within 24 or 32 bits.
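As a toy demonstration of the concept (a first-order error-feedback loop, vastly simpler than any production noise shaper, and not PGGB's design): requantize to fewer bits while pushing the quantization error toward high frequencies, where it can later be filtered or ignored.

```python
import numpy as np

def noise_shape_1st(x, bits):
    """Requantize x (in [-1, 1)) to `bits` with first-order error feedback."""
    q = 2.0 ** (1 - bits)          # quantizer step
    y = np.empty_like(x)
    e = 0.0                        # quantization error from the previous sample
    for i, s in enumerate(x):
        v = s - e                  # subtract the fed-back error
        y[i] = np.round(v / q) * q
        e = y[i] - v               # error to feed into the next sample
    return y

fs = 705600                        # e.g. 16x the RedBook rate
t = np.arange(1 << 16) / fs
y = noise_shape_1st(0.5 * np.sin(2 * np.pi * 1000 * t), bits=24)
# an FFT of y shows the in-band floor pushed down relative to flat
# quantization noise, at the cost of noise rising toward Nyquist
```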

 

Lastly, the third point: no one can hear that low, given that the dynamic range of human hearing is between 120 and 140 dB. But it's not about detecting the smallest sound; the sounds we hear in real life can change by infinitesimally small amounts within the audible range. It's about the analog-like resolution of the reconstructed signal, where the changes in level are not 'pixelated' but continuous.


In summary, the goal of an upsampler is to reconstruct the signal at a higher rate, as close to the original analog signal as possible. To do this perfectly, infinite resolution would be needed. Computing the intermediate samples at very high precision and then delivering that precision (~123 bits) in 32-bit PCM through noise shaping is a good way to approximate it.


This was extremely informative. Thank you. 

52 minutes ago, Miska said:

The point is more about what makes sense: the noise floor in the 20 kHz band won't move below the analog noise floor no matter what you do. Beyond that point, the noise floor outside the 20 kHz band will just move up without any benefit in the sub-20 kHz region.

I never said the DAC's noise floor will change; I even mentioned the physical dynamic range of the DAC remains unchanged.

 

The question is: if a DAC has an analog noise floor of -170 dB, can it faithfully reproduce up to 24 bits? If the answer is yes, then it should faithfully reproduce the 24-bit noise-shaped output. If you have equipment sensitive enough to measure that back at 24-bit precision without the measuring equipment adding its own noise, you should be able to do an FFT and get the very same graphs I show.

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

23 minutes ago, Miska said:

No, remember that the analog noise floor is practically flat; IOW, in the best case it is equivalent to that of 22-bit TPDF dither.

If that is the case, why even provide noise-shaping options like LNS15? Shouldn't 24-bit dither be good enough?

Author of PGGB & RASA, remastero

Update: PGGB Plus (PCM + DSD) Now supports both PCM and DSD, with much improved memory handling

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

SystemTT7 PGI 240v + Power Base > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ /LCD-5/[QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

