
A toast to PGGB, a heady brew of math and magic



Zaphod Beeblebrox, my hat is off to you and also to the folks who helped bring the PGGB software to fruition! And as a bonus for me, I love all things Hitchhiker's Guide to the Galaxy (my favorite T-shirt below).

 

My question is: can I run your PGGB SW on an underpowered machine if I don't care how long the conversions take? I don't (yet) have a powerful enough computer to run your SW. I have a loaded MacBook Pro M1 (16GB RAM, 2TB SSD, plus many TB of external SSDs), but even with a virtual machine that's well under the specs your website lists for the conversions. So can your SW run over a period of, say, a month or more to convert a library (~2000 titles), or will it simply not run without a full-spec machine?

 

I also have a SonicTransporter i9 with opticalRendus running Roon (I'll look to move to something like a Taiko server soon) that drives my Chord DAVE DAC (with a Sean Jacobs DC4 LPS - thanks Nenon!), so I think I'll benefit a lot from the PGGB SW. I have an MScaler that I'd still need to use for streaming content unless I shift over to HQPlayer in the future.

 

Thanks for your thoughts!


3 minutes ago, Zaphod Beeblebrox said:

I too have an M1 MacBook Air with similar specs, and only recently did Parallels announce support for running Windows on ARM; I plan to try running PGGB on it at some point. It is not yet clear whether x64 applications are supported.

 

If you are going to try CDs and not DXD or DSD rates, you may be OK with 16GB of RAM - just limit it to 512M taps (still plenty for 90% of Redbook). If you provide plenty of virtual memory (128GB), it will churn through. You will get a pretty good idea if you just download the trial version.

That's a great idea - to just use fewer taps and have large virtual memory (on an SSD). Most of what I convert would be CD rips. Now, if I do this slow conversion in a Mac virtual machine (I have VMware Fusion), could I pause the large conversion project and then start it up again without having to start over completely?

 

Again, if I do move to a separate server that runs Windows (like a Taiko, from what I understand), then perhaps I could just run the conversion directly on that server - unless there's something I don't understand about the process...

 

Lastly, since I'm using a DAVE DAC, I assume I'd want to convert everything to 16FS - or maybe even all to 768K (vs. 705.6K)?
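For what it's worth, the 705.6K vs. 768K question is just base-rate arithmetic - the two figures come from the two sample-rate families. A quick sanity check (Python):

```python
# The two candidate 16FS rates come from the two base-rate families.
redbook = 44100      # CD / Redbook base rate, Hz
family_48k = 48000   # 48K family base rate, Hz

print(redbook * 16)     # 705600 -> the "705.6K" figure, a clean 16x of 44.1K
print(family_48k * 16)  # 768000 -> "768K", an integer multiple only for 48K-family sources
```

Upsampling a 44.1K rip to 768K would be a non-integer ratio, which is why 705.6K is the natural 16FS target for CD material.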

 

One other factor for me: I also have another audio system with a Devialet 120 (natively 192K internally), and Roon can automatically downconvert the DXD-upsampled versions from PGGB to 192K (a shame to throw away the upsampling, but the Devialet has to accept the data). I could of course do two separate upconversions, but then it becomes a logistics issue to manage, store, and source the correct version. So I'm wondering: would a full DXD upconversion via PGGB, downconverted automatically by Roon to 192K at that endpoint, sound much different from a separate PGGB upconversion directly to 192K (with no Roon downconversion)? Of course, for the DAVE I'd want the full DXD version...
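One wrinkle with the Roon downconversion path: 352.8K (DXD) to 192K is a non-integer ratio across the two rate families, so the resampler has to interpolate and decimate rather than simply drop samples. The arithmetic, just as an illustration:

```python
from fractions import Fraction

dxd = 352800       # 8FS of the 44.1K family ("DXD"), Hz
devialet = 192000  # the Devialet's internal rate (48K family), Hz

# The reduced conversion ratio Roon's resampler must realize:
print(Fraction(devialet, dxd))  # 80/147 - interpolate by 80, decimate by 147
```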

5 minutes ago, Zaphod Beeblebrox said:

PGGB is designed to batch process, and the best way to do that is to process a dozen albums (or more, depending on how fast they go) each night so they are ready for listening the next day. You need not point it at your whole library. That said, PGGB is 'idempotent': you can close it and it will pick up where it left off if you choose the 'skip' option. You may also pause it.

 

Edit: My preferred way is to move a bunch of albums from my library into a 'work' folder and point PGGB at that folder. Once they are done, I move a fresh set of albums into the work folder for remastering. You can also point PGGB at multiple different folders.
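The 'work folder' workflow just described can be sketched in a few lines. Here `pggb_convert` is a hypothetical stand-in for invoking the actual converter on one album; the skip check is what makes re-running the batch safe (the 'idempotent' behavior mentioned above):

```python
# Sketch of a skip-aware batch pass over a work folder.
# pggb_convert() is a hypothetical placeholder for the real conversion step.
from pathlib import Path

def remaster_folder(work_dir: str, out_dir: str) -> list[str]:
    done = []
    out = Path(out_dir)
    for album in sorted(Path(work_dir).iterdir()):
        if not album.is_dir():
            continue
        target = out / album.name
        if target.exists():  # the 'skip' option: already remastered, leave it
            continue
        # pggb_convert(album, target)  # hypothetical call to the converter
        target.mkdir(parents=True)
        done.append(album.name)
    return done
```

Running the function a second time converts nothing, which is exactly why an interrupted overnight batch can simply be restarted.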

Thanks for clarifying! I'm more of a big-batch kind of guy and am thinking I may just buy a separate PC (I have a need for one anyway) and run it continuously to convert my entire library - and I don't really care how long it takes unless it's many months or more...

So back to my earlier question: could I use just a modest PC (but with tons of SSD storage) and let it run, albeit slowly, for a long time? Or is there a true limit below which things simply don't work anymore, regardless of speed?

 

And related: how long does a given 44K track take to convert on a very high-performance machine (like a Taiko Extreme)?

 

Thanks!

5 hours ago, austinpop said:

Exactly. We find a good metric to track processing speed is to use the ratio of track time/processing time. On my machine (above), I easily get 3.2-3.8x for Redbook tracks. Even "heavy" loads like long DXD tracks come in at over 2x.
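The metric quoted above is simple enough to pin down in code - a ratio above 1x means processing runs faster than real time:

```python
# Track-time / processing-time: the speed metric described above.
def speed_ratio(track_seconds: float, processing_seconds: float) -> float:
    return track_seconds / processing_seconds

# e.g. a 7-minute Redbook track processed in 2 minutes:
print(speed_ratio(7 * 60, 2 * 60))  # 3.5, within the 3.2-3.8x range quoted
```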

Wow, the conversion is faster than I thought! It makes me wonder: given that the SW can convert this fast, perhaps some ability to play back streaming music might be in the cards at some point soon? PGGB has no playback capability (yet - perhaps never, but maybe somehow licensed to Roon, Taiko, or others), and while the first track of an album would take a while to convert/buffer, the follow-on tracks could be converted by the time you reach them. The noise of the machine doing the conversion is of course a separate issue, but the ability to listen to Qobuz with this sort of upsampling would be really amazing!

8 hours ago, Zaphod Beeblebrox said:

Yes, it looks like you read my mind :) PGGB is available in SDK form for OEMs, for that exact purpose.

Developing a player from scratch that plays both local files and streaming audio is a huge undertaking, and doing it the right way (managing buffers, load balancing, keeping any processing noise to an absolute minimum, etc.) is even harder.

 

One option I had considered is for PGGB to sit between an existing player and an endpoint, like a plugin. The problem with this approach is that the latency would be unacceptable: PGGB in the middle only has access to the stream at the rate at which the host application sends the data. For this reason, the only way I would implement PGGB in real time is with direct access to the full track or multiple tracks (this is possible even for streamed data, as most players cache tracks in advance).

 

PGGB in SDK form uses a different framework for performance and is lightweight; my tests indicate that after just a few seconds of startup delay, gapless playback is possible.

It would be truly awesome if someone grabbed the PGGB tech and ran with it for a player. Personally, I'd love to see Roon do this, and I imagine it's fully possible, but perhaps the biggest challenge would be convincing them that their upsampling tech is not as good as PGGB's - or, even once they're convinced, that the difference is worth the effort. Still, they are smart folks and maybe could license your tech somehow and fold it into Roon as a "super-enhanced upsampling" option right in the UI! And if not them, hopefully someone else!

5 hours ago, austinpop said:

One of the reasons PGGB outputs WAV is because the 32-bit, 16FS files needed for the DAVE cannot be stored in FLAC containers, since FLAC is limited to 8FS (352.8/384 kHz). WAV offers the ultimate flexibility, since it can accommodate higher sample rates like 16FS and 32FS, as well as integer and floating point samples. Once we discovered that WAV files can contain sufficient metadata tags to be useful, we just went with it.

I'm curious: if a 24-bit FLAC is available (instead of 32-bit 16FS for the DAVE), how much difference would there be (if any) between a 24-bit and a 32-bit version of the PGGB 16FS upsample? It seems to me there's not much relevant data beyond 24 bits once the 32-bit upsampling has happened and the result is truncated to 24 bits?
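As a rough intuition for the 24-bit vs. 32-bit question, the standard figure for an ideal N-bit quantizer's theoretical SNR is 6.02N + 1.76 dB - and even the 24-bit figure sits below the noise floor of any real analog chain:

```python
# Theoretical SNR of an ideal N-bit quantizer (standard 6.02N + 1.76 dB figure).
def snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(round(snr_db(24), 2))  # 146.24 dB
print(round(snr_db(32), 2))  # 194.4 dB - far beyond any analog playback chain
```

This is about quantization headroom only; whether the extra bits matter audibly in a given DAC is a separate (and much-debated) question.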

  • 4 weeks later...
6 hours ago, austinpop said:

A few PGGB findings on DSD albums and provenance in general

 

There are indeed some albums that were recorded, mixed, edited, and mastered all in the DSD domain. But they are few, and none of these have found their way into my "albums I love and must own" list.

 

The statement that few albums stay in DSD for the entire path is so true, and so hard for most DSD enthusiasts to acknowledge. Some folks (using a DAC that sounds best with DSD and/or doesn't benefit from PGGB) might still prefer the sound of a DSD final version, but that may say more about their specific DAC than about the information truly available in the recording.

On 5/28/2021 at 5:53 AM, Zaphod Beeblebrox said:

With 16GB of RAM you will have to limit it to CDs and 256M taps. Big Sur is not a requirement; I suggested it for future-proofing reasons. Mojave or Catalina will be OK too. RAM is what PGGB requires most, and on a Mac, unlike Windows, it is not possible to change virtual memory/swap space. If you can, go up to 32GB for hi-res and 40-64GB for DSD. You can do DSD at 32GB, but may have to limit it to 512M taps if you run out of memory.

 

I'm wondering how much difference in sound quality (subjective, I know) there is at 16FS with a DAVE DAC between 256M taps and the maximum number of taps (if I had a machine with 128GB RAM) for 44K/16 originals? I have an MBP with 16GB of RAM, though I'll be upgrading this year to the coming new M2 MBP with 64GB RAM; most of what I'd upsample is 44K.

22 minutes ago, austinpop said:

 

There is certainly a difference. As I pointed out in my last post, the magnitude of the difference grows with track length. Take the example of a 10-min track: it would have processed at 423M taps if you had had more RAM, but instead processed at 256M due to your limited RAM. The difference between a 423M-tap and a 256M-tap filter would be quite minor. But with a 48-min track, which would have processed at 2048M taps given adequate RAM but at 256M taps on your system, the sonic delta between the 2048M-tap file and the 256M-tap file would be noticeable.

 

So keep that in mind - a lot depends on track length. 

 

If your favored genres are primarily 16/44.1 with average track lengths of 5 minutes, you may never need more than 16GB to be perfectly content with PGGB. But if you listen to classical or other genres where long tracks are common, investing in more RAM is very worthwhile.
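Reading the two data points in the quote (10-min track → 423M taps, 48-min track → 2048M taps), the tap count scales roughly linearly with track length at around 42-43M taps per minute until RAM caps it. A rough model - my own extrapolation from those quoted figures, not PGGB's actual formula:

```python
# Rough linear model inferred from the quoted numbers - NOT PGGB's real algorithm.
TAPS_PER_MINUTE = 42.7e6  # ~423M/10min and ~2048M/48min both land near this

def taps_used(track_minutes: float, ram_cap_taps: float = 256e6) -> float:
    """Taps the track 'wants', clipped to what the RAM cap allows."""
    return min(track_minutes * TAPS_PER_MINUTE, ram_cap_taps)

print(taps_used(5) / 1e6)   # 213.5 - a typical 5-min track fits under a 256M cap
print(taps_used(48) / 1e6)  # 256.0 - a long classical track hits the cap
```

Which matches the advice: short pop tracks barely notice a 16GB machine, long classical tracks are the ones that get clipped.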

Thanks for this info. I listen to a very wide variety of music - much of it shorter tracks - but I do listen to quite a bit of classical, so that'd be an issue. I'd like to avoid multiple rounds of upsampling my library, so I might only process the albums with shorter tracks and wait on the longer ones until I get my new system (likely the coming MBP with *rumored* 64GB of RAM and M2 chip). I don't really want yet another computer just for processing PGGB files, at least not at this point.

4 hours ago, austinpop said:

 

As someone who just redid almost 300 albums because I switched from 32-bit to 24-bit files (to use the SRC-DX bridge with my DAVE), redoing the upsampling is not that big a deal. If you wait for a *rumored* machine, you're just depriving yourself. Even capped at 256M taps, PGGB is a massive improvement over the native file!

 

BTW - during the initial development, I must have redone my albums 4-5 times as ZB made incremental improvements. I did not regret it one iota.

 

Yes, redoing the upsampling (with more taps) is certainly possible, but I have ~3000 ripped CDs plus some other downloads to process, so it'd be quite a bit to redo. I'll likely just target the low-hanging fruit soon (albums with short tracks, or even some longer ones I'm fine with redoing later) and do the rest afterward. It's not as if I'm listening to all 3000 albums right now anyway, though I'd want to save the "big" conversion until I can do it more optimally.

 

P.S. My apologies for the duplicate post - my earlier post quoted the wrong post, so I redid it correctly here. BTW, is there any way to delete a post one has made?

  • 2 weeks later...
5 hours ago, austinpop said:


Ugh. I know people love to hate on Windows, but at least on this issue, I’ll take the explicit allocation of paging space in Windows any day.

 

Don’t get me wrong, my laptop is an MBP, so I like MacOS too.

I'm a new user of PGGB (have been testing and just bought my license). I love the resulting sound improvements! (DAVE DAC with Sean Jacobs DC4, doing 16FS; evaluating 24-bit vs. 32-bit.) But I'm currently running it on an M1 MacBook Pro (>1TB of free SSD space, 16GB of RAM), so it's an underpowered machine for PGGB, and I'm wondering if you have any thoughts on:

 

1. The best way to optimize this Mac (regardless of processing speed) for higher taps. Currently, on classical pieces I'm unfortunately limited to 32M taps before the dreaded "out of application memory" error, often followed by a PGGB crash. Most "regular" albums with shorter tracks allow 256M taps with no problems. Dang...

 

2. Does a virtual Windows machine on this M1 Mac (assuming lots of available SSD space for swap files) allow much higher taps, even on long tracks? Speed is not a priority at all; I don't care how long processing takes, within reason.

 

3. What might be a good, very small Windows computer (laptop or small-form-factor desktop)?

 

Thanks again austinpop for all of your help in the past on PGGB and other issues! 

1 hour ago, ray-dude said:

I have the same issue on a 2012 Mac Pro with 128GB (Intel) when processing some large DSD files. The crash is the watchdog giving up the ghost: I suspect the system is asking for swap but not getting it fast enough, so the watchdog barks.

 

If there were preallocated swap (as there is on Windows), we wouldn't be having this issue.

 

I do not have this issue when running Windows in VMware on the same Mac. I preallocate the swap file in Windows, and Bob's your uncle.

Thanks for the perspectives on the Mac issues! I'm thinking maybe the M1 Mac is the problem too. My current M1 MacBook Pro has a single 2TB SSD with 1TB still free. I have VMware Fusion but haven't run it yet on this M1 Mac. Maybe I'll do that, though I'd much prefer staying on the Mac directly and not running a VM. I do plan to upgrade to the "likely coming soon" M2 (or maybe M1X) MacBook Pro that's rumored to have up to 64GB of RAM - still a rumor, though it seems likely, and maybe not out for a few months.

 

I'm going to do some more testing to see what's up, but for now I've been able to do 32M taps just fine for long tracks. Not near the desired number, but a lot more than the 1M taps of my MScaler. And for normal (5-minute or so) tracks I can do 256M taps.

 

I'll report back later after more testing. I'm still new to PGGB and maybe there's something more I can try that I haven't. Thanks!

34 minutes ago, Zaphod Beeblebrox said:

If you look at the log (.log file), it will say how much RAM it is using for a track; if it says 16M of 16M, then you are most likely to run into an issue if your Mac won't create more swap. I wonder, if you wait for a long time, does it ever start processing?

Thanks ZB, I've been meaning to reply directly to you via email. I see the message:

 

Estimated memory per worker: 2GB of 16GB RAM

Suggested max workers: 8
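Incidentally, the suggested worker count in that log excerpt appears to be just total RAM divided by the per-worker estimate - a trivial check:

```python
# "Suggested max workers" looks like available RAM over the per-worker estimate.
def max_workers(total_ram_gb: float, per_worker_gb: float) -> int:
    return int(total_ram_gb // per_worker_gb)

print(max_workers(16, 2))  # 8, matching the log message above
```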

 

I also frequently see the message "Error in function PGGB.Continue() at line 906."

Another thing: sometimes the process restarts, but other times it never does, and sometimes the whole app crashes. I've attached a log from a failed upsampling attempt...

 

PGGB_album_debug.log

 

So it does appear that the Mac is not able to do this particular track with anything more than 32Mtaps. Maybe an M1 Mac issue, maybe something else... 

 


I love the "to OOM it may concern"! Ha! 

 

I'm doing some more testing (various numbers of taps and "workers") on my M1 Mac and sharing files with ZB for evaluation. No clear resolution yet, though part of the problem may be the M1 Mac itself vs. an older Intel Mac. More news to come after further testing by me and further evaluation by ZB.

 

And I'll second @kennyb123's comments about iStat Menus!

8 hours ago, Zaphod Beeblebrox said:

I just released another patch, v1.2.07, with further improvements to stability while remastering DSD.

I've been testing v1.2.07 on several "challenging" albums (i.e., 44K/16 classical albums with long tracks) on my M1 MacBook Pro (16GB RAM, 2TB SSD with 1TB free), and I can reliably do 256M-tap (1 worker) conversions (16FS, 24-bit). I'm using a Chord DAVE DAC (with SJ DC4 LPS), currently through a Chord MScaler: since I also use the same setup for streaming audio and PGGB can't process streaming tracks, an opticalRendu feeds the MScaler, which then feeds the DAVE via two BlackCat Tron BNCs.

 

So even though the MScaler also upsamples everything to 7xxK, I find the PGGB tracks sound quite a bit better than what the MScaler produces - and of course both sound a lot better than the original 44K/16 tracks.

 

Side question, in case anyone has a thought: given that I still have the MScaler in the chain (I eventually plan to remove it), should I use PGGB's adaptive noise shaping or not? Zaphod thought it's likely best to use it, and to use 24 bits (given the MScaler is in the chain and I'm not going DAVE-direct via USB). But neither of us is sure whether the MScaler applies its own adaptive noise shaping when it receives a track already at a 7xxK sample rate - in which case the MScaler basically does nothing - or whether it still does something. I can of course test which I like better, as I will with the HF noise filter (currently the default Moderate, plus Transparency Natural and Presentation Transparent).

 

Another question I'm pondering: does the MScaler essentially do what an SRC-DX USB-to-dual-BNC converter does in terms of benefiting the DAVE's sound by not requiring its USB input? Has anyone sonically compared an MScaler to an SRC-DX, assuming the MScaler isn't doing anything (both fed PGGB 16FS 24-bit tracks)?

 

I'm so impressed with the SQ of PGGB tracks, even at "only" 256M taps! Before PGGB, I was at a point in my listening where I really didn't enjoy much classical music because it just didn't sound right (even with the MScaler) - but now with PGGB it does!

36 minutes ago, Fourlegs said:

I also retain the MScaler (with a dedicated DC4) in my system for streaming, but I much prefer 32-bit PGGB files going direct to the DC4 DAVE by USB rather than through the MScaler. I do not find 'pass-through' on the MScaler to be transparent.
 

It is only when using the SRC-DX that I then prefer 24-bit PGGB files going to the DC4 DAVE via dual BNC.

 

 

Thanks for the perspective! I'm wondering: do you then keep two sets of PGGB upsamples, one at 16FS/32-bit and another at 16FS/24-bit? Also, what BNC cables do you use? I could run an entirely separate feed to the DAVE just for PGGB tracks, but since the opticalRendu makes a big improvement in SQ for me (I'm not yet at the point of having a Taiko, though I'd like to get there; I currently use Roon via a SonicTransporter i9 as the source), that would mean yet another opticalRendu/LPS feeding the DAVE or an SRC-DX directly, plus two more BNC cables. That's a lot more money and more space (I have limited space in my current setup and, unfortunately, not unlimited funds).

 

Stated differently, to put things in perspective: is the improvement from PGGB upsampling greater than the improvement from removing the MScaler from the chain (going from source directly to the DAVE, or via SRC-DX)? Or is this comparing apples to oranges?


FLAC vs. WAV vs. AIFF as the INPUT source file for PGGB - does it matter (as long as it's lossless, of course, and as high-resolution as available)?

There are lots of posts in this thread about the final PGGB output format - how WAV (or maybe AIFF) is best, and how FLAC doesn't support 16FS (which is what I'll use for my DAVE) - and those all make sense. However, I'm about to purchase/download lots of music I was previously just streaming via Qobuz, and I'm wondering if there's any reason NOT to use FLAC for these downloads (to save space and perhaps optimize tags), given that I'll only be listening to the final PGGB-processed 16FS output file (which will always be WAV for me). I've done some testing and can't hear a difference, but maybe others have tested more extensively and there's a clear consensus that PGGB is indifferent to FLAC vs. WAV vs. AIFF as input.


Thanks for any thoughts on this or pointers to posts where this may already have been dealt with.

9 hours ago, chrille said:

Like you, I also used to have problems with classical and digital. Classical music, both Western and Eastern, is the music I truly love, and the MScaler improved things quite considerably for me, even to the point of enjoying many CDs I could not enjoy before the MScaler.

So far I have only one CD rip PGGB'd, but it takes CD a noticeable notch higher than even the MScaler in my humble Qutest/MScaler-based digital system.

But so far I have only compared MScaler/PGGB via USB, and would like to hear from another classical music listener whether the SRC-DX might be the way to go.

I am a bit puzzled how going from 32 bits to 24 bits can improve things, but I keep an open mind.

As to 24-bit vs. 32-bit: per the folks who've done a lot of testing on this with the Chord DAVE, the DAVE's USB implementation is fundamentally limited by its chipset (too much latency, etc.), so it's best NOT to connect directly via USB - hence the use of the SPDIF inputs. BTW, not all DACs have this problem; many can take 32-bit data directly via USB just fine, and for those that's best - it depends on the DAC. What's risen to the top so far as an SPDIF source is the SRC-DX, but the SRC-DX does not accept 32-bit source data. So the question becomes: is the SRC-DX (even with its constraint of downgrading from 32 to 24 bits) more of an improvement than the benefit of feeding 32-bit data into the DAVE's USB input? The consensus so far seems to favor the SRC-DX for better sound.
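To illustrate what the 32-to-24-bit step means at the sample level: in the simplest case it just drops the lowest 8 bits of each sample. (Real converters typically apply dither or noise shaping rather than bare truncation - this is only a sketch of the bit arithmetic.)

```python
# Bare truncation from 32-bit to 24-bit integer samples: drop the low 8 bits.
# (Real tools usually dither/noise-shape instead of truncating like this.)
def to_24bit(sample_32: int) -> int:
    return sample_32 >> 8  # arithmetic shift, preserves sign

print(to_24bit(2**31 - 1))  # 8388607  == 2**23 - 1, 24-bit full scale
print(to_24bit(-(2**31)))   # -8388608 == -(2**23), 24-bit minimum
```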

Now, this is interesting because the Chord MScaler does sidestep the USB problem somewhat by sending data to the DAVE via SPDIF, but then the MScaler's internal noise shaping (apparently not bypassable even when it is fed 7xxK content) becomes relevant. If not for that, we could send 32-bit data through the MScaler; but per Zaphod this is not optimal, since the MScaler does its own thing in the noise-shaping realm, and it's less damaging to send it 24-bit data (even though the MScaler can accept 32-bit).

 

I've got an SRC-DX coming soon, so I'll have more to say about this from direct experience...

9 hours ago, chrille said:

Hello Romaz, just a quick question after all this discussion around digital room correction: why did you sell your electrostatics and go for speakers with four(?) problematic crossover points over just one crossover point?

The only real problem I heard with the 15s you had was the built-in Class D bass amp and the crossover point there.

With classical music, I prefer electrostatic line-source panels over most conventional speakers I have heard, even the most expensive.

OK, if someone gave me a pair of Gryphon Pendragons or similar giants for free I would not say no, but I might still prefer ML XL Art electrostats with a good subwoofer - for the deep underworld Bach organ points, or Zarathustra's 33Hz bass - even over those?

Cheers Chrille

 

I can't speak for Romaz, but in my experience (and over the years I've owned and/or heard many of the very top implementations), I've never heard a set of panel speakers (electrostatics, ribbons, etc.) that could match the micro- or macro-dynamics of a really good horn or traditional woofer/tweeter speaker - even with subwoofers added to the panels. Yes, panels can cast an amazing image, but there's always been an element missing for me. I'm not saying I dislike the sound of the really great panel speakers, just that I always seem to gravitate to the really good woofer/tweeter speakers even given their crossover limitations. Maybe the only exception I've found is the MBL Extreme speakers, which are sort of a hybrid, but pretty amazing when done right in the right room...

11 hours ago, chrille said:

Hi, I agree regarding macro-dynamics; very loud, big dynamic peaks are not what electrostats thrive on.

And yes, I too used to own a pair of huge horn "coffins" which could play much louder than my current electrostats can. But like all horns I have heard, including the truly big Avantgarde Trios, they are just a bit too colored for my "acoustic music only, please" ears.

Some of the best speakers I have heard with really large-scale acoustic music were the Gryphon Pendragons, but they are way beyond both my wallet and my room's capacities.

My compromise electrostats, both economically and dynamically, sound quite nice with PGGB, and I can play my PGGB test tracks a bit louder than normal hi-res.

And at 1 metre 85 centimetres - taller than me - I can stand at my listening position and conduct along without having to be a "couch potato" all day.

Cheers Chrille

 

 

I've found that the best "traditional driver" systems excel not just at macro-dynamics but also micro-dynamics. For me, panel speakers don't do either quite as well as the best "traditional" speakers I've heard, though panels are still great. As for horns, I think they excel over almost anything I've heard in the realm of dynamics, but they don't need to be big - the best ones I've heard are actually quite small (often with separate bass units of some sort). Big horns are wonderful, but IMO they mostly only work well in very, very large rooms where we sit at quite a distance from the drivers.

 

There is another issue with large panels: with some exceptions (like the Quad ESLs), they basically produce a wave of sound, which means different portions of the large panel's output arrive at our ears at different times. This creates a huge and often very desirable sound, but gives up some imaging accuracy. So certain music will work better than others with panels.

 

Always tradeoffs! 


@Fourlegs - this is my new challenge too: comparing 32-bit vs. 24-bit, and noise shaping on or off, through the MScaler, and then comparing that to the SRC-DX (which requires 24-bit). I'd talked to ZB earlier about this; he initially thought that 32-bit with PGGB noise shaping was not optimal going through the MScaler, but the "no noise shaping, 32-bit" option for PGGB files is now intriguing to consider.

 

One issue I have with these variants, though: as I process more and more music (with quite a lot left to do), I'd rather not keep different (each large) files for different playback paths - i.e., I don't want to need one version for MScaler playback, another for SRC-DX, and perhaps another for some future option feeding my other (non-Chord-DAC) systems. But maybe these variants are unavoidable for optimal sound...

53 minutes ago, Fourlegs said:

I will try to accept the challenge. When I have settled on my favoured playback/processing method it is my intention to just use that for my PGGB library.

 

Live streams and other albums that don't get processed (I only have 12TB of local storage 🤣) will be played with HQPlayer or possibly the MScaler, depending on which streamer I'm using (Antipodes K50 for HQP, Innuos Zenith for HMS).

This is a tough challenge we have!

 

In my case, I have three separate systems I'll want to play content on. Currently everything is sourced from Roon, but I'll eventually move to a high-end server setup like yours (not sure which - maybe Innuos, maybe Taiko, maybe something else); even then I'll need Roon for some of my "multizone" listening modes. In the evenings I generally have music playing on all three systems simultaneously for a "fill the house with sound" effect - quite nice for dinner music, etc. Two of my systems (one a Chord DC4 DAVE/HMS with my Abyss headphones and HPA, the other a living-room system currently built around a Devialet integrated amp/DAC that I'll soon upgrade to a TBD DAC - maybe a Holo May, Tambaqui, dCS, or another DAVE) are for "highest quality" listening. The problem is that one version of PGGB tracks is not necessarily optimal for every system - and maybe not even playable on every system. The Devialet can only handle up to 192K/24 input, so I currently have Roon downsample the PGGB-processed files to 192K; but if I get a dedicated server, it might not be able to do that on the fly...

 

I hope SSD drives come down in price quickly! I don't even have 12 TB😀.

 

Such first world problems we have!

