
A toast to PGGB, a heady brew of math and magic



 

On 6/27/2021 at 8:14 AM, Zaphod Beeblebrox said:

 

I think the reasons for my observation are twofold:

1. Typical measurements are done at 48 kHz using a USB mic. To generate mixed-phase filters at different rates, these measurements are resampled to create filters from 44.1 kHz to 384 kHz. The resampling process introduces timing uncertainty in the resampled filters, and when these are convolved with the music signal, the result is reduced transparency in the reconstructed signal compared to just using linear-phase filters.

2. Even when the measurements are done at different rates (I use an Earthworks M30BX with a Lynx Hilo), and even when the TTD correction is done at native rates without resampling, linear-phase filters resulted in better transparency/depth than mixed-phase filters. This made me conclude that excess-phase correction negatively impacted the timing accuracy of reconstructed signals.

.......

I can only base this on my observations and on feedback I received from those who have tried it. If someone wishes to try TTD correction, I will accommodate that and let them decide.

 

 

Hi Zaphod,  

 

I am trying to wrap my head around the two claims above. Something must have gone wrong somewhere for you to draw these conclusions.

 

Resampling does not introduce timing uncertainty if it is done properly. The only "uncertainty" is in the part of the system response above the Nyquist frequency or, more precisely, above the cutoff of the anti-aliasing filter used during the measurement. That is also outside human hearing.
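To be concrete, here is a minimal sketch (my illustration, not Audiolense or PGGB code) of why a properly done polyphase resample preserves a filter's timing: the peak of the impulse response lands at the same point in time at both rates, to within a fraction of a sample period.

```python
# Minimal sketch: timing survives a proper polyphase resample.
import numpy as np
from scipy.signal import firwin, resample_poly

fs_in, fs_out = 48_000, 44_100
h = firwin(401, 0.3)                   # stand-in for a measured correction filter
y = resample_poly(h, 147, 160)         # 44100 / 48000 = 147 / 160

t_in = np.argmax(np.abs(h)) / fs_in    # peak time at 48 kHz
t_out = np.argmax(np.abs(y)) / fs_out  # peak time at 44.1 kHz
print(f"peak at {t_in * 1e3:.3f} ms vs {t_out * 1e3:.3f} ms")
# Both print ~4.17 ms; the difference is below one output sample period.
```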

 

Furthermore, linear-phase correction will by design introduce distortion in the time domain. But how does it play out in real life?

[image: corrected frequency response plot from Audiolense]

The above plot from Audiolense looks practically the same whether it's a TTD correction, a minimum-phase correction or a linear-phase correction.

 

Here's the resulting step response from a linear phase correction.

 

image.png.430b3819d7ed557fa0d9739b441a98d5.png

Everything to the left of the 442 ms mark is an artefact of linear phase, and the reason linear-phase correction is a bad idea. This can be audible and very, very annoying on the "right" music material. This looks very audible to me: if not flat-out annoying pre-echoes, then a degree of reduced cleanness of the leading edge and reduced transparency. I base this on numerous instances of feedback where I have looked at similar plots.

 

Here's the IR of the linear phase correction filter used above:

[image: impulse response of the linear-phase correction filter]

Everything that happens before the main spike creates pre-artifacts that will be audible from time to time. This will also reduce transparency. 
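This is easy to reproduce without my plots. A rough sketch with a synthetic filter (not the actual correction above): a sharp linear-phase FIR puts close to half of its energy ahead of the main spike, and that is exactly the pre-artifact region.

```python
# Rough sketch: how much energy a sharp linear-phase FIR puts before its peak.
import numpy as np
from scipy.signal import firwin

h = firwin(4001, [0.02, 0.5], pass_zero=False)  # sharp linear-phase band-pass
peak = np.argmax(np.abs(h))
pre = np.sum(h[:peak] ** 2) / np.sum(h ** 2)
print(f"{pre:.1%} of the filter energy arrives before the main spike")

# The step response is the running sum of the impulse response, so all of
# that pre-peak energy turns up as pre-ringing on the leading edge.
step = np.cumsum(h)
```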

 

 

Here's the step response of the same speaker with a corresponding TTD correction. The resulting frequency response is the same as above:

[image: step response after TTD correction]

 

By comparison it has virtually no pre-ringing and a somewhat cleaner decay. @mitchco frequently gets a clearly better result than this when he performs his services.

 

The corresponding TTD filter:

[image: impulse response of the TTD correction filter]

There is left-side activity here too. But here, the left side is specifically designed to improve the time-domain behavior. This is what makes TTD somewhat picky about measurement quality. The left side of the correction will produce artifacts if the timing in the measurement was wrong, if the measurement picked up some mechanical noise, etc. But this is also what makes it sound better when the measurement and playback streams are in order. A cheap measurement microphone gets the job done. The most critical aspects are glitch-free streams during measurement, no mechanical noise from either the system or the environment, and a measurement volume that does not stress the system.

 

A side-by-side comparison of the two step responses, after linear-phase and TTD correction respectively, this time in logarithmic form.

[image: step responses after linear-phase (red) and TTD (blue) correction, logarithmic scale]

The red is the linear-phase correction, the blue is the TTD correction. I often use this analysis to check that the TTD has an acceptable leading edge and doesn't have pre-artifacts that are likely to be audible. This TTD is good to go; the linear phase is pretty bad.

 

The majority of Audiolense customers end up using TTD correction because they find it somewhat more transparent than a minimum-phase correction. This seems to be the general experience with Trinnov, Acourate and Dirac Live as well. None of the sound-correction providers in the industry use linear-phase filters the way they are used in PGGB, even though it is by far the easiest filter to make. The second-easiest approach is to use minimum-phase filters, which is what e.g. Audacity, Lyngdorf and many others offer, and what Tact used in their pioneering days. This is a more robust and fool-proof solution. Quite effective too, and not as picky about measurement quality. But time-domain correction done well performs measurably better and is generally preferred by a solid margin among Audiolense users.
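Both of these are easy to demonstrate with synthetic filters: the dB-scale step comparison above, and the minimum-phase alternative. A sketch (mine, not Audiolense output; note that scipy's minimum_phase returns roughly the square root of the input's magnitude, hence the self-convolution of the prototype):

```python
# Synthetic comparison: linear-phase vs minimum-phase step response on a
# dB scale, with (approximately) the same magnitude response.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import firwin, minimum_phase

h_lin = firwin(1023, 0.1)                      # sharp linear-phase low-pass
h_min = minimum_phase(np.convolve(h_lin, h_lin), method="homomorphic")

fs = 48_000                                    # assumed rate for the time axis
for h, label in [(h_lin, "linear phase"), (h_min, "minimum phase")]:
    step = np.cumsum(h)                        # step response = running sum of IR
    t = np.arange(len(step)) / fs * 1e3        # milliseconds
    plt.plot(t, 20 * np.log10(np.abs(step) + 1e-9), label=label)
plt.xlabel("time (ms)")
plt.ylabel("step response (dB)")
plt.legend()
plt.show()
# The linear-phase step shows a long pre-ringing skirt before the edge;
# the minimum-phase step starts clean.
```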

 

If linear-phase correction is the best-sounding alternative in PGGB, the other alternatives must have been seriously distorted in the process. A correction filter from Audiolense, transformed into a linear-phase filter in PGGB, will perform as shown here, which is less transparent than it needs to be.

 

I hope you manage to sort this out, Zaphod. It may benefit our common clients.

 

 


Juice Hifi


Hello everyone,

 

I would like to add something to my recent suggestion for a proof of concept experiment for PGGB.

 

Thanks to the new cloud-based option of upsampling files with PGGB, I today had the opportunity to try it out for myself. I played the upsampled files through Audirvana and sent them directly to DAVE. The original reference files for comparison were likewise played through Audirvana and sent to MScaler.

 

I am a trained listener, recording classical-music concerts on a regular basis myself (not as a sound engineer but as a video director), but I still did not expect to make out a clear difference, because I am not able to reliably discern things as subtle as some in this forum seem able to.

That said, I was stunned by how obvious the difference between MScaler and PGGB was. First I perceived a greater depth in the soundstage and air between the instruments. But later I very clearly heard the PGGB sound as being brighter and sharper than with MScaler. Trumpets, violins and soprano voices had an artificially bright touch. After my initial enthusiasm I therefore clearly find MScaler sounding more natural or, if you like, neutral.

 

Is PGGB then maybe acting like some kind of presence filter, giving the illusion of more space and air?

 

The only way to find out what is going on, I guess, would be the kind of proof of concept I recently suggested: comparing an analogue master tape with a digitized file through MScaler as well as PGGB. One of the two should sound more similar to the analogue tape. My bet would be MScaler, even though I can see that there's potential for improvement.

 

 

 

 

 

 

7 minutes ago, hanshopf said:

Thanks to the new cloud-based option of upsampling files with PGGB, I today had the opportunity to try it out for myself. [...] I was stunned by how obvious the difference between MScaler and PGGB was. [...] My bet would be MScaler, even though I can see that there's potential for improvement.

Thanks for the feedback. Were the samples you uploaded at CD rates, hi-res, or a combination of both?

Author of PGGB & RASA, remastero

Update: PGGB-256 is completely revamped, improved, and now uses much less memory

New: PGGB-IT! is a new interface for PGGB 256, supports multi-channel, smaller footprint, more lossless compression options

Free: foo_pggb_rt is a free real-time upsampling plugin for foobar2000 64bit; RASA is a free tool to do FFT analysis of audio tracks

System: TT7 PGI 240v > Paretoaudio Server [SR7T] > Adnaco Fiber [SR5T] >VR L2iSE [QSA Silver fuse, QSA Lanedri Gamma Infinity PC]> QSA Lanedri Gamma Revelation RCA> Omega CAMs, JL Sub, Vox Z-Bass/ [QSA Silver fuse, QSA Lanedri Gamma Revelation PC] KGSSHV Carbon CC, Audeze CRBN

 

31 minutes ago, hanshopf said:

I played the upsampled files through Audirvana and sent them directly to DAVE. The original reference files for comparison were likewise played through Audirvana and sent to MScaler. [...] After my initial enthusiasm I therefore clearly find MScaler sounding more natural or, if you like, neutral.

When doing an apples-to-apples comparison it's important to ensure the galvanic signal path and inputs for both are the same. You could try repeating this test by sending the PGGB file through MScaler (it will go into bypass if the input sample rate is 16fs) and DBNC into Dave. Then do the normal MScaler upsample to 16fs and DBNC into Dave. In both cases make sure any USB cable is removed from Dave (as well as any other coax inputs you may have).

43 minutes ago, dmance said:

When doing an apples-to-apples comparison it's important to ensure the galvanic signal path and inputs for both are the same. You could try repeating this test by sending the PGGB file through MScaler (it will go into bypass if the input sample rate is 16fs) and DBNC into Dave. Then do the normal MScaler upsample to 16fs and DBNC into Dave. In both cases make sure any USB cable is removed from Dave (as well as any other coax inputs you may have).

Since the MScaler will noise shape 32 bits to 24 bits, a true apples-to-apples comparison needs a 16fs 24-bit input to the MScaler; or better still (my preferred apples-to-apples comparison), use a 16fs 24-bit signal via SRC-DX straight to DAVE's DBNC.
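For reference, the MScaler's actual noise shaper is proprietary; the sketch below is just a plain TPDF-dithered requantisation to 24 bits, to show the kind of step where a 32-bit signal gets altered on one path but not the other.

```python
# Plain TPDF-dithered requantisation to 24 bits (illustrative only; not
# the MScaler's proprietary noise shaper).
import numpy as np

def to_24bit(x: np.ndarray) -> np.ndarray:
    """x: float samples in [-1.0, 1.0). Returns 24-bit integer codes."""
    q = 1.0 / 2**23                              # one LSB at 24 bits
    dither = (np.random.uniform(-0.5, 0.5, x.shape)
              + np.random.uniform(-0.5, 0.5, x.shape)) * q
    codes = np.round((x + dither) / q)           # quantise with TPDF dither
    return np.clip(codes, -2**23, 2**23 - 1).astype(np.int32)
```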

Author of PGGB & RASA, remastero


 


Thank you all for your replies. You may be right, even though I tend to doubt the notion that a battery-driven source into the galvanically isolated USB input of DAVE, or noise shaping from 32 to 24 bits inside DAVE, should be the reason for the PGGB files acquiring the perceived unnatural brightness. But I will try again, next time going a step further: recording an LP and then comparing it with the files through the same headphone amp. Let's see which of them sounds more similar to the vinyl.
 

3 hours ago, Zaphod Beeblebrox said:

Since the MScaler will noise shape 32 bits to 24 bits, a true apples-to-apples comparison needs a 16fs 24-bit input to the MScaler; or better still (my preferred apples-to-apples comparison), use a 16fs 24-bit signal via SRC-DX straight to DAVE's DBNC.

 

Exactly what you say about apples to apples. Taking the MScaler output to the dual BNC inputs of the Dave but taking PGGB playback to the USB input of the Dave is not an apples-to-apples comparison. Both playbacks need to be compared on the dual BNC inputs to the Dave.

Owner, Wave High Fidelity digital cables:

Antipodes Oladra (WAVE Storm BNC spdif RF noise filtering cable to Mscaler)

Dave (with Sean Jacobs ARC6 and SJ Cap Board) + WAVE Storm dual BNC RF noise filtering cables

ATC150 active speakers.


 

 

4 hours ago, Zaphod Beeblebrox said:

I am sorry, I do not buy into the "pre-echo" or "linear phase is bad" argument; it has long been used by many, including DAC manufacturers, as a way to shun linear-phase filters. As evidence, what is provided is convolution with a perfect step or impulse response, which is clearly not band-limited and is an "illegal" signal as far as real-world digital music goes.

Give it some time. If you last for a couple of years this will come back and bite you ... if you keep supplying linear-phase corrections. A linear-phase cutoff at the top usually works fine; it is neither better nor worse than the usual alternatives. Besides, most audiophiles don't hear a lot past 15 kHz anyway. Sometimes linear phase can work in the bottom too. But correcting speakers and room with linear phase is the worst approach.

 

 

4 hours ago, Zaphod Beeblebrox said:

This is simply not true. For example, please refer to this post. Applying an EQ filter is just a convolution operation, and PGGB convolves all EQ filters (linear phase or otherwise) in exactly the same way. So the idea that other filters somehow get distorted cannot be true. Edit: PGGB is oblivious to the filters it is using, as currently the linear-phase filters are generated (using my own proprietary technique) outside of PGGB. The user then drops these filters (or the Audiolense filters) into a folder. This folder is always named 'EQ' and is always in a specific location. PGGB looks for this folder and just picks the filters at the appropriate sampling rate that it finds within it.

 

If it were "just a convolution operation", the original correction should clearly sound better than the linear-phase replica, as I have shown.
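For what it's worth, the flow described in that quote amounts to something like the sketch below (my reconstruction, not PGGB's code; the 'EQ/&lt;rate&gt;.wav' naming is hypothetical):

```python
# Reconstruction of the described flow: pick the EQ filter matching the
# track's sample rate from a fixed 'EQ' folder and convolve.
from pathlib import Path
import numpy as np
from scipy.io import wavfile
from scipy.signal import oaconvolve

def apply_eq(track_path: str, eq_dir: str = "EQ"):
    fs, x = wavfile.read(track_path)
    fs_h, h = wavfile.read(Path(eq_dir) / f"{fs}.wav")  # hypothetical naming
    assert fs_h == fs, "EQ filter must match the track's sample rate"
    x = np.atleast_2d(x.astype(np.float64).T).T          # (samples, channels)
    h = h.astype(np.float64)
    if h.ndim > 1:
        h = h[:, 0]
    # The convolution itself is identical whether the coefficients came
    # from a linear-phase, minimum-phase or mixed-phase design.
    y = np.stack([oaconvolve(x[:, c], h) for c in range(x.shape[1])], axis=1)
    return fs, y
```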

 

I haven't had the opportunity to look into your code, test it and see what it produces. There seems to be a lot of processing. I don't trust any DSP code out there, including my own. There are bugs everywhere; plus, every speaker, room, DAC, PC, mic and user seems to have its own issues. Actually not every single one, but they are plentiful. So I spend a lot of time testing whether stuff works as I intended for a particular customer. Most of the time Audiolense works as intended, but exceptions happen regularly.

 

How do you know that your solution works as intended? And how do you know that you have the most important parameters on your scoreboard? I fully expect the linear-phase correction to mask any winnings you'd squeeze out of the LSB with noise shaping, upsampling etc. And I fully expect an uncorrected speaker to do the same. I have shown in my last post that a linear-phase version of an Audiolense correction is by design significantly less transparent than a TTD version of the same. And I can easily show that a minimum-phase correction is better than the linear phase too. Just say when.

 

How about you show us how a time-domain + frequency correction performs in your solution and how it compares to a linear-phase correction in PGGB? I'd like to see the frequency response, impulse response, step response and whatever else you prefer to focus on, preferably based on a few real-life cases. There are transparency claims in this thread, but the claims haven't been very transparent so far...

 

Juice Hifi

3 hours ago, Nenon said:

@Juice Hifi - can you please share what your reference audio system consists of? 

 

I am way past the point where my own systems are the reference systems that I base my conclusions on as far as Audiolense goes. And for the last 15 years or so I have been more concerned with my clients' sound quality than my own.


But I have four systems where I have built the speakers myself. Drivers are Accuton, TC Sounds, DC Gold and one tweeter I can't remember. Three of the four have sand-filled walls for the midrange and up. Highly recommended. Power amps are class D and class B, although I have a pretty potent class A in idle. My preference is class D. Converters are two from Lynx Studio, embedded, and the Hypex DSP module that came with their plate amps.


My best speakers are a 3-way with two Accutons and an awesome 10" driver from TC Sounds … from before TC became famous. The TC plays with ease down in the 20s.

Juice Hifi

3 hours ago, Zaphod Beeblebrox said:

I have no intention to argue about this any more. I have already said what I have to. I have also said that PGGB will use any filter XO provides, as is, if that is what the user desires. I do not have any hidden agenda here. I do not charge for generating the linear-phase filters or for incorporating EQ into PGGB either. So PGGB is not competing in any way with Audiolense or other measurement software.

 

PGGB is a resampling software solution first; I provide convolution EQ as an option, as many (including me) use room correction. I also provided my own linear-phase filters because they worked really well for me. It does not matter to me if someone likes the PGGB filters or the Audiolense XO filters (min-phase or mixed-phase); they have the option to use them and decide what they like. In the end, it is about enjoying music. If someone likes TTD correction or linear-phase correction, whichever improves their listening experience, who am I to object? The proof is in the pudding: those who listen can report their experience, and a few already have.

I am all good with that, and I agree that we are not competitors.


But everybody would be better off if you were able to sort out why technically superior sound correction is perceived to sound worse than the worst alternative - a linear-phase correction - in PGGB. And even more so if you could make the best corrections deliver the goods in PGGB.
 

Cheers!
 

 

Juice Hifi

25 minutes ago, zettelsm said:

In my case when I compared the stock DAVE to the Bartok I much preferred the dCS. And when I compared the DAVE to the Rossini and then Vivaldi it was no contest -- to *my* ears. YMMV obviously.

Thanks for sharing, Steve. I completely agree with you. I loved the Bartok, loved it. I would take the stock Bartok over the stock Dave any day of the week, and twice on Sunday. I wrote up an extensive post detailing my experience on Head-Fi. It is the Dave-based system that I am so enthusiastic about. PGGB seems like another instance where Dave's fundamental architecture is leveraged to produce some wonderful results. I would love to hear more about how PGGB likewise transforms other platforms like dCS and Denafrips (32fs)!

 

As it is, right now, I have a few hundred files baking in the PGGB oven! 

 


Dear @Nenon, I always enjoy reading your posts. I don't always agree with you, but I can understand your thoughts.

 

1 hour ago, Nenon said:

There is no perfect room. Everyone can benefit from room correction, whether it's passive or active. In fact, I consider the room to contribute to about 40-60% of the system.

Regardless of percentages, most will agree that the room is a very important component.

 

1 hour ago, Nenon said:

Measurements are not perfect. Microphones are not perfect.

I can understand that from my own experience. For example, if you measure with a non-calibrated microphone, you will end up correcting the microphone's errors afterwards. 😂 But if you follow a few principles, you will get very good measurement results.

 

1 hour ago, Nenon said:

You start with a far from perfect (in my opinion) measurement and do quite aggressive "corrections" based on something that is WRONG to begin with.

Good software allows psychoacoustics and windowing to be taken into account. For example, a ruler-flat frequency response is not the goal, because it sounds unnatural. An important part is the target curve. Anyone who thinks the frequency response must look like a straight line in the room has already lost. I put a lot of effort into my target curve: I prefer an expressive bass with sloping highs. The thin line shows the uncorrected frequency response, the thick line the corrected one.

 

[image: uncorrected (thin) vs. corrected (thick) frequency response]

 

I think correcting the timing is very important. Not all passive speakers have a sophisticated time-alignment system like Wilson Audio's. In the picture the thin line represents the uncorrected step response: the tweeter comes first, with reversed polarity, followed by the midrange and bass. The correction (bold red) brings everything into line in one go.

 

[image: step response, uncorrected (thin) vs. time-corrected (bold red)]

 

This is what the perfect step response looks like (bold red). 😉

[image: the corrected step response (bold red)]
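The misaligned step response is easy to simulate. Below is a synthetic illustration (made-up crossover frequencies and delays, not my measurement): an inverted tweeter arrives first, then the midrange, then the bass.

```python
# Synthetic 3-way step response: inverted tweeter first, then midrange,
# then bass. All numbers are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter

fs = 48_000
step = np.ones(int(0.02 * fs))                    # 20 ms unit step

def band(btype, cutoff):
    b, a = butter(2, cutoff, btype=btype, fs=fs)  # 2nd-order crossover leg
    return lfilter(b, a, step)

def delayed(sig, n):                              # delay by n samples, zero-padded
    return np.concatenate([np.zeros(n), sig[:len(sig) - n]])

tweeter = -band("highpass", 2500)                 # reversed polarity, arrives first
mid = delayed(band("bandpass", [300, 2500]), 20)  # ~0.4 ms later
bass = delayed(band("lowpass", 300), 60)          # ~1.25 ms later

t_ms = np.arange(len(step)) / fs * 1e3
plt.plot(t_ms, tweeter + mid + bass)
plt.xlabel("time (ms)")
plt.ylabel("amplitude")
plt.show()
```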

 

1 hour ago, Nenon said:

If you are doing realtime heavy processing... same thing. Nothing wrong if you are doing some of that, but you are in a different group than me

In fact, I see myself in the group of real-time processors. That's because I mostly stream via Qobuz, but also because I don't think offline convolution is practical. I'm moving to the seaside soon. Someone else gets a new armchair or rug. Speaker membranes and electronics age, or break in. That's why I take a new measurement once a month. Who wants to have their music files recalculated every month? For me, there is no way around online convolution.

 

To get back to PGGB: I used different software to remaster some CDs with 134 million taps. Such a high number cannot be achieved with real-time processing. The listening result was very good. When I have a little more time I will try PGGB. 👍

2 hours ago, StreamFidelity said:

I used different software to remaster some CDs with 134 million taps. Such a high number cannot be achieved with real-time processing.

 

Why not? Whether it makes sense is another question... The delay just becomes annoyingly high, because you have a 67-million-sample lead-in and lead-out. Unless you run at DSD1024 rate, where it is then just a bit over one second.
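The back-of-the-envelope numbers behind that:

```python
# Lead-in/lead-out delay of a 134-million-tap linear-phase filter.
taps = 134_000_000
lead = taps // 2                      # 67 M samples on each side
for rate, name in [(44_100, "44.1 kHz"), (1024 * 44_100, "DSD1024")]:
    print(f"{name}: {lead / rate:,.1f} s of delay")
# 44.1 kHz: ~1,519 s; DSD1024 (45.1584 MHz): ~1.5 s
```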

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 hour ago, Nenon said:

@StreamFidelity

I expect that at least 90% of the people here will disagree with me. It's a controversial post that will piss off some people. I've done it before and I am sure I will do it again :). Hopefully, my post will not derail this thread too much.

You definitely fall into a different group of people than the group I meant. You are in a group that recognizes the importance of the digital source, and you spend quite a bit of time on R&D of your digital source, but you do it on a much tighter budget than people like @romaz. No question you are much wiser than us in how you spend your money, and you get way better bang for the buck. But there is also no question how high the noise floor is from those Keces power supplies and the extra processing, and how much important information, like dither, gets buried. We don't realize this until we hear it.

(BTW, you will hear some of that when you try the v2 of the unregulated LPS with the Taiko ATX powering the EPS and ATX, but that's way off-topic.)

 

@romaz and I have a very similar digital source.

Power: dedicated power circuit - Sound Application 240V PGI TT-7 (fed by a Sablon King power cord)

Server: Taiko Extreme with Taiko USB card. (I am not quite there yet)

DAC: Chord DAVE with the latest ARC6 development of the Sean Jacobs DC4 LPS - Audiowise SRC.DX

Software: HQP (playing bit-perfect!) or TAS playing PGGB-ed files.

Network: completely disconnected while playing.

I would probably add a choice of digital cables too, but that risks dialing the level of controversy up from 800 to 900 (on a scale from 0 to 10) :). Everything listed above is a key component.

 

Listen to a source like this in a really good system and you will realize how different it is from what you are doing today. On a system like this, messing with even the slightest thing matters! And we are talking about the slightest changes, not anything major. Just to give a couple of random examples... You can disable the unused IPv6 protocol on the unused NIC on the Extreme server and the sound will degrade. You can turn on the display on the DAVE DAC and the sound will degrade. You can leave Roon server (unused) in the background and the sound will degrade. Even a few degrees higher room temperature is audible (sounds worse)!

That's how highly tweaked this system is. I am not even talking about things like using Roon (which makes a huge negative impact) or playing anything other than 16fs files, which enables extra processing on the DAVE and reduces the sound quality significantly. I am talking about very small changes. This system is also highly tweaked for super-low latency and resource isolation. Imagine what real-time convolution messing with the time domain, based on "science" that can't even explain why a USB cable makes a difference, can do in such a system! It can make it unlistenable even with the best filters. And @romaz has tried that. Digital correction in general can make it unlistenable... you don't even have to mess around with the time domain to achieve that.

If everyone had a system like this, Mitch, Audiolense, etc. would never do what they do. The good thing is that such highly optimized systems are very rare. For most other people the net gain from digital room correction is very positive and, as you say, "correcting the timing is very important".

However, not in a system like the one described above. For those few systems out there at that level, PGGB EQ has merits and can make digital room correction a consideration again.

 

Re: the non-calibrated microphone. Let me be crystal clear about that: I would never consider a non-calibrated mic for my measurements. That's not what I meant. What I meant is that even the best calibrated microphone and measurement system cannot show me why things like digital cables that make an obvious difference make that difference. Until "measurements" and "science" can show me that, those measurements are far from perfect in my book. And if someone is planning to start a debate on cables, I will not engage in such a discussion.

In fact, I am traveling and won't have time for discussions in the coming days/weeks. I think I have made my point. I am not here to argue with the 90% who will disagree with me... just wanted to share my 2 cents.

 

Lastly, just to be absolutely clear - I am not criticising the pioneers of digital room correction whom I mentioned here. What they have done is genius, no doubt about that. It just needs a little more careful implementation in a highly tweaked system like the one mentioned above, and this is where @Zaphod Beeblebrox and PGGB EQ play a big role.

I am pretty confident that if @StreamFidelity tries PGGB, his system is more than capable of revealing the differences. My system is well below Romaz's in comparison, but I clearly hear the differences from most small tweaks. VLANs being one: I consider them to make a noticeable change.
 

For example, I've heard the effect of upsampling to 352 kHz using PGGB, and now the EQ.
 

I look forward to @StreamFidelity trying PGGB & EQ and reporting back. It could yield more benefits than having to take your microphone out each week. Imagine that: being able to sit back and not worry whether something has moved or changed.

3 hours ago, Nenon said:

 

I don't know a single person with a reference system at the level of @romaz's or better who liked digital room correction. Not a single person. Why? Because at this level absolutely everything matters and makes a huge difference. I visited @romaz and listened to his system last month, so I have a pretty good idea of what it does. I've heard the Wilson Alexia Series 2 speakers you see in his photos many times before, but never sounding as good as they did in @romaz's system/room.

And in a system of that level, digital room correction done the traditional way does a lot more harm than good. This is the first time I am getting feedback that digital room correction is done right... with PGGB EQ. As an early PGGB adopter, I can absolutely believe that.

 

 

Yes, the Wilson Alexia is a fine pair of speakers, and probably much better than mine. But those I regard as prestige customers have speaker systems with extreme capabilities, far beyond what an off-the-shelf four-driver speaker can do. They typically have acoustically treated rooms, separate bass solutions with a multitude of drivers, line arrays, open baffles, horns etc., equipped with the finest drivers money can buy, class A or class D amplifiers and studio-grade multichannel converters. And they all use active crossovers. The owners are typically highly skilled DIY builders who outgrew off-the-shelf speakers decades ago. They are always on the lookout for sonic improvements. Some of them have been using Audiolense for digital crossovers and sound correction for 5-10 years. Their systems have been scrutinized for equally long, both through measurements and peer reviews, and Audiolense has gained recognition in the process.

 

The artifacts that you claim surface in "... a system of that level ..." simply do not exist at the highest level in this hobby.

 

I do not usually engage in other manufacturers' threads, but this situation is different. Speculative, subjective and technically illiterate arguments have been circulated here to instill distrust in proven-to-work DSP solutions (not only mine) and to promote the manufacturer's own solution. In that regard, the last few pages have some of the same traits as a dirty marketing campaign. I just hope that is not @Zaphod Beeblebrox's intention.

 

Juice Hifi

37 minutes ago, Juice Hifi said:

They typically have acoustically treated rooms, separate bass solutions with a multitude of drivers, line arrays, open baffles, horns etc., equipped with the finest drivers money can buy, class A or class D amplifiers and studio-grade multichannel converters. And they all use active crossovers. The owners are typically highly skilled DIY builders who outgrew off-the-shelf speakers decades ago. They are always on the lookout for sonic improvements. Some of them have been using Audiolense for digital crossovers and sound correction for 5-10 years. Their systems have been scrutinized for equally long, both through measurements and peer reviews, and Audiolense has gained recognition in the process.

I have to agree that digital crossovers are an excellent application for digital room correction. Since there is already DSP involved, it does not hurt to add room correction. No concerns from me about doing it this way. There are many benefits to using active crossovers. In fact, thumbs up for those implementing it this way 👍.

Let's give credit where credit's due :). 

 

52 minutes ago, Juice Hifi said:

In that regard, the last few pages have some of the same traits as a dirty marketing campaign. I just hope that is not @Zaphod Beeblebrox's intention.

I don't think that's @Zaphod Beeblebrox's intention. From my conversations with him on this topic, he does not have the time and does not want to deal with measurements and room corrections, but he is open to adding this as standard functionality to PGGB, in a BYOCF (Bring Your Own measurements / Convolution Filters) style.

Industry disclosure:
https://chicagohifi.com

Dealer for: Taiko Audio, Conrad Johnson, Audio Mirror, and Sean Jacobs
