
iZotope SRC



I don't understand DAC filters too well, but won't most DACs apply a filter to remove ultrasonic noise before conversion, even if not upsampling? If that is the case, should we use a low steepness so that the DAC has more data to work with for its own filter?

 

When you upsample, you move the Nyquist (fs/2) frequency up. The DAC's digital filters will leave the newly created space between the old and new Nyquist frequencies mostly alone, since the DAC doesn't know that it is not being used... (it cannot tell the difference between true 24/192 hi-res content and RedBook upsampled to 24/192)

 

If you use a leakier upsampling filter than the DAC's filter, the overall result will also be leakier. If you use a less leaky filter, the overall result will also be less leaky...

 

The DAC's filter will deal only with frequency content above the Nyquist (fs/2) of the incoming sample rate. Thus, the more you upsample, the higher in frequency you push the impact of the DAC's built-in filter.

 

Of course a DAC should always also have an analog filter. The cut-off frequency of this analog filter is almost always fixed, and for modern delta-sigma DACs it is usually a low-order filter, typically 2nd order.

 

The higher you manage to push the digital image frequencies, the more of them the analog filter will be able to remove.

 

RedBook content has digital images at 22.05 - 44.1 kHz; when it is upsampled to 192 kHz (assuming perfect attenuation), the digital images will only appear at 169.95 - 192 kHz (and higher).
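
To make the arithmetic concrete, here is a minimal Python sketch of where the remaining images land, assuming an ideal (perfectly attenuating) upsampling filter; the rates and band edges are the ones quoted above:

```python
# Quick sanity check of the image-frequency arithmetic above.
# Assumes an ideal upsampling filter; with a leaky filter the
# 22.05 - 44.1 kHz images would survive as well.
fs_in = 44_100          # RedBook source rate
fs_out = 192_000        # upsampled rate fed to the DAC
audio_band = 22_050     # source Nyquist, highest possible signal frequency

# After ideal upsampling the first surviving images sit mirrored around the
# new rate: fs_out - f and fs_out + f for every source frequency f.
lowest_image = fs_out - audio_band
print(f"first image band: {lowest_image/1000:.2f} kHz .. {fs_out/1000:.1f} kHz (and higher)")
# -> 169.95 kHz .. 192.0 kHz, matching the figures quoted above
```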

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Miska, what do you mean by "leaky"?

 

By leaky I mean a filter that lets through content between the original and the new Nyquist frequency. For example, for a 44.1 -> 176.4 conversion that means the frequency band between 22.05 and 88.2 kHz. IOW, non-perfect stop-band attenuation.

 

Different filters leak different amounts with varying frequency characteristics.
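
As a rough illustration of what "leaky" means in numbers, here is a small Python/SciPy sketch; the two firwin designs are hypothetical stand-ins (not any actual DAC or HQPlayer filter) and the function measures the worst-case response left between the old and new Nyquist frequencies for a 44.1 -> 176.4 conversion:

```python
import numpy as np
from scipy import signal

fs_in, up = 44_100, 4
fs_out = fs_in * up                      # 176.4 kHz

# Hypothetical interpolation filters: one long and steep, one short and leaky.
steep = signal.firwin(1023, 22_050, fs=fs_out)
short = signal.firwin(31, 22_050, fs=fs_out)

def worst_leak_db(taps):
    """Largest response left in the 24..88.2 kHz band
    (skipping the transition region right at 22.05 kHz)."""
    w, h = signal.freqz(taps, worN=1 << 15, fs=fs_out)
    stop = (w > 24_000) & (w < fs_out / 2)
    return 20 * np.log10(np.max(np.abs(h[stop])))

print("steep filter leaks at most", round(worst_leak_db(steep), 1), "dB")
print("short filter leaks at most", round(worst_leak_db(short), 1), "dB")
```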

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I think that different SRC settings are a matter of personal preference, and I do not have a strong preference for one setting or another. That's why iZotope SRC provides all the settings to tweak.

 

One important aspect, IMO, would be adjustment of the roll-off shape / stop-band parameters.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Can you share with us what other parameters of an SRC filter would be, in an ideal world, adjustable?

 

There are many things I consider important regarding SRC filter design, but I'm not sure if all of those should be exposed to end users, because things grow really complex. And a lot of those things cannot be expressed in simple numbers, but would require a meta-programming language to express as mathematical formulas. Of course it is all technically doable, but I'm not sure if anybody would actually want that instead of just buying Matlab. So the ideal world has to meet reality somewhere... :)

 

For example, I don't think it is possible to accurately replicate Ayre's filter with the current set of parameters anyway.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 3 months later...
In the case of the Ayre Minimum Phase/Slow Roll Off filter, they are oversampling to 705.6 kHz, so there are far fewer worries about alias products producing audible problems. If anyone is interested in the measured performance of the Ayre filter, take a look at the Stereophile measurements associated with their review of the QB-9 DAC (I have not looked, but the measurements of the DX-5 Blu-ray player should have the same info as well).

If one can only oversample to 176.4, one will probably prefer a slightly steeper slope, as the audio-band artifacts of a shallower slope will be problematic at that rate.

 

It is not about the target rate, but about the source rate. Artifacts begin immediately above the Nyquist frequency of the source rate.

 

In the Stereophile measurements, you can clearly see that there's really heavy leakage with the "listen" filter by looking at figure 13. The image frequencies of the 19 and 20 kHz tones that should have been filtered away are just a couple of dB down. Compare that to the figure 12 "measure" filter, where they are 90 dB down - a much better figure. You can also see that in figure 13 the audio-band spectrum becomes polluted with aliases all over, compared to figure 12.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

And then there is the trade-off: what do you think is worse, the pre-ring from the anti-alias filter in the ADC, or the amount of ringing from the filter in the OSF (whether in the DAC or in software)? Clearly, Ayre has come to the conclusion that reducing ringing in the DAC OSF is more important than reducing ringing from the ADC filter used in recording; of course, YMMV!

 

From a ringing perspective it doesn't matter how much ringing the DAC filter has if it's non-apodizing, because it has no impact on the overall ringing... Having a short filter just makes it leak more.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Well, this is a subjective judgement.

 

No, it's purely objective. :)

 

 

This filter was designed by ear to provide the best performance possible in subjective terms. It is significant to note that John Atkinson mentions that the minimum phase filter used in the dCS was a lot more leaky than the Ayre "listen" option. In any case, if these relatively low level tones bother one, one can try the more effective alias suppression of the "measure" option, as you note. And I am sure you do not mean that one would be able to hear the primary alias pair above 20 kHz.

 

You can hear the intermodulation products of those and the low level aliases that pollute the noise floor. Essentially it will largely sound similar to a filter-less NOS-DAC. Some people like it, some people don't. I also offer those kind of filters as an option, but I never use those myself. Instead, in the designs I use, I try to both minimize the ringing and maximize the filter attenuation at the same time. It's not so much about filter design parameters but more about filter design algorithms... Once you have the design algorithm, dialing in nice parameters is easier.

 

dCS doesn't seem to leak much (-125/-135 dB), but it has some more severe aliasing issues instead (see fig 11):

dCS Debussy D/A processor Measurements | Stereophile.com

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I would like to try to understand better. A less steep non-apodizing filter in the playback PC will not affect the total amount of ringing, even though that specific type of playback filter itself will ring less, because - why? Because it will reduce pre-ringing from the recording by a lesser amount? But wouldn't that vary with the amount of pre-ringing in the recording?

 

Because by definition it will pass the original ringing through as-is...

 

Sorry for the quick-and-dirty example (I don't have time to make anything more pretty now):

 

Let's take a source at 44.1k, converted from a 96k dirac pulse using an ordinary type of conversion:

source.png

 

If we upsample it to 176.4k through a typical non-ringing (non-apodizing) filter:

non-apodizing.png

 

If we upsample it to 176.4k through a typical apodizing minimum-phase filter:

apodizing.png

 

You can see that the non-apodizing filter, which has only a single cycle of ringing on both sides, passes the original ringing through as-is, while the apodizing one replaces the ringing with its own. It could just as well be linear phase, but I used minimum-phase here just to make the difference more obvious.

 

Now if we inspect the difference in frequency domain...

 

Here's the spectrum of the non-ringing upsampled data:

na.png

 

And here's the spectrum of the apodizing minimum-phase upsampled data:

a.png

 

You can see that the non-ringing filter leaks strong high frequency images, while the apodizing minimum-phase one has high attenuation of those spurious images.
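
For anyone who wants to play with this, here is a rough Python/SciPy re-creation of the experiment. The filters are generic firwin designs, not the ones actually used for the plots above, and the "apodizing" stand-in is linear phase rather than minimum phase; only the leakage/ringing comparison carries over:

```python
import numpy as np
from scipy import signal

# Dirac pulse at 96k, converted down to 44.1k so the source carries the
# ringing of that conversion's anti-alias filter.
x96 = np.zeros(4096)
x96[2048] = 1.0
x44 = signal.resample_poly(x96, 147, 320)            # 96k -> 44.1k

# 4x upsampling to 176.4k with two stand-in filters:
fs_out = 176_400
short_taps = signal.firwin(15, 22_050, fs=fs_out)    # very short, "non-ringing", leaky
apod_taps = signal.firwin(2047, 20_000, fs=fs_out)   # corner below 22.05 kHz, high attenuation

non_apod = signal.resample_poly(x44, 4, 1, window=short_taps)
apod = signal.resample_poly(x44, 4, 1, window=apod_taps)

def spectrum_db(x, fs):
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return f, 20 * np.log10(np.abs(np.fft.rfft(x)) + 1e-12)

# Plotting non_apod and apod in the time domain shows the first keeping the
# source's 22.05 kHz ringing while the second replaces it; spectrum_db() of
# each shows the strong ultrasonic images of the leaky filter versus the
# heavily attenuated images of the apodizing-style one.
```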

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

a) What you mean by a "non-ringing filter." Are you just referring to a linear-phase filter?

 

It is generally used by manufacturers to refer to anything that has only a few cycles in the impulse response, something like fig 3 here:

Ayre Acoustics QB-9 USB DAC Measurements | Stereophile.com

 

In my example it was polynomial interpolation with a small number of steering points.

 

Here's another example, a FIR filter with some ringing, but a very small amount (< 10 cycles in the impulse response). It doesn't have much impact on the original ringing either, but it cuts off the ultrasonic images a bit more effectively and doesn't have roll-off in the 10-20 kHz region (more aggressive than Ayre's "listen" filter):

non-apodizing-2.png

na-2.png

 

b) The distinction between a minimum-phase filter and one that is both minimum-phase and apodizing. As in, how are the two constructed and what do their respective impulse, passband ripple, and response curves look like.

 

An apodizing filter looks just like any linear-phase or minimum-phase filter, but it needs to be fairly efficient. Essentially it needs to be able to filter out the original anti-alias filter's response.

 

Despite the fact that "minimum-phase apodizing filter" is thrown around as one phrase by a number of manufacturers when they are just offering a minimum-phase filter, everything that I have read says the two are different, and I desperately want to get a clear explanation of the difference. And how might one create/simulate a true "apodizing" filter for D/A conversion.

 

An apodizing filter in itself doesn't have anything to do with minimum phase; it can be linear phase, minimum phase, or anything between the two. It's defined by parameters other than phase. But commonly people want to use a minimum-phase filter as the replacement response.

 

I don't want to go into design specifics, and it would involve maths anyway...

 

The term "apodizing" (literally, removing the foot) was first borrowed from optics and applied in reference to digital filters by Meridian/Peter Craven in his AES paper "Controlled Pre-Response Anti-Alias Filters for Use at 96kHz and 192kHz." I have not purchased the download of that paper to best understand the concept.

Since this discussion is getting around to the concept of trying to mitigate the effects of ringing embedded by the A/D converter's anti-aliasing filters, that is what Meridian first claimed to do with their "apodizing" filter, and I wish to learn more.

 

I think it's a bad name for a filter that replaces the original filter's impulse response. I use it only because it has become a common way to describe such behavior. I don't think there's anything special about "apodizing filters" that would be worth an AES paper.

 

There are two alternative sources for the ringing: the A/D converter, in case the recording was made at the final sampling rate (for example 44.1k), or software SRC, in case the recording was made for example at 96k and then converted to 44.1k at the mastering stage. iZotope is an example of such a converter used for mastering.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


To put it short:

 

1) A non-apodizing over-/up-sampling filter doesn't change the overall ringing in any significant way. It only defines the frequency content (or its absence) above the source Nyquist. Ringing from the source material dominates the ringing behavior.

 

2) An apodizing over-/up-sampling filter replaces the ringing from the source material, partially or wholly, with its own. The frequency content (or its absence) above the source Nyquist is also defined by the filter. The filter dominates the ringing behavior.

 

Thus, regarding the Ayre QB-9: with the "measure" filter Ayre's filter partially dominates the ringing behavior, while the "listen" filter doesn't practically change the ringing behavior. Both have their own amount of ultrasonic leakage, from fairly low for the "measure" filter to very high for the "listen" filter. (As you can see, this is in line with what JA of Stereophile wrote about the two.)

 

(1) as a design objective doesn't impose any limitations on the filter design, while (2) imposes certain boundaries on it.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

The highest frequency extension vs. reduction of ringing (from the original recording) and aliasing.

 

That's where the design algorithms step in: designing a design algorithm that gives you the optimal result, minimal ringing without harming the top octave.

 

The shape and properties of the transition band also have an impact on the sound.

 

Perhaps one might use the apodizing filter for 44.1 material, and non-apodizing for higher res.

 

Why? For hi-res it's much less challenging to squeeze it in nicely because of the extra bandwidth...

 

Wonder if any 44.1 material might have an apodizing ADC filter used on it, so it might benefit less from an apodizing DAC filter.

 

That is also possible, but not very common.

 

You can check out ADC datasheets and also the SRC data available for various pieces of software, and decide on suitable filter parameters from that. You can also just test it with various ADCs by using an AWG to generate a suitable test pulse for input. You already cover quite a lot if you check it against ProTools. :)

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

My thought was this might be like the 44.1 apodizing ADC filter case: Perhaps less ringing or chance of aliasing to start with.

 

I'm not sure I follow... Aliasing is only an issue for oversampling in certain circumstances, and that doesn't depend on the apodizing behavior. Most likely an apodizing filter will have less aliasing and fewer digital images out of necessity.

 

If the ADC has less ringing/aliasing or is apodizing, then the situation is most likely not affected at all, regardless of whether the oversampling filter is apodizing or not...

 

If the ADC let aliases through already, there's not much you can do about it later; it is already mixed with the true content.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Right, which is what I thought the situation might be with hi res - why should there be ringing or aliasing? Or is there commonly ringing, for example, in hi res files?

 

Yes there is; how much HF roll-off vs ringing the ADC has at higher rates depends on the chip. But usually ringing at 2x rates is quite close to that at 1x rates, because manufacturers want to extend the HF response close to 44.1/48k. At 4x rates ADC chips typically start rolling off early and have less ringing.

 

In the case of a mastering-stage conversion from 4x (or higher) to 2x rates, the amount of ringing is usually the same as for 1x rates, to keep the extended bandwidth.

 

Of course, all else being equal, every time you double the sampling rate, the length of the ringing (in time) is halved and the frequency of the ringing is doubled.

 

So as a summary, a filter that is apodizing at 1x rates tends to become non-apodizing at 4x rates with regard to ADC chip filters.
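
A trivial way to see the time-scaling point: take the same normalized design at a 1x and a 2x rate (a generic firwin design, purely for illustration) and look at how long the impulse response lasts and where it rings:

```python
from scipy import signal

# Identical normalized design at two rates: same tap count, so the ringing
# spans half the time (and sits an octave higher) when the rate doubles.
taps = signal.firwin(255, 0.9, window=("kaiser", 9.0))   # cutoff at 0.9 x Nyquist
for fs in (44_100, 88_200):
    print(f"{fs} Hz: impulse response spans {1000 * len(taps) / fs:.2f} ms, "
          f"ringing near {0.9 * fs / 2 / 1000:.1f} kHz")
```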

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

As to Damien's recommendation of steep filter slopes, most engineers seem to believe that 100% suppression of alias energy is very important, while many who design filters based on subjective listening observations generally seem to prefer to allow some alias energy through in favor of reduced ringing.

 

And I think the real art is in finding a design method that minimizes the energy of ultrasonic images (these are not really aliases), while at the same time minimizing ringing, plus keeping the top octave untouched... IOW, getting as close as possible to the impossible.

 

The rest is about finding nice parameters for the design, plus how the filter is applied (recursive, single-pass, etc.)...

 

(in my earlier posting you can see the response of the min-phase filter I personally use for listening)

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

My main point was to be clear that an "apodizing" filter is just one that starts at a frequency below Nyquist (the way it is sometimes thrown about I was at first unsure).

 

That definition doesn't cover it yet. It is a filter that filters a filter. So, filter A is apodizing if filtering the impulse response of filter B with A results in the impulse response of A. Thus it depends on all the parameters, rather than just the corner frequency.
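
One way to read that definition in code: convolve a candidate filter A with a stand-in for the earlier filter B and check whether the result still looks like A's own impulse response. A rough sketch with hypothetical firwin designs (B modelling a 22.05 kHz anti-alias filter, everything expressed at 176.4 kHz, linear phase for simplicity):

```python
import numpy as np
from scipy import signal

fs = 176_400
hB = signal.firwin(511, 22_050, fs=fs)        # stand-in for the source chain's anti-alias filter
hA_apod = signal.firwin(2047, 20_000, fs=fs)  # corner below 22.05 kHz, high attenuation
hA_leaky = signal.firwin(31, 22_050, fs=fs)   # short, leaky "non-ringing" type

def replace_error(hA, hB):
    """Relative L2 error between conv(hA, hB) and hA itself, centers aligned."""
    y = np.convolve(hA, hB)
    ref = np.zeros_like(y)
    start = (len(hB) - 1) // 2                # align the linear-phase centers
    ref[start:start + len(hA)] = hA
    return np.linalg.norm(y - ref) / np.linalg.norm(ref)

print("apodizing candidate:", round(replace_error(hA_apod, hB), 3))   # small -> A dominates
print("leaky short filter :", round(replace_error(hA_leaky, hB), 3))  # large -> B's ringing survives
```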

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

It has been a long time since I visited your web site. Reading it just now I was surprised to see that you offer an OS X version. Downloading the trial now!

 

Just as a heads-up, it currently supports only network audio adapter output ("NAA"). CoreAudio support didn't make it into the 3.0 release schedule, but it's planned.

 

I am intrigued by what you refer to as delta-sigma modulators for upsampling to 1-bit/24.576 MHz. How does one feed that to a DAC? Or, like a DSD's lower rate bitstream, does one just run it through a low-pass filter of some kind--no "DAC" needed?

 

It is essentially "DSD512", but at a 48k base rate. So it can be converted the same way as DSD, but since it's at a higher frequency you don't need as steep a low-pass filter, or alternatively you get much less out-of-band noise. DSD512 should work with a DAC like the Yulong DA8 (to be verified) or with a suitable DIY converter. Of course the "real DSD512" is also supported, at 22.5792 MHz.

 

I'm confused about your Network Audio Adapter. Is that hardware or software? And why is it listed as a system requirement only for the Mac OS X version of HQPlayer? Please explain more about that.

 

It is a minimal Linux installation on a PC or an ARM-based platform like CAPS/FitPC/Alix/SolidRun CuBox, with a small server process that sits between the network and the audio device(s), like a USB DAC, the built-in S/PDIF of the CuBox, or the I2S of some other ARM platforms. I offer the necessary server process "networkaudiod", while the Linux installation (Debian Wheezy) needs to be done separately.

 

HQPlayer can use these as audio output devices, just like any local device. Output to these "networked DACs" is included on all platforms in addition to local playback, but on Mac it is currently the only option... The good side is that it bypasses CoreAudio completely.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Hello, does the discussion here apply only to upsampling, or can downsampling also benefit from the suggestions given here?

 

For downsampling (decimation) you need to pay special attention to the filter parameters, because any leakage will cause unsuppressed frequencies above the target Nyquist frequency to alias (fold down) into the audio band and get mixed with the real content. Changing/removing filter ringing from the previous stage is not an issue, because it will "always" get replaced in downsampling.
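
A quick Python/SciPy sketch of why leakage matters so much more when decimating. The 30 kHz tone and the two filters are hypothetical stand-ins (not iZotope's filters); the point is only that whatever survives above the target Nyquist folds down into the audio band:

```python
import numpy as np
from scipy import signal

# A full-scale 30 kHz tone in 176.4k material lies above the 22.05 kHz target Nyquist.
fs_in, fs_out = 176_400, 44_100
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 30_000 * t)

good = signal.resample_poly(x, 1, 4)                                    # default, fairly steep filter
leaky = signal.resample_poly(x, 1, 4, window=signal.firwin(31, 1 / 4))  # short, leaky decimation filter

def level_db(y, f, fs):
    """Level of the component at frequency f, relative to a full-scale tone."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    bin_ = np.argmin(np.abs(np.fft.rfftfreq(len(y), 1 / fs) - f))
    return 20 * np.log10(spec[bin_] / (len(y) / 4) + 1e-12)

# The unsuppressed part of the 30 kHz tone folds down to 44.1 - 30 = 14.1 kHz,
# inside the audio band, where nothing downstream can remove it anymore.
print("alias at 14.1 kHz, default filter:", round(level_db(good, 14_100, fs_out), 1), "dB")
print("alias at 14.1 kHz, leaky filter  :", round(level_db(leaky, 14_100, fs_out), 1), "dB")
```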

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 1 month later...
Upsampling as performed by a filter does not change the data from the file on the way to the DAC chip. That is, the DAC receives the data contained in the original file. With sample rate conversion, the DAC receives altered (converted) data. Two *very* different animals as far as I'm concerned.

 

"DAC" here can mean three different things:

1) A device, box that contains bunch of electronic components, connected to a computer or some other source

2) A chip that is inside (1) and takes digital data in and converts it to analog form

3) The actual conversion stage that converts digital symbols from digital to a signal in analog domain

 

Now any modern chip (2) has at least three functions inside:

1) Digital up-sampling/oversampling/interpolation filter

2) Delta-sigma modulator, converting output from (1) to high-rate low-bit output for (3)

3) Actual conversion stage converting output of (2) to analog domain

 

Of these functions, (1) and (2) can be moved into computer software to a varying extent, depending on the particular device. Regardless of where (1) and (2) are performed, however, function (3) inside the DAC chip never sees the original data you have in a file, in the case of PCM. If your files contain DSD then it is much more likely that the data goes straight to (3) - one of the leading ideas behind DSD.
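
As a very rough sketch of how functions (1) and (2) can live in software, here is a toy Python example: scipy's generic interpolation filter for (1) and a first-order, 1-bit delta-sigma modulator for (2). Real DAC chips and player software use far more sophisticated, higher-order and multi-bit modulators; this only illustrates the division of labour:

```python
import numpy as np
from scipy import signal

def oversample(x, up):
    """Function (1): digital interpolation/oversampling filter (scipy's generic design)."""
    return signal.resample_poly(x, up, 1)

def first_order_dsm(x):
    """Function (2): toy first-order delta-sigma modulator producing a +/-1 bitstream."""
    out = np.empty_like(x)
    integ, y = 0.0, 0.0
    for i, sample in enumerate(x):
        integ += sample - y            # integrate the error against the previous output
        y = 1.0 if integ >= 0 else -1.0
        out[i] = y
    return out

# A 1 kHz tone at 44.1k, oversampled 16x and modulated to a 1-bit stream.
fs = 44_100
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs // 10) / fs)
bits = first_order_dsm(oversample(x, 16))

# Function (3), the actual conversion stage, would turn `bits` into an analog
# signal; in software we can only low-pass filter the bitstream to check that
# the 1 kHz tone is still there underneath the shaped quantization noise.
```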

 

It would be nice if you could express in more detail what exactly you mean by upsampling not altering data while sample rate conversion does? To my knowledge that can only be the case if both operate under certain carefully selected, strict conditions in order to meet this particular criterion...

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

If I understood him properly, the PCM stream that enters an oversampling converter (at least *his* design but I would think others too) is the same as what is being played from the DAW (or software player).

 

Only in case it wasn't processed on the way... ;)

 

Again, if I understood him correctly, he was saying a sigma-delta DAC is in a way a hybrid of digital and analog technology and that it is really an analog process with a digital feedback loop of sorts. The converter generates a very high frequency square wave that is pulse width modulated with a feedback path - the width of the pulses is set by whether the voltage of the output is higher or lower than the desired voltage.

 

No, the delta-sigma modulation process is entirely digital (DSP operation).

 

Apparently, this can all be done in the analog domain but there are improvements possible with feedback in the digital domain.

 

No, this is only the case with ADC side...

 

He told me it is very difficult to get good performance from a single-bit modulator, so he uses multibit DACs, which means there is some sort of synchronous SRC from 24-bit at the source sample rate Fs to 8Fs, so, internally, when running 192k into the converter, the multibit modulator is running at 1.536 MHz.

 

Wrong; the synchronous SRC performs the operation to 8fs, where fs is the 1x rate (44.1/48) used as the reference. So the target rate is 352.8/384 kHz, and thus 44.1/48 is multiplied by 8, 88.2/96 by 4, and 176.4/192 by 2.

 

This is also very apparent from spectrum analysis results from DAC outputs.

 

This "8fs" rate is quite low, but the DAC chips don't have enough master clock cycles per input sample to do better. While the delta-sigma modulator needs to operate at higher frequency, typically 5.6/6.1 MHz (same as DSD128). So in order to achieve this higher rate from the 8fs rate without DSP resources, sample-and-hold type "interpolation" by factor of 16x is used. This is just simply copying the same sample 16 times in row.

 

The type of SRC done within the interpolator/modulator is completely synchronous, so it works much like offline SRC. He told me on-the-fly SRC is generally asynchronous, so it does rate and ratio estimation.

 

Now, HQPlayer has 14 up-sampling filters, of which just one is asynchronous. All 13 others are synchronous. All six oversampling filters offered for delta-sigma modulated output are also synchronous (target rate 64fs - 512fs).

 

All performed on-the-fly.

 

Variation in that estimate causes low-level time and frequency dependent distortion that he tells me is likely the source of the brightening and hardening I hear in comparison to off-line SRC -- if it is done well (!) -- and worse if not done well.

 

That's the case with ASRC, but nobody was specifically speaking of ASRC here?

 

According to him, synchronous SRC can be made essentially perfect, depending on how many resources one is willing to devote to it. Properly implemented, it is essentially the same as simply digitizing at a different sample rate. When upsampling, new (redundant) data is interpolated. If the upsampling rate is an integer (as with the DACs he uses), the original data stream remains embedded and untouched in the upsampled stream.

 

The funny part is that a conversion from 44.1k to 192k can be A) synchronous, B) integer, and C) all the samples can still be completely new. Likewise, a conversion from 44.1k to 176.4k can be synchronous and integer, and still all the samples may be completely new. Only with certain restrictions can every Nth sample be the same as the source one, and even then there are, for example, 75% new samples and 25% original.
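
The 25%/75% case corresponds to a so-called Nyquist (L-th band) interpolation filter, whose impulse response is exactly zero at every multiple of the interpolation factor except the centre tap. A small sketch with a hypothetical windowed-sinc kernel shows the original samples surviving a 4x interpolation essentially unchanged, while a general filter design would not have this property:

```python
import numpy as np

up = 4
n = np.arange(-64, 65)
# 4th-band ("Nyquist") kernel: exact zeros at n = +/-4, +/-8, ... and 1 at n = 0.
kernel = np.sinc(n / up) * np.hamming(len(n))

x = np.random.default_rng(0).standard_normal(1000)      # pretend 44.1k source samples
stuffed = np.zeros(len(x) * up)
stuffed[::up] = x                                        # zero-stuff to 4x the rate
y = np.convolve(stuffed, kernel)[64:64 + len(stuffed)]   # filter, remove the group delay

# Every 4th output sample is (numerically) the untouched source sample,
# the remaining 75% are newly interpolated values.
print("max deviation of every 4th output sample:", np.max(np.abs(y[::up] - x)))
```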

 

In fact, you can drop the new interpolated samples from the stream without any additional filtering and still have a valid stream at the lower sample rate.

 

Yes, HQPlayer also has a couple of such filters, but this doesn't mean anything in practice, because the delta-sigma modulator will come along and create an entirely different kind of samples anyway before the data reaches the conversion stage.

 

At the very least, this explains (to me) why I've never responded to oversampling in a converter in the same (negative) way I respond to real-time sample rate conversion using even the most transparent SRC algorithm in my experience.

 

I can implement exactly the same oversampling filters in software as are implemented in the DAC chip, so the point is moot.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Looking over some of the communications I received from him, I believe I told it the way he told it to me. In view of that, forgive me if I choose to take his word for it.

 

I don't mind.

 

DACs are easy to study, and manufacturers publish quite good information on the theory of operation. I'll also keep posting more measurement results over time (for example, the 8fs thing is quite apparent from the results when you see aliases around every multiple of 352.8/384 kHz).

 

For hardware, I've done quite a bit of studying over the past 25 years or so. For software I can only speak of my own algorithms, designed and implemented by me personally.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

The way Barry's explanation reads, that appears to be exactly what the Metric Halo's designer was doing, comparing the MH to an ASRC DAC (like the Benchmarks).

 

That I understand, but I feel that it's out of context here. Or did I fall out of discussion context somewhere? (entirely possible too)

 

Upsampling/oversampling/interpolation/etc. is not as such related to ASRC. But ASRCs are usually used primarily to fight jitter, not just to change the sampling rate.

 

When there's an ASRC in the chain, as is the case with certain DACs (like many from Musical Fidelity and many Sabre-based ones), it is usually good from a performance point of view to feed these with the constant rate they actually use for output. Then the converter does just a 1:1 conversion with minor adjustments to reduce jitter. The less jitter there is on the input of this 1:1 conversion, the less modification of the original data is performed...

 

In these kinds of cases it also makes sense to upsample before the device, so the alterations there are reduced.

 

For example, many AVRs run internally at a static 192 kHz sampling rate, because all the DRC and crossover processing is designed only for that rate (which is also commonly used on BD). And the internal ASRCs or SSRCs performing input conversion to that rate are less than ideal, so it's better to hit the 1:1 case.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Yeah, I mentioned ASRC just to give some context and very general understanding around what Barry had said. It also helps explain what Boris is hearing, since his Benchmark will do its ASRC (to 110kHz?) regardless of whether software SRC takes place upstream.

 

Ahh, that's the part I somehow missed! The older Benchmark DAC1 had such SRC (was it ASRC or SSRC?), but I have not found exact information on what the DAC2 does. DSD would probably bypass it...

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I am still not entirely clear on what means Miska uses to send a DSD256-rate data stream (created by software DSM of PCM files) to a DAC, and via what interface. I do see that the exaSound e20 takes DSD256, but only when using their own special Windows ASIO driver. Is that via USB, or does it require an S/PDIF output card from the computer? Obviously when using their own driver and DAC, DoP is not required. What driver does HQPlayer use when sending out a high-rate DSD stream?

 

We are getting pretty OT for this thread now...

 

Currently it practically means an ASIO driver on Windows, no DoP. Either exaSound or some Amanero-based device. I just have the Amanero board for my own experiments (work in progress). On Windows I always prefer a native ASIO driver when one is available.

 

Another option is to use the fresh native DSD support in Linux, but this is another work-in-progress item.

 

And is an ESS-based DAC really the best place to send such a potentially artifact-free stream? It likely could be better served by a simpler architecture for the silicon side of the process.

 

For some quick-and-dirty testing purposes I was planning to combine a PCM1792A configured to static DSD mode with the Amanero board, until I get to a better state with the discrete DAC experiments @ DSD64/DSD128.

 

Edit: And one alternative was Antelope Zodiac Platinum.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

