
Digital Filtering



Is anyone interested in a spreadsheet that (hopefully) shows a filter in operation?

A brief word on the current buzzwords of apodising and asymmetric filters - NOTE, as ever, I will attempt to be agnostic as to the various arguments...

If people can't see the previous spreadsheets, now would be the time to say "You know what, you really are an idiot!" - I'm guessing there is interest in general?

 

your friendly neighbourhood idiot

 

 

 

 


 

 

"As an example, if we say the output from our filter is 0.001 * the new sample + 0.999 * the last output, we will have something that will give out a smaller response for quickly moving things ( as the new data is only a small proportion ), but slower moving things will eventually make themselves felt.

 

This approach, unfortunately, has its drawbacks - because the output always has some "memory" of what has happened before,..."

 

Seems to me, if your example is near 'real world', the result would be almost entirely 'what had happened before'.

 

Perhaps I lost the plot? I didn't grok the reason to 'give out a smaller response for quickly moving things'

 

As always, thanks for your thoughtful posts, they are much appreciated.

 

clay

 

 


So, the output is almost entirely "what was there before" - so for slower moving things ( imagine a DC level for a moment ), there is "more" of it to be there before. With DC ( as an example ), if you were to go from 0 to 1 on the input, the first output would be 0, the next 0.001, the next 0.001999 - if the input stays as it is, the output will catch it eventually, and the proportion of the feedback determines how fast something can go,
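For concreteness, here is a small Python sketch of that recursion - my own illustration of the numbers above, not the actual spreadsheet formulae:

```python
# One-pole IIR: output = alpha * (new sample) + (1 - alpha) * (last output).
def one_pole_iir(samples, alpha=0.001):
    outputs = []
    y = 0.0  # the filter's "memory" starts at zero
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        outputs.append(y)
    return outputs

# Step the input from 0 to 1: the output creeps up as 0.001, 0.001999, ...
# and only "catches" the input after many samples.
step = [1.0] * 10
print(one_pole_iir(step)[:3])
```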

 

http://spreadsheets.google.com/ccc?key=0AsS9Unc6TwPLdG5uY3k0dFltSHI0aVB4Z2w4RnBLZEE&hl=en

 

a quick spreadsheet - notice how changing the frequency changes the gain?

In this case, it's 0.1*new + 0.9 *old
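In Python terms, a hedged sketch of what the spreadsheet computes - driving the 0.1/0.9 filter with a sine and measuring the settled output amplitude at each frequency:

```python
import math

def one_pole(samples, alpha=0.1):
    """y = 0.1 * new + 0.9 * old, as in the spreadsheet."""
    y, out = 0.0, []
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def gain_at(freq):
    """Drive with a sine (freq in cycles per sample) and measure
    the output amplitude after the filter has settled."""
    n = 2000
    xs = [math.sin(2 * math.pi * freq * i) for i in range(n)]
    ys = one_pole(xs)
    return max(abs(v) for v in ys[n // 2:])

# The gain falls as the frequency rises:
print(gain_at(0.01), gain_at(0.1), gain_at(0.4))
```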

 

 

your friendly neighbourhood idiot

 


 

 

"a quick spreadsheet - notice how changing the frequency changes the gain?"

 

If I'm reading the spreadsheet right, the gain goes down as frequency rises.

 

Smells 'analog-like'?

 

It seems like this type of filter would not have the best transient response?

 

Clay

 

 


The gain does go down as the frequency rises - this is a trivial example, as it's only first order ( if you were to do this properly, you'd have a more complex structure with multiple feedback paths ), so it basically droops from DC onwards, whereas a filter we'd like would be flat in the audioband before drooping. Interestingly enough, a filter similar to this but backwards is useful for removing DC in the real world...
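On the "backwards" DC-removal point: one common real-world form is the one-pole DC blocker sketched below. This is a hedged guess at the filter meant here - the exact structure may differ:

```python
import math

def dc_blocker(samples, r=0.995):
    """High-pass counterpart of the one-pole low-pass:
    differentiate the input, then leak the pole back in."""
    out, x_prev, y = [], 0.0, 0.0
    for x in samples:
        y = x - x_prev + r * y
        x_prev = x
        out.append(y)
    return out

# A sine riding on a 0.5 DC offset: once settled, the offset is
# removed but the sine passes through almost untouched.
sig = [0.5 + math.sin(2 * math.pi * 0.05 * n) for n in range(4000)]
tail = dc_blocker(sig)[3000:]
print(sum(tail) / len(tail))  # mean of the settled output
```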

 

The transient response is interesting because, as I mentioned before, everything happens later on ( i.e. a transient will still be affecting the output after it has passed through the input ) - this is not necessarily the case with FIR filters, which we'll get onto in a bit. The key thing about transients is that, after a low pass filter ( which we want ), they cannot exceed the bandwidth limitations of our system...

 

your friendly neighbourhood idiot

 


If I may (and I hope not to interfere with the thoughtful setup of another great lesson) ...

 

I think - at this stage - it is good to recognize the analogy with ringing. So, an IIR filter post-rings (echoes) precisely because the output reuses parts of earlier outputs.

An IIR filter does not pre-ring, because (... just think this over yourself and relate it to the earlier posts in this thread).

 

The objective with any filter is to minimize the ringing, because ringing is no virtue at all, just a side effect of how it works. Ah, careful, ringing is not even a side effect but merely the means by which that particular filter works. But the ringing is still no virtue ...

 

... or maybe it is (this is for later).

 

Peter

 

PS: I can't get that graph moving a bit.

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Right, so as Peter has pointed out, an IIR can only ever have echoes from the past, due to its output depending on its own previous outputs. The filter demonstrated won't ring, as it's heavily damped...

 

So, what do we have that is not an IIR? We have a filter called an FIR, or finite impulse response. How is this different? Well, there is no feedback - instead, it is realised as a set of "weighted delays" - so you have a memory, which is as long as your filter, and each sample makes its way through the filter, one sample at a time, being multiplied by a coefficient at each point. The output of the filter is the sum of all the delays, or taps. This means that once a sample has traversed the length of the filter, it is gone forever - this is why it is finite.

So how does this actually do any filtering? Well, the act of performing this weighting and summing does something quite peculiar - the response of those delays gets transferred onto the data being passed through it. It so happens that mathematicians know a function that is perfectly flat through one range of frequencies, and perfectly attenuated at another - the famous "sinc" function, which is plotted as sin(x)/x
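A minimal FIR sketch in Python - my own illustration, using a windowed sinc low-pass, which is one standard way to build the coefficients (the spreadsheet's exact values may differ):

```python
import math

def sinc_lowpass_taps(n_taps=63, cutoff=0.25):
    """Windowed-sinc coefficients; cutoff is a fraction of the sample rate."""
    mid = (n_taps - 1) / 2
    taps = []
    for i in range(n_taps):
        x = i - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        # Hann window: tames the truncation of the infinite sinc
        h *= 0.5 - 0.5 * math.cos(2 * math.pi * i / (n_taps - 1))
        taps.append(h)
    return taps

def fir(samples, taps):
    """Each sample walks through the delay line, one step per sample;
    the output is the sum of (coefficient * delayed sample) over all taps."""
    delay = [0.0] * len(taps)
    out = []
    for x in samples:
        delay = [x] + delay[:-1]
        out.append(sum(c * s for c, s in zip(taps, delay)))
    return out
```

A sine well below the cutoff emerges at essentially full amplitude; one near the Nyquist limit is strongly attenuated.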

I've attached a simple FIR filter, done in google spreadsheets, where you can adjust the length of the filter to up to 64 taps, on a sine wave of a variable frequency, together with an example of a hold of the same sinewave.

Have a play, and let me know what you think!

 

http://spreadsheets.google.com/ccc?key=0AsS9Unc6TwPLdHI1ajd3ZU9ScmJKOWYxNjdvLXhObUE&hl=en

 

EDIT: if the above needs you to login to google docs, try this one

http://spreadsheets.google.com/pub?key=tr5j7weORrbJ9f167o-xNmA&single=true&gid=0&output=html

 

your friendly neighbourhood idiot

 


Haven't had time to ... err ... adequately filter all this information but thought I'd bump the thread 'cos I think it deserves it :)

 

One question though: what we seem to do here is to create a stepped wave and then filter this wave. Mathematically it should be possible to create a sine wave from the sample points. As one assumes higher frequencies are pre-filtered at the AD stage, we should be able to take three(?) points and interpolate a sine between them, as we know that the line is monotonically increasing, decreasing or turning the corner... (Maybe it is more than three points).

 

Is this rubbish, or just not possible with electronics, or does one of the existing filters perform the same function but in a different way?

 

To put more simply - is it not possible to directly re-create the wave so no filtering is required?

 


The DAC just sees the points, so a simple ( NOS/filterless ) DAC will output sample 1, hold the level there and then change when the next sample comes along. For the DAC to recreate the analogue signal between the points, you're sort of correct - we don't know it's a sine wave as such, but we do know it's bandlimited - so the act of filtering means we deduce the bits in between the points we do know - the filter tells us the only values they can be

 

A simple approach ( which has been used ) is called a linear interpolate - in this case, we can guess the mid-point between samples by a simple average. This (sort of) works, but is an appalling filter...

Here's another spreadsheet, showing the samples as dots - you'll see how a linear interpolate seems to make sense for a signal much lower than the sample rate, but not so much as we get anywhere near,
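You can also see the linear interpolate fall apart numerically - a quick hedged sketch comparing the two-sample average against the true mid-point of a sine:

```python
import math

def midpoint_error(freq):
    """Worst error of the two-sample average vs the true value
    half-way between samples; freq is in cycles per sample."""
    worst = 0.0
    for i in range(200):
        a = math.sin(2 * math.pi * freq * i)
        b = math.sin(2 * math.pi * freq * (i + 1))
        true_mid = math.sin(2 * math.pi * freq * (i + 0.5))
        worst = max(worst, abs((a + b) / 2 - true_mid))
    return worst

# Fine for signals far below the sample rate, poor as we get near it:
print(midpoint_error(0.01), midpoint_error(0.2))
```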

 

http://spreadsheets.google.com/pub?key=tgYWZLfBGIlNA6_nATvjyQQ&single=true&gid=0&output=html

 

This is the kind of counter-intuitive thing - to "connect the dots" the act of filtering does it for you!

 

your friendly neighbourhood idiot

 


I had a look at Sheet2 on your previous sheet (after copying it I can see the formulae) but it is quite difficult to see what is actually going on. So maybe some questions (I couldn't work out the answers to these from the wikipedia page, which I thought may help).

 

The weights in the filter - how are these related to sinc() and to the Kronecker delta function? Is the weight zero if the sample is equal to the previous sample (i.e. the sample is effectively discarded)?

 


because otherwise people break things :)

 

Since you're interested, the spreadsheet is slightly more complicated than first appears, so here we go:

A2:A66 is the "tap number", used to establish where the last tap is

B2:B66 is a counter that is used to determine sinc

C2:C66 is the sinc function itself, which forms the coefficients

F2:F200 or so is the input to the filter - because we're doing an interpolate by two, we perform what's known as "zero stuffing" - the filter fills in these

G2:G200 is the output of the filter ( the sum of ( coefficients * samples ) )

I2:I200 is the stair-stepped, or filterless DAC output
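Zero stuffing itself is trivial - a quick sketch (the helper name is my own):

```python
def zero_stuff(samples, factor=2):
    """Insert factor - 1 zeros after every sample; the FIR filter
    that follows then 'fills in' these zeros with interpolated values."""
    out = []
    for s in samples:
        out.append(s)
        out.extend([0.0] * (factor - 1))
    return out

print(zero_stuff([1, 2, 3]))  # [1, 0.0, 2, 0.0, 3, 0.0]
```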

 

I wouldn't worry about the Kronecker delta

 

your friendly neighbourhood idiot

 

 

 


This is the kind of counter-intuitive thing - to "connect the dots" the act of filtering does it for you!

 

Let's say it is one of these days, and I can't get it. But, I sure do want to know;

 

We are talking digital filtering, and so far I did not see the requirement to upsample.

Otoh, I guess this is just needed after all, but if so, we can't inject more guessed samples than the sample rate allows for. IOW, we can try to reconstruct a near 100% sine wave at 20000Hz, but my brain won't allow me to think that can happen. Not today. Not with a *digital* filter ...

 

? Peter

 


Peter,

 

if you remember, the idea of digital filtering is to give the analogue filter an easier job to do - for the analogue filter to roll off gently at, say, 80kHz, and for there to be no images, the DAC needs to be running at over 160k, which is why you need the filter...

 

your friendly neighbourhood idiot

 


Hi i_s, thanks.

 

What you are basically saying is that the "necessity" of oversampling demands the filter we are talking about here, correct ?

 

So what are your ideas about not having an analogue filter at all ?

Do you think that

 

a. upsampling is not necessary because we can create a steep digital filter anyway (which will have a fair amount of ringing);

b. upsampling is still necessary because otherwise the ringing will be too high but

c. we better don't apply b. because it creates more anomalies than it solves;

d. better do nothing and swallow a pile of aliasing;

e. nah, you just *need* the analogue filter always, because ...

 

By now I hope you have some difficulties with answering. That is, for me it becomes kind of hard to decide what is better, of course taking into account the downsides of the solutions. For a reference: my DAC can run without filter or with a 2 pole Bessel (analogue), and when without, I can feed it any kind of filtering software-wise, with or without upsampling (2x or 4x).

 

Of course it is allowed to say "I don't know", and I am only asking because I can't decide myself, but I tend to lean towards "no upsampling and no filtering at all". Net, the anomalies seem to be the least. Uhhm, the audible ones I mean.

Idiot, keep in mind it is completely allowed to call me a fool.

 

Peter

 


YFNI, Peter & all,

 

What is stored on the CD/computer is simply a list of numbers. These numbers physically represent points on a graph of air pressure versus time: the points are spaced at equal intervals (e.g. 22.676 microseconds for a CD). To listen to the music, we need to reconstruct the varying air pressure as a function of time. Since air pressure is a smoothly-varying physical function, we need to 'draw a line' between the points. This must be a smooth line, with no discontinuities, because the physics require it.

The actual 'drawing' of the smooth line is done by a combination of the 'hold' circuit on the DAC output (if fitted), an analogue 'reconstruction filter' (if fitted), the bandwidth-limited response of the amplifier(s) and cable(s), and the similarly bandwidth-limited response of the speaker/headphone, and the air in the room itself. There is ALWAYS some bandwidth-limiting going on after the DAC. (YFNI's first spreadsheet showed how 'smoothing out' steps corresponds with limiting the bandwidth).

Now, given a set of evenly-spaced points on a graph, it is possible to draw lots of smooth curves which pass through every point (interpolating functions). In fact there are an uncountably infinite number of such curves! What Nyquist showed was that there is precisely 1 (one) interpolating curve which has the dual properties that it both passes through every point and contains no Fourier components at any frequency greater than 1/2 the sample rate. This is a Useful Result. He also showed how to calculate this function - which is where the sin(x)/x comes from.
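That unique interpolating curve can be sketched numerically - a hedged, finite approximation of the sin(x)/x reconstruction (truncating the sum makes it slightly inexact):

```python
import math

def sinc(x):
    """The normalised sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Value at fractional sample time t: the sinc-weighted sum
    over every available sample (Whittaker-Shannon interpolation)."""
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

# A sine at 1/10 of the sample rate, recovered half-way between samples:
f = 0.1
samples = [math.sin(2 * math.pi * f * n) for n in range(400)]
mid = reconstruct(samples, 200.5)
print(abs(mid - math.sin(2 * math.pi * f * 200.5)))  # small truncation error
```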

There is, of course, a snag. This property only holds for an infinitely long sequence of points, and CDs aren't that long (even if they sometimes seem to be). The 'correct' interpolated value at some instant between two time samples depends not just on the value of the two nearest samples, but also on the value of EVERY sample in the infinite set.

However, the greater the time difference between the point of interpolation & the sample being considered, the smaller the influence of the sample on the interpolated value. Eventually, we can ignore the effect as being too small to bother about. In fact, rather than suddenly cutting off after a fixed number of samples, it is better to smoothly & carefully reduce their influence as the time difference increases.

This is the situation in the time domain. In the frequency domain, what happens is that our interpolation is now accurate only up to some limited frequency (20kHz in the case of the CD) and wrong above that frequency. We can live with this, PROVIDED that the original graph of air pressure versus time was also band-limited to 20kHz before being sampled. Then the only frequencies that could be affected by inaccurate interpolation aren't there in the first place - so no errors!

Now the business of interpolating between the DAC samples is quite tricky, and if it is not done properly, will introduce errors at frequencies below 20kHz; and since such frequencies are present in the source, we will be able to hear them. Hence designers prefer to do this interpolation under closely controlled conditions - right next to or within the DAC itself. (In fact the boxes usually called 'DACs' invariably include the interpolation/reconstruction function as well).

You can do this interpolation using an analogue filter circuit, but the performance requirements are very stringent, and meeting them, particularly accounting for component stability over time and temperature, is difficult (but not impossible). This is a true NOS DAC: a DAC running at the input sample frequency, followed by an analogue reconstruction filter.

Another approach is to digitally calculate additional samples, between each of the source samples, using the exact same interpolating function that would otherwise be implemented by the analogue filter. All we have done is to add extra points on the graph of air pressure versus time, making the same assumption about no signals above 20kHz. No new 'information' has been added. Except that now we have a set of samples at (e.g.) 176.4kHz that describe a graph which has no signal components above 20kHz. It is now much easier to design an analogue filter which interpolates correctly between these samples: it still has to be accurate only up to 20kHz, but gets four times as many input samples to interpolate from. This is an oversampling DAC.

Note that it is still a 20kHz bandwidth DAC - the oversampling has been used to move complexity from an analogue circuit into a number cruncher (DSP, FPGA, ASIC, etc) which may well be on the DAC chip itself.

 

Max

 

P.S. YFNI - sorry for muscling in on your thread!

P.P.S. In all the above I've said nothing about the number of bits: that's another story. As is the various bag of tricks that goes under the name of a delta-sigma DAC. Related to oversampling, but not the same.

 


@i_s

 

I get it now. Very interesting. Only really clicked when I did my own spreadsheet. Have some questions/observations for later if you don't mind (when not on iPhone).

 

@max

 

so is it true that theoretically one can accurately reconstruct a 20kHz wave using 40kHz sampling, but practically in a DAC one cannot? More to ask here, but basically: would software pre-upsampling give better results?

 


Harry,

 

so is it true that theoretically one can accurately reconstruct a 20kHz wave using 40kHz sampling, but practically in a DAC one cannot? More to ask here, but basically: would software pre-upsampling give better results?

 

You are asking if it is possible to reconstruct a 20kHz sine wave perfectly using 40kHz sampling? In theory yes: but only if you have an infinite number of samples.

 

Software can't help here, and as I tried to explain upsampling is merely a different route to the same end - it doesn't change the results

 

Max

 

 

 


Max said... "You are asking if it possible to reconstruct a 20kHz sine wave perfectly using 40kHz sampling? In theory yes: but only if you have an infinite number of samples."

 

Surely if you have an infinite number of samples you no longer have 40kHz sampling?? Or am I completely confused??

 

Eloise

 

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.


means an infinite amount of time, not sample rate...

 

Even then, in a perfect system, if you imagine you are sampling a 20kHz signal at 40kHz, you will have the same 2 points - it needs to be ever so ever so slightly slower...
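The same-2-points problem is easy to demonstrate (a hedged sketch; with zero starting phase those two points are both zero crossings):

```python
import math

# Sampling a 20kHz sine at exactly 40kHz: two samples per cycle,
# always at the same phase points. With zero starting phase,
# every sample lands on a zero crossing.
samples = [math.sin(2 * math.pi * 20000 * n / 40000) for n in range(8)]
print(samples)  # all (essentially) zero - the signal vanishes
```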

 

I'd like to thank Max for putting things more eloquently than I have managed!

 

As has been mentioned, the reason for Red Book working is that common theory dictates that human hearing stops at 20kHz, so we have no need to represent signals perfectly above this - the gap between 20kHz and 22.05kHz ( the highest possible frequency ) is called the transition band, where we don't mind if signals start to get smaller, as we don't have an infinite number of samples ( or time ) to do it...

 

For Peter, as you know I think a digital AND an analogue filter is required. I'm going to use the term oversample, as this is less marketing led, and is a requirement for a digital filter.

I think that if you have a red-book signal, it is optimal to oversample (with filtering) by say 4 times, so that the digital filter provides the analogue stage with a 176.4kHz signal, with no content between 20kHz and 88.2kHz. The analogue filter is then quite easy to design, rolling off gently from say 70kHz and reaching full attenuation by 150kHz to prevent the final images from the DAC.

If you don't have a filter, you will have images, and you are then reliant on how the stuff after your DAC imposes its bandwidth limitations ( which isn't designed with this in mind! ) - so, with one set of amplifiers and speakers, they may actually constitute an adequate filter for playback ( by chance ), but others may have strong IM products that combine in the baseband...

The ringing is part of the FIR structure - it's what removes the images, and is only excited by content that needs filtering. The audibility of it is a hot discussion at the moment, with no firm evidence one way or another.

 

Eloise,

in another thread, there seemed to be some confusion about downsampling - i.e. take a 24/96 recording and play it back at 24/48. You can't drop every other sample, as if you have any content above 24kHz, this will alias down - so, all the HF noise and harmonics will fold (or alias) into the audioband. You must do a filter first, to remove all energy above 24kHz, then drop every other sample,
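The fold-down is easy to show numerically - a hedged sketch (the 30kHz tone is my own example, not from the thread):

```python
import math

# A 30kHz tone in a 96kHz stream. Naively keeping every other sample
# gives 48kHz data whose Nyquist limit is 24kHz - so 30kHz must alias.
fs, tone = 96000, 30000
samples = [math.sin(2 * math.pi * tone * n / fs) for n in range(1000)]
decimated = samples[::2]

# The result is indistinguishable from a (phase-inverted) 18kHz tone
# sampled at 48kHz: 48 - 30 = 18kHz has folded into the audioband.
alias = [-math.sin(2 * math.pi * 18000 * n / 48000) for n in range(len(decimated))]
worst = max(abs(a - b) for a, b in zip(decimated, alias))
print(worst)  # tiny - the two sequences match
```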

 

your friendly neighbourhood idiot

 

 


 

 

"You can't drop every other sample, as if you have any content above 24kHz, this will alias down - so, all the HF noise and harmonics will fold (or alias) into the audioband. You must do a filter first, to remove all energy above 24kHz, then drop every other sample,"

 

So this means proper filtering is important in downsampling, just as in upsampling, and that probably Eloise was not hearing imaginary things. Great.

 

Thanks much, I_S.

 

Clay

 


Right, so I constructed my spreadsheet based on @i_s's - it may be right, it may be wrong, but the results looked good enough to tell me it was probably ok.

 

I found that as the frequency of the input sine wave increased, the error between the interpolated point and the "real" point, sin(2*pi()*frequency*time), grew to be about 5%. At lower frequencies it was about 1%. Going on from @Max's excellent post, it seems true that we cannot accurately recreate a 20kHz wave (neither can we accurately recreate a 10kHz wave, but we are closer) - this is obvious, I guess, but is news to me. (Maybe the analogue filter gets us closer, I don't know).

 

So to rephrase my previous question - could we do a better job of recreating the wave using a software pre-filter on the computer side? We could take as long as we wanted to do this to get better results - use more computationally expensive interpolation models - we have more time (as much as we care to spend) and a faster processor. I have some sort of memory that Weiss have software to do this - is this true? Would it make sense to have a "real-time" software pre-filter to over/upsample using better interpolation before the signal exits the computer?

 

Also - I am going to play around with quadratic based interpolation around three real samples to see if this gives closer/better results.

 

