
ADC/DAC kernel correction aka deconvolution


jabbr


1 hour ago, mcgillroy said:

Mikes have a character, and deliberately so. It's part of the palette and an element of the creative process in the studio. Singers often search for a long time until they find the right microphone for their voice. For guitars, two or more mikes are the norm. Fifteen or more on a drum set is not uncommon, often very different ones.

 

Trying to ‘deblur’ mikes would ruin that and make little sense.

 

The whole idea rests on a preconceived notion that an accurate representation of reality is possible. There is no reality in recording; it's all part of the game of creating an audible product. It's all aesthetics - in the double sense of the word.

 

There is no requirement to deblur. It would be implemented in software, e.g. HQPlayer or A+, and the decision to apply a deconvolution kernel would be up to the implementation. This is discussed as providing the same supposed benefit as MQA but in an open/nonproprietary fashion.

Custom room treatments for headphone users.

35 minutes ago, bibo01 said:

How is that "aberration" going to be calculated? Every publisher has to stick to the same method.

Would the list of corrections be published by microphone manufacturers and recording studios?

 

The basic technique is to compare a known signal against a recording of a known signal.

 

In astronomy and optical microscopy the analogous concept is the "point spread function": http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/PSFtheory.pdf

 

More important is comparing the known signal against the recorded one: both are converted to the frequency domain, and then a simple division yields the deconvolution kernel. So, for example, a known low-phase-error sine-wave reference signal is the equivalent of an optical point. Same idea as an impulse response; same idea as a room-correction kernel.
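A minimal sketch of that division, assuming a unit impulse as the known reference and a made-up 3-tap blur standing in for the recording chain (not any real device's response):

```python
import numpy as np

n = 1024

# Known reference signal: a unit impulse (the audio analogue of an optical point)
reference = np.zeros(n)
reference[0] = 1.0

# Hypothetical recording chain: a 3-sample delay plus a mild asymmetric blur
chain = np.zeros(n)
chain[3:6] = [0.6, 0.3, 0.1]

# "Recording" of the reference = circular convolution with the chain
recorded = np.real(np.fft.ifft(np.fft.fft(reference) * np.fft.fft(chain)))

# Deconvolution kernel: divide the known spectrum by the recorded spectrum.
# A tiny epsilon guards against division by near-zero bins.
eps = 1e-12
kernel = np.fft.fft(reference) / (np.fft.fft(recorded) + eps)

# Applying the kernel "deblurs" the recording back to the reference
restored = np.real(np.fft.ifft(np.fft.fft(recorded) * kernel))
print(np.max(np.abs(restored - reference)))  # effectively zero: the blur is undone
```

Real measurements add noise, so in practice the division is regularized (Wiener-style) rather than using a bare epsilon.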

 

What is produced is a deconvolution kernel (there are different techniques, including estimation), and folks who have access to the equipment (microphones etc.) can publish deconvolution kernels ... no need for the manufacturer to do it.

 

Is this necessary? It would do everything that MQA claims in terms of "deblurring" because ... that's how we deblur ;) 

 

Now what is super cool is that HQPlayer can run a deconvolution on a music stream in real time!*** That is what makes @Miska's software so cool in my view (and in the SDM domain, no less) ... back when I was doing this in the 1980s, I can assure you we were not doing it in real time.

 

*** To be clear, I don't know the limits of this capability, but @Miska could comment.

6 hours ago, bibo01 said:

 

Isn't this very similar to what a calibration file for measuring mics is?

No, although I don’t know the limits of what mic calibration files are used for. 

 

Deconvolution is classically a frequency-domain processing operation. @Ralf11's reference to computational photography is very apropos.

 

As an example that I'm very familiar with (and that is widely referenced), imagine a 10 µm microsphere imaged using optics. You will see the sphere at the center, surrounded by ripples. Now imagine a 3D stack of images at different focal offsets. The ripple/diffraction pattern will vary from slice to slice.

 

Now imagine an imaged structure, e.g. a chromosome or other intracellular structure. It will be blurry.

 

Take both image sets, along with a model corresponding to the known 10 µm sphere, and transform them into Fourier space (the frequency domain). Divide the model by the microsphere image to derive the deconvolution kernel. Multiply the cellular image by the kernel and transform the result back into the spatial domain.

 

The result will be sharpened. You might, for example, visualize DNA supercoils and other macromolecular structures.

 

This is deblurring.
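The pipeline above can be sketched in miniature with NumPy. Everything here is a toy stand-in: a made-up 5-point blur plays the role of the microscope's PSF, a delta image plays the known sphere model, and random pixels play the specimen.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Hypothetical point-spread function: a small blur with its peak at the origin
psf = np.zeros((n, n))
psf[0, 0] = 0.6
psf[0, 1] = psf[1, 0] = psf[0, -1] = psf[-1, 0] = 0.1

def image(obj):
    """Simulate imaging: circular 2D convolution of the object with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

point_model = np.zeros((n, n)); point_model[0, 0] = 1.0  # known "sphere" model
bead_image = image(point_model)     # what the microscope records for the sphere
specimen = rng.random((n, n))       # unknown structure (the "chromosome")
specimen_image = image(specimen)    # its blurry recorded image

# Divide the model spectrum by the bead-image spectrum -> deconvolution kernel
eps = 1e-12
kernel = np.fft.fft2(point_model) / (np.fft.fft2(bead_image) + eps)

# Multiply the blurry image by the kernel and transform back: deblurred
restored = np.real(np.fft.ifft2(np.fft.fft2(specimen_image) * kernel))
```

With a real (noisy, band-limited) PSF the bare division blows up at weak frequencies, which is why practical packages use regularized or iterative deconvolution; the flow of the steps is the same.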

 

Similar operations can be used for audio in both the spatial & time domains with suitable recordings, but we are limiting ourselves to the time axis for the purpose of this discussion (MQA only claims temporal deblurring).

Custom room treatments for headphone users.

Link to comment
1 hour ago, pkane2001 said:

 

PSF in optics is equivalent to an Impulse Response in audio. One can deconvolve using a properly derived IR.

 

IR can be derived from capturing a Dirac pulse or from a sine frequency sweep. IR contains more than just the frequency correction that normally would be in a mic calibration file. It also captures timing errors, reflections/reverb, and other frequency, amplitude, and timing errors. I call it a fingerprint of the system.

 

Right!
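The sweep-derived IR idea can be sketched like so. Assumptions are labeled: an idealized flat-spectrum excitation stands in for a real sine sweep, and a made-up system (5-sample delay plus a simple 3-tap filter) stands in for the device under test, so the derived IR captures both timing and frequency errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096

# Idealized excitation: unit magnitude at every frequency, random phase
# (a stand-in for a measured sine sweep)
spec = np.ones(n, dtype=complex)
spec[1:n//2] = np.exp(1j * rng.uniform(0, 2 * np.pi, n//2 - 1))
spec[n//2+1:] = np.conj(spec[1:n//2][::-1])   # Hermitian symmetry -> real signal
excitation = np.real(np.fft.ifft(spec))

# Hypothetical system under test: 5-sample delay plus a simple 3-tap filter
true_ir = np.zeros(n)
true_ir[5:8] = [0.5, 0.3, 0.2]

# "Record" the excitation through the system (circular convolution)
recorded = np.real(np.fft.ifft(np.fft.fft(excitation) * np.fft.fft(true_ir)))

# Deconvolve: divide the recorded spectrum by the excitation spectrum
ir = np.real(np.fft.ifft(np.fft.fft(recorded) / np.fft.fft(excitation)))
# ir now reproduces both the delay (timing) and the filter shape (frequency)
```

This is why the IR is a "fingerprint": a plain calibration curve would only give the magnitude response, while the recovered `ir` also pins down where in time the energy arrives.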

 

1 hour ago, pkane2001 said:


As an example of the opposite effect of re-blurring (not de-blurring!) I've captured IR from my speaker system and then applied it to headphone playback through convolution. The result was a much more spacious, reverberant sound that makes my headphones sound a lot more like my speaker system in my listening room.

 

Yes!

 

What is interesting to me about this possibility is not merely temporal deblurring but rather spatial deblurring, i.e. "sharpening" of the soundstage, and transforming between different numbers of inputs and outputs, i.e. multiple microphones, multiple speakers.

 

1 hour ago, pkane2001 said:

 

I would be very surprised if a well constructed mic and ADC system would require something like a deconvolution to 'deblur' it, but I've been wrong before ;) 

 

I think this is what MQA is promising with "temporal deblurring," and yes, it's entirely unclear to me that this would be a benefit in well-constructed systems. I'm not really concerned about microphones (which, as you and others say, can give an artistic effect); rather, if an ADC had high jitter, and thus "widening" or "blurring" of the peaks, this would be a way to sharpen impulses, peaks, etc.

 

Another area, probably the most significant in terms of temporal "blurring," would be the use of certain filters that, as has been mentioned, cause ringing.

 

1 hour ago, pkane2001 said:

 

Then again, a valid point was made earlier in this thread that a performer/artist often picks microphones based on their sound. By deconvolving it, you'd be destroying some of the original intent of that artist.

 

Yes, and many types of distortion are intended: the distortion of an electric guitar's tube amp, for example, is part of the sound. Deconvolution is a tool. The point here is to shed light on techniques that are hardly proprietary to MQA.

1 hour ago, mansr said:

It still doesn't explain how they can "deblur" a mix made from dozens of tracks.

 

a) is there blur?

b) access to the source tracks for deconvolution prior to remastering would be best -- if it were actually necessary, but this isn't the promise of MQA

c) the deconvolution, if needed, could be written to a 24-bit/192 kHz FLAC, for example, which has enough overhead to resolve it -- the benefit of hi-res is that there is enough redundancy of information (overhead above 22 kHz) to enable transforms without loss of real information.

d) I view this as essentially a remastering/DSP operation, and there is really no reason for the DAC to know which corrections have been applied.

3 hours ago, mansr said:

Yes. However, what MQA claim to do is equivalent to correcting a picture that is actually a composite of 100 photos taken with different cameras.

 

Yes ... these techniques are decades old and taught to undergrads these days ...

 

This talk has great photos & diagrams!

https://graphics.stanford.edu/talks/compphot-publictalk-may08.pdf

 

 

4 hours ago, Miska said:

There are a couple of aspects to this...

 

Yes, of course -- it's entirely unclear that "deblurring" is necessary, and particularly in heavily processed productions it's also unclear what this would even mean.

 

My question was whether HQPlayer's FIR filtering capability (used in room correction) could be used in situations where an impulse response can be defined and for which "deblurring" would be meaningful. I'd assume this would work in a single dimension (time), but is there a way to use both channels for, e.g., 2D (spatial) processing?

19 minutes ago, Miska said:

 

For replacing the ADC or SRC impulse response, I think apodizing upsampling filters are a good choice and work well. That applies especially to "1x rates," meaning 44.1k/48k, where the anti-alias filter's effect is typically strongest.

 

Yes, the convolution engine is very generic, and you can do all kinds of 2D/3D things too when you use the Matrix processing feature. People regularly use it to process different kinds of cross-feed, because you can take any source channel, process it through the convolution engine, and mix it to any output channel. At the moment you can have 32 such virtual pipelines, but that is just an arbitrary limit that can be raised if necessary.

 

 

Thanks. The ability to do generic 2D/3D convolution is a feature that folks here don't seem to appreciate. "Temporal deblurring" isn't groundbreaking, and MQA probably doesn't deliver on that promise for the reasons above; even if it does, it's something that could be replicated with well-known and well-understood techniques.
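As a toy illustration of that matrix idea (not HQPlayer's actual engine; the IRs here are made up): each output channel is the sum of per-input convolutions, which is all a cross-feed matrix is.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Stereo input: two arbitrary signals
left = rng.standard_normal(n)
right = rng.standard_normal(n)

# Hypothetical IRs: direct path and a delayed, attenuated cross path
direct = np.zeros(32); direct[0] = 1.0
cross = np.zeros(32); cross[12] = 0.3   # ~12-sample interaural-style delay

# 2x2 convolution matrix: out_ch = sum over in_ch of conv(in, ir[in][out])
matrix = [[direct, cross],   # left  input -> (left out, right out)
          [cross, direct]]   # right input -> (left out, right out)

inputs = [left, right]
outputs = []
for out_ch in range(2):
    acc = np.zeros(n + 31)   # length of a full linear convolution
    for in_ch in range(2):
        acc += np.convolve(inputs[in_ch], matrix[in_ch][out_ch])
    outputs.append(acc)

out_left, out_right = outputs
```

Scale the IRs up to measured room responses and the channel count up to N inputs and M outputs, and you have the general matrix-convolution picture being described.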

 

This capability enables a significant degree of processing in the native DSD domain. You haven't enabled HQPlayer to save output to a file (one might want to process and save tracks for later recombination, i.e. mastering/remastering), but that could be done with a file-output ALSA driver ;) 

 

More interesting to me, particularly given the focus here on "soundstage" and "imaging" would be spatial sharpening deconvolutions such that instruments could be precisely focused within an arbitrarily sized soundstage.

 

Of course this would eat up a ton of CPU cycles, but we are yet again entering new performance levels of CPU/CUDA processing.

20 minutes ago, fas42 said:

 

Sorry to be OT, but there is a type of 'madness' involved here, trying to solve a 'problem' by throwing technology and mathematical concepts at it - IMO this will never work; it's the equivalent of that fad for placing recordings in certain environments, by playing with certain aspects of the data: concert hall, jazz club, etc, etc. It's a cute trick, but wears off mighty quick ...

If you were sorry to be off topic, you wouldn't comment where you don't appear to have experience. The 'madness' involved is your compulsion to insert your vacuous comments into every discussion.

 

Methinks you've inhaled way too many lead fumes in your resoldering.

31 minutes ago, fas42 said:

 

I have practical experience resolving "blurred" playback - but because that hasn't got heavy-duty theoretical underpinning, it can't be very useful ... right?

 

You have had many opportunities to explain your approach. The last 2900 or so have been somewhat redundant, but if you have new things to offer, you may attach them to one of your threads or open a new one.

 

This thread has nothing to do with your approach. I might have placed this in the Software, DSP, Room Correction subforum, but given the recent discussion about MQA, I wanted to explain an approach that would be analogous to MQA's deblurring yet in an open fashion.

 

This is not theoretical. I and many many others have decades of experience using these techniques in very practical applications.

 

These are specifically computational techniques, not electrical techniques. In Computer Audio the techniques are complementary, not in opposition.

