
'FeralA' decoder -- free-to-use



5 hours ago, John Dyson said:

Thanks everyone.   I had a bad day yesterday and was greatly frustrated.   I didn't even know that I had disabled the EQ that would be a 2/3 fix for the problem.   (The decoder has LOTS of stuff ALREADY built-in for contingencies; a lot of times 'adding' a feature is just enabling it -- the decoder might already have the capability built-in.)

 

I had EVERY intention of a release today, but every time I start working on it, I fall asleep.   I am too tired/sleepy to do anything useful right now.

 

Trying to send out the release starting now, in this state of fatigue -- that is a plan for failure.

The demos have been running and are just complete; I'll be doing a review of them and checking to make sure that there are no bad surprises.

 

There *WILL* be a release tomorrow.

 

This is REALLY TRICKY when there are no specs!!!   I can 'program' in my sleep, and do EE stuff even more easily than programming.   However, not knowing the specifics of what the SW is supposed to do -- guessing just doesn't work.   Even worse -- 'tweaking' is evil and fraught with mistakes.

 

Thanks for the feedback.

Tomorrow is almost a promise for the release.

 

John

 

 

John, out of curiosity I applied an EQ to JustOneLook-NewDEC to shape the spectrum more like the vinyl version. I like it better this way, but maybe it's just me. If interested, give this parametric EQ a try:

 

Peak  F=60Hz,   Gain=-5dB,    Q=2.0
Peak  F=400Hz,  Gain=-2.5dB,  Q=0.5
Peak  F=2kHz,   Gain=+3dB,    Q=0.6
HS    F=7kHz,   Gain=-12dB
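If you don't have a parametric EQ handy, these four filters can be sketched as standard biquads using the well-known Audio EQ Cookbook (RBJ) formulas. A rough Python sketch -- the 44.1kHz sample rate and the shelf slope S=1.0 are my assumptions, and this only approximates whatever EQ your player applies:

```python
import numpy as np
from scipy.signal import lfilter

FS = 44100  # assumed sample rate

def peaking(f0, gain_db, q, fs=FS):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

def high_shelf(f0, gain_db, fs=FS, S=1.0):
    """RBJ cookbook high-shelf biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = [A * ((A + 1) + (A - 1) * cosw + 2 * sqA * alpha),
         -2 * A * ((A - 1) + (A + 1) * cosw),
         A * ((A + 1) + (A - 1) * cosw - 2 * sqA * alpha)]
    a = [(A + 1) - (A - 1) * cosw + 2 * sqA * alpha,
         2 * ((A - 1) - (A + 1) * cosw),
         (A + 1) - (A - 1) * cosw - 2 * sqA * alpha]
    return np.array(b) / a[0], np.array(a) / a[0]

# The EQ from the post, applied as a cascade:
FILTERS = [peaking(60, -5.0, 2.0),
           peaking(400, -2.5, 0.5),
           peaking(2000, +3.0, 0.6),
           high_shelf(7000, -12.0)]

def apply_eq(x):
    for b, a in FILTERS:
        x = lfilter(b, a, x)
    return x
```

For a peaking biquad built this way, the gain at the center frequency is exactly the requested dB value, which makes it easy to sanity-check.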

 

Green line is the spectrum of the vinyl clip, blue is JustOneLook-NewDEC with the above EQ:

[image]

 

1 hour ago, John Dyson said:

 

I have produced a 'NEW2dec' version, using essentially what I plan to be the release tonight (still an open mind on it, though)...   Just to refresh, I'll provide the pointers to the clips:

 

vinyl:

https://www.dropbox.com/s/wgko1yljytaew3z/JustOneLook-vinyl-snip.flac?dl=0

old NEWdec:

https://www.dropbox.com/s/bqp6y53i2tg3h0e/JustOneLook-NEWdec-snip.flac?dl=0

the NEW2dec:

https://www.dropbox.com/s/cn2i4bw26vyiit6/JustOneLook-NEW2dec-snip.flac?dl=0

 

I ran 'NEW2dec' without hearing the new version until just before uploading.   No tweaks, and the decoder adjustment is based on listening to numerous other recordings, NOW with the +6dB at 80Hz available along with the -12dB @80Hz pre-emphasis and +12dB @80Hz de-emphasis.

Also, the NEW2dec has the more correct HF pre-emphasis, instead of the version that gives a 'halo' effect around the HF details.


Waddya think?

Trying hard at this...  

When I do demos or tests like this, I NEVER 'tweak' to sound better -- this is what I have been planning for the V2.2.3A release tonight.  I can produce more debug versions before the final, but want to keep it really close to what is in the code now.   A LOT of recordings have 'passed the test', but a few are still a little 'strange' (e.g. the old Carpenters 1970 album.)

My mind is open for one more update.

 

I haven't looked at the spectrum in great detail yet -- averaged spectra are problematic with compression/expansion like this.   It all depends, of course, on how much expansion was actually done -- so the averaged spectrum is helpful as a guideline, just not for precision comparison.

 

I'll give my own status report on the spectrum if it has serious problems.   I'd expect that  the bass is a little strong on this version -- but never know until a real measurement, right? :-).

 

John

 

 

The New2Dec sounds slightly better; it's moving in the right direction. Just to be clear: I didn't create the EQ by ear -- it was done to bring the decoded frequency response more in line with what was on the vinyl clip. I agree that an average spectrum isn't the greatest way to do it, but it can reveal systematic frequency errors and so can help pinpoint issues.

 

Here's the comparison between the three clips (red=New2Dec, blue=NewDec, green=vinyl):

[image]

 

What I hear on both NewDec and New2Dec is not enough clarity around the vocals. The voice sounds a bit muffled and closed-in, like she was singing too close to the mic. I think the relative dip between 1kHz and 6kHz in the decode is what's causing this, and New2Dec didn't really change this part.

 

2 hours ago, KSTR said:
  • When you revisit the project, make a sane, first-things-first prioritisation. From my point of view, wrt the code itself, the top topic is the house EQ. This EQ spoils everything, really, I mean it. Neither you nor anybody else will ever be able to do any meaningful comparisons as long as the main and very dominant effect is that EQ, spanning some 12(!!!) dB (and even if it were only +-1dB, that would still be too much). Comparing/judging the low-level dynamics requires the same large-signal frequency response to +-0.1dB and +-0.1dB of level matching, no way around it. We know this EQ is not helping, you know it is not helping. It is completely superfluous. If some post-EQ is deemed necessary to polish up the result (which may or may not be the case), this can always be done in an extra pass, with different means.

 

Klaus, not sure if you've tried it, but this is what I do to de-EQ John's files. I simply use DeltaWave to match the RAW and Decoded files, and use non-linear level EQ correction only (uncheck phase). The result undoes the large-scale EQ in the processed files. You can then play or export the corrected files. Here's an example:
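DeltaWave does this in its GUI, but the underlying idea -- scale the decoded file's spectrum so its smoothed magnitude matches the RAW file's, leaving phase alone -- can be approximated in a few lines. A rough numpy/scipy sketch, assuming both files are already time-aligned, mono, and at the same sample rate (this is my approximation of the idea, not DeltaWave's actual algorithm):

```python
import numpy as np
from scipy.signal import welch

def de_eq(dec, raw, fs, nperseg=4096):
    """Scale dec so its smoothed magnitude spectrum matches raw's.

    Phase is untouched, i.e. a zero-phase magnitude-only correction
    (the equivalent of unchecking 'phase' in the level-EQ match)."""
    # Welch-averaged power spectra act as the smoothed magnitude estimates
    f, p_dec = welch(dec, fs, nperseg=nperseg)
    _, p_raw = welch(raw, fs, nperseg=nperseg)
    gain = np.sqrt(p_raw / np.maximum(p_dec, 1e-30))  # per-band magnitude ratio

    # Apply the smoothed EQ curve in the full-length FFT domain
    spec = np.fft.rfft(dec)
    freqs = np.fft.rfftfreq(len(dec), 1 / fs)
    spec *= np.interp(freqs, f, gain)
    return np.fft.irfft(spec, len(dec))
```

Because only the magnitude ratio is applied, any minimum-phase behaviour in the original EQ is deliberately not undone -- which is exactly the limitation discussed further down in the thread.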

 

Uncorrected EQ (RAW vs. DEC):

[image]

 

After DeltaWave frequency correction:

[image]

 

But, I'm thinking this will make the decoded files sound a lot more like their RAW versions. At least it does to me when listening to these. 

32 minutes ago, KSTR said:

Hi Paul,

I thought about it but haven't tried. I've used my own de-embedding, which, as you might know, suffers in that it applies the correcting IR to the original, rather than applying 1/IR to the decoding. But it compensates 100% of all mag and phase errors to full precision (good enough to do a "simple load" only in DW and get the best null possible).

The overall EQ is still present but now it's the same for both files. And then I would apply a simple min-phase correction, a curve-fit EQ filter parameter set obtained from REW and transformed to an IR in RePhase. For quick checks, I only apply the latter... which leaves 1dB of wiggle room in the difference and, more importantly, does not correct those linear-phase level jumps at 3kHz and 9kHz.

 

With level EQ only applied by DW, that would mean applying a linear-phase correction(?), which isn't fully correct. Most of John's EQ is minimum phase, except for the step changes mentioned. Probably not a big deal, as the overall curvature is low-Q, reducing the chance of nasty pre-ringing.

 

Yes, linear phase correction in DW if phase is not selected, so you're right,  if minimum phase filters were used this will not undo them fully. Engaging phase correction will, though, but I think this will just negate all the main effects of the decoder :)

 

I did listen to a couple of tracks, RAW, Decoded, and de-EQed by DW. I think the latest decoder does a better job with frequency balance, although to me, there's an overemphasis on upper-mid frequencies. Vocals sound brighter than they should, a bit more sibilance. Not something unexpected if you look at the frequency bump from 2 to 7kHz, right where the ear is most sensitive. This gets fixed with the de-EQ, but, like I said, I can hardly tell the difference between the RAW and Decoded with de-EQ.

1 hour ago, John Dyson said:

 

Before reading this -- if you can resolve the needed diffs into 1st-order EQ (the only kind of EQ normally used), please give me the information.   I'll immediately implement it, and we can hear how it sounds!!!   Of course, we also need to know which EQ to do BEFORE the decoding and which AFTER the decoding.   Remember, all I did was 1st-order EQ above 1kHz.   Since we are working in 1st-order EQ space, it should be easy for your program to resolve it into 1st-order EQ.   If not, then there is something more complicated going on.   Again, 2nd-order EQ is used in one architectural place, with a very specific and defendable purpose.

 

When doing comparisons, it isn't fair to compare against material that is already multi-band compressed.   Just like when I was working on DolbyA, you don't compare the decoder output and input for listenability -- you compare through the entire encode/decode cycle.   Once we have an original copy and the FA copy, then we can discuss frequency response errors in the calculated decoder output.   I don't have a good 'original' and FA copy combination to begin with.   If I had a good example of those two items, from almost any recording, then the decoder could be made very accurate.   As it is now, comparing FA with the decoder output isn't a lot different than comparing DolbyA input and output, or DBX input and output.   I do agree that the output of the FA decoder should sound SIMILAR to the FA RAW.   However, the FA RAW should also sound similar to the original recording, and the encoding process is probably not flat either.


We need to compare pre-encoded copy with decoded copy, I really don't care about the FA copy for comparison about technical accuracy.   Once we have pre-encoded, FA generated from pre-encoded, the decoder source, and myself to fix it -- within hours, the decoded copy and the input copy will be essentially exactly the same.

 

Since we have no reference specs, schematics, reference recordings, test recordings, etc., we are stuck using reverse engineering, listening for response balances, and hearing 'tells'.   'Tells' really do work, but require a lot of experience to hear and interpret.   This is a lot like hearing dynamics distortions -- most people will only assert that it doesn't sound correct.   I, instead, will be able to describe the problem.

 

I don't think that the static (I mean static) frequency response makes any sense on a multi-band dynamics processor with a lot of necessary EQ around it.

There are some situations where the frequency response should be flat, but the claim that there must be a flat response has no supporting facts.

 

There are most likely instances where the decoder should be flat, and maybe it really should be, but that seems unlikely after listening to how the FA signal sounds.   The HF (esp. over 9kHz) sounds smushed.

 

 

 

John, you keep saying static EQ doesn't make sense with dynamic processing, which is true. Nevertheless, the decoder produces a consistent static frequency change across all the processed tracks. This can't be the effect of dynamic processing, since that would change based on the signal and wouldn't be the same across two different recordings. For whatever reason, there's a static frequency curve that appears to be applied to all decoded content. That's what Klaus has been reporting, and that's what I've been saying to you for a while, even in our previous private conversations. Although the curve has changed over time and gotten better, it's still very distracting.

 

Now, if you tell me that DolbyA requires such a static EQ, I could understand it, but I'd like to see this documented somewhere -- the static correction that's applied now seems to unbalance overall frequency response.

  • 1 year later...
1 minute ago, John Dyson said:

Proof must then be by hearing

 

Proof by hearing is where you're going wrong, IMHO. There's no such thing for FA unless you conduct proper, large scale controlled research. Hearing is fallible, as you know really well. Asking a few others to tell you their impressions is not a solution. Random opinions are not proof or even evidence of anything approaching a conspiracy.

 

13 minutes ago, John Dyson said:

Then let the individual decide.   Isn't that what all of the -130dB type thing is all about?   It is just that FA isn't snake oil.

 

BTW, rumor is, studies were done, and the consensus was that FA was preferred.

 

I'm not sure how else to say it, John. Your claims of a wide music industry FA conspiracy are unfounded. Preference and opinion are not enough to justify any such claims. Without evidence or proof, your decoder is just another DSP+EQ process that may or may not sound better to some on some material, and worse to others. Especially since there is no deterministic way to distinguish FA-encoded content from non-encoded.

12 minutes ago, John Dyson said:

I really think that it might not be a conspiracy...   It is very possibly a lack of knowledge.   DSP-EQ cannot do the cleanup that the decoder can do, unless you want to do an expander and descrambler also.   You can probably get by with a descrambler (I'll show you the section of code -- do it for yourself, it is essentially a weird-ass EQ.)...   It just won't do as much NR.

 

As I wrote above: rumor is: FA was statistically preferred by the general public over raw.

My suspicion is that, especially with headphones, decoded will sound better to most audiophiles than FA.   That is different than the general public.

 

What does this mean?  YMMV

 

 

OK, let's assume that your descrambler/decoder can produce something that is preferred by some audiophiles, as per your suspicion. Yet earlier you claimed to have "proof". Is your proof based on these rumors and suspicions, or something else?

 

2 hours ago, John Dyson said:

I do have proof, and am truly an expert in areas much wider than audio

 

56 minutes ago, John Dyson said:

I just had a thought about something that can be shown to skeptics...   A spectrum diagram, looking at the noise reduction...   I can provide a nice, intense ABBA Gold decode, before and after.   I haven't even seen the original, so I need to make sure that there is noise there to begin with, but since it needs to be DolbyA decoded, I suspect that there is a noticeable amount.

 

I'll see what I can do...   Absolutely no 'special cases', but I do need to show a recording that has noticeable noise to begin with.   Then, listening to the result, no notable loss of PEAK treble.   Of course, the decoder eats low-level hiss.

 

Might be fun.   Give me a couple of hours, I am a little slow...   Right now my BP is 160/40 with a heart rate of 120, so I am a little weak right now, but I'll see what I can do.   (The body parameters have nothing to do with what is going on.)   BTW, my normal heart rate is 50-55, going up to 80.   Ran out of some medicine, but I've been pressure tested to 250/150 -- so all is good, just uncomfortable.

 

John

 

 

Take care of yourself, John! There’s no reason to rush or to get upset over this, it’s only software. I know how hard you’re trying to make this work. My point was not to attack you or the decoder. All I really wanted to do was to explain some of the reasons for the challenges you are facing when trying to convince others.

54 minutes ago, John Dyson said:

Okay, keeping me honest, I haven't done the spectrograms yet.   I haven't measured anything, other than try to get a reasonable level match for the whole recording -- then create snippet.

This is the first 55 secs of 'Three to Get Ready' from Take 5.

 

Original:

https://www.dropbox.com/s/4lwjzmghfgbq2bi/04-Three To Get Ready-RAW.flac?dl=0

'PROCESSED':

https://www.dropbox.com/s/88we9dfuaw0j7jr/04-Three To Get Ready.wav-DEC.flac?dl=0

 

I know which one I prefer, and it has NOTHING to do with 'processing' the recordings.   I normally listen with headphones for precise, repeatable imaging (as long as they are worn correctly), so with speakers, YMMV.   I don't judge whichever anyone chooses.   There are advantages to either, and I know it.

 

Have fun!!!

Also, I'll do some more stuff in an hour or so -- BP much better, but wanna keep it that way!!!  Maybe I was upset and didn't know it?

PS: I think it still has a minor midrange EQ problem -- wrong MF->HF EQ freq...  Been trying to choose.

 

John

 

 

The processed version has a large HF cut starting around 8kHz applied to it. Is this intentional? There's a noticeable lower-bass frequency reduction as well. The minor midrange EQ problem doesn't seem noticeable by comparison:

 

[image]

 

Timing/phase difference is benign:

[image]

 

 

 

1 minute ago, John Dyson said:

 

I was afraid of HF average measurements being done.   Over a time average, the HF energy will decrease, especially for the hiss.   Other parts are also HF compressed, with VERY fast compression.

 

The peaks should be approx the same +- a dB or so.

 

We are dealing with primarily HF compressed material, and that energy *over time* will tend to be greater than uncompressed.

 

There MIGHT be an actual rolloff, but average spectrum measurements can't show it.   YOU ARE GOOD!!!   GOT ME -- glad I was prepared :-).

 

John

 

 


 

 

I also looked along the whole recording using SFFT (short-time FFT) instead of an average. The HF roll-off is present throughout the recording. If you're saying the original had too much energy above 8k,  I can't accept that such a large adjustment is required to compensate. -5dB of adjustment at 10k is extreme, and -10dB @ 11k is huge.

 

32 minutes ago, John Dyson said:

Yes, but SFFT still does a time average; even a 128-sample Hann-windowed moving average can smooth out HF peaks.   A lot of the signal above 12kHz on almost any recording drops to near nothing; on this recording it is mostly buried in noise, esp. when using super-high-speed compression that conforms intimately to the envelope (R. Dolby genius).   Since the signal is near the level of the noise, the unique capabilities of the FA decoder design will do its best to pick out the signal (with low energy) and diminish the noise.   In this case, a lot of the actual garbage energy is removed.   This 'garbage' wouldn't be so strong without the FA encoding/associated compression.   The hiss on the original early-1960s recording would be more similar to what you hear on the decoded copy.   Tape machines didn't hiss that much -- well, maybe in the '50s.   Even then, video tape was real in approx 1955/1956, and that has one hell of a bandwidth in comparison, with SNR in the 40s.   Audio is much easier.

 

BTW: I have seen the FA decoder nuke 30dB of noise without affecting the actual, desired signal.   It is theoretically capable of -75dB on hiss, but that only really happens on fadeouts.   There is also NO NR in the 150Hz to approx 1kHz range.   There is SOME NR in the 1k to 3kHz range, and then it progresses quickly.   (I don't mean never, but not substantially.)   You won't see it with the SFFT until fadeout.

 

The human ear is not a waveform peak detector. We do average signal energy over a short period of time. A sharp transient might be processed differently, but that's not generally what we listen to when listening to music. Besides, you can listen to the difference between the original and decoded tracks and hear that it consists almost entirely of those removed frequencies above 8k. I don't doubt that the original recording had some noise above 10k. But removing so much of the HF also removed the ambiance and the HF extension of the instruments.

 

Please show an example of a 30dB reduction in noise that doesn't affect the music, and how you're determining this.

 

Btw, I don't normally use Hann, but have a choice of about 20 different windows. My go-to window is Kaiser.

8 minutes ago, John Dyson said:

Kaiser has more averaging than Hann, unless you use a Kaiser with a small shape parameter.   At that point it is approx. the same as Hann for signal averaging.   Higher shape parameters are worse.   Kaiser is my specialty -- I've got a wonderful paper about 100 windows, free to distribute -- one of the originals.   I can send it to you if you want.   Yea, Kaiser can do special things if you know what you want to do and calculate/determine the correct window parameter...   Good stuff...

 

Yes, the human ear is NOT a waveform peak detector, but over an effective 256-sample-wide averaging region at 44.1kHz, there is enough energy that HF peaks can be significantly diminished.   Are you saying that a brush over a cymbal doesn't produce a constituency of a lot of narrow energy pulses -- come on now :-).

 

Yes, I have seen it -- while distracted on other details (I was astonished.)   Next time I see it, I'll let you know -- really, not avoiding the question, but there is a huge amount of water under the bridge, and the specific situation probably happened while thinking deeply about other matters.  I WILL LOOK AROUND AND TELL YOU.   It was probably on something like the 1970 Carpenters album or somesuch -- something with hiss, but not too much.   Maybe even Herb Alpert...  Gotta check.

 

John

 

Here's the problem, John. You still have not demonstrated that any recording is FA-processed or that your decoder is needed to decode it other than by having someone listen to it and say "this one sounds better". That's really far from proof of any kind, this is just individual preference and hearing ability. It's very easy to see why someone will hear differences simply due to the frequency balance change (yes, on the average). Try a 15-20 year old listener to see just how much of the high frequencies they'll miss in your decoding. Your ears (and mine) are not that good anymore, although I can clearly hear the difference between the two recordings. To me, it sounds like a loss of ambiance and instrument extension. And yes, a little hiss which doesn't bother me in the original.

 

I chose a few windows that I thought worked best for the type of measurements I needed. To help me and others, I built a tool that lets me preview and analyze any window. Many different varieties of Kaiser are already built-in. Here's a sample of what's there, although I've tested and analyzed many, many more:
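The "amount of averaging" a window does can be put on a number: its equivalent noise bandwidth (ENBW), in FFT bins. A quick scipy sketch (my own calculation, not code from either of our tools) that shows Hann at about 1.5 bins while typical Kaiser shape parameters land higher, matching the point about Kaiser averaging more unless beta is small:

```python
import numpy as np
from scipy.signal.windows import hann, kaiser

def enbw_bins(w):
    """Equivalent noise bandwidth of a window, in FFT bins."""
    return len(w) * np.sum(w**2) / np.sum(w)**2

N = 4096
print(f"rect (kaiser beta=0): {enbw_bins(kaiser(N, 0.0)):.3f}")
print(f"hann:                 {enbw_bins(hann(N)):.3f}")
for beta in (5.0, 8.6, 12.0):
    print(f"kaiser beta={beta:4.1f}:     {enbw_bins(kaiser(N, beta)):.3f}")
```

A Kaiser with beta near zero collapses to a rectangular window (ENBW = 1 bin, least averaging), and the ENBW grows monotonically with beta, passing Hann's 1.5 bins along the way.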

 

[image]

 

 

 

3 hours ago, John Dyson said:

I think that I like @pkane2001 and he and I should get together some time, if I ever travel again.   There is nothing but frustration -- not with him, but because I cannot figure out a way to PROVE the decoder...   Nothing more than that.

 

Would love to see that, John. But I hope you understand that the skepticism isn't personal. I (and others) need a little more evidence that FA is really pervasive and needs to be fixed than just the rumor that someone liked the output of the decoder.

 

By the way, here's the impulse response of your decoder:

 

[image]

 

3 hours ago, John Dyson said:

Yea, lots of time delays & phase shifts, and 1st-, 2nd-order & FIR filters.   It is surprising that your display shows anything that seems at all like an impulse.   (It isn't really surprising -- just, unfortunately, proper operation.)

 

Just as with any multi-band compressor/expander, the idea of 'constant freq response' isn't helpful.   Nowadays there can be 'constant time delay', but the FA scheme wasn't designed in the world of linear-phase filtering.

 

VERY VERY VERY VERY frustratingly, the only valid measurement for impulse behavior would be through the entire "FA encode -> FA decode" process.   This unfortunate fact does not help our situation.

 

The full system (FA encode -> FA decode) frequency response, phase, and impulse response is important to the user because it truly measures the hopefully correct behavior of the decoder.   The FA recording itself is a travesty, not having much of an effect on the desired transfer functions, unless damaged/incorrect.

 

I do like your presentation, but we need to find a 'pre-FA' master tape, and decode the FA-encoded version of the tape, THEN compare the differences for a quality evaluation.

 

I need to do the original Sheffield Labs Direct-to-Disk vs. FA version vs. decoded FA version again; it is enlightening.   To me, and hopefully others, it is a nail (a small, thin nail, maybe even just a thumbtack) in the coffin of skeptical attitudes, hopefully slightly moving the attitudes towards acceptance.   Not everyone will embrace it fully, but accepting one little nit of existence proof shows that FA decoding isn't specious.   Proof will not likely be possible without FA encoder schematics, a couple of corresponding master tapes vs. FA recordings, or God coming to help.   I cannot do it, but I can show evidence, whether it is technical noisy data or listening comparisons.   Understand that rumor is that studies showed that the general public preferred FA.

 

The Sheffield Labs thing is as close as we positively have right now, and it isn't very good.   However, I suspect, highly suspect, that my archives might have an ABBA equivalent to a master tape corresponding to another release.   Need to find/verify/test, etc. -- so for now, the clean master just doesn't exist yet for our testing.

 

 

Below is my typical babble, trying to explain the current poor state of the art for FA decoder frequency response measurement...   Only read at your own risk.

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

About 1 yr ago, after a lot of experimentation & investigation, I found that the average signal strength comparison between the consumer RAW FA and the decoded material SHOULD be 'flat' except for some minor variations.   Unfortunately, on my chosen recording, the spectrum has a characteristic rolloff, thereby causing the gain control elements (virtual DA units) to continue doing active gain control.   That HF0/HF1 gain control will cause the rolloff shown below.

 

This measurement is made using Anne Murray's 'Shadows in the Moonlight' as FA source material.   Sine waves don't work as well.   I have checked about 5-10 candidate recordings, and found that 'Shadows' has the 'flattest' average energy spectrum of any of the candidate tapes so far.   'Shadows' does NOT have a flat spectrum, just a relatively flatter energy spectrum vs. the other *tested* recordings.

 

Since 'Shadows' is loud and is substantially, but artistically, compressed, it tends to push all of the < 9kHz region gain control elements to maximum most of the time, except fade-in/fade-out.   The descrambler does not modify average energy vs. time, so the descrambler doesn't substantially affect the measurements.   (The descrambler contains the HF output EQ, but there is more than frequency response EQ in the descrambler.)   Anne Murray's recording comes close to that goal of inactive gain control elements.   This is important: even though the spectrum of 'Shadows' is NOT flat, we can zero out the error by calculating differences.   *The measurement for 'Shadows' is done from 2 secs to 12 secs.

 

So, we use the 'Shadows' recording as a somewhat temporally consistent driving source signal (that is, in sloppy DSP talk, it is more 'stationary-like' than many recordings).   The relatively high average level, and plenty of HF & LF density, gives us enough signal to measure through the spectrum.   The response is measured like this:

 

gaindB = outputdB - inputdB, averaged over frequency bands of your choosing,

Using EQ with crafted characteristics gives more useful results.
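In other words, the measurement reduces to per-band energy differences between the decoder's input and output. A minimal numpy sketch of that calculation (the single-FFT approach and Hann analysis window here are illustrative choices, not necessarily what my tool actually does):

```python
import numpy as np

def band_level_db(x, fs, f_lo, f_hi):
    """Average power of x in [f_lo, f_hi), in dB (relative units)."""
    spec = np.fft.rfft(x * np.hanning(len(x)))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10 * np.log10(np.mean(np.abs(spec[band]) ** 2) + 1e-30)

def band_gain_db(inp, out, fs, f_lo, f_hi):
    """gaindB = outputdB - inputdB for one frequency band."""
    return band_level_db(out, fs, f_lo, f_hi) - band_level_db(inp, fs, f_lo, f_hi)

def band_report(inp, out, fs, edges):
    """Gain per band for a list of band edges, e.g. 1kHz-wide bands."""
    return {(lo, hi): round(band_gain_db(inp, out, fs, lo, hi), 2)
            for lo, hi in zip(edges[:-1], edges[1:])}
```

Because the same window is applied to both signals, window leakage largely cancels in the difference, which is the whole point of reporting diffs rather than absolute levels.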

 

===========================================================

Here are the *actual* response results, decoder input -> decoder output, from 'Shadows in the Moonlight'.   The decrease >9kHz comes from the >9kHz band still doing active gain control, rolling off the HF gain.   The little wobbles happen because of the actually non-stationary behavior of the recording.

 

This measurement run was done during the writing of this message.   The behavior of lower MF and LF is similar to this 1kHz->20kHz run.

Note the correlation between the decreasing gain at higher frequencies and the input signal level decreasing vs. freq.   This is where the gain control has been active on the recording.

Lower levels will result in lower gain, thereby causing expansion.   *The measurement is NOT totally accurate, but does give good guidance.   EQ can only do so much, esp. when following the rules, so if the response here is 'flat' (or relatively so), it is likely that the program is actually *more flat*.

 

Even though this is ugly, it has been very useful.   If we could fashion a 'pretty' version of this data, and find material that doesn't encourage the gain control to create the gain rolloff, then we would have a close-to-correct response measurement for 'FA->DEC', but not ENC->DEC, which is what we REALLY want.   (That is, ENC=master tape, DEC=decoded version.)

 

Important to understand: the spectral energy in FA recordings is not even close to the same as the original master.

 

Band (Hz)        raw (dB)   dec (dB)   diff (dB)
1000-1500        -42.61     -45.68     -3.07
1500-3000        -41.71     -44.67     -2.96
1000-3000        -37.81     -40.83     -3.02
1000-1100        -46.24     -49.25     -3.01
1100-1200        -46.89     -49.95     -3.06
1200-1300        -47.35     -50.44     -3.09
1300-1400        -47.66     -50.76     -3.10
1400-1500        -47.91     -51.00     -3.09
1500-1600        -48.14     -51.22     -3.08
1600-1700        -48.40     -51.44     -3.04
1700-1800        -48.68     -51.69     -3.01
1800-1900        -48.98     -51.97     -2.99
1900-2000        -49.30     -52.26     -2.96
2000-2100        -49.62     -52.56     -2.94
2100-2200        -49.95     -52.88     -2.93
2200-2300        -50.28     -53.19     -2.91
2300-2400        -50.60     -53.51     -2.91
2400-2500        -50.91     -53.81     -2.90
2500-2600        -51.21     -54.11     -2.90
2600-2700        -51.50     -54.39     -2.89
2700-2800        -51.76     -54.67     -2.91
2800-2900        -52.01     -54.92     -2.91
2900-3000        -52.25     -55.17     -2.92
3000-4000        -49.18     -52.19     -3.01
4000-5000        -50.66     -53.86     -3.20
5000-6000        -51.13     -54.27     -3.14
6000-7000        -51.56     -54.59     -3.03
7000-8000        -52.39     -55.47     -3.08
8000-9000        -53.63     -56.95     -3.32
9000-10000       -55.14     -58.88     -3.74
10000-11000      -56.80     -61.09     -4.29
11000-12000      -58.52     -63.48     -4.96
12000-13000      -60.24     -65.95     -5.71
13000-14000      -61.94     -68.45     -6.51
14000-15000      -63.59     -70.94     -7.35
15000-16000      -65.22     -73.42     -8.20
16000-17000      -66.82     -75.88     -9.06
17000-18000      -68.41     -78.31     -9.90
18000-19000      -70.01     -80.72     -10.71

 

Hi John,

 

The impulse response is not terrible at all and appears to resemble a linear-phase low-pass filter.   I posted the phase response previously; it is not that far from flat below 10k or so.   I still feel the frequency response is what defines the bulk of the audible changes produced by the decoder.   I see no real evidence of any other effects that come close to the impact that a 5-10dB drop at 10kHz would have on music, especially music full of percussion instruments with significant energy extending well beyond the audible range.   If you think of a way to test your idea of peak decompression occurring on a short interval of a few milliseconds, let me know.   As of now, I don't know how to verify this.   Here's the original vs. decoded waveform (original is blue):

 

[image]

 

Zooming in around the 36.7-second mark to the individual sample level, you can see the major difference is the smoothing caused by the low-pass filtering process. I don't see any decompression effects, unless they are very subtle:

 

[image]

 

 

