
'FeralA' decoder -- free-to-use



9 hours ago, John Dyson said:

This source code snippet is as promised.   It is the gain-calculation loop, copied out of the next release.   The decoder (FA and DA) is now very, very correct.  This disclosed code is purely DA-level, so it has been solid for a long time -- it is used internally after each FA EQ stage.  The only relatively recent changes are some of the parameters for the nonlinear 'diode' exponential curves.  I am never 100% satisfied, and during the short times my hearing is working well, the values are double-checked; they have recently only been tweaked within very narrow bounds (on the order of 1%.)

 

Even though this is a critical part of the code, it is only about 700 lines out of the 30000 lines (yes, I just measured them).  There is a lot more than just decoding DA: the program also needs command-line handling, FA equalization, .wav file I/O & metadata, bandsplitting, etc.   There is also a vast library of FIR and IIR filter support (dynamically based on the current running sample rate), and massive transcendental and vector support.

 

This is only to show the basic algorithm (for those expert in reading C++/SIMD code), and there is a lot of subroutine support needed (e.g. dB2lin, vector exponential code, defining the vectors themselves, etc.)   The algorithm could be rewritten using normal scalar variables, but it would be MUCH slower.

 

If a decoder were written using this routine itself, with minimal I/O and bandsplitting code, it could be perhaps 2X faster than --normal mode on my decoder.

 

(I am still testing the FA decoder -- my testing is very aggressive, and I know what it should/should not sound like.  I am not claiming that my test results make for good listening; they are intended to verify correctness.   'Good listening' requires 'good mastering' and 'good decoding choices', which I am not good at. :-))

 

I am verifying almost every recording that I have previously used for testing.  So far, 100% success (other than some botched decoding parameters -- as usual.)

 

John

 

decoderloopsrc.zip

 

For those poor souls trying to read the source code:

 

I just realized that the code that I posted would be imponderable.  When I am done with the current work, I'll spend some time and add in a detector and the needed HF0/HF1 code...   I'll do everything except the band-pass filters and the .wav file reader/writer.   I'd expect that most plug-in developers could figure out what is going on once I add in the detector and HF0/HF1 subtraction, and show the gain-control operation.

 

For those actually reading the code:

 

The basic concept is that the input signal level is the argument to the 'dbloop.next' subroutine call.  The variable 'LINrmsin' is actually an 8-wide vector of the signal levels, as described in the source (LFA, MFA, HF0A, HF1A, LFB, MFB, HF0B, HF1B).

The signal levels given by 'LINrmsin' are derived (diode-detected or Hilbert-detected) from exactly the same signal that is multiplied by the output gain (pgain) below.  The output of the dbloop.next(LINrmsin) call is the 'loopstate' structure, which contains various versions of the gain -- some are legacy.  Some of the gains are used for implementing the HF0/HF1 separation, and the main output, as mentioned above, is 'pgain'.  This 'pgain' value is the actual value that needs to be multiplied by the signal to get the output signal...  This loop actually makes DA decoding VERY SIMPLE in comparison with starting from scratch!!!

 

Repeating myself, because it is important to understand the code: the pgain output is in the loopstate return value from the dbloop.next(LINrmsin) subroutine call, where those are the gains for each band, with the same vector shape as the input.   That output gain is exactly the number that you want to multiply by the signal to get the output (decoded) signal.

 

The big gotcha is that IDEALLY there is a delay that should be compensated for.  The variable (in the loopstate class) 'needsdelay' is the number of cycles that the audio signal should be delayed before being multiplied by the calculated gain.   This delay also creates a delay in the audio file output, and it takes all kinds of weird little tricks to make sure that the input and output line up perfectly -- but it should be fairly simple when implementing a simple decoder.

 

Even though I realize (after thinking about it) that the code is a little 'dense', the bottom line is that you have an audio signal (not really referred to in the source), the signal level derived from the audio signal (the input to dbloop.next(LINrmsin)), and the actual gain to use (loopstate.pgain).   Loopstate is a class (structure) that contains the interesting state and output values from the .next call in the dbloop class.

 

ADDITIONAL NOTE:  I generally use an idiom where the main routines for filters or audio processing have a class member function named 'next'.   You call 'next' with the input to a filter or audio process, and the return value is all of the good stuff (output) that has been calculated.

 

John

 


Progress report -- and including everyone on the PM groups.   I am SOOO focused on the cleanup and testing that I am not allowing myself any distractions.  I am doing test decoding for quality tests ONLY.   Also, I have been reviewing and finishing the SMALLEST things  --  the intent is that the 'drift' in the design has now stopped.   With the latest versions of the 'FA' initiators being 'perfect', I think that the reverse engineering is complete.

 

* The attached working copy of the simple usage manual shows minor changes from the previous release.  I added some usage hints, and will add some more as my testing reveals more usage patterns.   There will also be a man page by the time of the release.  I am offering the simple usage manual as a status report, and showing that there is little change from the previous release.

 

Even though this program is 'command line', and still clunky to use, it really has needed a higher-level quality/usability cleanup after these years of design drift.   Maybe after this basic platform is complete, I can work more on user-interface matters.  I have been planning a GUI interface mechanism for the last few years -- and my guess is that is where development moves after this approx. Friday release.

 

I am NOT intending to be 'rude' by not replying to comments (I am NOT reading anything right now); I am simply very focused.   This is an attempt to avoid wasting anyone else's time.   My ham-handed decoding attempts are NOT helping the project -- the decoder needs to be in the hands of people who can actually use the program.

 

The real reason for my decoding attempts has been finding bugs in the decoder, but communications about the specific problems have been unclear.  It is totally useless for only one person (ME) to be able to use the decoder, and the unusable set of commands hasn't made it easy for other people to use it.

 

This project effort is for everyone who cannot stand the HORRID FA sound that has been pawned off as 'digital music' for the last 30+ yrs.  A lot of people don't care, but a few of us do care about the bad sound.

 

Give me a few more days -- I am working on serious long-term quality issues, and doing massive testing.  The changes aren't major, but I want to make sure that my tendency for 'tweaking' is totally satisfied.  I will not be totally finished with the project; it is simply that this phase must be completed...   It will be a short number of days, not even a week.

 

I cannot imagine that the clean-up will take past Friday early evening.  There are NO major, substantive changes; all of the fixes are minor things like changing some tunables from 0.934 to 0.935.   All commands are currently frozen, and the decodes done two weeks ago will still work now.

 

John

 

Usage-V1.6.0.pdf


The DHNRDS FA is delayed ONE more day (I hope just one.)  It isn't a bugfix per se; it is more about determining the correct usage.  The FA decoding (and improved DA decoding) is as close to perfect as it can practically be.   Imagine keeping 9 stages of dynamic-range expander maintaining an audibly flat response, while the gains flop all over the place.  It really IS good.   The DA decoder is so good with the dynamic shaping that I am considering (just considering) that the anti-MD code might not normally be needed, even in serious high-quality mastering situations.

 

The bad news -- as the author of the code, I am still trying to figure out how best to use it, and how to explain using it.  There are some usage nits that really upset me, and some very last-minute tweaks (in the last week) where some calibrations got unintentionally shifted, and I need to do more testing to verify correctness.

 

This release is being taken VERY seriously -- in fact, 'deadly' seriously.   NO MORE WASTING PEOPLE'S TIME!!!

I am expecting to do the release at/before +31 hrs from now.  We are having some birthday celebrations (mine), so there can be some unplanned-for delays, but given no major interruptions, my estimate is 21:00 EDT USA time tomorrow (29 Aug).  I will update 6 hrs ahead of time if there is another delay, but I hope not.  I want to deliver this weekend.  I will accept no compromises in quality -- so a delay is a better tradeoff than quality problems (at least in my opinion.)

 

John

 


HAPPY BIRTHDAY !

Best to resume this when 100% sober again, though. 😉

 



Thanks for the birthday wishes!

Btw -- the reason for the delay is that I decided that the commands are too complicated and 'too many words'.   The decoding sequence (unless vetoed by someone) will look something like this:

 

--tone=-44 --fcx="ddD"

 

instead of:

--tone=-44 --fcd --next --fcd --tone=-34 --next --fcd=G --tone=-24

 

or, worse:

--tone=-44 --fcx="ddD" --tone=-54 --fcx="d"

 

instead of:

--tone=-44 --fcd --next --fcd --tone=-34 --next --fcd=G --tone=-24 --next --fcd --tone=-54

 

I believe that 'thinking' in the shorthand will be much more effective than 'thinking' in terms of the detailed command.  I found keeping track of the --tone= values to be almost impossible; it confuses my eyes.   The original, detailed syntax is way too arcane.   The more complete original syntax will definitely be available for more complex decoding sequences, but its use should be occasional at most.

 

This means that more of the command-specification work, especially the mundane settings, will be done automatically.

 

John

 

  • 2 weeks later...

Sorry about the delays -- I am working on the simplified command -- now, most full FA decoding can be done in one short command.   It should be something like --fcs="-44.5,fce".

 

This will do all 4 layers automatically.  It will choose the correct tone values, everything.  Of course, there are other minor aspects like input/output gains, extra optional EQ, input/output files, etc.   However, the above is all that is needed for specifying FA decoding.

 

The decoder has been TOO COMPLICATED, and it has required a lot of work to automate everything.

 

John

 


The new release of the FA decoder is in 2nd-level testing.  I have the release ready, and it is at the usual location, V1.6.1B -- but I am not ready yet to say that it is ready.

 

So far, as far as my goals go -- it is perfect.   The frustrating thing is that people are often very accustomed to the fake FA sound.   This can sometimes make the properly decoded sound appear a little dead...

 

John

 


Still working on the release and docs.  It is currently in private group testing, but available immediately on request (not recommended yet.)  The new release (V1.6.1C, or maybe V1.6.1D with an already-known fix) should be coming out in just a very few days.  I am really sick of wasting people's time with defects -- so I am being very careful, and open to even significant fixes if we find any bugs.   The decoder is VERY close to as good as possible.

 

I have three examples attached (flac format):   A snippet from an Al Stewart recording.

1)  The Feral CD original. (makes me itch.)

2)  Partially decoded (my preferred for listening.)

3)  Probably fully decoded (dynamics too severe, might need some compression.)

 

These examples move from (1) being nasally congested, boxy, with very, very weak dynamics; to (2) more dynamics, sounding more like a real recording in a studio, still somewhat compressed; to (3) lots of dynamics.  The percussion sounds like the levels associated with a REAL recording.  Admittedly, the response balance on (3) is a little 'off' and does need a little EQ.  (At such an extreme decoding level, the decoder does drift a little -- but the corrective EQ is very feasible.)

 

For producing an 'audiophile' version, IMO #2 comes close.   I get similar improvements even from some MFSL recordings, where they at least try to do a good job.

 

So -- I attach the recordings in the order above...

AlStewart-FACD.flac AlStewart-PartialDecode.flac AlStewart-FullDecode.flac


Heads up on the 'decoded' examples -- I just realized that one of the settings was wrong.  I recently made a similar mistake on an eventually BEAUTIFUL decode of the Bread album.   My machine is 100% occupied right now, but I am hoping to come up with better results in a few hours (probably tomorrow, actually.)

 

I am not a great user of the decoder -- but I am good at doing the software.   The results can sometimes be crazy good.

I don't think that people realize HOW BAD a lot of CDs really are, because they have become so used to that gnarly, nasal, sometimes very gritty 'digital' sound (which is really the compression that the DHNRDS FA gets rid of) -- CDs without proper decoding often sound HORRID.

 

Sometimes, on decoded material, people notice a difference in the midrange.   All I can suggest is to listen to a raw recording -- NOT A CD -- you'll not hear that woody, in-a-box sound from a lot of the digital recordings being foisted on the public.  The decoded results sound closer to a REAL recording.

 

On the other hand -- whoever really likes that 'woody' sound -- so be it.  I am not trying to change anyone's taste in sound quality -- just trying to make an often much-improved alternative available.

 

I am NOT claiming that my decodes are perfect, but the decoder IS capable of perfection, and is very worthwhile.

 

John

 


 

Here is the new version of the 'Partial Decode.'   I named the file "AlStewart-PartialDecode1.flac".

The major difference is a less 'congested' sound.   I originally thought that the 'congestion' came from using the wrong style of stereo image ('classical' vs 'pop'), but that guess was wrong.   I simply added one more layer of decoding, and then the congestion cleaned up.

 

The original decoding control parameters were: (5 layers)

--fcs="3,-44.5,fcc" --next --tone=-54.5 --next

 

The new one for 'PartialDecode1' is: (6 layers)

--fcs="3,-44.5,fcc" --next --tone=-54.5 --next --next

 

The complexity of the command line (using both the new-style command and the fine-grained original commands) comes from the special style of decoding needed for the high-quality Al Stewart recordings, and from a bug in the decoder.  The eventual new command will be (for PartialDecode1):

 

--fcs="3,-44.5,fcc" --fcs="3,-54.5,fcc"

 

The bugfix for the corrected command parser is coming by Sat morning at the latest (assuming no emergencies), but is planned for Fri evening.   I hope to upgrade the manual also -- at that next release, I'll make the new release available to everyone.   The new fixes will include some command-parser corrections, a change to metadata creation (a command that can override some of the original metadata), and a few other super-minor things.   Right now, the sound quality is good enough to avoid touching it at all.

 

So -- the new release (which is the public release) comes at the end of the week.   I can make the current testing version available to individuals now, but I do suggest waiting for the better-debugged release coming up.

 

John

 

 

 

AlStewart-PartialDecode1.flac


Finally, the promised release.   The decoder is now VERY easy to use and gives very reliable results.

Unfortunately, I don't have much time to write a long message about usage, except to say that the sequence of decoding is much easier now.

Busy today, but more documentation with more details is to come.

 

Basically, other than I/O and simple modes like 'decoding quality', the usage is like this:

 

--fcs="#layers, calibration, FA initiator"

 

Something like this:  --fcs="3, -44.5, fcf" would be fairly common for a 3-layer decode.  Most good-quality consumer recordings can be decoded down to 4, 5, or 6 layers; sometimes they are worth it, sometimes not.  The decoder IS very accurate now -- and as you can see, it is easy to use in comparison with the very complex set of commands.

 

Attached is the preliminary Usage manual -- more to come.  Also, the pointer to the repository is below.  You want V1.6.2D.

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

 

John

 

Usage-V1.6.2D.pdf


New release -- sounds exactly the same, same settings.   The only difference is that --limiter=<dB> and --as=<level> (the limiter and anti-sibilance) really work now.

 

V1.6.2E -- notice only a one-letter increase.   There are ZERO stealth changes.  The decoder is now very stable, and mostly just bugfixes are happening.  It works super well, ONLY LIMITED BY THE USER'S SKILL.

 

The --limiter switch, with a -dB argument, sets a limit on the level.  There is no gain makeup, so if you specify --limiter=-10, then the maximum level will be close to -10 dB.   This is a dynamics-processing limiter (gain control) and not a clipper.   It is supposedly a hard limiter, but I have written a lot of dynamics processors, and unless one is really careful (which I wasn't), hard limiters do 'leak'.   The limiter uses the DolbyA-style detector, so it has very small amounts of obvious distortion, but it shouldn't be used to limit into the body of a waveform.   For example, the 'Ein' recordings (some wide-dynamics Telarc disks), which ARE FA, have gunshots, pops or explosions that are about 10 dB higher than the average signal when decoding.  Most of the songs work well at --limiter=-10; but unfortunately, on that disk, the BODY of some of the recordings is higher than -10 dB, so they really do get compressed hard.   The limiter does seem to work pretty well.

 

The --as switch (anti-sibilance) works pretty well, but has a fixed threshold.  It is something that you might want to tweak, but I have found that --as=2 (which is an absolute level and not a dB level) works pretty well.   The anti-sibilance uses an array of dynamic notch filters, and isn't very intrusive.   The anti-sibilance isn't perfect but DOES WORK.

 

Both of these did exist in the previous release, but I was having problems figuring out why they weren't reliable.  I forgot something important -- I didn't rectify the signals, and one of my support routines did strange things.   These processors are INDIVIDUAL-CHANNEL and NOT stereo.   So if you have an instantaneous peak on the left channel, it won't mess with the right channel.   This kind of behavior is okay for a super-fast limiter -- so you won't get image shifts in normal cases.

 

Location:  (both V1.6.2D and V1.6.2E are in the repository -- I am keeping 'D' just in case there is a problem, which I doubt.)

 

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

 

John

 

 

 


There is a new 'Usage' for V1.6.2E.   Actually, the V1.6.2E decoder should probably be V2.0A -- the first fully functioning release.  However, I reserve the V2.X series for a release that supports a GUI scheme.  (I have an idea for a very flexible networked GUI scheme for mass decoding operations.)   A totally functioning GUI would be +1 yr away, as it would be more than just a local graphical presentation; it would instead be able to support a decoding network for large-scale DA decoding of albums.

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

------------------------------------------

 

Some improvements are noted, and the comments about --pi3k are strengthened.

Also, the mentions of the new --limiter and --as (anti-sibilance) commands have been updated to note that they now work very well.

 

A more complete manual with better usage information is coming.

The decoder is *really good* now -- even decoding some of the most challenging recordings very cleanly.

 

------------------------------------------

 

Note: the simpler decoding modes DO HELP, and are less tweaky than trying to do full decodes.   You no longer have to put up with the intermodulated highs, woody midrange, swishy cymbals and messed-up brass presented by almost all recordings distributed to consumers.   The decoder is DEAD-ON accurate for the various curves needed to clean up a lot of recordings.

 

------------------------------------------

 

This has been a very long and difficult effort.  Even the relatively simple 'DolbyA'-style decoding part of the decoder was claimed to be impossible by a lot of recording experts and even mastering people.   These naysayers were clueless.

 

A lot of my demos have been flawed, but no longer.  Those who said that it couldn't be done were correct in one way -- the DA and FA processing (the nonsense damage done to practically every consumer recording) is INCREDIBLY tricky to reverse-engineer (actual engineering) and develop against.  There are aspects of the decoder that are clearly patentable, but I am currently keeping them as trade secrets, and am going to release them to the public as a public good.

 

Anyway -- I 'declare' the decoder to be fully functional.   The only changes might be to accommodate a recording type that I haven't encountered yet.  There might be some minor quality improvements still possible, and maybe some speedups.   With the 'Ein' recording, the problem came close, and some of the ABBA had always tripped me up until now.  (I mean, until the last few days.)

 

Please enjoy the results of using the decoder, because you can finally hear what has been kept secret from you -- the recordings that you had actually purchased, but didn't receive.

 

The FA scheme that the decoder UNFOLDS is almost like an OLD-technology version of MQA.   There are a lot of parallels, and almost all of your recordings ARE damaged by the scheme.   (The FA scheme is a lot like the proverbial 'Russian Doll', as it has 'layers' that are optionally decoded.)  Interesting, huh?

 

* The term 'UNFOLDING' describing the FA decoder's actions is figuratively accurate.

 

John


Very technical message herein:

 

An example of the ubiquitous 'DolbyA' fog, and a way around it.

 

Here is a 20-second snippet of SuperTrouper, and I am going to try to explain what to listen for.   The 'st-undec' version is undecoded, and has the typical 'overly full' or 'woody' FA sound.   There is another attribute of the FA sound that comes directly from the DolbyA mechanism to begin with -- an imprecision, or 'fuzziness without fuzz'.   It causes things like a vocal chorus to be smeared.   This isn't directly part of the FA mechanism; that imprecision is 'modulation distortion', created by sidebands from the mix of the signal with the gain control.   These 'sidebands' are not easily managed by analog hardware mechanisms.  The beauty of the 'ease' of using the Hilbert transform in the digital realm, with fast computers, is that it allows some control of those imprecision-creating sidebands.   Also, a big part of the 'overly tubby' FA sound is indeed modulation distortion.

 

So, included here is the original FA recording 'st-undec.flac', and then three progressively better decoding results.   Even though the difference between these modes is profound, almost incredible, for decoding material that had been encoded using DolbyA for noise reduction, the FA decodes aren't all that different.  I can hear the progressive decrease in modulation distortion, but I am pretty sure that most cannot.   I am trying to suggest that the '--normal' mode (the default) is mostly just as good as the highest mode (which is not even in the manual.)   I use the highest-quality mode for testing ONLY, because it is impractical to use (10X slower than realtime on my i4770 Haswell); the normal mode, when emulating 6 DolbyA units, runs about 2X faster than realtime.   The highest-quality mode shows the possible ultimate result, but the simple '--fz' mode is about as high as there is significant benefit, and normally I suggest using no advanced modes.  (Actually, a portion of anti-MD is always active -- otherwise the result would be REALLY FOGGY, like a real DolbyA HW unit would be.)

 

So, 'st-undec.flac' is the FA original (lots of modulation distortion, very tubby, vocal chorus obscured), and then the progressive improvements: 'st-decsimple.flac', 'st-decfz.flac' and 'st-decxpppMAX.flac'.   I am not claiming that the chorus is 'pretty', but it is indeed more accurate in the decoded versions.   The original is from the very best/most clean/least damaged copy of SuperTrouper that I have been able to find in the last 7 yrs.

 

John

st-decsimple.flac st-decfz.flac st-decxpppMAX.flac st-undec.flac

11 hours ago, John Dyson said:

Major decoder bugfix coming.

Don't bother with any current releases.

 

Mea culpa.

I think that I found the EQ problem that my hearing had adapted to.   I did a few more iterations and understand what went wrong.   We are talking a few tens of lines of code out of over 30k lines of very complex code.

 

This effort (the results of which I think are now working much better) has been incredibly difficult -- even after doing a decoder for DolbyA material, which was also claimed to be impossible (it matches the curves perfectly.)   The DA decoder was a necessary stepping stone to doing the FeralA decoding.

 

Why do the decoding?  I used to do orchestral recordings, and almost all CDs and digital downloads do NOT sound like what is in the studio or a good mixdown.  They are insidiously compressed, and the FeralA decoder undoes the compression.

 

Problem?  I can tell if the dynamics processing is correct, but there is post-processing EQ that I had trouble with.   My hearing adapted to an error, and I had trouble getting pointers as to what was wrong.   So, I got rid of the grain, the woody sound, the hiss (on older recordings), the FLATTENED sound, and the horribly weirded-out stereo image -- but had problems with the final EQ.  (There is a semi-formula for the EQ, but I didn't think that it was needed.)

 

Since there is so little public interest -- the attempt at getting some interest for the FREE work is coming to an end.   Those who might really be interested are still encouraged to talk to me privately.

 

It breaks my heart to see $10k or even $2k systems as an attempt to improve sound quality, when almost everyone is listening to totally damaged recordings.  This damage is extreme, and the decoder DOES undo it.  There was a minor problem, now fixed, that seemed to encourage destructive naysayers.

 

Like I said, PMs are okay, and I'll be checking periodically.

 

John

 


Just wanted to make sure you know -- there will be a final, working release soon.

It will be available and forever free to use/copy or spindle/mutilate.

There is quite a bit of cleanup work, and doing a release correctly takes a day or two -- probably Saturday -- even though the mechanics are only about 15 minutes.

 

I believe the results are as close to perfect as they can be, given the fact that I have no specifications for the *evil* distortion scheme (on almost every digital recording sold to consumers) that I call FA.

 

John

 


@John Dyson the most recent release has major improvements in usability.

 

I have been investigating decoding, and frankly, before this most recent iteration, it was rather difficult to get the decoding parameters right -- to the extent that it was rather difficult to make a decode sound better than many original CDs, let alone SACDs/DSF.

 

This most recent release is now ready for prime-time testing. There really are only several different options (leaving equalization etc. aside).


On 9/22/2020 at 7:53 PM, John Dyson said:

Since there is so little public interest -- the attempt at getting some interest for the FREE work is coming to an end.   Those who might really be interested are still encouraged to talk to me privately.

 

It breaks my heart to see $10k or even $2k systems bought as an attempt to improve sound quality, when almost everyone is listening to totally damaged recordings.  This damage is extreme, and the decoder DOES undo it.  There was a minor problem, now fixed, that seemed to encourage destructive naysayers.

 

 

Don't give up yet.

 

Yes, if you can tweak the EQ, that might be an improvement.

 

No question that this offers changes in SQ that vastly outweigh many tweaks.

 

C'mon, folks who are testing power cables for DC supplies, capacitors, different disk drives, ummmm ... @John Dyson is providing this software for free! ... I mean, if you really care about SQ, right?


I have been trying John's work since his first free release this year. Initially I was just creating an environment where it could process all the files from a CD (ripped long ago), then coming back to it every few releases. I quickly realized that it was going to take a lot of time to get a feel for how varying the controls would affect the sound, and how they worked in combination. The frequency of John's discoveries and bug fixes kept me from engaging in a serious effort to learn how to use it systematically and predictably.

 

Fast forward to sometime in August, and I started to get a better sense of it with the first release that allowed multi-pass processing. The latest release has built significantly on that. With it I have begun to build a sense of what audible changes occur with variations in settings. For the first time, I feel that I will get a real payback on my efforts to listen, correlate, and learn the craft needed to use this software. I look forward to John's coming EQ bug-fix release, and to improving any number of my digital albums.

 

Skip

On 9/23/2020 at 1:53 AM, John Dyson said:

Since there is so little public interest -- the attempt at getting some interest for the FREE work is coming to an end.  


As asked before: how can this decoder be implemented in SonicTransporter / HQPlayer / Roon / various players? Is there any interest from third parties?

 

Is the decoder fully automatic, or does it need some sort of UI?

On 9/24/2020 at 8:23 PM, R1200CL said:


As asked before: how can this decoder be implemented in SonicTransporter / HQPlayer / Roon / various players? Is there any interest from third parties?

 

Is the decoder fully automatic, or does it need some sort of UI?

The decoder is NOT fully automatic, but it is about as easy as possible for a command-line tool.


The problem is that there are MODES -- and not every recording used the same mode.  There is probably a learning curve of about a day for someone used to the command line.  (BTW, they are still making NEW FA recordings -- I have copies of Taylor Swift's 'Shake It Off' and Carly Rae Jepsen's 'Call Me Maybe' -- bog-standard FA; they use rather standard, easy-to-select settings.)

 

It is possible to create a simplified version (maybe not quite the same quality because of the crazy math -- there is 60dB of gain flopping all over the place up to 10 times per second) -- but I'd suspect that with 8 check boxes for the modes, a count of the number of layers (between 4 and 7), and the calibration level (usually a number like -44.5dB, -46dB or -49dB), a GUI could almost be written.  (I have planned to create an interface using JSON & sockets or whatnot, with a separate GUI process.)
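The control surface described above (mode check boxes, layer count, first calibration level) is small enough to model in a few lines. Below is a hypothetical sketch of such a parameter bundle in C++; all the names are invented for illustration and are NOT the actual DHNRDS interface. The dB2lin helper is the standard 20·log10 amplitude convention that gain code of this kind relies on.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical settings bundle for a decoder front-end.  The struct and
// field names are invented for illustration -- this is NOT the actual
// DHNRDS command-line interface.
struct DecodeParams {
    unsigned modeFlags  = 0;      // up to 8 mode check boxes -> one bit each
    int      layers     = 4;      // between 4 and 7, per the post
    double   firstCalDb = -44.5;  // e.g. -44.5, -46 or -49 dB
};

// Standard amplitude dB-to-linear conversion (20*log10 convention).
inline double dB2lin(double db) { return std::pow(10.0, db / 20.0); }

// Sanity-check the user's choices before handing them to the engine.
inline bool paramsValid(const DecodeParams& p) {
    return p.layers >= 4 && p.layers <= 7 &&
           p.firstCalDb <= 0.0 && p.modeFlags < 256u;
}
```

A GUI process could fill in such a struct and ship it over JSON & sockets, as the post suggests.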

 

The decoder is a SUPER complex piece of software.  In doing a full decode, it runs 6 to 7 full DolbyA-compatible expanders with all kinds of EQ in between them.  Each expander runs at a different calibration level.  (Until my DolbyA-compatible decoder, I don't think there were any TRUE SW decoders that could undo DolbyA encoding...  The DHNRDS does full noise reduction, reasonably flat response, etc.)  When running the 6 or 7 expanders, each one must be a reasonably good match to the ancient hardware.  And those attack/release times are NOT normal time constants, but are highly nonlinear.
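To make the chained-expander structure concrete, here is a deliberately crude sketch: N single-band downward-expander stages run in series, each with its own calibration level. This only illustrates the stage-per-calibration-level idea; a real DolbyA-compatible decoder is multiband, with the highly nonlinear attack/release envelopes described above, and the EQ between stages is omitted entirely.

```cpp
#include <cmath>
#include <vector>

// Toy single-band downward-expander stage.  Below its calibration level
// the stage attenuates, widening dynamics -- a crude stand-in for one
// DolbyA-compatible decode layer.  (The real thing is multiband with
// nonlinear attack/release envelope tracking; none of that is modeled.)
inline double expandStage(double x, double calDb, double ratio = 1.1) {
    const double cal = std::pow(10.0, calDb / 20.0);  // dB -> linear
    const double mag = std::fabs(x);
    if (mag < 1e-12 || mag >= cal) return x;          // at/above cal: unity
    // Below cal: gain < 1 that shrinks as the level drops, pushing quiet
    // passages further down (undoing upward compression).
    return x * std::pow(mag / cal, ratio - 1.0);
}

// Run one sample through every layer in sequence.  Each layer gets its
// own calibration level; the EQ between stages is omitted here.
inline double decodeChain(double x, const std::vector<double>& calLevelsDb) {
    for (double cal : calLevelsDb) x = expandStage(x, cal);
    return x;
}
```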

 

However, the use of this complex piece of software can be made easier by putting a GUI in front of it.

 

I am taking a break tonight, and will try to be more coherent in writing messages tomorrow.

This has been the single most difficult release -- I promised myself that the program would be fully compliant, and I am now very sure it matches the encoded signal very, very closely!!!

 

John

 

On 9/24/2020 at 11:03 PM, jabbr said:

 

Don't give up yet.

 

Yes, if you can tweak the EQ, that might be an improvement.

 

No question that this offers changes in SQ that vastly outweigh many tweaks.

 

C'mon, folks who are testing power cables for DC supplies, capacitors, different disk drives, ummmm ... @John Dyson is providing this software for free! ... I mean, if you really care about SQ, right?

 

I must admit it has surprised me also how little interest this thread has gained, considering some of the other crazy stuff being tried elsewhere.  (no offence to anyone, I like a bit of crazy stuff myself)   

 

Do you think it might help if someone could collate a few examples of successful decodes?  At the moment there are dozens of versions and examples of ABBA and Carpenters tracks, but not much else.  Personally I would be interested in a small selection of tracks or short samples covering other artists and genres.

 

OK - I could try it myself, but being perfectly honest I am as busy as hell at the moment, and I think it would take me too much time to get it running and usable, mindful that it has been about 30 years since I have needed to do much with command-line software or similar.  (Maybe a simple starter's guide "for dummies" might help too, just to get the software up and running?)

 

But if there were a few convincing samples to try, it might be enough to convince me and a few others to give it the time it maybe deserves.

 

Just a thought.

 

As it happens, I have just been listening to the Cure's "Staring at the Sea" compilation album from a late-80's CD rip.  Sound quality seems pretty good to me, but there are a few tracks with very noticeable tape hiss (in particular on the second track, 10:15 Saturday Night; much hiss at the end of A Forest too).  From what I have read here, audible tape hiss is one of the more obvious "FeralA" tells.  As I said, the overall sound quality of the album seems pretty good to me, certainly in comparison to similar recordings of the era; if this one could be improved it would certainly help to convince me.

 

 

4 hours ago, Confused said:

 

I must admit it has surprised me also how little interest this thread has gained, considering some of the other crazy stuff being tried elsewhere.  (no offence to anyone, I like a bit of crazy stuff myself)   

 

Do you think it might help if someone could collate a few examples of successful decodes?  At the moment there are dozens of versions and examples of ABBA and Carpenters tracks, but not much else.  Personally I would be interested in a small selection of tracks or short samples covering other artists and genres.

 

OK - I could try it myself, but being perfectly honest I am as busy as hell at the moment, and I think it would take me too much time to get it running and usable, mindful that it has been about 30 years since I have needed to do much with command-line software or similar.  (Maybe a simple starter's guide "for dummies" might help too, just to get the software up and running?)

 

But if there were a few convincing samples to try, it might be enough to convince me and a few others to give it the time it maybe deserves.

 

Just a thought.

 

As it happens, I have just been listening to the Cure's "Staring at the Sea" compilation album from a late-80's CD rip.  Sound quality seems pretty good to me, but there are a few tracks with very noticeable tape hiss (in particular on the second track, 10:15 Saturday Night; much hiss at the end of A Forest too).  From what I have read here, audible tape hiss is one of the more obvious "FeralA" tells.  As I said, the overall sound quality of the album seems pretty good to me, certainly in comparison to similar recordings of the era; if this one could be improved it would certainly help to convince me.

 

 

I'll check with a few people for a copy of the material -- or send me a substantial piece of the selection and I'll see what I can do with a decode.  Doing only one song can sometimes be challenging, though (sometimes it is a balancing act to do optimal decoding for a given album).  The most critical thing, other than there being no errant EQ on the direct digital download/CD, is that there be NO normalization, either on an album or per-selection basis.  It is possible to decode an album if it is normalized as a whole, but normalizing on a per-selection basis makes life really difficult.

 

The result of an FA decode can be profoundly good, sometimes just 'okay', and sometimes even 'bad'.  There are so many variables, but lately, with the latest improvements and after chasing down all of the FA variants, the success rate is pretty good.  I have no FA recordings in my possession right now that cannot be improved to some extent, but I fully expect to run into something problematical someday.

 

When running up to 8 decoders that can decode DolbyA material, that ends up being a lot of processing and a lot of chances for error.  After carefully adjusting the DA decoder so that the errors are smaller than before, I believe that about 7 layers (basically DolbyA with EQ, at different level calibrations) is the largest number of layers for maintaining quality while maximizing the removal of compression.  It appears that the common maximum for consumer recordings is either 6 or 7 layers, but very good results for casual listening can be attained at 4 layers (decoding at -44.5, -34.5, -24.5, -14.5dB).  Also, the sequence can start at -46dB, -49dB, -43dB or even -44dB; I have seen some -50dB stuff as well.  Older, hissy material can really benefit from one or two more layers (either 5 or 6 total).  When doing more than 4 layers, the calibration levels usually recycle back to 10dB lower, like -54.5, -44.5dB (for 6 layers).

 

Most of these layering details are automatic, and just the first calibration level, specific EQ mode, and # of layers need be specified.
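The layering rule above can be written down as a tiny helper: the core four layers climb in 10dB steps from the first calibration level. How the extra 5th-7th layers "recycle back 10dB lower" is my own reading of the description, so treat the prepending logic here as an assumption, not a documented rule:

```cpp
#include <vector>

// Calibration-level ladder for an N-layer decode.  The core four layers
// climb in 10dB steps from the first calibration level (e.g. -44.5 ->
// -34.5 -> -24.5 -> -14.5).  Extra 5th-7th layers are prepended 10dB
// below the start -- that "recycle back" detail is an assumption based
// on my reading of the post, not a spec.
inline std::vector<double> calLadder(double firstDb, int layers) {
    std::vector<double> out;
    for (int i = layers - 4; i >= 1; --i)   // assumed extra layers below start
        out.push_back(firstDb - 10.0 * i);
    for (int i = 0; i < 4; ++i)             // core four layers, 10dB apart
        out.push_back(firstDb + 10.0 * i);
    return out;
}
```

For a 6-layer decode starting at -44.5dB this yields a ladder that includes -54.5 and -44.5dB, consistent with the levels mentioned above.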

 

John

 

