
'FeralA' decoder -- free-to-use



Here is the new plan for the 1.5wk release (rescheduled.)

The absolute drop-dead deadline for creating the demos is about +30Hrs, and that includes uploads.

 

Got one feedback comment, plus I did some more reviews, and found that in the rush to get the release ready before the trip (actually, I have to dog-sit), I made a few sloppy decisions.   Also, I was getting used to the sound, and noticed afterwards some previously unnoticed distortions that should be cancelled out.  (That is, slightly reprogram the handling of the 'pilot' signal.)

 

Given this, and since I am loath to leave a release online that has known problems, I just restarted the decodes for V20CR990.   The sound is now SUPER SMOOTH and sounds like a real recording; it doesn't sound quite as processed.

 

The final problem was handling the HF detail rolloff.  If the HF details are maintained up to 36kHz, the sound is too edgy.   I found a good compromise, probably technically correct: start rolling off the HF details between 18kHz and 24kHz.   This is not an HF frequency rolloff, but a smoothing out of the HF dynamics.
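To make the idea concrete, here is a minimal sketch (my own illustration in Python, not the decoder's code; the raised-cosine shape, function name, and parameters are assumptions) of a dynamics weight that stays at 1.0 below 18kHz and fades to 0.0 by 24kHz:

```python
import math

def hf_dynamics_weight(freq_hz, f_start=18000.0, f_stop=24000.0):
    """Weight applied to the *dynamics* processing at high frequencies.

    Returns 1.0 below f_start (full dynamics detail), 0.0 above f_stop
    (dynamics fully smoothed), with a raised-cosine taper in between.
    Note this scales the dynamic-range processing only, not the static
    frequency response.
    """
    if freq_hz <= f_start:
        return 1.0
    if freq_hz >= f_stop:
        return 0.0
    x = (freq_hz - f_start) / (f_stop - f_start)   # 0..1 across the band
    return 0.5 * (1.0 + math.cos(math.pi * x))     # smooth taper
```

The raised cosine is just one smooth choice; the point is that the transition is gradual rather than a brick wall at 36kHz.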

 

Before this final problem, about 3 or 4 more serious ones were fixed (including the handling of the pilot signal mentioned above.)

 

I have every motivation to be ready when the vacation for the other family members starts.   I am the only person who can take care of the very special Maltese doggies, so I must stay with them at their home!!!   It would be best if this is all finished before +9Hrs.

 

John

 

Link to comment

Below are the plans for the very very last minute demo release before I leave:

 

(Sent to a PM correspondent also)

 

Anyway -- I am running off what is hopefully the 'final' version...   Just found two bugs before leaving on the trip.

 

The new version will be/IS V20CR995; it has no 'birdies' and has a very pure sound.   None of that 'buzz' that exists on most FA recordings....    It is truly 'acceptable', finally.

 

The version that I made available about 10-15Hrs ago had the bugs, and they kept irritating me.  I decided to push the preparations to the very bloody edge.   I've been hurriedly getting ready while working on the final version.

 

This JUST MIGHT be the true, final version!!!

About 8Hrs for the full demos, about +4 more for ABBA, about +4 more for the Carpenters, then I must leave!!!

 

John

 

Link to comment

This is more 'blog' than technical journey into solving the FA problem:

 

Someone in PM mentioned that ABBA wasn't very good for determining quality (paraphrased), and I agree.

 

Let me explain:

Back in about yr2011, I was listening to a CD for the first time since about yr1990.   It sounded truly horrible: buzzy, loud and shrill...   What was the CD?  A copy of the 1992 ABBA Gold, the best version.   Given my (then) virgin hearing, it was obvious that something was wrong.   Was it my hearing?   Was it the recording quality?   What was it?

 

The origins of the FA effort started when listening to *distorted* ABBA recordings in yr2011 (to me, all FA recordings are distorted), but it turns out that ABBA is not generally the best test material.

 

In a sense, using ABBA recordings is a compulsive 'guilty pleasure', but the bonus is that I am about a 75% ABBA fan.

 

John

 

PS:  the previous release attempt approx 1/2 day ago still had a few flaws, which I perhaps could have lived with forever.   I can hear NO flaws in the new version, V20CR995.   I am still listening for flaws -- nothing AT ALL so far.    I keep trying to hear those evil, frustrating 'birdies', but none so far.

 

 

 

 

Link to comment

The very last minute V20CR997 demos are available.   Also, the Linux binary is available -- just enough time to squeak it in.   The V20CR995 source resides in that distribution, but there wasn't time to put together a Linux source for V20CR997. (The source is too complicated for casual review, and needs updated internal documentation anyway.)

 

The major recent changes are mostly related to one thing: *NOISE REDUCTION IS NOW WORKING!!!*   I KNEW that the decoder should have functioning noise reduction, but had given up on it until finding the problem just before leaving -- it had to be something that I missed.    I don't have time to describe the bug, but it was so subtle as to be even trickier to describe than something more complex.

 

Here is V20CR997, along with Carpenters snippets.   There wasn't time to complete the ABBA snippets, but V20CR995 does have ABBA snippets.

Along with the demo snippets are the 10 second A/B/A/B and 35 second A/B/A/B comparisons...

 

Web address:  https://www.dropbox.com/scl/fo/54pdykw48yk324bbjmc5h/h?rlkey=q4nj13lef3u92iemm2ntcqp6k&dl=0

 

Gone for about 1wk, nothing really new for about 1.5wks...   See you soon!!!

 

John

 

Link to comment

I'm back.

The decoder is close to technically finished (gonna try some speedups, perhaps try to optimize the handling of the 'pilot' signal in the FA recordings.)

 

YES -- digital FA recordings ARE distorted by a pilot signal!!!   (Best not to publicly describe it for now -- it is not normally 'obvious' on a spectrogram, but IT IS THERE.)   Fully decoding an FA recording is not possible without considering the pilot signal.


Basically, the FA decoder IS NOW A DECODER, not just distorting or improving (depending on bugs.)

 

Still going to be busy for up to another week, the current demos stand as is.

Once I get my bearings, if a QC session after the 1wk hiatus finds a serious problem, I'll fix it; but RIGHT NOW, the FA decoder is structurally complete.

 

John

 

 

Link to comment

PS: about the 'pilot' signal...   It doesn't really cause all that much audible distortion, but IT IS THERE.   It is not a conventional pilot like a 'pilot tone', but it has a similar purpose.

 

So, if I oversold the distortion of the 'pilot' as being a lot, it isn't a lot.   However, the fact that it exists as a side effect of the math involved in the encoding allows easier, much more accurate decoding.   FA recordings on LPs, or mp3-encoded, do lose the pilot, so those recordings cannot be fully decoded.

 

John

 

Link to comment

Intended changes for V20C-RELEASE (the actual one, not demos):

 

1)  Highly technical: a small but important correction to the HF expansion at >10kHz, so that there is less compression/more descrambler activity.  (Once changed, it will give a less 'metallic' sound.)   Also, the incorrect EQ does impart some 'graininess' at HF.

2)  Usability: further regularize the command line switch definitions, with much better enumeration of capabilities.

3)  Speedup in one of the EQ mechanisms (mitigates a speed bottleneck in multi-threaded situations.)

4)  Very slight, careful adjustment of the input/output gains so that the gains are nominally 0dB.   This opens up some opportunities for cascading decoders or using them for EQ only.

 

To me, the speedup is most important, but changing the descrambler activity >10kHz is that last little 'bit' that gives very high quality sound.   The input/output gain error is small, but fixing even a small error might be helpful.

 

The EQ capabilities of the decoder command line on output are so powerful that every EQ method used internally can also be added on output, including 1st order, 2nd order, special names for standard EQs, and rounding to FA frequency values.   These capabilities need to be better described.
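As a sketch of what one such 2nd-order EQ section amounts to, here are the widely used RBJ 'Audio EQ Cookbook' peaking-filter coefficients (an illustration only -- the decoder's actual internal EQ math is not published here):

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """2nd-order peaking EQ coefficients (RBJ audio-EQ cookbook),
    returned normalized so a0 == 1.  A gain of 0dB yields an exact
    identity filter, which is what makes such sections safe to add
    or cascade freely on the output."""
    a_lin = 10.0 ** (gain_db / 40.0)           # sqrt of linear gain
    w0 = 2.0 * math.pi * f0 / fs               # center freq in rad/sample
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a_lin
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a_lin
    a0 = 1.0 + alpha / a_lin
    a1 = b1
    a2 = 1.0 - alpha / a_lin
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]
```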

 

I'll also review the 'pilot tone' handling to make sure that the settings/tuning is precisely correct -- might already be true.

 

As mentioned before, the decoder is technically 'complete', and the last finishing touches need to be done.   Admittedly, the 'finishing touch' regarding the HF expansion comes as a matter of careful A/B listening after a full wk of rest.

 

The QC pass makes me extremely happy about the decoder actually doing FA *decoding*, but a little less happy that FA decoding does less than I had hoped.   If there is the possibility for another dB or so of noise reduction, I'll carefully bring it online, but I doubt it.

 

True decoding of FA recordings is more of a 'clean up' than anything else.

There will not be 'radio silence' for the next week, but obvious online activity will be slower.   If I don't have anything to say, I won't say it.   IF THERE IS ANYTHING INTERESTING FOUND, I'LL CERTAINLY MENTION IT!!!


Figure on about 1 to 1.5wks before the *RELEASE*.   The current 997 version could ALMOST be denoted the 'release', but is missing minor *tweaks*, not anything substantial!!!


John

 

Link to comment

Another capability has been suggested:

A mild 'enhancement' mode...

 

I think that with the new 'tuneup', there will be a little less calling for 'enhancement', but certainly there is room for optional enhancement.

 

The 'tuned up' version is noticeably cleaner, but with the same general sound.   Adding 'enhancement' can easily be done, and needs to be done in the descrambler.   Doing enhancement in the FA layers can cause pumping.

 

If there is an 'enhancement' mode, it will be a slight improvement in dynamics & clarity, not so much a decrease in hiss.

 

I am INCREDIBLY happy with the sound of the decoder, esp after listening again after 1 wk.

What I am INCREDIBLY happy with is NOT exactly V20CR997, but it is very, very close.   The improvement after the 'tuneup' is very nice indeed.   I couldn't hear the small flaws (fuzz, insufficient definition) until after giving my hearing a rest, and doing the A/B selections has resulted in a very architecturally reasonable result.

 

When/if an optional 'enhancement' mode is added, there will be less temptation to add 'enhancement' for normal usage...   The end-user will be able to select a slightly enhanced sound as an option...

Link to comment

Well, found another problem in the audio -- fixed.

While doing the final adjustments, I noticed that the stereo image was 'wobbling around', giving the sense that the image is dispersed too much.

 

The effect is caused by a change in frequency response when there is prominent bass along with HF.   The bass was dispersing the HF.  In fact, this dispersal is a lot of what FA encoding does.  It is possible that the decoding was making the dispersal worse than the FA under those circumstances, but I don't really know.  The dispersal is very difficult to detect; once the stereo image is 'locked in', though, the previous dispersal is easy to hear.

 

Once the problem was detected, the change needed was minor -- probably best thought of as a 'tweak'.

With the decoder being essentially finished, it is easy to find these minimal errors.   The smallest errors are now being hunted down, and we are on target.

 

John

 

 

Link to comment

With this essentially-final 'tuneup', there have been lots of little bugs fixed, all adding up to rather profound improvements over the week ago demo.

* The 'tuning' consists of plug-in 3dB EQ modules with a limited selection of Q values...   There are only a few points of actual tweaking, perhaps 2-3.

 

(Long, but potentially 25% worthwhile message below -- better than the usual 10% 😉)

 

The zeroing-in has become more and more rapid: starting with rather haphazard non-progress 6-7yrs ago, using almost random searching, I found out more and more of the techniques used in the FA scrambling process.  The optimization process accelerated about 1-2yrs ago, but was still far from complete.   Depending almost fully on subjective techniques SUCKS as badly as anything can suck.   Early on, I felt like my mistake was starting the project to begin with.   Every unpleasantly undisciplined technique had to be used in developing the decoder, and my naturally sporadic and undisciplined creativity made the situation worse.   About 1/2 of the way through, a set of 'rules' had to be developed.  The rules were not 'constant', but changing the 'rules' required a very strong reason.  Without the very limited discipline of using a set of 'design rules', the decoder program would never have been made to work correctly.  'Seeding' the rules at the beginning was somewhat haphazard, but luckily many/most of the initial rules were correct.

 

In the last year or two, I found the descrambler concept, and more recently recognized that the DA layers and the descrambler BOTH need to operate in LR and MS spaces; this has enabled approaching the actual global error minimum.   Even the simple decision of where the descrambler complex resides in the data stream has been tricky; also, the correct scaling factors for LR/MS conversion are very intolerant of mathematical approximation.  In the last year, I found that almost every base-level mathematical coefficient is intolerant of error, including the values that must be rounded to 1% EIA component values instead of being theoretically perfect.
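The exact LR/MS scaling the decoder uses isn't given here, but one common choice that tolerates no approximation is the orthonormal 1/sqrt(2) pair, which is exactly invertible and energy-preserving -- a sketch with my own naming:

```python
import math

# Orthonormal mid/side scaling.  Approximating this constant (say,
# with 0.7) breaks both the round trip and energy preservation, which
# may be why such factors are "intolerant of mathematical approximation".
INV_SQRT2 = 1.0 / math.sqrt(2.0)

def lr_to_ms(l, r):
    """Left/right -> mid/side, orthonormal form."""
    return (l + r) * INV_SQRT2, (l - r) * INV_SQRT2

def ms_to_lr(m, s):
    """Mid/side -> left/right; exact inverse of lr_to_ms."""
    return (m + s) * INV_SQRT2, (m - s) * INV_SQRT2
```

With this scaling, processing can move between LR and MS spaces repeatedly without accumulating level errors.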

 

One of the several most difficult challenges:  why is the descrambler needed, and WHAT is a 'descrambler' anyway?  It is best described as 'magic'. 🤡 (Humor very much intended.)   Also, what is this magic attribute of a pure FA signal that good old John calls 'the pilot'?

 

PLEASE GIVE THIS UPCOMING RELEASE A REAL TRY.   The demos might not contain material that accommodates your taste, but some day you might give the decoder a try anyway.   This release-to-try will not be available until next week, or perhaps the week after.  If the reviewers/active friends of the project give an okay, a sneak peek set of demos might be available this upcoming weekend.   We are at a limited 2-way decision point today or tomorrow, and after that decision point is satisfied, we might be fully technically ready for a private demo release.  (Private demos usually have public content, just that the result might not be ready yet -- I need feedback.)   Lots more testing will be done after this decision point, and then it will be time to do the Windows port.   It is possible that if the decision point goes differently than I expect, I might have to revisit a few things, but it should be easy to complete within the suggested timeframe.   With the loss of my Windows development environment a few months ago, I KNOW that it will take a few days to pull all of the pieces together again, as needed to produce an actual, full release with a Windows binary also.

 

Sometimes in the past, a few skeptical naysayers (not intended to be judgmental) were partially correct in their criticism.   Too often, though, the criticism was too global, not understanding that the goal WILL be met, however painful.   THIS PROJECT HAS BEEN A PERFECT CASE OF SUBJECTIVE MEASUREMENTS/EVALUATIONS BEING PROBLEMATIC.  EVEN WORSE, NO USEFUL SPECIFICATION IS AVAILABLE.    In the last few weeks, I have found the only purely objective determiner of an actual, full FA signal...   Even that determiner can sometimes mistakenly indicate an FA signal, but no matter what, it is helpful in the decoding process.

 

Early recognition (by some experts) of the early, single-DA-layer version with tweaky settings was both encouraging and disappointing, because I knew that actual *decoding* is much more complex.   The early single layer demos DID give me some hope, and a few friends helped with enough impetus to keep the project going.   I wish Alex was still around to see the progress, but even he was losing interest/becoming concerned about whether the project would ever be completed.   (Admittedly, the PROJECT is not complete, just the algorithm; and now most of the configuration settings & basic program are close to complete.)   I do regret getting some really altruistic people involved too early;  I made the mistake of offering participation without knowing how much more work was needed.   Even though I thought the decoder to be incomplete, I was totally clueless about how far away completion might be.  Back 4-5yrs ago, I thought that the decoder was only a few months away!

 

The demo a week or so ago did have errors and didn't sound correct, but it was a preliminary demo that helped to solidify the technical methods.

THE NEW, ACTUAL RELEASE sounds very different, but still technically similar...

*  Using the command line, it is now possible to disable the descrambler.   It can be interesting to hear what the descrambler does...  It is truly WEIRD.

 

If this seems like a suggestion that IT (the decoder) REALLY WORKS, it really does.   The decoder has been 'working' in nearly its current form for a month or so, just that there were some seriously incorrect settings, all misunderstood by me.   Actually, many of the important settings ended up being very consistent, but used some values outside of the 'rules' that I use to maintain consistency.

 

After this reviewer-based decision point is determined, the process for creating a release, a usable release (sorry, it is still command line) will be started.

 

A major disappointment and caveat about using the decoder:   it is too damned slow.

The math is nontrivial, and sadly most of the optimizations that are possible have already been done.

Using a very good, high quality level, the decoder barely runs realtime on my 10-core, older generation i9-10900X CPU.   Maybe the more recent CPUs will do better...   Frankly, a recent i7 (12th, 13th or 14th gen) or recent Zen4 CPU should start making the decoder almost comfortable to use.   I'd sure like to know what the relative FP computing speed is on the Intel efficiency cores vs. the performance cores.   I have a suspicion that in the real world, with the very heavy AVX2 processing, the efficiency cores might be 1/2 the speed of the performance cores.   Another note: make sure that your CPU cooling works well -- running the decoder WILL heat up your CPU.

 

This new version IS very different than anything before, many little nits that screw up the details and sound image have been corrected...

John

 

PS:  the current decision point matter is about a certain EQ that modifies the descrambler processing above about 4-5kHz.   It will not modify the conventional freq response.

Also, in the test version that I am working with privately (V20CR1040), there is still an apparently incorrect 'bending' of the HF dynamics.   It is probably a very simple parameter change.

 

 

Link to comment

During a very limited set of private reviews, I think that we had a setback.

It still seems that a 1.5wk delivery time is very likely, but I wanted to be totally transparent since I had been making 'victory laps'.

 

Even though I can measure an increase in dynamics, it appears that the sense of at least one listener was that the dynamics were decreasing.   This unexpected situation is very plausible, and might be waking me up to a problem with the midrange processing.   Since the processing in each band should be 'the same', there is likely a problem with phase that I didn't perceive, thereby decreasing the apparent expansion.   (The increase in expansion can easily be heard by enabling/disabling the descrambler by itself, but if there is missing expansion, it would be in the midrange area.)
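The phase mechanism is easy to demonstrate: two bands carrying the same signal only sum to full level when they are in phase, so an unnoticed phase error reads as lost level/expansion.   A small sketch (my own toy illustration, not the decoder's band structure):

```python
import math

def summed_level(phase_error_rad, n=1000):
    """Peak of sin(t) + sin(t + phase_error) over one cycle -- i.e.
    how two equal-level bands combine at a band boundary.
    In phase the peak is 2.0; at a phase error of pi it cancels to 0."""
    peak = 0.0
    for i in range(n):
        t = 2.0 * math.pi * i / n
        v = math.sin(t) + math.sin(t + phase_error_rad)
        peak = max(peak, abs(v))
    return peak
```

(The closed form is 2*|cos(phase_error/2)|, so even a 90-degree error costs about 3dB of apparent level.)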

 

This thing is damned complicated, but the design is now capable and flexible enough to make the described change.   Even now, the change is mostly 'tuning' and unlikely to be a structural change.

 

Oh well, an internal delay of a few days out of 6 or so years isn't much.

 

John

 

 

Link to comment

Announcement:

You all probably know that I am working on the nearly impossible -- and am very close to the goal.   However, these years of work and recent events in the personal realm have taken their toll.   The goal is technically achievable, and most of the parameters are understood and implemented.   Even though the result will always be an approximation, it is not quite yet a close-enough approximation to the goal.

 

One aspect of my work is to TRY to be intellectually honest, whether or not my thinking is perfect.   In the last week or so, expectation bias (similar to that discussed in another group) has blinded me.   The biases have become so strong that, in some ways, I cannot hear even the most obvious and provable differences.   If the goal were to 'sound good to me', this wouldn't be troublesome.  The goal is 'technically as correct as possible', and being honest about the results.  I just cannot do it right now.

 

As it is right now, the decoder IS working, and the descrambler IS working, but the final settings have been elusive for the last few days.   I am blind to anything beyond the most obvious and aggressive EQ differences.   This means that progress is frustrated, and it has been impossible to produce reliable results for the last few days.  There are no 'manuals' or 'technical reports' that can help right now.

 

I started working MUCH more intensively once the goal was in sight, and that was a bad decision.

 

It seems best to 'give it a week' away from intensive work.   There'll definitely be some 'diddling around' just from passive intellectual curiosity on minor issues, perhaps also testing whether the burnout persists.

 

Right now, the decoder is objectively (yes, objectively) measured to be dead flat using spectral density methods (not sine waves, but actual program material.)   The descrambler is functional, but matching signals and differentiating some of the subtle differences has been frustrating and impossible.   A few reviewers have seen the result of the frustration: recent, relatively quick progress slowed down to random variations (AGAIN, like sometimes/often before.)

 

The project isn't 'stopped' per se, but necessarily slowed down, allowing MUCH MUCH less personal internal pressure.    Also, the recent 'explosion' of work after a necessary week off has compounded the frustration and damage.

 

I'll be around, but I am forcing the work to be on hold for 2wks, unless I recover before then.   GOTTA GIVE THOSE BRAIN CELLS A REST!!!

 

THANKS -- thanks for your tolerance.   I sure wish I could have found someone with strong technical capability to talk with in detailed technical intimacy, but I truly understand that this has been MY project and MY 'cross to bear'.

 

There are few around willing to spend time on this important matter, or it would have already been done.   So far, the opportunity cost, given recent earning power, is well into 7 figures.   Maybe it would have been nicer to drive a new Tesla (or be driving the 3rd one so far?)   Maybe to be using a multi-processor 96-core CPU?   I don't think that I'd otherwise be as generally happy as when doing what I am doing now (I just need a rest.)  20yrs ago, living an almost jet-set life, I learned the hard way that materialism didn't lead to happiness in my case.

 

WILL BE BACK ON THE PROJECT in DAYS or WEEKS, NOT MONTHS!!!

I might even contact the community with the total solution next week, but I cannot allow self-generated pressure to push me for a while.

 

John

 

Link to comment

Important note about FA recordings, and those able to be fully decoded...

 

 

HINT:   FA recordings that can be fully decoded need a certain characteristic, easily readable by SoX...

If a recording with otherwise FA characteristics has a noticeable DC bias, it is most likely fully decodable.

 

If an FA recording is mp3 encoded, on LP/vinyl, or has passed through some other mechanism that removes the DC bias/accuracy below 20Hz, then it is not fully decodable, though the decoder can still do some processing on the recording.    There are attributes of the LF content < 20Hz that make a recording easier to decode.   If there are no frequencies below 20Hz, then the decoding loses a noticeable amount of the possible dynamics -- the result is somewhat similar to the FA RAW, but still has some positive attributes.
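SoX's `stats` effect (`sox in.wav -n stats`) does report a 'DC offset' figure, which is presumably the easily readable characteristic meant above.   A sketch of the same screen in Python (the threshold value and function names are my assumptions):

```python
def dc_offset(samples):
    """Mean sample value -- the DC bias that, per the post, hints an
    FA recording still carries its sub-20Hz residue and may be fully
    decodable.  (SoX's `stats` effect reports a comparable figure.)"""
    return sum(samples) / len(samples)

def likely_fully_decodable(samples, threshold=1e-4):
    """Hypothetical screen: a noticeable DC bias suggests the LF/'pilot'
    residue survived and full decoding is likely possible."""
    return abs(dc_offset(samples)) > threshold
```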

 

This low frequency content is the manifestation of the 'pilot'.

 

NOTE:  just because material has LF content, it doesn't mean that the material is FA.   However, when there is a loss of LF content, then the FA decoding is made more inaccurate.

 

John

 

Link to comment
  • 2 weeks later...

Real progress is being made, and being done slowly and methodically...

 

Summary of the below:   a new method has been found for rebuilding the FA consumer audio signal.   Work is proceeding carefully.   Time is now being invested in decoding the whole set of demos, plus some more.   When the results are ready, some demos will be presented.   Soon thereafter, there will be a release.

 

When?   Hoping for a release with a Windows/Linux decoder just before 2024.   Hoping for demos before then.

 

----------------------------------------------

Technical blather:

 

 

Over the last several years, there have been several rather extreme attempts to decode the FA (consumer audio) signal using the audio signal alone.   Without understanding a certain subtle attribute of an FA signal, any attempt at accurately decoding FA material would be impossible.   Some of our approximations weren't terrible, but they weren't good enough.   Correctly utilizing the 'special something' in the FA signal is likely the key to the solution....

 

More background: a few weeks ago, I found a left-over signal from the scrambling mechanism, and I call that leftover 'the pilot'.   This 'pilot' is buried in the recording signal.   The 'pilot' is probably left over unintentionally, but it contains rather subtle/helpful hints that describe certain kinds of processing options needed by the descrambler.    Think of the 'pilot' as being left-over sidebands from the 'scrambling' (FA creation) process.  Intelligently using this 'pilot' while descrambling produces rather improved results.   Without the pilot, the 'descrambling' works, but it cannot recover the dynamics as completely and has no chance of being accurate.   Neither vinyl nor mp3 nor any other lossy-compressed version of an FA recording (e.g. an mp3 rip of a CD) can be fully decoded.   That means practically nothing ripped from Youtube can be fully decoded either, even though most of the stuff on Youtube has been FA processed.   ONLY direct digital copies of FA recordings will work.   This is one case where vinyl always loses -- FA signals on vinyl have the pilot too diminished to be fully usable.   (Actually, there would be some left-over pilot on vinyl, but not accurate enough to be helpful.)

 

*   Previous versions of the descrambler utilized heroic, probably incorrect, techniques to do some of the same things that the pilot helps with.   The pilot takes over for a bunch of signal processing, making the results easier to obtain and of higher quality.   There are still some 'heroic' techniques being used, but they are driven by the pilot signal.

 

All along, the decoder could produce plausible results on the direct-to-disk VS CD Sheffield Labs recording, and this new version also produces plausible results.   For certain recordings, plausible results always seemed to be attainable without using the 'pilot'.    For many other recordings, using the 'pilot' makes a major improvement.

 

The progress has been very slow, with the speed/intensity of the tedious work and redesign intentionally restricted.   This is a serious reimagining of some of the processing.   It is also now very clear that as the FA signal is descrambled into actual clean audio, it is 'conceptually' chopped up and restructured back into the original shape.    This 'chopping up' and 'restructuring' is intense, and the previous attempts were not sensitive to the required precision in the correct way.   Errors in the 'reassembly' do produce 'hash' that is audible in complex mixes of music sources.

 

Also, as part of the slow, tedious work, the correct 'component values' have been carefully revisited, and improved specific values have been found (determining the 'correct' vs. 'theoretical' EQ/filter/gain values.)    As an example, instead of 221.3xxxxHz, the 'base' frequency for the system is '221Hz'.   If you look up the old 1% EIA component values, you'll find that 221 is standard.   However, all other frequencies and even some Q values are based upon the 2.213xxxx theoretical value; THEN, AFTER THE SCALING, those values are 'rounded' to the EIA component values.    ANY attempt to use the theoretical values alone has resulted in 'hash' in the sound.   That is, much more distortion is produced unless these 'ancient HW' values are used.   The processing DOES restructure the signal and definitely requires great precision.
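The E96 ('1%') component series mentioned here follows a published rule -- round 10^(i/96) to three significant figures for i = 0..95 -- and 221 is indeed a standard value in it.   A sketch of snapping a computed frequency to the nearest such value (the helper name is mine, not the decoder's):

```python
import math

# The 96 standard "1%" (E96) mantissa values, generated from the
# round-to-3-significant-figures rule, which reproduces the published
# E96 table.  2.21 appears here, matching the post's 221Hz example.
E96 = [round(10 ** (i / 96), 2) for i in range(96)]

def nearest_e96(value):
    """Snap a positive value (e.g. a scaled filter frequency in Hz)
    to the nearest E96 '1% component' value within its decade."""
    decade = 10 ** math.floor(math.log10(value))
    mantissa = value / decade
    # Include 10.0 so mantissas near the top of the decade can wrap up.
    best = min(E96 + [10.0], key=lambda m: abs(m - mantissa))
    return best * decade
```

This reproduces the example in the text: the theoretical 221.3xxxxHz snaps to 221Hz.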

 

So, over the last week or so, the descrambler design has been tediously reworked to take advantage of the 'pilot signal', re-modulating the pilot along with the FA audio itself, thereby producing a significantly improved decoded recording.   Also, the improvement would have been marginal without using the correct 'component values'.   I could go into comments about other original design choices that I do not understand, but the bottom line is that a lot of things have been learned.

 

The sound quality is beyond interesting, and I am again HOPING, probably for the 5th time, to produce a usable decoder before the beginning of the year.

 

Some of the previous decoded results were also rather interesting, and the decoder did work FAIRLY well, but it couldn't produce anywhere near as good results as the new version.

 

The proof will be when listeners judge the improved experience for themselves.

 

Sincerely, and hopefully,

John

 

Link to comment

For the curious, enjoy a Christmas demo of V20CR2982, almost the IT version.

 

https://www.dropbox.com/scl/fo/jiynuopoqwlwwtobyrewn/h?rlkey=szmoc8bxef2nlst0mjhb6hon2&dl=0

 

Several serious improvements, if not outright bugfixes, were added by using the 'pilot' mechanism (left-over sidebands) and by grouping the gain control regions instead of doing the processing in the short intervals between frequency steps.   Both techniques lend a more 'real' sound, and push the decoded result beyond both the output of previous decoders and the FA itself.   WITHOUT THE SIDEBANDS, FA RECORDINGS CANNOT BE FULLY DECODED.    FA releases on vinyl will have troubles, and mp3 copies are hopeless for accurate decoding.   It needs to be a perfect digital copy of the encoded FA.   (How the 'sidebands' are derived will be privately described on request -- I won't hold anything back, but I want to limit distribution of the description, since the technique could be used to make FA recordings undecodable.)

 

The bad news (however good) is that during the '2982' release process, I found some minor changes to the 'grouping' that further corrected the processing.   In '2982' there was a 'hack' added to the 7.5kHz band to help with sibilance, but this new grouping seems to eliminate the need for the 7.5kHz hack (basically, partially cancelling the gain control for 7.5kHz.)   In a given sequence of test releases, there is seldom a change in static freq response, only changes to the dynamics.   Therefore, 2982, 2985 or 2986 will have the same 'static' freq response even though they might sound slightly (or profoundly) different.

 

So, I am restarting the release process to produce a new release, V20CR2985, and have stopped decoding 1/2 way through the V20CR2982 Carpenters albums.   The ABBA snippets and the 99 recordings snippets ARE available at the location above.

 

This is NOT a real release, because I have promised that the next actual release will have a Windows decoder available.   These are only progress report demos.

 

John

 

PS:  I will probably play with the new V20CR2985 today, doing very careful comparisons with '2982' along with the FA.   There JUST MIGHT be a '2986' uploaded a few days after Christmas instead of '2985'.   I am testing '2985' just this minute, and so far I am very happy.   I hope to find the last bugs soon.

 

 

Link to comment

The minor problems with V20CR2982 were mostly that some of the 'multi-step' dynamics settings were not quite right.   Also, the HF coming out of the descrambler was slightly confusing the DA layers, causing them to generate spurious HF components.

 

Just going from 2982 to 2986 took perhaps 20 test iterations to figure out the best settings for the best results.

 

The decoded results now sound surprisingly similar to the FA version, but cleaner and with slightly better dynamics.   In some cases it is not a big difference; in others the result is profoundly better.

 

This corrected version is MORE than worth it, including cleaning up the chorus in ABBA's 'Livingstone' and ONJ's 'Country Roads' on the 48-singles CDs.   The mush is cleaned up very substantially.   Also, ABBA Gold is probably perfect, along with Dreamworld being better than I ever dreamed possible.

 

The change to the general descrambler config made about 6mos to 1yr ago did come close, but 'close' can sound terrible.   It took a very long time to get to this point.

 

I'd love for some previous skeptics to give this a listen!!!   It works...

My only admission is that this took too long, and I was way too rigid in some ways.   I had to be rigid, though, because there were so many variables that the project was near chaos all of the time.   It was only dogged, narrow-minded rigidity and intolerance for distraction that allowed progress to be made.   Perhaps the extreme rigidity could have been avoided with someone working right next to me.   It would have been wonderful to have bounced ideas around -- and there were kind individuals who tried to work with me.   Frustratingly, the propagation delay of ideas over the net was JUST TOO SLOW!!!

 

This really is a victory lap, and you'll hear the results hopefully before the beginning of the year, also hopefully with a Windows version of the decoder...

(Also, to protect the function of the decoder, the signal attribute that I call 'the pilot' is going to be kept quiet for a while.   As always, I will explain in private.)

 

Victory and relief...

MORE TO COME!!!

 

John

 

Link to comment

Even though the recent demo release was VERY good, I am finalizing some nits (e.g. LF < 100Hz response and dynamics, etc.)   I am working on all of the end-case stuff, making sure that there is either a good 'perceptual' match or that it is technically as good as I can get it.   This is incredibly tedious, and might take several iterations.   Right now, zeroing in on the target, smaller and smaller errors become unacceptable.

 

Yesterday I embarked on V20CR2985, then V20CR2986, then a full run through the ABBA recordings with V20CR2987, and now I am into the 99 (97) demo recordings.   I am not 100% sure how correct it is (YET).   I thought that this would be a 'slam dunk', but I am no longer sure.   After listening to V20CR2987 in greater detail, I am pretty sure that the checks will take more time.   V20CR2987 is sounding pretty good, but I found some deviations from the FA source material that I want to make sure are correct.   These deviations sound better, and the decoded version has fewer defects; I just want and need to be sure.

 

All I can say is WOW.

John

 

Link to comment

During the careful reviews, listening to the recordings -- especially plucked strings...

 

Did you ever notice that plucked strings on commercial recordings are missing the 'pluck' sound of the initial impulse from the string, like a guitar string?

I remember, when listening to my family's bluegrass playing in the kitchen, that the sound of a plucked string is quite intense (e.g. my Great Aunt Helen Osborne -- stage name, Katy Hill).   The sound of actual instruments can be almost oppressive, but beautiful.   This impulse-like sound is totally missing on normal commercial recordings.  (BTW, I can tell you a real bit of bluegrass history, parts of the story about 'Katy Hill' in Charlie Monroe's band.   There isn't a lot known, but she and her banjo have become a bit of a bluegrass-history enigma.  Even members of our family, including her daughter, only have bits and pieces of the story.  If 'Katy Hill' hadn't retired into family life -- in a different world -- she could have been the artist to play the 'Beverly Hillbillies' theme.   Great Aunt Helen was THAT good.)

 

Anyway, back to today's reality...   Of course, the output of the decoder tries hard, but doesn't produce those beautifully intense dynamics of a real band.   The sad thing is that normal commercial recordings are woefully poor, definitely worse in this area.    Bless the wonderful refuge of boutique recordings and those who are fortunate enough to be able to participate in the experience, hopefully with a capable enough system!!!

 

Very IMPORTANT to the project...   The descrambler is hogging a thread, becoming a limiting factor on decoding speed.   Since I want to spend more time verifying the output of the decoder, it seems worthwhile to split the descrambler into two pieces.   The descrambler naturally has two phases, each taking the same amount of CPU resources.   Splitting it is really only a few hours of work, and the faster turnaround will also give me more encouragement to keep testing the decoder output.   At this final phase of new development, 99.9% of the work on the decoder is directly descrambler related.   The extra day or two of delay will be used to advantage.
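Since the actual descrambler code isn't public, here is only a generic sketch of the idea described above, under the stated assumption of two equal-cost phases: a serial process split into a two-stage thread pipeline, so that phase two of block N overlaps in time with phase one of block N+1.   The `phase_a`/`phase_b` functions are hypothetical placeholders, not the real descrambler phases.

```python
import threading
import queue

def phase_a(block):
    # Placeholder for the first descrambler phase (hypothetical).
    return [x * 2 for x in block]

def phase_b(block):
    # Placeholder for the second descrambler phase (hypothetical).
    return [x + 1 for x in block]

def pipeline(blocks):
    """Run phase_a and phase_b as a two-stage thread pipeline.

    While phase_b works on block N, phase_a can already start on
    block N+1, so two equal-cost phases overlap and the wall-clock
    time per block is roughly halved.
    """
    q = queue.Queue(maxsize=2)   # small buffer between the two stages
    results = []

    def stage_a():
        for b in blocks:
            q.put(phase_a(b))
        q.put(None)              # sentinel: no more blocks coming

    def stage_b():
        while True:
            b = q.get()
            if b is None:
                break
            results.append(phase_b(b))

    ta = threading.Thread(target=stage_a)
    tb = threading.Thread(target=stage_b)
    ta.start(); tb.start()
    ta.join(); tb.join()
    return results
```

With a single consumer thread, the output order matches the input order; the bounded queue keeps the faster stage from running arbitrarily far ahead.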

 

As this is being written, I hear no major flaws so far, except perhaps the highs like cymbals/high hats might be very slightly repressed sounding.   Strings/plucked strings probably sound as good as they can be given the recordings being used for testing.

 

John

 

Link to comment
Quote

As this is being written, I hear no major flaws so far, except perhaps the highs like cymbals/high hats might be very slightly repressed sounding.   Strings/plucked strings probably sound as good as they can be given the recordings being used for testing.

Hello John,

Hope you are having a good day today and not working on the decoder. I'm happy to note your recent comments on the various issues in recordings that cannot be fixed in a systematic way. I have always felt that the recording, mixing, and mastering chains involve a lot of sub-optimal components and practices that are widely variable and idiosyncratic across time and at any given time. I look forward to the next release and am eager to start running my test cases which are, deliberately, very different stylistically than yours.

 

Skip

Link to comment

Status for demo, then quickly following release...   That is, hopefully 'quickly'...

 

Opened up a 'can of worms', but it just so happened that these were 'good worms'.   Bottom line: with this design revelation, a major problem was corrected.   This problem had always been present, but only seriously manifested on a few recordings.   All bugs are important, especially because we just might not catch all of them.   The fewer left in the decoder, the better the chance of really good decoding results.

 

The last 2wks have been spent changing the descrambler settings from behavior determined on an individual-band basis to a grouping.   I never found a good grouping until now.   A grouping is more likely to sound better than individual bands, because bands that are not in sync would encourage distortions.   A good, apparently accurate-sounding grouping has been found.   It might seem trivial to have found a reasonable grouping, but the design is fairly complex, and not organized solely on a band-by-band or grouping basis.

 

I know that this blather seems nebulous, but a lot of little nits have been resolved -- I had thought that they would never be resolved.   Things keep getting better and better.

 

Interestingly (to me), as I had mentioned before, there is a magic number, '2.213xxx', which is the basis for all frequencies and delta frequencies.   In the last few hours, I found out that the 2.213xxx number is also the basis of at least one Q value.   To give an idea of how subtle and contorted the design sometimes seems, this magic Q value is '2.213xxx' to the 2/5ths power.   But instead of full accuracy, it is truncated to the 1/10000s -- and that truncated result is the magic Q value.   So, certain of the Q values AND frequencies AND delta frequencies are all based upon this 2.213xxx magic number.   (Actually, the exact value for the magic number is 2.21309470960563772233, which seems super crazy, but is REALLY true: (10^69)^(1/200).)   Those who have a background in EE, and understand the theoretical basis of the EIA component values, might notice that this is one of the theoretical, infinite-accuracy component values.   The entry in the ACTUAL EIA list is a little different from this theoretical value.
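The arithmetic above is easy to check: (10^69)^(1/200) is just 10^0.345, and the 2/5ths power then truncated to the 1/10000s comes out to 1.374.   A minimal sketch, with Python used purely as a calculator (the variable names are mine, not from any decoder source):

```python
import math

# The 'magic number' as stated: (10^69)^(1/200) == 10^(69/200) == 10^0.345
magic = 10 ** (69 / 200)

# One Q value is claimed to be magic^(2/5), truncated to the 1/10000s.
q_exact = magic ** (2 / 5)                      # == 10^(0.345 * 0.4) == 10^0.138
q_magic = math.floor(q_exact * 10000) / 10000   # truncate, don't round

print(magic)    # ≈ 2.2130947096056377 (matches the quoted digits to float precision)
print(q_magic)  # 1.374
```

Note that truncation and rounding differ here only past the fourth decimal place; the author specifies truncation, so `math.floor` is used rather than `round`.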

 

All of this 'technical stuff' is really off topic; the point is that there should be a demo before the weekend, and, if there is no trouble, a release will come soon after.   However, I'll be tied up for the entire week of the 1st.   A test version of the Windows decoder might be available this weekend -- unlikely, though, and the full demos/decoder release will not likely happen before 6th Jan.   Again, I am hoping that there will be a set of demos, and I will throw in a Linux decoder, at the end of the week.   I just cannot do the Windows version without a day or so of work -- I am just not all that good with Windows.

 

Good news:  the decoder has produced 'release quality' results for the last few days.   The corrections are now mostly minor, but really need to be done.   (At least, fixing low hanging fruit.)   Geesh, a LOT more complex & subtle than I had ever imagined!!!

 

John

 

 

 

Link to comment

Change to note below:

I might delay by a day or so (no more than two) to offer it to the reviewers.   For the reviewers who read this forum subject: if I skipped you in the PM messages, please don't be offended.   Sometimes reviewers pick up after a few months, and I don't want to bother those who are disinterested...

 

So -- when the pre-test demos are ready, I'll also announce here, so that if a reviewer has regained interest, I am always happy to correspond publicly or privately!!!

 

The slight delay is regrettable, and still might not happen (that is, I might end up 100% confident instead of 95%), but I just wanted to forewarn that so many changes have happened that I might get 'cold feet'...

 

------------------------------------

 

 

We are very close to a good version that is likely the most 'accurate' so far.

 

The reason for 'accurate' being in quotes is that there are no fully objective measurement methods available, so I do an exhaustive, corrective search for bad subjective 'tells' in the sound.   There is a very limited objective measurement for a limited set of parameters.   Even though the objective goals aren't sufficient, no demos and no decoders are released without meeting that limited set of objective goals.

 

On most days spent working on the project, I likely do 100 or more comparisons, sometimes as many as 1000 (really).   The number of saved/checkpointed test versions per day is on the order of 10, perhaps as many as 50.   All source code edits are kept, distinguishing between automatic saves and save-exit sequences.   The number of source code edits is probably, on average, 5-10 for every checkpointed test version of the decoder.   Even on days that I take a 'vacation', at least a few tests are done, and perhaps a single new checkpointed version might be created.

 

Doing the release is a lot of work, a lot more than just doing a final compile, save, and creation of the package.   It is likely that a *demo* release will be possible today, but it would be better to say tomorrow (Friday.)

 

Anyway, the only reason for describing the amount of effort is that I want to make sure that everyone knows that progress is being made, even when I am quiet.

 

The development effort has been that of exhaustive search by someone who has lots of experience and knows the theory and most of the math.   The problem has been that there is no known 'goal', and the main developer's (my) hearing is really, really strange.   Oddly, and you won't believe this, my hearing is better than it has been in the last several years.   This amazingly better hearing has allowed more 'linear' rather than 'random' progress.   If I had known that the FA scheme was so strange, I would never have tried from the start.

 

Major announcement:   the design details now appear to be fully known.   The descrambler metaprogramming scheme is now understood and actually makes sense.

 

Secondary announcement:  since the descrambler is now working correctly, it has been possible to allow the DA layers to run 'full blast' with maximum expansion.  The output of the descrambler now triggers more consistent and likely more correct expansion from the DA layers.

 

Getting close!!!

John

 

Link to comment

Made another observation about some of the previous tests/demos using the decoder...

 

A lot more recordings have the 'pilot' stripped off than I had realized.   Given that I had noticed the 'pilot' only a few weeks ago, I didn't know that it was something that could be missing.

 

For example, I have a premium copy of the 'Fleetwood Mac'/'Fleetwood Mac' album (actually, a digital 96k copy.)   I was listening very carefully, and noticed that the decoder wasn't producing the results that I had expected.   Hmmm...   What is wrong?   Right?    On the other hand, my 'vanilla' copy of Rumours sounded good...   Last night, I noticed the same thing on a 'premium' Simon & Garfunkel album, still sounded 'compressed' after decoding.   The 'olden time' original release saw major improvement upon decoding.

 

After a few minutes of analysis, I recognized that every last bit of the signal attribute that I call the 'pilot' had been expunged from the otherwise obviously FA recording.   'Rumours' has a very robust 'pilot', while 'Fleetwood Mac' had it all removed.


I am doing a demos-test run right now, not intended to be uploaded.   However, before I do the actual public run, I WILL check every demo item to make sure that the 'pilot' still exists.   Decoding CAN happen without a pilot, but it isn't really 'decoding', just 'enhancement'.

 

A little bit of a slowdown happened over the last day: I was chasing a rabbit down a rabbit hole for 24Hrs.   The 'rabbit hole' was an attempted improvement in efficiency, and I was lured in because quality seemed to be maintained, or even a little better.   The decoder requires LOTS of EQs that form the signal so that the discriminators can do the gain control, and the 'rabbit hole' manifested as an oversimplification of those EQs.   The slightest variance of phase can be totally damaging, and I did a very effective job of disabling the descrambler during the experiment.   Of course, expectation bias was the worst culprit.   The broken descrambler would have been a major embarrassment.

 

It took several hours to reconstitute the descrambler so that it again produced good results.   A few minor 'improvements' found during the almost-wasted time were included in the new descrambler.

 

As soon as the demos (early demos) are available, I'll tell you ASAP.   There has effectively been another delay of a day or so.   So frustrating...

 

John

 

Link to comment

Slow progress on final adjustments...

As many of you might know, when working with a fast expander, the result can sometimes sound 'thinner'.   That is the action of a strong/fast expander.   Finding correctness is very tedious, requiring lots of careful A/B comparisons.   It is especially challenging because of varying hearing and perception, but progress is being made.
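The decoder's actual DA expansion layers aren't public, so the following is only a generic, single-band downward-expander sketch (all parameter values are illustrative, not taken from the decoder) showing the mechanism behind the 'thin' sound: levels below the threshold are pushed further down, and the shorter the time constants, the more aggressively low-level detail between transients gets attenuated.

```python
import math

def expander(samples, rate=48000, threshold=0.1, ratio=2.0,
             attack_ms=1.0, release_ms=50.0):
    """One-band downward expander sketch (illustrative only).

    Levels below `threshold` are pushed further down by `ratio`;
    short time constants make the expander 'fast', which is what
    can thin out low-level detail between transients.
    """
    atk = math.exp(-1.0 / (rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        mag = abs(x)
        # Envelope follower: fast coefficient when the level rises,
        # slow coefficient when it falls.
        coeff = atk if mag > env else rel
        env = coeff * env + (1.0 - coeff) * mag
        if 0.0 < env < threshold:
            # Below threshold: a 2:1 ratio maps every dB below the
            # threshold to 2dB below it, i.e. gain < 1.
            gain = (env / threshold) ** (ratio - 1.0)
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

For a steady tone at 0.01 (20dB below the 0.1 threshold), a 2:1 ratio settles at a gain of about 0.1, while a steady tone above the threshold passes through at unity gain.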

 

Frustratingly, an apparently 'correct' setting just might require a bit more CPU.   This is because there are actually TWO descramblers, one for L/R and the other for M/S (each cancelling some of the negative effects of the other.)   In some cases, the setting elements sit AROUND both descramblers; in other cases they sit INSIDE each one.   When the setting elements are inside, the descrambling effect is stronger than when they are outside.   The challenge is to figure out which is correct -- which removes the most 'scrambling distortion' -- by using subjective methods.

 

The work is tedious, and when positive progress has stopped, showing that the best results are attained, this is when the demo release will be started.

There are also one or two basic internal elements that need to be chosen (related to both the pilot extraction and whether or not the 221.3Hz/7 frequency should be used.)   All of this is very esoteric to those not directly involved in the final settings, but I am trying to describe the challenge of the final settings process.

 

All of these settings (not many) would be much easier if there were only one of them, if the results didn't interact, and if the most important step weren't 'subjective evaluation'.   This thing has blown up from something 'simple' to a rather complex mass of 'stuff'.

 

At some point, very soon, assuming no new 'rabbit holes', I'll feel comfortable to run the final demos.   There have been about 4 demo runs in the last 4 days, each one stopped when I felt that the decoder/descrambler could produce even better results.   Again, when progress stops, THAT is when I'll feel like it is right to run the demos.

 

My time estimates have been erroneous, partially because of enthusiasm, but perhaps more strongly because of misunderstanding the complexities of this part of the project.   Both are my flaws, but I am working to compensate.   ANYONE who wishes a short run of demos to prove status, just let me know.   There is always a 'functioning' decoder available -- just not quite up to my standards yet.   MY standards have been informed by others who have had high standards all along; this has forced me to improve my own.   Once these improved standards are adequately addressed, then, as I mentioned above a few times, there will be a demo release.

 

I truly 'pray' for this weekend, but might have to further delay.

 

The current descrambler processing isn't quite balanced from LF/MF to HF.   The objective freq response is 'flat' (like +-0.1dB from 50Hz to 15kHz), rising slightly below 50Hz and with a tendency to rise a little around 15kHz.   I am not worried about the static response; I am more focused on the dynamics processing.

 

Truly working intensively and as intelligently as possible!!!

John

 

Link to comment

I owe you all an explanation for both a delay and 'holding back'.

Least profound: there is a bit of a medical delay -- not long, but it will impede some of the effort; it really shouldn't have been a problem.   The demo release should have been ready at least 1wk ago.

 

There are more impactful technical and subjective issues.   Expectation bias has really been a challenge.   The static freq response is now easily measured, and there is even an approximate way of determining whether material is fully decodable, but the A/B comparisons are fraught with problems of expectation bias.   Full randomizing on individual recordings or small groups doesn't work, because I am so very good at hearing 'tells'.   That is, I can hear differences that aren't really related to quality, therefore giving me hints about which is decoded and which is FA.

 

Therefore, since there is definitely some credibility at stake, I am trying my heart out to avoid mistakes caused by expectation bias and similar bias issues.   It is taking a lot of time (many hours) to 'reset' between creating a candidate release and verifying the 'release'.   Since I don't have a small, patient group of people here in my 'lab' to do quick verifications, I am trying to stay as unbiased as possible during the A/B comparisons.   It is probably unexpected, but I am incredibly skeptical about any given test version.   I TRY to find problems, but too often get caught up in 'micro-improvements'.   'Micro-improvements' are the source of certain 'rabbit hole' situations.

 

Skepticism and fear of expectation-type biases are the primary reasons why I might have an apparently good version that COULD be demoed, but I haven't convinced myself yet.

 

I have been in 'subjective evaluation h*ll' for the last several months.   The descrambler WORKS, and can do profound amounts of dynamics processing, but 'correctness' has been difficult to determine.   ('Profound' in this sense means several dB, maybe as much as 6dB -- not dozens of dB like with normal dynamics processing.)   The settings are *very* unintuitive, but some 'meta-meta' schemes are used, simplifying parts of the meta programming.

 

The primary function of the dispersive scheme is the cancellation of certain kinds of distortion, which normal expanders cannot 'reach'.   It has been challenging to avoid adding unintended kinds of distortion.   There is a lot happening in the descrambler, and it is more complex than my ability to mathematically model -- therefore it has required infinite patience and a lot of work to finish up.

 

Some of the worst 'mush' created by the FA encoding can be mitigated and made essentially 'clean', but there are many possible flaws.   It is easy to erroneously exchange one kind of distortion for another.   It has taken a LOT of patience to avoid exchanging the distortions.

 

MAJOR progress has been made, even in the last day or so.   A descrambler change/improvement made today will take at least 2-3 days to qualify as an actual correction.   This is NOT a simple phono preamp with just a few variables; there are 100's of variables with effectively 1000's of electronic components.   My brain doesn't have enough wiring to fully understand the descrambler.   However, with all of these caveats -- the descrambler seems to be doing a LOT of good.

 

You will have the code (source/binary/demos) as soon as it is demonstrably very close to correct.   Requests for short demos or source code are welcome and will be satisfied.

 

John

 

 

Link to comment
