
'FeralA' decoder -- free-to-use



Something about pushing 'ABBA' through the decoder...

 

When listening to the output of decoding the ABBA albums, the difference between the various 'normal' undecoded versions (like straight conventional Polar releases) and the decoded output is worrisome. There *has* to be something wrong. In desperation, I pulled out the 'other', over-processed ABBA albums, TCSR (The Complete Studio Recordings) in particular.

 

The comparison with TCSR is both surprising and reassuring. Noting that TCSR is heavily over-compressed, and that the decoded output sounds less compressed than the normal distributions, there is quite a bit of similarity in the sound... The output of the decoder has similar tonality to the TCSR recordings. That is, the output of the decoder has the 'brightness' of TCSR without the hyper-compression. There is also one case, in particular, where the garble is much more intrusive on the TCSR recordings than in the FA decoder results. On 'What About Livingstone', there has always been quite a challenge in mitigating the 'garble' in the vocal chorus. The best that I can do is to get rid of the strong low-frequency intermod products, usually the most disruptive in the sound. The sound of the chorus on the TCSR recordings is truly unlistenable.

 

A conclusion is that the lousy-sounding TCSR recordings seem to come from true master tapes, then heavily compressed. The output of the decoder SEEMS to be very similar to the source material used on TCSR, without the heavy compression. TCSR gains at least 4.5 to 6dB of loudness because of the compression...

 

The other conclusion is that the decoder output has the same tonality, which is more evidence that the project is close to meeting its decoding quality goal. In fact, it is very close, modulo some very minor output EQ.

 

999B should be available in approx 8-9 hours; the demos have just been started.

 

John

 

Here is an explanatory note sent to one of the regular correspondents/reviewers/friends, recognizing that it might be best to publicly explain the reason for the potential error:

 

The discriminators need a series of EQs with specific Q values. Normal EQ sometimes requires a mix of Q=1.414 and Q=0.577, and that is what I used in the discriminators. However, historically, there has also been the combination of Q=2 and Q=0.577. When I was doing A/B checks a few days ago, it seemed like the sound was very similar, and I decided to stay with the conservative, more often used Q=1.414. More and more, it seems like that was a poor decision, and I should have used the Q=2 choice instead. The Q=2 choice seems to correct the problem, assuming that I am hearing the actual problem.
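
For anyone who wants to hear what a Q swap like this does, here is a minimal sketch of a standard RBJ-cookbook peaking-EQ biquad with a selectable Q. The centre frequency and gain below are made-up placeholders, not values from the decoder; only the two candidate Q values come from the note above.

```python
# Sketch only: RBJ "Audio EQ Cookbook" peaking biquad with selectable Q.
# f0 and gain_db are illustrative placeholders, NOT decoder values.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """Return (b, a) coefficients for one peaking-EQ section."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
x = np.random.randn(fs)              # one second of test noise
for q in (1.414, 2.0):               # the two candidate Q values discussed
    b, a = peaking_biquad(fs, f0=3000.0, gain_db=6.0, q=q)
    y = lfilter(b, a, x)             # higher Q concentrates the boost
```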

 

Some of these choices are based on intuition because I simply cannot hear reliably and get confused.  My confusion creates bugs like this.

 

I suspect that this is the last bug in the decoder (humor intended!!!)

 

John (again)

 


Good news and bad news...

Good news is that a reviewer (one of our local AS friends) didn't like a certain aspect of the sound on an ABBA recording. He actually pointed out a problem that I had thought manifested as another, seemingly unrelated minor problem. It so happens that I made yet another mistake in the L/R <=> M/S conversion. It is very tricky to find a good 'tell', and this reviewer did find one.

 

The bad news is that we'll need another 'RELTRY'. I was going to do a fully private release for a local AS friend with another minor improvement, but instead it looks like there will be another public 'RELTRY' with this important, but subtle, correction.

 

The bug is understood, but the full & accurate solution might take a few more hours. This is one of those 'knife edge' effects, but it has a wide domain of potentially correct values. On our side, I do have a list of plausible values, but it is easy to make a judgement mistake when doing the A/B tests. There is already a good, prospective answer to the L/R <=> M/S conversion problem. It should take another listening session to verify the choice in the A/B comparison.
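
For readers unfamiliar with the conversion being discussed: L/R <=> M/S is itself only a sum/difference, and the 'knife edge' lives in the scaling and ordering conventions. A minimal sketch using one common 0.5 scaling (the decoder's actual convention is not stated here):

```python
# Sketch only: one common L/R <=> M/S convention. Getting the scaling or
# the conversion order wrong still yields plausible-sounding audio, which
# is why this class of bug is hard to hear.
import numpy as np

def lr_to_ms(left, right):
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_to_lr(mid, side):
    return mid + side, mid - side    # exact inverse of lr_to_ms above

left, right = np.random.randn(1000), np.random.randn(1000)
m, s = lr_to_ms(left, right)
l2, r2 = ms_to_lr(m, s)
assert np.allclose(left, l2) and np.allclose(right, r2)   # round-trips
```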

 

The 'RELEASE' is planned for perhaps one week, or just a little less, after the final 'RELTRY'. My hope was that the recent 'RELTRY8' would be final, so the release would have happened this upcoming weekend. Now, instead of the actual release being this weekend, it will likely be next weekend. The new 'RELTRY' will hopefully be available well before this weekend. There might even be time for a 2nd 'release try' before next weekend, but hopefully it won't be needed.

 

Building the new Windows development environment on the gifted laptop will take quite a bit of time tomorrow, but there might also be time to finalize the new 'RELTRY'. No matter what, once the choice is made, the demos will be the only delay before uploading everything -- hopefully, it will be possible to upload the Windows binary also.

 

John

 

On 10/15/2023 at 4:36 AM, Currawong said:

 

I'm probably going to explain this poorly, but more or less, it un-processes recordings, removing things caused by either the electronics (e.g. hiss) or the engineer. Of the tracks I've heard, with some music you get a bit more clarity where everything is, and with other music, it's like a massive veil has been lifted, and it sounds more like a modern, minimally-processed recording, where you can hear a great deal of detail of what the mics actually picked up.

 

On 10/15/2023 at 9:01 AM, John Dyson said:

Yes, your explanation is good & helpful 😇, infinitely more readable than what I will write below...  I am hoping to add more details, more than likely increasing confusion!!!

 

There are a few kinds of processing sometimes done to recordings; some types of processing are done more often than others.

Of the various kinds of 'mastering' done to recordings, there is one kind of processing that appeared in the middle 1980s, simultaneously with, or just before, the release of CDs. This processing, done very often to digital recordings, is like an 'auto mastering' with other dubious 'benefits', mostly to the IP (intellectual property) owners. Nowadays, non-boutique vinyl is sometimes made from the same or similar 'auto mastered' recordings. The 'FA decoder' project is intended to at least mostly undo the 'auto mastering' processing. (The name 'FA' is poorly chosen; suggestions are welcome!!!)

 

*  The 'auto mastering' (FA) does cause a 'veil' that I personally find to be intrusive, but a lot of people (by far, most) don't seem to be bothered by it. The amount of 'distortion' is far, far greater than the distortion in today's well-designed analog equipment. The 'auto-mastering' (FA) kind of distortion is apparently subjectively subtle for most people.

 

(The rest of this message is meandering, but might give a little more background; the details aren't absolutely necessary to generally understand what the project is about):

 

The processing does create a veil, and it is algorithmic, so it is somewhat 'reversible' or 'undoable'. The 'auto mastering' processing is done in analog, which means that the processed recordings, even digital productions, are made with the special analog processor. The processor is a very precise, complex device, and it requires a lot of 'technology' to undo its effects. In the analog world, a 'decoding' or 'undoing' would almost require using exactly/physically the same components -- but using exactly the same components would not produce good results. There are aspects of the 'special automagic mastering' that impart a semi-permanent distortion, totally impractical to clean up in analog electronics. The availability of very powerful CPUs, starting in about the 2011-2015 timeframe, allows for realtime 'de-mastering' of the recordings, with some techniques that can undo MUCH/nearly ALL of the semi-permanent damage.

 

My claim that there is a 'common' mastering scheme for most consumer recordings IS controversial. But there is so much subjective evidence of an almost exactly identical 'footprint' on almost all non-boutique consumer recordings checked so far. This continued evidence, the little technical/subjective tidbits about needing extreme precision, keeps me from rescinding/demurring on the claim of a 'common mastering scheme'. If I had found any evidence or technical hints that the processing doesn't exist, I would have stopped and made a public announcement, then hidden in shame, but that hasn't needed to happen. It would have been easy to give up, but my integrity and persistence (along with the hopes of a few steadfast audiophiles) keep the project going.

 

The good news is that all of the components of the 'auto-mastering' are now known. The attempts at 'undoing' the 'auto-mastering' are starting to show better, if not 'reasonably good', results. The actual goal is to undo all of the 'auto mastering', and we are moving in the correct direction, close to hitting the target. 'Sounding good' is a secondary, but very important, goal. The primary goal is to fully undo 'all' of the 'auto mastering'. Perfection is impossible, but getting very close to perfection is acceptable.

 

I (and the project) have lost a LOT of credibility from my erroneous claims, motivated by extreme enthusiasm and sometimes poor judgement. In some circles, I might be thought of as an eccentric fool; the reaction is fully understandable, and I might have felt similarly if I didn't understand the project myself. There are a few people who understand the project and have come to know me fairly well online; one person knew of me back in the FreeBSD days... There is still kind support from those kind and patient souls. I am so very thankful for those kind people, including the very kind and patient forum owner.*

 

*  I am actually a little confused about truly knowledgeable people not believing in the project -- but no hard feelings!!! I did have a cranky attitude at first, with great frustration at 'technical suggestions' that resulted from misunderstanding the function of the program. I am very happy to discuss this stuff with the skeptics again, but it would probably be a lot easier if I had time to write some truly accurate technical docs. Frustratingly, some of the decoder is 'so far out in left field' that I don't understand the necessary math (piecewise nonlinear operations on analytic signals). I can conceptualize and 'draw pictures in my mind', but not understand the processing in a pure mathematical form.

 

Have faith!!!! The purpose of the project IS real, and the project will start being more credible. Is the decoder really needed? Probably not. Will it be nice for some audiophiles? Probably so.

 

John

 

 

Very interesting!  I may have to play around with this at some point.

 

The amount of effort you've put into this project is awesome.

2 hours ago, Spacecase said:

 

Very interesting!  I may have to play around with this at some point.

 

The amount of effort you've put into this project is awesome.

Thanks for the kind comment...

The project has been a major challenge.

 

Even though I have always overestimated the quality -- too much enthusiasm acting as a 'bad drug' -- the decoder IS getting close to being truly usable beyond just testing/reviewing. There have been 1000's of setbacks, but there have also been reviewers helping, then going away, then coming back again. The reviewers/contributors/friends have really been critical for progress, even if sometimes just keeping my overestimation of the quality in check.

 

I must also mention a very kind soul who left us a few years ago, 'Alex'. He was so close to the end of his patience with me, but his friendship really helped during some really tough times. Also, the forum owner has tolerated a lot of 'noise' and a few tantrums. Now, there are more very kind souls helping; no-one ever truly replaces another, but I so much respect and appreciate those who are helping more and more nowadays. There are a few ongoing PM conversations, truly not all that many, where there has been important technical help and kind encouragement.

 

The decoder is massive, and needs to emulate ancient hardware down to the level of EIA component values. A most straightforward example might be the often-used value of '3dB'. Technically, '3dB' is a nice value, but it doesn't work in the decoder. The actual value for '3dB' is '3.045...dB', which is the dB value for a ratio of '1.42'. For many of the basic parameters in the system, 'pure' numbers produce distortion, while the 'EIA component value' equivalent produces clean sound.
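
The arithmetic behind that '3dB' remark, as a worked check: a textbook 3dB corresponds to a voltage ratio of 10^(3/20) ≈ 1.4125, while the hardware-style ratio of 1.42 works out to roughly the 3.045dB quoted above:

```python
# Worked check of the '3dB vs 3.045dB' point above.
import math

print(10 ** (3.0 / 20.0))        # 1.41253... -- the 'pure' 3dB ratio
print(20 * math.log10(1.42))     # 3.04577... -- the 1.42 ratio in dB
```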

 

I have learned a LOT, and once the decoder develops REAL credibility, there is a lot of 'cool audio EE stuff' to offer to other interested parties...  There is some good technical stuff in the decoder that needs to be passed on to audio EEs -- at least one substantial and patentable bit of technology, hidden in a bunch of obscuring, messy source code.

 

Every show of interest is very important.  It is getting close to being useful...

John

 


It has been taking longer than expected to re-thread the needle (that is, to reprogram the descrambler).

One reviewer/friend/contributor online has been very critical, but ended up, probably without direct intent, helping to create an entirely new set of constraints on the descrambler.

 

I have mentioned over and over again the intricacy, but I have also mentioned my own 'sloppiness'. When the descrambler sounded 'almost the same' with two different settings, I tended to optimize the result towards stronger dynamics. This bias towards 'stronger dynamics' created an incorrect bias in the descrambler metaprogram. (The descrambler itself is okay.)

 

This 'yet one more' set of constraints invalidated a lot of the old decisions, therefore requiring a rethread from scratch. The basics of the descrambler are still correct, and much of the metaprogramming is also still okay. Some of the detailed, individual choices had systematically incorrect settings.

 

Given everything learned up to the 25Oct2023 timeframe, the threading is making more intellectual sense than any other set-up. It has been challenging to match some of the input and output EQ elements, and it required some time to determine the correct values.

 

The results are more 'fine' and 'detailed' while still maintaining good, strong dynamics.

The constraints are a challenge, and I am trying to be true to those who have been doing reviews. I'd like to avoid an excessive number of review passes, so the next actual demo releases might be delayed a little. I might do a snapshot just to show progress before the actual 'RELTRY'. For looking at version IDs in the future, snapshots are numbered like 'V20Annn' instead of 'V20A-RELTRYnn'. An actual release is 'V20A-RELEASE', or sometimes 'V20A-RELEASEn' if a very, very minor update is needed.

 

With luck, we'll have a 'RELTRY' in the next day, but most likely there will be a snapshot before a 'RELEASE TRY'.

As of the time of this posting, I suspect that the changes will be less than 5-10 lines before something is available, but everything has to be tested/verified on numerous recordings -- often just 'guessing' at what the decoded output should sound like...

 

John

 


Truly trying my heart out for a 'RELTRY', but it might just be a test release. Even if it is just a test release, a lot of work is going into it.

There is somewhat more participation now, since we are getting very close to usable (not just my hyperbole.)

 

The current problems are the 'sound' of 'Boomerang' on the ABBA/ABBA album, and the sound of 'Aja' on the 'Aja' album. Both are being improved, but the timing on 'Aja' required enough rework that some basic, low-level mistakes were found. Time delays must all line up, and there is a lot of necessary EQ in the design of FA decoding.

 

I am trying to do a 'plausibly good' release late tomorrow (>+24Hrs), but it will come faster, and be just as good, if I can manage it.

My hearing 'turned off' earlier today, so even A/B comparisons are super-tedious and error-prone. Unless my hearing returns, only limited improvements are possible.

 

The decoding results are generally amazing, but still not good enough for day-to-day use.   Most likely, next week will be fantastic.

There won't be any 'blather' this weekend until there is something good to announce, or something REALLY interesting is determined.

 

John

 


I was just pondering the announcement of a new Beatles track, which will be released on November 2, based upon a recording on a cassette made by John Lennon just before he died. Using technology, they managed to extract the vocals and clean them up. I think the FeralA Decoder has just as much value for people wanting to hear their favourite music with all the distortion removed.

12 hours ago, Currawong said:

I was just pondering the announcement of a new Beatles track, which will be released on November 2, based upon a recording on a cassette made by John Lennon just before he died. Using technology, they managed to extract the vocals and clean them up. I think the FeralA Decoder has just as much value for people wanting to hear their favourite music with all the distortion removed.

Thanks for the comment!!!

Here is an untested decoded copy of the AP remaster (AFAIR -- I did the rip a long time ago), a snippet of the song 'Yesterday'.

 

This will sound noticeably different from the original, with less 'sparkle', but in the decoded version, much of the sparkle is re-inserted into the recording in the correct temporal locations.

Headphones show somewhat natural room ambience, as I sometimes write: 'all the way down to the background'.

Disclaimer: I am HF-deaf about 1/2 of the time, and there hasn't been a review of this version, but the minimal objective HF measurements show 'okay', YMMV.

 

I sure hope that this snippet isn't a letdown...

https://www.dropbox.com/scl/fi/bjg58rdrfcbitai4ulaj5/13-Yesterday-2009-Digital-Remaster-V20A-RELTRY230-SNIP.flac?rlkey=j44w36a3kj0hop0m3bumqvbbk&dl=0

 

John

 

PS: the decoder used isn't even a demo release version, but should be pretty good. There are still a few flaws being worked on.

 


The upcoming release will sound even better than the previously demoed version that produced 'Yesterday'.

There have been several bugs that persisted for the last 6 months, even though many, many other bugs have been corrected. Some of the bugs were corrected based upon the Aja track, but the fixes also improved the sound of 'Yesterday'. The decoded result does sound more correct now.

 

The descrambler design has been mostly solid for at least several months, but the descrambler meta-programming has been just as much of a challenge as the descrambler concept itself.

 

The actual target being worked on over the last several days has been the 'S/T' (title) track of Aja. There is still some 'suboptimal' behavior when playing Aja, but it is listenable. I'll be working on improving the Aja vocal before an internal review copy of the 'decodes' late on Wednesday (probably Thu in OZ -- never figured that out.) I might make a passing public mention of the Wed demo, but the main focus will be private reviews/comments by the several reviewers. Proper constructive criticism, however negative, is welcome from ANYONE!!!

 

If Wednesday goes well, which I expect, then we'll have a REALLY GOOD decoder -- maybe available early this coming weekend? If there is a delay, it will be to focus on creating the best possible decoder and on creating the Windows version. Besides the decoded results, an actual Windows decoder will be available (in addition to the Linux decoder.) I am a little worried about all of the 'hoops' needed to create the Windows decoder, but the availability of the Windows-based decoder is necessary before a release can be announced. (My old Windows machine failed -- gotta set up a machine from scratch, with the same development environment.)

 

John

 


 

A V20A-RELTRY999 had been intended for reviewers; it has been produced and is in the upload process. This demo 'RELTRY999' has been found to be flawed in a subtle, but noticeable/important way.

 

Instead of making a dual RELTRY999 and RELTRY999A public mention (and private announcement) for the reviewers, there will only be a RELTRY999A mention/review in about 1 day. The RELTRY999 is so very close, I mean 'razor's edge' close, to being technically good. However, the flaw with HF processing (HF dynamics processing overly softened)* is a simple but deadly bug. Because of the bug, RELTRY999 will be 'sitting around', and not really a major focus. (Those who remember the URL for the public demos will be able to find any uploaded version that hasn't already been deleted; I'll also mention that URL when the RELTRY999A snippet demos are available.)

 

(* The softening of the HF dynamics processing helps to control Gibbs effect problems, but it is a very tricky thing to do without having a strong effect on the audio.)
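
As an illustration of the kind of 'softening' being described (a sketch only; the decoder's actual mechanism isn't published): one simple way to soften a dynamics gain signal is a one-pole smoother, so the gain cannot step abruptly and excite Gibbs-style ringing in downstream filters. The time constant here is a made-up placeholder.

```python
# Sketch only: one-pole smoothing of a raw gain envelope, so gain changes
# become exponential ramps instead of steps. tau_ms is illustrative.
import numpy as np

def smooth_gain(raw_gain, fs, tau_ms=5.0):
    a = np.exp(-1.0 / (fs * tau_ms * 1e-3))    # one-pole coefficient
    out = np.empty_like(raw_gain)
    g = raw_gain[0]
    for i, target in enumerate(raw_gain):
        g = a * g + (1.0 - a) * target         # ease toward the target gain
        out[i] = g
    return out

fs = 44100
raw = np.concatenate([np.ones(200), 0.25 * np.ones(200)])  # abrupt gain step
smoothed = smooth_gain(raw, fs)   # same step, now a smooth exponential ramp
```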

 

So, the HF 'softening' will be changed, requiring at least a few hours of work before starting the 'demos creation' process again. When the demos have been completed and uploaded, there will be the public mention/announcement of RELTRY999A for review. A few prospective improvements have already been tested; all of the improvements appear to be 'good', I just need to pick the best.

 

This new version does sound really good -- gotta get feedback from the reviewers for RELTRY999A first before making serious claims -- I have made bad decisions about claims before. Those who were not previously 'reviewers' are very welcome and encouraged to give feedback when it is available. GOOD things are happening.

 

John

 

 


Some pre-announcement feedback was provided on V20A-RELTRY999A, and it suggests the possibility of slightly too much midrange. (Thank goodness for the really great golden ears out there!!!) I'll be looking into the possibilities and awaiting further feedback.

 

Public repository:

https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0

 

Public snippets of V20A-RELTRY999A are available in the general public repository above. You are welcome to do some checks for yourself, and further feedback is welcome. Both V20A-RELTRY999 and V20A-RELTRY999A are available, but it looks like V20A-RELTRY999B might be needed. * If you ever see full recordings in the public repository, please tell me immediately, because that should not happen. I do check, but might make a mistake.

 

ADD-ON: I did the initial version of this post in a panic. Because of the complaint from someone whom I generally trust (even though all of us are wrong from time to time), I did panic. The reason for the panic might be correct -- I did try the first possible change that requires NO change to the regular structure of the descrambler, with SEEMINGLY positive results. With my relatively deaf hearing, this simple, proposed change seems to improve the imaging and 'life', without really an explicit change in the midrange dynamics. There are literally 1000's of such possible changes, but the decoder has been exhaustively and exhaustingly developed, and this one slight setting looks like it was the potentially BAD choice. The change is literally a matter of switching a commented-out line; the code already exists.

 

THEREFORE, I'll start a RELTRY999B and do reviews myself, checking for 'bad sound', before offering it to the reviewers and other friends here on AS.

 

If the midrange needs a modification, the change is immediately known, as it is a small step: a slight modification of the regular structure of the descrambler. Deviations from the normal structure seem to be needed at both ends of the spectrum, and possibly in the middle also. The change isn't really 'frequency response'; if needed, it would be dynamics expansion in the midrange, around 2kHz.
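
For concreteness, here is a minimal sketch of what a 'dynamics expansion in the midrange, about 2kHz' could look like: band-pass the region, apply a gentle instantaneous upward expansion to that band alone, and fold it back in. The band edges and ratio are illustrative guesses, not the decoder's values.

```python
# Sketch only: crude band-limited upward expansion around 2kHz.
# lo/hi/ratio are illustrative, NOT the decoder's settings.
import numpy as np
from scipy.signal import butter, sosfilt

def midrange_expand(x, fs, lo=1400.0, hi=2800.0, ratio=1.1):
    sos = butter(2, [lo, hi], btype='bandpass', fs=fs, output='sos')
    band = sosfilt(sos, x)
    env = np.abs(band) + 1e-9          # crude instantaneous envelope
    pivot = np.median(env)             # expand above this level, shrink below
    gain = (env / pivot) ** (ratio - 1.0)
    return x + band * (gain - 1.0)     # swap the band for its expanded version
```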

 

It is also possible that the midrange is correct, which would then suggest that V20A-RELTRY999A is indeed a good prototype for the actual release.

 

Sure wish I could get this right, but if you do listen, I think that you will find that the basic quality is fantastic.

 

John

 


One more status update before the next try at 999B or C...

I have decided to further study some of the historical settings from around the time of the Q change in the discriminators. This review will take a few more hours, but it will be very beneficial.

 

The Q change was made because of ineffective A/B choices, with wrong selections in some cases.

The original Q value from a few weeks ago was correct, but I made a change after defective A/B comparisons, because the 'new' Q of a week ago is more standard given the normal EQ in the system. The 'standard' Q value in 999A was wrong. Apparently, the Q values in the discriminator should be the exceptional values. This is one of the cases where 'following the rules' can produce incorrect results.

 

With this change, a few other things can be optimized, thereby further improving the sound beyond 999A. Don't get me wrong, these are not major changes; they are either slight tweaks or #ifdef changes (that is, code that already exists, selected at compile time.) Even in the few hours since this posting, my ability to hear has changed, which further demonstrates to me that the earlier choice was wrong. The optimal change associated with the Q value change from 1.42 (ideally 1.414...) to 2.0, however, does require more A/B testing and selection.

 

I LOVE the sound of the new version of the decoder; it's just that there are some HF timing issues that create harshness, and some problems with the dynamics because of the bad Q choice. The dynamics and the highs in 999A are STILL more realistic than the FA version, just that they were wrong. With more comparisons, the upcoming 999B (or 999C) will be closer to the few references in my archives. (Too few references to use; I really need more than 2, and it would have been best if one of the references hadn't been mastered.)

 

THIS THING IS A BEAST.

 

John

 

 


The delay today is so regrettable, but has a real purpose.

When I was told that the midrange was a little 'strong', I actually misinterpreted what was going on in the decoder. It took a few more reviews and tries than I had expected for me to understand the problem. Previous code almost solved the problem correctly; subsequent mistakes were simply made on top of it.

 

I better understand the bug, and it really is a bug.   Even the slightest error produces a noticeable quality problem, and I simply made a mistake.

In order to revisit the midrange, a few sections of code are being reverted to some older (but better) stuff, with some coding upgrades along the way.

 

Instead of today, the next demo/review version will be tomorrow evening. This will take the whole day, though I guess it could be done in a few hours. However, while I have good hearing, it will be good to take advantage of it. Trying to produce such complex results is mistake-prone when working too quickly.

 

The 999A version wasn't totally wrong; it just wasn't good in two specific areas. One was one of the discriminator Q values, and I have also found that the 221.3/7 and 221.3/49 discriminator frequencies are needed, previously purposely left out. (Yes, there is an algorithm for the sequence of potential discriminator frequencies; the freqs then need to be selected from the sequence. This selection is like a 'key' in a lock, and if it is not correct, the sound will not be as correct as it should be.)
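
The generating algorithm isn't disclosed here, but since 49 = 7², one reading consistent with the two values named is successive division of the 221.3 base by powers of 7 -- purely a guess:

```python
# Guesswork sketch: the two quoted values, 221.3/7 and 221.3/49, fit a base
# divided by successive powers of 7. The real selection algorithm is not
# disclosed in the post.
base = 221.3
candidates = [base / 7 ** k for k in range(1, 4)]
print(candidates)     # [31.614..., 4.516..., 0.645...]
```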

 

If possible, and if I become confident that the new demo version is good/correct, it will be produced sooner than tomorrow evening...

 

John

 


I already told one of the reviewers, but thought I'd tell everyone.

I have several days of damage to undo and retest.   Sadly, the developer of this program is having REAL troubles with his hearing, and sometimes becomes over enthusiastic.

Next time, I'll have to ask/beg the reviewers/contributors/friends, the few that are left, to again do the reviews before I publicly announce a release. That was originally my plan, but the decoder is so close to being *totally correct* -- though I guess not close enough. My arrogance has sometimes led to bad decisions about doing releases.

 

The code has to be reset back about 1 week (mostly already done), and the corrected new changes added back in. Apparently, I had a week or two of progressive hearing loss because of my aversion to taking 'pills', including vitamins/Magnesium -- I know, TMI. Progressive hearing loss is sometimes very tricky to detect. Sometimes, I cannot even do a reliable straight A/B comparison/selection. Even more frustrating than 'straight loss' is that my hearing changes at a variable rate, sometimes getting better, with a 'goodness peak' at around 8AM. The loss appears to have started about 5 years ago, a few years after the project started; I first noticed some things about FA around 2011. As the decoder has become less insane, my hearing has gotten worse. Maybe when I am totally deaf, the decoder will be perfect? 😕

 

I greatly regret the delay of at least several days. I screwed up badly. The combination of too much enthusiasm and changing hearing helped make a bad decision about a release.

1wk delay.  SORRY!!!

 

*  PS: the most recent test object has been Aja/Aja; it has some really challenging vocals. Even with the corrected mistakes, there is improvement when decoding. Progress had still been made; just a few other errors drove the decoder into 'wrongness'.

 

John

 


There was a need for some rework -- bad decisions because of bad hearing. The first demos for reviewers will be no sooner than Wednesday. Of course, the demos will be publicly mentioned and the URL provided here also. Hopefully, some time after the reviewer demos are created, I'll be able to start working on the Windows environment so a Windows version can be built again.

 

A new feature will specifically enable/disable the descrambler, with no changes other than the descrambling enable/disable. The difference is very interesting, and really demonstrates what the descrambler can do. Also, I ran into a bunch of Phil Collins in the donated archives, and gosh darn, no matter what I try, the decoder won't work on those recordings. I spent the last full day on the Phil Collins recordings, and at least the earliest ones produce the exact results that one might expect for a non-FA version of a recording. Since other recordings are much more amenable, the Phil Collins stuff has been placed in the background for now.

 

The non-A/B-choice bugs in the descrambler were mostly related to the amplitude of the descrambling action. What sounded best in an A/B comparison when half deaf sounds pretty bad when one can hear well. After the step-by-step corrections, the resulting sound is much cleaner, and closer to what I expected (again.) It appears that the correct settings for the basic metaprogramming have been found. Also, the L/R<=>M/S conversions were corrected; the sequence was backwards relative to what was needed. This total botch reduced the amount of descrambling action to seemingly/subjectively half of the possible improvement. This L/R<=>M/S pattern was established 6 months ago, and the corrected result comes from the attempt to revisit every major decision.

 

Honestly, I have felt that the decoder wasn't helping as much as it should. It had been 'helping' to some degree, but something seemed to be wrong. Given the literally 100's of architectural choices and 1000's of potential settings, it is easy to make mistakes if one's hearing is not very good. The L/R<=>M/S pattern choice was made long ago, and admittedly my standards subsequently suffered. The L/R<=>M/S decision is subtle; not much distortion is heard when the rest of the decoder isn't mostly correct. Now the decoder is mostly correct, and with better hearing and a better decoder, the L/R<=>M/S choices were corrected.

 

* Even before the current corrections, the decoder had been sounding MUCH better, and in most regards seems to have found a global maximum of quality. The improvement 'popped' a few weeks ago, and the current work is to take better advantage of the improvements (and to address the matter of bad choices and bad hearing.)

 

I am still going back a few weeks, sometimes months, just double-checking decisions. There were serious mistakes in the last 2 weeks regarding some A/B choices. I didn't expect the A/B choices to be so distorted by dependence on bad hearing; I thought that there might be adequate compensation between choices. I was wrong.

 

So, the reworked/retested version will be available for demos as early as Wednesday, and the Linux decoder will also be available. The Windows environment frankly frightens me, but I'll do a Windows version before the next actual release.

 

Very sincerely,

John

 


Sorry for the long delays upon delays, stretching over years.

As mentioned before, this is a complicated thing -- but some more overdesign has been found.

 

This remediation of overdesign was forced by noticing the creation of 'birdies' in the output signal, and also by noticing some cases of extreme intensity in the subjective output.

 

The correction is implemented as a simplification of the discriminator steps. The discriminators do a very good job of FA decoding with a simpler design. The simpler design depends on the built-in pre-emphasis/de-emphasis layer instead of adding on an additional expansion by being more complex... (Conceptually, the discriminators should have 'gone with the flow' instead of forcing a solution.)
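
To illustrate what a pre-emphasis/de-emphasis layer is (a generic sketch; FA's actual constants are not published): a first-order analog shelf H(s) = (1 + s·tau1)/(1 + s·tau2) boosts the highs on the way in, and its exact reciprocal restores them on the way out. The 50µs/5µs time constants below are generic, broadcast-style placeholders.

```python
# Sketch only: complementary first-order emphasis/de-emphasis pair.
# tau1/tau2 are generic placeholders, NOT FA's constants.
import numpy as np
from scipy.signal import bilinear, lfilter

def emphasis_pair(fs, tau1=50e-6, tau2=5e-6):
    # analog shelf H(s) = (1 + s*tau1)/(1 + s*tau2); de-emphasis is 1/H
    b, a = bilinear([tau1, 1.0], [tau2, 1.0], fs)
    return (b, a), (a, b)              # (pre-emphasis, de-emphasis)

fs = 44100
(pb, pa), (db_, da_) = emphasis_pair(fs)
x = np.random.randn(fs)
y = lfilter(db_, da_, lfilter(pb, pa, x))   # boost, then exactly undo it
assert np.allclose(x, y, atol=1e-8)         # the pair cancels
```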

 

Once this simplification had been completed, a few vexing problems disappeared. There is no re-design, just a simplification of the discriminator elements in the descrambler. (There are actually two descramblers, M/S and L/R -- this correction has a 2nd-order, sometimes 4th-order, impact, primarily on intense parts of a signal.)

 

Originally, I had thought that the 'birdies' were caused by a correct design, with Gibbs effect & Nyquist causing troubles. Yes, the problem was Gibbs & Nyquist, but it was instead driven by an excessively strong descrambling phase, essentially like an N^2 thing being done twice instead of a more proper N being done twice. The signal was being torn apart in certain cases instead of simply being 'decoded'. Subjectively, the change is generally less obvious than the description might suggest, but the 'birdies' are gone, and the occasional excessive intensity is gone.

 

Many of the quality problems causing delays have come from this persistent overdesign mistake.

 

Obviously, instead of this previous Wednesday as the missed goal, I am trying for early next week for the 'reviewers' demo (still public.) The Windows version of the decoder will still be coming only after the official release, in a week or so. During early/reviewer demos, the decoder isn't usually distributed anyway.

 

The next message will either come early next week, when the reviewer demos are ready, or sooner if there is a major change somewhere.

As is, the design is conceptually complete.

 

John

 

 


 

 


A very important change to the decoder/descrambler design: not primarily just a 'coding change', but instead a change in 'design concept', which is showing GREAT promise.

 

A comment happened on another major audio forum about how beautiful Karen Carpenter's voice is, specifically on 'Yesterday Once More'. Since I am fortunate to have a pure FA (heh) copy of Yesterday Once More, NOT OTHERWISE MASTERED, I took a listen... Hmmm... Something is very wrong with the decoder -- way, way too intense, like a serious 'hearing aid'.

 

This required a rethink, trying to re-understand my complaint about the FA sound, and an idea came to mind. FA sounds *something* like the enhancement used in old-fashioned vocal communications devices (e.g. Ham, Police, etc.), but is, of course, different. That vocal enhancement is done with an HF boost, then a clip. That is NOT what FA does, but the purpose might be similar. FA needs to pass a wideband signal, while vocal communications require less bandwidth and intermod is less important.
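
For reference, the communications-style enhancement being contrasted here really is as blunt as it sounds. A minimal sketch (generic values, and again: this is the thing FA is explicitly NOT doing):

```python
# Sketch only: old-school 'HF boost, then clip' vocal enhancement -- the
# comparison point, NOT what FA does. Values are generic.
import numpy as np
from scipy.signal import bilinear, lfilter

def boost_then_clip(x, fs, tau=100e-6, ceiling=0.5):
    # ~18dB first-order HF shelf boost, then a hard clip
    b, a = bilinear([tau, 1.0], [tau / 8.0, 1.0], fs)
    return np.clip(lfilter(b, a, x), -ceiling, ceiling)
```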

 

The descrambler conceptually 'comes close', but it had been missing the mark. Basically, it seems like FA removes the modulated (*modulated*) lower frequencies, leaving a 'smush' in the lows, therefore causing the highs to be incoherent -- making them sound a little distorted, especially on 'Yesterday Once More'. I had mistakenly thought the highs needed to be reconstructed themselves; instead, the highs and lows need to be matched up again, with the LOWS being restored mostly first.

 

What to do? The FA signal seems to throw away important information in the audio... It does, but much of it can be recovered. There is still enough information left over to do a lot of reconstruction, and the descrambler can do it!!! In fact, the test version is doing most of the reconstruction right now. It is a high-precision operation, like a lot of the rest of the system, but it is working. (There appears to be a very slight amount of hidden information left in the FA signal -- I am not disclosing details until I have more proof, and won't do so anyway, so that the record companies cannot subvert the 'pilot signal' of sorts.)

 

Still, it will be a few days -- this long delay is partially caused by this new change. The biggest modifications in the descrambler are in how the >= 500Hz range is being processed... The cutoff point at 500Hz is very important (actually 442Hz.)

 

John

 


The new 'concept' has eventually produced a higher quality, more finessed version of the previous results.  Good news, in a way.

 

It appears that the 'metaprogramming' is very sensitive to certain changes which had previously not been tested. A lot of the original metaprogramming concept was based on intuition, which mostly, though not completely, appears to produce nearly correct results. When making minor changes to the 'good' metaprogramming -- going outside of the 'bounds' -- the result is very 'disturbing', more so than ever. A minor change can result in 'radio static', almost as if the descrambler were an FM 'radio receiver'. I had NEVER predicted that.

 

Given that the new settings are closer to the 'radio static' versions (though seemingly not dangerously close), I'll still do more testing to make sure there are no 'surprises' when using the decoder. Also, a side note... Karen Carpenter's vocals on the very intensely recorded 'Yesterday Once More' are much cleaner than on either the FA version or previous decoder versions.

 

The new concept has given seemingly more accurate programming.

 

John

 


Plan for the week:

Wednesday (15Nov) -- preliminary demos, mostly intended for reviewers....

Saturday (19Nov) -- demos intended for less involved users....

Next Saturday (25Nov) -- 'release'....

 

Missed goals will cause a shift of a week.   It has taken 5yrs for me to realize that this project is very/too challenging for me.

 

Reason for the new expectations: a new view on the design requirements, but the basic architecture is the same. The metaprogramming has changed toward more of a 'carrier' restoration instead of a variable dynamics expansion/compression.

 

Various test objects appear to show improved results, but not always 'prettier' ones. Mostly, the result is 'better defined' with the same general response balance. Previous versions sometimes tended to be subjectively bass-deficient. That mistake happened because the decoder should NOT be 'flat' in a conventional audiophile technician/developer sense, but is more of a sophisticated engineering design, probably by R. Dolby or a similar peer himself. This means that the design might hopefully be intuitive audiophile technology, but parts might be beyond my own -- fairly advanced -- knowledge, even now. * I did a lot of work doing subjective comparisons, and there is NO scheme, even with serious descrambler expansion, that works other than explicit EQ at the lower response range. I have tried EVERYTHING to avoid simple EQ, and this will be the first time that simple EQ has been chosen.

 

The result is coming close to the ideal, but I still need something to redirect me if I make any mistakes. We are far, far beyond gross errors, but the result still requires some back and forth. Relative to previous versions, there have been conceptual errors, as mentioned in the change in 'design requirement', but also frustratingly minor errors like using 110Hz instead of 111Hz for one of the frequencies.

 

Nothing else much to say except: once this is working, finally, it will be a gift to be able to correct the quality of consumer digital audio recording purchases. 1000's, literally 1000's, of mistakes were made by me, and hopefully we are down to a few.

 

Honestly, the actual goal of perfection might not be possible, as I have seen signs of uncorrectable errors, but improvement is definitely going to happen -- NOT improvement in the subjective sense, but simply closer-to-the-original.

 

IF the goal of the first Wednesday is not met, then the schedule will shift by a week. There will be no more pushing for artificial time goals, an artifact of my previous work life and of methods that once produced successful results. Those methods are wrong for this project, as everyone but me has known.

 

Goodness follows....

John

 

 

 


There is a totally new mode of operation using the same software, just different programming...

 

 

There is now some evidence that 'true decoding' at the level of the olden-time NR systems is happening. The operation is in 'near lock-step' with the encoding. There are still minor 'proportionality constants' that need to be determined. A new mechanism was just found yesterday.

 

It appears that the 'lowest layer' of the 'Russian Doll' has been found, and the answer is probably NOTHING like what most audio processing designers might think... I'd love for someone to finally say 'I knew the answer all along' -- it was just a trade secret?!!? More will be written about this in the next weeks.

 

For 'near perfect' decoding, FA files must be nearly bit-for-bit copies; the fragility is not obvious. If an FA recording isn't copied bit-for-bit, then there is a risk of quality loss, or even of the lock being destroyed. '.flac' files are safe; simple sample rate conversion down to 44.1kHz is also safe. Filtered files are NOT safe, depending on the filtering, but it is best just to say: FILTERED FA FILES CANNOT BE NEAR-LOCKSTEP DECODED. A filtered file can be 'decoded' like what previous versions of the decoders could do, which means: far from perfect. The decoding results come close to what might be achieved when using a DolbyB/DolbyC NR system, but there does also appear to be substantial NR below 1kHz on older recordings like 'Take Five'/Brubeck.
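
A practical way to check the 'bit-for-bit' condition is to hash the decoded PCM samples rather than the container bytes, since two .flac files can differ as files while carrying identical audio. A minimal sketch using the soundfile package (file names are placeholders):

```python
# Sketch only: verify two rips carry bit-identical PCM. File names are
# placeholders; requires the 'soundfile' package.
import hashlib
import soundfile as sf

def pcm_digest(path):
    data, rate = sf.read(path, dtype='int32', always_2d=True)
    return rate, hashlib.md5(data.tobytes()).hexdigest()

# identical (rate, digest) pairs => the copies carry identical samples
print(pcm_digest('original_rip.flac'))
print(pcm_digest('copied_rip.flac'))
```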

 

If there is some slip of a few hours in the schedule tomorrow, it would definitely come from a personal delay.

I don't think that there are any more layers to be resolved, just proper low level settings.

 

John

 

 


The reviewer demo version is in progress right now. These demos take LOTS of time: at least the catalogs of two major historical groups, plus the normal 95 or so demos...

 

There will be no restarts or anything like that, because lots and lots of testing/double-checking, etc. has been done.

It might be +12Hrs or even longer, but the demo repos will be populated as subsections of the demos are completed.

Public snippet demos are also going to be produced; private and public comments will be welcome.

 

Initially, I had planned that the historical groups would be run first, but after due consideration, the traditional 95 demos will be done first.

There will be announcements of incremental decoding progress.

These results are worthwhile, more than worthwhile, and the flaws appear to be very very slight compared to any previous version.

 

Using the 'left-over' pilot data has made it possible to much better replicate the recording from before FA encoding. It might still not be perfect; I have NO way of doing thorough testing or comparison, since non-FA copies of normal consumer recordings are simply not easy to find.

 

John

 


V20B-RELTRY50 review/demo is available...


This version is intended for contributor-reviewers, but public comment is also welcome.
 

I haven't made the private-to-reviewer announcements yet; the reviewer repo examples aren't quite ready. Reviewers can probably find their examples in a few hours, and I'll go through the list and send PMs or emails in a few hours. Sorry for the delay!!! The reviewers deserve the best responsiveness, but the whole set of packages just isn't ready yet!!!


PUBLIC REPO EXAMPLES:
https://www.dropbox.com/scl/fo/9u1e9u0vyjitajn4l26o0/h?rlkey=t401ghb9a3sgp59mgun6u69wq&dl=0

 

The 'flacsnippet' directory contains the 'decoded' snippets.
The 'fscmp10' directory contains 10 second A/B/A/B segments, where the first is the 'decoded' version.
The 'fscmp35' directory contains 35 second A/B/A/B segments, where the first is the 'decoded' version.
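
For anyone who wants to build comparison files of their own in the same spirit, a minimal sketch follows. The exact layout of the official fscmp files may differ; this version simply alternates the same segment, decoded first, as described above. File names are placeholders.

```python
# Sketch only: build a 10-second A/B/A/B comparison file, decoded first.
# The official fscmp layout may differ; names are placeholders.
import numpy as np
import soundfile as sf

def make_abab(decoded_path, original_path, out_path, seg_seconds=10.0):
    dec, fs = sf.read(decoded_path, always_2d=True)
    org, fs2 = sf.read(original_path, always_2d=True)
    assert fs == fs2, "sample rates must match"
    n = int(seg_seconds * fs)
    pieces = [dec[:n], org[:n], dec[:n], org[:n]]   # A/B/A/B, decoded first
    sf.write(out_path, np.concatenate(pieces), fs)

make_abab('track_decoded.flac', 'track_original.flac', 'track_fscmp10.flac')
```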

 

-----

 

SOMETIMES, listening to short segments doesn't show the full benefit of decoding. Even though it is easier to remember the 10 second segments during comparison, the 35 second segments give a better idea of the sound character...

 

The major improvement is utilizing a non-audio 'pilot' in the FA signal. The 'pilot' does not normally interfere with the direct listening experience, but it is helpful in a more complete recovery of the original signal. Once the FA signal is decoded, this 'pilot' is expunged from the audio. (The pilot is not a tone per se, but comprises modulation products that can contribute to reconstruction.)


The MF/HF response from 1.5kHz on up is pretty much covered. The freq response is not ruler flat, but the response is -0.08dB -> +0.25dB, where the rise above 0dB starts at about 5kHz. The HF rolloff is only determined by the internal 24kHz limits, probably just a little lower in freq. The LF has been a challenge...

 

The LF might or might not be correct. I still cannot find a 'natural' setting for the LF, but once the subjective decisions are made, the setting should be easy to do. Feedback of 'more bass, please' or 'bass is okay, don't touch' will be very helpful!!! I cannot hear reliably well anymore, so my judgement of 'response balance' or 'how much LF', 'how much MF' or 'how much HF' is of limited helpfulness. GUIDANCE IS BEGGED FOR.


So, the big question is about the bass... Since this is a reviewer 'release', it might not be perfect, but with the addition of following the 'pilot', the sound should be otherwise fantabulous.

 

Thanks so much...   The resulting release, incl decoder, will come in several days to a week!!!

John

 

 

 


To clarify the release plans: if the needed modifications are slight, then a USER/PUBLIC review release will be available about this coming Wednesday, 22Nov. This date assumes that any bugs are minor. During the time between the weekend and Tuesday 21Nov, I'll be updating the command line switches, making the commands a little easier to understand. Also, the highest priority will be building/debugging the first Windows version of the decoder in several months. (I lost the original development environment -- laptop failure. I'll have to rebuild the environment so that the next Windows release of the decoder can be uploaded.)

 

As of right now, approx 2 hours after the reviewer/contributor release today, there is one minor bug, where one individual with better hearing than myself suggested a small modification to the HF response. Actually, there had been a 0.2dB to 0.25dB response increase above 10kHz in RELTRY50. It seems that the response needed to be a bit flatter. The current, internal test version shows a rolloff of about -0.10dB at 15kHz, then -0.50dB at 20kHz. Because the measurement method is flawed, the rolloff is likely slightly less than the -0.50dB. In fact, the +0.25dB rising response starting in the 10kHz region might have been worse on RELTRY50.

 

The EQ sequence (necessary given the FA signal design) includes both a simple freq response and the HF dynamics processing setting. So far, there will be a fairly small change in the 'sound'. The bass is still 'up in the air'. Until this weekend, the focus will be on the character of the sound. If significant changes are requested, I'll upload examples of the change, on request (or if it seems like a good idea.) When the HF change is made and decided upon, I'll upload a couple of examples.

 

John

 


One of the very helpful reviewers made a comment about some lost life, reality, or somesuch in Dire Straits' 'Industrial Disease'.

 

I think that the reviewer is spot-on; certain recordings, Industrial Disease in particular, have been a bit of an irritant -- I perceive it as 'something is wrong'.

 

Answer: apparently, the DA layer (DolbyA emulation) had an attack feedback problem, where the feedback would overshoot, therefore making the dynamics too deep for certain kinds of HF dynamics. It makes the sound 'too tight', with a loss of ambience. The next public release will be corrected. Also, I'll be making a corrected version available in the next day or so.
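
For readers unfamiliar with the failure mode: attack/release behaviour in a compander gain tracker is set by two time constants, and an attack path that reacts too hard digs the gain deeper than the signal warrants ('too tight', ambience lost). A minimal feedforward sketch of such a follower (the DA layer's real circuit is feedback-based and not reproduced here; constants are illustrative):

```python
# Sketch only: standard attack/release envelope follower. Overly aggressive
# attack behaviour here is the analogue of the overshoot described above.
# Constants are illustrative, NOT the DA layer's values.
import numpy as np

def envelope(x, fs, attack_ms=1.0, release_ms=100.0):
    a_att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env = np.zeros(len(x))
    e = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > e else a_rel       # fast on attack, slow on release
        e = coeff * e + (1.0 - coeff) * v
        env[i] = e
    return env
```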

 

There is already a solution in hand, but it might not be the permanent solution. Still more testing/verification is needed.

To me, the resulting sound so far is more 'musical', a bit more 'smooth'.

 

A second improvement is related to the bass/midrange in general, where the ambience in that frequency range is also a bit more 'real'. The solution is different, but there is improved sound for that kind of material as well...

 

* If the corrections go too far, the resulting sound can be made worse -- there is an 'optimum setting', arrived at through focused A/B comparisons on a very limited group of settings.

 

Will keep you up to date.

 

John

 


An additional comment requests an extended LF, not a boosted one. I do believe that the LF is slightly truncated... For much material, there will be very little difference.

 

One difficult part of the decoder is the bass response: where to apply the LF EQ point, and how far down the LF should pass through.

 

There will be a demo in about 1day with the corrections.

 

John

 

