
'FeralA' decoder -- free-to-use



  • 2 weeks later...
2 hours ago, Night Rain said:

Any updates at all?

I have been somewhat ill, but I am also taking things more slowly and being more careful.   My hearing has been h*llish for this project, and I have found other ways of doing the final EQ without totally depending on my hearing.   The new, mostly generic post-decoding EQ (almost the same for all pop recordings; classical tends to be slightly different) has been determined by COMPARISON with the RAW original, not by 'sounding good' or 'sounding reasonable'.   If the result doesn't sound 'good', esp. for POP, the first choice is to try the 'classical' stereo image mode, then to tweak the 'calibration offset' (the entire calibration value no longer needs to be specified by the user), then to make MINOR EQ changes.   Usually the EQ changes aren't needed -- the 'stereo image' issues can fool the user into thinking that there is an EQ or calibration error, so try the stereo image with --fw=classical first.

 

This is important:  please note that FeralA recordings are very heavily MULTI-BAND compressed in a jumbled scheme.   Because of the band choices in DolbyA, any bass below about 120Hz in a FeralA original will be greatly boosted.   Likewise, FeralA recordings tend to have extra, unnatural 'tinkle.'   I cannot/WILL NOT re-create the false bass boost, or the excessive 9-20kHz treble boost, because doing so is artificial and unwise for a general-purpose solution; that kind of per-recording tweaking does not belong in the decoder.   I WILL NOT DO THAT artificial boost.   The end user can do a 'sounds better' tweak for themselves.   Admittedly, what the end user hopes to hear is all-important, but I cannot tweak for one user and then upset another.

 

First, about the upcoming agenda: there WILL be an update approx Tuesday (+3 to +4 days.)   I will be releasing a DolbyA-tested version for professional users on Monday, but 'FeralA' is more complex, and I have only just determined the correct EQ all the way through.   The EQ isn't really done by 'tweaking'; there is a set of 'building blocks', and some of those building blocks are a challenge to choose and get correct.   Believe it or not, the lower MF and LF EQ is the most difficult.  The HF EQ is almost trivial, because most of it is 'pre-ordained', and 1st-order EQ at high frequencies works a bit more intuitively.  All of the EQ is 1st order, not the more common 2nd order (or higher) used by consumers.   The 1st-order EQ at lower frequencies is TOTALLY non-intuitive to use.   It has been a total b*tch to get right.
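
To give a feel for what one of these 1st-order building blocks looks like, here is a toy sketch in Python/scipy (an illustration of the general idea only -- NOT code from the decoder, and the corner frequencies are made up).   A 1st-order shelf transitions at only 6dB/octave, so the zero and pole interact over several octaves -- which is exactly why placing them at low frequencies is so non-intuitive:

```python
import numpy as np
from scipy.signal import bilinear, freqz

def first_order_shelf(f_zero, f_pole, fs):
    """First-order shelf from the analog prototype H(s) = (s/wz + 1)/(s/wp + 1).

    Unity gain at DC; the high-frequency gain is wp/wz, so the shelf
    depth is set entirely by the RATIO of the two corner frequencies.
    """
    wz = 2 * np.pi * f_zero
    wp = 2 * np.pi * f_pole
    return bilinear([1 / wz, 1], [1 / wp, 1], fs)

fs = 44100
b, a = first_order_shelf(120.0, 480.0, fs)     # hypothetical corners
freq, h = freqz(b, a, worN=2048, fs=fs)
db = 20 * np.log10(np.abs(h))
# The response crawls between the corners -- 12dB spread over two octaves.
for f_test in (20, 120, 480, 2000):
    i = np.argmin(np.abs(freq - f_test))
    print(f"{f_test:5d} Hz: {db[i]:+6.2f} dB")
```

Chain a few of these with overlapping corners, and the interactions quickly stop being something you can set by ear.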

 

I expect that the production version will sound the same as this demo version, but I am further updating the user interface to make it more practical for consumers.   The new version has MANY DolbyA compliance updates -- including matching the lower-frequency distortion compensation almost precisely.  (Previously, a small amount of distortion was left over from the FeralA encoding; FeralA encoding in consumer recordings DOES produce significant LF distortion because of the fast attack/release times.)   The DHNRDS now more completely undoes this distortion.

 

This first demo below shows that there is very little frequency-response difference between FeralA and decoded versions.  I have provided the RAW version, plus 1-layer, 4-layer, 5-layer, 6-layer, 7-layer and 8-layer decodes.   If you cycle through them quickly, you'll find that the first portion of the selection sounds very similar.  It is only later, where the dynamics increase, that the decoded versions show a *slight* improvement at about 4 layers and above.   It appears that 7 layers is the most correct result in this case.   THIS EXAMPLE IS NOT INTENDED TO SHOW THE GREAT IMPROVEMENT OF FA DECODING, but only that decoding does NO real damage.   The resulting response balance is close to the original, but not quite as 'jumbled up', 'muted' and 'boxy' as FeralA recordings mostly sound.

 

https://www.dropbox.com/sh/eqhpcyezj1g8cqs/AAD2EHynnH2ZppZme8-Cu9ewa?dl=0

 

 

The second demos are the same demos as before, but with corrected EQ -- both more correct internally and more correct in post-decoding.   There is a much more profound improvement in most of those cases, and I did NOT cherry-pick.  So, some of the selections might be 'who cares?' and some might be 'WOW'.   All of these selections are FeralA to begin with.   SOME OF THE SELECTIONS GET BETTER AFTER THE 50-second snippet; I can provide more (after 50 seconds) of any given example on request.   ALL OF THESE WERE COMPARED WITH the approx 10-15 versions previously done -- most of them internal.  In some cases, the more accurate results below are less exciting than some of the previous ones, but they match the balance of the original RAW FeralA versions more closely.

 

https://www.dropbox.com/sh/v90m7q56g64tfgo/AACao_I34J7x2ZJu91qpKG4wa?dl=0

 

Most of the time has been spent ferreting out the little differences from the DolbyA HW, even though the result originally sounded pretty much the same.   There are certain cases, esp. FA, where the results are now much cleaner.   DolbyA mode shows some technical improvements over the previous version, but not quite as profound.   Again, FA is over 5X more complex than DolbyA, and FA is almost impossible to undo.

 

So, expect something Tuesday, or perhaps Wednesday if I have problems making the user interface as clean as I hope.   The technical challenges are pretty much finished.

 

John

Getting some initial good results for the upcoming release.  Some pretty good, though somewhat conditional, reviews.   As a result of those reviews, my intended release date of 19Jan2021 or 20Jan2021 will need to be extended by another day.   These results are pretty good -- my hearing is still iffy, but I have been using different methods for the post-decoding EQ (where most of the problems have been.)

 

This project's software is immensely complex, with lots of NECESSARY EQ, lots of strange filters, and lots of Hilbert transforms for the higher-quality modes...  Sadly, the biggest problem is choosing the right building blocks (NOT TWEAKING) for the final EQ.   This final EQ is the inverse of the EQ done before encoding...   The guessing and reverse engineering for the EQ has been difficult and painful.

 

The new release is getting pretty good reviews so far, but as I mentioned above -- somewhat conditional.   There need to be some minor changes and a simplification of the post-decoding EQ settings.

 

So, figure on Thursday (21Jan) or Friday (22Jan) at the latest.  I want to make it available before the weekend.

 

John

Probably great news, but a little premature -- I am just so happy (so far)!!!

 

You know that the commands have been too complex -- but I have been working on remedying the complexity.   My friends on the private group have been nudging me (including Alex) and helping a lot.

 

Anyway -- about the commands...   A few minutes ago, I was going to expose all of the EQ needed, but during the review I found an odd HF EQ that didn't seem right.   It was an odd 18k-to-22k first-order boost that I couldn't justify.   ALL of the real troubles have been with the HF EQ...    As an experiment, I dropped the 18k->22k EQ, and lo and behold -- it appears that most recordings are being adequately decoded in a single mode.   There are still two submodes (bass extension and HF softening) -- those submodes are still needed -- but the command set has just simplified to ALMOST a single command.   We just might have found the rest of the 'secret sauce'!!

 

If this all ends up being true, the probability of release (barring illness) should be 100% by Friday, and perhaps 75% by tomorrow!!!  (Friday was always going to be 100%, but my life might be made easier.)

 

John

Of course I am working on the release, but someone on another message list brought up the spectre of using the video card for parallel operations.   I am seriously thinking about this if interest in the decoder increases.

 

I did an evaluation (with an open mind, instead of my overly conservative attitude), and believe that the program could do 4x8 or 4x16 1500-tap Hilbert transforms in parallel.   Along with that, perhaps 4x8 or 4x16 other, shorter FIR filters could be done in parallel.
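
For the curious, here is a rough sketch in Python/scipy of what one of those building blocks looks like -- designing a ~1500-tap FIR Hilbert transformer and applying it across a batch of independent channels (an illustration only, nothing like the decoder's hand-optimized code; the tap count and band edges are just plausible values):

```python
import numpy as np
from scipy.signal import remez, fftconvolve

fs = 44100
# ~1500-tap FIR Hilbert transformer (1501 taps; the odd length gives a
# type-III design).  The approximation band must avoid DC and Nyquist.
taps = remez(1501, [100, fs / 2 - 100], [1.0], type='hilbert', fs=fs)

# A 4x8 = 32-slot batch of band/channel signals, filtered independently.
# On a CPU this is a loop; on a GPU the 32 convolutions could run as one
# batched FFT convolution -- which is the whole attraction.
batch = np.random.randn(32, fs)                 # one second per slot
shifted = np.stack([fftconvolve(x, taps, mode='same') for x in batch])
```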

 

If I have the energy and powerful enough graphics card to test with, I might be able to put together an experimental Linux version of the decoder which uses the video card.

I don't know the design requirements and execution characteristics of video cards (even though I played with them about 5yrs ago), and I am not sure there would really be a speed-up, but it would certainly be nice if the very highest quality mode could run reasonably quickly.   The normal quality mode would most likely NOT be sped up, though.

 

The decoder already squeezes almost every cycle out of normal AVX2 (or other) machines.   There IS an AVX512 version for Linux machines, but there is really no advantage to AVX512 on current CPUs.

 

Just thinking about this...

 

John

I fully expect the release in under 15Hrs from now.   I want to make the decoder available for the weekend.

 

The command syntax will usually be something like this:

 

--fb=a

--fb=b

--fb=c

 

These will likely decode 1/2 of pop recordings without additional arguments.   Most will likely be '--fb=b'.   There are also some modifiers.   The Carpenters 1970 album is an outlier, with slightly different parameters.   Without fully explaining the command, it will be this (again, an atypical recording):

 

--fb=b,nnn,1ww,0

 

Which basically means: use the 'b' mode, normal ('nnn') settings for the next three fields (the high, medium and low frequency modifiers), a 1:2 M/S decoding image ('1') with a 1.414 ('ww') output image, and a calibration offset of 0.   Most of the time, as with symphonic recordings where the stereo image modifier is needed, the command might be '--fb=b,nnn,1,0'.   THESE ARE VERY EASY SETTINGS TO DO -- once you try it a few times.

 

* Many of the recent demos might have been done with the equivalent of the '--fb=c,lnn,1ww,0' command, which means that I was starting with the wrong decoding mode.  I make mistakes like this because of my hearing; the resulting sound would have been a little 'tinny'.   Sorry about that -- I am fighting an extremely variable hearing issue.

 

If you remember that the second argument is usually 'nnn', that covers most cases.   It isn't necessary to type the arguments when the default values are used.

 

Most recordings, by far, won't need anything different from 'nnn'; the most likely change, if any, will be a variant of the decoding image modifier (the 'ww' field).

More often than not, when specifying the decoding image modifier, you'll specify '1' (which means a 1:1 decoding image, often used for classical recordings.)

 

The final argument (the final number, '0') is the calibration offset.  It seldom needs to change, but needs to be '0' for the Carpenters 1970.   Most recordings use the default of -2.
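
Summarizing the fields that I did explain (a few minor things might still change):

--fb=<mode>,<hf><mf><lf>,<image>[ww],<cal>

where <mode> is one of a/b/c; the next three letters are the high/medium/low frequency modifiers ('nnn' = all normal); <image> selects the M/S decoding image, with an optional 'ww' requesting the 1.414 output image; and <cal> is the calibration offset (default -2).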

 

I didn't explain the entire syntax, and a few minor things might change.

 

 

The design tradeoffs included:

 

1) minimal typing

2) minimal # of variables

3) consistency of syntax

4) very little memorization needed other than a simple command template.

 

NOTE:  All decoding demo issues have been caused by my poor hearing.

 

By far, most of the 'knobs' are available in the command example above for those who have reasonable hearing!!!

 

John

I apologize -- things got busy here.   The information is the same, but I need to delay for a day (until tomorrow.)   I tried everything to get it ready, but ran out of time (personal matters.)   I'll try to have it ready as early as I can.   Also, there is ONE technical matter -- and I don't want any more egg on my face -- so I am reviewing it very carefully.

 

John

Again, I apologize for the delay...   The delay is very much worth it -- the amount of time and frustration involved in using the decoder is being VERY GREATLY diminished.   Every time I start the release process, I keep finding ways to merge the commands, further simplifying usability.   Right now, a new testing cycle has started, while I am also looking for new recordings to test!!!

 

Right now:   the decoding commands are:

most classical, supertramp, most instrumentals:  --fb

most pop: --fc

older pop, dead sound: --fd

ABBA:  --ff.

 

I am chasing down every recording that I can find, and making sure that the EQ is correct for each one.   All of this is about post-decoding EQ -- the decoding itself is, for all practical purposes, perfect.

 

This is a super-massive effort, chasing down every recording that I have.   Note that the '--fe' switch is missing.   It is part of the logical progression, but might not be needed.   I believe that I have probably found all of the combinations.

 

Note that if the settings are not the defaults, the stereo image setting and calibration still need to be set by the user.   There will be simpler commands for those.

 

There is NO need for submodes, even after testing about 20-30 recordings so far.   This has greatly simplified decoding.

 

Previously, I thought that the decoding could be collapsed into a single command, but that only seemed to work because a midrange EQ error was covering up the other EQ errors.

 

I am getting full-depth, very clean results -- it is just about testing right now, perhaps finding any outliers that are not handled.

(The submodes still exist, but I am not going to document them -- because they should NEVER be needed now.)

 

The wait of a few extra days will probably save DAYS AND DAYS of searching for correct EQ -- which would previously have been approximate.   The new settings are common amongst many recordings -- probably showing that we are finally getting the precisely correct decoding EQ.

 

John

Still delayed, but for the benefit of the program users.

 

Down to 2 decoding modes, and doing all kinds of EQ optimizations to make sure that the EQs are correct.   The more correct the EQ, the less likelihood of decoding variants.

The only known variant right now is a slight adjustment to the highs.   The decoder *MUST* do these adjustments, which are very tricky to do manually.   This means that I have to get the EQ right, because the casual user cannot easily do the EQ for themselves.  (I can explain the formulas, but why bother at this point.)  The 250Hz offsets have been more important than I had ever realized.   Also, the offsets aren't across TWO EQ paths, but THREE.   This has made a difference in some of the EQ architecture, but in the longer run (over the day since the change), it has also simplified the EQ scheme to some extent.

 

I'd rather give you something really, really good than something to just get by with.

 


Delay has been worth it.   Probable release tomorrow; doing demos now.

The command that enables FA mode:

--fa

(nothing else.)

 

The commands that can likely change from album to album:

* calibration (either --coff= or --toff= or --tone=; each is the same thing, just a different way of setting it.)  --coff is easiest (-2, -1, 0, 1, 2), where 0 is the default.

* stereo image:  --fw=classical (if needed), and output stereo image:  --stw= (-2,-1,0,1,2).   For normal pop the default is '1'; for classical, '0'.

(Some pop uses classical mode and vice versa.)
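
Putting those together: beyond a plain '--fa', a typical classical decode might look something like this (just an illustration of combining the switches above):

--fa --fw=classical --stw=0 --coff=0

while normal pop will usually need nothing more than '--fa' with the defaults.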

 

I expect the release (seriously) tomorrow evening (21:00 EST USA time.)

The wait has been worth it, because the decoding command now appears always to be --fa.   I will reserve some other modes just in case, but the entire set of demos (running now; uploading tonight) uses --fa -- or --fa=M right now.   I need to make --fa default to --fa=M before release.

 

 


V1.9.6C is ready.

I made a single numerical mistake for the demos, so they will be delayed for a few hours.

 

The decoder produces BEAUTIFUL results in my estimation, but feedback is welcome (esp. about too-strong low bass.)   I think that the response is now balanced -- but my hearing is not trustworthy.  (The EQ seems to fit the correct pattern, but there are still variables which I sometimes have trouble hearing.)

 

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

 

I will provide the URL for the demos in a few hours.

 

John

36 minutes ago, John Dyson said:

V1.9.6C is ready. [quoted in full above]

 

Please hold off for an hour or two.  I scrambled something.

 

There WILL be an answer within an hour.

 

19 minutes ago, John Dyson said:

Please hold off for an hour or two. [quoted above]

It is back -- I had done a test, got distracted, and left the test values enabled instead of the correct ones.  I hadn't looked at that section of the code for a day or so.   When listening for the highs (I had been working on the lows), I noticed something 'wrong'.   The problem is NOW fixed.

 

YOU WANT:  V1.9.6D

 

Sorry about stuttering the release, but I have been known to do this.  I really miss having a system-test phase of development.

This is one very complex piece of software -- it is just at the limit of my ability to picture...   Some day I'll document it, but the best form of source-code encryption is extreme detail and complexity!!!

 

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0


As many of you know -- I am having TERRIBLE problems with the LF range.   The high end is distortion-free, but the lows keep faking out my hearing.   I'll do another release when I can figure out how to rebalance the lows.   There are certain rules that must be followed, or there will be troubles.

 

I have about 24Hrs more before a necessary personal duty for a week.   Let me see what I can do about the bass.

 

To explain -- I know intellectually that most of what needs manipulation is the lowest frequencies, and that they need to be flattened (rolled off), but somehow my hearing is so screwed up that I get misled.   My hearing is trained to listen for distortion, which tends to push me to decrease the lower frequencies -- which are often somewhat distorted.   I think that the next attempt in about one hour, with a direct comparison, might help.  Why the hell is my hearing so screwed up?   The highs are now pristine, but part of the trick is that the 2k-4k region needs to join well (also the 100Hz range.)

 

The results are intended for OTHERS -- as I have given up on hearing the details that are being so damaged in our recordings...

I've been getting feedback, etc.   My hearing isn't working quite right, and while trying to interpret the feedback usefully, my fear of insufficient LF caused me to overshoot the goal.

 

There are all kinds of comparisons, tricks, etc. that I have to use on the LF band.  The HF range is easier in some ways, because errors there create all kinds of modulation distortions, gain-control effects, etc. that are very audible to me.   That isn't to say that the HF range is easy to deal with; it is just that I have better tools for it.

 

I originally got feedback about 'not enough bass', etc. -- and the bass in the previous mistaken release ended up really overwhelming -- but I cannot hear it; I only hear the effects of it overloading my headphones/electronics.

 

The version currently in testing (actually about 4 versions, for comparisons) is looking very interesting.

I apologize for the stuttered releases, but the LF range is a real challenge, and they used 1st-order EQ in a situation where I had no experience.   I can design the EQ mathematically; that is no problem for me.   However, without a spec, reverse engineering based upon material where the original goal is mostly unavailable is challenging.

 

I have a drop-dead time of approx 15Hrs from now, but I do believe that it will be ready by then.   The LF EQ changes have been profound, but the rest of the decoder is *perfect*.    This makes it especially frustrating that a simple LF EQ problem is holding up the project.

 

Remember also: THE ONLY FA INITIATOR LEFT NOW IS --fa.   There is NO MORE --fcs!!!   In fact, by the time the release comes out, I will have disabled --fcs with an appropriate error message pointing to --fa.   THERE IS MUCH LESS TWEAKING FOR DECODES NOW!!!

 

 

 


ANNOUNCEMENT:

The people working with / helping the project have told me that the decoder is not ready yet.  There have been troubles with the LF equalization -- but otherwise the decoder appears perfect.   I think that I found the problem, but I might be gone for about 1wk, starting in a few hours.   Instead of creating confusion, and perhaps making another mistake, the release will be delayed for about 1wk.  (If I can figure out how to move enough of my computing facility to my 'away' location, I'll do it -- but there are several frustrating logistical issues with such a move, even a temporary one.)

 

The wait for the decoder is worth it.   I really don't want people to have to fidget while using the decoder anymore, and it is so darned close to near-perfect, except for this final bug.

 

Again, I believe that I found the problem, and that previous attempts at solving it were broken by over-design.   Everything else is pretty complicated, but the LF EQ just might be fixed by two single-pole EQ stages; alas, there is no time for testing.

 

John
  • 2 weeks later...

There are still some delays...   But I want to set expectations, because the result will definitely sound different from the original, messed-up consumer garbage.

 

1)  The normally available consumer material has an artificially boosted lower midrange, which comes from both EQ and LF distortion.   The EQ is corrected by decoding, which removes that tubby FA sound.   It takes a while to get used to the missing tubbiness -- even expert audiophiles have gotten used to it.  I regularly listen to copies of actual masters, never touched by the distribution damage.

 

2) The decoder should be relatively flat, but I have been fighting against a 3dB peak at 9kHz.   That artificially brightens vocals and creates an edge.    The peak is relatively narrow, and that is why I didn't notice it.   The current code has the peak removed.   I am still working on other aspects of the EQ -- the EQ needs to be spread amongst the decoding layers rather than applied once at the input/output.   If the per-layer combination is correct (the previous version was, except for the peak), the frequency response stays consistent regardless of the number of decoding layers.  (See the sketch after this list.)

 

3)  A recent complaint about certain sounds disappearing is a valid observation, but respectfully, an invalid complaint.   Low-level, esp. high-frequency sounds on FA recordings are artificially boosted, and the listener might have gotten used to the 'boosted' version.   Sometimes the decoder will push near-noise-level signals into near oblivion.   That is how noise reduction works.
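
To see why the per-layer EQ matters so much, here is a trivial numerical sketch (made-up numbers, not the decoder's): the layer gains multiply, so dB errors add, and even a small residual bump at 9kHz grows with the layer count:

```python
# Hypothetical residual per-layer peak at 9 kHz.  Cascaded gains
# multiply, so the dB errors simply add with the number of layers.
per_layer_peak_db = 0.4
for layers in (1, 4, 7):
    print(f"{layers} layer(s): {layers * per_layer_peak_db:+.1f} dB at 9 kHz")
```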

 

When the new decoder comes out, PLEASE don't be disappointed that the screwed-up midrange of consumer recordings is missing.

 

The result is sometimes a 'thin' sound; sometimes the vocal midrange sounds too strong compared with the lower midrange.  The fact is -- the lower midrange SHOULD be diminished.

 

There has been a total refactoring of the EQ to get rid of the 9kHz boost.   The refactoring is producing very similar results -- minus the 9kHz peak.

 

I'd expect a few more weeks of delay.

On 2/11/2021 at 6:35 PM, John Dyson said:

There are still some delays... [quoted in full above]

Hi John,

 

I have been following your work online over the last few days, since I had reason to investigate DolbyA in depth, and I 100% agree with you on the leaking of DolbyA into the consumer world.

 

I have no direct examples myself, but from my work and experience within the record industry I can 100% see how this can happen easily.

 

I have a rather unique view of the industry in that I was brought up within the worlds of recording, manufacturing and sales. I have literally experienced everything from recording in pro studios to home recording, and from vinyl cutting and pressing to cassette manufacture and CD production (not directly CD pressing) at a professional level.

 

I am also a self-taught developer, and whilst I have never really worked on anything as low-level as you do, I love code and how things work. On top of this I am also a qualified electrician and have some minor qualifications in electronics engineering. The whole problem and your solution fit my interests like a glove.

 

Masters get passed between various companies or departments for various releases and licences, and it would not be common for the original master to be sent, for obvious reasons. I feel that a large margin of error can be attributed to copy masters. A simple labelling error could easily cause DolbyA to leak into a manufactured product. It is unfortunate, and I can 100% guarantee that in all but a few isolated cases this would not have been done deliberately. It would simply be a case where the engineer who mastered the release for the final format was not the engineer who originally made the final mix master or production master. In fact, the original engineer would likely not have heard the music again until after it was released to the public, unless there was reason to do so.

 

E.g., in the case of cassette, a master would be sent for production, and the master would then have been transferred to a "loop bin" master (before the days of digital bins) by the factory engineer. If the incoming master (from the original mixing/mastering engineer) was not labelled, then the transfer to the (likely 1-inch) loop-bin master would not have been passed through Dolby. This could of course happen in reverse as well, where a master was incorrectly labelled as DolbyA but was actually not, and was inadvertently decoded.

 

Of course the same principle also applies to vinyl, CD and every other format, consumer or not (DAT copies, multitrack masters where some tracks have DolbyA and some overdubs do not). The whole thing is a minefield, but one that needs highlighting at both professional and consumer levels.

 

I actually found your work because I felt I had a master, transferred from tape, that was not right at all. I was altering the EQ to some hellish levels to make it sound anywhere near reasonable. I decided that the master had been incorrectly DolbyA decoded, and as a rough test I passed it through the recording stage of some DolbyA hardware to see if it sounded more like I expected it should (i.e., I applied DolbyA encoding to the muddy, compressed audio). The difference, although not right at all, was like night and day.

Now, I have not managed to take this any further, and I need to get the original tape (which is labelled DolbyA and has Dolby tones) transferred again, this time making sure it is done without any Dolby processing, so I can compare the sound. I highly suspect that either the Dolby unit was left in bypass after the tones were recorded, or that the tape in question is a copy master which was decoded when copied but with the tones remaining. Either possibility could have resulted in a master that was not DolbyA encoded but was then decoded on digital transfer, due to the tones and labelling, giving me this muddy mess to work with.

 

After finding your posts and software and reading the comments, it also became apparent that this could have happened before on some items that I am associated with (not directly). I have seen comments from consumers about items that are now under my control but were released in the '90s to early 2000s, where the releases are slated for sounding thin and not at all like the vinyl. Now, we both know that vinyl has a unique sound, and the opinion was always that we only had what we had to work with -- the transfers from tape were what they were -- so we had all but dismissed those comments as having no merit. Many of the releases were worked on by professional engineers, but of course they were not the original engineers, and in many cases the recording artists were not involved in the re-release for various reasons. But... leaked DolbyA would make sense, so I am going to be investigating this in due course.

 

I am also open to the idea that some masters may be a mixture of DolbyA and non-DolbyA tracks, as this could easily happen when compiling a production master into the correct track order, etc.

 

I am really interested in working with you in any way I can on your decoder, and I am also interested in how I might obtain a copy of the "professional" version that is licensed to use all the command-line switches etc., so I can really evaluate the added benefits vs. DolbyA HW. Whilst my electronics knowledge is not at a level where I understand the technicalities in detail, I do understand the basic principles of capacitance, inductance, reactance, impedance, etc., and I can, at a very high level, imagine that a software solution could produce a cleaner and more accurate result than analogue processing ever could.

This got me thinking not only about the degradation of electronic components in existing units, but also about how aspects such as operating temperature could alter the response of some components (especially older ones).

 

In conclusion, I am 100% with you all the way, and I have a grounded knowledge of the inner workings of the record industry at all levels. I would love to discuss things with you directly, including the options relating to your "professional" version of the product.

 

I am not sure of the best way to contact you directly, but I have found your profile on LinkedIn and sent you a message there, so maybe we could exchange details through the LinkedIn messaging system?

 

Regards

 

Calum

1 hour ago, CAL-TEC said:

Hi John, [quoted in full above]

I am best contacted through email or PM (private messaging here at AS.)   If you send me a PM, I'll send you my email address.   I don't do LinkedIn very much -- I'm not really looking for a job at all.

 

About DolbyA -- if you want a practically perfect decode of a DolbyA image, you can use the FA decoder for that; I would happily create a license file for you to enable 'professional mode', so that DolbyA decoding can be done with the FA decoder.   Just a few weeks ago, this current line of decoders privately got rave reviews in DolbyA mode.*   The clarity and detail are far, far better than a true DolbyA, while still providing full noise reduction and restoration of dynamics.   I was SO careful to match DolbyA behavior, but better.   (The DHNRDS is a feedforward design with the needed feedback adjuncts, while DolbyA units use direct audio feedback schemes.)   There are very few degrees of freedom in a direct audio feedback scheme, or even in the parametric scheme suggested by the Sony DolbyA patent.   The additional degrees of freedom utilized by the DHNRDS allow some advanced processing that strongly mitigates the distortions naturally created by a fast gain-control processor.  The difficulty with a feedforward scheme is that modeling a complex nonlinear feedback system and accurately unfolding it into a feedforward system is very nontrivial...
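
To make the topology difference concrete, here is a toy one-band downward expander in both forms (a bare-bones Python sketch of the general concept -- NOT the DHNRDS algorithm, and not DolbyA's actual gain law; all the constants are made up):

```python
import numpy as np

def _release_coef(ms, fs):
    """One-pole smoothing coefficient for a given release time."""
    return np.exp(-1.0 / (ms * 1e-3 * fs))

def expand_feedforward(x, fs, ratio=2.0, thresh=0.1, floor=0.1):
    """Downward expander, FEEDFORWARD: gain computed from the INPUT.

    The detector sits outside the loop, so the gain law can be shaped
    freely (lookahead, distortion mitigation, ...) -- many degrees of
    freedom."""
    r = _release_coef(50.0, fs)
    y = np.empty_like(x)
    env = thresh                            # start at unity gain
    for i, v in enumerate(x):
        env = max(abs(v), r * env)          # instant attack, slow release
        gain = min(1.0, max(floor, (env / thresh) ** (ratio - 1.0)))
        y[i] = v * gain
    return y

def expand_feedback(x, fs, ratio=2.0, thresh=0.1, floor=0.1):
    """Same expander, FEEDBACK: gain computed from its own OUTPUT,
    roughly how the analog units behave.  The detector sits inside the
    loop, so everything is recursive -- few degrees of freedom."""
    r = _release_coef(50.0, fs)
    y = np.empty_like(x)
    env = thresh
    for i, v in enumerate(x):
        gain = min(1.0, max(floor, (env / thresh) ** (ratio - 1.0)))
        y[i] = v * gain                     # output depends on past output
        env = max(abs(y[i]), r * env)       # detector inside the loop
    return y
```

Unfolding the second form into the first, while keeping the decode numerically exact, is the nontrivial part I mean.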

 

* Until a few weeks ago, even my own confidence in the DolbyA quality was somewhat conditional.   Recently, my confidence has increased markedly.   The real frustration is that some users were provided earlier versions of the decoder that weren't as good...   I really mean it -- the current version really opens up the sound, while maintaining almost exactly the same dynamics and response balance.   Earlier versions tended to over-enhance the sound, which is NOT what you want.   DolbyA decoding must simply be accurate -- it must not modify the intent of the recording in any way.   The DHNRDS is probably as accurate, in the important ways, as encoding on one DolbyA unit and decoding on another.   There *are* some bumps in the important frequency ranges that don't exist on a true DolbyA, but those bumps are on the order of +-0.25dB or so.   This small lack of flatness comes from the feedforward design and the need to emulate analog filters in the digital domain; IIR emulations of analog filters just don't work in the way needed for this application.   So, there is some really hard-core, brute-force processing to make the decoder as good as it is.

 

The FA project (which is the consumer decoding) is very difficult -- it requires a very accurate DolbyA decoder, multiplied by 7.   Also, the EQ and levels between each step must be precise.   The FA design is NOT documented anywhere, even though most consumer recordings appear to be FA encoded.   The reverse engineering of FA is very tenuous -- there are not even schematics of ancient HW to refer to.   Of course, there are DolbyA schematics floating around, but DolbyA is only one component of the FA process.
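
Structurally, then, FA decoding is a cascade -- something like this skeleton (placeholder stages only; the accurate DolbyA step and the per-layer EQ/levels are the hard-won parts, not shown):

```python
def dolbya_decode_step(x):
    # Stand-in for one accurate DolbyA-style decoding (expansion) step.
    return x

def interstage_eq(x, layer):
    # Stand-in for the per-layer EQ and level trim.  These must be
    # precise: any error here compounds through every following layer.
    return x

def decode_fa(x, layers=7):
    """FA decode as a cascade of DolbyA-style steps (structure only)."""
    for layer in range(layers):
        x = interstage_eq(x, layer)
        x = dolbya_decode_step(x)
    return x
```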

 

Over these years, the FA mode of the decoder has gotten close, but always seemed to have flaws -- mostly because of the necessary guessing in the reverse-engineering process.   There have been criticisms of the FA decoding results -- some valid and helpful, some just wrong but still validly registered.   I do believe that people will be very surprised by this refactored/reworked decoder, where layers that haven't materially changed for about 2yrs are now being corrected.  The corrections and improvements come from about 2yrs of very painful experience...

 

I believe that I just had another breakthrough in the last hour -- up until about 1Hr ago, it appeared that there were three styles of LF EQ in use on FA recordings, because the resulting sound differed in three distinct styles.   After the *total refactoring* of the FA decoder design, re-calibrating every step, and trying several intuitive kinds of level setting, I found a combination of levels between each layer (10dB chunks) of expansion (decoding) where the differences in sound are resolved simply by the levels in the recording itself.*

 

* About the 'intuitive' level/EQ setting...   It is NOT tweaking, but using intuition -- somehow 'reading the designer's mind', using an intuitive understanding as an EE of what the system is intended to do.   There are usually several plausible choices, one of which produces perfect results.   The other, not-so-good choices often produce plausible but imperfect results.   Revisiting the whole design with the learning of the last few years, it is sometimes easier to make the truly correct choices -- though I still sometimes make mistakes.

 

Anyway -- contact me -- I might even be able to help with the troubles that you are having.   I am not an audio engineer in any way, but I have developed some small amount of knowledge about what is going on.    At least we can communicate about these things -- maybe with some benefit!!!

 

John

There will be a release within a few days.   I cannot imagine a delay beyond early Saturday morning (+2-3 days.)

 

The last remaining problem is controlling the <50Hz levels.   I am still working on it, but it WILL be easy to fix.

 

The most difficult ongoing issue has been dealing with the 1st-order EQ, which is a very different animal from normal 'tone control' EQ.   If the 1st-order EQ isn't correct in a certain way, the results can be ugly, with a sense of distortion.

 

The sound is VERY GOOD and clean, and I suggest listening to these snippets.   THEY ARE REALLY, REALLY GOOD.   The only problem is that some still have a bit too much 10-30Hz in them, and that LF can be disturbing.

 

All of the modulation distortions are fully controlled, and another scheme has been added on top of the existing ones.   The sound should now be as clean as possible.   The release should be coming out in just a few days.   The repository site is getting Linux versions from time to time, but once there is an official release there will, of course, be Windows versions of the decoder.

 

I admit that the decoder hasn't been perfect in the past, but I personally feel that the sound of FA is so disgusting that almost anything that helps mitigate it is an improvement.   Not everyone agrees with this sentiment, but this decoder version should make the pickiest audiophile happy (modulo the LF problem that I mentioned above.)   The only real problem with fixing the LF issue is determining the best way to give access to the remedy.

 

THIS IS A VERY VERY DIFFERENT ANIMAL:

 

https://www.dropbox.com/sh/tepjnd01xawzscv/AAB08KiAo8IRtYiUXSHRwLMla?dl=0

 


I have just had a listen (only on an average set of headphones) and they do sound really clean. I will have another listen in the morning on some proper speakers and give you some feedback.

 

Is there any chance you could add the FeralA source material prior to decoding so that it is possible to compare the source to the result?

21 minutes ago, CAL-TEC said:

I have just had a listen... [quoted above]

Sure -- I'll pull something together.   The big problem is that I am running out of Dropbox space, but I can move some things around and make the source material available.  Give me a day or so and I will happily put them together!!!

 

About using real speakers -- heed my comments about the excessive LF.   It isn't a dangerous amount of LF, but it is strong.

 

 


I also want to warn about the ABBA examples.   I think that my hearing is playing games; ABBA sometimes needs a 9k to 18k (or 21k) rolloff.  I don't know -- my hearing is NOT trustworthy.   It is easy to add the EQ, but I don't know whether or not to add it.

 

When doing the comparison, though -- LISTEN CAREFULLY FOR THE DETAILS BEING COHERENT.   The EQ fix itself is simple; it is just that I cannot reliably judge it by ear.

 

 

7 hours ago, John Dyson said:

I also want to warn about the ABBA examples... [quoted above]

Good news -- my hearing has come back (hopefully not just temporarily again), and I'll re-insert the equalizer that I wasn't sure was needed.

The infrastructure and code exist, but from time to time I get fooled about whether it is needed.   Even trying to compare against references that I KNOW are correct doesn't help.

 

I am really trying to get this done before my hearing goes permanently!!!   I'll create corrected demos within hours, and a release by Saturday night.

 

Sorry about the misfire, but my hearing is REALLY strange nowadays.   I am listening to a very clean version of Supertramp right now, and it is NOT overly crisp like some other demos were.   GOTTA GET THIS WORKING TODAY BEFORE MY HEARING GOES NUTS AGAIN.

 

John