
'FeralA' decoder -- free-to-use



18 minutes ago, PeterSt said:

 

As far as I can tell, on April 29 I reported on the "L" version current at the time and explicitly called the highs OK. You can read it back in your PM.

I don't know about highs in later versions because other things bothered me.

 

Peter

Yes -- good.   I think that the problem ended up being the specific decodes -- the early Carpenters albums need EQ, and I think that I didn't do the corrective EQ completely/correctly.

 

About the more recent versions -- the decoder has been getting positive feedback, including from other places in this world.   The HF thing panicked me, because I thought that it was something being overlooked, or that people were not telling me.  I perceived it as conflicting information and a setback.   The general HF scheme/requirements are not the typical scheme that most EEs or audio people would even consider.   Trying to do the straightforward thing will create distortion, which is one reason why some versions temporarily had a moderately good response balance but also had other, related and very serious defects that weren't being considered by others.   Sometimes superficially 'okay' is still pretty bad -- I am chasing every detail, far beyond the obvious.

 

When people try to help and say that they like the frequency response characteristics, that is all well and good.   However, I hear other distortions, and have to figure out how to play decoder 'tetris.'  Sometimes things have to be backed out and another path taken.   This is NOT like just throwing EQ at a problem.   If it were just about EQ, the problem would be TRIVIAL - TOTALLY TRIVIAL.

 

Also, I did find that Supertramp is doing the old peaking trick; I have narrowed it down to 6kHz, 9kHz and 12kHz.   They didn't use a regular equalizer, but instead the old trick of mixing Q values at each frequency.  (If you use two high-Q EQs, one at +dB and the other at -dB, very nice peaking/edge enhancement can be done.)  The 'peaking' can help punch through the FA distorted signal.   When decoding Supertramp with super accurate and perfect tracking (correct pre/de-emphasis), there is no smearing.  I had similar problems when using ABBA as test source -- they do some odd things on each album.   This does worry me: as the decoder becomes more accurate, the variations in the 'mastering' become more obvious.
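The paired boost/cut trick can be sketched with ordinary biquads.  This is a guess at the technique, not the actual settings used on those recordings: the frequencies, the Q of 8 and the +/-6dB below are illustrative, and the RBJ 'cookbook' peaking EQ stands in for whatever console EQ was really used.  (An identical +dB/-dB pair at the same f0 and Q would cancel exactly in that design, so the pair is offset in frequency here, giving the dip-then-peak shape that reads as edge enhancement.)

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, q, gain_db, fs):
    """RBJ-cookbook peaking-EQ biquad: boost (+dB) or cut (-dB) at f0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def edge_enhance(x, fs):
    # Pair a narrow +6dB boost at 6kHz with a -6dB cut slightly below it.
    # (6kHz/5kHz, Q=8 and +/-6dB are illustrative values only.)
    b1, a1 = peaking_biquad(6000, 8, +6.0, fs)
    b2, a2 = peaking_biquad(5000, 8, -6.0, fs)
    return lfilter(b2, a2, lfilter(b1, a1, x))
```

The net response is a relative peak at the boost frequency flanked by a dip, which is the 'edge enhancement' effect described above.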

 

I admit that earlier versions haven't been perfect, and I do listen to people, but again -- I need technical language.   A few minutes ago, I got confused elsewhere when someone talked 'music speak', which is outside of my area of expertise, just like text that is overly peppered with audiophile-speak.   However, by taking a guess about what they were talking about, I figured it out.  It was yet another case of FA distortion when two bands were cleaving, causing apparent (but not real) nonlinear distortion.

 

 

 

 

 

Link to comment
9 minutes ago, jabbr said:

Simon & Garfunkel's "Sound of Silence" is improved, with a reduction in background hiss and edge using --coff=-2 --fa --xp (3.0.2H)

Thanks!   One thing that you might find on S&G is that their master tapes sometimes have extreme sibilance on the 'S' sound.   This is a known fact about the master tapes, and the decodes can also have the problem on some songs.   My 'solution' is to use '--as=39' for an anti-sibilance scheme.   Currently, that is the minimum amount of sibilance processing, but it can still have some negative effect on other HF dynamics.   This anti-sibilance scheme is one item on my 'fixit/improveit' list.   (The anti-sibilance is a variable notch.   It is not always active even when enabled; it almost acts like a notched compressor.)
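For what it's worth, a 'notched compressor' style anti-sibilance stage can be sketched as a notch that engages only when energy in a sibilance band crosses a threshold.  Everything here (the 5-9kHz detector band, the 7kHz notch, the -30dB threshold, the block size) is an illustrative guess, not what '--as=39' actually does, and a real implementation would carry filter state and crossfade between blocks instead of switching hard:

```python
import numpy as np
from scipy.signal import butter, sosfilt, iirnotch, lfilter

def anti_sibilance(x, fs, f_notch=7000, band=(5000, 9000),
                   thresh_db=-30, block=512):
    """Crude 'notched compressor': engage a fixed notch only on blocks
    whose sibilance-band energy exceeds a threshold.  All parameters
    are illustrative guesses, not the decoder's actual --as settings."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    bn, an = iirnotch(f_notch, Q=4, fs=fs)
    sib = sosfilt(sos, x)              # sibilance-band detector signal
    y = x.copy()
    for i in range(0, len(x), block):
        seg = slice(i, i + block)
        e = np.sqrt(np.mean(sib[seg] ** 2) + 1e-12)
        if 20 * np.log10(e) > thresh_db:   # notch only when sibilant
            y[seg] = lfilter(bn, an, x[seg])
    return y
```

Non-sibilant material passes through untouched, which is the "not always active even when enabled" behavior described above.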

 

Also, about 'sibilance' -- along the same lines, since the decoder is becoming more accurate, I can detect another form of enhancement as used on some Supertramp material.  I think that I discussed it earlier, but it is easily mitigated.  (I should probably create a built-in command instead of needing to use a sequence of crafted EQs.)

 

No matter what, I am trying to find any bugs, and I do think that the program is becoming more worthwhile to use.  However, bugs are always going to be there, and there will always be opportunities to make the program easier to use.

 

The V3.0.4C release is not yet official, but it is available.   You must know by now that it is very difficult for me to resist making changes to the program.   While doing the massive decodes to prepare for the release, I have not had the slightest temptation to edit the program (V3.0.4C) for days (as of May 10).   This is a new milestone :-).

 

Thanks again, and very specific criticism is always welcome.  I want to make the program better and better, until no longer possible.  That should keep me busy for the rest of my life :-).

 

John

 

59 minutes ago, jabbr said:

Simon & Garfunkel's "Sound of Silence" is improved, with a reduction in background hiss and edge using --coff=-2 --fa --xp (3.0.2H)

Important:

One thing that is different between your suggestion and what I use is that I also use --fw=classical.   It might make it sound a bit cleaner.


Choosing whether or not to use --fw=classical (or even --fw=wclassical, which is sometimes very helpful) -- THESE ARE TRICKY/SUBTLE choices.   My prediction is that more mistakes will be made in choosing whether to use '--fw=classical' than in choosing between --coff=-2 and --coff=-4.

--fw=classical often takes several reviews for me to decide.

 

What should one expect when making the correct choice?   The edges are cleaner, and the stereo image for various specific instruments will be more stable.  How does one 'teach' someone else to make the correct choice?   I don't know -- I have trouble with --fw=classical also.   It seems to be normal to make a mistake on that.

 

BTW -- I COULD BE WRONG!!!

 

 


Official release for V3.0.4C.

https://www.dropbox.com/sh/5xtemxz5a4j6r38/AADlJJezI9EzZPNgvTNtcR8ra?dl=0

 

Demos for V3.0.4C:

https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0

 

V3.0.4C has actually been available and in testing for the last two or three days.   It works really well, and there has been ZERO temptation to modify V3.0.4C for quality improvement reasons.  Normally, by the time I do a release, I am already working with a version at least two or three minor changes ahead (I won't release something that I know to be bad.)   This is NOT true of this release -- I see no need to make changes to the decoding algorithms AT ALL.  I am using the V3.0.4C release AS-IS for my own purposes.  (I normally use the AVX512 version, but also use AVX2 from time to time.)

 

My 10-core computer has been active 90% of the time doing decoding tests, also burning up my hearing -- trying to find all of the nuances of the various recordings and, more importantly, trying to find actual bugs or needed corrections to the decoder.  THIS IS THE TIME FOR PRECISE TECHNICAL CRITICISM.   I appreciate 'sounds good', as it raises my mood a little.   I REALLY appreciate constructive criticism like 'the highs are edgy', or 'there seems to be too much <30Hz leaking through', etc. -- those can help finish the development.   When clear/clean criticisms are contributed, it either offers me a chance to explain the issue, or gives me a pointer to a potential problem to be fixed.  Since the decoder is a complex multi-band dynamic gain device, certain measurements will produce nonsense, but if I can figure out some useful measurements, I'll make them available for reviewers and testers to use.

 

I LOVE constructive criticism.  However, even though it might sometimes appear that I don't listen to criticism, and I sometimes might seem sour about it -- I hope people realize that there have been very negative and undeserved comments about the project, its goals, and even me personally.   SOMETIMES I am defensive as a learned response, but I do believe that the results have serious credibility now.  The project is almost indisputably valid, and very importantly, thank goodness a few good people eventually got it through my 'thick skull' about my poor hearing...  My hearing problem has wasted at least several months, if not a lot more time (I actually estimate a year or so.)

 

Be aware that recordings need some post decoding EQ.   That is NOT a decoder bug, but simply what happens when the recording is modified to sound good with FA encoding.  MOST (by far most) recordings will be okay with a straight decode.   For the demos, I'll publish the variants soon (probably Monday or after)  -- just so you can more easily reproduce my results, or do even better.

 

The demo/archive sites are available, and will continue to be populated over the next week.  I foresee that V3.0.4C is the beginning of final stabilization of the decoding algorithms.  Other than minor EQ command line changes, the addition of minor EQ features, an upgrade of the 2nd order EQ command line capability, and improvements to the anti-sibilance, the only real algorithm changes should be very minor.  These EQ command line improvements will be literally trivial to implement, so they will have zero effect on stability.   The existing compressors, the limiter, the input EQ -- those are all going to be mercilessly hacked out.   There will be other internals cleaned up/removed or improved.   You'll probably not see the effects of ANY of these clean-up changes.   Hopefully, we might find some features to add that actually make the decoder nicer to use.

 

Heads up about my availability -- I'll be gone between 14May and 16May -- so there might/will be 'radio silence' for those days.

John

 

 


During my testing, the decoder has been producing fantastic, but unfortunately slightly flawed, results WRT bass.   The only notable audio flaw (per feedback and my own reviews) is that the bass is a little muddy at times, and I just found the problem...  It is an exceptionally minor post-dynamics LF EQ sequencing issue, but even though it is technically *simple*, it is important to fix.   There will be a V3.0.4D release to fix this minor bass EQ problem -- +48Hrs from posting.   The bass is excessive to the tune of perhaps +3dB at most, depending on the bass in the source material.   Along with that release, the demos will be updated.  With 1st order EQ being used, 1st order EQ problems can easily appear to cause other, seemingly very distant problems -- believe it or not, improperly tuned bass 1st order EQ can make vocals sound distorted in the higher registers.  1st order EQ is a very odd thing to work with, nothing like the normal consumer EQ schemes.   Consumers probably aren't given 1st order EQ to work with very much BECAUSE it is so tricky.
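A quick sketch of why 1st order EQ is so odd to work with: its transition spans decades, so a bass shelf still shifts level and phase far above its corner, which is how a bass EQ error can audibly touch the higher registers.  The +3dB gain and 80Hz corner below are illustrative values, not the decoder's:

```python
import numpy as np
from scipy.signal import freqz

def first_order_low_shelf(gain_db, f0, fs):
    """First-order low shelf: bilinear transform of H(s) = (s + g*w0)/(s + w0),
    giving gain g below f0 and unity far above it."""
    g = 10 ** (gain_db / 20)
    w0 = 2 * np.pi * f0
    k = 2 * fs                     # bilinear-transform constant (no prewarp)
    b = np.array([k + g * w0, -k + g * w0])
    a = np.array([k + w0, -k + w0])
    return b / a[0], a / a[0]

# A +3dB shelf cornered at 80Hz still shifts both level and phase at 1kHz,
# because the 1st order transition spans decades -- unlike steeper 2nd order EQ.
fs = 44100
b, a = first_order_low_shelf(3.0, 80, fs)
f, h = freqz(b, a, worN=[20.0, 1000.0], fs=fs)
```

Evaluating the response shows roughly the full +3dB at 20Hz, but still a measurable fraction of a dB and a couple of degrees of phase shift at 1kHz.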


The previous V3.0.4C  test results for Supertramp have been slightly flawed because the wrong setting for '--fw=classical' was used.   The basic ABBA 'decodes' are totally uneventful nowadays.

 

A lot of things have been double checked in the decoder, almost every EQ design decision has been 'second guessed', and there are really no SERIOUS audio flaws anymore.   The bass problem mentioned above can easily be EQed out, but it is wrong to leave that minor bug in the program -- so it will be fixed in the next day or so.   The extra bass is a dB or so in the 25Hz-40Hz range.

 

With restored hearing, I am surprised at the improved clean-up of the recordings, without notable new artifacts (well -- again, the bass.)   My work-arounds for being partially deaf have also been very effective -- they require lots of time-consuming discipline, but at least no more embarrassing mistakes.

 

While being offline, I found some important, likely speed improvements.   More and more during testing, the results using the --xp and --xp=max modes have been almost amazing, very noticeably better than --fz or --fx.   These 'x' series modes are quite slow on normal 4 core CPUs, so I have been looking at improving the structure of the program.  The math is okay, but the ordering of the cache access operations is suboptimal.   Cache behavior is often much more important than the actual CPU speed, and the cache behavior of the program is exactly where the slowdowns reside.  (I ran some profiles, finding that the speed issues are definitely all cache oriented.)   If all memory accesses were full speed, the decoder could be as much as 10X faster.   It is ALMOST as if each internal block of 1500 samples is flushing the system cache -- that causes a serious loss of performance.
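The cache issue described above is about traversal order, not math.  A toy sketch of the two orderings is below; per-band scalar gains stand in for the real stateful band filters, and Python won't show the actual cache win -- the point is only the loop structure, where the chunked version touches each piece of the buffer once for all bands while it is still cache-resident:

```python
import numpy as np

def per_band_passes(x, gains):
    """Cache-unfriendly ordering: each band makes a full pass over the
    whole buffer, re-streaming every sample through the cache per band."""
    y = np.zeros_like(x)
    for g in gains:                      # one full-buffer pass per band
        y += g * x
    return y

def per_chunk_passes(x, gains, chunk=1024):
    """Cache-friendlier ordering: process all bands on one chunk while
    it is cache-resident, then move on to the next chunk."""
    y = np.zeros_like(x)
    for i in range(0, len(x), chunk):
        seg = x[i:i + chunk]
        acc = np.zeros_like(seg)
        for g in gains:                  # all bands touch this chunk once
            acc += g * seg
        y[i:i + chunk] = acc
    return y
```

Both orderings compute identical results; in a compiled implementation the second keeps the working set inside the cache, which is where the 2-3X (or more) improvement would come from.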

 


New release:  capable of 'perfect' results.  🙂    REALLY.

 

V3.0.5D is no-nonsense, and even the minor response bumps and phase errors have been corrected in this release.

 

The significant changes are in the bass EQ area again...   There was also a very minor change in the HF pre/de-emphasis.   The bass EQ is now very smooth and does the dynamic LF response scheme very accurately (perfectly.)   The anti-distortion parameters have also been optimized, but most of the time the improvement is only marginally audible.

 

All of the historically provided demo snippets are included, along with snippets from every recording on each of the 8 ABBA albums.   The ABBA decodes are as clean as ANYTHING available anywhere, including the well loved specialty remasters.   If you want your own 'perfect' copies, I can supply the decoding parameters...

 

The ABBA 'remasters' are in a subdir below the normal demos.   So are the new decoder releases.   Use the docs for V3.0.4C -- everything for users is exactly the same.

 

General demos:

https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0

 

ABBA snippets (all 8 albums):

https://www.dropbox.com/sh/o7wuk5og6b7e33e/AADMk2o30YniruQkJwS5YasWa?dl=0

 

Decoder binary:

https://www.dropbox.com/sh/5xtemxz5a4j6r38/AADlJJezI9EzZPNgvTNtcR8ra?dl=0

 

 

 


In the current release, the decoder is truly as audibly close to perfect as it has ever been, and probably as good as it needs to be.   However, I found two very minor opportunities for improvement.   One item is an almost irreducible fog created by the LF band being modulated by the gain control.   The next release tonight (in approx 9Hrs, ready for the weekend) will include a slight mitigation of the fog created by LF modulation (perhaps 3-6dB.)   This 'fog' is a natural consequence of the gain control as implemented by DolbyA type devices.   This is an improvement BEYOND the expectations of an equivalent HW design.

 

Also, there is a slight blurring of the super highs.   The super-highs are further cleaned up by ever improving pre/de-emphasis.   All of the demos up till now have been done without the slight improvement in the curve.   Most of the change is outside of the audio band normally used for sensing levels.   Still, the signal at 17kHz and above has a just barely noticeable effect in the more audible regions.   The two EQs in question are at 18kHz and 21kHz.   It required great care to decide to make this improvement, since it is far, far outside of my own hearing range, so I have no blatant audible indications of a bug.   However, there is now enough proof that the change IS an improvement.

 

Given these two improvements, without a careful A/B comparison, I doubt that anyone would notice a difference.   The eventual target is perfection -- and given the actual behavior of a HW design, the results are already beyond perfection.   The lack of strong fog effects in the higher frequencies compensates for any slight rolloff that might sporadically happen in the decoder design until tonight.   There is NO significant change in frequency response if measured by a static response measurement scheme...

 

As I had mentioned before, a simple, uninformed static analysis of frequency response is so conceptually defective as to be ludicrous.   An objective measure of the quality of the decoder needs to be deferred until the specification is released (if ever.)  If there were an objective measure, it would have already been used, just as one is used for the DA mode.

 

The decoder is NOT disappointing at all -- and it has been getting good feedback, only mildly conditional.

 

Where did I make mistakes in the past?  1) Using Supertramp as a basis for 'normalcy'.  2) Using ABBA as a basis for 'normalcy'.   If a normal, more common recording is subjected to a decoder that has been optimized for ABBA and/or Supertramp recordings, the result is mostly a strange midrange, which is exactly the behavior that was previously achieved.

 

With careful analysis, I found a possible opportunity for a 2-3X speed improvement in the highest quality decoding modes.   The normal inline programming is as efficient as possible, but a normal programming methodology ignores cache effects.   If the code is re-organized, the very expensive cache misses should diminish by 2-3X or more.   Cache misses are where the CPU spends its time, not really the heavy, high powered math.   Waiting on memory is a major impediment to higher performance.   I don't think that the cache usage improvements will help Atom type processors very much, but any 4th generation i5/i7 should see a very noticeable speedup.   With the monster sized 2nd level cache on 'X' machines and Xeons, the speedup should be profound -- the cost of whatever cache misses are left should be reduced to nil.

 

The release for tonight is already ready, but I am redoing the demos.   The release will go out tonight along with a large number of the demos.

There is NO reason to defer using the decoder today -- the release tonight will show only a very small, minuscule improvement.   The only reason why it is being made is that I know about it.

 

 

 


Frustratingly, I found an issue based on hearing, but in finding that problem, I also found another way to further clarify the recordings.   This is the reason why I rescinded the versions currently on demo.

 

This hearing problem is frustrating, because the differences are binary -- when I lose my HF hearing, I then make the wrong choice.   I believe that the current results are far, far beyond previous versions, but I still need to run tests.   Even then, I cannot be sure.

 

The decoder REALLY works well -- and any argument that it doesn't decode well is 'specious' -- however, the final EQ has been trouble.   Even when I compare the original/RAW FA with the resulting version with decoded/cleaned-up dynamics, I cannot tell if there is a frequency balance match or not.   If the result is too hard or too soft in the HF, I cannot compare reliably.  My hearing has been getting worse over the last few years, and my reliability has dropped significantly....

 

However, the dynamics are SUPER clean -- and NO hiss in comparison.   I know that people have accommodated to the defective digital releases, but recordings should sound a lot better than the crap normally being sold.  With a little help -- I mean, real help -- the decoder can be made complete.   I can supply the knobs, or work intimately with someone for about 1wk to resolve these issues.

 

A few comments aren't really all that helpful -- this is actual, INTENSIVE, BELL-LABS LEVEL REVERSE ENGINEERING, and I know that most people aren't used to this kind of thing, but there is a lot that someone can learn, if they want to help me overcome my extreme hearing loss -- and help complete the decoder.

 

Private message me -- and maybe we can complete this project.   It really does make an amazing difference, just use a tone control on the current or most recent results.   All of the 'missing midrange' problems are gone, and were easy to fix once I moved to different material for reference.

 

John

 

 

 

ATTEMPT AT PUBLICLY RESOLVING THE DECODER PROBLEMS
(THANKS BEFORE READING ANY OF THIS!!!)

(Wish this was as easy as FreeBSD, where I also developed a new,
novel VM mgmt scheme in 1992 -- one just being adopted by Linux in 2021.  Lots of fits and starts in FreeBSD
also -- new stuff is really difficult!!!)

I know that most people are tired of this, but without help here, I
will need to utilize other methods to resolve these issues.  I’ll have to find competent DSP/EE technical
types with good hearing, but I still believe that just a few changes to building-blocks can fix all known
problems.  Using other kinds of assistance might force the project to be commercial, but maybe
the project needs better marketing anyway?  It might need better organization like the FreeBSD project
had.  However, I really REALLY REALLY want to keep the program free.*

* GOOD NEWS -- just sped up the higher quality mode '--xp=max' to approx 2X faster, with more to come.   The '--xp=max' mode
was never intended for FA, but really does help A LOT.  Now, '--xp=max' is faster than '--fz=max'
used to be!!!  '--xpp' even appears to be useful, with 2X higher detail in removal of 'fog'.   Really
trying to make the best quality a lot more practical to use.



(NO NEED TO READ FURTHER UNLESS YOU LOVE BLATHERING DETAILS!!!)

The V3.0.8G release (or whatever is released in the next few days) will be a reference for criticism.   In the
new version, I will have applied EVERY hard-core, technically understandable criticism to the program (and some
of the apparently most technical criticism has been the most divergent from being helpful!!!)  I am trying DESPERATELY to remove my
hearing as a variable, but that has had varying levels of success.

Sometimes, simple descriptive criticism is most helpful.  When making EQ mods in dynamics software, it is NOT like applying
normal EQ -- things are really different.  Just adding 'bass boost' can do anything from affecting too wide a frequency range to
being totally useless because of phase cancellation.  Almost everything is VERY NUANCED because of the complex dynamics.

I have been aware of almost every defect -- but my hearing has been creating all kinds of response errors -- on the order
of 6dB or more!!!   Even the opposite of what is expected has happened -- too much high end!!!   Most of the time,
more than about 2-3 minutes of very soft, but intellectually intense listening will start causing some loss of HF hearing. 
MOST OF THE TIME, MY HEARING HAS FOOLED ME INTO THINKING THAT THE DEFECTS ARE FIXED...

Note about apparent ad-hoc 'adjustments':  the adjustments are very coarse grained.  Once the results become
approximately correct, most likely they will be almost exactly, if not exactly correct.   There is no 'in-between'
tweaking.   Using the wrong building blocks can sometimes alias the correct settings, but that is MY
problem to resolve.

The architectural conservatism is so strong that the eccentric mechanism for supporting apparently 'normal' bass
has been avoided.  The correction requires using an odd form of 1st order EQ.  The needed 2nd order EQ didn't
worry me as much as the modified 1st order EQ that makes the bass 'complete'.  THE NEW VERSION WILL SUPPORT FULL BASS.

Because of misunderstandings about actual applicability, technical feedback has sometimes been misleading, or has informed
about interesting but inapplicable details for this specific dynamics processing scheme.  I GREATLY respect
people's time and effort, so NO SENSE IN DOING
A DETAILED TECHNICAL ANALYSIS THAT IS ONLY OF MODERATE TECHNICAL INTEREST, right?

Probably best to either stay totally non-technical or be purely/completely EE/DSP technical.

Bottom line, NON TECHNICAL criticism, describing the sound might be best.

PLEASE try to describe in terms like the below (deviations in the frequency ranges below -- NO NEED TO BE
ABSOLUTELY PRECISE IN THE FREQUENCY RANGE.)   Similar non-technical, but reasonably well defined language is also helpful.
I'll try to interpret when possible!!!

Here is a list of technically specified frequency ranges where correction might be needed:
  • Super-highs: 15kHz to 20+kHz.
  • High-highs: 9kHz to 15kHz (change affected by 12kHz and 9kHz 1st order EQ).
  • highs: 2kHz to 9kHz (change affected by 3kHz and 6kHz EQ, lesser extent 9kHz EQ.)
  • midrange is the baseline (1kHz to 3kHz)
  • lower midrange:  two bands:  500Hz to 1.5kHz and 150Hz to 500Hz.
  • Upper bass:  75Hz to 150Hz
  • Lower bass:  20Hz to 75Hz.
------------------------------------------------------------------------------
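For reference, the ranges above can be captured as a tiny lookup, so a reported problem frequency can be mapped back to the named band(s).  The band edges are copied from the list above; the two lower-midrange sub-bands are merged here for brevity, and this helper is purely illustrative, not part of the decoder:

```python
# Named frequency bands from the list above, as (low, high) in Hz.
BANDS = {
    'super-highs':    (15000, 20000),
    'high-highs':     (9000, 15000),
    'highs':          (2000, 9000),
    'midrange':       (1000, 3000),
    'lower-midrange': (150, 1500),   # post splits this into two sub-bands
    'upper-bass':     (75, 150),
    'lower-bass':     (20, 75),
}

def bands_for(freq_hz):
    """Return every named band containing freq_hz (the ranges overlap)."""
    return [name for name, (lo, hi) in BANDS.items() if lo <= freq_hz <= hi]
```

So a complaint about "thud" at 50Hz lands in 'lower-bass', while an "edgy" 12kHz problem lands in 'high-highs'.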


Here is an example of known historical defects (with approximate technical descriptions):

Super highs too strong (often due to a poor choice because of bad hearing):
+3dB to +6dB from 9kHz to 20+kHz.  Most versions have had too strong super-highs.   Some have
also been the opposite.  This is probably the worst problem caused by my hearing.
The difference is usually ONE building block!!!

Hole in the lower midrange/upper bass (e.g. the V2.2.X series):
-3dB from 150Hz to 500Hz.  Some have given positive comments on versions that have these
kinds of defects.  This might be because of attempts to encourage me, but it also might
be a blind range for many, just like me.  The 'hole' has been on the order of -3dB or so.

Too little lower bass: -6dB from 20Hz to 75Hz...
This fault has been the most common.  I have been able to add the very-low bass ANYTIME,
but it requires using an atypical kind of 1st order EQ.  I have resisted using the scheme
because I haven't seen the creative method used anywhere else.  I have been worried about
phasing side-defects.  Out of exasperation, I have decided that it is okay to use the
modified inter-layer EQ, simply because the fault without using the scheme appears to be
a bass-damaging phase cancellation anyway.  This lack of lower bass is well understood;
just the prescription for the diagnosis was very worrisome.

*** Almost all versions, except the V2.2.X series and the current V3.0.8 series have too little lower bass.

*The only reason why I accepted the above defects has been a lack of reference,
frustratingly variable hearing, and extreme conservatism on my part.   I do not like adding or
changing things without very strong evidence.  The FA raw versions are NOT a good reference,
but they are the only version available.


=======================================

Comments that are helpful:
  • Super-highs too strong/too weak.
  • High-highs too strong/too weak.
  • highs too strong/too weak.
  • (midrange is the baseline, wherever that is)
  • Upper bass:  'sounds like hole in the sound', too 'thin'.
  • Lower bass:  'no boom or thud, too strong boom or thud'
A 'heavy' sound can come from too much upper bass or upper lower bass (e.g. 75Hz to 500Hz.) 

A 'woody' sound (often like FA undecoded) comes from too much 250Hz to 1kHz.

A ‘thin’ sound can come from missing ‘250 to 500Hz’ or ‘500Hz to 1.5kHz’ depending on the kind of ‘thinness’.

If you cannot nail down the exact frequency range, just try to estimate.

THANKS!!!
On 5/23/2021 at 3:04 PM, John Dyson said:

ATTEMPT AT PUBLICLY RESOLVING THE DECODER PROBLEMS

 

From several angles I start to have the impression that I am talking to myself.

Where are those other people?  Anyone?

 

On 5/23/2021 at 3:04 PM, John Dyson said:

The V3.0.8G release (or whatever released in the next few days)

 

There hasn't been any new release that I could see, thus:

I have been listening to the 3.0.5D "Demos" and I think I have a different kind of response than at other times:

 

Those Demos (with Carpenters, ABBA, Beatles, Al Stewart, etc.) sound better than they have done before. But now a next issue arises: they all sound the same. For me, as a provider of neutral sound, this is killing.

I think it is unavoidable that the dynamic EQ may not be so dynamic as you want, or else I won't understand what the EQ-ing is supposed to do. Anyway, it adds serious flavor.

 

I would like to add that, from very far away, those Demos could be seen as improvements of old recording times; that is, this is how others may perceive it. However, with a "best" reproduction system, those oldies are so enormously transparent to begin with that each small hint of sauce will be killing indeed.

If I can't bring this across, then bad luck.

 

The good news is that any track on a singular basis (not having the context of the others) sounds accurate, fresh and balanced. This is what you wanted. But John, believe me, this is where hardware development starts. So if I'd make a D/A converter without audible/measurable distortion but which has the same flavor all over (tracks and albums and artists), then very maybe one person who likes that flavor will not sell this DAC within a month (after that he will), but for me it will be a total failure. Mind you, I might be extreme in that "neutral" department, but coincidentally I create "extremely good" sounding products (by not letting them have a sound of their own). Your goal is about "extremely good" just the same. Now how to get rid of the flavor ...

 

Peter

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

2 hours ago, PeterSt said:

Where are those other people ? anyone ?


Me for one, as I get little snippets of time here and there. When I retire in a few weeks I hope to have more time (but of course my wife may have other plans).

 

 I mentioned to John what you did, that the latest were improved. And then in fact I suggested he would likely get constructive criticism/feedback from you.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

48 minutes ago, Jud said:


Me for one, as I get little snippets of time here and there. When I retire in a few weeks I hope to have more time (but of course my wife may have other plans).

 

 I mentioned to John what you did, that the latest were improved. And then in fact I suggested he would likely get constructive criticism/feedback from you.

I'm another. I have communicated with John more by PM. I fear that the 'dancing about architecture' effect is magnified in multi-sided conversations, so I hope that the vocabulary John and I use one-on-one will be a little clearer than the word/thought set of a larger group of participants.


Got good news....

After several days of running tests, reviews, etc., I have a plausible release candidate, and I am going to run the tests today.   I have 2-3Hrs of tests that I like to run, and if they pass, I'll look into some optional 'reverse mastering' EQ (usually LF cut or boost at 500, 250 or 150Hz.)   The decoder can do the 'reverse mastering' already, but there are too many post decoding EQ values to remember.   I believe that several choices would handle the old Carpenters, Linda Ronstadt and a few other recordings.    I don't think that any more than 3-4 reverse mastering choices will be needed.   For example, if the bass is too strong/muddy, there will be 3-4 progressively stronger choices for correction.   These mods are perhaps desirable 10% of the time -- not all that often, but I'd hate for someone to be unhappy when the specific problem is so easy to correct.
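Such reverse-mastering 'macros' might look something like a small table of progressively stronger low-shelf cuts.  The preset names, gains and corner frequencies below are purely hypothetical, chosen only to match the 500/250/150Hz corners mentioned above; the shelf is a simple first-order design, not the decoder's actual EQ:

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf(gain_db, f0, fs):
    """First-order low shelf (bilinear transform of (s + g*w0)/(s + w0))."""
    g = 10 ** (gain_db / 20)
    w0, k = 2 * np.pi * f0, 2 * fs
    b = np.array([k + g * w0, -k + g * w0])
    a = np.array([k + w0, -k + w0])
    return b / a[0], a / a[0]

# Hypothetical 'reverse mastering' presets: progressively stronger LF cuts.
# The names, gains and corners are illustrative, not actual macro values.
PRESETS = {
    'bass-trim-1': (-1.5, 150),
    'bass-trim-2': (-3.0, 250),
    'bass-trim-3': (-4.5, 500),
}

def reverse_master(x, preset, fs=44100):
    """Apply one named preset as a post-decode LF correction."""
    gain_db, f0 = PRESETS[preset]
    b, a = low_shelf(gain_db, f0, fs)
    return lfilter(b, a, x)
```

A user would then pick one of a few named presets instead of remembering a long list of post-decode EQ values, which matches the "3-4 progressively stronger choices" idea above.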

 

I would give spot demos, but sometimes, out of context, the demos might be confusing.   If I get really happy, I might make demos available before release.

 

This has taken a LONG time, and I have kept myself sequestered.  I should be more responsive, but I am trying SOOOO hard to finish this up.  I will definitely complete the PM correspondence before the release is finalized, to make sure that I didn't overlook any comments.

 

(The rest is mostly blather -- the important thing is that if all goes well, a release is coming soon.   Given success after another day of tests, the release will come perhaps 12 hours after that.  My guess is that 9PM US Eastern time Friday is possible, but if all goes well, I will try to make it by 3PM US Eastern -- late afternoon GMT -- for people in the EU/UK.)

 

Since my hearing is known to vary over time, as during an earlier release several weeks ago, I have been doing tests/measurements about 4-5 times per day.   Once the choices at all test times correlate, the choice is deemed correct.   Since many of the choices interact, it took several iterations to end up with stable results.  A final choice is not accepted until it passes a couple of test cycles.

 

The basic EQ seems okay through multiple days of review, but I ran into some trouble with different versions of the same recording sounding very different -- each version seems correctly decoded, but the EQ before encoding is very different.   Before tomorrow night's release (assuming the candidate is okay), I might add some EQ 'macros' for common cases to optionally reverse some mastering decisions.   It is best for the decoder when the FA encoded version has not been mastered before encoding.   However, it does appear that some recordings have been 'mastered', and therefore further damaged, so that they sound better when FA encoded.   I have some good ideas about fixing some of the common kinds of damage done to recordings before FA encoding.

 

As the 'decoding' becomes more complete, precision and accuracy become more important.   The underlying DA decoding is 'perfect' and able to withstand at least 9 iterations without too much drift.   The errors all come from the interaction between the input, the decoding layers, and the output.   All of these interactions include 1st order EQ, some level translation (e.g. one case where +3dB needs to be added), and one portion of the output EQ needs some 2nd order EQ.   The errors are in the interactions between blocks.  For many months (years) my old project partner was telling me that the DA quality was good enough, but my goal was to be able to work in the FA configuration, which is infinitely more stringent!!!

 

People in audio know all about how mis-applied expanders sound...   Ever notice that the 7 expanders used in the decoder usually do not present expansion artifacts?   This is because the dynamics processing IS pretty much 100% correct.   90% of the problems in the last few months are in the output EQ.   The problems with output EQ have reflected back into the other EQ sections, and the correct settings come from an iterative application of the building blocks until they settle to something that works correctly.
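For readers unfamiliar with why mis-applied expanders are so audible: the static gain curve of a simple downward expander takes only a few lines to sketch. This is a generic textbook expander for illustration only, NOT the DolbyA-style dynamics the decoder actually implements; the threshold and ratio are arbitrary example values.

```python
def expander_gain_db(level_db, threshold_db=-40.0, ratio=2.0):
    """Static curve of a simple downward expander: below the threshold,
    every 1 dB drop at the input becomes `ratio` dB at the output.
    A mis-set threshold or ratio is what makes expanders 'pump' audibly."""
    if level_db >= threshold_db:
        return 0.0                                     # above threshold: unity gain
    return (ratio - 1.0) * (level_db - threshold_db)   # extra attenuation below it

for lvl in (-20, -40, -50, -60):
    print(lvl, "dB in ->", lvl + expander_gain_db(lvl), "dB out")
```

The audible failure mode is easy to see from the curve: if the threshold or timing doesn't match the original compression, low-level material gets pushed down (or released) at the wrong moments, which is the 'expansion artifact' the post refers to.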

 

 

 

 

Link to comment

This might help -- I need to know if the response balance on this snippet seems okay.  I am not looking for perfection, even though that would be nice.

I used this example because of the relatively natural vocals -- not processed.   As soon as processing comes into the chain, all bets are off WRT correctness.

 

Basically, are the highs too extreme or suppressed?

Is the bass too strong or weak?

 

https://www.dropbox.com/s/gi80qbc9eqie4z1/17 Dionne Warwick - Walk On By - (There's) Always Something There to Remind Me.flac?dl=0

 

I believe that the balance is 'okay', but I have no reliable basis for comparison -- just occasionally somewhat better hearing.   Very fine-grained commentary is helpful.  Feedback that might correct my 'belief' would be very useful.

 

Admittedly, this recording is one of those that were 'mastered' before the FA encoding.   I have another copy, seemingly closer to the master tape -- exactly the same recording, but more 'thin' sounding.  This one comes from a premium collection, and I believe even the FA version was a bit 'heavy' sounding.  I added -3.0dB at 500Hz, -1.5dB at 200Hz and -1.5dB at 100Hz, which seemed to balance the recording better and left 'space' for the vocal.   Even with this EQ, the result still has 'heavier' lows and lower midrange than the more 'direct' copy of the recording.  (I just did the minimum EQ to make 'space' for the vocal.)   The sequence above will be one of the options in the optional mastering-correction EQ.

 

Also, I enabled the anti-sibilance to deal with the 's' sound appearing from time to time.   The recording doesn't sound bad with the slight touch of sibilant 's', but I didn't want the distraction of any other impairment.  I am willing to share the version with the slight 's' sound (it is NOT obtrusive, just noticeable), but that distracts from the general response-balance issue.

 

Most interesting to me -- are the highs too strong?   Are they too weak?   I believe that the settings I used are correct; in fact, I found another bit of EQ used on encoded files -- a 1.5dB change at 18kHz.   An error that small might normally cause only a small frequency response change, but when wrong EQ like that is used on FA materials, there is a very noticeable loss of quality.   The slight change above markedly improved the decoding, along with the slight frequency response change.

 

 

 

Link to comment
2 hours ago, John Dyson said:

This might help -- I need to know if the response balance on this snippet seems okay. [...]

I should have trusted the decoder over my hearing.   I just uploaded a version without the EQ -- after some rest, it sounds better without the mods.  DIRECT from the decoder.

Unless I get critical feedback, the decoder EQ isn't going to be modified!!!  I am certainly not going to change it because of my own hearing...

 

 

Link to comment

One more note -- I am not claiming that the previous mod corrected the result -- let me explain...

 

There are three available settings for the high end.  The version being demoed is something like the 'max'.  Two other settings are also possible.

If there needs to be less 'high end', I can pull back the EQ in certain steps.  (It doesn't make sense to express the numbers in 'dB' without describing the curves.)  However, it is something like 3dB steps at 18kHz.  Normally, I cannot hear well 'up there'.  The steps aren't really steps AT 18kHz, but a smooth curve between about 9kHz and 18kHz.

 

So, if it needs less, tell me.  If it needs a LOT less, then tell me that also.

I am still experimenting now that my hearing is VERY clear at the high frequencies.   Myself, I feel like the highs are too strong, but I am not really sure.

 

IF this can be narrowed down, the release will be tomorrow night.

 

 

 

Link to comment

The V3.0.9 series of the decoder is almost ready.  The previous 'Dionne Warwick' example is a pretty good preview, and I have uploaded some previews to the various distribution and demo sites.  However -- big however -- all versions, including the V3.0.9C that I thought would be the release, have a slight +dB HF tilt, causing a slight metallic sound.   This tilt has a different impact than the more severe problems in the past, but imparts a slight metallic edge to the recording.   There is NO WAY that a normal consumer 2nd order EQ can correct this metallic sound.   The previously demoed V3.0.9A (Dionne Warwick) demos also had some pseudo-distortion caused by the HF gain control -- fixed entirely in the V3.0.9C version.   It is possible that the metallic sound crept in (or became more obvious) with the pseudo-distortion correction; I don't really know.  The tilt will be fixed by tonight though.

 

Therefore, since I did find the metallic edge to the sound (it is a strange HF intensity, NOT the same as a normal treble boost), it will require a redo and rebuild of the decoder with about -0.75dB at 12kHz and perhaps -0.75dB at 15kHz (1st order HF shelf).   Luckily, my HF hearing is working right now -- a good thing, with the official release goal +7 hours away.
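For reference, a first-order HF shelf of the kind just described can be sketched in a few lines. The corner frequencies and -0.75dB depths are taken from the post, but the analog-prototype shelf shape is my assumption -- the decoder's actual curves are not published.

```python
import math

def high_shelf_db(f, f0, gain_db):
    """Magnitude (dB) of an analog first-order high shelf: 0 dB well below
    the corner f0, approaching gain_db well above it."""
    g = 10 ** (gain_db / 20)
    w, w0 = 2 * math.pi * f, 2 * math.pi * f0
    h = (g * 1j * w + w0) / (1j * w + w0)
    return 20 * math.log10(abs(h))

# The proposed corrective tilt: -0.75 dB first-order shelves at 12 and 15 kHz.
for f in (6000, 9000, 12000, 15000, 18000, 20000):
    tilt = high_shelf_db(f, 12000, -0.75) + high_shelf_db(f, 15000, -0.75)
    print(f, "Hz:", round(tilt, 2), "dB")
```

Because the shelves are first order, the combined curve is a gentle tilt spread over more than an octave rather than a sharp step, which matches the 'smooth curve between about 9kHz and 18kHz' description earlier in the thread.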

 

The most worrisome thing for me: the needed -0.75dB type of EQ doesn't follow 'the rules'.   So there might be another problem, or the anti-distortion might be interfering with the EQ curve (the anti-distortion issue is most likely the problem).   Instead of a brute-force EQ, I might consider another remedy that avoids the explicit tweak.  (As I mentioned before, I truly despise tweaks.)

 

The V3.0.9D release will most likely be available along with the normal snippets in 7 hours, but the 'private requests' will not be available until tomorrow morning.  Sorry about missing the early-evening goal for the UK/EU -- the V3.0.9C version IS available, but I don't believe it is good enough -- don't bother unless you are REALLY interested.

 

Sorry about the delay, but I have been getting better feedback on this set of decoders.   It is very likely that many people cannot even hear the tilt issue, as I certainly cannot unless my hearing is working >12kHz.

 

Time to focus on the tilt problem, and get the decodes started!!!

 

 

Link to comment

Sorry about this -- another day's delay.  The V3.0.9C version IS a major improvement over all previous versions, but has a noticeable, easily fixable problem.  Fixing the more 'minor' problem has ballooned into a realization of some STOOOPID code, and also some audio-processing issues that cannot be fixed by an adjustment.   A minor re-shuffling of the 'Tetris' blocks in the HF pre-emphasis/de-emphasis is showing an incredible improvement.

 

Let me describe here:    if you listen carefully to orchestral pieces -- the violins.  Did you ever notice that the violins sounded 'smushed' together, almost distorted?   Part of the V3.0.9C improvement was to mitigate a lot of the 'smush'.   Something was still wrong, even though the 'metallic' sound was easy to fix for the non-existent V3.0.9D release.  In fact, the 'metallic' sound was much simpler to fix than I thought -- and it was a REAL fix.

 

On the other hand, NOT ALL of the 'smush' in the violins is gone in the V3.0.9C (or the V3.0.9D release that I tried to do).  There was something STILL wrong with the pre-emphasis/de-emphasis. When doing some experiments, while maintaining almost the same response shape, I found a direction for a better match to the FA encoding characteristics.  (The pre-emphasis is a general boost from 3kHz to 9kHz, then a cut from 9kHz on up.)   The de-emphasis is almost the opposite.   The exact configuration of the boost/cut is important.  If the shape doesn't match the original FA encoding perfectly, there is a sense of 'distortion' or 'smush' left in the recording.   This is, basically, left-over FA encoding that the decoder couldn't correct.   Even worse, if the pre/de-emphasis is incorrect, the decoder can sometimes create expansion artifacts -- those are much uglier than FA itself.

 

When revisiting the pre/de-emphasis again, with the rest of the decoder much more accurate, I found a much better direction, where the violins now maintain much better clarity, while the rest of the instruments (e.g. pianos) maintain the same improvement made in the V3.0.9 release.  (You might notice that recordings like 'Downtown' are MUCH better in the V3.0.9 series of releases.)

 

Bottom line:  I am very sure (barring personal emergency) that a more worthwhile release beyond the V3.0.9C preview is possible in a day.  In fact, I could probably do it by today at 9:00PM USA Eastern time (+3 hours) -- but it is not worth the hurry-up and potential mistakes.   Even though there is now a decoder that sounds very good that I can use for reference, my hearing is still NOT good enough to do A/B comparisons.  It is much better to be careful and do the release in an orderly way rather than rush something out tonight.

 

The clarity is MUCH improved in the test code right now; it is well worth the wait.  However, if you want to experiment with something that works REALLY well compared with anything before, then V3.0.9C is okay.


I am planning for tomorrow at 9PM USA Eastern time, but will try for 3PM if all goes well.

Very important:  the same general tonal balance IS being maintained.   The differences are all about the dynamics processing and the associated EQ needed to make it work correctly.

 

 

Link to comment

Release V3.0.10C is ready...

IMPORTANT:  this version really has overcome my hearing problems.   Specific information & remedy is below.

All usage methods are the same -- the correction is the canonicalization of the output EQ and fine-corrections to the pre/de-emphasis.

 

Private sites are still being updated with fog-free versions; they will not be ready until tomorrow.  Clean, but foggy, decodes ARE available.

Public demo snippets are available, but as quick decodes.   There is some fog in the demos, but it will be corrected tomorrow.

('Fog' is common in consumer recordings -- a subtle softening of the sound, not a response loss.)  Again, fog-free versions will be coming tomorrow.

 

The current V3.0.10C demos and snippets are what I have been hearing, but NOT accurately.   Coincidentally, my changing hearing loss matches IDENTICALLY the last phase of the EQ for the final layer.  I had violated my engineering sense and removed the EQ so that the sound seemed corrected to me.

 

The current results come from a precise implementation of my original engineering intuition.

 

Location of snippets:

https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0

Location of decoder:

https://www.dropbox.com/sh/5xtemxz5a4j6r38/AADlJJezI9EzZPNgvTNtcR8ra?dl=0

 

Background:

 

BUGFIX from V3.0.9C to V3.0.10A (V3.0.10B/C is a minor correction – normalizing to exactly the engineering design)

Most important – added a final stage of EQ that does a last-step correction for my screwed-up hearing. Interestingly, my hearing loss almost 100% matches the final step of EQ that was removed from the last layer. Normally, each layer has two 1st order 6dB @ 18kHz EQs, but I had removed them because my hearing told me that the EQ on the last layer was incorrect. Sadly, all along, this last layer of EQ was actually NEEDED. I let my engineering aptitude and intuition be trumped by my broken hearing!!!

 

Additionally, a few more modifications of the pre-emphasis/de-emphasis seem to have FINALLY zeroed in on the needed EQ. All of the EQ appears to be a matched (mirror-image) set of 6dB/3dB pairs offset by 3kHz, at 3k/6k, 9k/12k and 18k/21k. Therefore, the pre-emphasis was 3kHz -6dB, 6kHz -3dB, 9kHz +6dB, 12kHz +6dB and 18kHz -6dB, 21kHz -3dB. Additionally, the per-layer EQ of two each -6dB at 18kHz is needed on EVERY layer, and my previous skipping of the two -6dB at 18kHz was incorrect.
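The 'mirror image' relationship can be sanity-checked with a tiny sketch: treat each listed point as a (frequency, dB) stage and negate the gains for the de-emphasis. Cascade gains add in dB, so a true mirror nets to 0 dB. This deliberately ignores the curve *shapes*, which, as the posts stress, must also match for the cancellation to hold between the corners.

```python
# Pre-emphasis points as quoted in the post (gains in dB at each corner).
PRE_EMPH = {3000: -6, 6000: -3, 9000: +6, 12000: +6, 18000: -6, 21000: -3}

# The de-emphasis is described as the mirror image: same corners, negated gains.
DE_EMPH = {f: -g for f, g in PRE_EMPH.items()}

def cascade_db(f, *stages):
    """Gains of cascaded EQ stages add in dB (their magnitudes multiply)."""
    return sum(stage.get(f, 0) for stage in stages)

# If (and only if) the curve shapes also match, pre followed by de nets out
# to 0 dB everywhere -- any shape mismatch leaves residual 'smush'.
for f in sorted(PRE_EMPH):
    print(f, "Hz:", cascade_db(f, PRE_EMPH, DE_EMPH), "dB after pre+de")
```

This is why a small error anywhere in the chain is audible: the decoder relies on exact cancellation between large opposing boosts and cuts, not on any one stage being flat.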

 

 

Link to comment

John, I'm not sure whether you have covered this topic, but does your software work to decode Dolby A tapes?  I have been gifted (actually on permanent loan) six 15ips 2-track tapes which are rough stereo mixes of a 16-track analogue live recording (on 2-inch tape that I could also get on loan) that a quite well-known recording engineer (he has more than 1200 listings on allmusic.com) recorded at a live jazz festival in 1980.  He told me that the music has never been commercially released and that he actually was stiffed by the company who hired him to do the recording.  So he had the tapes and gave them to me last Wednesday.  I've checked them out and they are in excellent condition for their age, with no shedding.  They sound quite good even without Dolby A decoding, but clearly need the decoding to sound right.   I am investigating standard Dolby A decoders, like the Model 363 with Dolby A or switchable Dolby A or SR modules.  Your software may be a good option.  Do you have to digitize the original signal, or can I play a tape in real time through your software like I can with a regular Dolby decoder?   Thanks,  Larry

Analog-VPIClas3,3DArm,LyraSkala+MiyajimaZeromono,Herron VTPH2APhono,2AmpexATR-102+MerrillTridentMaster TapePreamp

Dig Rip-Pyramix,IzotopeRX3Adv,MykerinosCard,PacificMicrosonicsModel2; Dig Play-Lampi Horizon, mch NADAC, Roon-HQPlayer,Oppo105

Electronics-DoshiPre,CJ MET1mchPre,Cary2A3monoamps; Speakers-AvantgardeDuosLR,3SolosC,LR,RR

Other-2x512EngineerMarutaniSymmetrical Power+Cables Music-1.8KR2Rtapes,1.5KCD's,500SACDs,50+TBripped files

Link to comment
2 hours ago, astrotoy said:

John, I'm not sure whether you have covered this topic, but does your software work to decode dolby A tapes? [...]

Yes, it directly does DolbyA recordings, but needs digitized .wav files (normal 16-bit, 24-bit or FP in; 24-bit or FP out), at 96k/192k/384k or 88.2k/176.4k/352.8k input/output in DA mode.   Compared with using the decoder in FA mode, all you have to do is skip the '--fa' switch (which switches the decoder into FA mode) -- the natural behavior without --fa is DolbyA mode.   The need for licensing has been disabled, so recent versions can do DolbyA out of the box.

 

The decoder can work in realtime -- on anything from a 4-core Haswell or faster, the decoder can easily play a DA file in realtime at a high quality mode (e.g. --xp).  FA mode (which you aren't asking about) requires running multiple virtual DA devices, which makes it much more CPU intensive.   Even in the intensive FA mode, I can do realtime decodes on my 10-core CPU in --xp=max mode.  On a 4-core Haswell, FA mode should most likely be able to run realtime in --fz mode.

 

The key to totally correct DolbyA decoding is that you need to line up the tones with the decoder.    There is a switch called --calib to use while playing the tones at the beginning of the recording.   The numbers coming back in the running log during --calib are the values to use for --tonel=xxx and --toner=xxx.    If the tone values are close enough to the same, then you can just use --tone=xx, which sets both channels.
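As a rough illustration of what a calibration measurement like this involves (this is NOT the da-avx implementation -- just a sketch, with an assumed peak-referenced dBFS convention and arbitrary tone frequency):

```python
import math

def tone_level_dbfs(samples):
    """Estimate a calibration tone's level in dB relative to full scale,
    referencing the sine's peak (RMS * sqrt(2)) to 1.0."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms * math.sqrt(2))

# Synthesize one second of a -13 dBFS sine at 800 Hz, 48 kHz sample rate
# (the frequency and rate here are arbitrary, chosen only for illustration).
sr, freq, amp = 48000, 800, 10 ** (-13 / 20)
tone = [amp * math.sin(2 * math.pi * freq * n / sr) for n in range(sr)]
print(round(tone_level_dbfs(tone), 2))   # close to -13.0
```

A measured value like this is what would then be fed back as --tone=-13.00 (or per-channel via --tonel/--toner) so the decoder's expander thresholds line up with the tape's reference level.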

 

For a first cut, if normal levels were used on the tape, then approximately --tone=-13.00 will get you started.   Tapes show calibration tone levels somewhere between -12.60 and -20.00, but most often in the -12.70 to -14.50 range.   When doing DolbyA decoding, I suggest using --xpp, or even --xpp=max if your computer is very fast.  However, the --xp or even --fz quality modes are 'good enough', and will likely surprise you with the clarity relative to using a true DolbyA unit.

 

About the decoding quality:   on some recordings, when compared with a true DolbyA unit, the decoder might seem profoundly more bright sounding -- during my own testing, the brighter sound from RECENT decoder versions is more correct than a true DolbyA.   Also, you might notice that vocal choruses are cleaner sounding.

There have been some questions about the quality of the decoder, but it TRULY decodes the material, and doesn't just do an EQ or slight expansion.   The full noise reduction and response balance correction is done by the decoder.

 

On the other hand, there are definitely dB-level transient errors, but different versions of DolbyA HW units will also show differences among themselves.   Instead of being totally flat at 0dB, the DA decoder has about a +-0.25dB ripple in the response curve.   The variation doesn't appear to cause much trouble -- if it did cause lots of trouble, then using seven instances in series for the FA decoding mode would be impossible.
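A back-of-the-envelope check on why that ripple figure matters for FA mode: per-stage response errors add in dB when filters are cascaded, so seven instances bound the worst case at 7 x 0.25 dB. (In practice, uncorrelated ripple partially cancels, so the typical cumulative error is smaller than this bound.)

```python
def worst_case_ripple_db(per_stage_db, stages):
    """Worst-case response error of `stages` cascaded filters, each with
    +/- per_stage_db of ripple: in a cascade, dB errors simply add."""
    return per_stage_db * stages

# One DA instance has about +/-0.25 dB of ripple (figure from the post);
# FA mode runs seven DA instances in series.
print("worst case over 7 stages: +/-", worst_case_ripple_db(0.25, 7), "dB")
```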

 

The DA mode of the decoder is not perfect, but when comparing with the distortions that happen during true DolbyA HW decoding, the transient response errors are probably less audible.   The decoder *DOES* carefully emulate the precise complex DolbyA attack/release characteristics.

 

I guess the most important help for using the decoder in DA mode is -- what should the command line be?

 

Something like this:

da-avx --info=1 --fz --tone=-13.00 --input=infile.wav --output=outfile.wav

 

To detect the calibration level:

da-avx --info=2 --calib --input=infile.wav --output=junkoutput.wav

 

Given the calibration levels output by the above command, you can then modify the original command line to be like this:

da-avx --info=1 --fz --tonel=<leftvalue> --toner=<rightvalue> --input=infile.wav --output=outfile.wav

 

If you want to play a DA recording in realtime -- you can try the following:

da-avx --info=1 --fz --tone=<tonevalue> --input=infile.wav --play

 

If you are using Linux with SoX, you can even do this:

sox infile.wav --type=wav - | da-avx --info=1 --fz | play -q -

That is -- the program will use 'pipes', even on Windows.  On Windows, the pipes are less natural, but do work.

 

If decoding takes too long on your computer, then try '--fx' instead of '--fz', or even specify nothing instead of '--fz' for approximately DolbyA HW quality.

If you have a fast enough computer and want the cleanest decodes possible, then try --xp, --xpp, or even --xp=max instead of '--fz'.   If using the very advanced modes, the clarity of cymbals and the details in vocal choruses will be astounding.

 

-----------------

I suggest trying the decoder....   It is free, and in DA mode it really does work well.   It is not perfect, but during the original development, it was ME who was never satisfied with the results, not the other team member(s).  It is only recently (in the last 6 months) that I have felt the DA side of the decoder is almost as good as it can be.  Again, NOT PERFECT!!!

 

Let me know privately if you want a few more pointers.

 

 

Link to comment

But -- I am rescinding all V3.0.10E versions right now.   There are a few nits; however good it is, I don't like some of the characteristics.

 

It will be available later on today.


Sorry about these 'stutters', but THIS MUST BE AS GOOD AS I CAN DO FOR NOW.

Link to comment

Here are some updates:

 

1) NONE OF THIS HAS ANY EFFECT ON DolbyA-mode decoding.  The DolbyA mode is fixed and hasn't needed to change in a long time.  Any changes to DA mode are minor and all are very subtle improvements.   Basically, DolbyA decoding 'just works'.   If it didn't 'just work', then there would never be any FA-mode success AT ALL.  The FA mode runs 7 (yes, seven) DA decoders in a specially staggered way -- if there were significant errors, NOTHING in FA mode would have worked at all.   There are still minor FA problems, but this is because there is NO specification, no input/output examples, and a developer with poor hearing.   The DA mode was designed against real hardware and we had good test materials.  DA mode is quite solid.

 

 

2)  I delayed (or actually, rescinded) the FA release because the 'metallic' sound that has been bothering me was only partially corrected by the slight EQ.  I kept revisiting it and created an improvement, but not a solution.  Eventually, I determined that the problem is deeper than I thought.   Actually, the correction was simple, but not just a post-decoding EQ.

 

Since the FA correction is so simple, and the improvement is clear, the release will be redone very soon today.  I am trying for before late afternoon EU/UK time, but I haven't been very successful at hitting that target time in the recent past...   I am trying though.   As soon as the demos are re-created for my review, then if everything is okay, I'll do the release again.   The change is very simple, but important.  A very good aspect of this change:  no slight output EQ tweaks.   The 'tweak' really bothered me, but seemed necessary.   With this change, there is ZERO modification of the output past the canonical decoder.   This means that with this hopeful fix, the design is still totally clean.

Link to comment
