
'FeralA' decoder -- free-to-use



The previous 'final' release was found to have a 'few' flaws -- technically minor, but important to the ear, I guess.  I have been working all week on fixing the defects.

 

Also, it seems important to describe the improvement from the decoder 'lifting the fog'.  DolbyA (or fast gain control) fog is this strange thing that does something similar to what mp3 does to damage the sound of recordings: it assumes that damage near the primary signal frequency is unimportant.  The fast gain control fog is strong enough that it DOES audibly damage the signal, and it has been noticed by recording experts since the introduction of DolbyA.  The following statement is very important, so I will repeat it here, ahead of its appearance near the end of this note:

 

*** The loss of detail as created in normal consumer recordings is NOT about energy or strength of tones, but is more about 'time resolution' or 'temporal detail'***

 

What does the 'fog' sound like?  The 'fog' doesn't have a sound of its own, but when comparing with an original recording, one might notice that details are *weirdly* obscured.  One aspect of the 'fog' is the loss of fine spatial detail: the 'fog' tends to make the location of instruments in the sound field noticeably more diffuse.  The fog itself does NOT sound like a change in spectral energy (the highs are still there, the lows are still there, the midrange is still there), but it smears the energy across time, so the location of instruments, vocals and transients tends to become diffuse.  *This diffusion is created by the other energy happening at the same time in the recording.*  A vocal chorus tends to be less distinct, certain violin solo situations tend to diffuse the location of the violin, etc.  In FA encoded material, ALL of the sound energy is still in the signal; it is just that the distortion sidebands add a layer of 'confusion' to it.  I'd suspect that the amount of 'confusion' noticed depends on an individual's ability to hear fine details; the fog causes the loss of those details even though the sound itself is still 'heard'.  *The loss of detail in normal consumer recordings is NOT about 'energy', it is about effective 'time resolution'.*

 

When listening to well made 'classical' or 'orchestral' recordings *after decoding*, the location of instruments in the sound field becomes clearer and less of a blur.  The difference isn't night and day, but it does make a recording sound more 'real' and less 'artificial'.  The time base distortions in analog tape also create 'fog', but with a different cause, and the amount is very dependent on the mechanical quality of the tape deck.

 

* One of the most persistent mistakes when setting up the decoder internally is to mistakenly 'enhance' the temporal detail instead of 'recovering' it.  Minor mistakes that would *never* be noticed in a conventional design become too obvious, so for perfect decoding the settings must be both FREQUENCY RESPONSE/ENERGY and TEMPORALLY perfect, adding additional layers of constraints.  This makes it very difficult to perfect the decoder.

 

 

John

 

Link to comment

It has been a long time since the last release, but I am working on the final polishing, getting the errors down to a minimum (difficult, considering the dynamics.)

The release might be available today, tomorrow, or soon thereafter.

I have found an approximate way to measure the actual frequency response (sine waves give too many weird errors.)  It is getting close to release, and please realize that the measured frequency response is based upon the energy available in the recordings used to drive the measurements.

 

 

If you look at the numbers, you'll see a droop above about 10kHz, but that comes from a lack of energy in the recordings, so the measurement shows some weakness there.  There is a peak around 500Hz of about 0.40dB, and that is probably an error in the decoder.  That MIGHT have to persist, but I am still looking at it.  Also, the increase in energy as frequency goes down is really needed -- a truly flat response doesn't sound right.  This MIGHT be an energy issue, or might be a natural artifact, so the frequency response in a 'trend sense' is correct.

 

Frankly, given these caveats, the freq response isn't too bad at all!!!

Note that the levels across the frequency ranges are not monotonic; the explanations are in the text above.  The *only* thing that seems like an actual error and is subject to correction before release is the 500Hz peak of 0.40dB.  That MIGHT not be fixable, or might be an artifact of several things that are uncorrectable.  If possible, it will be fixed before release.

 

Frankly, considering 60dB of gain control, this is pretty accurate, though very subject to energy level variations.  (It appears that -1.4dB is nominal, and each number is the approximate peak energy level difference, so the ranges tend to show the maximum levels.)

 

LEVELS 20Hz to 20000Hz:  -0.71  * this number is a manifestation of the energy over the bandwidth, including the bias towards LF energy.
LEVELS 1000Hz to 3000Hz:  -1.31
LEVELS 1500Hz to 3000Hz: -1.25
LEVELS 1000Hz to 1500Hz: -1.39
LEVELS 3000Hz to 20000Hz: -1.42
LEVELS 3000Hz to 4500Hz: -1.22
LEVELS 4500Hz to 6000Hz: -1.42
LEVELS 6000Hz to 7500Hz: -1.35
LEVELS 7500Hz to 9000Hz: -1.48
LEVELS 3000Hz to 6000Hz: -1.33
LEVELS 6000Hz to 9000Hz: -1.41
LEVELS 3000Hz to 9000Hz: -1.36  * energy differences above 10kHz appear to be an 'available energy' difference
LEVELS 9000Hz to 12000Hz: -1.83 * does not indicate an error in the decoder
LEVELS 9000Hz to 20000Hz: -2.07
LEVELS 12000Hz to 20000Hz: -2.67
LEVELS 12000Hz to 15000Hz: -2.48
LEVELS 15000Hz to 18000Hz: -3.12
LEVELS 15000Hz to 20000Hz: -3.21
LEVELS 3000Hz to 20000Hz: -1.45
LEVELS 1400Hz to 1500Hz: -1.48
LEVELS 1300Hz to 1400Hz: -1.48
LEVELS 1200Hz to 1300Hz: -1.46
LEVELS 1100Hz to 1200Hz: -1.39
LEVELS 1000Hz to 1100Hz: -1.29
LEVELS 900Hz to 1000Hz: -1.14  * the general energy difference increase below 1kHz appears correct
LEVELS 800Hz to 900Hz: -0.96
LEVELS 700Hz to 800Hz: -0.76
LEVELS 600Hz to 700Hz: -0.48
LEVELS 500Hz to 600Hz: -0.14
LEVELS 400Hz to 500Hz: -0.02   * this range is subject to correction/review
LEVELS 300Hz to 400Hz: -0.17
LEVELS 200Hz to 300Hz: -0.57
LEVELS 150Hz to 200Hz: -0.75
LEVELS 100Hz to 150Hz: -0.38
LEVELS 50Hz to 100Hz: -0.57
LEVELS 40Hz to 80Hz: -0.59
LEVELS 20Hz to 50Hz: -0.49
LEVELS 20Hz to 30Hz: -0.47
LEVELS 10Hz to 30Hz: -0.48
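
For anyone curious how numbers like the table above can be produced without sine sweeps: the idea is simply to compare the energy falling inside each frequency band for the material driving the measurement.  Below is a minimal sketch of such a band-level computation (a naive DFT over one frame); the function name, frame length and synthetic test signal are my own assumptions, not the actual measurement code used here.

```cpp
// Minimal sketch of a band-energy level measurement (NOT the actual tool).
// Returns the level in dB of the energy between f1 and f2 Hz in one frame,
// using a naive DFT.  In practice you would window the frame and average
// many frames of real program material; the difference between a raw and a
// decoded file in the same band gives numbers like the table above.
#include <cmath>
#include <cstdio>
#include <vector>

double bandLevelDb(const std::vector<double>& x, double fs, double f1, double f2)
{
    const double pi = 3.141592653589793;
    const size_t n = x.size();
    const double binHz = fs / n;
    double power = 0.0;
    size_t kStart = (size_t)std::ceil(f1 / binHz);
    if (kStart < 1) kStart = 1;                       // skip DC
    for (size_t k = kStart; k <= (size_t)std::floor(f2 / binHz) && k < n / 2; ++k) {
        double re = 0.0, im = 0.0;
        for (size_t i = 0; i < n; ++i) {
            const double ph = 2.0 * pi * (double)k * (double)i / (double)n;
            re += x[i] * std::cos(ph);
            im -= x[i] * std::sin(ph);
        }
        power += (re * re + im * im) / ((double)n * (double)n);
    }
    return 10.0 * std::log10(power + 1e-30);
}

int main()
{
    // Synthetic stand-in for program material: a 1kHz tone plus a weak 5kHz tone.
    const double fs = 44100.0, pi = 3.141592653589793;
    std::vector<double> x(8192);
    for (size_t i = 0; i < x.size(); ++i)
        x[i] = 0.1 * std::sin(2 * pi * 1000.0 * i / fs)
             + 0.01 * std::sin(2 * pi * 5000.0 * i / fs);

    std::printf("LEVELS 1000Hz to 3000Hz: %.2f\n", bandLevelDb(x, fs, 1000, 3000));
    std::printf("LEVELS 3000Hz to 9000Hz: %.2f\n", bandLevelDb(x, fs, 3000, 9000));
    return 0;
}
```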

 

Link to comment

I was right on the edge of doing a release.  During the listening reviews, I got about 2/3 of the way through the recordings and found the infamously irritating +dB HF tilt.

The tilt is not measurable using the techniques that I have available -- neither sine wave nor spectral energy measurements are accurate/stable enough to be useful.  Even a 0.5dB tilt can be very irritating to listen to.  (I call it the 'slippery sound'.)  My measurements are not accurate to much better than 0.25dB, so a 0.5dB tilt is too close to the amount of measurement error.

 

I have found the cause of the tilt -- it comes from part of the HF phase descrambling, where I wasn't as thorough as I should have been.  The tilt is now under control, along with every other known problem -- including the sibilance blow-out on the old Carpenters recordings.  The bass matches (except it is less fuzzy than FA), and the detail on some recordings is phenomenal.

 

It really is looking like a release might just happen in about +14Hrs.  (My usual 9AM or 9PM EST USA time release times.)

The results are incredibly smooth -- so smooth that the demos include 'SuperTrouper', one of my original 'tilted windmills.'   I don't have the guts to try 'Dreamworld' yet, but might just add it if it is good enough.  (The Dreamworld recording is somehow damaged, but decodable.)

 

John

 

Link to comment

The release is probably ready, but not released yet.  After a final review today, I am still a little concerned about how the bass is supposed to sound.  The energy in the bass is correct, but the sound is different.  The FA encoding process, as manifest on most consumer recordings, *DOES* distort the bass a little, and I have been thinking that the resulting decoded bass seems much cleaner, probably because some of the distortion has been mitigated.  It will take another day for me to settle this matter.  I'll also be getting feedback from the reviewers.

 

The best way to listen for the difference: the FA sound has a growling bass, sort of mixed up with itself.  The decoded version has purer sounds, but still might not be correct.  It is important not to form prejudiced opinions, so it takes a little while to decide.

 

Attached is a *VERY EASY TO HEAR* HF distortion improvement.  You don't have to listen very hard to hear the improvement.

Frankly, this shows how embarrassingly bad the quality of some pop recordings is.   Listen for the 'jumble' in the 'raw' (CD) version.

These are only 10 seconds long, so it is very easy to zero-in on the effect.

 

MammaMia-RAW-SNIP.flac

MammaMia-DEC-SNIP.flac

 

 

 

Link to comment

About the 'bass' issue in the upcoming fully announced release (all over the net), I have decided that the bass is 'good enough' for now.   However, I intend to review for another day, perhaps creating a V6.0.4R release.   The feedback from a few reviewers has been varied but NOT seriously critical at all.   My own opinion is that after a second or third review, there MIGHT be room for VERY SLIGHT improvement.   The rest of the decoder is so perfect that most listeners probably cannot casually tell the difference -- the improvement from decoding is in the details, not a gross sound difference.

 

Even though this thing has dragged on long enough, I have decided to study the bass one more day and see if the 'muddy' sound of the raw FA recordings is more correct than the 'tight' sound of the decoded materials.  Either the 'Q' or the 'R' release must be the 'final' for a month or so.   One more day won't hurt anything.

 

The plan for a full, 'world wide' release next week is still operative.

 

 

 

 

Link to comment

Good news -- one of the reviewers gave me a little help on the final bass EQ, and I think that we have finally made it correct, so I am happier with it.  Previously, there was just something wrong, but my perceptive abilities didn't allow me to solve it myself.

 

Otherwise, the feedback from the one reviewer helping at this last minute is VERY VERY good.

 

Sadly, I'll have to do another release; the prep is starting in about 15 minutes.   I'll try to have it ready by +5Hrs.

It will be V6.0.4R.

 

Still on track for the 'world wide' release next week.

 

 

 

Link to comment

After getting some feedback and considering the planned 'world wide' release next week, I'll be doing an interim (full function minus minor bugs) release today. 

 

The 'local-final' release will be called REL5final-V6.0.4S.   A local reviewer and another private reviewer were happy with V6.0.4Q, and I was conditionally happy also.  (The condition was based upon acceptance by others.)    A few nits (very minor) had to be corrected, along with some source code cleanup that really needed to be done.  Demos of the V6.0.4S release will be created today for the local AS reviewers (and the other close reviewers) and also uploaded later today.   After fixing a one-in-1000 startup bug, it will become V6.1.0A next week.  The major delay for a 'world wide' type release is more about logistics than it is about 'programming' per se.

 

The results are looking REALLY good, and I expect the announcement & availability by early this evening at the latest.

 

ADD-ON:

I forgot to append the measured spectral density response for V6.0.4S -- it is NOT a 100% accurate measurement, but it is one of my double-checks, very badly needed because of my unreliable hearing.  (There are bumps because of low spectral density at certain frequencies, higher spectral density at others, and other error sources; the response is flatter than the measurement implies):

 

LEVELS 20Hz to 20000Hz: raw -23.47 dB, dec -23.79 dB, diff -0.32 dB
LEVELS 1000Hz to 3000Hz: raw -32.96 dB, dec -33.44 dB, diff -0.48 dB
LEVELS 1500Hz to 3000Hz: raw -35.43 dB, dec -35.51 dB, diff -0.08 dB
LEVELS 1000Hz to 1500Hz: raw -34.53 dB, dec -35.41 dB, diff -0.88 dB
LEVELS 3000Hz to 20000Hz: raw -37.42 dB, dec -37.80 dB, diff -0.38 dB
LEVELS 3000Hz to 4500Hz: raw -39.68 dB, dec -39.85 dB, diff -0.17 dB
LEVELS 4500Hz to 6000Hz: raw -41.20 dB, dec -41.54 dB, diff -0.34 dB
LEVELS 6000Hz to 7500Hz: raw -42.09 dB, dec -42.42 dB, diff -0.33 dB
LEVELS 7500Hz to 9000Hz: raw -43.40 dB, dec -43.91 dB, diff -0.51 dB
LEVELS 3000Hz to 6000Hz: raw -38.53 dB, dec -38.75 dB, diff -0.22 dB
LEVELS 6000Hz to 9000Hz: raw -41.32 dB, dec -41.73 dB, diff -0.41 dB
LEVELS 3000Hz to 9000Hz: raw -37.54 dB, dec -37.82 dB, diff -0.28 dB
LEVELS 9000Hz to 12000Hz: raw -44.83 dB, dec -45.74 dB, diff -0.91 dB
LEVELS 9000Hz to 20000Hz: raw -44.65 dB, dec -45.80 dB, diff -1.15 dB
LEVELS 12000Hz to 20000Hz: raw -48.73 dB, dec -50.49 dB, diff -1.76 dB
LEVELS 12000Hz to 15000Hz: raw -48.82 dB, dec -50.40 dB, diff -1.58 dB
LEVELS 15000Hz to 18000Hz: raw -52.65 dB, dec -54.86 dB, diff -2.21 dB
LEVELS 15000Hz to 20000Hz: raw -52.61 dB, dec -54.91 dB, diff -2.30 dB
LEVELS 3000Hz to 20000Hz: raw -37.42 dB, dec -37.80 dB, diff -0.38 dB
LEVELS 1400Hz to 1500Hz: raw -37.60 dB, dec -38.19 dB, diff -0.59 dB
LEVELS 1300Hz to 1400Hz: raw -37.30 dB, dec -38.04 dB, diff -0.74 dB
LEVELS 1200Hz to 1300Hz: raw -36.94 dB, dec -37.82 dB, diff -0.88 dB
LEVELS 1100Hz to 1200Hz: raw -36.51 dB, dec -37.50 dB, diff -0.99 dB
LEVELS 1000Hz to 1100Hz: raw -35.97 dB, dec -37.02 dB, diff -1.05 dB
LEVELS 900Hz to 1000Hz: raw -35.33 dB, dec -36.35 dB, diff -1.02 dB
LEVELS 800Hz to 900Hz: raw -34.62 dB, dec -35.52 dB, diff -0.90 dB
LEVELS 700Hz to 800Hz: raw -33.88 dB, dec -34.58 dB, diff -0.70 dB
LEVELS 600Hz to 700Hz: raw -33.12 dB, dec -33.53 dB, diff -0.41 dB
LEVELS 500Hz to 600Hz: raw -32.23 dB, dec -32.32 dB, diff -0.09 dB
LEVELS 400Hz to 500Hz: raw -31.04 dB, dec -30.94 dB, diff +0.10 dB
LEVELS 300Hz to 400Hz: raw -29.68 dB, dec -29.70 dB, diff -0.02 dB
LEVELS 200Hz to 300Hz: raw -28.22 dB, dec -28.68 dB, diff -0.46 dB
LEVELS 150Hz to 200Hz: raw -28.53 dB, dec -29.00 dB, diff -0.47 dB
LEVELS 100Hz to 150Hz: raw -28.44 dB, dec -28.60 dB, diff -0.16 dB
LEVELS 50Hz to 100Hz: raw -29.81 dB, dec -30.12 dB, diff -0.31 dB
LEVELS 40Hz to 80Hz: raw -31.63 dB, dec -32.08 dB, diff -0.45 dB
LEVELS 20Hz to 50Hz: raw -37.57 dB, dec -38.17 dB, diff -0.60 dB
LEVELS 20Hz to 30Hz: raw -46.07 dB, dec -46.69 dB, diff -0.62 dB
LEVELS 10Hz to 30Hz: raw -46.19 dB, dec -46.81 dB, diff -0.62 dB

 

 

 

 

 

 

Link to comment

I plan for there to be two sets of demos for V6.0.4S.

Version 0 will be done in the highest possible quality.   The detail is profoundly more clean than anything before.   Definitely not more 'bright', just more clean sounding.

Version 1 will be done in a 'high' quality mode, very similar to normal decodes.

 

The '=max' modes were broken for a few months, the bug being very tricky to find.   It ended up being a hard-to-notice spelling error in the C++ code; after correcting it, the '=max' modes work great.   So, for spending perhaps 1.5X more time to perform a 'decode', you get a cleaner sounding result.   '--xpp=max' might sometimes be better than '--xppp', but using '=max' is very different from a step-up in the general quality mode (--xp to --xpp, for example).   In fact, the quality improvements don't just 'add'; they 'multiply'.   At the highest quality modes, e.g. '--xppp=max --dp', or for the very patient '--xpppp=max --dp', the result is incredibly clean.   Using '=max' on earlier versions of the decoder would sometimes just clarify an error; since the decoder now makes many fewer errors, the improvement is more than obvious.

 

When listening to the 'Version 0' kind of decode as it was coming off of the 'assembly line', it seemed that the result was distractingly clean.   I did careful A/B comparisons, and there were no actual differences in the 'sounds', but the V0 type of decode had fewer 'disturbances' and less 'fog' in the sound.   Portions of the sound that had previously been inaudible -- the little details -- are now audible.  The details aren't 'enhanced' and nothing is 'added'; instead something has been removed -- a kind of veil in the sound, sometimes called 'fog', is very noticeably lifted.

 

For those interested, the comparison might be enlightening.   The comparison has certainly been a 'wake-up' for me.

I am hoping for the release in 10Hrs, perhaps a little before.   Everything will finish up before then, but I need to do careful reviews, and I will pull back if there are any problems.

 

 

 

Link to comment

Got some feedback from one of the reviewers here on the AS forum, and he made a comment that opened up a GOOD can of worms -- yum yum.   Anyway, the new 250/150Hz oriented phase shifts probably had a 'delta frequency' error.   Originally, on the '4S' release (still decoding), the phase shift frequencies were 250&225Hz and 125&150Hz, but they probably should be 250&200Hz and 125&150Hz.  (These phase shifts help AMAZINGLY to create the most proper stereo image that most of us have recently heard.)  The phase twist left over after a 'straight' FA decode always seemed to create a 'stark', 'unreal' sound.   Interestingly, there is something odd about the FA decoding, where the 'twist' isn't all that apparent.  It certainly made the EQ at 250Hz and 125Hz somewhat more challenging than it should have been.
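
For readers wondering what a 'phase shift at 250Hz' can mean in practice: one standard building block is an allpass filter, which rotates the phase around a chosen frequency while leaving the magnitude response untouched (which is also why this kind of change never shows up in a frequency response measurement).  The sketch below is the common RBJ-cookbook 2nd-order allpass; it is only an illustration of the general idea, NOT the decoder's actual descrambling filter.

```cpp
// Sketch of a 2nd-order allpass "phase twist" (RBJ cookbook coefficients).
// The magnitude response is flat; only the phase is rotated around f0.
// Illustration only -- NOT the decoder's actual descrambling filter.
#include <cmath>

struct Biquad {
    double b0 = 0, b1 = 0, b2 = 0, a1 = 0, a2 = 0;  // normalized so a0 == 1
    double z1 = 0, z2 = 0;                          // state (transposed direct form II)

    double process(double x) {
        const double y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// Allpass centered at f0 Hz with quality factor q, at sample rate fs.
Biquad makeAllpass(double fs, double f0, double q)
{
    const double w0    = 2.0 * 3.141592653589793 * f0 / fs;
    const double alpha = std::sin(w0) / (2.0 * q);
    const double a0    = 1.0 + alpha;
    Biquad f;
    f.b0 = (1.0 - alpha) / a0;
    f.b1 = -2.0 * std::cos(w0) / a0;
    f.b2 = (1.0 + alpha) / a0;
    f.a1 = -2.0 * std::cos(w0) / a0;
    f.a2 = (1.0 - alpha) / a0;
    return f;
}
```

Used at, say, 250Hz and 125Hz, a filter of this kind changes only the phase relationships in the upper LF, consistent with the earlier observation that this part of the processing cannot be seen by measuring frequency response.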

 

Given this change, and the need for at least 2-3Hrs of verification, the release originally planned for +2Hrs from this posting time will be +14Hrs from now.   It might be further delayed if someone comes up with more *REALLY GOOD, CONSTRUCTIVE IDEAS*.   I have promised myself that this release will have NO known nits, bugs, eccentricities, etc.  Of course, there might always be bugs, but there will be no known audio processing bugs.

 

So, 2100 EST USA time is nixed, and the release will be tomorrow at 0900 EST USA time.

 

Link to comment

The release must be delayed -- explanation...

 

During the last-minute changes to accommodate the comments/critiques made by the reviewers, I made an omission in a section of code.

 

The critical parts of the anti-distortion needed to be re-inserted, or the results were grainy.   In fact, most previous releases in the last week or so had the 'missing anti-distortion' syndrome.

 

The critical parts of the anti-distortion are inserted into certain blocks of the LF, MF and HF output and inter-layer EQ.  This morning, when listening very carefully to the decoding results, the effects of partially-implemented anti-distortion became obvious.   Frustratingly, the only place where the missing anti-distortion has a strong effect is the HF related to vocal sounds.  The result is that the sound is miraculously clean except for certain aspects of vocals.   It is a bit disconcerting to have a really clean background, then a distorted vocalization (not sibilance.)  It gives a sense that the voices are buzzing.   The fix is trivial, but I missed adding it in when re-implementing the anti-distortion.

 

Since I have promised that this release will have no known bugs, I must delay the release.   I'd be very dismayed to delay the release for 12Hrs (my next normal release time), so I'll probably try to make it available 6 to 9Hrs from now.   I couldn't hear the problem last night.   My morning hearing is amazingly acute, but starts getting weak about 1Hr after waking up.   I will be inserting the anti-distortion in the missing place and starting the decodes RIGHT NOW.

 

The 'V6.0.4T' decoder has already been uploaded, but the demos haven't been yet.   I'll be uploading the 'T' demos, but they are not an actual 'release' package.   The 'T' demos will be coming in about 2Hrs.  Those will be delayed about 1Hr because I must focus on adding in the missing block of anti-distortion!!!


SORRY ABOUT THIS...    Regretfully, I made a mistake/omission!!!

 

 

Link to comment

REL5Final-V6.0.4W is hopefully *FINAL* for REL5.

There will be a more public release in about 1wk, but it will be the same thing, just with more associated documents.

 

V6.0.4W demos and decoder.

https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0

 

The 'private requests'  will be available in a few hours.

 

This has been a rough one, because the standards were so very high.   Any nits, any nits at all, weren't going to be tolerated.   If there was a nit, it had to be well understood.

 

The hold up was about the passive anti-distortion, and it is NOT optional for the best results.   Also, when adding it back in, I found that it is useful only in a few strategic places, and then the last go-around came from the fact that I forgot to add it back into another strategic spot.

 

The last real additions before adding the anti-distortion back in were phase descramblers both in the upper LF (approx 250Hz) and a descrambler comprised of simple little twisty EQs at 6k, 9k, 12k, 18k, 24k, and 30k.  These all correspond to the HF output EQ frequency steps, except the 9kHz step is repeated to get a bit of a 'downturn' in response -- thereby eliminating part of that 'excess HF' problem that was happening early on.   There are lots of little details needed, and NONE of the EQ can easily be seen by measuring frequency response, partially because a very important part of the EQ is phase descrambling.

 

Without the anti-distortion, the decoder output sucks.

 

I am totally amazed at whoever originally designed the FA scheme, esp considering it was probably done in the early 1980s.

 

 

Link to comment

Very high, top-notch decoding quality versions of the demos have just been uploaded to the same places.  The decoding wasn't done in the very highest quality mode, but in the highest practical quality.  The higher decoding quality and the mitigation of modulation distortion (fog) make the details more obvious.  Of course, sometimes 'details' are actually 'problems' in the recordings.   It *is* interesting to hear some details unveiled, not having realized that the specific sound was in the recording all along.   There is ZERO intentional enhancement in the decoder; it is all based on reversing a process along with 'mending' the signal -- not trying to change emphasis in any way.

 

Since the decoder NOW really does have frequency characteristics similar to the original, listening is actually more interesting to me.  In the last one or two releases, it has been very difficult to quit listening during real-time decodes of my favorite listening material.   If I don't quit realtime listening, then decoding the demos requires significantly more time.

 

About the electronic frequency response characteristics -- of course, measuring the frequency response using a sine wave as a source produces results with lots of error.  Improved results are obtained by using composite test signals, with a few caveats of course.   I found that a truly flat response, as measured by my hacked-together measurement method, doesn't sound good; using heroic methods to force a flat response produces bad results.   But an engineering-wise approximation based upon an attempted flat response DOES produce a very nearly equivalent frequency response balance.  Unfortunately, the easily measured frequency response balance alone is NOT sufficient for the decoder output to sound right.  Since, on the encoded FA material, the phase is scrambled before compression, the sound immediately after expansion hasn't sounded 'right'.   By scrambling the phase during the original encoding, the peak vs RMS is decreased a little and made a little more consistent.  This keeps the compression from trying to follow a 'bumpy' signal, thereby avoiding excess artifacts.   On the decoding side, without descrambling the phase, the sound would have many of the impairments that were in previous versions of the decoder.  Perhaps most obvious is blown-out sibilance, where the sibilance doesn't sound natural and transiently becomes too loud.   There are other side effects of not descrambling the phase, and those side effects do not sound good.  Before adding special descrambling, I was continually very surprised and frustrated that something was wrong with the decoding result.   Honestly, I started thinking that the recordings naturally had the impairments.  Of course, I was wrong.
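
As a side note on the peak vs RMS remark above: the crest factor (peak-to-RMS ratio) is a simple number that changes when the phase is scrambled, even though the magnitude spectrum stays the same.  A minimal sketch (the function name is my own, not from the decoder):

```cpp
// Crest factor (peak / RMS) of a block of samples, in dB.
// Phase scrambling can lower the crest factor of a signal even though its
// magnitude spectrum is unchanged -- handy as a sanity check alongside
// band-level measurements.
#include <algorithm>
#include <cmath>
#include <vector>

double crestFactorDb(const std::vector<double>& x)
{
    if (x.empty()) return 0.0;
    double peak = 0.0, sumSq = 0.0;
    for (double v : x) {
        peak = std::max(peak, std::fabs(v));
        sumSq += v * v;
    }
    const double rms = std::sqrt(sumSq / (double)x.size());
    return 20.0 * std::log10((peak + 1e-30) / (rms + 1e-30));
}
```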

 

Both the LF (250Hz) and the HF (6k, 9k, 12k, 18k, 24k, 30k) have special phase shifts based on using sharp filters.  I also found the need for another signal modification -- fortunately, early on, I found the 'anti-distortion' scheme, which basically dithers the 1st order EQ over a +-125Hz region.  Without that dither being added to specific parts of the EQ in the 1kHz to 3kHz region, the sound is weirdly distorted in a way similar to the recent decoder release.   An early version of this release was missing one of the dithered EQ layers, and it caused an interesting, oddly distorted 'edge' on vocal sibilance.  I don't know where the need for the passive anti-distortion comes from, but it is really needed for the best quality.   Previous over-designs that added the anti-distortion on every EQ step caused more distortion, but helped to mitigate the effects of phase scrambling.  Until each of the individual impairments was well understood, there was a tension between fixing two different problems.  Now they are resolved, and the proof is in the EXTREME clarity and smoothness of the decoding results.

 

 

 

Link to comment

Found a bug...   This goes to show that even when reviewed, a bug can pass through!!!

Here is the message that I sent privately:
 

----

 

I should have checked the 250Hz change on other headphones.   On the DT990s, the sound is better, but I tried listening on another pair (DT770), and it 'honks'.   I reverted the change, and it quit honking!!!

 

---

 

Gotta do another release, but still gonna have to go through all of the decoding results with the 'honker' removed :-).

I don't fault anyone, esp the reviewer.  I should have checked on other devices, but it REALLY DID sound better on the DT990s, and it REALLY DID flatten the response.  So much for the phase scrambling in the LF -- I WILL check the possibility that the original 250Hz/200Hz, 150Hz/125Hz phase shift isn't correct.  No matter what, the broken fix was MORE FLAT!!!

 

This goes to show that a flat response for decoding isn't always the correct response.

No matter what, the only change will be this LF bug -- sorry about 'honking' at you!!! :-).

The final corrected version will be ready at 9PM EST USA tonight (about +14Hrs.)

 

===================

PS:  the code (upcoming V6.0.4X) now has both the 250 and 125Hz phase shifts, and there is no honking and the midrange is better filled in than otherwise.   I was being 'conservative' about avoiding the 125Hz EQ mod because it made no serious difference in freq response (didn't flatten it any better.)  However, using just the 250Hz made it sound worse, even though the freq response was better.   Both EQ mods together give good freq response and good sound.

 

Rule1:   check on multiple output transducers

Rule2:  on the decoder, flatter is not necessarily better!!!

 

=================

 

 

 

John

 

 

Link to comment

I feel really bad about this, and I know that there are a *few* individuals who are looking forward to the 'X' release to fix the 'honking', but I must delay the fixed release for a day.

 

There was a fundamental LF EQ problem that I couldn't detect on my wonderful headphones.   My less-than-wonderful headphones could detect the problem, which is a perfect example of how defects in the audio pipeline can make other defects more obvious.

 

The problem is NOT onerous, but requires more than 'hack and patch'.   The LF EQ is *definitely* the correct general shape, even with a fairly flat freq response.  But, there are some aspects of the LF EQ which need careful study.  I do understand the basic problem and what the LF EQ needs to do, but there are so many little 'tricky places' where errors can hide.    This EQ correction requires patience that I have in abundance, but also needs discipline, which I must be careful to nurture.

 

For the few of you who continue to be interested, especially those kind people who have helped to review the results, PLEASE HANG IN THERE WITH THE PROJECT!!!

 

I should have this problem fixed in another day, perhaps will hedge my bets by saying that it might take two days.

But, actually, I don't think that it will be delayed more than the single day because the problem is so minor.

 

(The EQ problem is related to misuse of 2nd order EQ to achieve a desired frequency response characteristic.  This 2nd order EQ gives a 'resonant' sound that I could not detect either by measurement or in my primary listening situation.  The answer is a refactoring that keeps the same frequency response -- testing/verifying the results of the refactored EQ is time consuming and tedious.)

 

Again, sorry.   I COULD have rushed this, and am very close to an answer, but the patient users/testers/reviewers DESERVE a careful attempt & solution.

 

 

 

Link to comment

Status after rework:  (all flatness claims are within a very narrow range, certainly better than +-0.5dB)  Other, non-flat claims are described below.

ALL RESULTS DESCRIBE THE EVENTUAL 'FREQUENCY RESPONSE', measured using a non-sine-wave, pseudo-random signal.

 

PREVIOUSLY, the decoder output was nominally flat, but I found out that a flat upper LF/lower MF is INCORRECT.  I felt *very* pressured by some individuals that the transfer function should be flat.  My intuition with a multi-band gain control device was that it would not be flat.  My intuition was correct, but the reasoning about multi-band gain control wasn't totally on target.

 

All in all, flat at -3.0dB except for needing a peak in the upper bass/lower midrange as described below.  The nominal flatness is disrupted at the highest and lowest frequencies because of low energy density and the expander being operative at very low levels.

 

HF is flat between 3k and 10k, with a slight rolloff to 20kHz...   The decrease above 10kHz is to be expected because of the strong decrease in energy in the recording, and FA expansion.   This is VERY good.

 

MF is flat between 1k and 3k, for the first time.   Previously, there was always a +-0.25dB disparity.

 

LF is weird.   I totally forgot about 'flatness', and did a careful set of comparisons over individual ranges approx 200Hz to 1kHz, then 20Hz to 250Hz.   Used numerous test recordings with varying LF behaviors.   Considered the lowest frequencies (feeling of bass, thud, boom, and lower vocal.)  All now match except a little 'wobble' in the lowest bass, with a slight bias towards seeming louder (just a little.)

 

Previously, the bass was relatively flat, within about +-0.25dB, and took significant EQ.  The sound WAS different from FA.   I was fooled by the 'flat response' thing, and also by my biases about how the output should sound.  IT IS VERY EASY TO MISJUDGE COMPARISONS.   This is difficult stuff, and I have been in 'subjective comparison hell' for the last 5yrs!!!

 

Now, in the future 'X' release, the sound in individual bass ranges and as a composite is VERY VERY close to the same.   Since FA was done with 1st order and 2nd order EQ, I knew that the EQ could be matched -- and found out it was NOT so simple!!!   The interlayer EQ also needed to be considered, with each step needing a rolloff below 25Hz.   The EQ steps between each layer are REALLY necessary to avoid too much bass!!!  Previously, the interlayer EQ gave a flat response; now there is more LF rolloff.

 

-------------------------------------------

IN CONVERSATION WITH ANOTHER PARTY:

A day or so ago, you and I were talking about the upper LF and the EQ.  I was fooled by my relatively flat headphones: the results seemed a little full in the upper bass, but appeared to be an almost natural EQ design.  When using my upper bass/lower midrange heavy headphones, the sound 'honked', which says that there was a prominent peak in the output signal.

-------------------------------------------

 

 

Yesterday, I determined that the output is faulty on '4W'.   Given that embarrassing result, last night I did a marathon re-EQ effort, using both headphones (normal speakers are not precise enough), which sound totally different from each other.  I very carefully matched the general sound, the HF phasing, and most importantly the sound of the lowest, low, middle and upper bass, individually and together.  On this re-implementation, I TOTALLY ignored the frequency response curve.  The results below come from the implementation that gives the same bass sound as the FA RAW, but with a slight excess of 'lowest freq bass'.  After numerous very careful comparisons, the bass response in '4X' matches, with exactly the same 'boom' and no 'honking'.

 

When doing these comparisons, it is very easy to make mistakes.  A frequency response judgement is very difficult to hold in memory while comparing.  I have to close my eyes and concentrate -- not painful, but certainly very tedious.   As you know, I feel that it is do-or-die time for the project.   This is an all-out effort rather than just trying hard!!!

 

 

Note that the LF sound is extremely similar (really the same within reasonable bounds), based on two different kinds of headphones plus speakers.   The two headphones sound totally different from each other, with one set having a serious midrange boost.  Now, the decoded sound vs. the FA input has a very very similar balance on both headphones and the speakers...

 

10Hz to 80Hz:   -4.3dB (very low energy density below 30-40Hz)

80Hz to 200Hz: increase from about -4.3dB to -3.1dB

200Hz to 400Hz: increase from -3.1dB to -1.4dB

400Hz to 700Hz: decrease from -1.4dB to -2.3dB

700Hz to 1kHz:  decrease from -2.3dB to -3.05dB

1kHz to 3kHz: approx same from -3.05dB to -3.2dB, average: -3.12dB

3kHz to 6kHz: approx flat -3dB to -3dB (in the -3dB to -2.9dB range, bounces +-0.05)

6kHz to 9kHz: approx flat -2.97dB

9kHz to 12kHz: decrease from close to -3dB to -4dB (HF decrease because of low energy density on test material)

12kHz to 15kHz: approx flat -4dB to -4dB

15kHz to 20kHz: decrease from -4dB to -4.6dB

 

Note the bass rolloff below 80Hz:   reason: signal energy decrease causing downward expansion

low Q peak in the bass between just above 200Hz and 1kHz: reason: don't know (FA design to avoid muddiness?)

1kHz on up, nominally flat.

 

There is very little compression/expansion between 120Hz and 2kHz (filter freqs: 80Hz and 3kHz), moderate expansion from 2kHz to 12kHz (filter freqs: 3kHz, 9kHz), and notably active expansion from 8kHz to 20+kHz (the bands overlap.)
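
As a rough illustration of the band structure being described (definitely NOT the decoder's actual DolbyA-style filters), a subtractive split guarantees that the bands add back to the original signal exactly, so each band can then carry its own gain control:

```cpp
// Sketch of a subtractive two-band split: low = LP(x), high = x - low.
// By construction low + high == x, so per-band gains recombine cleanly.
// The one-pole lowpass and the 3kHz split frequency are illustrative only.
#include <cmath>

struct OnePoleLP {
    double a = 0.0, z = 0.0;
    OnePoleLP(double fs, double fc)
        : a(1.0 - std::exp(-2.0 * 3.141592653589793 * fc / fs)) {}
    double process(double x) { z += a * (x - z); return z; }
};

struct TwoBandSplit {
    OnePoleLP lp;
    explicit TwoBandSplit(double fs, double fc = 3000.0) : lp(fs, fc) {}

    // Apply independent gains to the two bands, then recombine.
    double process(double x, double lowGain, double highGain) {
        const double low  = lp.process(x);
        const double high = x - low;        // complementary by construction
        return lowGain * low + highGain * high;
    }
};
```

Cascading another split on the 'high' output at, say, 9kHz gives a three-or-more-band arrangement along the lines of the band layout described above.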

 

The only serious anomaly is the peak between 200Hz and 1kHz, with a maximum at 400Hz to 500Hz.   THIS DOES NOT APPEAR TO BE A BUG.   The sound is very close to identical between the decoded output and the FA RAW.

 

 

The release is gonna be tonight.    I'll be posting this publicly later on this morning.

Link to comment

After the demos and the 4Y decoder are released,

I'll put together a 'demo reel' that plays the first approx 10 seconds of each recording -- first the FA version, then the decoded version.

 

Now, the decoded version sounds *exactly* the same as the FA version, except the HF dynamics are stronger and the bass is a little tighter (just a little.)  There are also slight differences on sections of recordings at very low levels.  Also, the detail is better maintained on classical stuff.   The full demos will be moderate quality (to save time), but the quick A/B demos will be decoded at the 'full monty' of '--xppp=max --dp'.

 

Some recordings only really start 5 seconds into play.   I'll just write those 5 sec off, because of the amount of effort needed to edit the recording.

 

* HF in low level sections of recordings is also less enhanced on decoded material.  (Sometimes low level material will seem to have less HF -- but it WILL be more natural.)

Also, normalized playback will sometimes make the bass seem a little less strong, but ONLY on select recordings.

Link to comment

The next version of the decoder will have BASS/MIDRANGE/HIGHS that sound EXACTLY like the FA original, with certain variants/improvements.

I really am working diligently for the truly serious HiFI community -- those who know that spending $10k on something will not fix their recordings.

 

Because the output has almost exactly (I mean really exactly) the same LF as the FA original, I am not totally happy with it.   I don't like the FA bass, whether direct from the CD or after decoding.  There is something really wrong with the FA bass.   The EQ input/output transfer function shows that the FA EQ/sound seems purposeful, from the standpoint of modifying the original sound.   Since I now have a PERFECT match to the FA LF sound, I need to do some research into modifying the FA EQ to see if the results can *also* be made to sound the way that I consider correct.

 

This additional evaluation will require yet another day.  I might support both a 'cr*ppy' FA sound option and a 'beautiful' normal sound option (humor intended about the FA bass being cr*ppy!!!)   I NEED TO FIGURE OUT IF I AM MISGUIDED...  Really, really, really need to take another day to feel comfortable.

 

Here is what the decoder sounds like while 100% emulating the FA bass sound:

 

1)  For subtle instruments like violins, the sound of their strings is cleaner, yet the spectral energy is almost 100% identical with the FA original.   That is, the only real difference is that the timing of the various frequency components is a little more in sync.

2)  The bass sounds a little tighter, but the energy is the same.  A drum has more of a drum sound, not a 'splat'.  That is, the highs aren't over-emphasized in a way that gives the drum too much HF energy.   Otherwise, careful checks show that the LF/lower MF energy is the same.

3)  Less hiss, noticeable even on recordings created in the early 1980s.   After that, recording technology got a lot better, and the hiss was less troublesome.

4)  More natural vocals.   The sibilance in the decoding result is now very much controlled.  No more sibilance blow-out, and the sibilance is also better aligned with the vocal.  It just sounds more natural. (Almost all versions of the decoder before 1wk ago had horrible sibilance problems -- magic dust was found.)

5)   Cymbals are more natural -- Al Stewart's stuff is coming closer to what it should sound like.   His material is *serious*, and I am not 100% sure that the decoder is tracking it perfectly, but it is very much improved.

 

Probably a lot more improvements -- I just don't remember them right now.

At the higher quality levels, reasonably good quality classical recordings DO see a very noticeable improvement.  The chorus of instruments changes from a bit of a blob into instruments that you can actually hear.

 

I hear few decoder impairments now, but there is one that is clearly noticeable to me (even with my tinnitus): on a few recordings, if you listen very carefully, there is a slight noise modulation, coming from hiss on the recording being modulated by the gain control.  I hear this on an Anne Murray recording, but it might actually be on the tape.  I am not sure if the noise modulation is caused by the FA decoder.   Also, the Carpenters Bacharach medley does have a lot of hiss to begin with, but there is also a very slight noise modulation associated with it.

 

 

Link to comment

Really, really good news...

Remember that yesterday the release was delayed for a day or two?  The goal was to figure out why, when emulating the FA sound almost exactly, I didn't like it very much.  I found the difference in EQ between the FA sound and FLAT.  The FA sound is nowhere near flat WRT the energy density transfer function, but I found the EQ that converts the FA EQ to FLAT EQ.

 

The EQ is tabulated below -- and it is very simple.  BTW, the result is essentially flat in the sense of expected behavior (not quite technically flat, but there IS an expected difference between the 1kHz-3kHz range and the 20Hz-1kHz range, and the difference is approx 1.5dB at the limits.)  That is much expected given the required EQ for DolbyA gain stability when the units are cascaded.  (There is a difference in thresholds between the bands.)

 

Here it is:  (remember, this is just the conversion from FA to FLAT)

1 1st order LF EQ at 1kHz, -3dB

1 2nd order LF EQ at 1kHz, -1.5dB, Q=0.577

1 2nd order LF EQ at 1kHz, -1.5dB, Q=0.8409

1 2nd order LF EQ at 500Hz, 1.5dB, Q=0.577

1 2nd order LF EQ at 500Hz, 1.5dB, Q=1.414

 

On the above, the Q values might not be 100% correct.  Changes to Q values do not show up very well in my coarse measurements, so the exact Q values are subject to listening tests.  I started with FA as a base because that was the previous pre-release decoder behavior.
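
To make the tabulated entries concrete, here is a sketch that turns them into biquad coefficients using the common RBJ-cookbook low-shelf form.  I'm assuming that '2nd order LF EQ' means a low-shelving filter, which may not match how the decoder actually builds its EQ, and the 1st-order 1kHz stage is left out of the sketch.

```cpp
// Sketch: biquad coefficients for the 2nd-order entries of the FA->FLAT EQ
// table above, using the common RBJ-cookbook low-shelf form.  Assumes that
// "2nd order LF EQ" means a low shelf; the decoder's real filters may differ.
#include <cmath>
#include <cstdio>

struct Coeffs { double b0, b1, b2, a1, a2; };   // normalized, a0 == 1

Coeffs lowShelf(double fs, double f0, double gainDb, double q)
{
    const double A     = std::pow(10.0, gainDb / 40.0);
    const double w0    = 2.0 * 3.141592653589793 * f0 / fs;
    const double cw    = std::cos(w0);
    const double alpha = std::sin(w0) / (2.0 * q);
    const double k     = 2.0 * std::sqrt(A) * alpha;
    const double a0    = (A + 1.0) + (A - 1.0) * cw + k;

    return {
        A * ((A + 1.0) - (A - 1.0) * cw + k) / a0,    // b0
        2.0 * A * ((A - 1.0) - (A + 1.0) * cw) / a0,  // b1
        A * ((A + 1.0) - (A - 1.0) * cw - k) / a0,    // b2
        -2.0 * ((A - 1.0) + (A + 1.0) * cw) / a0,     // a1
        ((A + 1.0) + (A - 1.0) * cw - k) / a0         // a2
    };
}

int main()
{
    const double fs = 44100.0;
    const struct { double f0, gainDb, q; } table[] = {
        { 1000.0, -1.5, 0.577  },
        { 1000.0, -1.5, 0.8409 },
        {  500.0,  1.5, 0.577  },
        {  500.0,  1.5, 1.414  },
    };
    for (const auto& e : table) {
        const Coeffs c = lowShelf(fs, e.f0, e.gainDb, e.q);
        std::printf("%6.0fHz %+4.1fdB Q=%.4f: b0=%.6f b1=%.6f b2=%.6f a1=%.6f a2=%.6f\n",
                    e.f0, e.gainDb, e.q, c.b0, c.b1, c.b2, c.a1, c.a2);
    }
    return 0;
}
```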

 

What can one infer from these numbers above:

1)  FA likely has modified EQ

2) the decoder must support a 'flat' and 'FA' mode.  This will allow the preference to be controlled.

 

Since the decoder supports pre-defined command line settings in the file 'da.ini', one will be able to specify the default kind of EQ based on one's taste.  Of course, that default can be overridden by an explicit specification on the command line.   I am still not sure which mode should be the internal default, but I'd suspect that, since most people are accustomed to the FA sound, the decoder will produce that sound as the default.

 

THIS IS A BREAKTHROUGH FOR THE PROJECT!!!

The release will be produced soon, once I have figured out the best way to support these differences.

 

 

Link to comment

This is a message that I just sent to one of the reviewers as a response to a question/comment that correctly implies that the encoding process, and where it happens, doesn't make sense...  Here is my response (I tried to avoid any comments that would spill the identity of the other correspondent).

 

 

I think that you have a sense of my frustration about where it is happening.   We all have some evidence that the decoder is able to 'decode' the material.   I could tell that something very consistent was happening even almost 10yrs ago, but didn't know exactly what until perhaps 4-5yrs ago, and not all of the details, of course, until recently.

 

I can think of three major places:

1) mastering

2) by the distributors

3) mass CD production

 

NONE of it seems to make sense, but we have existence proof.  The decoder (my project) is NOT the product of genius, nor the most amazing expander ever made -- the 'decoder' is really just a decoder.   The only 'genius' is the designer of the processing scheme; there is real, true audio genius in the design.   R. Dolby is likely the inventor.  I know of NO ONE else who might have the brilliance.   There are some really good 'smarts' in how the encoding process works.   Some of R. Dolby's patents lock up any HW implementation, and Sony has a patent that locks up the obvious, semi-direct design of DolbyA HW.  (I just figured out a way to do the DolbyA decoding in a really strange, perverse way.)  It is very possible that R. Dolby's patents were strategically created to keep people from undoing the FA process...  My FA decoder design, especially, is NOT limited by the HW patents.

 

The only way that it could be done in mastering is if there are 100's of liars, or people tied up with trade secrets (NO, NOT A GOOD ANSWER); the same goes for the recording distributors, and not everyone uses mass CD production.   The encoding device is NON-TRIVIAL, and there must be only 10-20 of them in the world.   It would be a rack of hardware at least 4U, perhaps 8U, full of DolbyA cat22 cards (6 or 7 of them, as mentioned before.)   The encoder *is* doable, but a crazy thing.   Any encoder must be done with almost exactly the same design...

 

All I can say is...if you can come up with a good idea, I'd appreciate it a LOT,  I mean it!!!

 

(One more thing about the encoder design:

It is possible that the *encoder* design is based on some concepts of DolbySR.  The decoder might only be a way to mostly/partially undo the encoding.  I make no claims that the decoder is undoing everything, just that it seems like the FA decoder undoes most of the encoding.   If this was a DolbySR type design, it would take perhaps 1/3 of the HW compared to bundling 7 cat22 units together.)

 

Link to comment

Here is an example of the 'fog' that I keep writing about.  It is subtle, but the attached example shows it pretty well.

I am in the midst of preparing a release, and while testing/modifying, noticed this as a  reasonably good example.

 

It is difficult to describe what to listen for in the differences, but measurements show that both versions have very similar dynamics characteristics, yet they might sound noticeably different.   A hint for noticing the fog: listen for the smallest details on the 'fogrem' version (fog removed).   Then, listen for those same details on the 'foggy' version.   You might notice that the details are almost audible in the foggy version, but there is something distracting/confusing in the sound, so that the details are hidden.   This 'fog' is the sidebands that I keep writing about.  The modulation sidebands ARE the fog, a kind of distortion that is different from the normally thought-of nonlinear effects.  (For a single tone at frequency fc whose gain wobbles at a rate fm, the multiplication creates new components at fc-fm and fc+fm -- those new components are the sidebands.)   The anti-fog (anti-distortion) technology was used to create the 'fogrem' version, while the 'foggy' version does the simple gain control multiplication (signal * gain.)

 

Another side effect of the modulation sidebands: not only are the low level details fogged out, but the peaks are also 'cancelled'.  There are both high level and low level effects of the modulation sideband thing.   Both effects should be noticeable on these examples.

 

* BOTH EXAMPLES ARE CREATED WITH EXACTLY THE SAME DRIVING FUNCTION FOR GAIN CHANGE, and the 'foggy' version is actually *optimized* for minimum fog energy.  Since the decoder can theoretically create as much fog as 7 DolbyA units, a simple gain control operation could render a recording unlistenable.  SO, even the 'foggy' version needs optimized gain control -- it is just brute-force optimized instead of sideband-cancelled.

 

These snippets are very short, and intended for doing a very fast A/B comparison.  This allows the brain's 'buffer' memory to be used -- details are more easily compared with very short, immediate comparisons.

 

Also, these decoding results are still not exactly where I want them to be, but they are very close.  The sibilance in ONJ's voice still sounds a bit 'wrong'.

 

fogrem.flac foggy.flac

Link to comment

Good news -- I think we have it down to 5 layers, which makes it sound better and makes the decoder faster.

Speed has been a major problem on the decoder, and that is VERY ACTIVELY on my mind.

 

Layer reduction is easier now since the decoder has become very accurate.  Errors in the layering are more obvious and absolute.  Some of the layers weren't active anyway, and removing the inactive layers helps to make the decoder run faster and provide better quality.

 

WRT the previous 'foggy' comparison demo -- the upcoming decoder release will be *more* clean than the 'fogrem' (fog removed) version.

Of course, to clarify - BOTH versions were decoded.  The difference is in how the gain change is applied to the signal.

 

Below -- a few notes about the anti-distortion implementation...  Read at your own risk...

 

The 'anti-distortion' signal calculation does the 'holy grail' of 'wait until the zero crossing to apply the gain'.  However, the decoder doesn't misguidedly try to do the impossible 'zero crossing' trick directly in the normal time domain.  Instead, it does the gain change in the strange world of analytic signal space, cancelling the sideband as it is created.  So, even though the goal isn't 'wait for the zero crossing', that is the end effect of the method.   Of course, doing the 'zero crossing' trick directly is a fool's errand; the code achieves it by happenstance, as a consequence of the sideband reduction.   On the decoder, the technique is used only in the 3kHz to 20kHz+ range, with actually some leaking down to about 1.5kHz or so.

 

In the decoder application, using the technique in the 20Hz to 600Hz range will NOT be helpful, because distortion created during the original encoding needs to be cancelled.  Human hearing is very sensitive to wave shape below 600Hz or so, so the distortion really needs to be cancelled there, and the decoder must do the 20Hz to 600Hz gain control by traditional methods.   If one were doing a perfect compressor or a 'perfect' expander, then the technique would probably be helpful down to the lowest audio frequencies.   There are caveats in trying to reach down to 20Hz, including an almost impossible 20Hz to 20kHz, super accurate 90deg phase shift, with a window that needs approx 2-3X the number of taps that a normal Hann window needs.   That would likely be a 12000 to 15000 tap Hilbert transform (at 66.15kHz).   Eeek!!!   There are better ways to effectively do the full wideband gain control than to use a direct method.   Band splitting can be used to chop the transform calculations up, then a decimation/interpolation scheme can handle the low frequencies.   Doing the high quality gain control in frequency range segments does have additional considerations/challenges, but they are not difficult to overcome.   The details of band-splitting in the decoder follow...
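
For anyone curious what the 90-degree phase shifter (Hilbert transformer) mentioned above looks like as code, here is a minimal sketch of generating the taps of a windowed FIR Hilbert transformer.  The tap count and the plain Hann window are illustrative choices only; they are not the decoder's actual window or length.

```cpp
// Sketch: taps of a windowed FIR Hilbert transformer (Type III, odd length).
// Filtering a signal with these taps approximates a 90-degree phase shift;
// together with a copy of the input delayed by (numTaps-1)/2 samples it
// forms the analytic signal.  Tap count and Hann window are illustrative.
#include <cmath>
#include <vector>

std::vector<double> hilbertTaps(int numTaps /* odd */)
{
    const double pi = 3.141592653589793;
    const int M = (numTaps - 1) / 2;
    std::vector<double> h(numTaps, 0.0);
    for (int n = 0; n < numTaps; ++n) {
        const int k = n - M;
        if (k % 2 != 0) {
            // Ideal Hilbert response 2/(pi*k) on odd offsets, Hann-tapered.
            const double ideal = 2.0 / (pi * k);
            const double hann  = 0.5 * (1.0 - std::cos(2.0 * pi * n / (numTaps - 1)));
            h[n] = ideal * hann;
        }
    }
    return h;
}

// Imaginary part of the analytic signal: y[i] = sum_k h[k] * x[i - k].
// The matching real part is simply x delayed by (numTaps-1)/2 samples.
std::vector<double> hilbertFilter(const std::vector<double>& x,
                                  const std::vector<double>& h)
{
    std::vector<double> y(x.size(), 0.0);
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t k = 0; k < h.size() && k <= i; ++k)
            y[i] += h[k] * x[i - k];
    return y;
}
```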

 

On the FA decoder, in the very highest quality modes (--xpp=max, --xppp=max or --xpppp=max), the 3k to 9kHz band is chopped into 3 segments and the 9kHz to 20+kHz band is chopped into 2 segments.   Since the 3kHz to 9kHz band really means 1kHz to 20kHz (because of the very wide skirts of the DA band pass), chopping the *nominal* 3kHz to 9kHz band into 3 segments makes sense.   The frequency ranges are split up into approximately even *energy* chunks, and gain control is applied to each sub-band 'chunk'.  Then, before recombining, the bands/sub-bands are pre-filtered.   Implementing all of these sub-band segments means that the 'sample packets' being passed around through the decoder aren't just a signal sample and the gain; instead there are 7 different parts of the 2-channel signal, and 8 gain values for the highest level bands.  (A gain value for each band (4) on each channel (2), for a total of 8 gain values for every sample.)   All of this state information needs to be passed amongst the threads.   Some aspects of the signal also must be separable from the payload carrier for timing reasons.
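
To picture what a 'sample packet' carrying multiple sub-band parts plus per-band gains might look like, here is a purely hypothetical struct.  The counts follow the description above (7 sub-band parts of the 2-channel signal, one gain per band per channel, i.e. 8 gains per sample); the decoder's real data layout is certainly different.

```cpp
// Hypothetical layout of a per-sample "packet" passed between worker threads.
// The counts follow the description above (7 sub-band parts of the 2-channel
// signal, one gain per band per channel = 8 gains per sample); the decoder's
// real data structure is certainly different.
#include <array>
#include <cstdint>

constexpr int kChannels    = 2;  // stereo
constexpr int kSignalParts = 7;  // sub-band pieces of the signal
constexpr int kGainBands   = 4;  // top-level gain-control bands

struct SamplePacket {
    // Sub-band pieces of the stereo signal, recombined after gain control.
    std::array<std::array<float, kChannels>, kSignalParts> parts{};

    // One gain value per band per channel -> 4 x 2 = 8 gains per sample.
    std::array<std::array<float, kChannels>, kGainBands> gains{};

    // Sample index, so timing-critical side information can be re-aligned
    // with the payload after passing through different threads.
    std::uint64_t sampleIndex = 0;
};
```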

 

 

 

 

 

 

 

Link to comment

 

I was going to write a long message describing the work being done right now, but instead I'll describe where the release stands right now.

 

The general EQ is now both flat and compares very favorably with the FA original, with the general sound definitely superior.  On the version planned to be released, the general response balance is IDENTICAL to the original.  Some aspects are cleaner/clearer, and I am still concentrating on the last steps of clarifying the sound.

 

The FA scheme does its fair share of phase scrambling (probably for some peak-to-average improvement.)  It does just enough to make a very accurate result challenging/tedious to attain.  Phase scrambling is not easy to see on a magnitude response measurement, but some kinds of scrambling do have some effect on the magnitude response.   This hopefully final step is descrambling the HF.  I didn't recognize/realize this scrambling until recently, and didn't figure out the correct filter functions until a few days ago.  The most obvious defect that the descrambling 'fixes' is distorted sibilance, but brass instruments and other intense sources are also improved and made a little more stable in the sound field.   An approximate answer has already been found, but finding the actual, 'canonically correct' answer takes time.   It is important to be careful because the wrong answer can be damaging to some recordings.   Even trying to be careful, mistakes can (and will) sometimes be made.

 

After the answer for descrambling is found, and the comparisons of the decoder output are bounced back and forth a few times, it will be time to try for a release.  A 'release try' means building the Linux/Windows versions, doing the demo decodes, checking the decodes as they come off of the 'assembly line', and, if everything is okay, sending the files to the distribution staging location.

 

It isn't uncommon for the demo builds to be aborted because of decoding problems.  Some 'releases' have had to be retried more than 2 or 3 times.  After a release is stopped, the first thing is that the version ID is incremented, and then the bug gets fixed.

 

In the future, decoding the demos will be noticeably less time consuming -- only 5 layers instead of 7.  Should get approx a 7/5 speedup!!!

Hoping to start preparing the demos tonight, but as anyone who knows me can tell, I seem to have made a lot of mistakes in the last few years... Sorry about that...  Really trying...

 

 

 

 

Link to comment

About the 'fog' posting -- I want to make it 'clear' that 'fog' is not easy to perceive, and not everyone will perceive it.

I just got motivated to start thinking about a way to quantify the 'fog distortion'.  'Fog' is not a sound, and it is really useless to listen for a sound.

If anything, 'fog' is an 'anti-perception' effect.   The original energy IS ALWAYS in the recording, but it is 'blurred' at an audio-frequency rate by the fog.  So, your ears hear the sound as a 'spread' signal and, at low levels, are not able to perceive it.   With anti-fog processing, the previously obscured sounds are made more obvious.

 

When testing for something like fog, it is best to listen to the 'de-fogged' signal first.   It seems to be easier to detect something missing instead of something 'new'.

 

Also, most important to me, because of self-respect and respecting others -- it is okay to say 'I don't hear the fog', because I initially believed that it was 'golden ears going crazy' too.

 

The very nice thing about the anti-fog mode:  it can be disabled, thereby giving noticeably faster decoding operations.  The normal decoding mode also goes to great lengths to avoid adding new fog.   The gain control signal is carefully crafted to avoid 'swishing' across the signal in a haphazard way, thereby avoiding fast modulation of the signal waveform.  It is the repeated, persistent back and forth fast modulation that creates the fog-blurring.  If the modulation is carefully controlled, then the added energy can be minimized.
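
As a generic illustration of what 'carefully controlling the modulation' of a gain signal can mean (this is textbook dynamics-processing practice, not the decoder's actual gain-shaping method): limiting how fast the gain is allowed to move directly limits how far the modulation sidebands can spread.

```cpp
// Generic illustration of limiting how fast a gain-control signal can move
// (one-pole attack/release smoothing).  This is NOT the decoder's actual
// gain-shaping method, just the textbook way to keep a gain signal from
// "swishing" rapidly across the audio and generating extra sidebands.
#include <cmath>

struct GainSmoother {
    double attackCoef, releaseCoef, g = 1.0;

    GainSmoother(double fs, double attackMs, double releaseMs)
        : attackCoef(std::exp(-1.0 / (fs * attackMs * 0.001))),
          releaseCoef(std::exp(-1.0 / (fs * releaseMs * 0.001))) {}

    // Move the smoothed gain toward the raw target gain: quickly when the
    // gain must fall (attack), slowly when it is allowed to rise (release).
    double process(double targetGain) {
        const double coef = (targetGain < g) ? attackCoef : releaseCoef;
        g = coef * g + (1.0 - coef) * targetGain;
        return g;
    }
};
```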

 

The release is coming soon.   I found a fundamental mistake where the 'threshold' (calibration level) is erroneous.  It is a bit tricky to find the single correct calibration level when there are so many levels that sound correct.   There is a periodic nature to the 'good sounding' calibration levels.  Right now, I am exhaustively checking for the singularly correct setting.

 

 

Link to comment
