
Blue or red pill?



20 hours ago, mansr said:

I upsampled the 6 analogue captures of the music track 64x (to ~11MHz) and aligned them. Then I plotted the power spectra for the differences between each AA and BB pair, three of each.

 

 

 

I'm not following these plots - are you saying you achieved nulls of 140dB down between the versions when sufficiently aligned?
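
For anyone who wants to poke at this themselves, the procedure as I read it - upsample both captures, line them up on the cross-correlation peak, then look at the spectrum of the difference - can be sketched in a few lines of Python. The file names, the soundfile/scipy toolchain, and doing it on a short clip (a full-length track at ~11MHz would need chunked processing) are my assumptions, not mansr's actual code:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly, fftconvolve, welch

def load_upsampled(path, factor=64):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x[:, 0]                        # work on one channel
    return resample_poly(x, factor, 1), fs * factor

a, fs_hi = load_upsampled("capture_A1.wav")   # hypothetical file names
b, _     = load_upsampled("capture_A2.wav")

# Coarse alignment via the peak of the cross-correlation.
n = min(len(a), len(b))
corr = fftconvolve(a[:n], b[:n][::-1])
lag = int(np.argmax(corr)) - (n - 1)
if lag > 0:
    a = a[lag:]
else:
    b = b[-lag:]
n = min(len(a), len(b))

# Power spectrum of the residual (the "null") vs. that of the signal itself.
f, p_diff = welch(a[:n] - b[:n], fs=fs_hi, nperseg=1 << 16)
_, p_sig  = welch(a[:n], fs=fs_hi, nperseg=1 << 16)
print(f"residual relative to signal: {10 * np.log10(p_diff.sum() / p_sig.sum()):.1f} dB")
```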

Link to comment
9 hours ago, psjug said:

What do you mean by glitch?  Something else besides the ultrasonic stuff?

 

Most of the ultrasonic stuff - the obvious noisy content was filtered out, but there was a residual murmuring going on just above 20kHz ... was this significant, or was it an artifact of the recording process?

Link to comment
8 hours ago, psjug said:

If there is any problem in that band it seems to be the same on A and B.  Diffmaker is only showing file differences in the higher ultrasonics, where random noise does not subtract out.


 

I'm looking for patterns in what's going on ... and this sometimes needs several cycles of looking at what one has, over time - one jumps to an early wrong conclusion, or thought, which fizzles out - which doesn't mean discarding the data one has; rather, a revisiting is required, using another technique.

 

In the end it may turn out that the operation of the recorder itself is causing too much interference, masking or disturbing vital details - I have been aware of this on several occasions; just having the circuitry of the monitoring device active alters the environment too much, and you lose what you're trying to measure. This always has to be considered: https://en.wikipedia.org/wiki/Observer_effect_(physics).

Link to comment
13 hours ago, manisandher said:

 

The system that we used for the A/B/X? Sure. The analogue captures sound massively degraded compared to the original file (and to the spdif captures). They seem to have lost the cues I heard in the A/B/X.

 

Mani.

 

Which confirms something I noted earlier - just on a visual comparison between the digital and analogue captures in a 'complex' area of the track, the analogue version has been "rounded", just enough to pick by eye - as if a piece of fine sandpaper had been run along the digital original, taking off the pronounced 'burrs'.

Link to comment
10 hours ago, lmitche said:

The sibilance in Patricia Barber's voice is natural, I could hear it directly, but it's not really noticeable unless one focuses on it. But it is there nevertheless, so don't expect zero sibilance on her recordings. That would not be right.

 

Sibilance is an excellent starting point for learning to hear distortion - it's part of human speech, but less-than-competent playback makes it irritating and annoying because of the added distortion. I have a vast range of recordings with vocals, and on none of them is there a "sibilance problem" - because I work to eliminate the system issues which can make this artifact so off-putting.

Link to comment
30 minutes ago, sandyk said:

 

You can't get rid of excessive sibilance due to too-close miking and hard limiting.

You can reduce it, however, with better-than-average playback.

The "Norah Jones - Come Away With Me" album is a good example of this.

 

What is being reduced is the seasoning added by the playback's distortion contribution - too much salt and pepper can ruin the finest cooked piece of meat, and it will never rescue a very average slice of beef.

 

Vocals are typically not limited; it's the transients of the backing instruments which are kneecapped, and this can disturb the sense of the piece. But female opera singers going for the big note can be a major struggle for many systems - I've had mine shut down because the amplifier overheated while producing the note.

Link to comment

Okay, it appears to be agreed that the analogue captures are too imperfect to be of real use - for identical sections of the track, the '3. analogue capture _ A' version compared to the '1. digital capture _ A' version shows a steady loss of spectrum energy starting at about 3kHz, reaching a 3dB loss by 20kHz. IOW, the HF content has been substantially compromised, which correlates with the visual loss of HF on the analogue waveform - this means the two are highly likely to sound different.
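
For reference, the sort of quick check behind those numbers can be done along these lines - the .wav extensions, the section offsets and the Welch settings are placeholder choices of mine, and the comparison is referenced to 1kHz so any overall gain difference between the captures drops out:

```python
import numpy as np
import soundfile as sf
from scipy.signal import welch

def spectrum_db(path, start_s, dur_s):
    """Smoothed power spectrum (dB) of a section of a file."""
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)
    seg = x[int(start_s * fs): int((start_s + dur_s) * fs)]
    f, p = welch(seg, fs=fs, nperseg=8192)
    return f, 10 * np.log10(p + 1e-20)

f_dig, dig = spectrum_db("1. digital capture _ A.wav", 30.0, 10.0)
f_ana, ana = spectrum_db("3. analogue capture _ A.wav", 30.0, 10.0)

def level_at(f, s, freq):
    return s[np.argmin(np.abs(f - freq))]

# Reference the comparison to 1kHz so an overall level offset drops out.
offset = level_at(f_ana, ana, 1000) - level_at(f_dig, dig, 1000)
for freq in (3000, 10000, 15000, 20000):
    d = level_at(f_ana, ana, freq) - level_at(f_dig, dig, freq) - offset
    print(f"{freq:>6} Hz: analogue - digital = {d:+.1f} dB (re 1 kHz)")
```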

 

Which part of the test setup is at fault is another matter ...

Link to comment

Another quick look, at the spectra - this capture differs from the Tascam at the HF end; it gains 3dB at 20kHz! It starts to diverge from the digital at about 7kHz, and actually drops a dB or two around 10kHz. IOW, the shape of the HF difference is almost the inverse of the Tascam capture's; suggesting, if nothing else has changed, that the ADC is the main "culprit".
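
And to see the shape of the difference rather than spot values, the same spectra can be subtracted across the whole band and plotted - the helper is repeated from the sketch above so this stands alone, and the second file name is purely a placeholder, just to illustrate the idea:

```python
import numpy as np
import soundfile as sf
from scipy.signal import welch
import matplotlib.pyplot as plt

def spectrum_db(path, start_s=30.0, dur_s=10.0):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)
    seg = x[int(start_s * fs): int((start_s + dur_s) * fs)]
    f, p = welch(seg, fs=fs, nperseg=8192)
    return f, 10 * np.log10(p + 1e-20)

f_ref, ref = spectrum_db("1. digital capture _ A.wav")            # reference
f_cap, cap = spectrum_db("second ADC analogue capture _ A.wav")   # placeholder name

delta = np.interp(f_ref, f_cap, cap) - ref         # capture minus reference, in dB
delta -= delta[np.argmin(np.abs(f_ref - 1000))]    # normalise at 1 kHz

plt.semilogx(f_ref[1:], delta[1:])                 # skip the DC bin
plt.xlabel("Frequency (Hz)")
plt.ylabel("Capture - reference (dB, re 1 kHz)")
plt.grid(True, which="both")
plt.show()
```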

Link to comment

Okay, now that Mani has confirmed he can hear something in these captures, I'll start looking again for anything that can distinguish them. First step: comparing 9._A and 10._B, the spectra for matching sections are, visually, perfect copies in Audacity.

Link to comment

Just had a more serious listen to clips 9 and 10 - not at the best level for picking things, just in Audacity with the clips multitracked, soloing each, back and forth ... and to my ears I can pick a difference over the laptop speakers. In particular, the quality of the backing piano - the B version sounds like they used a better instrument; A sounds a touch clangy and lacking in overtones, less "rich". Note: this is highly unoptimised playback, so it doesn't mean that on a top-class rig the positions won't be reversed - what matters is whether at least some difference can be discerned.

Link to comment

By upsampling and finer alignment I can get an excellent nominal null between 9 and 10 for a short section of the overall clip - but this tells me that the clock of the ADC is varying too much between the two takes; I'll have to apply a very, very slight speed adjustment to one track to improve the correlation over time.
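
The kind of speed adjustment I mean can be sketched like so: measure the offset between the two captures near the start and near the end, turn the change in offset into a tiny speed ratio, and resample one track by that ratio. File names, window positions and the sign convention are assumptions to be checked against the actual data:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def local_offset(x, y, start, win):
    """Relative offset in samples between x and y over a window starting at `start`."""
    c = fftconvolve(x[start:start + win], y[start:start + win][::-1])
    return int(np.argmax(c)) - (win - 1)

a, fs = sf.read("9._A.wav")
b, _  = sf.read("10._B.wav")
if a.ndim > 1: a = a[:, 0]
if b.ndim > 1: b = b[:, 0]

n   = min(len(a), len(b))
win = fs                                   # 1-second analysis windows
off_start = local_offset(a, b, win, win)
off_end   = local_offset(a, b, n - 2 * win, win)

drift = off_end - off_start                # samples of drift across the clip
span  = n - 3 * win                        # distance between the two windows
ratio = 1.0 + drift / span                 # speed correction for b (check the sign
                                           # against your own data before applying)

# Crude fractional resampling of b by linear interpolation; adequate for a
# correction of a few parts per million.
t_new = np.arange(int(len(b) / ratio)) * ratio
b_adj = np.interp(t_new, np.arange(len(b)), b)
print(f"drift = {drift} samples over {span} samples -> speed ratio {ratio:.9f}")
```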

Link to comment

Silly bugger me ... I was encouraged by Dennis getting somewhere with DiffMaker, so foolishly gave it a try - which I hadn't done up to now. Well, I was immediately reminded of what an appalling app citizen it is - it threw my Windows 8.1 laptop into a frenzy of memory and disk thrashing, which effectively froze the machine; it was working, on DiffMaker, but I couldn't see what was going on - impossible to bring up Task Manager. It finally proclaimed that it couldn't digest the tracks, but kept on grinding anyway. Finally, it crashed - but then getting it to really quit somehow caused the Firefox browser to crash as well ... will I persevere with it, trying to work out how to cajole it and tickle it under the chin, to get anything? ... hmmm, I think not ...

Link to comment

Very good to see Dennis and Peter working on this as well! Something of interest: I earlier manually got the tracks to reach a deep null, for about 5 seconds towards the middle, and I just spent a little while listening to that section of the null in comparison to the B track. The one thing that faintly comes through is, guess what, the sibilance of the singer ^_^ - now, I may find that disappears as I improve the correlation over longer stretches of the track - to be continued ...
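
For anyone wanting to audition a null residual the same way, a rough recipe - assuming the tracks have already been aligned (placeholder file names, and the section position is a guess):

```python
import numpy as np
import soundfile as sf

a, fs = sf.read("9._A_aligned.wav")        # already time-aligned versions (placeholder names)
b, _  = sf.read("10._B_aligned.wav")
if a.ndim > 1: a = a[:, 0]
if b.ndim > 1: b = b[:, 0]

start, stop = int(60 * fs), int(65 * fs)   # the ~5 s stretch that nulls well (guessed position)
resid = a[start:stop] - b[start:stop]

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

depth = rms_db(a[start:stop]) - rms_db(resid)
print(f"section level {rms_db(a[start:stop]):.1f} dBFS, residual {rms_db(resid):.1f} dBFS, "
      f"null depth {depth:.1f} dB")

# Write the residual out (normalised) so it can be auditioned in Audacity.
sf.write("null_residual.wav", resid / (np.max(np.abs(resid)) + 1e-20), fs)
```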

Link to comment

A lot more than that, IMO - the holy grail is to be able to identify the actual variations in the output of real-life equipment, which can be subtle but very important for people who take listening to music "seriously" - this has been an ongoing tussle, and it's worth getting one's hands dirty somewhat, trying to get a handle on things.

Link to comment
4 hours ago, pkane2001 said:

 Whichever delta has the lowest amount of noise/harmonic distortion would be the one that improves the SQ. Since the only difference between A and B was a different SFS setting, we'd know which SFS setting produces the best SQ. In theory...

 

 

I strongly suspect it's going to be very much less clearcut than that - the key artifacts may be buried in noise, and hence dismissed as irrelevant; but it's already very well understood that the human hearing system is remarkably adept at extracting information from within noise. This is usually expressed in a positive context, but that doesn't stop it also being true in a negative sense - bury a repeating, very irritating sound glitch in random noise, and people's hearing will register its presence.
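
A toy illustration of the objective side of this - not a model of hearing, just synthetic numbers showing that a small repeating glitch sitting well below a random noise floor is still present and recoverable, even though no single pass through the data shows it:

```python
import numpy as np

rng = np.random.default_rng(0)
period, reps = 4096, 1000
glitch = np.zeros(period)
glitch[100:108] = 0.2                                  # small repeating click, ~14dB below the noise
signal = rng.normal(0.0, 1.0, period * reps) + np.tile(glitch, reps)

def contrast_db(frame):
    """Level of the glitch region relative to the rest of the frame."""
    g = np.sqrt(np.mean(frame[100:108] ** 2))
    rest = np.sqrt(np.mean(np.delete(frame, np.arange(100, 108)) ** 2))
    return 20 * np.log10(g / rest)

single   = signal[:period]                             # one pass: glitch is invisible
averaged = signal.reshape(reps, period).mean(axis=0)   # coherent average over the repeats

print(f"single pass: glitch region sits {contrast_db(single):+5.1f} dB relative to the noise")
print(f"averaged   : glitch region sits {contrast_db(averaged):+5.1f} dB relative to the noise")
```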

Link to comment
1 hour ago, STC said:

 

That's exactly my point. If the sound quality is good enough that you cannot tell the difference without a reference, then the importance of the difference is insignificant.

 

It is like distinguishing shades of green. How many can tell which is jade green and which is Persian green just by looking at the colours in a side-by-side comparison? At least here the difference is more pronounced than between the A and B of the analogue capture.

 

This is not how to approach the exercise. If there is a difference, and one version displays the qualities of distortion less, then you're going in the right direction - the goal is convincing sound, and this can only "pop out" if enough of all those "little things" are addressed.

 

I learnt what happens three decades ago - each seemingly tiny step contributes to the whole; it may take 10, 20, 30 steps, but it is the steady unravelling of the "bugs" that makes it occur ... I use exactly the same methods to advance the SQ today as I did 30 years ago - because they work ...

Link to comment
1 hour ago, STC said:

Unless you have the same setup that you had 30 years ago, you do not know for sure what really changed the SQ. You can only speculate. Most of the so-called difference can be attributed to other things. I may change a cable and perceive better sound, but I may have also changed my sitting position or speaker location, or cleaned the contacts along the way.

 

That reminds me of someone who specialized in tweaking a Quad amplifier. After three years of hard work he decided to do the modification to another, similar Quad. Guess what? The unmodified Quad sounded better. You need a reference.

 

That's the point of repeating an experiment, say by different researchers. If doing things using a certain procedure always achieves a particular result then you have a high confidence that "you're on to something".

 

What changed the SQ was locating issues - weaknesses in the equipment, and in the combination of the parts - just like a mechanic repairing a certain model of vehicle; various units come in with specific problems, all very different, but after repair they all perform "just like the reference" ... that's what's going on here.

 

The reference I use is one I have described often: completely disappearing speakers, all recordings coming to life, the ability to go to any sane volume level in complete comfort, etc, etc. Most systems are so far from this goal that one doesn't know where to start ... :P.

 

A friend of my local audio friend is caught up in the strange world most audiophiles inhabit - he has money to burn, and sometimes his system shows great promise; but the next time it's pretty well a mess again. He most certainly has no reference - he just uses the metric that the more expensive the gear he buys, the better it has to be, QED.

Link to comment
13 minutes ago, Audiophile Neuroscience said:

Science aligns with evidence, that's the way it is. Whether I or others prefer the evidence or not is immaterial. The evidence thus far is that Mani can hear a difference in bit-identical playback at a significance level of p=0.01. Unless anyone has evidence to the contrary, and not just 'noise'?

 

And I can hear the difference in the analogue captures of that event, as can Mani - I'm not interested in playing ABX games; if the difference is clearly audible, end of story. What is of interest now is locating the data in the tracks which is causing this.

Link to comment
3 hours ago, pkane2001 said:

 

 

I'm starting to get most of the functionality I want in my own diffmaker-like software.

 

...

 

Still lots of work to do, but I'm having fun! Something to play with as I listen to music ;)

 

 

Excellent work! I'll be very keen to give it a spin when you have some sort of usable version together, B|.

Link to comment

If this is helpful: the current set of analogue captures, A and B, have a drift of about 11 samples between them from beginning to end, at a 3 MHz sampling rate - the software will need to adjust for this accurately to achieve a good-quality null.
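
As a back-of-envelope for what "accurately adjust" means here - the 11 samples and 3 MHz are from the captures, the clip duration below is a guess purely for illustration:

```python
fs = 3_000_000          # upsampled rate of the captures, samples per second
drift_samples = 11      # end-to-end drift between A and B
duration_s = 60         # hypothetical clip length, for illustration only

total = fs * duration_s
ratio = 1 + drift_samples / total
print(f"speed correction ratio : {ratio:.10f} ({drift_samples / total * 1e6:.3f} ppm)")
print(f"misalignment if ignored: {drift_samples / fs * 1e6:.2f} microseconds by the end")
```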

Link to comment
