
Blue or red pill?



8 minutes ago, mansr said:

I'm willing...

 

Stop deflecting.

 

4 hours ago, mansr said:

I did, and there's potentially some skew there.

 

You say you've done the analysis. Where's your evidence for "potentially some skew" in the ABX? Put up or shut up.

 

Mani.

Main: SOtM sMS-200 -> Okto dac8PRO -> 6x Neurochrome 286 mono amps -> Tune Audio Anima horns + 2x Rotel RB-1590 amps -> 4 subs

Home Office: SOtM sMS-200 -> MOTU UltraLite-mk5 -> 6x Neurochrome 286 mono amps -> Impulse H2 speakers

Vinyl: Technics SP10 / London (Decca) Reference -> Trafomatic Luna -> RME ADI-2 Pro

Link to comment
7 hours ago, mansr said:

It's not necessary to be aware of it beforehand in order to be influenced by it.

 

Different keys do make slightly different clicks, but more importantly the number and timing of the clicks would have differed. Experiments have demonstrated the ability to recover typed information, typically passwords, from nothing more than the sound of the keyboard.

 

Yes, the CIA has done that - in some cases by using laser interferometry off an exterior window...

 

But if "keyboard clicks were audible through the closed door", there is a simple test for that, and you don't need to make another trip.
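For what it's worth, part of the quoted keystroke-eavesdropping point is easy to demonstrate: the number and timing of clicks can be pulled out of a recording with nothing more than an energy-threshold onset detector. A minimal sketch (the frame size, threshold, and minimum gap are arbitrary choices of mine, not taken from any of the experiments mentioned):

```python
import numpy as np

def click_onsets(audio, rate, threshold_db=-30.0, min_gap_s=0.05):
    """Return approximate onset times (seconds) of transient clicks
    in a mono signal, via a short-term energy threshold."""
    # Short-term RMS envelope over ~5 ms frames
    frame = max(1, int(0.005 * rate))
    n = len(audio) // frame
    env = np.sqrt(np.mean(audio[:n * frame].reshape(n, frame) ** 2, axis=1))
    # Envelope in dB relative to the loudest frame
    env_db = 20 * np.log10(np.maximum(env, 1e-12) / (np.max(env) + 1e-12))
    onsets = []
    last = -min_gap_s
    for i, level in enumerate(env_db):
        t = i * frame / rate
        # New onset: frame above threshold, and far enough from the last one
        if level > threshold_db and t - last >= min_gap_s:
            onsets.append(t)
            last = t
    return onsets
```

The inter-onset intervals are the "timing of the clicks" the quoted post refers to; the published attacks then match those intervals (and the click spectra) against typing models.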

Link to comment
7 hours ago, Audiophile Neuroscience said:

 

IMO that is not fair. You need to provide evidence of how and why the results are invalid, not speculation. I wholeheartedly agree that reproducing the results would add further strength but that does not invalidate the available evidence.

 

 

well, no

 

or I could say Hell NO!

 

the affirmative has the burden of proof - you know that from your reading in science

Link to comment
4 hours ago, PeterSt said:

 

Watch for the generally larger jumps in the music (file) at the "decimal" level. But you'd first need to get the comparison right, including everything I already have; this is a not-so-easy task for me as well.

 

By now I dare say with confidence that I am able to resurrect my analysis code, and that a. it can now work with 24 bits (it took me the whole day just to get to that level; it was 16 bits), and b. I can see a pattern in Mani's recording (a pattern in the Difference).

And, btw, c.: I can now compare two recordings *not* made under my own control (this is about alignment). So the capturing as such (see below, next to "Input") is no longer required; it now works with just "third party" files.
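PeterSt's code isn't shown, so purely as an illustration of the 24-bit step he describes: packed little-endian 24-bit PCM has to be sign-extended into a wider integer type before you can do arithmetic on it. One common NumPy idiom (the function name is mine):

```python
import numpy as np

def decode_pcm24(raw: bytes) -> np.ndarray:
    """Decode little-endian signed 24-bit PCM bytes into int32 samples."""
    b = np.frombuffer(raw, dtype=np.uint8)
    assert len(b) % 3 == 0, "24-bit PCM data must be a multiple of 3 bytes"
    b = b.reshape(-1, 3).astype(np.int32)
    # Assemble 24-bit little-endian values from the three bytes
    val = b[:, 0] | (b[:, 1] << 8) | (b[:, 2] << 16)
    # Sign-extend bit 23 into the full 32-bit word
    return (val ^ 0x800000) - 0x800000
```

The xor/subtract pair maps the unsigned 24-bit range [0, 2^24) onto signed [-2^23, 2^23), which is the usual sign-extension trick.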

 

[attached screenshot: XXAnalysis02 analysis output]

 

 

I'm starting to get most of the functionality I want in my own diffmaker-like software. I have it aligning simple sinewaves down to about 0.01 sample level.  Still need to do drift correction (it's already measuring drift), and I'm not happy with amplitude adjustment yet -- it's not as precise as I think it could be. But, getting there. Here are the original two waveforms, misaligned by 2.5 samples, one with added noise:

[attached image: the two test waveforms before alignment]

 

And here they are after phase and amplitude adjustments:

[attached image: the waveforms after phase and amplitude adjustment]

 

Still lots of work to do, but I'm having fun! Something to play with as I listen to music ;)

 

Link to comment
2 hours ago, pkane2001 said:

 

 

I'm starting to get most of the functionality I want in my own diffmaker-like software. I have it aligning simple sinewaves down to about 0.01 sample level.  Still need to do drift correction (it's already measuring drift), and I'm not happy with amplitude adjustment yet -- it's not as precise as I think it could be. But, getting there. Here are the original two waveforms, misaligned by 2.5 samples, one with added noise:

[attached image: the two test waveforms before alignment]

 

And here they are after phase and amplitude adjustments:

[attached image: the waveforms after phase and amplitude adjustment]

 

Still lots of work to do, but I'm having fun! Something to play with as I listen to music ;)

 

Very nice pkane.

 

I know I said Diffmaker was open source at one time. I was mistaken about that. It's freeware, available to the public. I don't suppose there would be any harm in contacting Bill Waslo, who wrote Diffmaker, if that would be helpful. Maybe he'd let you update his version and replace it. He gives it away, so you aren't costing him business.

 

A tool like Diffmaker with a more up-to-date and non-buggy implementation would be really nice. Over the last decade, lots of people have wanted Diffmaker, only to swear off it as too unreliable once they'd used it.

And always keep in mind: cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; it's nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

Link to comment
57 minutes ago, esldude said:

Very nice pkane.

 

I know I said Diffmaker was open source at one time. I was mistaken about that. It's freeware, available to the public. I don't suppose there would be any harm in contacting Bill Waslo, who wrote Diffmaker, if that would be helpful. Maybe he'd let you update his version and replace it. He gives it away, so you aren't costing him business.

 

A tool like Diffmaker with a more up-to-date and non-buggy implementation would be really nice. Over the last decade, lots of people have wanted Diffmaker, only to swear off it as too unreliable once they'd used it.

 

Thanks Dennis. I was hoping that diffmaker was open source, but since it wasn’t, I decided to start from scratch. Now I’m having too much fun writing it myself to try to fix someone else’s code :) 

 

If I get stuck or decide it’s too much trouble, I’ll try to reach out to Bill.

Link to comment
3 hours ago, pkane2001 said:

 

 

I'm starting to get most of the functionality I want in my own diffmaker-like software.

 

...

 

Still lots of work to do, but I'm having fun! Something to play with as I listen to music ;)

 

 

Excellent work! I'll be very keen to give it a spin when you have some sort of usable version together, B|.

Link to comment

If this is helpful: the current set of analogue captures, A and B, have a drift of about 11 samples from beginning to end between them, at a 3 MHz sampling rate - the software will need to be able to adjust for this accurately to achieve a good-quality null.

Link to comment
1 minute ago, fas42 said:

If this is helpful: the current set of analogue captures, A and B, have a drift of about 11 samples from beginning to end between them, at a 3 MHz sampling rate - the software will need to be able to adjust for this accurately to achieve a good-quality null.

 

Yeah, I'm measuring drift already and planning on doing a linear fit to remove it.  I also used non-linear (polynomial) drift modeling in the past, but so far I have no reason to think that this might be better (or necessary) for audio. But I'll definitely experiment with different drift removal techniques. It certainly helps a lot to have some test tracks to play with, other than the ones I generate myself! :)
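As a sketch of that linear-fit idea (not pkane2001's actual code): estimate the lag in a window at each end of the capture, fit a straight line through the two delay estimates, and resample one track onto the other's timebase. The window size and the integer-only lag estimator here are simplifications of mine:

```python
import numpy as np

def remove_linear_drift(ref, sig, win=4096):
    """Warp `sig` onto `ref`'s timebase, assuming clock drift that
    accumulates linearly over the capture."""
    def lag(a, b):
        # Integer-sample lag of b relative to a, via circular correlation
        xc = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), len(a))
        k = int(np.argmax(xc))
        return k - len(a) if k > len(a) // 2 else k

    n = min(len(ref), len(sig))
    d0 = -lag(ref[:win], sig[:win])              # delay near the start
    d1 = -lag(ref[n - win:n], sig[n - win:n])    # delay near the end
    # Linear model: the sig sample belonging at ref index i is i + delay(i)
    centers = np.array([win / 2, n - win / 2])
    slope, intercept = np.polyfit(centers, [d0, d1], 1)
    idx = np.arange(n) * (1 + slope) + intercept
    # Linear-interpolation resampling onto the warped index grid
    return np.interp(idx, np.arange(len(sig)), sig)
```

For the captures mentioned above (~11 samples of drift over the whole file), the fitted slope would be tiny, which is why a linear model is usually sufficient before trying polynomial drift terms.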

 

 

Link to comment
8 hours ago, Sonicularity said:

What causes confusion for me is the idea that detailed descriptions are provided by the listener about how the sound differs, which would correspond to someone being able to hear differences despite the relatively large gap in time between playback of the test samples. Yet this same information was lost in the first attempts using ABX. Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell. It just seems like a logical step in the process to understand what is happening. I come across as the bad guy to some when I think along these lines, though I really just want to learn what is going on.

 

 

 

Human hearing is remarkably adaptive, and very quick to learn - once you 'know' what the best quality of some piece of music is, your brain fills in the gaps, and every version sounds like the best one. Everyone experiences this type of thing with a poor-quality radio - listen to some music you're not familiar with and it sounds terrible; put on a favourite tune and you'll happily bop away to the sound ... your brain and memory add all the needed oomph ...

 

Unfortunately, human hearing is not like a really, really dumb animal that can be asked to sit, roll over, beg, etc., etc., ad nauseam, and keep playing the game on cue ... scientists get bugged by the fact that people are, well, people - and are not perfectly predictable, :).

Link to comment
8 hours ago, Sonicularity said:

What causes confusion for me is the idea that detailed descriptions are provided by the listener about how the sound differs, which would correspond to someone being able to hear differences despite the relatively large gap in time between playback of the test samples. Yet this same information was lost in the first attempts using ABX. Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell. It just seems like a logical step in the process to understand what is happening. I come across as the bad guy to some when I think along these lines, though I really just want to learn what is going on.

 

 

 

"Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell."

 

Any and all tests have varying degrees of capability and can be rated as good or bad tests as such. Better tests score higher on such things as reliability, validity, sensitivity, true/false positives/negatives, etc. Plain X-rays often 'miss' things seen on other scans, and vice versa. In this case I see no evidence that the test procedure created the differences, providing a "tell". Importantly, the very same test (ABX) was applied to all samples, meaning it was a controlled variable. To whatever extent the test itself was influencing the outcome (as opposed to accurately sensing the outcome), it was the same for all samples.

Sound Minds Mind Sound

 

 

Link to comment
14 minutes ago, Audiophile Neuroscience said:

I have no issue with challenging the evidence, looking for uncontrolled variables and so forth, but thus far there appears, IMO, to be only rather far-fetched speculation about possible confounders, without a shred of evidence to support it.

 

 

Anyone who has experienced how doing "crazy things" affects the sound, like myself, wonders what all the fuss is about - the world is in a fine balance of an enormous number of factors determining everything, and human hearing is just a bit more sensitive to some non-obvious cause-and-effect chains. Some people prefer the world to be a simple, robotic place, like a well-oiled piece of software - but it's just a tiny bit more complicated than that ...

Link to comment
6 minutes ago, fas42 said:

 

Anyone who has experienced how doing "crazy things" affects the sound, like myself, wonders what all the fuss is about - the world is in a fine balance of an enormous number of factors determining everything, and human hearing is just a bit more sensitive to some non-obvious cause-and-effect chains. Some people prefer the world to be a simple, robotic place, like a well-oiled piece of software - but it's just a tiny bit more complicated than that ...

 

I would say you have touched upon the crux of the problem. Whenever people claim that doing "crazy things" affects the sound, other people ask for evidence - in this context, an ABX blind test. Mani did just that. The evidence stands.

Sound Minds Mind Sound

 

 

Link to comment
1 hour ago, Ralf11 said:

No, this is not the way science works! We have some evidence, but it is in no way at a scientific level.

 

I'll agree there appears to be a hearable difference (by some people).

 

 

 

Let's agree to disagree on how science works. What we have is scientific evidence at p = 0.01. That evidence was gained from a pre-agreed and reasonable test methodology. There is speculation about confounding issues, but no evidence to support the speculation. The currently available evidence stands. It is not proof; it is scientific evidence. Conclusion: "there appears to be a hearable difference (by some people)". Agreed.
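For context on what a p-value like 0.01 means for an ABX run: under the null hypothesis the listener guesses each trial with probability 1/2, and the one-sided p-value is the binomial tail probability. The thread doesn't restate Mani's exact trial counts, so the 9-of-10 figure below is purely an assumed example:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided exact binomial p-value: probability of getting at least
    `correct` out of `trials` ABX trials right by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Assumed example: 9 correct of 10 trials
# abx_p_value(9, 10) == 11 / 1024 ≈ 0.0107
```

A result at that level doesn't "prove" audibility, as discussed above; it says the guessing hypothesis explains the data poorly for that one run.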

Sound Minds Mind Sound

 

 

Link to comment
2 minutes ago, Ralf11 said:

As someone who published my first scientific paper in the mid-'70s, I'm not going to agree with all of the above, esp. the methodology comment. (And a good methods section would help.)

 

There is something tho. 

 

Enough to apply for a grant, not enough to publish...

 

My point is that you appear(ed) to confuse evidence with proof - but that's just my impression.

"Published my first scientific paper in the mid-'70s". So what is your science background... déjà vu?

Sound Minds Mind Sound

 

 

Link to comment
3 minutes ago, Ralf11 said:

proof is really a legal term - no scientist uses that word

 

Proof, definition: "a fact or piece of information that shows that something exists or is true."

As said, the evidence is not proof, but it is valid evidence, highly significant, and it stands.

 

 

3 minutes ago, Ralf11 said:

does the avatar help?

 

No. C'mon Ralph, out with it!

Sound Minds Mind Sound

 

 

Link to comment
