manisandher Posted April 25, 2018 Author Share Posted April 25, 2018 8 minutes ago, mansr said: I'm willing... Stop deflecting. 4 hours ago, mansr said: I did, and there's potentially some skew there. You say you've done the analysis. Where's your evidence for "potentially some skew" in the ABX? Put up or shut up. Mani. Main: SOtM sMS-200 -> Okto dac8PRO -> 6x Neurochrome 286 mono amps -> Tune Audio Anima horns + 2x Rotel RB-1590 amps -> 4 subs Home Office: SOtM sMS-200 -> MOTU UltraLite-mk5 -> 6x Neurochrome 286 mono amps -> Impulse H2 speakers Vinyl: Technics SP10 / London (Decca) Reference -> Trafomatic Luna -> RME ADI-2 Pro Link to comment
Ralf11 Posted April 25, 2018 Share Posted April 25, 2018 7 hours ago, mansr said: It's not necessary to be aware of it beforehand in order to be influenced by it. Different keys do make slightly different clicks, but more importantly the number and timing of the clicks would have differed. Experiments have demonstrated the ability to recover typed information, typically passwords, from nothing more than the sound of the keyboard. yes, the CIA has done that - in some cases by using laser interferometry off of an exterior window... but if "keyboard clicks were audible through the closed door" there is a simple test for that and you don't need to make another trip jabbr 1 Link to comment
Ralf11 Posted April 25, 2018 Share Posted April 25, 2018 7 hours ago, Audiophile Neuroscience said: IMO that is not fair. You need to provide evidence of how and why the results are invalid, not speculation. I wholeheartedly agree that reproducing the results would add further strength but that does not invalidate the available evidence. well, no or I could say Hell NO! the affirmative has the burden of proof - you know that from reading in science Link to comment
pkane2001 Posted April 25, 2018 Share Posted April 25, 2018 4 hours ago, PeterSt said: Watch for the general larger jumps in the music (file) at the "decimal" level. But you'd first need to have the comparison all right and including what I all have already, this is a not-so-easy task for me as well. By now I dare say with confidence that I am able to resurrect my analysis code and that a. it now can work with 24 bits (took me the whole day to only get to that level (it was 16 bits)) and b. that I can see a pattern in Mani's recording (a pattern for Difference). And btw c. : That I can now compare two recordings *not* under my own control (this is about alignment). So the capturing as such (see below next to "Input") is not in order and it is just "third party" files now. I'm starting to get most of the functionality I want in my own diffmaker-like software. I have it aligning simple sinewaves down to about 0.01 sample level. Still need to do drift correction (it's already measuring drift), and I'm not happy with amplitude adjustment yet -- it's not as precise as I think it could be. But, getting there. Here are the original two waveforms, misaligned by 2.5 samples, one with added noise: And here they are after phase and amplitude adjustments: Still lots of work to do, but I'm having fun! Something to play with as I listen to music manisandher 1 -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer Link to comment
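For anyone curious how alignment "down to about 0.01 sample level" is even possible, a standard approach is cross-correlation with parabolic interpolation of the peak. This is only a sketch of that general technique, not pkane's actual code; the function name and the 1 kHz test tone are my own, and the 2.5-sample offset matches the example in his post:

```python
import numpy as np

def subsample_delay(a, b):
    """Estimate the delay of b relative to a with sub-sample resolution.

    Cross-correlate, find the integer-lag peak, then refine it with a
    parabolic (quadratic) fit through the three points around the peak.
    """
    n = len(a)
    corr = np.correlate(b, a, mode="full")   # lags run from -(n-1) to n-1
    k = int(np.argmax(corr))
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        # Vertex of the parabola through the three samples around the peak
        frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    else:
        frac = 0.0
    return (k - (n - 1)) + frac

# Two sinewaves, one delayed by 2.5 samples (as in the post)
fs = 48000
t = np.arange(4096) / fs
a = np.sin(2 * np.pi * 1000 * t)
b = np.sin(2 * np.pi * 1000 * (t - 2.5 / fs))  # delayed copy
print(subsample_delay(a, b))  # ≈ 2.5
```

On pure tones this recovers the fractional offset to well within 0.01 samples; real music needs windowing and noise handling on top, which is presumably where the hard work in a DiffMaker-style tool lies.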
esldude Posted April 25, 2018 Share Posted April 25, 2018 2 hours ago, pkane2001 said: I'm starting to get most of the functionality I want in my own diffmaker-like software. I have it aligning simple sinewaves down to about 0.01 sample level. Still need to do drift correction (it's already measuring drift), and I'm not happy with amplitude adjustment yet -- it's not as precise as I think it could be. But, getting there. Here are the original two waveforms, misaligned by 2.5 samples, one with added noise: And here they are after phase and amplitude adjustments: Still lots of work to do, but I'm having fun! Something to play with as I listen to music Very nice pkane. I know I said Diffmaker was open source at one time. I was mistaken about that. It is freeware open to the public. I don't suppose there would be any harm in contacting Bill Waslo, who wrote Diffmaker, if that is helpful. Maybe he'd let you update his version and replace it. He gives it away, so you aren't costing him business. A tool like Diffmaker with a more up-to-date and non-buggy implementation would be really nice. Over the last decade lots of people have wanted Diffmaker, only to swear off it as too unreliable once they used it. And always keep in mind: Cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; it's nothing to be ashamed about, but it is something that affects our objective evaluation of reality. Link to comment
pkane2001 Posted April 25, 2018 Share Posted April 25, 2018 57 minutes ago, esldude said: Very nice pkane. I know I said Diffmaker was open source at one time. I was mistaken about that. It is freeware open to the public. I don't suppose there would be any harm in contacting Bill Waslo who wrote DIffmaker if that is helpful. Maybe he'd let you update his version and replace it. He gives it away so you aren't costing him business. A tool like Diffmaker with a more up to date and non-buggy implementation would be really nice. Over the last decade lots of people have wanted Diffmaker only upon using it swearing off of it as too unreliable. Thanks Dennis. I was hoping that diffmaker was open source, but since it wasn’t, I decided to start from scratch. Now I’m having too much fun writing it myself to try to fix someone else’s code If I get stuck or decide it’s too much trouble, I’ll try to reach out to Bill. -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer Link to comment
Popular Post fas42 Posted April 25, 2018 Popular Post Share Posted April 25, 2018 Note how the objectivists become remarkably subjective in their determination to defend their castle of beliefs ... manisandher, PeterSt and Summit 1 1 1 Link to comment
fas42 Posted April 25, 2018 Share Posted April 25, 2018 3 hours ago, pkane2001 said: I'm starting to get most of the functionality I want in my own diffmaker-like software. ... Still lots of work to do, but I'm having fun! Something to play with as I listen to music Excellent work! I'll be very keen to give it a spin when you have some sort of usable version together. Link to comment
fas42 Posted April 25, 2018 Share Posted April 25, 2018 If this is helpful, the current set of analogue captures, A and B, have a drift of about 11 samples from beginning to end between them, for a 3 MHz sampling rate - the software will need to be able to accurately adjust for this, to achieve a good quality null. Link to comment
Popular Post Audiophile Neuroscience Posted April 25, 2018 Popular Post Share Posted April 25, 2018 8 hours ago, mansr said: That's not how it works. There are many possible explanations for the outcome of the test. If you favour one of them, it's up to you to provide supporting evidence. Hand-waving does not constitute evidence. 4 hours ago, Ralf11 said: well, no or I could say Hell NO! the affirmative has the burden of proof - you know that from reading in science Yes Ralph, of course ("hell" or no "hell") I am familiar with the burden of proof and the various relevant philosophical arguments (Russell's teapot, Hitchens's razor, etc.). This is why I say: should you or mansr or others have evidence to support "many possible explanations", let alone any specific explanation to counter the available evidence, then present it. The current available evidence stands. It has nothing to do with "favouring" or "preferences". It is objective evidence. That's the way science works! I have no issue with challenging the evidence, looking for uncontrolled variables and so forth, but thus far there appears to be only, IMO, rather far-fetched speculation about possibilities, without a shred of evidence to support it. Bottom line: the tests as described by Mani appear to have been conducted in a pre-agreed manner and with a reasonably rigorous scientific methodology according to an ABX protocol. That evidence stands, at a significance level of p = 0.01. Should anyone have counter-claims, produce evidence. Until such time, the current available evidence stands - whatever your fancy! Summit and manisandher 2 Sound Minds Mind Sound Link to comment
pkane2001 Posted April 25, 2018 Share Posted April 25, 2018 1 minute ago, fas42 said: If this is helpful, the current set of analogue captures, A and B, have a drift of about 11 samples from beginning to end between them, for a 3 MHz sampling rate - the software will need to be able to accurately adjust for this, to achieve a good quality null. Yeah, I'm measuring drift already and planning on doing a linear fit to remove it. I also used non-linear (polynomial) drift modeling in the past, but so far I have no reason to think that this might be better (or necessary) for audio. But I'll definitely experiment with different drift removal techniques. It certainly helps a lot to have some test tracks to play with, other than the ones I generate myself! esldude 1 -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer Link to comment
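A minimal sketch of what removing linear drift by resampling looks like; this is my own illustration rather than pkane's implementation, and the linear interpolation here stands in for the proper band-limited resampler a real tool would use. The 11-sample figure comes from fas42's description of the two captures:

```python
import numpy as np

def remove_linear_drift(b, drift_samples):
    """Resample b to undo a timing error that grows linearly from 0 at
    the start of the capture to drift_samples at the end.
    Linear interpolation stands in for a band-limited resampler.
    """
    n = len(b)
    ratio = 1.0 + drift_samples / (n - 1)      # effective clock-rate error of b
    pos = np.clip(np.arange(n) * ratio, 0, n - 1)
    return np.interp(pos, np.arange(n), b)

# Demo: a reference tone, and a copy whose clock ends up 11 samples
# behind over the capture (the figure quoted for the A/B captures).
n = 200_000
t = np.arange(n)
ref = np.sin(2 * np.pi * 0.01 * t)             # 100 samples per cycle
drift = 11.0
slow = np.sin(2 * np.pi * 0.01 * t / (1.0 + drift / (n - 1)))

before = np.max(np.abs(slow - ref))
after = np.max(np.abs(remove_linear_drift(slow, drift)[:-20] - ref[:-20]))
print(before, after)   # residual after correction is orders of magnitude smaller
```

Without the correction the accumulated phase error near the end of the capture dominates the difference signal, which is why even a few samples of drift ruins a null test.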
fas42 Posted April 25, 2018 Share Posted April 25, 2018 8 hours ago, Sonicularity said: What causes confusion for me is the idea that detailed descriptions are provided by the listener about how the sound differs, which would correspond to someone being able to hear differences with the relatively large gap in time between playback of the test samples. Yet, this same information was lost in the first attempts using ABXXXXXXXXXX. Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell. It just seems like a logical step in the process to understand what is happening. I come across as the bad guy to some when I think along these lines; though, I really just want to learn what is going on. Human hearing is remarkably adaptive, and very quick to learn - once you 'know' what the best quality of some piece of music is, your brain fills the gaps, and they all sound like the best version. Everyone experiences this type of thing with a poor quality radio - listen to some music you're not familiar with; it sounds terrible - put on a favourite tune, and you'll happily bop away to the sound ... your brain and memory add all the needed oomph ... Unfortunately, human hearing is not like a really, really dumb animal that can be asked to sit, roll over, beg, etc., ad nauseam, and keep playing the game on cue ... scientists get bugged by the fact that people are like, well, people - and are not perfectly predictable. Link to comment
Audiophile Neuroscience Posted April 25, 2018 Share Posted April 25, 2018 8 hours ago, Sonicularity said: What causes confusion for me is the idea that detailed descriptions are provided by the listener about how the sound differs, which would correspond to someone being able to hear differences with the relatively large gap in time between playback of the test samples. Yet, this same information was lost in the first attempts using ABXXXXXXXXXX. Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell. It just seems like a logical step in the process to understand what is happening. I come across as the bad guy to some when I think along these lines; though, I really just want to learn what is going on. "Immediately I suspect the test procedure is somehow responsible for the differences, creating some tell." Any and all tests have varying degrees of capability and can be rated as good or bad tests as such. Better tests score higher in such things as reliability, validity, sensitivity, true/false positives/negatives, etc. Plain X-rays often 'miss' things seen on other scans, and vice versa. In this case I see no evidence that the test procedure created the differences, providing a "tell". Importantly also, the very same test (ABX) was applied to the samples, meaning it was a controlled variable. To whatever extent the test itself was influencing the outcome (as opposed to accurately sensing the outcome), it was the same for all samples. Sound Minds Mind Sound Link to comment
fas42 Posted April 25, 2018 Share Posted April 25, 2018 14 minutes ago, Audiophile Neuroscience said: I have no issue with challenging the evidence, looking for uncontrolled variables and so fourth but thus far there appears only IMO rather far fetched speculation of maybe potential possibilities without a shred of evidence to support it. Anyone who has experienced how doing "crazy things" affects the sound, like myself, wonders what all the fuss is about - the world is in a fine balance of an enormous number of factors determining everything, and human hearing is just a bit more sensitive to some non-obvious cause and effect chains. Some people prefer the world to be a simple, robotic place, like a well-oiled piece of software - but it's just a tiny bit more complicated than that ... Link to comment
Audiophile Neuroscience Posted April 25, 2018 Share Posted April 25, 2018 6 minutes ago, fas42 said: Anyone who has experienced how doing "crazy things" affects the sound, like myself, wonders what all the fuss is about - the world is in a fine balance of an enormous number of factors determining everything, and human hearing is just a bit more more senstive to some non-obvious cause and effect chains. Some people prefer the world to be a simple, robotic place; like a well-oiled piece of software - but it 's just a tiny bit more complicated than that ... I would say you have touched upon the crux of the problem. Whenever people claim "how doing "crazy things" affects the sound", other people ask for evidence. In this context an ABX blind test. Mani did just that. The evidence stands. Sound Minds Mind Sound Link to comment
Audiophile Neuroscience Posted April 26, 2018 Share Posted April 26, 2018 8 hours ago, mansr said: I'm willing to set up a more rigorous test. Are you (or anyone else) willing to take it? Sound Minds Mind Sound Link to comment
Ralf11 Posted April 26, 2018 Share Posted April 26, 2018 No, this is not the way science works! We have some evidence, but it is in no way at a scientific level. I'll agree there appears to be a hearable difference (by some people). Link to comment
fas42 Posted April 26, 2018 Share Posted April 26, 2018 3 hours ago, pkane2001 said: It certainly helps a lot to have some test tracks to play with, other than the ones I generate myself! This is an excellent resource, for test data: https://www.gearslutz.com/board/showpost.php?p=13240356&postcount=1617. Not all the links still work, but there's plenty there to play with. pkane2001 1 Link to comment
Popular Post PeterSt Posted April 26, 2018 Popular Post Share Posted April 26, 2018 I know this is too quick and dirty but say it is a teaser. Try to open the below on a monitor of 3320+ pixels wide, or two next to each other, 1680+ each. Size is 3311x1006. Try to look at it at full size, as the dots shown take into account the monitor's resolution. For the general idea it is not thaaat important but when you want to pick nits, then it is. Literally. This is the difference between the #15 and #16 files. #15 is the reference, #16 is shown as the difference against that. Horizontally this spans just over 1ms, see the scale at the top (we're at 6.1 secs here). Vertically it is sample values (the middle line being 0), but divided by 256 because otherwise it goes off screen. So should you be able to count a span of 100 pixels vertically, this represents a value of 25600 out of the 16 million, assuming that Mani indeed recorded at -0dBFS and in 24 bits. This is just one (1ms) part of the file of 28000ms (28 seconds) and you can stare at it for 20 minutes and still discover new things. Do notice that while a part you see here will be repeated elsewhere - say 2 seconds later - e.g. 5 ms down the line this pattern changes (not shown but trust me). You may like to concentrate on the first larger peak above zero, which emerges at "096" on the second (sub)scale (this is at second 6.099, the "100" being 6.100). You will see quite similar peaks every ~0.043 seconds, the second one appearing just before the 140 mark, the 3rd just beyond 182. You can see from the rest of the plot that indeed this is a repeating pattern all over. The band of "noise" is to be regarded as the real environmental noise, not related to the subject. Anyway you can see that the width of the band of noise is quite similar all over (you could calculate the dB of it by measuring the width of the band in pixels, multiplying by 256, dividing by 16 million and converting to dB). 
The orange graph is experimental and it heads towards identifying the patterns better. For example, our three peaks come along with 7, 8 and 7 downward excursions respectively, and with that you may identify them as "almost equal". Should this pattern occur again 5 seconds further down the line, the orange graph helps identify it. Because a pattern like the one we see is not happening out of the blue, in Mani's system the 0.043 second interval should be recognizable one way or the other. Now, since the only parameter changed was the Split File Size (SFS) and we know that the one was set at 200 and the other at 0.1 (which numbers are without unit except for "SFS" itself), we can aim for the SFS of 0.1 interval to see what causes it. This, with the notice that the SFS of 200 will imply processing maybe once per minute. Now, if Mani would again play something of the same bit depth and sampling rate and with the same further settings (and SFS at 0.1 !), including the same upsampling rate and filtering, his logging may show something of the 0.043 second. I can not guarantee this because I can not do over here what Mani does behind his two closed doors, but he can put up the X3 and X3PB log files of this (not XX). I reckon that the 0.043 is a sub-happening in the midst of the split file part loading itself, because 0.043 seconds will be too short for the implied chunk reading, which may happen at an interval of say 20 times per second at this upsampled rate of 192000. Don't worry about these numbers, but this is the way to approach a thing like this, if we want to know what happens in the first place. Peter acg, manisandher and semente 2 1 Lush^3-e Lush^2 Blaxius^2.5 Ethernet^3 HDMI^2 XLR^2 XXHighEnd (developer) Phasure NOS1 24/768 Async USB DAC (manufacturer) Phasure Mach III Audio PC with Linear PSU (manufacturer) Orelino & Orelo MKII Speakers (designer/supplier) Link to comment
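Peter's recipe for converting the plotted noise-band width into dB can be written out directly. A small sketch under the assumptions stated in his post: the plot values were divided by 256 before drawing, and full scale for 24-bit samples is 2**24 counts (the "16 million"):

```python
import math

def band_width_to_db(pixels, scale=256, full_scale=2**24):
    """Convert a band width measured in plot pixels to dB re full scale:
    undo the /256 plot scaling, then take the ratio to 2**24 counts."""
    counts = pixels * scale
    return 20 * math.log10(counts / full_scale)

# A 100-pixel band, as in the "count a span of 100 pixels" example:
print(round(band_width_to_db(100), 1))  # -56.3
```

So the 100-pixel example he gives corresponds to roughly -56 dB relative to full scale, which gives a feel for the level of the structures visible in the difference plot.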
Audiophile Neuroscience Posted April 26, 2018 Share Posted April 26, 2018 1 hour ago, Ralf11 said: No, this not the way science works! We have some evidence, but it is no way at scientific level. I'll agree there appears to be a hearable difference (by some people). Let's agree to disagree on how science works. What we have is scientific evidence at p = 0.01. That evidence was gained from a pre-agreed and reasonable test methodology. There is speculation as to confounding issues, but no evidence to support the speculation. The current available evidence stands. It is not proof. It is scientific evidence. Conclusion: "there appears to be a hearable difference (by some people)". Agreed. Sound Minds Mind Sound Link to comment
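For reference, the significance level being argued about comes from the one-sided binomial test: the probability of scoring at least that many correct ABX trials by pure guessing (p = 0.5 per trial). A quick sketch; the 9-of-10 figures below are just an illustration, since the thread doesn't restate Mani's exact trial counts:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the chance of getting at least
    `correct` right out of `trials` ABX trials by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# e.g. 9 correct out of 10 trials:
print(abx_p_value(9, 10))   # 11/1024 ≈ 0.0107
```

This is why "p = 0.01" is shorthand for "about a 1-in-100 chance of this score arising from guessing", not a statement about what caused the audible difference.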
Ralf11 Posted April 26, 2018 Share Posted April 26, 2018 As someone who published my first scientific paper in the mid-'70s, I'm not going to agree with all of the above, esp. the methodology comment. (And a good methods section would help.) There is something tho. Enough to apply for a grant, not enough to publish... mansr 1 Link to comment
Audiophile Neuroscience Posted April 26, 2018 Share Posted April 26, 2018 2 minutes ago, Ralf11 said: As someone who published my first scientific paper in the mid-'70s I'm not going to agree with all the above, esp. the methodology comment. (and a good methods section would help) There is something tho. Enough to apply for a grant, not enough to publish... My point is that you appear(ed) confused between evidence and proof, but that is just my impression. "published my first scientific paper in the mid-'70s". So what is your science background... déjà vu? Sound Minds Mind Sound Link to comment
Ralf11 Posted April 26, 2018 Share Posted April 26, 2018 Proof is really a legal term - no scientist uses that word. Does the avatar help? Link to comment
Audiophile Neuroscience Posted April 26, 2018 Share Posted April 26, 2018 3 minutes ago, Ralf11 said: proof is really a legal term - no scientist uses that word proof definition: 1. a fact or piece of information that shows that something exists or is true: As said, the evidence is not proof, but it is valid evidence, highly significant, and it stands. 3 minutes ago, Ralf11 said: does the avatar help? No. C'mon Ralph, out with it! Sound Minds Mind Sound Link to comment