About knickerhawk


  1. He mostly used the Type 55 P/N material. He claimed that most of his best work from the '50s on was done using the Polaroid system. A particular favorite of his from the Polaroid era is El Capitan, Winter Sunrise from 1968: https://shop.anseladams.com/v/vspfiles/photos/1901029-2.jpg
  2. I was assuming (perhaps wrongly so) that Rankin contributes articles to Stereophile. If that's the case, he's almost certainly subject to some kind of contract regarding the material he submits for publication and how he holds himself out to the public as a representative of the publication. Thus, I'd be surprised if his contractual relationship with Stereophile was so narrowly drawn that it would exclude content published in the User Comments section, but I'm making no more than an educated guess. If he's just a vendor that has a familiar relationship to the Stereophile crew but otherwise doesn't actually formally submit content for publication by Stereophile, then your qualification is valid (although it doesn't fundamentally change things for anyone who copies content from the site illegally).
  3. You need to step back and consider the two provisions and the reason for the distinction between "Content" and "User Content." Obviously, the TOU carves out User Content from other content because it is content supplied by end users/consumers of the site and for which they retain copyright (albeit with the grant of an unrestricted license to the site). All other content is provided by the site's owner either through work for hire or other contractual arrangement with contributors. In that regard, Rankin is providing "other content" per the definition of "Content," and the fact that this particular "other content" was posted in the comments section doesn't change its source, the working relationship between Rankin and Stereophile, or the contractual relationship between end users and the site itself. Again, this is a separate consideration from the copyright question. ALL content that appears on the site (defined as "Content" or "User Content" or "other material") is copyrighted. The only question is WHO can successfully bring an infringement claim against a third party. As between Rankin and Stereophile/TEN, one or possibly both would be able to bring an infringement claim with respect to whatever originated from Rankin. For purposes of the copyright infringement claim it matters not at all whether the copied text appears in the form of an article, a comment to an article, or elsewhere on the site.
  4. No, that is not the "crucial point." As a "user" of the Stereophile site, Brinkman Ship agreed to the terms of use (TOU). The TOU expressly prohibits republication of any content that appears on the site, which of course includes the comments. As between Brinkman Ship and the publisher of the site (TEN/Stereophile), a contract has been formed, and Brinkman Ship has violated the terms of the contract. As a matter of contract law TEN/Stereophile has a cause of action. This is a separate issue from the license between the publisher and the creator of the content and whether it is "exclusive" or otherwise. TEN, as the non-exclusive publisher, may not have standing to enforce a copyright infringement claim against Brinkman Ship. However, that uncertainty does NOT grant Brinkman Ship (or anyone else) carte blanche to engage in infringement. It simply means that the copyright holder (presumably Rankin) or his legal agent would need to bring the action. Regardless... it's still copyright infringement and a no-no. And speaking of agency, there is probably some kind of agency relationship between Rankin and Stereophile based on the terms of a contract we are not privy to. It might even grant exclusivity or an outright transfer of copyright of any IP produced by Rankin in his capacity as a writer/representative of Stereophile. If so, Stereophile would indeed have standing to bring an infringement claim in addition to its contract claim. The bottom line here, folks, is don't quote whole comments (or other works). Snippets and links are the way to go to avoid problems for yourself and your friendly host site (Computer Audiophile).
  5. Yes, beating a retreat might be the better part of valor for you here.
  6. In your first post of the other thread you stated: "I had my host select albums play MQA streams from Tidal, then the same tracks from his NAS without telling me which was which, and we turned off the display of the DAC. We also muted the first 3 seconds of every track. We repeated the process with me selecting tracks from Tidal and his NAS." So which way is it? Either you streamed the album from Tidal and somehow it has now mysteriously disappeared from Tidal (by the way, something I haven't noticed with respect to any of the many Tidal MQA streams I've done over the past two months), OR you downloaded the MQA, forwarded or physically brought a copy to your friend's and had him upload it to his NAS, which is inconsistent with what you wrote in the other thread.
  7. Is the identity of the person in any way relevant to the appropriate answer? My personal answer is "No." What is your personal answer to the question?
  8. Question: What is the ethical standing of a poster in the MQA-related discussions here who first presents him or herself as objective, neutral or otherwise open to the arguments of both sides but who, in fact, is already decided and conceals a deeply partisan position? Should that person be ashamed of him/herself? Should that person be roundly condemned and then ignored, regardless of whether his position is aligned with one's own?
  9. History doesn't repeat itself, but it often rhymes...
  10. It is supposition, but a reasonable one if you have a (statistically large enough) population of samples falling into a normal distribution for those that vary by more than the .8 dB measurement limit in my hypothetical. Regardless, it ultimately doesn't matter, provided you have enough samples outside of the unknown cohort. With a large enough sampling outside of the unknown cohort and the preference results obtained from inside of it, you should be able to statistically determine the distribution of louder MQA and louder non-MQA tracks within the unknown cohort. In the unlikely case that the actual preference results from within the unknown cohort are inconsistent with the predicted normal distribution, you would need to further investigate the cause.
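The inference described in that post can be sketched as a toy simulation. All numbers here are hypothetical illustrations, not actual measurements: per-track loudness differences are assumed to be normally distributed around zero, and the .8 dB figure is the measurement limit from the hypothetical above.

```python
import random

random.seed(1)

# Hypothetical per-track loudness differences (MQA minus non-MQA, in dB),
# assumed normally distributed around zero.
diffs = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Differences beyond the .8 dB measurement limit form the "known" cohort;
# the rest are the "unknown" cohort where the meter can't resolve them.
known = [d for d in diffs if abs(d) > 0.8]
unknown = [d for d in diffs if abs(d) <= 0.8]

# Share of louder-MQA tracks among the measurable ones...
p_known = sum(d > 0 for d in known) / len(known)

# ...which, under the normality assumption, predicts the (unmeasurable)
# split inside the unknown cohort as well.
p_unknown = sum(d > 0 for d in unknown) / len(unknown)

print(round(p_known, 2), round(p_unknown, 2))  # both close to 0.5
```

The point of the sketch: the split of louder-MQA versus louder-non-MQA tracks observed in the measurable cohort lets you predict the split inside the unmeasurable cohort, which is what makes the preference comparison there interpretable.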
  11. Let's see if we can establish consensus about the testing scenario I outlined in my previous post before we wade into the details of what I did in my personal testing. Do you agree that the scenario I described is an example of how it is possible to obtain valid preference results from a blind A/B test even though you have not verified sound leveling to within .2 dB?
  12. That's interesting. I haven't detected that "tell" with my Bluesound streamer/DAC. In fact, run-in times can vary either way by a fair amount, and I suspect it's sometimes related to the differences in listed track times as well (i.e., maybe the track time differences aren't always signs of a different master being used, just different padding at the beginning/end of tracks?). For instance, check out the first track run-in time on Nik Bartsch's Ronin "Llyria" album. In the MQA version the track time is listed as 6:50 and in the CD version it's listed as 6:58. That difference is clearly at the beginning of the track. The MQA version starts the music immediately and the CD version starts with about 8 seconds of silence. The first time I launched the CD version it took so long that I mistakenly hit the play button again!
  13. Both you and esldude are demanding precision in my testing, but neither of you is being very precise in your criticism of it. You are broadly claiming that no listening test can be valid unless sound level matching within .2 dB is enforced, based on the fact that subjective preference can be influenced by as little as a .2 dB difference in playback of the same track. Therefore, we know that there is a danger zone between .2 dB and the normal human threshold of audibility that needs to be controlled for. You and esldude seem to be arguing that the only valid way to control for this subliminal zone of influence is to measure the sound levels of the A and B samples to within .2 dB accuracy. Anything less accurate than that is, as you put it, "simply invalid" or, as esldude put it, "fantasy." My contention is that you do not need that level of accuracy to obtain significant results that can prove listener preference is based on something other than sound level.

How? Let's consider what should be a statistically significant way of achieving valid listening results without accuracy to .2 dB. Say we start with a population of 125 tracks to be tested, with an MQA version and a non-MQA version of each track. We first want to toss out tracks with differences in loudness between the two formats that are audible to our human subject. One way to do this is to run a repeated random blind A/B test in which the subject is asked to pick the louder version. We can throw in a control with slight volume attenuation applied at random to confirm the accuracy with which our subject is listening for loudness. Let's say the subject can't detect sound level differences in 100 of the tracks. Now we need to determine whether there are any subliminal sound level differences within the remaining 100 tracks. Our problem is that we only have a sound level measuring methodology accurate to, let's say, .8 dB.

If it turns out that some of the tracks are louder on the MQA side, some are louder on the non-MQA side, and some are perhaps equal, then we're in luck. Let's say it breaks down nicely to 1/3 for each of these possibilities. Now we run the subject through the main blind A/B tests for identifying preference. Do you now see where this is going??? If preference is based primarily or exclusively on loudness, then the known louder tracks should be statistically preferred to the known quieter tracks and the preference within the unknown cohort will be mixed. In this scenario we can't conclude anything meaningful about preference when loudness is completely eliminated as a variable (i.e., reduced to no more than a .2 dB difference). However, if preference does not follow loudness to a statistically significant degree in the known-loudness cohort, then we know with statistical significance that something other than loudness is dominating preference - the point being that we've achieved this result even though we haven't level matched all the way to .2 dB.

Now, if we can't agree on the validity of the approach described above, there's no point in moving on to a discussion of how close my personal testing comes to being "valid" or merely "fantasy." I'll willingly slink away with my tail between my legs when you demonstrate the error in the test design described above.
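The proposed design can be sketched as a toy simulation. The cohort sizes and the two simulated listeners below are illustrative assumptions, not actual listening data: one listener's preference is driven purely by loudness, the other's by something else (modeled here, for simplicity, as always picking the MQA version).

```python
import random

random.seed(42)

# Hypothetical: 100 surviving tracks split roughly into thirds by
# measured loudness (accurate only to .8 dB, per the scenario above).
cohorts = ["mqa_louder"] * 33 + ["non_mqa_louder"] * 33 + ["unknown"] * 34
random.shuffle(cohorts)

def prefers_mqa(cohort, loudness_driven):
    """One blind trial: does the listener pick the MQA version of this track?"""
    if loudness_driven:
        if cohort == "mqa_louder":
            return True                   # louder version wins
        if cohort == "non_mqa_louder":
            return False
        return random.random() < 0.5      # sub-measurable cohort: coin flip
    return True                           # format-driven listener: always MQA

def preference_by_cohort(loudness_driven):
    """MQA-preference rate within each loudness cohort."""
    by_cohort = {}
    for cohort in cohorts:
        by_cohort.setdefault(cohort, []).append(
            prefers_mqa(cohort, loudness_driven))
    return {c: sum(picks) / len(picks) for c, picks in by_cohort.items()}

# Loudness-driven listener: preference tracks the known loudness cohorts
# (1.0 where MQA is louder, 0.0 where non-MQA is louder, ~0.5 elsewhere).
print(preference_by_cohort(True))

# Format-driven listener: preference is uniform across all three cohorts,
# even though levels were only verified to within .8 dB.
print(preference_by_cohort(False))
```

The two printouts illustrate the claimed distinction: if preference were loudness-driven it would split along the known cohorts, whereas a preference that ignores the known loudness cohorts points to some other cause, without ever level matching to .2 dB.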
  14. Please turn down your preach level (it's quite audible) and I'll turn down my sarcasm level, and then maybe we can have a useful discussion about what I actually wrote regarding the range of sound level differences picked up by the crude methodology I used and the fact that, even when the non-MQA version is measurably louder, it didn't adversely affect my ability to identify (and prefer) the MQA version. My premise is that - after having listened carefully to dozens of tracks at the same volume setting for each format - I have established a consistent ability to detect the MQA version (when blinded). I happen to prefer that MQA sound signature in my system, but preference really isn't the determinative issue here. Rather, it's the fact that I can consistently identify the MQA version. Now, consider that there are three possibilities here regarding sound levels:

1. All of the MQA tracks were louder.
2. All of the non-MQA tracks were louder.
3. Some MQA tracks were louder and some non-MQA tracks were louder.

The first option would explain both why I could detect a difference and why I preferred the MQA tracks. The second option would explain why I could detect a difference but wouldn't explain the preference (especially considering that the reasons for my preference are those usually associated with louder versions). This result would be interesting and confounding to testing expectations. The third option would show that my ability to detect (and prefer) the MQA versions is not correlated with sound level differences between the formats. It is simply incorrect for you to proclaim that the only way to obtain valid test results is by level matching. That is the case when dealing with a single sample or a homogeneous population of samples in which the loudness of one set is greater than the other and you do not know which is louder. It is not the case when dealing with a random sampling population. There is more than one way to skin this cat...
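Separately from the loudness question, a claim of consistent blind identification can be checked against chance with an exact binomial calculation. A minimal sketch follows; the 22-of-24 trial count is a hypothetical example, not a reported result from the posts above.

```python
from math import comb

def binomial_p_value(successes, trials, p=0.5):
    """One-sided exact binomial p-value: P(X >= successes) under guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical: 22 correct identifications in 24 blind trials.
p_val = binomial_p_value(22, 24)
print(f"{p_val:.6f}")  # 0.000018 - far below 0.05, so unlikely to be chance
```

This is the standard way to quantify "I can consistently identify the MQA version": count trials and correct calls, and compute how likely that hit rate would be under pure guessing.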