Pure Vinyl Club

    Digital Vinyl: Temporal Domain

Note: The following article contains information that has been deemed incorrect by leading digital audio engineers. I attempted to corroborate the findings of this article by asking several digital audio experts. I was unable to find anyone who could back up the statements made with any scientific data or theory. Consider the following article retracted.

     

I am leaving the text of this article up on CA because it has enabled a good discussion to take place. By leaving it up, people can read what was claimed and then read the follow-up arguments that prove it incorrect. Removing the article completely would only open up space for this to happen again, and again, and again.

     

     

I take full responsibility for publishing this article. I should have had a technical editor check it before publication. I apologize to the CA community for the error in judgement.

     

    - CC.

     

     

     


The Temporal Domain of the Signal, or: What Is More Important When Listening to Music, the Static or the Dynamic Characteristics of the Sound Signal?

     

Every time my audiophile friends who do not have an analog setup (a turntable) come to me and see huge piles of expensive, rare LPs, they are puzzled. They wonder how LP lovers can spend huge amounts of money on their "analog" hobby while putting up with so much inconvenience when listening to music. They say this way of listening is absolutely impractical in the 21st century. In addition, vinyl suffers from signal distortion and limitations in many technical respects.

     

In response, I always say the same thing in support of analog: it is mainly about the time-domain behaviour of the signal. We fans of analog audio are willing to accept these sacrifices and inconveniences in exchange for much better performance in the time aspect, the so-called dynamic characteristics. Static characteristics, those belonging to the spectral and dynamic domains (dynamic range, THD+N, frequency response, etc.), are certainly important for high-quality sound, but when it comes to listening to music in real time, in my opinion it is the dynamic characteristics that matter most.

     

     

Often, people react to my comments with skepticism. They say they are used to trusting technical information that can be measured and compared, and that what I say is very subjective and ephemeral.

     

Also, reading the comments here on CA, especially those connected with current topics such as MQA, I have noticed that some members react rather skeptically to arguments about MQA's improvements to time-domain characteristics. Some even question the very existence of such improvements.

     

Here it is shown that "High-resolution in temporal, spatial, spectral, and dynamic domains together determine the quality value of perceived music and sound, and that temporal resolution may be the most important domain perceptually". Temporal resolution is what I would like to briefly discuss with you.

     

There is a deeply rooted opinion that frequencies above 10 kHz, and even more so above 20 kHz, carry very little musical information. And yet research shows that, for example, transients from cymbals contain significant frequency components extending even above 60 kHz. A trumpet playing fortissimo has transient components reaching up to 40 kHz, and in the case of the violin, momentary components as high as 100 kHz occur.
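As a side illustration of my own (not data from the cited research), here is a minimal sketch, assuming NumPy is installed, that builds a synthetic click-like transient sampled at 192 kHz and measures how much of its energy lies above 20 kHz:

# A minimal sketch (my own illustration, not from the article): a synthetic,
# cymbal-like noise burst sampled at 192 kHz, analysed with an FFT to see
# what fraction of its energy sits above 20 kHz.
import numpy as np

fs = 192_000                        # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal

rng = np.random.default_rng(0)
transient = rng.standard_normal(t.size) * np.exp(-t / 0.005)  # fast decay

spectrum = np.abs(np.fft.rfft(transient)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

share_above_20k = spectrum[freqs > 20_000].sum() / spectrum.sum()
print(f"Energy above 20 kHz: {100 * share_above_20k:.1f} % of total")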

     

As you can see, quite a lot of musical information is contained in frequencies above 20 kHz. Of course, a question immediately arises: are we able to hear it? To answer this, it is worth mentioning some rarely discussed issues. The commonly cited audibility limit of 20 kHz is derived from conventional hearing tests, which are based on the audibility of simple sounds. But there is an alternative way of looking at the issue from the more "dynamic" side: the temporal resolution of the ear, rather than the "static" harmonic content and audibility of pure sinusoidal tones.

     

This may be more appropriate for music signals than the perspective of simple tones. Real music signals have a very complex structure resulting from the superposition of the attacks and decays of many instruments. More importantly, their frequency spectrum differs greatly between the short period of the initial attack, the rise of the sound, e.g. as a result of plucking a string or striking a piano key, and the subsequent, much longer decay.

     

There is a large group of instruments characterized by a very "transient", dynamic initial attack phase. The xylophone, trumpet, cymbals and a struck drum reach dynamic levels between 120 and 130 dB within 10 ms or less. One thing we can say for sure: CD-quality samples, spaced 22.7 microseconds apart, have no chance of capturing attack-phase events of musical instruments that fall within half the distance between two consecutive samples.
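For reference, the 22.7-microsecond figure is simply the spacing between consecutive samples at the CD rate; a quick back-of-the-envelope check of my own:

# Back-of-the-envelope check (mine, not the author's): the time between
# consecutive samples at common PCM sample rates.
for fs in (44_100, 96_000, 192_000):
    print(f"{fs:>7} Hz  ->  {1e6 / fs:5.1f} microseconds per sample")
# 44 100 Hz -> 22.7 us, 96 000 Hz -> 10.4 us, 192 000 Hz -> 5.2 us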

     

And the attack phase is very important for how we perceive audio. In experiments in which recordings of wind instruments were spliced so that the short attack phase of one instrument was combined with the longer decay of another, listeners identified the sound as that of the instrument supplying the short attack fragment, not the longer decay.

     

     

    image2.png

     

The waveform of a cymbal struck with a stick. The rise of the sound is nearly instantaneous, followed by a long sustain of a rather uniform nature. - from highfidelity.pl

     

     

When viewed from the perspective of the hearing mechanism, you can find information indicating that signals of a pulsing character (i.e., transients generally) activate significantly larger areas of hair cells than pure sinusoidal tones (which are almost nonexistent in nature). In the case of pulses, the temporal resolution of the human ear may be as fine as 10 microseconds, corresponding to a frequency of 100 kHz.

     

This information is also echoed by recognized practitioners. Art Dudley of Stereophile magazine, in an interesting interview from The Editors series, is of the opinion that the Nyquist criterion does not hold up once decimation and reconstruction filters are working on complex music signals. In his opinion, two samples may suffice to describe a single frequency, but they do not provide sufficient sample density to describe the speed at which the signal rises or falls, which is crucial to distinguishing music from ordinary sound.

     

In the context of the above, I would also like to quote an excerpt from my correspondence with Dr. Rob Robinson:

     

    "My thoughts are that with extended frequency response you are not capturing "audible" frequencies but rather preserving the critical time relationships in the music at all frequencies. Human hearing might not be able to "detect" sounds above 15 - 20 kHz or so, but on the other hand hearing, in conjunction with the brain, is very sensitive to temporal information. It's been reported that the human auditory system is capable of discerning temporal differences of tens of microseconds or less (and note, at 192 kHz the time between samples is 5 microseconds). This temporal discrimination is the reason we are able to accurately discern directional / spatial cues. Hearing evolved so that the location of threats, e.g., the cougar about to pounce, could be determined accurately, as key to survival. The spatial information comes not only from amplitude, but the time difference between the same sound arriving at each ear. And the more sensitive hearing is to temporal information, the more accurately that spatial cues can be located.

     

    A CD format brickwall filter will affect time relationships, part of the reason that CD format digital audio may sound less "natural" than analog (or live sound). Preserving temporal information is key to preserving lifelike sound and imaging. While all digital audio will affect temporal information, the influence diminishes the higher the sample rate, because the antialiasing and reconstruction filters are operating at ultrasonic frequencies. So, by using higher sample rates, even though we may be recording sounds that are inaudible, we have better preservation of the temporal information in the signal, which conveys a more lifelike presentation of the music. Besides using a high sample rate to capture the signal, we also have the ultra wide 5,000 kHz bandwidth (five thousand kilohertz, as contrasted with "just" 20 kilohertz as the generally accepted audible upper frequency limit) of the Seta preamplifier which again faithfully preserves temporal relationships in the music signal (internally, the front end circuitry has a risetime of less than 50 nanoseconds)." - Dr. Rob Robinson
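As an aside on the interaural-time-difference point in the quote above, here is a minimal sketch of my own (not Dr. Robinson's), using the Woodworth spherical-head approximation with an assumed head radius, to compare typical ITDs with the sample periods mentioned:

# A minimal sketch (my own illustration): interaural time difference (ITD)
# for a far-field source, using the Woodworth spherical-head approximation.
# The head radius and azimuth angles below are assumed values.
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD in seconds for a far-field source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (5, 30, 90):
    print(f"azimuth {az:>2} deg: ITD ~ {woodworth_itd(az) * 1e6:.0f} microseconds")

# For comparison: one sample period is ~22.7 us at 44.1 kHz and ~5.2 us at 192 kHz.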

     

If we consider only the typical technical parameters of audio, mainly bandwidth and dynamics (signal-to-noise ratio), and set aside the variables associated with the physiology of hearing, we could easily conclude that audiophile devices should not differ from one another, much less stand out sonically compared with mass-market audio devices.

     

And yet, there are people willing to pay much higher prices for equipment whose typical specifications are often similar to, or even slightly worse than, those of cheaper mass-market devices.

     

Most importantly, in many cases audiophiles agree on the description of the main attributes of a given device's sound, albeit expressed in a particular descriptive vocabulary rather than in strict technical parameters.

     

This leads to a conclusion that is difficult to challenge: if certain audiophile characteristics are consistently perceived by a large number of people, there is a good chance that specific physical phenomena stand behind them, even though their nature may be complicated and difficult to express in simple numerical parameters such as dynamic range or frequency response.

     

What might these phenomena be? If the key to the mystery lies neither in the frequency-domain parameters (frequency response) nor in dynamics (low-level noise), then a single area remains: phase, or the timing aspects of the sound. These are in fact the most fundamental parameters of the sound signal, because they underlie its very creation, i.e. what the sound wave actually looks like in the time domain. The question is how closely the reproduced waveform corresponds to the wave that reached the microphone when the recording was made.

     

The nuances of tonal color are shaped, to the greatest extent, by the waveform characteristic of each instrument. It is not just a matter of analyzing the so-called harmonic content, but rather of the dynamic aspects, chiefly the so-called attacks, the rise of the sound at the moment of its creation. It is not difficult to imagine that the rise in amplitude will be quite different for wind, bowed and plucked instruments. It is the very fine structure of the transients which, over a very short period of time, provides the bulk of the information about a new musical tone's color and texture. Studies show that the human ear is most sensitive to the initial part of the pulse of a new musical sound.

     

Any disturbance or contamination of this sensitive time structure leads to a noticeable loss of sound quality from the perspective of listeners sensitive to audiophile aspects, such as the faithful transmission of all the nuances of instrumental color.

     

In other words, it comes down to the time domain of the signal, the phase and timing aspects of the sound. These are the most fundamental parameters of the sound signal, because they underlie its creation and thus determine what the sound wave actually looks like in the time domain.

     

So, one of the main advantages of vinyl is the LP's lack of restrictions on temporal resolution. One of the key challenges for us at the Pure Vinyl Club was to find a way, a technology, a recording method and equipment, to preserve the maximum temporal resolution of the LP while recording it digitally. This does not mean that we were prepared to compromise or neglect the other characteristics that are also important for the sound.

     

Paweł Piwowarski, in his article "PLIKI HI-RES - niezbędny krok do nirwany czy nadmiarowy gadżet?" ("Hi-res files - a necessary step to nirvana or a redundant gadget?") in the October 2016 issue of HighFidelity.pl, to which I referred above, noted that "The trumpet playing fortissimo contains transients of 40 kHz". I invite you to watch this little video using our LP rip, which clearly shows that transients of the trombone can reach beyond 50 kHz, and the trumpet reaches almost 70 kHz!
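For readers who want to try this kind of spectrogram analysis themselves, here is a minimal sketch of my own, under the assumption that the rip is available as a 192 kHz file (the file name below is hypothetical) and that soundfile, SciPy and matplotlib are installed:

# A minimal sketch (my own, not the tool used for the video): plot a
# spectrogram of a 192 kHz rip and mark the 20 kHz line to make any
# ultrasonic content visible.
import numpy as np
import soundfile as sf
from scipy import signal
import matplotlib.pyplot as plt

audio, fs = sf.read("trumpet_rip_192k.flac")      # hypothetical file name
mono = audio.mean(axis=1) if audio.ndim > 1 else audio

f, t, Sxx = signal.spectrogram(mono, fs=fs, nperseg=4096, noverlap=2048)

plt.pcolormesh(t, f / 1000, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.axhline(20, color="w", linestyle="--", label="20 kHz")
plt.ylabel("Frequency [kHz]")
plt.xlabel("Time [s]")
plt.legend()
plt.show()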

     

Later, in one of the following articles, possibly titled "What is actually recorded on an LP", I will showcase many interesting videos and screenshots clearly showing that the transients of many musical instruments exceed the 40-50 kHz threshold, among them some unexpected ones (the contrabass and the sibilance of the human voice).

     

Also, many audiophiles have prejudices about the LP's dynamic range. Here is a screenshot of the DR measurement of a full album side (duration: 24:07, raw record).

     

     

     

    screnshot-DR.jpg
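For context on how figures like "DR13" in the list below come about, here is a deliberately simplified sketch of my own, and not a reimplementation of the official TT DR Offline Meter: it compares the peak level against the RMS of the loudest 3-second blocks (the file name is hypothetical; soundfile and NumPy are assumed):

# A simplified, DR-style estimate (my own sketch, not the official algorithm):
# peak level versus the RMS of the loudest 20 % of 3-second blocks.
import numpy as np
import soundfile as sf

audio, fs = sf.read("album_side_raw_192k.flac")   # hypothetical file name
mono = audio.mean(axis=1) if audio.ndim > 1 else audio

block = 3 * fs
n_blocks = len(mono) // block
rms = np.array([np.sqrt(np.mean(mono[i * block:(i + 1) * block] ** 2))
                for i in range(n_blocks)])

loudest = np.sort(rms)[-max(1, n_blocks // 5):]   # top 20 % of blocks
peak = np.max(np.abs(mono))

dr_estimate = 20 * np.log10(peak / np.sqrt(np.mean(loudest ** 2)))
print(f"DR-style estimate: {dr_estimate:.1f} dB")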

     

     

     

I will focus on these and other interesting LP aspects in more detail in the next articles of the Digital Vinyl series.

     

     

    Thank you,

     

    Igor

     

     

     

     

     

    Sound Samples

     

     

    Trippin (Kenny Drew – Trippin (1984, Japan) Promo WL, Baystate (RJL-8101))

    Official DR Value: DR13, Gain Output Levels (Pure Vinyl) – 14.00dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (103MB)

     

     

     

    Play Fiddle Play (Isao Suzuki Quartet + 1 – Blue City (1974, Japan) Three Blind Mice (TBM-24))

    Official DR Value: DR13, Gain Output Levels (Pure Vinyl) – 8.02dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (113MB)

     

     

     

    Make Someone Happy (Carmen McRae – Live At Sugar Hill San Francisco (1964, USA) Time Records (S/2104))

    Official DR Value: DR14, Gain Output Levels (Pure Vinyl) – 7.23dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (99MB)

     

     

     

    Early In The Morning (John Henry Barbee, 1963

    VA – The Best Of The Blues (Compilation) (RE 1973, West Germany) Storyville (671188))

    Official DR Value: DR14, Gain Output Levels (Pure Vinyl) – 10.63dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (75MB)

     

     

     

    La Cumparsita (Werner Müller And His Orchestra – Tango! (1967, USA) London Records (SP 44098))

    Official DR Value: DR11, Gain Output Levels (Pure Vinyl) – 0.00dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (104MB)

     

     

     

    Wild Is The Wind (The Dave Pike Quartet Featuring Bill Evans – Pike’s Peak 1962 (RE 1981, USA) Columbia (PC 37011))

    Official DR Value: DR12, Gain Output Levels (Pure Vinyl) – 10.31dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (109MB)

     

     

     

    People Are Strange (The Doors – 13 (1970, USA) Elektra (EKS-74079))

    Official DR Value: DR11, Gain Output Levels (Pure Vinyl) – 0.00dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (82MB)

     

     

     

    Let’s Groove (Earth, Wind and Fire – Raise! (1981, Japan) CBS/Sony (25AP 2210))

    Official DR Value: DR15, Gain Output Levels (Pure Vinyl) – 7.89dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (108MB)

     

     

     

    Smooth Operator (Sade – Smooth Operator (1984, Single, 45rpm, Japan) Epic (12・3P-581))

    Official DR Value: DR13, Gain Output Levels (Pure Vinyl) – 7.15dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (114MB)

     

     

     

    Fernando (Paul Mauriat – Feelings (1977, 45rpm, Japan) Philips (45S-14))

    Official DR Value: DR14, Gain Output Levels (Pure Vinyl) – 5.76dB, Edit “Click Repair” – yes

     

    192 kHz / 24 bit (112MB)

     

     

     





    User Feedback

    Recommended Comments



    In fact, this is a popular picture with a chart showing ANALOG (signal from the best microphone), as a reference standard.

     

    Igor, I would be very surprised if this "click" was indeed a naturally produced acoustic signal picked up by a mic. Can you think of any naturally produced "click" sound in the real world with frequency components above 96kHz (as is obvious from the problems in reconstruction using a 192kHz sample rate) that has no reverberation at all, like this one?


    Igor, I would be very surprised if this "click" was indeed a naturally produced acoustic signal picked up by a mic. Can you think of any naturally produced "click" sound in the real world with frequency components above 96kHz (as is obvious from the problems in reconstruction using a 192kHz sample rate) that has no reverberation at all, like this one?

     

    An electric discharge across a spark gap produces a rather sharp click, but even that will have some reverberation.


    Igor, I would be very surprised if this "click" was indeed a naturally produced acoustic signal picked up by a mic. Can you think of any naturally produced "click" sound in the real world with frequency components above 96kHz (as is obvious from the problems in reconstruction using a 192kHz sample rate) that has no reverberation at all, like this one?

     

    Jud, I actually wrote with irony about this "popular" picture. Look again at the picture - there the peak of the DSD is even higher than the peak ANALOG ;-))


    Jud, I actually wrote with irony about this "popular" picture. Look again at the picture - there the peak of the DSD is even higher than the peak ANALOG ;-))

     

    Thanks, Igor. Irony (which I love) is among the most difficult things to try to translate across even very minor language barriers.


    99.999% would really like help understanding how and why. It's over our heads.

     

    Like many things, it comes down to assumptions and math. The big assumption is that the human auditory system does not respond to frequencies >20kHz. If this is true, then the Shannon-Nyquist sampling theorem provides a transformation between the time and frequency domains ... the Fourier transform ... which equates a signal in the time domain to the Fourier transform in the frequency domain.

     

So here's the thing... when you say that the human auditory system can respond to *transients* at x, y, or z microseconds, nanoseconds, etc., you are *exactly* saying that it is responding to certain frequencies, and if the 44kHz sampling rate cannot sample these transients sufficiently for the auditory system to hear, then you are saying that the auditory system is responding to frequencies higher than 20 kHz. Again it comes down to your assumption. These two things are mathematically the same according to Shannon-Nyquist.

My own position is that, because the system is non-linear, the fact that people cannot generally hear an isolated tone > 20 kHz or even 16 kHz does not imply that the system cannot respond in some fashion to a tone or combination of tones > 20 kHz, i.e. a short transient. So the other big assumption in that argument is that the auditory system is fundamentally linear (but it is highly non-linear). Consider, as an example, IM distortion which, for an amplifier, might cause problems when fed a 100kHz or 1MHz signal. That's because of non-linearities in the electronics, and the human auditory system has its own non-linearities.

The old saying about: ASSUME => ASS-U-ME

So, the math is entirely correct. It's the assumptions behind the math that should be questioned.

     

    Is this understandable?

     

    Jonathan


    Like many things, it comes down to assumptions and math. The big assumption is that the human auditory system does not respond to frequencies >20kHz. If this is true, then the Shannon-Nyquist sampling theorem provides a transformation between the time and frequency domains ... the Fourier transform ... which equates a signal in the time domain to the Fourier transform in the frequency domain.

     

    So heres the thing... when you say that the human auditory system can respond to *transients* at x,y, or z microseconds, nanoseconds etc ... you are *exactly* saying that it is responding to certain frequencies, and if the 44kHz sampling rate cannot sample these transients sufficient for the auditory system to hear, then you are saying that the auditory system is responding to frequencies higher than 20 kHz. Again it comes down to your assumption. These two things are mathematically the same according to Shannon-Nyquist.

     

    My own position is that just because people cannot generally hear an isolated tone > 20 kHz or even 16 kHz, that, because the system is non-linear, that does not imply that the system cannot respond in some fashion to a tone or combination of tones > 20 kHz i.e. a short transient. So the other big assumption in that argument is that the auditory system is fundamentally linear (but it is highly non-linear). Consider, as an example, IM distortion which, for an amplifier, might cause problems when fed a 100kHz or 1MHz signal. That's because of non-linearities in the electronics, and the human auditory system has its own non-linearities.

     

    The old saying about: ASSUME => ASS-U-ME

     

    So, the math is entirely correct. Its the assumptions behind the math that should be questioned.

     

    Is this understandable?

     

    Jonathan

     

    I remember reading that different parts of the brain are used for processing transients vs. tones, but that doesn't answer whether the sensory end would be capable of detecting such fast-rise-time transients in the first place. I would guess there's got to be research, but I haven't been able to find anything right on point.

     

     

    Sent from my iPhone using Computer Audiophile


    I remember reading that different parts of the brain are used for processing transients vs. tones, but that doesn't answer whether the sensory end would be capable of detecting such fast-rise-time transients in the first place. I would guess there's got to be research, but I haven't been able to find anything right on point.

    Yeah I recall reading something at some point.

     

*By definition* however, if the auditory system responds to a transient too short for the 44 kHz sampling to model, then the auditory system is responding to a frequency higher than 22ish kHz. There can't be a response unless the information somehow gets into the system. The math is really that certain, and the only things which can be questioned are the assumptions.
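To make that point concrete, here is a minimal sketch of my own (not part of the comment above): a transient that is band-limited below the Nyquist frequency can be recovered between the samples by Whittaker-Shannon (sinc) interpolation, so any failure to capture a transient implies content above Nyquist.

# A minimal sketch (mine, not the commenter's): a band-limited "click"
# sampled at 44.1 kHz is reconstructed between the sample points by
# Whittaker-Shannon (sinc) interpolation; the residual error comes only from
# truncating the interpolation sum to a finite number of samples.
import numpy as np

fs = 44_100            # CD sample rate
fc = 19_000            # the transient is band-limited below fs / 2

n = np.arange(-2000, 2000)                 # sample indices around the click
x_samples = np.sinc(2 * fc * n / fs)       # samples of x(t) = sinc(2*fc*t)

def reconstruct(t):
    """Whittaker-Shannon reconstruction of x(t) from the stored samples."""
    return np.sum(x_samples * np.sinc(fs * t - n))

t_between = np.linspace(-5 / fs, 5 / fs, 101)      # points between samples
err = max(abs(reconstruct(t) - np.sinc(2 * fc * t)) for t in t_between)
print(f"max reconstruction error between samples: {err:.1e}")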


    Like many things, it comes down to assumptions and math. The big assumption is that the human auditory system does not respond to frequencies >20kHz. If this is true, then the Shannon-Nyquist sampling theorem provides a transformation between the time and frequency domains ... the Fourier transform ... which equates a signal in the time domain to the Fourier transform in the frequency domain.

     

    So heres the thing... when you say that the human auditory system can respond to *transients* at x,y, or z microseconds, nanoseconds etc ... you are *exactly* saying that it is responding to certain frequencies, and if the 44kHz sampling rate cannot sample these transients sufficient for the auditory system to hear, then you are saying that the auditory system is responding to frequencies higher than 20 kHz. Again it comes down to your assumption. These two things are mathematically the same according to Shannon-Nyquist.

     

    My own position is that just because people cannot generally hear an isolated tone > 20 kHz or even 16 kHz, that, because the system is non-linear, that does not imply that the system cannot respond in some fashion to a tone or combination of tones > 20 kHz i.e. a short transient. So the other big assumption in that argument is that the auditory system is fundamentally linear (but it is highly non-linear). Consider, as an example, IM distortion which, for an amplifier, might cause problems when fed a 100kHz or 1MHz signal. That's because of non-linearities in the electronics, and the human auditory system has its own non-linearities.

     

    The old saying about: ASSUME => ASS-U-ME

     

    So, the math is entirely correct. Its the assumptions behind the math that should be questioned.

     

    Is this understandable?

     

    Jonathan

     

    Those non-linear electronics that can display IM do have response to those frequencies even though non-linear. Even if non-linear, how can the ear create IM like distortions unless it responds to those higher frequencies? How could it even in theory respond to high frequency transients, but not high frequency steady tones?

     

Yes, the brain processes transients differently than steadier sounds. But that doesn't free it from being limited by the frequencies the ear is able to put upon the auditory nerves. The stereocilia that convert sound to nerve impulses are tuned to respond no higher than a center frequency of 15 kHz or so. That they are something like weakly tuned filters is why we have some response to slightly higher frequencies (near 20 kHz when young). What about that would appear able to respond to faster transient events?

     

    You can state that maybe it does, but what is the hypothetical mechanism behind how that would happen?


    Those non-linear electronics that can display IM do have response to those frequencies even though non-linear. Even if non-linear, how can the ear create IM like distortions unless it responds to those higher frequencies? How could it even in theory respond to high frequency transients, but not high frequency steady tones?

     

    Yes the brain processes transients differently that steadier sounds. But that doesn't free it from being limited by the frequencies the ear is able to put upon the auditory nerves. The stereocilia that convert sound to nerve impulses are tuned to respond no higher than a center frequency of 15 khz or so. That they are something like weakly tuned filters is why we have some response to slightly higher frequencies (near 20 khz when young). What about that would appear able to respond to faster transient events?

     

    You can state that maybe it does, but what is the hypothetical mechanism behind how that would happen?

     

    The response rate of the ear, and the auditory system in general is subject to empirical measurement.

     

    I am saying that evidence that the system is responding to a certain frequency or range of frequencies is evidence that *something* in the system has to respond to these frequencies -- hard to even say this without it being a tautology.

     

    How could a system respond to high frequency transients without responding to high frequency tones? Easily. The transients may be used for localization for example.

     

Now, purely for example, a system might be designed to respond only when the frequencies 1kHz, 10kHz and 100kHz are presented in phase at the same time -- if you were to only test individual tones you would entirely miss the 100 kHz response. That's just one example; there could be many, many possibilities that haven't been tested.


    Those non-linear electronics that can display IM do have response to those frequencies even though non-linear. Even if non-linear, how can the ear create IM like distortions unless it responds to those higher frequencies? How could it even in theory respond to high frequency transients, but not high frequency steady tones?

     

    Yes the brain processes transients differently that steadier sounds. But that doesn't free it from being limited by the frequencies the ear is able to put upon the auditory nerves. The stereocilia that convert sound to nerve impulses are tuned to respond no higher than a center frequency of 15 khz or so. That they are something like weakly tuned filters is why we have some response to slightly higher frequencies (near 20 khz when young). What about that would appear able to respond to faster transient events?

     

    You can state that maybe it does, but what is the hypothetical mechanism behind how that would happen?

     

    There may indeed be no response to transients with faster rise time than the highest sustained tone one can hear. But I sure would like to read a research publication that provides a definitive answer to the question.

     

     

    Sent from my iPhone using Computer Audiophile


    There may indeed be no response to transients with faster rise time than the highest sustained tone one can hear. But I sure would like to read a research publication that provides a definitive answer to the question.

     

     

    Sent from my iPhone using Computer Audiophile

     

It is not even about hearing at this point. Your basilar membrane won't respond physically. That is where the motions of air imparted to the ear canal become nerve signals. Since there isn't a part of the BM to respond at higher frequencies, there is no encoding of the higher frequencies sent to the brain to work with.


    In fact, this is a popular picture with a chart showing ANALOG (signal from the best microphone), as a reference standard.

     

    That is one heck of a microphone, then. Tell us, in what sort of universe do you live?

    Tell us more: what would the impulse response of something like a U47 look like?


    ... what would the impulse response of something like a U47 look like?

     

    With leather?


    Igor, I would be very surprised if this "click" was indeed a naturally produced acoustic signal picked up by a mic. Can you think of any naturally produced "click" sound in the real world with frequency components above 96kHz (as is obvious from the problems in reconstruction using a 192kHz sample rate) that has no reverberation at all, like this one?

    There is a surprising amount of energy at ultrasonic frequencies produced by a finger snap. Don't know about reverb.

     

    https://books.google.com/books?id=Sb5nkkz-QIoC&pg=PA143&lpg=PA143&dq=echolocation+finger+pulse&source=bl&ots=lEjXIOoHSZ&sig=f6Nyowv-58v9BGEZVkunrSwnIio&hl=en&sa=X&ved=0ahUKEwih4f_7v87SAhUB7iYKHRciC6cQ6AEIPzAF#v=onepage&q=echolocation%20finger%20pulse&f=false


    That is one heck of a microphone, then. Tell us, in what sort of universe do you live?

    Tell us more: what would the impulse response of something like a U47 look like?

     

    'Sokay. He said in response to my question along the same lines that he was speaking ironically, but it was apparently lost in translation.

     

     

    Sent from my iPhone using Computer Audiophile


Agree 16/44 is enough. The Trinity Sessions is one example of a great 16/44 recording. Maybe the A to D converters/process is worse now than in the early years?
The vinyl version however (I have both) beats the CD hands down. Strange, since the source is a DAT recording at 16/48. There's more to sound than all the measurements mentioned in this thread!

     

    Sent from my HTC One_M8 using Computer Audiophile mobile app


    ...

    In fact, this is a popular picture with a chart showing ANALOG (signal from the best microphone), as a reference standard. And the loss (or lack of loss, as they represent -))), in the case of DSD) in the temporal domain of the impulse response and energy when trying to register and then reconstruct this signal with the help of various digital standards (48, 96,192 and DSD).

     

    There are several related issues:

     

    1) Bandwidth necessary to produce an impulse response whether analog or digital -- looking at peaks when bandwidth limited they get shorter and fatter (to describe in simple English)

     

    2) In the case of digital, the "close in" phase error causes widening of the impulse

     

    3) Intermodulation distortion can cause split peaks

     

    But yes, when we want to preserve the "live mic" sound, we want to preserve this impulse response using either or both good analog and digital techniques.

     

    The impulse response is the basis of measuring the "transfer function" in systems analysis language


    That is one heck of a microphone, then. Tell us, in what sort of universe do you live?

    Tell us more: what would the impulse response of something like a U47 look like?

     

I wouldn't be so quick to question the microphone. Assuming the microphone is accurate and, not knowing the specs, assuming it is bandwidth-limited to 100 kHz -- I would suspect that the PCM 96/192 recordings might have more jitter (specifically close-in phase error) than optimal -- that would produce the pattern we are shown


    I wouldn't be so quick to question the microphone. Assuming the microphone is accurate and not knowing the specs assuming it is bandwidth limited to 100 kHz -- I would suspect that the PCM 96/192 recordings might have more jitter (specifically close in phase error) than optimal -- that would produce the pattern we are shown

     

    It's the complete lack of any reverb that makes me question whether it could possibly be a mic pickup of a natural acoustic signal.

     

     

    Sent from my iPhone using Computer Audiophile


    'Sokay. He said in response to my question along the same lines that he was speaking ironically, but it was apparently lost in translation.

     

    "In fact, this is a popular picture with a chart showing ANALOG (signal from the best microphone), as a reference standard. And the loss (or lack of loss, as they represent -))), in the case of DSD) in the temporal domain of the impulse response and energy when trying to register and then reconstruct this signal with the help of various digital standards (48, 96,192 and DSD)."


    It's the complete lack of any reverb that makes me question whether it could possibly be a mic pickup of a natural acoustic signal.

     

    It is clearly not. Nothing is that perfect.


    It's the complete lack of any reverb that makes me question whether it could possibly be a mic pickup of a natural acoustic signal.

Fair enough. I often wonder, perhaps like many of us, when in a dive bar which I assume doesn't have the highest-end equipment, how the sound still has that wonderful "live" quality. It's interesting to consider which qualities of the "live mic feed" often aren't captured by recordings. It could be a number of things, and not always what the recording engineers claim, e.g. Cookie Marenco: is it DSD256, or just great equipment and care and skill? Or Barry Diament, likewise (not DSD though ;)


    It is not even about hearing at his point. Your basilar membrane won't respond physically. That is where the motions of air imparted to the ear canal become nerve signals. Since there isn't a part of the BM to respond at higher frequencies there is no encoding of the higher frequencies sent to the brain to work with.

    I was at NO Jazzfest in 2013 and it was raining, and I recall the "Better Than Ezra" set where we had managed to get fairly close to the stage but dozens back and totally jam packed like sardines. Everyone was standing in mud and I distinctly recall feeling the bass reverb up my legs through the ground. Now *that's* bass and no concern about my cochlea hearing below 20 Hz -- I could clearly feel it :)

     

    But keep the response of your own BM to yourself -- how do you know that my membranes don't have nonlinear responses, perhaps when 64kHz is presented along with 16 kHz? Or maybe retinal inputs to the cortex? Again, you are cognitively stuck in linear analytics. Can you cite literature which indicated the auditory system is limited to Fourier analysis?


    I was at NO Jazzfest in 2013 and it was raining, and I recall the "Better Than Ezra" set where we had managed to get fairly close to the stage but dozens back and totally jam packed like sardines. Everyone was standing in mud and I distinctly recall feeling the bass reverb up my legs through the ground. Now *that's* bass and no concern about my cochlea hearing below 20 Hz -- I could clearly feel it :)

     

    But keep the response of your own BM to yourself -- how do you know that my membranes don't have nonlinear responses, perhaps when 64kHz is presented along with 16 kHz? Or maybe retinal inputs to the cortex? Again, you are cognitively stuck in linear analytics. Can you cite literature which indicated the auditory system is limited to Fourier analysis?

     

    Different parts of the BM vibrate in tune with different frequencies. Simple mechanical construction. Yes you can feel in different body parts the 20 hz or below. The construction of the BM means there is no location for it to vibrate at 64 khz. Since the vibrating fluid moves the hair cells which is what moves the endings to produce nerve impulses your BM has no ability to do anything at 64 khz. If your brain has a way to process ultrasonic results without ultrasonic input then you might be onto something. As I am not the one claiming extraordinary processing ability beyond those physical parameters and beyond any known testing of human hearing ability it is not up to me to prove a ridiculous negative. That is your job if you insist on hearing responding to ultrasonic rates of transient increase.

     

    Once you have accomplished your job, then we can talk about how common such steep high frequency transients are in real music whether live or recorded. I seem to recall for instance at 100 khz air will lower the level of sound about 5 or 6 db per meter.


    Different parts of the BM vibrate in tune with different frequencies. Simple mechanical construction. Yes you can feel in different body parts the 20 hz or below. The construction of the BM means there is no location for it to vibrate at 64 khz. Since the vibrating fluid moves the hair cells which is what moves the endings to produce nerve impulses your BM has no ability to do anything at 64 khz. If your brain has a way to process ultrasonic results without ultrasonic input then you might be onto something. As I am not the one claiming extraordinary processing ability beyond those physical parameters and beyond any known testing of human hearing ability it is not up to me to prove a ridiculous negative. That is your job if you insist on hearing responding to ultrasonic rates of transient increase.

     

    Once you have accomplished your job, then we can talk about how common such steep high frequency transients are in real music whether live or recorded. I seem to recall for instance at 100 khz air will lower the level of sound about 5 or 6 db per meter.

Dude, you are outside your knowledge base. You aren't grokking the concept of nonlinear response. The things you say with seeming certainty are obtained where? Please provide specific scientific references which specifically address the "inability" of not only the cochlear apparatus but the greater auditory system to respond in a nonlinear fashion to ultrasonics.

     

    For example, there are articles which describe ultrasonic bone conduction:

     

    https://www.ncbi.nlm.nih.gov/pubmed/17282589

    http://www.tinnituscenter.com/pubs/pg111_114.pdf

    The Study of Digital Ultrasonic Bone Conduction Hearing Device - IEEE Xplore Document

    https://pdfs.semanticscholar.org/a925/1364cc39960a59d3bd098edf7a65fd31cc09.pdf

     

    Do you even understand these concepts? If so please respond with specific citations.

     

    To be very clear however, what I am claiming is not that the auditory system *does* respond to these frequencies, rather that evidence that the system responds to transients at those frequencies *would be* direct evidence that the system does, in some fashion, respond to those frequencies. I think you are having a really hard time understanding this mathematical concept which is keeping you bogged down in the BM.








