
bluesman

Members
  • Content Count: 1967
  • Joined
  • Last visited

About bluesman

  • Rank: Crusty Old Curmudgeon

Recent Profile Visitors

8601 profile views
  1. Great piece! FWIW, Stax cushions have always seemed a bit thinner & more fragile in feel than most others. They actually glued themselves together on my SRX Mk 3s after being left unused for a few months when we built our house & moved. Thanks for the info & its articulate presentation!
  2. With all due respect, I don't think this is a problem with internet forums per se - I think it's largely a problem with people. Anonymity and accountability are not mutually exclusive. But each of us has to hold himself or herself accountable and behave accordingly, even if anonymous. One can (and should) have integrity and be trustworthy, open to change, and comfortable with ambiguity whether on a soapbox in the town square or hidden behind an internet pseudonym. We hold ourselves accountable by accepting reasonable limits, recognizing and apologizing when we unintentionally cross them, and displaying a self-deprecating sense of humor (for which we have smileys 😁 ). It's true that there can be no accountability without consequences - but accepting responsibility for a faux pas, apologizing, and actively trying to learn from the encounter are consequences, even if no one but you knows you have to do it.
     Sure, visual cues like facial expression and body language help us assess others' reactions to us. But the critical cues are those that herald developing discomfort, insecurity, etc. in a response - and I think they're most often there. It's a rare thread that goes off the rails without some indication of impending trouble before the crash, no matter how subtle. Sensing, understanding and reacting productively to the emotions of others is the heart of emotional intelligence. When you sense that someone is becoming uncomfortable or feeling insecure / threatened, you have to help them get back to a more comfortable position before you can continue the primary interaction. This is where that self-deprecating sense of humor can be invaluable: "I didn't mean to offend you with my opinion. I don't agree with yours, but I recognize that it's as valid and important to you as mine is to me. Please help me understand why you feel the way you do. If there's common ground, let's find it." You may make a breakthrough and go on to a great interaction.....or you may get flipped the digital bird. But your EIQ makes you try, and that's all you can do. Then you just gotta decide when to hold 'em and when to fold 'em.
     FWIW, I didn't make this stuff up in my head. I've been a Six Sigma Master Black Belt for about 12 years and am trained and certified in Design for Six Sigma, Lean, and Change Management by GE. I was associate chief medical officer at a 1000 bed academic medical center for about 15 years and found this approach to be very helpful in managing medical staff behavior and solving problems. After 38 years of medical staff and hospital leadership, I'd much rather have to deal with audiophiles than doctors, Chris!
  3. I applaud your use of Goleman's Emotional Intelligence as a framework for your post and thoughts, Foggie. It's a wonderful book that's relevant today (and will be tomorrow) even though it's 25 years old - I strongly suggest that everyone read it before throwing more fuel on this fire. I see his concept of an emotional IQ as a kind of social equivalent to a handicap in golf. No matter how great the disparities among people's knowledge, beliefs, and personalities, they can all interact enjoyably and productively if their EIQs are sufficiently high. His basic premise that we do best in life when we learn to temper the rational with the emotional (and vice versa) is a perpetual key to success on many levels. If everybody took Goleman's approach to heart and developed the sensitivities embodied in EI, the world (which includes AS and every other internet forum) would be a much more pleasant and productive place. Perhaps the best reason to adopt it is that it helps people who differ greatly on issues get along better and more productively.
     I'm paraphrasing Goleman to illustrate your bullet points in what I hope may be a more obvious and inspirational way:
     • Self-awareness: understanding personal moods, emotions & drives, and their effect on others. Manifests as appropriate self-confidence, realistic self-assessment, a sense of humor about yourself, and knowing & controlling your own emotions.
     • Self-regulation: managing disruptive impulses and moods, suspending judgment, thinking before acting. Manifests as trustworthiness, integrity, comfort with ambiguity, and openness to change.
     • Internal motivation: being driven by passions that go beyond money and status, by the joy of learning and doing. Manifests as drive to achieve, true optimism, and the ability to commit to ideas and efforts.
     • Social awareness: sensing and understanding the emotions of others, and interacting appropriately to achieve the best outcome. Manifests as empathy and awareness of / respect for the hierarchy of relationships in groups & organizations.
     • Social skills: managing relationships, finding common ground, building rapport. Manifests as the ability to lead change by persuasion and intelligent discourse rather than brute force.
     There's a great discussion around EI in the Harvard Business Review that says it very well: "Don’t shortchange your development as a leader by assuming that EI is all about being sweet and chipper, or that your EI is perfect if you are — or, even worse, assume that EI can’t help you excel..." (not that sweet and chipper wouldn't go a long way toward smoothing some of our most contentious posts and threads 👁️ )
     A high emotional IQ can help students learn and teachers teach. Those here with sound knowledge of a subject can be mentors, coaches and inspiring leaders for us all by adopting Goleman's approach to relationship management rather than berating those who don't agree with them. There may be a key to a kinder, gentler AS in this simple approach: present your opinion, support it with what you think is the best available evidence, welcome dissent, and be sensitive to emotional cues that suggest the need to back off and/or take a different approach. Whether you think the best available evidence is objective or subjective doesn't matter - there's room in the world for us all, and there is no winning or losing. Believe what you wish, support it as best you can, and live with it. If your emotional IQ is high enough, you'll always be open to change if presented with new evidence you accept - and you'll be better able to convince others of the wisdom of your own opinions.
  4. No worries, mate! The older they get, the smarter you become. Ours are now 38 and 41, and (if we interpret them correctly) it seems that my wife and I may not be quite as dumb as we used to be.
  5. Thanks for the fine work and write-up! I've been sorely tempted to check out some new McIntosh, and you may have pushed me over the line. I'm old enough to remember when the audiophile world and press shunned McIntosh. This was a critical part of my formative years, as I loved everything about Macs from their sound to their looks to their build quality and couldn't understand the flames from non-Mac dealers and the press. Thanks to McIntosh, I learned to trust my ears and judgment far more than reviews and opinions that differed too strongly from mine to be objective. I've owned at least a dozen of their products since buying a new MX110 and a pair of used MC40s in 1969, and I only sold my last pieces (a pair of MC75s) when we downsized from a house to an apartment four years ago. That applies in spades to the original audiophile objections to early Mac tube amps because they operated in class B. Everybody knows that class B sounds dull and lifeless 😁
  6. Great stuff! Here’s another teaser for the article I’m preparing right now - my task yesterday was to install, set up and gain more experience with OpenMediaVault on Raspbian Buster Lite on a Pi 3B+. Today I’m adding it to a multi-Pi music system for live recording, ripping, and listening. It’s up to 3 Pis so far - one as a dedicated audio workstation, one for mixing, mastering, file conversion, and listening, plus the NAS to keep all files out of USB traffic and archive every bit. The reason for separate recording and monitoring devices is that a 3B+ can’t process both a source signal and real-time monitoring of it without stuttering, popping and dropping out. A 4 GB Pi 4 handles this better for single-track live recording and for ripping, but there’s a price to be paid in latency. Fortunately, Audacity has an excellent correction function, although it’s a bit tedious to dial in. It offsets the input by 123 msec on mine after setup, which lets me lay down multiple tracks with excellent time alignment. Once I figure out how to make it work with a brace of 3B+s that I already own, I’ll try to distill it down to a portable recording station with two Pi 4s. I’m waiting for complete resolution of the problems with the 4 GB version before buying any more.
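     If anyone wants to sanity-check their own latency number before wrestling with Audacity's correction setting, here's a rough sketch of a loopback test that should give a ballpark figure. It assumes the python-sounddevice package and a short cable from the interface's output back into its input - the 123 msec figure above came from Audacity's own setup routine, not from this.
```python
# Rough loopback check of round-trip latency. Assumes the python-sounddevice
# package and a physical cable from the interface's output back into its input.
import numpy as np
import sounddevice as sd

fs = 48000                       # sample rate the interface is set to
test = np.zeros(fs)              # one second of silence...
test[100] = 1.0                  # ...with a single full-scale click near the start

rec = sd.playrec(test, samplerate=fs, channels=1)   # play and record together
sd.wait()

delay = int(np.argmax(np.abs(rec))) - 100           # where the click came back
print(f"Round trip: {delay} samples ≈ {1000 * delay / fs:.1f} msec")
```
     It just gives you a starting point to plug into Audacity's latency correction before fine-tuning by ear.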
  7. The 4 gig Pi 4 is obviously still a work in progress. I'm reminded of the early life of the Porsche 911 and how continued increases in engine size and power pushed components to and beyond their limits. Yes, it's another loose analogy - but as displacement got closer to 3 liters than the original 2, little things like head studs started failing. And factory "patches" like case savers and Dilivar studs were only partially effective. Like air cooled 911s, the poor little Pi may have reached the limits of safe and reliable performance without costly and exotic work-arounds - and that's how reliable and inexpensive high performance items turn into finicky and expensive ones. Let's keep trying to make these the best they can be, recognizing that we're probably just biding time until the next advance in SBC design.
  8. The Bell system “speech band” was 300-3400 Hz through decades of dial phone use. Bell Labs did a lot of research to determine everything from the optimal frequency response of their phones to the size of the holes in the dial and buttons on touch tone phones. The equipment was very high quality until the demise of Bell - and it was tough as nails. I suspect that those black dial phones were bulletproof! I blew a 6L6 in my guitar amplifier on a gig in the summer of 1968. It was almost midnight, and I had no spare.....but we had another 2 hours to play. So I called the phone company’s repair service from the club, explained my predicament, and asked if they had any tubes I could buy. The guy who answered asked where I was and said he’d get back to me. About ten minutes later, a Bell System truck pulled up and the driver brought two 6L6s to the bandstand, telling me I should replace both for best sound. I asked what I owed him, and he asked for my home phone number - he told me it was “repair service” because I was a customer!
  9. I agree with you. Digital recording breaks the continuous amplitude range of the analog waveform into discrete quanta (levels) and rounds every value within each quantum to a single step. This reduces dynamic contrast within each quantum and has to affect the liveliness of reproduction to some degree. Higher bit depth creates more (and finer) quanta, so there should be less of this effect. I must admit that I'm not sure I can actually hear it in most recordings of the same material made at 16 and 24 bits - but theoretically, it makes sense to me.
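     For anyone who wants to see that rounding in numbers, here's a toy numpy sketch (my own illustration, nothing rigorous): the same tone rounded to 8, 16 and 24 bit levels, with the worst-case error at each word length.
```python
# Round the same tone to the nearest of 2^bits evenly spaced levels; more bits
# means finer levels and a smaller worst-case rounding (quantization) error.
import numpy as np

def quantize(x, bits):
    levels = 2 ** (bits - 1)            # signed range of a 'bits'-wide word
    return np.round(x * levels) / levels

t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)    # a half-scale 440 Hz tone

for bits in (8, 16, 24):
    err = np.max(np.abs(quantize(tone, bits) - tone))
    print(f"{bits:2d} bits: worst-case rounding error {err:.1e} of full scale")
```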
  10. The magic word in this is "available" (at 36 seconds into the video). Bit depth determines the dynamic range available for use during recording, i.e. the maximum DR of recordings captured by the system. This is independent of the source program itself and of playback equipment. I suspect you could record a rock band playing a song that has a DR of 4 dB (which is typical in some genres) at an average level near 0 dBFS with an 8 bit system and hear little if any difference compared to a 16 or 24 bit capture. The noise floor would be nearly 50 dB below the signal in an 8 bit file, and the signal would be sufficiently loud and sufficiently compressed to render any differences in accuracy inaudible. Low bit depths create an artificially high noise floor by "compressing" all signal that's within the lowest quantum level range in each sample to the same amplitude, which makes the noise as loud as any musical content that's also within that range. Signals above the top of that range are unaffected by the noise, although they too are rounded within their ranges (which is not mentioned in the video).
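     For anyone who wants the arithmetic behind "available", the usual rule of thumb is about 6 dB of dynamic range per bit:
```python
# Rule-of-thumb dynamic range per word length: roughly 6.02 dB per bit,
# plus about 1.76 dB for a full-scale sine.
for bits in (8, 16, 24):
    print(f"{bits:2d} bits: roughly {6.02 * bits + 1.76:.0f} dB available")
# prints about 50, 98 and 146 dB respectively
```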
  11. Very interesting thoughts - thanks! I see the digitized waveform a bit differently, in that each and every instrument being played is present (if being played at the time of capture) in each and every sample. The single instantaneous value being captured is the summation of all values for all parts being played. We can't separate them within an individual sample because there's no dynamic context - the samples by themselves contain data but no information, and are a perfect example of the difference between the two, in my opinion. But sequenced as they were when captured, they define a complex waveform in which the individual parts can be identified by ear and in a Fourier transformation. And we could determine the contribution of each instrument to the value of that sample with a little (OK, more than a little...) mathematical manipulation. Of the 1.3V in the 12,273,418th sample of a string trio piece, we might see that 0.2V were the violin, 0.4 were the viola, 0.5 the cello, 0.15 the natural intermodulation of the three, and 0.05 the cumulative noise. Just thinkin'...........😉
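     Here's a toy numpy sketch of what I mean, with made-up frequencies and levels (a real trio would be a forest of harmonics): any single sample of the mix is just the sum of the three parts at that instant, but the sequence of samples lets a Fourier transform pull them back apart.
```python
# Three "instruments" summed into one waveform and sampled together.
# One sample is just the sum of the parts; the FFT of the sequence
# shows each part's frequency and level clearly.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
violin = 0.2 * np.sin(2 * np.pi * 880 * t)
viola  = 0.4 * np.sin(2 * np.pi * 440 * t)
cello  = 0.5 * np.sin(2 * np.pi * 110 * t)
mix = violin + viola + cello

n = 12345                                        # any one sample...
print(mix[n], violin[n] + viola[n] + cello[n])   # ...is just the sum of the parts

spectrum = np.abs(np.fft.rfft(mix)) / (fs / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(fs, 1 / fs)
for f in (110, 440, 880):
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f} Hz component: amplitude ≈ {amp:.2f}")
```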
  12. I'm not suggesting otherwise and apologize if I gave that impression. I was just trying to convey in simpler terms that it does this by enabling more accurate capture of instantaneous signal levels in the source waveform, which obviously encompasses both the bottom and the top of the DR. And that accuracy is, in large part, limited by the errors that result from fitting each sample to the size of the "word" (i.e. the bit depth) - quantization error, if I remember it all correctly. The other common confusion I see stemming from this is failure to understand that the bit depth of the recorded file determines only the DR of the recording. It determines the DR of the source file you're playing, not the SNR of your playback equipment.
  13. That's true for the equipment but not the program material, which is an important functional difference. Program material rarely has a DR equal to the SNR of the equipment through which it's being played. Unless the DR of the program is equal to or greater than the SNR of the system (which is virtually unheard of today), the desired listening level will determine the effective SNR. If the recording is a quiet piece with limited DR, e.g. concerti for solo violin or guitar, the listener may turn the volume up enough to make system background noise intrusive. Tchaikovsky's 4th has a wide DR, so I set the volume control lower to avoid excessive peak SPL. This also lowers background noise, so the audible SNR is higher.
  14. I'm offering a loose functional analogy meant to be illustrative for Teresa and not a literal description. I thought the movie analogy was more useful because each frame is an instantaneous sample of the changing visual "signal" and the end product is a dynamic sequence of these samples. As the dynamics of motion picture production and control are similar in many ways to those of an audio waveform, it just made a lot more sense to me than your example. I could be wrong - I look forward to feedback in this thread to help me improve my communication skills. I suppose that pixels in an image can illustrate the same concept in a different representation, the main differences being that pixels are not samples or "complete" representations of anything. They're components that combine to form a static image, just as linked dyes combine to form the color image on emulsion-based film. And there are many different kinds of pixels that vary in shape, size, ability to display color, etc.
     For Teresa et al: there are similarities that may help you understand the subject that started this discussion. Each pixel in an image can display multiple colors within a designated set, e.g. RGB or cyan-magenta-yellow-black (because not all pixels are functionally alike). An 8 bit color image can carry 2^8 colors - it's like having a box of 256 crayons that divide the entire visible color range into 256 parts. No matter how many shades of color are in the source, the pixels in the screen will display the closest of those 256 colors to each color in the source image. If the exact shade of red falls between two in the "crayon box", it will use the closer one. A 10 bit image can display 1024 different colors, so it can render an image closer to the original in color composition.
     The accuracy of color rendition is somewhat analogous to the accuracy of voltage representation in a single sample of a digitized audio waveform, in that the exact value is limited to a given number of decimal places. So it's "rounded" up or down to fit within the limits of that digital field. The more bits available per sample, the more accurately the value can be recorded (i.e. the more significant digits it contains and the smaller the potential difference - no pun intended - between the actual value and its digital approximation).
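     And here's the crayon box in a few lines of Python - a made-up shade snapped to the nearest value an 8 bit and a 10 bit channel can actually display. The leftover difference is exactly the kind of rounding that a higher bit depth shrinks, in color or in audio.
```python
# Snap an arbitrary shade to the nearest value an 8 bit channel (256 steps)
# and a 10 bit channel (1024 steps) can display; the shade itself is made up.
exact_shade = 0.73519            # source color, as a fraction of full intensity

for bits in (8, 10):
    steps = 2 ** bits - 1
    nearest = round(exact_shade * steps) / steps
    print(f"{bits:2d} bits: nearest displayable shade {nearest:.5f} "
          f"(rounding error {abs(nearest - exact_shade):.5f})")
```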