REALISM VS ACCURACY FOR AUDIOPHILES
PART 2: THE REAL SOUNDS OF LIVE MUSIC
In the first of this series of articles (link), we discussed the overall sonic impressions music can make. We explored some of the basic elements of a musical performance, including
- a little about how and why it sounds as it does to the listener
- how it interacts with and is shaped by the environment in which it was created
- how it interacts with the systems through which it’s captured, manipulated, and converted to an archival and playable form
- how it behaves in the systems and environment in which it is reproduced
- whether and why (or why not) what we hear at home is an accurate reproduction of the original
Most importantly, we began to explore the science and data behind the duality of realism and accuracy in reproduction. Keep in mind that these two characteristics are not mutually exclusive. Further, they are not paired continuous variables trapped in a distribution equation that requires them to sum to a fixed value. In other words, accuracy and realism can coexist at any levels from “0 to 100%” each, and a change in either may or may not change the other (although it could influence the listener’s perception of both).
I enclose the % value in quotes above because we really can’t measure realism in reproduction. An audiophile’s estimate of the degree of realism is purely subjective and I know of no way to measure it. We can and often do ask multiple respondents to judge the realism of reproduction, which generates a frequency distribution of responses that is often used as an objective measure. And it may be useful, but making it a measurable parameter doesn’t make it objective – it’s still subject to the subjective interpretations of human respondents. Even if we run sophisticated and comprehensive audiometric testing on all participants, and we use intra-observer and inter-observer correlations for consistency after multiple presentations, we’re still left with the big unknown that is the brain.
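When multiple respondents rate realism, agreement between listeners can at least be quantified, even though the ratings themselves stay subjective. Here's a minimal Python sketch of one common inter-observer measure, the Pearson correlation; the listener ratings below are invented purely for illustration.

```python
# Sketch: quantifying inter-observer agreement on "realism" ratings.
# The ratings are invented for illustration, not real survey data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 0-10 realism scores from two listeners over eight systems
listener_a = [7, 4, 9, 5, 8, 3, 6, 7]
listener_b = [6, 5, 9, 4, 7, 3, 7, 6]
print(f"inter-observer r = {pearson_r(listener_a, listener_b):.2f}")
```

A high correlation tells us the listeners agree with each other – it still can't tell us whether either of them is "right."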
Then there’s the problem of poor human auditory memory, which was introduced and supported with science in the first article in this series. As we learned last time around, our memory for what we hear is the weakest of the three senses compared – vision, audition, and physical touch. So realism remains a subjective parameter, although there are categories and specific examples of devices that are widely found to be more realistic reproducers than many others.
When we want to understand, discuss, and evaluate objective and subjective parameters as tightly interwoven as realism and accuracy in audio reproduction, I think it helps a great deal to start with what is known and shown to be fact. A full and usable fund of knowledge will make the transition to subjective application of that knowledge much easier and the ensuing discussion more useful and informative. CLICK ALL THE LINKS SCATTERED THROUGHOUT THIS ARTICLE TO HEAR THE EXAMPLES AS YOU READ.
FACTS AND ASSUMPTIONS IMPLICIT IN THIS ARTICLE
1. This article (and the entire series) is meant to be an audiophile’s guide to greater understanding and enjoyment of music of all kinds. It is ONLY a guide – it is not an instruction manual.
2. Neither you nor I will ever be able to identify the majority of instruments in the recordings we hear by make, model, and vintage – and there’s no reason to try or to believe that it’s necessary.
3. Because they affect every aspect of every type of music, from creation to performance to capture to playback, the differences among instruments of the same kind (e.g. trumpets, clarinets, guitars, etc.) are quite important to some combination of the composers, conductors, performers, audience, recording team, designers / builders / users of audio equipment, and listeners.
4. Because of #3, these differences have an obvious and significant effect on the music we hear.
5. Although identifying specific instruments by brand, model, and vintage is difficult-to-impossible in performance or in playback, the audible differences among similar instruments are far greater than the audible differences in equally similar equipment to which AS participants (as well as audiophiles everywhere else) apply terms like “eye opening”, “astounding”, and “jaw dropping”. So you should hear consistent differences, even if you have no idea what they are or why.
6. Based on #5, anyone who can hear a 3 dB boost applied to a narrow frequency band can and should be able to hear and appreciate the kinds of differences among instruments that this article describes and discusses.
7. It doesn’t matter if you know that you’re hearing a Selmer Mk 6 or a Yanagisawa tenor saxophone. What matters is that you understand how a player’s choice of instrument can affect both his / her playing and what you hear. The combination of player and instrument shapes the unique, recognizable tone you identify with a player or group. You don’t need to be able to identify Stan Getz or Sonny Rollins or Ben Webster by name – but if these three sound the same (or even similar) to you on decent recordings, you’re missing a major part of the experience.
8. With guided experience over time, you should come to know and recognize a reasonable approximation of the real sound of your favorite artists and music.
9. With this knowledge, you will be better able to judge the accuracy of your equipment and the quality of the recordings you play.
10. There are exceptions to everything. Here’s one that you want to keep in mind: Charlie Parker frequently pawned his sax because he often “needed” money for other things. So he often played a cheap student model or an instrument in bad repair. He sounded like Charlie Parker on pretty much anything he played.
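For the technically inclined, the "3 dB boost applied to a narrow frequency band" mentioned above is easy to model. Here's a minimal Python sketch of a peaking-EQ biquad using the well-known Audio EQ Cookbook formulas (Robert Bristow-Johnson); the center frequency and Q are arbitrary choices for illustration.

```python
# Sketch of a narrow-band 3 dB boost: a peaking-EQ biquad per the
# classic Audio EQ Cookbook. f0 and Q below are arbitrary examples.
from math import cos, sin, pi, log10

def peaking_biquad(fs, f0, q, gain_db):
    """Return (b, a) coefficients for a peaking EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * pi * f0 / fs
    alpha = sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * cos(w0), 1 - alpha / A]
    return b, a

def gain_at(b, a, fs, f):
    """Magnitude response (dB) of the biquad at frequency f."""
    z = complex(cos(2 * pi * f / fs), -sin(2 * pi * f / fs))  # e^{-jw}
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * log10(abs(num / den))

b, a = peaking_biquad(fs=48000, f0=1000, q=4.0, gain_db=3.0)
print(f"boost at 1 kHz: {gain_at(b, a, 48000, 1000):+.2f} dB")
print(f"boost at 4 kHz: {gain_at(b, a, 48000, 4000):+.2f} dB")
```

The full +3 dB appears only at the center frequency; two octaves away the filter is essentially flat, which is what makes a narrow boost so revealing as a listening test.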
ACCURACY IN THE CAPTURE AND REPRODUCTION OF MUSICAL PERFORMANCE
COMPARING TO A REFERENCE
Almost all of you are used to using references to verify accuracy. For example, you verify your downloads by hashing (or at least you should). A hash algorithm condenses the entire digital content of a file into a short, fixed-length string of characters called a checksum or digest; change even a single bit of the file, and the checksum changes completely. If the checksum you calculate by running the download through the same algorithm is not exactly the same as the source file’s (which is provided to you for verification purposes), the download is either corrupt, or the file you thought you were downloading is not the original or has been altered. This can be critical and is a very practical concern. The Linux Mint image download site was hacked a few years ago, and everyone who installed Mint from the iso file on their site at the time without verifying the download also installed malware.
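For the curious, here's what that verification looks like in practice – a minimal Python sketch using SHA-256. The "file" contents here are stand-ins; a real check would read the downloaded iso from disk and compare its digest against the one published by the vendor.

```python
# Sketch of download verification by checksum: hash the local file's
# bytes and compare to the publisher's digest. The byte strings below
# are stand-ins for real file contents.
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of the given bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"pretend this is the Linux Mint ISO image"
published_digest = sha256_hex(original)   # what the download site posts

good_download = b"pretend this is the Linux Mint ISO image"
tampered_download = b"pretend this is the Linux Mint ISO image + malware"

print(sha256_hex(good_download) == published_digest)      # matches
print(sha256_hex(tampered_download) == published_digest)  # does not
```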
What does this have to do with music and audio? We have a similar situation in audio reproduction – what comes out of your speakers may or may not be an accurate copy of what went into the file you’re playing. And that file may or may not be an accurate copy of what the performer played. But (and this is a BIG but!), we ain’t got no algorithm to verify the accuracy of what we’re hearing. We have no objective, measurable reference with which to compare. We don’t have checksums – all we have is our ears and our brains!
So we subconsciously run everything we hear from our audio systems through a cerebral algorithm. We capture sound with our external, middle and inner ears. We pump the electrochemical signals from the cochlea through a neural network to our central auditory processing pathways, and the interpretations from our deeper cortical centers shape our perceptions and impressions of what we hear. If the “auditory checksum” matches with our brain’s reference, we’re happy. It doesn’t even have to be a perfect match – most of us are happy if it’s in the ballpark, given the lack of malware on audio files to date. And if it’s nowhere near where we want it to be, we do a number of things – twist knobs, seek “better” source files, swap out equipment, search the web for “the answer” (which is never 42) etc.
As I started pointing out in the last piece, there are so many variables in music that none of us has experienced more than a small portion. In fact, most audiophiles have very limited experience with the vast spectrum of instruments, venues, musical compositions, performers, recording equipment and methods, etc on which our joy depends. So I thought it would be helpful and interesting to introduce those of you who don’t know this stuff to the spectrum of sound and activity that creates the music we love to hear.
A TREE VIEW OF MUSICAL INSTRUMENT CLASSIFICATIONS
Just as there’s a file tree to organize and guide you through your file system, there’s a hierarchical tree to guide you through the maze of musical instruments. Right about now, many of you are saying “Why should I care?”. In the immortal words of Jack Benny: “Well…………”
You really do need a scorecard to tell the players apart. Without knowing how different the “same” instruments can sound and why they sound as they do, it’s very hard to know if what you’re hearing is what you’re supposed to be hearing. We don’t need to go into the history and evolution of each and every instrument. But some familiarity with the instruments you’re most likely to encounter will almost certainly help you to better understand and enjoy listening to music. And once you become aware of the audible differences among instruments of the same kind, you’ll be much better able to evaluate and appreciate accuracy in your systems.
The other reason it matters to audiophiles is that the physical mechanisms that produce the sounds we hear, record, and reproduce dictate to a significant degree how they are best heard, captured, processed, archived, and played back. As we’ll discuss and illustrate more fully a bit later, every sound in music has an attack, a decay, a period of sustain (which is of constant amplitude for bowed instruments, organs, etc. but decays somewhat for most others), and a release. These characteristics and their interplay shape what we hear and how we hear it – e.g. better pianos and guitars generally have longer, more stable sustain. They dictate performance parameters for our equipment. And if not captured and reproduced accurately, they can alter the source sound in ways that affect its uniqueness and recognizability in playback. Combine these characteristics with the unique frequency spectrum of an instrument’s fundamentals and harmonics, and you have a broad palette of opportunity for distortion of many kinds, starting with flaws in the instrument and extending all the way through the signal chain to your ears.
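For readers who like to tinker, the attack / decay / sustain / release shape described above can be sketched as a simple piecewise-linear envelope. The segment times and levels below are arbitrary illustrations, not measurements of any real instrument.

```python
# Sketch of the attack / decay / sustain / release (ADSR) envelope as a
# piecewise-linear amplitude curve. All times and levels are invented.

def adsr(t, attack=0.02, decay=0.1, sustain=0.6, note_len=0.8, release=0.3):
    """Amplitude (0..1) at time t seconds after the note starts."""
    if t < 0:
        return 0.0
    if t < attack:                       # attack: rise to full amplitude
        return t / attack
    if t < attack + decay:               # decay: fall to the sustain level
        frac = (t - attack) / decay
        return 1.0 - (1.0 - sustain) * frac
    if t < note_len:                     # sustain: held (constant here;
        return sustain                   # a piano's would slowly decay)
    if t < note_len + release:           # release: fade to silence
        frac = (t - note_len) / release
        return sustain * (1.0 - frac)
    return 0.0

for t in (0.0, 0.02, 0.3, 0.9, 1.2):
    print(f"t={t:4.2f}s  amp={adsr(t):.2f}")
```

A bowed string or organ would hold the sustain segment flat as long as energy is supplied; a plucked or struck string starts its long decay the instant the attack ends.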
Many instruments are found most often in a specific musical setting, but there are cool exceptions. In jazz, Dorothy Ashby played harp, Julius Watkins played French horn, Howard Johnson played tuba, and Fred Katz played cello. There are rock violinists and pedal steel players. Open your minds and listen!
CLASSES AND GROUPINGS OF INSTRUMENTS
Although the main instrumental classes differ by world region, ethnicity, culture, etc, and are based on factors such as how sound is produced by them, the materials from which they’re made, etc, the Greeks defined the basic instrument classes still used today for western music as winds, strings, and percussion. These are based more on how they are played than how they produce sound, since many instruments are hybrids of one sort or another. But this tripartite grouping serves us well as the top level of arborization.
Not all instruments in western music are easily categorized. For example, the piano has strings, but they’re struck by hammers – so it could be considered a stringed instrument, a percussion instrument, or both. There are keyboard instruments with strings that make sound by plucking rather than striking them (e.g. the harpsichord), and there are keyboard instruments with no strings at all (e.g. the celeste). This may seem like a minor distinction, but the sound waves they generate differ in major ways, so they sound / blend / record / process / reproduce quite differently from each other.
Similarly, there are different subclasses of the other instrument groups. Many, but not all, instruments made of brass are considered to be part of the “brass” group. Yet many instruments in the brass group are made of other metals or even nonmetallic materials. There are many plastic tubas and trumpets, for example. Each material sounds different, projects differently, records differently, etc – and if they all sound alike to you, you’re missing a vital part of the experience. Some wind instruments are made of metal and some of nonmetallic materials (e.g. the clarinet is typically made of an African blackwood).
Some rely on vibrations of the players’ lips to generate sound, and most of these are traditionally made of some alloy of brass and are traditionally called horns or brass. Many are made from other metals (e.g. sterling silver) or modern composites. They share common use of a cupped mouthpiece into which the player’s lips are gently pressed and “buzzed”, with the size and shape of the mouthpiece being determined both by the specific instrument and the preferences of the player.
The best known prototypical horn is the bugle, a piece of bent tubing with no mechanical means of altering its pitch. The tubing is fixed in length, so the only control over pitch is to buzz the lips at whichever of the tubing’s resonant frequencies the player can produce. The subsequent addition of a small sliding portion of tubing enabled a small range of tuning to compensate for minor variations in dimension from bugle to bugle.
Those of you who are into 16th and 17th century music (of which there’s quite a lot) will know the “natural trumpet”, which was a valveless / keyless trumpet that was in essence a really fine bugle. Skilled players could lip many harmonics of the natural resonances of the tubing to play diatonic scales, although I don’t think any ever succeeded in playing a chromatic scale.
By the turn of the 19th century, the desire to add some kind of manual control over the notes being played led to the addition of small keys over holes in the tubing, similar to the mechanism on a clarinet. But this was an imperfect approach that didn’t work well for many reasons, the most important of which was that the holes in the tubing dulled the tone, reduced projection, and made the instrument sound generally inferior to those with sealed tubing.
Depending on their design, winds can direct and radiate sound in almost any direction. This affects their sound in live situations and dictates optimum recording methods and techniques. If recorded or reproduced poorly, some are easily confused with others – “they all sound alike!”
When Heinrich Stölzel invented the valve in 1814, he revolutionized brass horns. Although he used it in a French horn, it was a piston valve similar in design and operating principle to today’s trumpet valves. French horns have used rotary valves for many years, although there are still some piston valve horns of various kinds and there are rotary valve versions of modern horns usually found with piston valves, e.g. trumpets, flugelhorns, tubas, & euphoniums.
Other wind instruments generate sound from the vibrations of a flexible resonant substance over which the player blows air (most often natural reeds until the past few decades – many substances besides natural cane are now used for this purpose). These are known as woodwinds, although they may be made from metal, rubber, plastic, etc. And they may have one reed (e.g. clarinets and saxophones) or two (e.g. oboes, bassoons, and the English horn). PS: the English horn isn’t a horn.
KNOW THE SOUND OF THE SOURCE
To get started, I humbly suggest Roland Kirk as an example. For those who don’t know his work, Kirk was a multi-instrumentalist with a broad and firm grasp of music. He was controversial, to say the least, and his music lacks the wide appeal of a Wes Montgomery or a Dave Brubeck. But he was unique in many ways, one of which was that he played 2 and even 3 horns at a time…..and did so very well. He sought new and different sounds even as a teenager, resulting in his adopting two rare horns called the manzello and the stritch. My money’s on the high probability that few of you have ever heard a manzello or a stritch live, that even fewer have heard them played at the same time by the same person, and that none of you has heard a saxophone, a manzello, and a stritch played simultaneously in live performance (let alone by the same guy). So how could you possibly know if your system reproduces Roland Kirk recordings accurately?
A MORE COMMON AND CLASSIC EXAMPLE: TRUMPET VS CORNET VS FLUGELHORN
The trumpet and the cornet have exactly the same tubing length (5.4’), but the cornet’s bore tapers out over the half that leads to the bell, while the trumpet’s bore is cylindrical for a full 2/3 of its length. A traditional cornet mouthpiece has a deep, funnel-like cup that gives a mellow sound, while a trumpet mouthpiece has a shallow cup shape yielding a brighter and more directional sound. The conical bore and the deeper mouthpiece give the cornet its classic tone, and those who know can immediately tell Warren Vaché’s cornet from Wynton Marsalis’s trumpet by the sound of the first note. They sound quite different, although careless listeners often take them for the same instrument.
Every trumpet and player combo sounds different too. Listen to Doc Severinsen’s wonderful sound and compare it to Vaché and Marsalis. Even in these lousy video soundtracks, you can clearly hear the differences in tone, projection, and smoothness. Now compare this with the big, round sound of Freddie Hubbard on the flugelhorn. If you can’t hear the differences among Vaché, Marsalis, Severinsen, and Hubbard, you’re missing a thrilling and vital component of the music.
Accurate reproduction is obviously critical to the artists, composers, conductors etc who specify a given instrument. And the unique sounds you associate with your favorite musicians are usually facilitated and enhanced by their purposeful choice of instrument.
WHY AND TO WHOM DOES THIS STUFF MATTER?
If you look at random posts across the spectrum of the audiophile community’s web presence, starting with AS, you’ll find devotees of every imaginable genre of music. While some are in the distinct minority, a fair number of people enjoy music from every one of the last 10 centuries. Each genre has its own palette of sounds, from the early Gregorian music of the 9th century through medieval and renaissance music to the polytonal and alternative music of the 21st century. Instruments were invented and developed specifically to produce the sounds of the time, and composers wrote for new instruments that caught their attention and inspired their imaginations.
No matter what music you enjoy, you’re listening to at least a few instruments rarely heard elsewhere. Further, music written for the specific instruments of its time is often reinterpreted for and even more often simply played on modern instruments. In fact, music written for specialized instruments but played on common instruments of the modern day abounds, and many audiophiles don’t even realize that they’ve never heard their favorite music as it was meant to be performed, i.e. on the instrument(s) for which it was written. The instruments themselves affect the performance and the sonic image that is captured and reproduced for us. The SQ and mechanical characteristics of instruments are major contributors to the sound and feel of the music they make, from pre-piano keyboards to 19th century wind instruments to the latest 4 valve trumpets used today in jazz, Baroque, and ethnic music (e.g. Middle Eastern).
Proper instruments and players who know how to use them are both essential, if the music is to sound “right”. But even if the true sound and feel of the music is captured in the recording, your system must be accurate enough to let it escape into your listening room – and you have to know what you’re listening for, or it’s lost. It has to be there and you have to recognize it.
HISTORICALLY ACCURATE MUSIC DEMANDS SONIC ACCURACY
A lot of the beauty and uniqueness of music through the centuries comes from the fascinating instruments for which it was written and meant to be played. For example, if you love Baroque music or early Romantic works, you probably understand how different the music sounds played on period instruments vs modern ones. Everything from the sound itself to the mechanical demands period instruments place on those who play them affects the overall sonic image. Keys were harder to press 200 years ago, so speed, phrasing, articulation, expressiveness, etc. were all different.
Some period instruments like fine Cremonese violins, cellos, basses etc were every bit as beautiful to play and hear when new as they are today. But keyboards, keyed instruments, percussion etc have evolved into truly different instruments despite having the same names as their ancestors. Classical guitars have evolved in the way they’re carved, braced, and finished. Strings have evolved, whether for musical instruments or tennis rackets – and musical instruments, like tennis, are played differently today in large part because of structural and technical evolution. The difference between a modern guitar string and a 19th century string is as great as that between a gut tennis racket string and ones made from Kevlar or polyester. Hearing the difference requires true accuracy in reproduction.
If you think tubes vs transistors was a major issue in the music and audio world, you can’t imagine what a stir the issue of period instruments vs modern has been. Baroque music lovers are still debating (OK – they’re arguing) over the relative joys and atrocities. The big battles began in earnest over a 1964 recording of the Brandenburg Concertos by Concentus Musicus Wien. Here’s a typical review:
“Music lovers were either baffled or enthralled by the rough edges, the texture in which orchestral voices, imbued by the spirit of counterpoint, were vying for equality without ever becoming one, never letting up on the conversation, each expressing its personality and distinct voice.”
“Flutes lacked resonance and eschewed vibrato, trumpets and brass blared like hunting horns, oboes and flutes sounded like cackling crows. This was no pretty-sounding music, with the heavy-handed accents of the romantics. Instead, it was living music, full of relief and rhythmic buoyancy, with a sense of drama of its own volition. But it came at a heavy price, the necessity to master those instruments first before attempting to lighten the sound and the rhythmic flow, objectives that took some time to realize.”
A truly accurate system will make the aforementioned distinctions obvious (and solidify your opinions about the alternatives, no matter what they may be). But you can easily see how a system that colors reproduction with “euphonic shading” [my term] could smooth off the very edges that distinguish performances on period and modern instruments. A healthy dose of old fashioned tube warmth could very well color the character right out of baroque music played on period instruments or make a lute sound more like a guitar than it should. And a finely focused system with a forward top could turn a bright horn or double reed into fingernails on a blackboard for those irritated by brilliant highs.
Much of the joy listeners get from their preferred music stems from the characteristics that differentiate it from other genres. Such differentiators may be lost in playback through a system whose performance characteristics are aimed at a different combination of program material and listener. Those who want huge bass favor systems that could emphasize the low register enough to shade or even obscure delicate contributions from soft spoken instruments in higher registers. And the oboes and flutes that sounded like “cackling crows”, as decried in the quote above about the Brandenburgs, are very sensitive to playback equipment.
Since the subtle tones and timbres of many instruments stem from their unique harmonic spectra, some kinds of otherwise euphonic harmonic distortion could even affect the sound of instruments enough to make them indistinguishable from others (e.g. oboe vs English horn playing in the same register).
So literal accuracy in reproducing what was recorded can be critical to the enjoyment of music, because true accuracy is essential to recreating the total experience intended by the composer and performers and (hopefully) captured by skillful, sensitive, and dedicated recording engineers. Mere realism isn’t enough – would you really be happy with an English horn recording that sounds like there’s a real live oboe in your living room? If you can’t clearly hear the difference between a Bach lute suite played on a classical guitar and the same piece played on a lute, it doesn’t really matter how realistic the instruments sound to you – you’re simply not hearing the music. And if you can’t hear the richness and expressive spectrum of a 9’8” Bosendorfer grand in music written to be played on that instrument, you’re missing something no matter how well your system makes it sound like a real 5’7” Steinway in your living room.
BEYOND FREQUENCY RESPONSE AND DISTORTION
Phase effects introduced by system components or recording technique can audibly affect the harmonic structure (and therefore the overall sound) of instruments. Relative amplitudes and phase coherence among the harmonic spectrum produced by an instrument determine much of its timbre. Single reed instruments (clarinets, saxophones etc) all generate sound from resonance of the vibrations of the reed in the blind end chamber formed by the player’s mouth and the mouthpiece. The bore of a clarinet is close to cylindrical, which favors odd order harmonics over even. But a saxophone's internal shape is more conical, a shape that generates more even order harmonics that make the sound fuller and richer.
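You can compute the odd-vs-even distinction for yourself. This Python sketch builds a clarinet-like tone from odd harmonics only and a sax-like tone from both odd and even harmonics, then probes a single DFT bin at the second harmonic. The partial amplitudes are invented for illustration – real instruments have far more complex spectra.

```python
# Sketch: cylindrical-bore (clarinet-like) tones emphasize odd harmonics;
# conical-bore (sax-like) tones carry strong even harmonics too. We build
# both from sine partials and probe one DFT bin. Amplitudes are invented.
from math import sin, cos, pi

FS, F0, N = 48000, 200, 4800            # 4800 samples = 20 cycles of 200 Hz

def tone(harmonic_amps):
    """Sum of sine partials at multiples of F0 with the given amplitudes."""
    return [sum(a * sin(2 * pi * F0 * (k + 1) * n / FS)
                for k, a in enumerate(harmonic_amps))
            for n in range(N)]

def dft_mag(x, freq):
    """Magnitude of the single DFT bin at `freq` (must align with a bin)."""
    k = round(freq * len(x) / FS)
    return abs(sum(x[n] * complex(cos(2 * pi * k * n / len(x)),
                                  -sin(2 * pi * k * n / len(x)))
                   for n in range(len(x))))

clarinet_like = tone([1.0, 0.0, 0.5, 0.0, 0.3])   # odd harmonics only
sax_like      = tone([1.0, 0.7, 0.5, 0.4, 0.3])   # odd and even

print("energy at 400 Hz (2nd harmonic):")
print(f"  clarinet-like: {dft_mag(clarinet_like, 400):8.1f}")
print(f"  sax-like:      {dft_mag(sax_like, 400):8.1f}")
```

The clarinet-like tone shows essentially zero energy at the second harmonic, while the sax-like tone shows plenty – a crude but honest caricature of why the two bore shapes sound so different.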
Metal instruments generate ringing distortion that’s inharmonic with the tones being generated by the vibrating column of air. This adds what most describe as a “metallic” component not present in wooden and composite instruments. This is especially important for music played by instruments that can be metal, wood, or plastic, like clarinets. My only clarinet is an old brass student model that sounds quite different from a wooden clarinet despite having the same scale, keying, dimensions, reed, and player. To call its timbre mellow would be stretching credulity to the breaking point – even Paul Desmond couldn’t make this baby sound sweet.
Here’s a little test to whet your appetite for more. The links in the first column of the table below labeled “Instrument” and “Music” will take you to two samples of music played by each of 4 clarinets. The music is a pair of excerpts from two etudes by the legendary 19th century clarinetist / composer / teacher Cyrille Rose. Two of the instruments are classic Grenadilla wood, one is brass with a silver bell, and one is nickel silver. All are built to the 135 year old Boehm system of holes and keys that’s still the standard for most music. Metal clarinets are largely enjoyed only by collectors, but the Buffet and the Selmer are still favorites of many professional musicians. Here’s a chart of the instruments:
Click each of the links below and see what differences you can hear. Hearing the source files on your own system would obviously be preferable, but I don’t have access to them – this was originally posted by Kyle Coughlin on theclarinet.com (LINK to home page, if you want to learn more). The files are 192 kbps mp3s, but they’re good enough to distinguish the clarinets from each other. I wish I could get better files, because the differences would be more dramatic than can be heard in the mp3s – but you’ll get the idea. Here are the links:
- Clarinet 1: Excerpt from Rose Etude 1 / Excerpt from Rose Etude 2
- Clarinet 2: Excerpt from Rose Etude 1 / Excerpt from Rose Etude 2
- Clarinet 3: Excerpt from Rose Etude 1 / Excerpt from Rose Etude 2
- Clarinet 4: Excerpt from Rose Etude 1 / Excerpt from Rose Etude 2
Keep in mind that the Buffet and the Selmer are heard in many modern recordings of music of all kinds. Which one is being played is certainly not critical to the enjoyment of the music. But being able to hear the difference between the two adds dimension to your enjoyment and lets you appreciate more fully what each performer adds to the music and how. Benny Goodman played a Selmer for much of his early career, switching in later years to a Buffet R13. Although he endorsed King clarinets, Artie Shaw played Selmers.
Interestingly, Pete Fountain played Leblanc clarinets similar in design, mechanisms, and structure to the others above. But most top New Orleans & Dixieland players far prefer the sound of a larger and even more cylindrical bore, because of the stronger odd order harmonic content that gives the sound its classic “edge”. And a large part of the character of Dixieland is the “bent” or slurred notes that soar through what sounds to some like cacophony. This is much easier to do with holes that do not have keys or open rings over them, because the fingers can partially block the holes. This is how the classic Albert system clarinet is built, and it remains the instrument of choice for many top Klezmer and Dixieland musicians. You can hear the difference when music meant for an Albert system clarinet is played on a Boehm – most players cannot come close to the same sound or feel. And for those who favor Russian, Turkish, Ukrainian, and Belarusian music, Albert system clarinets are also the standard.
EFFECTS OF PHASE AND IMPEDANCE ANOMALIES ON TIMBRE AND PITCH
Human ears and brains tend to group phase-coherent, harmonically related frequency components into a single sensation – in other words, when we hear a harmonic series of tones, we perceive them as a single note pitched at the fundamental frequency of the series. Rather than hearing the harmonics as multiple individual notes, we hear them as one pitch with a characteristic timbre. It takes only a few tones from a harmonic series, heard together, for us to perceive the pitch of the fundamental – even when the fundamental itself is absent from the series.
Some instruments (e.g. double reeds) actually produce very little of the fundamental note that is heard. Instead, they generate a rich harmonic spectrum from which the fundamental pitch is generated by a combination of acoustic intermodulation and psychoacoustics. A lot of scientific investigation has gone into understanding what makes instruments sound as they do. We learn from one excellent study that the trumpet and the clarinet, two instruments with fairly cylindrical bores and flared out bells, display strong fundamentals relative to their harmonics. But the oboe and saxophone, with their conical bores, generate weak fundamentals with much stronger second and third harmonics.
If the phase relationships of the fundamental and harmonics are altered, everything from perceived pitch to timbre can change. This is true whether the changes stem from construction of the instrument or from distortion in recording and/or playback. For example, the study linked above identified clearly that phase differences in harmonic structure were associated with clear differences in sound between two oboes.
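The "missing fundamental" effect described above is easy to demonstrate numerically. This sketch builds a tone from harmonics 2 through 6 of 200 Hz and shows that, although the signal contains no energy at all at 200 Hz, its waveform still repeats every 1/200 second – the period our ears lock onto as the pitch.

```python
# Sketch of the "missing fundamental": a tone built only from harmonics
# 2-6 of 200 Hz has no energy at 200 Hz, yet the waveform still repeats
# every 1/200 s, which is why we hear a 200 Hz pitch.
from math import sin, cos, pi

FS, F0, N = 48000, 200, 4800            # 20 full cycles of 200 Hz

# Partials at 400, 600, 800, 1000, 1200 Hz; the 200 Hz fundamental is absent
x = [sum(sin(2 * pi * F0 * h * n / FS) for h in range(2, 7))
     for n in range(N)]

def bin_mag(x, freq):
    """Magnitude of the DFT bin at `freq` (freq must align with a bin)."""
    k = round(freq * len(x) / FS)
    return abs(sum(x[n] * complex(cos(2 * pi * k * n / len(x)),
                                  -sin(2 * pi * k * n / len(x)))
                   for n in range(len(x))))

def circ_autocorr(x, lag):
    """Circular autocorrelation at an integer sample lag."""
    return sum(x[n] * x[(n + lag) % len(x)] for n in range(len(x)))

period = FS // F0                        # 240 samples = 1/200 s
print(f"energy at 200 Hz:        {bin_mag(x, 200):.6f}")   # essentially zero
print(f"autocorr at full period: {circ_autocorr(x, period):.1f}")
print(f"autocorr at half period: {circ_autocorr(x, period // 2):.1f}")
```

The autocorrelation peaks at the 200 Hz period even though no 200 Hz component exists – the periodicity is carried entirely by the harmonics, just as the double-reed example above describes.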
IMPEDANCE AND TONE
Impedances and impedance matching are major considerations in the design, construction, componentry, and performance of our systems. So it shouldn’t surprise anyone that impedance is a major factor in the mechanical generation and shaping of musical tones and timbres. The study linked above (and others like it) found that impedance matching between the source of the sound and the instrument, plus the matching among the various components of the instrument itself, has a very profound effect on how the instrument sounds and how it is played. For wind instruments, the vibrating element is either the lips or a reed of some kind plus the mouthpiece that carries it (if there is one – double reeds have no mouthpiece). A large impedance mismatch between the mouthpiece and the bell suppresses fundamentals and makes some notes more difficult to play.
Similar impedance-coupling problems occur in stringed instruments. Consider the example of a piano or guitar string. It's easily set in motion by a blow from a small felt-covered hammer or a gentle stroke of a fingertip because it has low input impedance. Sadly, that poor little string has too little mass and surface area to move much air by itself, even though it's vibrating as hard as it can. Think of it as the phono stylus / cartridge / preamp. The soundboard in a piano and the top on a guitar are the power output stages; they're the engines that form still air into sound waves powerful enough to be heard by our ears and captured by a mic. But the energy from that thin little string has to get into the wood, which acts as an amplifier because it has so much more surface area.
The impedance mismatch between string and wood impedes the transfer of the string's energy. So there are crude mechanical "transformers" at work to ease the strings' task of making all that wood vibrate. Both ends of each string are secured to the wood by devices whose design directly affects energy transfer: bridge pins anchor one end, tuning pegs hold the other, and the strings sit in close-fitting grooves at both ends, in the nut and the bridge. Better instruments have better design, materials, and construction to aid energy transfer. Pianos use multiple smaller strings in the higher registers and multi-layer wrapped strings in the lower registers to balance energy flow across the range of the instrument. And the scale design (length, tension, and number of strings) varies by maker, each using their preferred combinations in an effort to transfer the right amount and spectrum of energy to get the sound they seek.
This is less of a problem with the violin, because the top has lower mass and the strings are relatively more powerful than in a piano or guitar. But impedance matching is still an issue, and the sound is sensitive to the couplings between string and instrument. The nut (where the strings cross from fingerboard to headstock) can be bone, horn, ivory, plastic, etc., and bridges vary greatly in thickness, hardness, and other properties.
Along with their phase relationships, variations in the strength and frequency of harmonics can affect the perceived fundamental pitch. These variations, most clearly documented in the piano and other stringed instruments but also apparent in brass instruments, are caused by a combination of the stiffness of the vibrating material (e.g. steel strings) and the interaction of the vibrating air or string with the resonating body of the instrument. Remember that inharmonic content (i.e. frequencies that fall outside the natural harmonic scheme of a note) is generated to some degree by many of the structural components of an instrument, e.g. the metal body of a saxophone or the many wood and metal parts in a piano. Even the damped strings not being struck in a piano generate some sonic output that is inharmonic with the undamped notes being played. All of this gives each instrument its unique sound.
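The stretching of a stiff string's partials is commonly described by the formula f_n = n·f1·√(1 + B·n²), where B is the string's inharmonicity coefficient (zero for an ideal, perfectly flexible string). Here is a quick illustration in Python; the value of B is a plausible but made-up number for a mid-range piano string, not a measurement:

```python
import math

def stiff_string_partial(n: int, f1: float, B: float) -> float:
    """Frequency of the nth partial of a stiff string.
    B is the inharmonicity coefficient (0 for an ideal string)."""
    return n * f1 * math.sqrt(1 + B * n * n)

# An ideal string's 10th partial sits at exactly 10x the fundamental;
# a stiff string's (B = 0.0004 here, an illustrative value) is sharp.
ideal = stiff_string_partial(10, 220.0, 0.0)
stiff = stiff_string_partial(10, 220.0, 0.0004)
print(ideal, stiff)   # 2200.0 vs ~2243.6
```

That sharpening grows with partial number, which is one reason piano tuners "stretch" octaves and why a piano's upper harmonics never line up perfectly with its fundamentals.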
THE SONIC ENVELOPE: ATTACK, DECAY, SUSTAIN, RELEASE
This is an essential element of the sound of any instrument. The envelope describes how a note starts, how long it is heard, how it changes while audible, and how it stops. Every note from every instrument has an envelope, even if one or more of its components is attenuated. A plucked string has a sharp rise on the attack. A damped string has a rapid decay, a short sustain, and a quick release. Piano notes may have a long, steady sustain if the sustain pedal is depressed, or decay rapidly if the key is released immediately with the pedal up. When the horn section in Tower of Power hits an accent, the attack is very steep. The classic waveform of an envelope is at the right. Below are typical envelope graphics for the clack of a stick on a wood block and for a single piano note that's struck, held, and released (although the sustain would taper off slightly on a real piano):
Any of these elements can be distorted anywhere in the recording and playback chain. Getting the attack right requires crisp, controlled transient response. If the transient is a huge one (e.g. the cannons in the 1812 Overture, or even some seriously plucked notes from Brian Bromberg's bass), the power amp has to be able to pump enough peak power into the speakers, and the speakers have to trace the waveform without overshooting the peak or bouncing at either end of their excursion. If the levels aren't right, the transition from attack to decay may be blunted or clipped. Any distortion of the harmonic structure of the waveform will affect its sound (and thus the accuracy of reproduction).
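If you'd like to experiment, an ADSR envelope is easy to sketch numerically. Here's a minimal piecewise-linear version in Python with NumPy; real instruments and synthesizers use more complex curves, and the parameter values below are illustrative only:

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, sr=44100):
    """Piecewise-linear ADSR amplitude envelope; times are in seconds."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)           # rise
    d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)  # fall-off
    s = np.full(int(sustain_time * sr), sustain_level)                    # hold
    r = np.linspace(sustain_level, 0.0, int(release * sr))                # fade
    return np.concatenate([a, d, s, r])

# A pluck-like envelope: very fast attack, quick decay, modest sustain.
env = adsr(attack=0.005, decay=0.1, sustain_level=0.3,
           sustain_time=0.2, release=0.05)

# Apply it to a bare sine wave to make an audible "note".
tone = env * np.sin(2 * np.pi * 220 * np.arange(env.size) / 44100)
```

Stretch the attack, flatten the decay, or lengthen the release and the same 220 Hz sine stops sounding like a pluck and starts sounding like an organ pipe, which is exactly the point: much of an instrument's identity lives in the envelope, not the pitch.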
It’s easy to demonstrate the importance of an accurate envelope in reproduction with this little Bach piece played on a harpsichord. When it reaches the end, do not close it; it will play backwards, reversing the graph above so that the attack is now a mirror image of the release, the release is a mirror image of the attack, and the events during the sustain are reversed. It sure doesn’t sound like a harpsichord when played backwards, does it?
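The backwards-harpsichord trick works because reversal flips the envelope while leaving the frequency content alone. Here is a tiny Python/NumPy demonstration of the same idea, using a synthetic "pluck" (instant attack, exponential decay) instead of the actual recording:

```python
import numpy as np

SR = 44100
t = np.arange(SR // 2) / SR                      # half a second

# A harpsichord-like pluck: instant attack, exponential decay.
note = np.exp(-6 * t) * np.sin(2 * np.pi * 330 * t)

# Playing it backwards just reverses the sample order.
backwards = note[::-1]

# The magnitude spectrum is unchanged by reversal...
same_spectrum = np.allclose(np.abs(np.fft.rfft(note)),
                            np.abs(np.fft.rfft(backwards)), atol=1e-6)

# ...but the envelope is mirrored: the note now swells in and stops dead.
print(same_spectrum)                             # True
print(np.abs(note[:100]).max(), np.abs(backwards[:100]).max())
```

Same pitch, same spectrum, utterly different instrument to the ear. That is how much the envelope matters.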
The other important fact to consider about the envelope is that it is affected by the mechanical and electronic events that produce the note. The attack of a note can be facilitated by a smoother, more responsive action of valves, keys, hammers, oscillators, etc. This is why Steinway has modified its action many times over 150 years. But attack can also be hampered by less facile mechanisms or players. Pressing the keys on one of Bach’s organs was much harder than it is on a modern organ. The attack and rise time of each note were longer and slower, both because the keys and actuating mechanisms were sluggish and because the air supply to the pipes was limited compared to a modern compressor’s.
Think about all the places in the signal chain in which something could affect the sonic envelope. It’s frightening and it’s reality. Further, although we do measure some of the factors above, they are not thought of as affecting the actual sounds of individual instruments. I assure you that they do.
CAN WE REALLY MEASURE ACCURACY?
We can, indeed, measure accuracy to the extent that we can identify and measure every component and interaction in a process. Of course, that completeness remains elusive in audio. There are many measurable objective parameters we can examine – we just don’t know how to relate many (most?) of them to what we hear, or whether and why audiophiles prefer more or less of anything. And preference is the name of the game. There was a poll on this site about a decade ago called “What is Your Preferred Sound Signature?”. Here are the results:
The responses cover a broad range of thoughts, opinions, and attitudes. Some of the more pertinent comments include:
- What I have tried to have is a system that makes music emotionally involving for me, that touches my heart and soul
- I want my rig to have no sound signature at all, and so I do not personally understand this longing for body or sweetness
- natural (as would be heard at a live acoustic event in a good location i.e. with not [too] little or too much reverberation)
- If it makes my feet bounce and the hair on my arms and neck stand on end, the[n] it is alright in my book. However it has to sound to make this happen is inconsequential
- IMO, all of these 'flavorful' and such descriptors have no reference and are pretty open to interpretation as it applies to music rendering them useless
- How many of us would prefer a live venue to a good recording if everything else was equal and we didn't know if it was the one or the other[?]
- I like an analog sound not too bright. There does not seem to be a category for my kind of sound so I did not vote.
- Well rounded but not fat
Only one respondent used the word “accurate” to refer to a specific sound signature. Only one used the term “real” or any variant thereof, and it was used not to define the poster’s preferred sonic signature but to describe the sound of an audio dealer’s system:
- “The store owner's personal signature demo rig left me with a sense of realness that i sensed i may not quite ever hear again”
It’s well worth reading the thread that accompanies it, if you don’t remember the discussion. The overall impression that poll and thread left with me is that even many dyed-in-the-wool audiophiles avoid the issue of realism vs accuracy, taking refuge instead in how audio reproduction makes them feel. This is not a value judgment – I’m as emotional as anybody about music. I can’t help breaking into a smile when I hear something that really moves me, and I always have. In fact, I clearly remember feeling a huge smile the first time I saw Oscar Peterson on TV when I was a kid – I was literally grinning like the Cat in the Hat. So this is just an observation on the difficulty of assessing realism, accuracy, and their interactions. It’s also an observation on how many audiophiles use emotion before analysis to choose and operate their audio systems. This is wonderful. I, like many of you, am an audio flâneur – and proud of it!
SOME FINAL EXAMPLES FROM THE REAL WORLD
Here are a few comparisons of performances and sounds that many of you have in your libraries. The distinctive sounds of great players and their favorite instruments are truly unique, and those below are seminal players making some of the best music in history. Given endless space, I could fill 10,000 words with comparisons of legendary classical, folk, and traditional players as well.
These YouTube videos do not carry high-resolution audio, and they were all made in different eras with different equipment. Despite this, the players are identifiable by their distinct sounds. The differences are easier to hear and appreciate on better recordings, but even on these examples you should be able to hear clearly the essence of each artist’s sound. At the very least, for example, you should be able to recognize Paul Desmond on most recordings, played back on most audio systems, purely by his tone and timbre.
Here’s Freddie Hubbard playing Autumn Leaves on a tribute album to Miles Davis. And here’s the original by Miles. Hubbard is almost certainly playing his Calicchio trumpet (or a similar instrument) with the large bore and huge bell that he preferred from mid-career. Davis played Martin Committee trumpets for his entire career, specifying a graduated bore (medium to large). They both used a Harmon mute with the center cup removed. But Miles’ mute was aluminum and Hubbard’s was copper. The copper mute has a bigger, fuller sound, while the aluminum version has a thinner sound (more focused on the highs) that helped shape Miles’ classic muted tone. Despite the differences in recording quality, you can easily hear the fatness of Hubbard’s tone in contrast to Miles’ more delicate timbre.
How about some safe sax? Compare the sound of Cannonball Adderley on Autumn Leaves (the same track I used above to contrast Miles with Freddie Hubbard) to Paul Desmond’s sound on the same song. And for yet another distinctive sound on the same song from one of my favorite musicians, check out Art Pepper’s version.
And finally, here’s a comparison of sounds you know well if you’re into pop, rock, or jazz – the sounds of a fretted Fender Precision bass, a fretted Fender Jazz bass, and a fretless electric bass. The most distinctive difference here is that a fretless bass has a definite bark to its sound. Both of the fretted basses here have the normal attack of an electric bass. But the Precision has the rounder sound typical of old-school electric basses, while the Jazz model has more wood and brilliance in its more modern tone. James Jamerson’s Precision bass drove Motown’s greatest hits. Jack Casady’s Jazz bass kept Jefferson Airplane in the groove, and Bootsy Collins played a Jazz bass when he was with James Brown. And look no further than Jaco to hear what a fretless can do. (Yes, strings matter too!)
THE BOTTOM LINE
The character and soul of a musical performance are determined by many things: the composition being played, the player(s), their instruments, the setting, the audience, and even the environmental conditions at the time. Recording a performance accurately raises the question of how to define accuracy. Trying to capture a performance exactly as it sounded to the audience need not be the goal. In fact, it can be a fool’s errand to insert a symphony orchestra into a living room and place it 50 feet behind the back wall, even if that’s how it sounded in the concert hall.
As we discussed in the last article in this series, the sonic image we see and hear in a concert is often nowhere near as precisely defined as it is in a recording of the performance. We can’t localize each and every instrument on the live stage, even though such precision is often engineered into recordings. And this is fine, as long as we recognize that it was created, rather than captured. In fact, as we learned from many of the responses to the previous article, many audiophiles prefer the immediacy and precision of such engineering.
But… we really need accuracy in the reproduction of the sounds and interactions of the instruments themselves to get playback that’s both realistic and true to the music. Now you know a lot more about the origin and nature of the sounds of musical instruments. I hope you’ll dive into your libraries with newfound verve and start listening even more critically for the distinctions I describe.
I also hope that you’ll soon be able to go out and hear live music again. It doesn’t matter where you do it or who’s playing what. The more live listening experience you get, the more you’ll understand and enjoy the music you love now. And with the understanding above, you’ll be able to expand your experience and grow your knowledge of the true sound of music and the instruments that create it. As you do, you’ll find yourselves tweaking, adjusting, and updating your systems to bring out these essential characteristics.
Accuracy in imaging is often artificial. If it’s skillfully done in a realistic way, it can really enhance the pleasure of home listening. But pure sonic accuracy is essential, and you can’t know how close you’re coming without knowing how it’s “supposed to sound”.
THE NEXT INSTALLMENT
Next, I’ll give you a look into some of the common engineering techniques and tricks used to shape what we hear in a recording. I’ll give you examples and show you how you can try these things at home with Audacity. Many of these were new to me, and some are amazingly effective. Enjoy!