bluesman


     

     

    REALISM VS ACCURACY FOR AUDIOPHILES:

    “DOCTOR, CAN I PLEASE GET A PRESCRIPTION FOR BETTER RECORDINGS?”

     

    In the first article in this series, we discussed realism, accuracy, and reasonable expectations of each for audiophiles.  The second part examined the real sounds of musical instruments, along with their variance, susceptibilities to alteration and distortion, and how these factors can affect the accuracy and realism of recorded performance.  So you already understand some of the differences between realism and accuracy.  You know enough about how and why the “same” instruments can and do sound so different from each other, and you understand that the specific sonic characteristics of a given instrument can be essential to a composer, a performer, a band, a venue, a recording, and a consumer.  You’re allowed & encouraged to have your own sonic preferences – just learn their limits & enjoy!

     

    This article contains a lot of information that’s probably new to many of you.  Although I think it will give you a much deeper understanding of the subject, you do not have to do the DIY project or any of the other things I describe – I strongly believe (and hope) that you’ll get a lot out of reading it whether or not you get your hands dirty.

     

Having said that, we’ve reached the part where you can get your hands dirty!  This piece will discuss how the signal is managed at every step of the way, from performance to the microphones, consoles, monitors, mastering, etc., and all the way to your ears and brain.  Along with the discussions are DIY projects that let you make some of these things happen on your own computer with Audacity (or the DAW / sound editing program of your choice) and either your own sound recordings or some simple instrumental files I’ve made for you (with links in the appropriate sections below).  I’ll take you through the first one step by step, and there are links to the rest so you can do them on your own as you wish.  I’d love to do them all with you, but you’ll soon see how much work just this one took.

     

    We’re now going to look at how the sounds of musical instruments (which, as I said in the first article, include the human voice - the original musical instrument and a perpetual and essential contributor to music of all kinds) can be edited, adjusted and optimized to create a musical archive we can play and enjoy at home whenever we want to hear it.   I used the term “manage” deliberately, because almost all definitions of the word seem to me to characterize the systematic capture, optimization, conversion, storage, and presentation of a musical performance, to wit:

     

    • “the coordination and administration of tasks to achieve a goal”
    • “[to] accomplish...objectives through the application of available resources, such as financial, natural, technological, and human resources”
    • “a set of activities directed at the efficient and effective utilization of resources in the pursuit of one or more goals”

     

    The goals that guide the creation of the recordings we play are most often set by some combination of the stakeholders, primarily composer, performer(s), conductor, producer(s), engineer(s), A&R staff, and financial sponsor(s).  I left us out because it often seems that those who make and sell recordings think they know what we want better than we do.  No matter how clear the original goals may be, they’re often modified out of necessity during production by one of a number of things, including

     

    • incompatibilities between the resource base and the original plan
      • running out of time to deadline
      • running out of money (common)
      • being given more money (uncommon)
      • gaining or losing access to scarce resources, e.g. a coveted engineer, studio, etc
      • loss or addition of key people to the team during the project
    • technical limitations on the vision of one or more stakeholders
      • “You can’t make the bass seem to come out of the chandelier!”
      • “You can’t make an iPhone sound like a Marshall stack!”
      • “No one can possibly play that part!”
        • See the history of Steely Dan for classic examples of this problem.  They often had to use multiple top studio players to “assemble” one solo exactly as they wanted it.  There are many examples of solos played in part by Jeff Baxter, Elliott Randall, Larry Carlton, Dean Parks, and/or Denny Dias.
    • disagreements among the stakeholders leading to a shift in or abandonment of project goals

     

     

    MAKING GOOD RECORDINGS IS A TEAM EFFORT

     

    Cliché or no cliché, it really does take a village to turn a musical performance into a playable, enduring, and enjoyable recording.  Here’s an excellent and rather philosophical description of the setting, roles, and interactions inherent in the process of recording music.  Written by Susan Schmidt Horning and published in the journal Social Studies of Science [Vol. 34, No. 5, Special Issue on Sound Studies: New Technologies and Music (Oct., 2004), pp. 703-731], here’s the abstract to whet your appetites:

     

    “The recording studio [is, among other things] the site of collaboration between technologists and artists, and this collaboration is at its best a symbiotic working relationship, requiring skills above and beyond either technical or artistic, which could account for one level of 'performance' required of the recording engineer. Described by one studio manager as 'a technician and a diplomat', the recording engineer performs a number of roles - technical, artistic, socially mediating - that render the concept of formal training problematic, yet necessary for the operation of technically complex equipment.”

     

    The full article is both beautiful and well worth reading.  Use the link above to access it on JSTOR, which is a wonderful academic online database and library to which you can gain free access to 100 articles a month (and unlimited access if you have an academic affiliation with an account).

     

As Horning and many others describe and document, there has always been more than one cook at the broth when recordings are made.  And there’s almost always a team behind the effort.  The vacuum tube was invented in 1906, with many contributing to the subsequent development of the electronics that powered recording and playback of music for the next 75 years.  There were recording teams at Bell Labs, Western Electric, Columbia, and Victor working hard (and, often, together) to bring the art of recording voices and music to the highest level.

     

    Then, as now, the main concern we as audiophiles have about members of the teams that make recordings is how sensitive, discerning, and skilled they were / are at capturing a raw recording and turning it into a finished product worth hearing (and buying).  I’m not going to get into what I think makes a recording worth hearing and buying.  Even though many of us agree on the factors that determine that, each of us has a personal value set and there’s no right or wrong.

     

    I’m going to discuss some commonly used tools and tricks of the trade, and I’ll show you how to experience a few of them yourselves with simple audio files on your own computers.  But whether or not you find them beneficial in the recordings you prefer is out of bounds for this article.  You like what you like and I’ll like what I like – there are no value judgments in this piece.  We can all get along just fine that way.

     

     

    THE FAULTY CARPENTER BLAMES HIS TOOLS!

     

    There are so many available alternatives for every single item in a recording studio that it’s a fool’s errand to blame any given hardware or software for something you don’t like in a commercial recording made by any decent team in an appropriate setting.  This is also true for your own systems.  A modest system that was optimally selected, set up, and operated is most often a better choice for audiophiles than an assemblage of high end equipment chosen on the basis of other people’s opinions without audition (i.e. that you never actually heard, either with your own equipment or as a complete system, in your listening environment, playing your choice of reference source material).

     

So I’m not going to get into which consoles are best, how to choose DAW software, what pickup patterns are best for stereo mic’ing etc.  We’re going to discuss what engineers (and you) can do to a recorded music file to alter it in positive ways - and how the same methods can make a recording sound better or worse to some listeners in some systems under some circumstances.

     

    All of the DIY parts of this article are designed to be done with a simple audio recording and editing program.  For Windows users, Audacity is far and away the most popular one, with its long and consistent record and its user friendliness compared to most DAWs.  Keep in mind that it’s “only” a recording and editing program, so it lacks many features needed for music creation and production (e.g. plug-in instruments that can be played with a MIDI keyboard or other controller directly into a recording).   But it’s very versatile for simple recording and for editing existing files, with full plug-in capability apart from actual instruments.  And you can download many MIDI instruments from other sources as stand-alone apps, routing the instruments’ outputs to Audacity inputs and recording the actual audio output of the instrument in audio tracks (rather than MIDI tracks, as is done with plug-in instruments).  One of my favorites is the CollaB3, an excellent freeware emulator of the classic Hammond B3 and Leslie cabinet.

     

    Audacity is an open source download that comes ready to use, with a large set of DSP plug-ins and a pretty consistent GUI across all platforms.  I’ve run it on Windows, Linux, and MacOS with equal success.  It’s the recording package I use on my hot rod Raspberry Pi portable DAW, and it’s on every media computer I use regularly.  You can do the DIY things in this article with other apps as well, e.g. I use Ardour in my studio for multitracking with MIDI instrument plug-ins and on a Raspberry Pi for some location recording.  But Audacity is almost certainly the best choice for audiophiles who just want to learn a bit more about how recordings are engineered and why.  Audacity is also my tool of choice for ripping vinyl.

     

    If you’re on Linux, you might consider Ardour for your DAW.  It’s more capable than Audacity as a DAW, and you can add a plug-in called JAMin that will let you do some pretty fancy mixing, processing, and mastering.  If you’re a Mac person, you already have GarageBand (which is also a pretty fine DAW).  If you want to research options beyond these, just search DAW on the web and you’ll find a ton of free and reasonably priced packages for the OS of your choice.  And if you buy an inexpensive USB audio interface, it will probably come with a “lite” version of one of the many popular recording packages like Ableton, ProTools, PreSonus, or Cakewalk.

     

     

    THE FULL SCOPE OF AUDIO RECORDING FOR AUDIOPHILES

     

    If you’re interested enough to download a DAW and try some or all of the projects in this article, you may want to dive deeper into the world of digital recording.  You can have a lot of fun and maybe even make a few of your digital audio files sound better to you with something you learned here.  The spectrum of recording activity for the audiophile obviously starts with making your own recordings of live music, and that’s quite do-able these days.

     

    Many venues and artists permit the recording of performances, especially if it’s only audio.  Know the law and the rules where you are.  Ask the performer(s) if it’s OK with them for you to record their music for your own personal use.  If you promise you won’t sell or distribute it, keep that promise!  Apart from being an affront to the artist(s), selling such a recording or posting it to public media gores multiple oxen.  The artist(s) have a right to control and profit from their work.  The content creator(s) have the same right – remember that someone wrote the music you’re capturing.  Then there are those who support the artist(s) and may have a contractual right to a share of any returns.  So be respectful and be reasonable.

     

    As an active professional musician, I can tell you that many of us are happy to let you record our performances and ask only that we receive a copy.  You can find many “authorized bootlegs” of performances at the club in which I’m the house band leader on YouTube and Facebook going back over a decade.  The club owner (who’s also our blues band’s bass player) is happy to get the publicity, and we’ve downloaded many of our performances.  The club belongs to a music clearing house and pays monthly fees for performance rights – that’s why clubs have little signs that say they belong to ASCAP, BMI etc.  Those fees go to the composers and copyright owners of every song we play that we didn’t write.  But they do not cover a third party’s commercialization of the same performances and material.

     

    You can buy an excellent stereo digital recorder for $200 or less that will fit in your pocket and make excellent digital files up to 24/192.  They have integral microphones and are very easy to operate. Sony and a few others go even further and record directly to DSD files.  The Sony with which I’m most familiar lists for about $750, so it’s for more serious users.  But you can make pretty fine recordings with even the least expensive Zoom pocket recorder in clubs, auditoriums, parks, etc.  Many in the $200-300 tier record 4 simultaneous tracks, so you can mix and master to some degree and end up with some very listenable recordings.  The $330 Zoom H6 records 6 tracks simultaneously and even has 4 combo XLR inputs plus a stereo mic module on top.

     

[Images: pocket-sized stereo digital recorders from Zoom and Sony]

     

    Or you could do what I often do - use Audacity on a Raspberry Pi with a tiny paired stereo mic (like the ones on top of the Zoom and Sony recorders above) and a portable USB power pack.  The Pi and the power fit in your pocket, and the mics can sit on a table top or clip onto your hat or jacket.  You can use a mobile phone or tablet as a display.  See here and here for more on using an old mobile device as a Pi screen.  I use a cheap and simple video capture adapter and connect via an OTG USB cable.

     

    I also use RealVNC to control my Pi from an Android tablet while recording remotely.  This requires a network connection, but you can turn your Pi into a WiFi hub (see here) and connect the tablet directly to it using that WiFi network.  And there’s yet another novel way to use an Android device as a Pi display – use a screen mirroring app like scrcpy (e.g. see this article).  I haven’t tried this yet, but it seems simple and promising.

     

Making and working with audio files is great fun for many (including me).  If it appeals to you, adding your own little recording and production studio to your audio system is easy and well worth the low cost and effort it takes.  You can carry a tiny and excellent digital recorder with you like a camera to record music, just as you carry a camera or use your phone to capture images on the street.  If you plan to rip a lot of analog music to digital (e.g. open reel tapes, vinyl), set up a program like Audacity on your audio computer, plug in a USB interface, and you’re ready to go whenever the need or desire arises.   Here’s a good web article on building a fairly comprehensive home recording studio from scratch for about $1000, if you’re interested.

     

     

    HERE’S THE BEEF, CLARA!

     

Let’s start with a few simple, well recorded clips that we can process together.  You’ll see how each is done and hear the effect(s) as we go through the first project a little further on. (Right click each link to download).

     

    1. a multi-tracked, unprocessed studio recording clip of 3 acoustic instruments (link)
    2. the full master file I made of the song for which #1 is the backing band (link)
    3. the new master I just made with mid-side processing for this article (link)

     

     

You’ll need an audio editing program.  Most modern DAWs come with the plugins and filters needed for these maneuvers – I’ve used Ardour, Ableton, Cakewalk, and many others with equal facility and success.  But if you don’t already have a program like one of these on your computer, click this link to download and install Audacity.  If you’re a Linux user, you can almost certainly install it from whatever repository is linked to your preferred flavor of Linux.  You can install it with the resident package managers for Ubuntu, Mint, Raspberry Pi OS, and many, many others.  On the off chance that you’re running a Linux distro in which there’s no PPA from which to install it, the source code is downloadable from the same link.  You’ll have to compile it to use it, which is probably not a problem for anyone willing and able to use an exotic distro.

     

Although it cannot use plug-in instruments that you can play with a MIDI keyboard, Audacity does have fairly complete plug-in capability limited only by your OS and Audacity version.  The latest version (3.0.2 as of May 26, 2021) can also use standard VST plugins, and I’ve confirmed that with dll files placed in the plug-ins folder.  But we’re going to go through the exercises in this article with minimal plug-ins, in order to give you a better idea of what goes into each step and how the processes work.  It’s a lot easier with plug-ins, if you decide you want to do more of this.

     

Nothing we’re going to do requires a sophisticated audio interface, special equipment, or sophisticated software.  You should be able to hear all of these effects and results through half decent headphones driven by your MOBO’s audio out jack.  You’re much better off with even an entry level USB audio interface, and a lot of the subtlety is a bit better appreciated with a decent DAC and ‘phones or speakers.  In truth, given the fact that many of these tricks are used to make music sound “better” through earbuds and OEM car stereos, you can probably hear them all through quarter to half decent transducers of any kind.  But you can learn more about available audio interface options about halfway through my January 2021 article on basic recording with a Raspberry Pi.

     

    Read the user manual for the audio editing program of your choice, so that you know what plug-ins etc are available to you and how to access them.  If you’re new to Audacity (or just a casual user ready to get more serious with it), use this link to find some great documentation.  Just so you know I’m not making all this stuff up, there’s also a comprehensive reference list near the end of this article.  For starters, you might want to review this article about basic knowledge and skills in audio engineering.

     

    For those who don’t like Audacity for some reason, there are many other programs that will do the same things (and even a bit more).  If you’re feeling even the slightest bit more adventurous, try WavePad from NCH.  Even the demo / limited function edition does a lot with full VST plugin use.   WavePad even has a cool surround sound function that lets you make multichannel audio files from any source.  Like Audacity, it cannot use VST MIDI instruments directly - but with MixPad (another NCH program you can download) you can add VST instruments and play them on a MIDI keyboard or a virtual keyboard in the GUI.

     

     

    LET’S GET STARTED

     

    As we embark on what’s probably a new concept to most of you, keep in mind that an excellent raw recording will always sound better (at least to most of us) than a poor one that’s been processed, no matter what you do to it and how good you are at it.  If the basic SQ is good, you can make many recordings better – but if it’s poor, the best you can do is make it less bad.

     

    We’re going to learn how the pros process audio files.  I’m only touching the surface of this topic – there are so many things to do and so many ways to do them that even the best, most senior people in the field continue to learn.  Think of this article as a template for processing audio files – if you follow the flow and use the right equipment and software, you can apply most processes in the same general way.  Reading this article and following the mid-side processing project template will help you

     

    • develop a basic fund of knowledge on which to build your own experience and judgment
    • learn about “mid-side” decomposition and try it yourself
    • learn how engineers make recordings sound better (or just different, depending on what you like)
      • using EQ
      • using compression
      • using delay
    • learn how to improve the quality of vocals
    • learn how to make better mixes
    • learn how to “adjust” soundstage dimensions
    • learn how to change the apparent “size” of an instrument or vocal image
    • learn how to change the overall character of instruments
    • learn how to increase perceived loudness without increasing actual playback SPL

     

     

    MID-SIDE DECOMPOSITION

     

    We’ll start with a step by step tutorial on creating and applying mid-side decomposition to a simple 3 instrument stereo master file.  I put links to 3 files at the beginning of this section under the “Here’s the Beef” heading.  If you didn’t grab them above, here they are again:  (Right click each link to download).

     

    1. a short stereo clip from a multitracked studio recording of 3 acoustic instruments  (link)
    2. the full master file I made of the song from which the first clip was taken (link)
    3. the mid-side master I just made while writing this article  (link)

     

    What is mid-side processing?  Here’s a nice definition from the Platinum Audio Lab blog:

     

    “MS processing is...another tool you can use to add depth to, or clean up, your mixes and in practise it is just another way to apply the fx and mixing routines you are allready familiar with. MS processing at its base is simply a different way of splitting up a stereo signal. Ordinary stereo signals are split between a left and right channel, whereas an MS processor takes a stereo signal and splits it between the sum and difference channels.

     

    The sum channel would be any audio signal which is equivalent in both the left and the right channel, or in other words, the mono audio material which is dead center in your stereo field. The difference channel would be all other audio content. The terms “sum” and “difference” are just another way of understanding “mid” and “side” processing.”

     

    What can you do with mid-side decomposition?  It’s so commonly used that I’m tempted to say “almost anything”.  Just heed the warning in the Platinum blog:

     

    “As with anything else, it can very easily be misused and abused. Even though it is a simple tool at its core, if you are not careful with how much of it you use, you run the risk of making your mixes sound worse even though it appears you are solving the original problem. As always, a little bit goes a long way.”

     

    Here are a few examples provided by Sound on Sound:

     

• You’re very happy with your featured vocalist, but the backing vocals are dull and lifeless. Applying EQ to the entire master would adversely affect the main content.  But by putting a little high-end boost into the Sides channel only, you can brighten up the backing vocals with little or no change to signal content that sits mostly in the Mid channel.  Judicious EQ on the side channel only can also add air to the sonic presentation.
• A high-pass filter in the side channel can add more focus to content in the Mid channel.  All the lows you want in the recording are still in the Mid channel, but unwanted low frequency content is attenuated (there’s a code sketch of this one just after this list).
    • A notch filter in the Mid channel only can be more effective than notching the entire signal.
    • Try putting a compressor in the Sides channel and listen to what happens to the reverb. As the compressor kicks in on louder sections, the ambiance will be less prominent. If you want a more subtle effect, try parallel compression.
    • If you want to clean up the middle frequency range of a mix, try compressing the Mid channel of the feed going to the reverb. This can work particularly well when you're using a subgroup with its own dedicated reverb, for instance for drums.  A light hand works best.
    • For a less dramatic reverb, try using it in the middle of the M/S chain and reducing the amount of the Mid signal going to the reverb.  This way, only the difference signal excites the reverb.
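To make the high-pass example concrete, here’s a minimal Python sketch (the source file name and the 120 Hz corner frequency are illustrative choices of mine, not a recommendation):

```python
# Sketch: high-pass the side channel only, leaving the mid channel untouched.
# Requires: pip install numpy scipy soundfile
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("master.wav")            # stereo: shape (samples, 2)
left, right = audio[:, 0], audio[:, 1]
mid, side = 0.5 * (left + right), 0.5 * (left - right)

# 2nd-order Butterworth high-pass at an illustrative 120 Hz corner.
sos = butter(2, 120, btype="highpass", fs=rate, output="sos")
side = sosfilt(sos, side)

# Recover left/right from the untouched mid and the filtered side.
out = np.column_stack((mid + side, mid - side))
sf.write("master_tightened.wav", out, rate)
```

All the low frequency content panned to the center survives intact in the mid channel; only the lows that differ between left and right are attenuated.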

     

    Mid-side processing is often used for stereo image processing.  Whether the goal is perceived enhancement, correction, or other sonic effect, mid-side is close to a  default approach for many recording and mastering engineers.  Once the basic decomposition into mid and side signals is done, application of many of the techniques about to be described can effect major perceived changes in SQ, imaging, and even the sounds of individual instruments and voices.

     

Much of the content of a pair of stereo channels is identical except for amplitude differences resulting from panning.  Placement across the sound stage may be physical (i.e. voice or instrument position relative to microphone array) or electronic (i.e. electronically increasing amplitude in one side relative to the other), depending on microphone technique, recording approach (stereo vs multitrack, live vs overdubbed, etc), and other factors.

     

    The rest of the content of each stereo channel is different from the other side, and this is where much of the spatialization cueing occurs.  The dichotomies between channels include phase, delayed / reflected vs direct content, frequency spectrum differences (e.g. from absorption, reflection etc), and even the sound of the same instrument when recorded from two different perspectives.  For those of you who didn’t read or don’t remember my earlier article on soundstaging, this diagram shows typical reflective patterns for 3 different instruments.  Keep in mind that these radiation patterns are 3 dimensional and that the zones of radiation (which are not quite as sharply demarcated as depicted in these images) extend laterally in addition to the monoplanar illustration below. Microphone placement within the radiation pattern of an instrument can have audible effects on its tone and timbre in the recording:

     

[Diagram: typical radiation patterns for three different instruments]

     

     

    There’s a lot of research into the sonic spectral radiation patterns of instruments, with some serious effort expended on understanding the grand piano.  A very nice study by Roginska et al (High resolution radiation pattern measurements of a grand piano - The effect of attack velocity.  JASA 2013) demonstrates how complex the radiation patterns are for the simple striking of middle C, to wit:

     

    “The sound radiation pattern of a grand piano is highly complex and depends on the shape of the soundboard, construction of the frame, reflections from the lid and other parts of the instrument's structure. The spectral energy generated by and emitted from the instrument is further complicated by the sound production mechanism (hammers, strings), the attack velocity, and results in independently complex behaviors depending on the register of the piano.”

     

    And that doesn’t even begin to address the many other issues like whether the top is off, open, or closed, proximity and boundary effects, where the mic(s) went relative to the ends, sides, top, and bottom of the piano, etc.  So the same instrument may (and often does) sound grossly different in each stereo channel.  This is, in fact, part of the image dimension dynamic because the slight differences in phase and frequency spectrum add width to the instrument’s sound.

     

    I often do this when playing solo jazz guitar – I use two amplifiers both driven by the same guitar signal, with very slight delay on each one.  Because different brands and models of guitar amplifier use different kinds of delay (digital, spring, etc), this adds a natural “side” channel to the sound that makes the guitar sound much bigger.  Another way to do this with a quality acoustic guitar that has a pickup on it is to mic the guitar on one track, use a direct input from the pickup on a second track, and mic the guitar through an amplifier for a third track.  It’s amazing how different the final master will sound depending on which track you put in the center and which you pan left and right in the initial stereo mix from which you generate mid-side components.

     

This is a common approach.  For example, Martin Taylor is a solo jazz guitarist who has even used a guitar with pickups wired for stereo to get a bigger sound than I get from my cheap and dirty tricks.  Listen to his version of All The Things You Are and compare Taylor’s solo guitar sound to the “smaller” sound of Joe Pass in this clip.  Pass is playing his long time favorite Gibson ES175, which has a weak acoustic sound because of its laminated top – so you’re hearing only the mic’ed sound coming from his amplifier.  And then there are the guys who use heavy DSP, including phase shifting and delay, to get fuller guitar sounds, as you can hear in this tune (heavily processed in performance, using live effects) by Kurt Rosenwinkel.

     

These effects are magnified by multichannel recording and reproduction, and there are many studies examining mid-side decomposition in MC audio.  We won’t do any MC projects in this article, but you can learn more about it here, here, and here to start.  The MS approach is often used to upmix a stereo master to an MC format.  You can play with this yourself using the surround sound editor in WavePad.

     

     

    MID-SIDE METHODOLOGY

     

    The process of mid-side decomposition of an audio signal creates a second stereo waveform in which one channel (the “side” channel) contains the difference signal between the left and right master channels.  This can be done by subtracting left from right or right from left.  The “mid” channel is created by first summing the two stereo master channels.  MS decomposition and processing is most often applied on the master buss.
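If you like seeing the arithmetic as code, here’s a minimal sketch of the decomposition in Python (the file name and the 0.5 scaling are my assumptions – Audacity’s “Mix Stereo Down to Mono” averages the two channels, and some tools scale by 1/√2 instead):

```python
# Sketch: mid-side decomposition of a stereo WAV file.
# Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

audio, rate = sf.read("master.wav")     # stereo: shape (samples, 2)
left, right = audio[:, 0], audio[:, 1]

mid = 0.5 * (left + right)              # the sum ("mid") channel
side = 0.5 * (left - right)             # the difference ("side") channel

sf.write("mid.wav", mid, rate)
sf.write("side.wav", side, rate)
```

Note that subtracting right from left (rather than left from right) is just a sign convention; pick one and stay consistent through recovery.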

     

[Diagram: mid-side decomposition signal flow]

     

     

    For this part of the discussion, we need to be clear about signal levels and the concepts of amplitude parameters, power, loudness etc.  And we need to know more about placement of mono signals between stereo speakers with specific reference to the process of panning.   Panning depends on the distribution of power across the stereo field rather than simple signal amplitude.  Acoustic power is measured in watts and is proportional to the squared amplitude of that signal - but perceived loudness is not linearly related to signal level.

     

    The process of panning is easy to describe.  Start with the simple fact that a mono signal delivered at equal levels to each of two identical stereo speakers will (all other things being equal) be perceived as coming from the exact center between the two speakers.  Panning is the act of increasing the signal level in one speaker while decreasing it in the other, so that its sound seems to be coming from somewhere between the two speakers, closer to the one receiving the higher signal level.

     

But, as you might have guessed, it’s not that simple.  Linear panning is the simple linear change of the signal level in one channel combined with the identical but opposite change in the other channel.  Increasing left and decreasing right by the same linear amount is expected to maintain perceived loudness of the signal and change only its apparent position.  But this is not what happens with pure linear attenuation: the combined acoustic power of the two channels falls below the ideal summation (and real-world effects like phase, positioning, and reflections erode it further), dipping by as much as 3 dB when the source is panned to the center.  This is one cause of the “hole in the middle” effect we often hear and “see” in multitracked recordings that were not mixed and mastered with sufficient skill to compensate for this problem.

     

The second panning law, which governs the “power panning” approach, is based on use of a sine / cosine function for left and right channel signal level changes rather than a linear function for both.  This results in equal power of the combined signals as the apparent source is moved across the stereo field.  The problem with this approach is that the signal at center position is now boosted by as much as 3 dB, which is also not ideal.

     

    The compromise used by many engineers is the square root approach, which uses the square root of the product of the two other functions at any given point on the panning spectrum.  This gives a function between the linear curves and the sine / cosine curves of the equal power approach.  So how do we know which to use and to what end?

     

    Funny you should ask!  Disney Studios tested this on audiences back in the 1930s and found that the majority of subjects preferred the sound of constant power panning.  But the BBC ran similar tests in the 1950s and their audiences preferred the compromise approach.  Many believe that the different venues and audiences between the two account for this – Disney was looking at ways to improve movie sound, and the BBC was looking at television and radio audiences.  And the BBC’s audience was generally using a single speaker or system, so panning in the production of audio was aimed not at creating an expansive stereo stage but at optimizing single channel reproduction – and the effect of constant power panning on a summed mono signal was perceived in the dynamic range of the program material.

     

    It turns out that almost everything affecting SQ also affects the choice of panning methodology, which adds a new dimension to the process of mid-side deconstruction.  For now, we’ll leave this issue – but being aware of the complexity of the process gives you a much better idea of the flexibility in audio engineering and the power of the recording and mastering teams to affect program material for better or worse.  Here’s a wonderful treatise on the “laws of panning”, if you want to know more.
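Here’s a small Python sketch that makes the three pan laws tangible by printing the left/right gains and the total acoustic power at a few pan positions (the tapers are textbook idealizations; real consoles vary):

```python
# Sketch: compare linear, constant-power, and square-root pan laws.
import numpy as np

def pan_gains(p, law):
    """p runs 0..1: 0 = hard left, 0.5 = center, 1 = hard right."""
    lin_l, lin_r = 1.0 - p, p                                  # linear taper
    cp_l, cp_r = np.cos(p * np.pi / 2), np.sin(p * np.pi / 2)  # constant power
    if law == "linear":
        return lin_l, lin_r
    if law == "constant power":
        return cp_l, cp_r
    # square-root compromise: geometric mean of the other two laws
    return np.sqrt(lin_l * cp_l), np.sqrt(lin_r * cp_r)

for law in ("linear", "constant power", "square root"):
    for p in (0.0, 0.25, 0.5):
        l, r = pan_gains(p, law)
        power_db = 10 * np.log10(l ** 2 + r ** 2)
        print(f"{law:14s} p={p:.2f}  L={l:.3f}  R={r:.3f}  power={power_db:+.2f} dB")
```

Run it and you’ll see the linear law lose about 3 dB of total power at center (the “hole in the middle”), the constant power law hold steady, and the square-root compromise split the difference at about -1.5 dB.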

     

    Why is this relevant to MS decomposition?  Once the editing is done, these tracks will be recombined into a new stereo mix – and the summations will be much too high in level if we don’t follow some convention to control this.  Adding identical content from left and right channels together will boost signal level by 3 dB. Adding the difference content of the two will increase the combined level.  So recombining all of these signal sources after processing could result in a grossly elevated level with which we cannot work.  We therefore control the level of the mid and side channels before any other processing is done to them, and we’re mindful of other gain issues while processing, e.g. EQ can really add up the dB!

     

     

    LET’S DO IT!

     

    You can use any wav file with which you want to play, no matter how complex or heavily processed it already is.  But to learn and to best appreciate the results of your efforts, you should use a simple track with no more than a few instruments that you can hear and “see” on a consistent soundstage. Even a solo piano or guitar recording will work, although it’s hard to appreciate many of the effects and outcomes (e.g. altered soundstage dimensions) without a few instruments, including a voice if you prefer.  The files I’ve provided for your use are all from a simple, live, multitracked studio recording of 3 instruments – my Ibanez 7 string flat top acoustic guitar, my National Tricone resonator acoustic guitar, and a Lee Oskar harmonica.  I recorded each part in mono and mixed them down to the simple stereo sound stage you hear in the first clip.

     

    As suggested earlier, the basic source file is a stereo mix of the raw mono tracks of the above three instruments from one of my songs on the 2020 Philly Blues Society compilation disc. File 2 is a simple mix and master I did last year.  And file 3 is the mid-side master I did for the illustrations in this article.   You can use my music and performance for these projects if you wish.  Please respect the fact that I donated the rights to the final master to the Philly Blues Society, so they could keep all the proceeds from CD and download sales directly.  I own both the composer rights and the copyright.  So you can play with this music for your own listening and tinkering pleasure - but you can’t post it, link to it, or use it for anything else.

     

     

    GETTING STARTED

     

    Download the wav files, if you haven’t already done so, and listen to them.  You should be hearing a harmonica in the center, flanked by a full sounding acoustic guitar on the left and a National Tricone resonator guitar (played with a slide) on the right, mastered from 3 mono tracks.  These are all professional quality instruments and the mics are more than decent (Sony condenser or Shure dynamic, depending on the instrument), so the SQ should be quite good.  The second file is my own master, which has the instruments plus my voice.  It is not processed at all, except that I balanced the gains and tried to add some punch to my voice with a bit of compression and a hint of delay.  The third file is Joe’s master of my music, which is the track that Joe put on the CD released by the Society last year.

     

Let’s start with a mid-side decomposition on either the instrumental file or my master clip, as you wish, remembering once again that MS decomposition and processing is almost always applied on the master buss.  You’re welcome to try to improve Joe’s master too. This is the step by step tutorial – so you might be best off starting by following along and doing it as I did it before trying your own way.  But once you get the hang of it, you can process it to oblivion as you wish, as a learning experience.  And you can follow the links to other projects that let you do and understand the methods and techniques I elaborated above.

     

    Here’s how I do a mid-side decomposition in Audacity without plug-ins (we’ll discuss plug-ins later):

     

    First, import a master track of the source file into Audacity (or the DAW of your choice) as a stereo wav.  Once you’ve imported the wav file(s) of your choice, your Audacity screen should look like this (other DAWs will look similar):

     

[Screenshot: Audacity with the stereo source file imported]

     

    If you use my stereo file, you can work from it to get everything you need.  If you use your own source material and the source is a pair of mono files for L and R, you can follow the same general path but be sure you process the track(s) properly to get the desired effect.  For example:

     

    • You can mix a stereo pair down to a summed mono track directly from the menu in Audacity.  But if you use paired mono tracks to start, you may have to link them into a stereo track by selecting both tracks and using the menu under the track title.  Or you can copy and paste each side into the appropriate channel of a blank stereo track in order to use the “mix to mono” function to sum the two.
• You can invert an entire mono track by selecting it and clicking the “invert” link in the effect menu.  But you cannot invert one channel of a stereo track in Audacity the same way because you cannot click on one channel to select it – clicking either selects both.  You have to highlight the waveform in the right channel by dragging your cursor over it (ALL of it – don’t miss any) before using the invert function.  Then you can sum the stereo track to one mono track that contains the difference information (the “side” channel).  If you somehow alter any of the content and throw it out of sync with the other side, you won’t be able to use your mid-side decomp files.

     

    Rename the track(s) to a working title you will remember from the select-name menu in the title field at the upper left of the track.  I use “STEREO SOURCE” for stereo tracks and SOURCE L or SOURCE R for mono tracks.  Use working names that let you function on autopilot – if you mislabel one or mess any of these up, you’ll have to start over unless you catch the error before you rename and start to use the files.  Rename tracks in Audacity using the “name” choice in the menu you’ll open by clicking on the arrow at the right of the default track title:

     

[Screenshot: the track name menu in Audacity]

     

     

Now we’ll make the MID and SIDE tracks.  First, we sum the left and right sides of the master track into a new mono track that we’ll label “MID”.  I find the easiest way to do this in Audacity is to duplicate the STEREO SOURCE track and “Mix Stereo Down to Mono” from the Tracks – Mix menu:

     

[Screenshot: Audacity’s Tracks – Mix – Mix Stereo Down to Mono menu]

     

     

    This will add a summed mono track that I rename “MID” for convenience while working:

     

[Screenshot: the summed mono track, renamed MID]

     

     

     

Now it’s time to make the SIDE track, in which the signal waveform is the difference between the left and right tracks of the stereo master.  To make this, we need to invert the phase of one of the stereo channels, then sum them into a single mono track.   Without plug-ins to automate the process, I find it easiest to create another duplicate of the stereo source file and split it into mono tracks.

     

[Screenshot: duplicating the stereo source and splitting it into mono tracks]

     

     

    Then I invert the phase of the right channel by clicking that channel to highlight it and using the Effect-Invert menu selection in the header.

     

[Screenshot: the Effect – Invert function applied to the right channel]

     

     

    Once you do that, select both of the mono tracks that you created from the stereo track and recombine them into a stereo track from the same menu in the title field:

     

[Screenshot: recombining the two mono tracks into a stereo track]

     

     

    Then mix the stereo track down to a single mono track the same way you made the MID track above and label it “SIDE”.  Now you have the elements of a mid-side decomposition and are ready to play with the process.  Here’s what you should have:

     

[Screenshot: the project with the stereo source, MID, and SIDE tracks in place]

     

     

     

    The top two waves are the left and right channels of the original stereo master recording.  The third wave is the summed mono track and the fourth is the difference waveform we made by summing the intact left and inverted right channels.

     

     

    NOW WE HAVE THE BLOCKS – LET’S MAKE A FORT!

     

We need to assemble the components properly to get a good effect.  There are plug-ins that will automate the above so that you can simply move a slider in the GUI to adjust the mix of signals on the fly and monitor the effect.  As most of you will be learning this on a simple Audacity instance, I’m taking you through the manual process using only the plug-ins embedded in your Audacity installation (e.g. to invert, adjust overall track gain, etc).  Doing it without automation is a bit more time consuming in the beginning, but you’ll begin to get the hang of it after a few passes.  If you really enjoy being able to do these things, you can either get a more sophisticated DAW that offers automation for mid-side decomposition, or you can create a template project in Audacity and use it over and over by adding your source stereo file and saving it as a new project.

     

Why would an audiophile want to be able to do these things?  You (and I) surely have more than one ripped CD or vinyl album that you love for content but hate for presentation.  Once you learn to do the mid-side thing, you can “remaster” (or, more accurately, “overmaster”) many of your commercial recordings so they’re more to your preference.  And you can use post processing to enhance what you like about a recording and/or diminish what you don’t like.

     

    If you use a DAW with onboard mid-side processing plug-ins and comprehensive patching capability from track to track, you can set up your program to do a lot of this in real time.  But you need a few more tracks in your Audacity project, so you can combine your source file with your mid and side files and have a set of tracks you can mix in various proportions to find out what sounds best for you.  I just add tracks to the Audacity project as needed.

     

    Let’s put it all together in Audacity.  From the discussion of panning and level effects a bit earlier, we know that we have to attenuate the mid and side tracks by ~3 dB each before doing anything else to them.   Depending on the actual content, this may vary from recording to recording – start with -3 and adjust to taste by trial and error.   Also remember that levels in the recovered master have to be monitored and adjusted carefully to avoid pushing the final mix above 0 dB FS after all processing has been completed and the stereo master recovered.   

     

    We need to recover one track each for left and right in the new master, and we recover them after attenuating and modifying the raw mid and side tracks to our preferences.  The left channel is then recovered as the summation of mid + sides, and the right channel is recovered from the summation of mid – sides (i.e. the summation of mid plus an inverted copy of the modified sides track).
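In code, the recovery step is just two sums.  Here’s a minimal sketch, assuming you’ve exported your processed mid and side tracks as mono WAVs with the names shown:

```python
# Sketch: recover a stereo master from processed mid and side tracks.
# Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

mid, rate = sf.read("mid_processed.wav")    # mono, already attenuated/edited
side, _ = sf.read("side_processed.wav")     # mono, already attenuated/edited

left = mid + side                           # mid plus sides
right = mid - side                          # mid plus an inverted copy of sides

sf.write("new_master.wav", np.column_stack((left, right)), rate)
```

If the recovered image sounds lopsided or phasey, the usual culprit is a sign flip – the side track was inverted one time too many (or too few) somewhere along the way.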

     

    It’s a very good idea to keep each version of your mid and side tracks as a separate track in the project, so you can go back and compare them all.  This is both a great learning aid and a backup in case you decide later that you overdid or underdid something in the final mix & master.  I do all modification on these copies of mid and side.  Once again, do NO destructive editing at all and DO NOT EDIT YOUR ORIGINAL SOURCE FILES.

     

     

    WE’RE READY TO BOOGIE

     

    When using mid-side processing on an already mixed 1st generation stereo master, the Audacity project is set to go when you have a stereo track with a copy of the source mix on it plus the mid and side tracks created as described earlier, along with a working pair of mid-side tracks attenuated by 3 dB each to start trial and error processing.  The beauty of this setup is that you can use the “solo” and “mute” buttons on each track to hear different “recovery” combinations.  For example, you can make a second set of mid-side tracks attenuated by 2 dB each and another down by 4 dB to compare the resulting effect when recombined with the source track.  If your DAW can apply processing on the fly, you won’t need the working tracks.

     

[Screenshot: working copies of the MID and SIDE tracks, each attenuated by 3 dB]

     

     

On this project run, I’ve copied the mid and side tracks and reduced them each by 3 dB – so they’re ready for processing.  Notice that the track names are guides to their content, to help me avoid doing the wrong thing to the wrong track. I’ve made more than a few unintentional mistakes over the years and speak from broad, distressing experience.  This is why I always work on copies and never use destructive editing.  The screenshot above shows what my Audacity project looks like as we start our first attempt at improving a basic master track.

     

     

For our first run, I decided to open up the stereo space a bit, tighten up the guitars, and give my anemic voice a bit more depth and character on this stereo master of 2 guitars and a harmonica backing a simple vocal.  In this practice session, we’ll use only reverb and gain control to alter and (hopefully) improve the bare, dry master.

     

    Here’s the drill:

     

    • follow the instructions above to create and set up an Audacity project with the following tracks
      • a copy of the stereo recording you want to “improve” (the working master source file)
      • a mid track created by summing the two stereo tracks to one mono track
      • a side track created by inverting the right channel of the stereo master and summing the two sides to a mono channel
    • copy the mid and side tracks and attenuate each by 3 dB to start
    • copy the attenuated mid and side tracks and make all changes on these copies
    • To give my vocal a bit more depth and character, add a tiny bit of reverb to the mid track
      • don’t overdo it – you don’t want the voice to sound like the singer’s in a deep cave
• To punch up the guitars and widen them a bit in the mix, add a bit of reverb to the side track (there’s a code sketch of this step right after these instructions)
      • keep the reverb 100% wet (i.e. only use the reverberation being returned – don’t add more dry signal to the mix)
      • attenuate the reverb return by 3 dB below 250 Hz to reduce “muddiness” in the instruments
      • change the delay parameters a bit from those on the mid track
        • it doesn’t matter right now exactly what you change in the actual delay parameters or by how much – this joint exercise is to show you how it works and what kinds of things can be done.  You can get more specific on your own.
        • try a bit more reverb on the side track than on the mid to enhance instrumental space without making my voice sound like it’s coming from a cavern.
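Here’s a rough Python sketch of that side-track reverb step (the synthetic impulse response, decay time, and shelf depth are all illustrative stand-ins for whatever reverb plug-in you actually use):

```python
# Sketch: 100%-wet synthetic reverb on the side track, with the return
# shelved down ~3 dB below 250 Hz to keep the instruments from getting muddy.
# Requires: pip install numpy scipy soundfile
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt, fftconvolve

side, rate = sf.read("side_attenuated.wav")   # the -3 dB working side track

# Crude impulse response: exponentially decaying noise, ~1.2 s tail.
rng = np.random.default_rng(0)
t = np.arange(int(1.2 * rate)) / rate
ir = rng.standard_normal(t.size) * np.exp(-t / 0.4)
ir /= np.sqrt(np.sum(ir ** 2))                # rough energy normalization

wet = fftconvolve(side, ir)[: side.size]      # 100% wet - no dry signal added

# Approximate a -3 dB low shelf below 250 Hz by subtracting ~29% of the lows.
sos = butter(2, 250, btype="lowpass", fs=rate, output="sos")
wet -= (1 - 10 ** (-3 / 20)) * sosfilt(sos, wet)

sf.write("side_reverb_return.wav", wet, rate)
```

Import the result as a new track and blend it to taste against the dry side track – that’s the “reverb return” an engineer would ride on a console.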

     

[Screenshot: reverb settings applied to the working mid and side tracks]

     

     

     

    • Now we have to copy and “re-invert” the side track to recover it as useful content for the new master, before making a stereo track with the mono side “master” on both sides (so you can pan it to the far left and far right in the final mix & master).
      • You don’t have to make a stereo track of the mid content after editing, because you can play it back along with stereo source and side tracks and it will be sent to the center.  If you want to play with positioning it, make a stereo track with the same mid content on both sides.
    • Finally, we have to create new tracks for the mid and side components to be added to the master mix.  You need to be able to pan the side track to the far left and far right, so copy it after re-inverting it to be in phase with the source stereo track.

     

    This will give you a project that looks like the following, if you’re using Audacity:

     

     

[Screenshot: the full project with source, processed mid, and re-inverted side tracks ready to mix]

     

     

     

    And now we’re ready to see what we’ve accomplished.  The goal is to combine your source file with your mid-side recovery files, attenuating gains as necessary so that your new master has no peaks above 0 dB FS.  But before doing that, we can take a test drive by muting all tracks except the source and the stereo mid and side tracks.  You can adjust your edits with the gain sliders on the mid and side tracks, along with the pan sliders.  You can re-invert the side tracks to see what that does to your final mix.  And you can adjust gain from track to track to bring the ambiance out or suppress it.

     

     

    CREATING YOUR FINAL MIX AND MASTER

     

Once you’re happy with your new master constructed from the parts above, you can mix it down to a final stereo master track by highlighting the three component tracks (source, mid, and side) and using the Tracks – Mix – Mix and Render to New Track choice to create a single stereo track from the components.  And that’s your final master file, which you can export in the format of your choice.  I strongly suggest exporting as a wav and converting that file to whatever format(s) you want afterward.
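If you’d rather do that final bounce outside the DAW, here’s a sketch that sums the component tracks and guards against peaks above 0 dB FS before export (the file names, the -0.3 dB FS ceiling, and the 24-bit output are all my assumptions):

```python
# Sketch: sum the component tracks and check for digital clipping before export.
# All three files are assumed to be stereo, same length, same sample rate.
# Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

names = ("source_stereo.wav", "mid_recovered.wav", "side_recovered.wav")
tracks = [sf.read(n)[0] for n in names]
rate = sf.read(names[0])[1]
mix = np.sum(tracks, axis=0)

peak = np.max(np.abs(mix))
if peak > 1.0:                        # 1.0 == 0 dB FS for float samples
    print(f"Peak {20 * np.log10(peak):+.2f} dB FS - scaling down to -0.3 dB FS")
    mix *= 10 ** (-0.3 / 20) / peak

sf.write("final_master.wav", mix, rate, subtype="PCM_24")
```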

     

    As you now realize, there’s a lot to the production of a final recording that goes unheralded and unappreciated.  There are many DAW programs that will let you connect track outputs to buses, so you can edit nondestructively in real time as the source plays, before mixing it down to a final master.  And there are many DAWs that will let you automate the process with integral controls on gain etc to free you from worrying about the little things.  But doing it for yourself in Audacity will get the same results, and you’ll have a much better understanding of how our recordings get to sound the way they do – and why so many compromise on absolute SQ for reasons that may fall into technical, economic, strategic, and/or any number of other categories.

     

I did not use mid-side or any other processing on the master I made last year (file #2).  So I’ve been working the original along with you to see what I could do with it.  File #3 is the master I just made for this article with mid-side decomposition and processing.

     

     

    TRY MID-SIDE DECOMPOSITION BEFORE THERAPEUTIC MIXING & MASTERING

     

Starting with a mid-side decomposition, you can then add an infinite number of effects and edits to enhance your final master.  Here are some examples from iZotope that approach specific problems with various effects applied to a mid-side decomposition during mixdown:

     

    • If a track has multiple guitar parts, route them through a bus, using Mid/Side processing on the guitar bus. Boost the volume of the side channel during a chorus or other section of the track. This makes the guitars sound bigger without adjusting panning, and as a result the section sounds more impactful.

     

    • Likewise, a slight volume boost to the side channel on drum overheads can enhance the room sound, or a slight boost to the mid channel might enhance the snare drum and rack toms.

     

• On any particular instrument recorded in stereo, a high frequency EQ boost on just the side channel makes the ‘wider’ elements sound brighter. A Baxandall filter or a high shelf filter works best. This helps to add clarity to a reverb, without muddying up the signal too much.

     

    And here are just a few examples from the same article describing the value of mid-side decomposition in mastering:

     

    • If a mix sounds muddy, try reducing low frequencies in the side channel with a low shelf filter. For example, this can tighten up hard-panned guitars while preserving the integrity of the vocal and kick drum captures in the center of the mix.

     

• If compression on the master narrows or squashes the signal, use a Mid/Side compressor and apply less compression to the side channel than the mid channel. Heavy energy in the center of a mix, where the kick, snare, and bass sit, can cause a compressor to kick in and squash the wider, more ambient spatial elements in the mix. Judicious MS compression avoids this (there’s a code sketch of the idea just after this list).

     

    • Warm up a dry acoustic mix with Mid/Side reverb. Add reverb to the mid channel, but filter out some of the low end on the wet, reverberant signal to avoid muddying the kick drum and bass. On the side channel, add 2-4% more reverb than on the mid channel – no filtering is needed in most cases.
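As a rough illustration of that Mid/Side compression idea, here’s a bare-bones envelope-follower compressor applied more gently to the side channel than to the mid (the threshold, ratios, and time constants are arbitrary teaching values – nothing like a production compressor):

```python
# Sketch: crude peak compression, heavier on mid than on side.
# Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

def compress(x, rate, threshold_db=-18.0, ratio=3.0, attack=0.005, release=0.1):
    """One-pole envelope follower driving simple downward gain reduction."""
    a_att = np.exp(-1.0 / (attack * rate))
    a_rel = np.exp(-1.0 / (release * rate))
    thr = 10 ** (threshold_db / 20)
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1 - coeff) * level
        gain = 1.0 if env <= thr else (thr / env) ** (1 - 1 / ratio)
        out[i] = s * gain
    return out

audio, rate = sf.read("master.wav")
mid = 0.5 * (audio[:, 0] + audio[:, 1])
side = 0.5 * (audio[:, 0] - audio[:, 1])
mid = compress(mid, rate, ratio=3.0)     # heavier squeeze in the center
side = compress(side, rate, ratio=1.5)   # gentler on the ambient side content
out = np.column_stack((mid + side, mid - side))
sf.write("ms_compressed.wav", out, rate)
```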

     

More?  Add a little delay / reverb to the side channel to add depth to the sound field.  Boost the highs in the side channel to punch up your mix a little and make the stereo field seem a little larger.  Learn more about all this by searching “mid-side” on the theproaudiofiles.com YouTube channel.  This video is a great start.  And here’s a nice video on mid-side mastering.

     

To be complete, you should know that there is also a mid-side technique for microphone placement. It accomplishes in real time what mid-side decomposition and editing achieve in post-processing, taking advantage of the same differences between sides that are used to do MS processing. MS mic’ing is (for me, at least) almost always much better at capturing a good image than MS processing is at creating one from multitracked recordings of instruments in isolation from each other. Here’s a fine article on it, and an illustration of the basic microphone setup from that article, for those who want to try their hand at it:

     

[Illustration: basic mid-side microphone setup]

     

     

     

    NOW HERE’S THE REST OF THE STORY

     

    We have neither the time nor the real estate in AS to take you step by step through the other projects I describe in this work.  So the rest of this will describe several other processing operations you can use as above to enhance or otherwise alter the SQ of recordings you make yourself as well as those you’ve purchased or downloaded.  Some of you will love this and start working on rips you already have.  Others will store the knowledge away and go on about their business.  And a few will use this to enhance recordings they make themselves.  So here’s some more food for thought.

     

     

    EQUALIZATION IS FAR MORE THAN JUST FANCY TONE CONTROLS

     

You can use EQ to improve many recordings in many ways, but it will not make a poor recording sound great.  It can’t add anything to a recording except distortion of the source signal.  Any change in any parameter of that signal is technically a distortion of it, whether it’s a simple nudge of the frequency spectrum, a change in dynamic range, or even pitch correction of a vocal.  So if you want the best possible SQ, you have to start with the best possible recording.  This is not to say that such distortions can’t be pleasing to many ears, especially as “aural enhancement” of one kind or another is the norm in commercial recordings.  So here’s an introduction to aural enhancement through EQ.

     

     

    EQUALIZATION TO IMPROVE VOCALS

     

The human voice is one of the most common recorded sounds to which equalization is applied.  EQ is most often used to try to make a vocalist sound “better”, although most vocalists do not sound as they do because of a deficiency in their output frequency spectrum.  As with any disorder, you need an accurate diagnosis before you can provide effective treatment.  The problems in most recorded voices run the gamut from marginal voice quality to control problems in pitch, breathing, articulation, etc – it’s a rare vocalist who just needs a little more gain at a specific frequency.  So before you try to equalize a vocal you don’t like into one you do, figure out why you don’t like it.  If you can’t identify the source of your sonic discomfort, try some of the edits and tricks we’re discussing in this article on copies of your digital file and judge the effect.  Do not use original files, and do not use destructive editing.

     

    The widespread use of earbuds and small near field speakers has made sibilance and other vocal problems into much bigger issues than they were when most audio was heard through wide field speakers and highs were diffused, reflected, and absorbed before reaching the ears.  In many ways, far less has changed in our recordings than has changed in the way we hear them and in the equipment on which we reproduce them.  So we now perceive as aberrance a lot that was either not perceived at all or was better integrated into the sound that we heard from bigger speakers in big rooms.

     

For those who are making their own recordings, know that some vocal problems are correctable up front with microphone choice and use.  For example, sibilance and exaggerated plosives are made worse by proximity to the microphone.  Both can be minimized by keeping the vocalist’s mouth at least 8” away from most mics.  And remember that high frequencies are more directional than lows, and these are high frequency problems.  So angling a directional mic slightly off the axis of the vocalist’s mouth can also bring these problems down before they ever get into the recorded signal.  It’s often helpful to have the singer (or narrator, in the case of a podcast) sit with mouth perpendicular to a flat pop shield and to angle the mic a few degrees off axis behind the shield.

     

Of course, you can always put a pop filter on the mic.  Some are more effective than others, but it’s a simple solution when it works.  There are many other tips out there for solving this, and I leave it to you to try them.  Here’s one trick I found hard to believe until I tried it – it actually does work to reduce plosives (and, to a lesser extent, sibilance) when using a mic with a large sensing element.  In fact, it’s effective enough that, combined with a pop filter, it can make the voice sound dull and lifeless.  It’s basically a diffuser that shunts airflow off axis, reducing the energy transferred to the sensing membrane.  Use a rubber band to secure a plain old #2 pencil along the axis of the microphone’s element like this:

     

     

    image18.jpeg

     

Once the problem is baked into the recording, it takes processing to reduce it.  The simple approach is often to dial back the top of the spectrum with EQ, although that will affect more than just the sibilance that offends.  So a frequency-specific compressor called a “de-esser” is often used to dial the problem down, and it can be used to best effect with a mid-side decomposition.
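To make “frequency-specific compressor” concrete, here’s a bare-bones de-esser sketch in Python (numpy and scipy assumed).  The 5 kHz split, threshold, and ratio are illustrative guesses on my part – real de-essers let you tune all three by ear:

```python
# Bare-bones de-esser: compress only the band where sibilance lives.
import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, sr, split_hz=5000.0, thresh_db=-30.0, ratio=4.0, smooth=0.995):
    # Split into low and high bands (not a perfect crossover, fine for a sketch)
    low = sosfilt(butter(4, split_hz, btype="low", fs=sr, output="sos"), x)
    high = sosfilt(butter(4, split_hz, btype="high", fs=sr, output="sos"), x)

    # Envelope-follow the high band and duck it only when it gets hot
    env, e = np.zeros_like(high), 0.0
    for i, s in enumerate(np.abs(high)):
        e = max(s, e * smooth)
        env[i] = e
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - thresh_db, 0.0)
    gain = 10 ** (-(over_db * (1.0 - 1.0 / ratio)) / 20.0)

    return low + gain * high                 # lows pass untouched; sibilance is tamed
```

Run this on the mid channel of a mid-side decomposition and a centered vocal’s sibilance is ducked while the side ambience passes untouched.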

     

    Unfortunately, problems with sibilance often become apparent only after mixdown and often even after initial attempts at mastering.  Effects like compression and EQ can make marginal sibilance worse, and it may not be apparent until a lot of work has already been put into the recording.  An engineer sitting in the studio with a freshly mastered recording in which the vocal is marred by sibilance that was not so prominent in the raw tracks is not unlike an audiophile who finds the vocal on an album he or she likes overall to be unpleasantly sibilant.

     

At times like this, mid-side decomposition can make application of EQ and frequency-specific compression practical and effective for both the engineer who doesn’t want to remaster an entire album and the audiophile who doesn’t want to discard one.  You can try this approach on a rip – use a mid-side equalizer to attenuate the offending high frequencies in the mid channel (where a centered vocal lives) and, if needed, gently accentuate the side channel.  By doing so you’ll be cutting out some of the sibilance directly while masking what remains.

     

You can also use EQ to reduce plosives, like this: use either a gentle low-frequency roll-off or a steeper high pass filter (or both), depending on the mix.  You want to reduce the low frequency sounds in the vocal that are causing the plosive and/or adding mud to the mix.  Knowing that plosives generally live around 150 Hz or lower, occasionally reaching a tad higher toward 200 Hz, figure out where to place your filter or roll-off.  A gentle roll-off will need to start higher than a steep filter to take the same energy out of a plosive.  In either case, aim to tame the plosives rather than thin out the voice.
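In code, the steeper version is just a high-pass filter on the vocal track.  A sketch with the corner at 120 Hz – an assumption consistent with the numbers above, not a universal setting:

```python
# High-pass the vocal to tame plosives without thinning the voice.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

vocal, sr = sf.read("vocal_take.wav")        # hypothetical vocal file
sos = butter(2, 120.0, btype="high", fs=sr, output="sos")
sf.write("vocal_take_hp.wav", sosfiltfilt(sos, vocal, axis=0), sr)
```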

     

    Combined with mid-side processing, selective EQ can tighten up a sonic image and widen it at the same time.  You can do this to a recording you already have in which you find the bass to be muddy and diffused, using this classic approach as described in Sound on Sound:

     

    “It's accepted as standard practice that low frequency instruments such as kick drums and bass should be kept in the centre of the stereo field. There are a couple of reasons for this. Firstly, the human brain finds it very difficult to locate the source of low frequencies, so it's fairly pointless to pan them anyway. The second reason is linked to the production of vinyl records. If bass frequencies are heavily mismatched in the left and right channels, the needle can potentially bounce right out of the groove, causing skipping.

     

    Let's say we have a huge synth bass that has not only a lot of sub-bass energy, but also a lot of additional harmonics created by a stereo distortion effect. Our mix will probably be more successful if we can restrict the stereo component of this effect to higher frequencies. Apply EQ8 and enable M/S mode. In the Edit field, make sure that 'S' is showing, telling us that we are editing the Sides signal. Now ensure Filter 1 is enabled and in Low Cut mode and bring the Frequency up to about 700Hz. By doing this, we have effectively filtered out any Sides signal below 700Hz, leaving only the Mid signal. This will, in effect, make the bass mono below 700Hz, while retaining the nice stereo effect on the top end.”
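The EQ8 recipe in that quote is specific to Ableton Live, but the underlying move is generic.  Here’s a hedged re-creation in Python that substitutes a Butterworth high-pass on the side channel for EQ8’s low-cut (file name assumed for illustration):

```python
# "Make the bass mono below 700 Hz": high-pass only the side channel.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("synth_bass.wav")        # hypothetical stereo file
mid = (audio[:, 0] + audio[:, 1]) / 2.0
side = (audio[:, 0] - audio[:, 1]) / 2.0

sos = butter(4, 700.0, btype="high", fs=sr, output="sos")
side = sosfiltfilt(sos, side)                # remove all side content below 700 Hz

out = np.stack([mid + side, mid - side], axis=1)   # re-encode to left/right
sf.write("synth_bass_monobass.wav", out, sr)
```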

     

Again by combining mid-side processing with EQ, you can often make a stereo recording sound lighter and airier with a gentle boost in the low-mid and high frequencies in the Sides channel.  This will enhance the stereo space a bit with no change in the content near the center of your field.  Another way to achieve this is to make a duplicate mid track with some reverb on it plus a slight mid-range scoop, and return the mid reverb along with the unprocessed mid track.

     

    There’s no magic formula for how much to change any parameter in any of the above methods.  You have to try a few alternatives – change the Q, the center frequency, the slope, and the attenuation or boost and see how each alteration affects the sound.  Keep notes with each trial – I even add metatags with some critical info that I know I won’t remember the next time I want to do the same thing to another file.  Eventually, experience will teach you enough to come very close to your target within a few tries.  I strongly suggest that you save each new master with appropriate notes, so you can compare them multiple times to see what you’ve really accomplished and how.

     

     

    HARMONIC DISTORTION AS AN AURAL ENHANCER

     

    Say what???  Why would anyone add harmonic distortion to an audio file?  Believe it or not, there are many people who think it improves sound quality.  Here’s the explanation in an introduction to an online audio engineering course from Icon:

     

    “Introducing harmonic distortion to a signal adds musical overtones to the fundamental frequency of a sound. Bringing out these harmonic overtones imparts a pleasing analog characteristic that enhances sounds in various ways. Applying harmonic distortion will also employ subtle compression which rounds off transient peaks more naturally. This type of dynamic control often called ‘soft clipping’ sounds more musical because the peaks are not cut off like when digital distortion occurs.

     

    There is a range of different distortion models, each inspired by the vintage character of tape, tubes, transistors, and other circuitry. There are also several types of distortion effects such as saturation, bit-crushers, overdrive, guitar amps, and expanders. The various types emphasize harmonics differently from subtle to extreme in ways that brings out a unique character to your sounds and increases perceived loudness.”

     

    We’ll get to increasing perceived loudness as a way to “improve” SQ in a bit.  Let’s start with the act of introducing harmonic distortion to recorded music files.  Why would we do that? There’s a lot of support for the idea that “analog warmth” is simply a higher level of even order harmonic distortion than is found in digital audio files and equipment.  To put it succinctly and in the words of an industry expert who just happens to sell plug-ins that (surprise!) add harmonic distortion,

     

    “The coloration involved in digital recording leaves us with a comparatively clean harmonic palette, so if we want to enjoy the warmth of yesteryear’s recording equipment, we must consciously decide to add it to the mix.”

     

Many AS members and participants are familiar with pkane2001’s app called Distort.  This is a great way to hear the effect(s) of adding harmonic distortion to an audio file.  There are also many plug-ins for this purpose, and some are dedicated specifically to improving SQ with harmonic distortion.  Why do they exist?  It’s simply because so many people prefer the sound of recordings made through one of the classic studio mixers of the 20th century, like the REDD, Neve, Helios and EMI consoles used on so many classic recordings from Olympic, Abbey Road, and other legendary studios.  You can simulate (or, at least, come close to) the distortion spectrum of your own personal grail in recording electronics with DAW plug-ins.  Whether this truly recreates the original sound is debatable.  But many highly regarded commercial studios and production facilities add distortion in post production mastering to pump up their SQ.
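If you want to hear the effect without buying anything, a few lines of Python will generate it.  This is a generic waveshaper sketch – not a model of any particular console – and the drive and asymmetry amounts are arbitrary starting points:

```python
# Even-order "warmth" via an asymmetric waveshaper with soft clipping.
import numpy as np
import soundfile as sf

audio, sr = sf.read("mix.wav")               # hypothetical source file

drive, asym = 1.5, 0.15                      # amounts are pure taste
shaped = np.tanh(drive * (audio + asym * audio**2))

shaped -= shaped.mean(axis=0)                # the squared term adds DC; remove it
shaped /= np.max(np.abs(shaped))             # renormalize after shaping

sf.write("mix_saturated.wav", shaped, sr)
```

The asymmetric squared term generates the even-order harmonics associated with “analog warmth,” and the tanh rounds the peaks – the “soft clipping” described in the Icon quote above.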

     

You sharp-eyed historians may have observed without my help that not all of these boards were tube driven.  Several early “greats” were SS, including the famous EMI TG12345, the solid state console that replaced the valve REDD desks (REDD was EMI’s Record Engineering Development Department) when the Beatles’ track counts went beyond 8.  There were early issues with the new solid state console at EMI.  From abbeyroad.com we get this in the history of the TG12345:

     

    “The Beatles' recording engineer Geoff Emerick would comment that it was practically impossible to get the same drum and guitar sound he and the band had become accustomed to. It just wasn’t possible to get the same warm harmonic distortion you get from the valve desk via the transistor desk.”

     

Why are we using plug-ins to chase the sound of a device whose SQ was not widely lauded and which was only created because of the need for multitracking beyond 8 tracks?  I haven’t a clue!  But you should be aware that some of the most storied and revered boards in the history of recording have been solid state, designed when SS audio was in its infancy.  Perhaps the real lesson is simply that all that’s old is new again.  You can play with distortion generators and plug-ins the same way we edited mid and side tracks above in Audacity.  Add the distortion of your choice to a copy of your source file and compare the two side by side by clicking the “solo” button on each track in turn.

     

     

    FURTHER ADVENTURES

     

    If you’re still with me and ready to try even more, here are links to some fine articles on other ways to edit audio files for SQ improvement (or, at least, change).  Learn, play, enjoy, and appreciate how much effort there is behind each and every one of the recordings you love so much!

     

    Using EQ, compression, distortion and filters to “fix poorly recorded tracks” (from behindthespeakers.com)     [https://behindthespeakers.com/fix-poorly-recorded-tracks/ ]

     

    Harmonic distortion and the search for analog warmth   [https://www.soundonsound.com/techniques/analogue-warmth ]

     

    17 tools for editing and massaging audio files     [https://audioalter.com/ ]

     

    Using distortion, compression and EQ in Audacity     [https://www.instructables.com/DistortionOverdrive-Effect-in-Audacity/]

     

    General tips from Sound on Sound magazine to improve your recordings   [https://www.soundonsound.com/techniques/affordable-ways-improve-quality-your-recordings ]

     

Using plug-ins to add harmonic distortion for analog warmth   [https://www.waves.com/add-harmonic-distortion-for-analog-warmth]

     

Improving vocal SQ in recordings   [https://www.aimm.edu/blog/5-tips-to-make-your-vocal-recording-sound-higher-quality]

     

    Mixing vs mastering    [ https://iconcollective.edu/what-is-mastering-in-music/  ]

     

Width software:  Stereomonoizer [https://www.soundizers.com/], or the built-in width tools in Ableton Live

     

    Adding / adjusting image width   [http://blog.dubspot.com/stereo-width-ableton-live/ ]

     

    Using Nyquist plug-ins in Audacity  [ https://wiki.audacityteam.org/wiki/Nyquist_Effect_Plug-ins ]

     

Stereo widening plug-ins   [https://www.soundonsound.com/techniques/classic-stereo-widening]

     

Adding / editing image width without plug-ins   [https://theproaudiofiles.com/how-to-get-stereo-width-without-fancy-plugins/]

     

    Using VST width plug-ins  ONE  TWO

     

Improving vocal quality in Audacity   [https://www.instructables.com/How-to-Improve-Vocal-Quality-in-Audacity/]

     


    Using EQ to improve vocal quality in recordings  [https://music.tutsplus.com/tutorials/a-master-guide-to-voice-equalization-how-to-apply-eq-to-voice-recordings--cms-25184 ]

     

    Increasing perceived loudness  [  https://iconcollective.edu/increase-perceived-loudness/   ]

     

Tips for more effective engineering of your recordings   [https://www.audio-issues.com/home-recording-studio/20-steps-to-becoming-a-better-audio-engineer/]

     

    Twenty quick and dirty recording tips  [ https://www.musicradar.com/tuition/tech/20-recording-tips-569280 ]

     

    Random recording tips from pros  [https://www.musictech.net/guides/recording-studio-tips-tricks-from-pros/ ]

     

Mixing tips  [ https://theproaudiofiles.com/mixing-tips/ ]

     

     

Mid-side processing overview  [https://theproaudiofiles.com/mid-side-processing/]

     

Baxandall EQ plug-in  [https://fuseaudiolabs.com/#/pages/product?id=300965965]

     

Valhalla reverb plug-in  [https://valhalladsp.com/shop/reverb/valhalla-supermassive/]

     

    Improving audio quality in Audacity [ https://blog.accusonus.com/audio-clean-up/audacity-improve-audio-quality/  ]

     

    Using EQ to enhance recordings  [ https://www.soundonsound.com/techniques/using-equalisation ]

     

    EQ in mixing  [ https://www.teachmeaudio.com/mixing/techniques/equalization ]

     

    Mid-side references:

     

    https://www.sonible.com/blog/mid-side-drum-loop/

     

    https://www.musictech.net/tutorials/cubase/understanding-mid-side-processing-in-cubase/

     

    https://www.platinumaudiolab.com/blog/tutorial-mid-side-processing-basics/

     

    https://www.soundonsound.com/techniques/sides-splitting

     

https://www.soundonsound.com/techniques/creative-midside-processing  [mid-side, or “side splitting”]

     

    Stereo-izing [https://www.creativefieldrecording.com/2015/02/25/how-to-stereoize-mono-sounds-a-trick-for-creating-acoustically-sound-clips/]

     

     

     




    User Feedback

    Recommended Comments

    As often happens, I’m starting this series here, will circle back to the first two articles. But I wanted to mention that Tracktion DAW is available as a full-featured free version. This started with their making earlier versions free a few years ago. I may want to update mine...

     

    https://www.tracktion.com/products/waveform-free

     

    But in all honesty I use Audacity more often, partly out of familiarity, but mostly because it’s often the most efficient tool, at least for low track counts. 


Can you link the unprocessed wav files again? Sorry I missed it.


    8 hours ago, Rexp said:

    Can you link the unprocessed wav files again? Sorry i missed it

    There are no raw tracks because the processing projects described in the article are done on stereo mixes.  So there are 3 links to wavs in the article, and all 3 are stereo mixes of multiple studio tracks of individual instruments. The first is a mixed and normalized but otherwise unaltered segment with an acoustic guitar, a resonator guitar, and a harmonica.  The second is a complete basic master track of these parts plus the vocal, and the third is a “remastering” of the second one using mid-side processing that added the delays, EQ etc described in the article.  
     

    You could go back to raw original tracks and both re-edit and re-mix them before making a new master recording, i.e. the final version in the final format(s) to be copied for production of distributable / salable recordings.  But remastering alone is done on an already mixed file.  A complete remix of raw tracks is both much more involved and beyond the scope of this simple introduction to the life of a recording between capture and consumer.
     

The second link provided in the article is a basic stereo mix of the original instrumental and vocal tracks for one song from a CD made for and sold by the Philly Blues Society.  I performed and recorded all parts myself in my studio, and this stereo mix is the file I provided to the commercial lab that mastered and issued the disc last year.  It’s excellent for experimenting - the instruments and voice are simple, clear, and very responsive to editing.

     

    The third link is an example of remastering the second file using mid-side processing with EQ, delay etc as described in the article.  It’s just one example - there’s an endless spectrum of possible results.  I hope you enjoy and benefit from this!


    11 hours ago, Mike27 said:

    As often happens, I’m starting this series here, will circle back to the first two articles. But I wanted to mention that Tracktion DAW is available as a full-featured free version. This started with their making earlier versions free a few years ago. I may want to update mine...

     

    https://www.tracktion.com/products/waveform-free

     

    But in all honesty I use Audacity more often, partly out of familiarity, but mostly because it’s often the most efficient tool, at least for low track counts. 

    I have Tracktion 7 on my Win10 PC and agree that it's well worth considering.  Strengths include good VST instrument support and a simple one window GUI with logical work flow from left to right.  As I recall, it does not have a separate mixer window and I missed that.  I've also installed it on Linux boxes, RPi 3 and RPi 4.  It's still a 32 bit RPi program, so it doesn't take advantage of the latest 64 bit Raspberry Pi OS (which means that it can't access more than 4 gigs of RAM).  It works very well on my Ubuntu 20 media center.


    On 6/17/2021 at 4:51 AM, bluesman said:

    I have Tracktion 7 on my Win10 PC and agree that it's well worth considering.  Strengths include good VST instrument support and a simple one window GUI with logical work flow from left to right.  As I recall, it does not have a separate mixer window and I missed that.  I've also installed it on Linux boxes, RPi 3 and RPi 4.  It's still a 32 bit RPi program, so it doesn't take advantage of the latest 64 bit Raspberry Pi OS (which means that it can't access more than 4 gigs of RAM).  It works very well on my Ubuntu 20 media center.

    Ha, I never got past opening a few files; was already using an old version of Studio One for a few simple projects, & have enough I/O to mix analog if I want. None of which is germane to the article. But I do sometimes make certain... adjustments... to favorite music I find “lacking.” It’s really just an outgrowth of decades of making safety copies, maybe using a click reducer or other such toy. So I’ll be interested in any observations, discussion of technique, etc. 


    1 hour ago, Mike27 said:

    Ha, I never got past opening a few files; was already using an old version of Studio One for a few simple projects, & have enough I/O to mix analog if I want. None of which is germane to the article. But I do sometimes make certain... adjustments... to favorite music I find “lacking.” It’s really just an outgrowth of decades of making safety copies, maybe using a click reducer or other such toy. So I’ll be interested in any observations, discussion of technique, etc. 

    That’s exactly what I had in mind when I came up with the topic and approach!  There are many pro tricks available to us too. And even those who won’t be doing this to their own files will benefit from knowing more about how recordings are made.  The mid-side decomp is particularly useful and common, but there’s an endless stream of tweaks out there.  Enjoy!


    5 hours ago, bluesman said:

    That’s exactly what I had in mind when I came up with the topic and approach!  There are many pro tricks available to us too. And even those who won’t be doing this to their own files will benefit from knowing more about how recordings are made.  The mid-side decomp is particularly useful and common, but there’s an endless stream of tweaks out there.  Enjoy!

    I stumbled across a free VST “stereo scope” plugin that might be useful here. The X-Y display illustrates the relationship of L/R to M/S in real time. 

     

    https://www.meldaproduction.com/MStereoScope

     

    I have no affiliation with the vendor, but they offer an interesting array of effects for various prices, in some cases free. 
     

    Below is a screen cap, from a major label CD reissue of a well-known ‘70s album. The entire disc has been mastered about 3 dB out of balance. Oopsie!

    6167ED22-B399-4680-981F-C36F50F96149.png


8 hours ago, Mike27 said:

The entire disc has been mastered about 3 dB out of balance. Oopsie!

    6167ED22-B399-4680-981F-C36F50F96149.png

    Then again, that may be the dynamic of the original recording.  Musical performance is not perfectly symmetric - it’d be pure coincidence if left-right balance were exactly 50:50 even if one or more performers (or their sound reinforcement or pan placement) had been dead center.  You could try “remastering” it to balance content exactly, just to see if it sounds different.


    7 hours ago, bluesman said:

    Then again, that may be the dynamic of the original recording.  Musical performance is not perfectly symmetric - it’d be pure coincidence if left-right balance were exactly 50:50 even if one or more performers (or their sound reinforcement or pan placement) had been dead center.  You could try “remastering” it to balance content exactly, just to see if it sounds different.

I completely agree.  The track (first pic) is from a late-1980s CD issue of Wishbone Ash's "Argus."  I have an original MCA LP (you can see where the Decca logo was airbrushed from the jacket art) for comparison, and the difference is quite audible.  Many elements are panned to the center.  I chose this example partly because it's unusually bad, but also because it points out the need to trust your ears and your monitoring above all.

     

    Also, as much as we want the engineering to be perfect, there are many opportunities for error, human and electronic/mechanical.  In this case I suspect that, under pressure to get product out the door in the then-new CD medium, someone eyeballed it wrong and QA didn't catch it.

And, I did "remaster," and it does sound different.  In this case I also messed around a bit with the bass and removed some noise below 20 Hz.  The second screen cap shows the final result.  It still doesn't sound quite like the LP, but I didn't expect it to.

    sword.png

    swordproc.png


Wow... what an article – another professional piece. I only got a tiny taste of what these guys do for my listening pleasure by searching for low latency tips from the pro audio world. My computer now sounds more like a tuned Linux streamer than a Win 10 NUC...

Thank you for another window into this world... more like a primer for a budding mixing engineer...

     

    Good luck

    Dave 🙂



