You can think of filters as combining amplification and attenuation—they make some frequencies louder, and some frequencies softer. Filters are the primary elements in equalizers, the most common signal processors used in recording. Equalization can make dull sounds bright, tighten up “muddy” sounds by reducing the bass frequencies, reduce vocal or instrument resonances, and more.
Too many people adjust equalization with their eyes, not their ears. For example, once after doing a mix I noticed the client writing down all the EQ settings I’d done. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on these instruments in future mixes.
While certain EQ settings can certainly be a good point of departure, EQ is a part of the mixing process. Just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. Part of this involves knowing how to find the magic EQ frequencies for particular types of musical material, and that requires knowing the various types of filter responses used in equalizers.
What’s a lowpass response? A filter with a lowpass response passes all frequencies below a certain frequency (called the cutoff or rolloff frequency), while rejecting frequencies above the cutoff frequency (Fig. 1). In real-world filters, this rejection is not total. Instead, past the cutoff frequency, the high-frequency response rolls off gently. The rate at which it rolls off is called the slope. The slope’s spec represents how much the response drops per octave; higher slopes mean a steeper drop past the cutoff. Sometimes a lowpass filter is called a high cut filter.
Fig. 1: This lowpass filter response has a cutoff of 1100 Hz, and a moderate 24 dB per octave slope.
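To make the slope spec concrete, here’s a small Python sketch (not from the article; the function name is my own) that computes the attenuation of an idealized lowpass filter at a given frequency, using the article’s example of an 1100 Hz cutoff and a 24 dB/octave slope:

```python
import math

def rolloff_db(freq_hz, cutoff_hz, slope_db_per_octave):
    """Approximate attenuation (in dB) of an idealized lowpass filter.

    Below the cutoff, the signal passes untouched; above it, the level
    drops by the slope amount for every octave (doubling of frequency).
    """
    if freq_hz <= cutoff_hz:
        return 0.0  # passband: no attenuation in this idealized model
    octaves_above = math.log2(freq_hz / cutoff_hz)
    return -slope_db_per_octave * octaves_above

# The article's example: 1100 Hz cutoff, 24 dB/octave slope.
print(rolloff_db(2200, 1100, 24))  # one octave above cutoff: -24.0 dB
print(rolloff_db(4400, 1100, 24))  # two octaves above: -48.0 dB
```

Real filters round off the "knee" at the cutoff rather than switching abruptly, but the per-octave arithmetic past the cutoff is exactly what the slope spec describes.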
Specifications don’t have to be the domain of geeks—they’re not that hard to understand, and can guide you when choosing audio gear. Let’s look at five important specs, and provide a real-world context by referencing them to TASCAM’s new US-2×2 and US-4×4 audio interfaces.
First, we need to understand the decibel (dB). This is a unit of measurement for audio levels (like an inch or meter is a unit of measurement for length). A 1 dB change is approximately the smallest audio level difference a human can hear. A dB spec can also have a – or + sign. For example, a signal with a level of -20 dB sounds softer than one with a level of -10 dB, but both are softer than one with a level of +2 dB.
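Decibels are logarithmic, which is why a few dB covers a wide range of levels. As a quick sketch (not from the article; function names are my own), here’s how amplitude ratios and dB values convert back and forth, using the standard 20·log10 relationship for amplitude:

```python
import math

def ratio_to_db(ratio):
    """Convert an amplitude ratio to decibels (20*log10 for amplitude/voltage)."""
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    """Convert a decibel value back to an amplitude ratio."""
    return 10 ** (db / 20)

# A -20 dB signal is softer than a -10 dB one; compare their amplitudes.
print(db_to_ratio(-20))  # 0.1: one tenth of the reference amplitude
print(db_to_ratio(-10))  # about 0.316
print(ratio_to_db(2))    # about 6 dB: doubling amplitude adds roughly 6 dB
```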
1. What’s frequency response? Ideally, audio gear designed for maximum accuracy should reproduce all audible frequencies equally—bass shouldn’t be louder than treble, or vice-versa. A frequency response graph measures what happens if you feed test frequencies with the same level into a device’s input, then measure the output to see if there are any variations. You want a response that’s flat (even) from 20 Hz to 20 kHz, because that’s the audible range for humans with good hearing. Here’s the frequency response graph for TASCAM’s US-2×2 interface (in all examples, the US-4×4 has the same specs).
This shows the response is essentially “flat” from 50 Hz to 20 kHz, and down 1 dB at 20 Hz. Response typically goes down even further below 20 Hz; this is deliberate, because there’s no need to reproduce signals we can’t really hear. The bottom line is this graph shows that the interface reproduces everything from the lowest note on a bass guitar to a cymbal’s high frequencies equally well.
Get the lowdown on low latency, and what it means to you
By Craig Anderton
Recording with computers has brought incredible power to musicians at amazing prices. However, there are some compromises—such as latency. Let’s find out what causes it, how it affects you, and how to minimize it.
1. What is latency? When recording, a computer is often busy doing other tasks and may ignore the incoming audio for short amounts of time. This can result in audio dropouts, clicks, excessive distortion, and sometimes program crashes. To compensate, recording software like SONAR dedicates some memory (called a sample buffer) to store incoming audio temporarily—sort of like an “audio savings account.” If needed, your recording program can make a “withdrawal” from the buffer to keep the audio stream flowing.
Latency is “geek speak” for the delay that occurs between when you play or sing a note, and what you hear when you monitor your playing through your computer’s output. Latency has three main causes:
The sample buffer. For example, storing 5 milliseconds (abbreviated ms, which equals 1/1000th of a second) of audio adds 5 ms of latency (Fig. 1). Most buffer sizes are specified in samples, although some are specified in ms.
Fig. 1: The control panel for TASCAM’s US-2×2 and US-4×4 audio interfaces, showing the sample buffer set to 64 samples.
Other hardware. Converting analog signals into digital and back again takes time; this happens in the audio interface that connects to your computer and translates audio into digital data your computer can understand (and vice-versa—it also converts computer data back into audio). The USB port that connects to your interface adds buffers of its own.
Delays within the recording software itself. A full explanation would require another article, but in short, this usually involves inserting certain types of processors within your recording software.
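The sample-buffer portion of latency is easy to calculate: divide the buffer size in samples by the sample rate. Here’s a tiny Python sketch (my own, not from the article) using the 64-sample buffer shown in Fig. 1:

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Latency contributed by one sample buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# The 64-sample buffer from Fig. 1, at a 44.1 kHz sample rate:
print(round(buffer_latency_ms(64, 44100), 2))   # about 1.45 ms
# A larger, safer buffer costs more latency:
print(round(buffer_latency_ms(512, 44100), 2))  # about 11.61 ms
```

Note this is only one contribution; converter and USB delays add on top of it, which is why measured round-trip latency is always higher than the buffer math alone suggests.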
Optimizing tracks with DSP, then adding some judicious use of the DSP-laden VX-64 Vocal Strip, offers very flexible vocal processing.
By Craig Anderton
This is kind of a “twofer” article about DSP—first we’ll look at some DSP menu items, then apply some signal processing courtesy of the VX-64—all with the intention of creating some great vocal sounds.
PREPPING A VOCAL WITH “MENU” DSP
“Prepping” a vocal with DSP before processing can make the processing more effective. For example, if you want to compress your vocal and there are significant level variations, you may end up adding lots of compression to accommodate quiet parts. But then when loud parts kick in, the compression starts pumping.
Here’s another example. A lot of people use low-cut filters to banish rogue plosives (e.g., a popping “b” or “p” sound). However, it’s often better to add a fade-in to get rid of the plosive; this retains some of the plosive sound, and avoids affecting frequency response.
Adding a fade-in to a plosive can get rid of the objectionable section while leaving the vocal timbre untouched.
Also check if any levels need to be evened out, because there will usually be some places where the peaks are considerably higher than the rest of the vocal, and you don’t want these pumping the compressor either. The easiest fix is to select a track, drag in the timeline above the area you want to edit, then go Process > Apply Effect > Gain and drop the level by a dB or two.
This peak is considerably louder than the rest of the vocal, but reducing it a few dB will bring it into line.
Also note that if you have Melodyne Editor, you can use the Percussive algorithm with the volume tool to level out words visually. This is really fast and effective.
While you’re playing around with DSP, this is also a good time to cut out silences, then add fade-outs into silence and fade-ins up from silence. Do this with the vocal soloed, so you can hear any little issues that might come back to haunt you later. Also, sometimes it’s a good idea to normalize individual vocal clips up to -3 dB or so (leave some headroom) so that the compressor sees a more consistent signal.
The clip on the left has been normalized and faded out. The silence between clips has been cut away. The clip on the right fades in, but has not been normalized.
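Peak normalization to a target like -3 dB is simple arithmetic: find the clip’s peak, then apply whatever gain puts that peak at the target. Here’s a minimal Python sketch (my own illustration, not SONAR’s implementation) operating on a clip represented as floats between -1.0 and 1.0:

```python
import math

def normalize(samples, target_db=-3.0):
    """Scale a clip so its peak lands at target_db (dBFS), leaving headroom.

    'samples' is a list of floats in the range -1.0 to 1.0.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples  # silence: nothing to scale
    target_linear = 10 ** (target_db / 20)  # -3 dB is about 0.708
    gain = target_linear / peak
    return [s * gain for s in samples]

clip = [0.05, -0.2, 0.35, -0.1]
peak_after = max(abs(s) for s in normalize(clip))
print(round(20 * math.log10(peak_after), 1))  # -3.0
```

Because this is peak-based, a clip with one rogue peak still ends up quieter overall, which is exactly why the article suggests taming those peaks with a gain drop first.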
With DSP processing, it’s good practice to work on a copy of the vocal, and make the changes permanent as you do them.
Understand this often-misunderstood processor, and your tracks will benefit greatly
By Craig Anderton
Transient Shapers are interesting plug-ins. I don’t see them mentioned a lot, but that might be because they’re not necessarily intuitive to use. Nor are they bundled with a lot of DAWs, although SONAR is a welcome exception.
I’ve used transient shaping on everything from a tom-based drum part to make each hit “pop” a little more, to bass to bring out the attacks and also add “weight” to the decay, to acoustic guitar to tame overly-aggressive attacks. The TS-64 has some pretty sophisticated DSP, so let’s find out how to take advantage of its talents.
But first, a warning: transient shaping requires a “look-ahead” function, as it has to know when transients are coming, analyze them, filter them, and then calculate when and how to apply particular amounts of gain so it can act on the transients as soon as they occur. As a result, simply inserting the TS-64 will increase latency. If this is a problem, either leave it bypassed until it’s time to mix, or render the audio track once you get the sound you want. Keep an original of the audio track in case you end up deciding to change the shaping later on.
TS-64 TRANSIENT SHAPER BASICS
A transient shaper is a dynamics processor that modifies only a signal’s attack characteristics. If there’s no defined transient, the TS-64 won’t do much; worse, it may add unpleasant artifacts.
Transient shapers are not just for drums—guitars, electric pianos, bass, and even some program material are all suitable for TS-64 processing if they have sharp, defined transients. And it’s not just about making transients more percussive; you can also use the TS-64 to “soften” transients, which gives a less percussive effect so a sound can sit further back in a track.
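The TS-64’s actual DSP isn’t documented here, but the core idea behind most transient shapers—comparing a fast envelope follower against a slow one, and applying gain where the fast one wins—can be sketched in a few lines of Python (all names and coefficients are hypothetical illustrations):

```python
def envelope(samples, coeff):
    """One-pole envelope follower; a smaller coeff tracks the signal faster."""
    env, out = 0.0, []
    for s in samples:
        env = coeff * env + (1 - coeff) * abs(s)
        out.append(env)
    return out

def shape_transients(samples, attack_gain=2.0):
    """Boost (or, with attack_gain < 1, soften) attacks.

    Where the fast envelope exceeds the slow one, a transient is
    underway, so the attack gain is applied there. No clipping
    protection--this is a sketch, not a production plug-in.
    """
    fast = envelope(samples, 0.5)
    slow = envelope(samples, 0.95)
    out = []
    for s, f, sl in zip(samples, fast, slow):
        gain = attack_gain if f > sl else 1.0
        out.append(s * gain)
    return out
```

With `attack_gain` above 1 the hits “pop” more; below 1 the attacks are softened, matching the two uses described above. It also suggests why the TS-64 needs look-ahead: a real plug-in wants to see the transient before applying gain, rather than reacting a few samples late as this naive version does.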
It’s your one day off, you sit down to make some music, and boom, your computer crashes. Then you start to think: how long have I had this thing? When you get that feeling, it’s time to look at a new music computer. We’re going to outline the top five reasons you should consider a new Windows computer for your music studio.
1. You’re still running Windows XP
Windows XP was a great operating system, really, it was! But Microsoft recently ended support for the well-known OS, and that means no more updates—leaving your computer increasingly exposed to security risks and instability. Additionally, many new Digital Audio Workstations, such as SONAR X3, don’t run on Windows XP. It’s OK; it’s time to let go. Windows 7 and Windows 8 offer a slew of new features that will make your music-making experience easier.
2. You’re not able to get work done as fast
It stands to reason that if you have a computer that is over five years old and wasn’t built by a music computer manufacturer, it’s probably going to run a little slower than you’d like. Are your tracks taking longer to bounce down? Can’t run as many plugins as you used to? There’s only so much power you’re going to get out of an older computer that wasn’t built for audio, and that’s when it’s time to look at a new rig.
3. Your workflow is suffering
When you spend more time troubleshooting than you do making music, that’s a bad thing. We all know that time is limited, especially for being creative, and you have to make the most of your time. When you have to spend your time making your computer do what you want, instead of making music, your frustration goes up, and your creativity goes down.
4. You’ve outgrown your machine
Let’s face it, it happens. You get a computer, you put it to good use, but with time and growth, your needs outweigh what your computer is capable of. Maybe you need to be able to record more tracks at once, or maybe you simply can’t do what you need with your current hardware. It’s ok, that just means you’re growing as a creative, and it’s time to look for a new computer.
5. Your current machine is loud, filled with bloatware, and doesn’t fit your studio
Most off-the-shelf computers – meaning those from big box stores and websites – might seem like great deals on paper, but when you get them home, you find that the hard drive is filled with bloatware (did you really want all those demo antivirus applications? Yeah, we didn’t think so). That’s why the computer was cheap: lots of companies rented out space on your hard drive. And the Windows installation image isn’t a true Windows image, which means you can’t even re-install Windows cleanly. All that bloatware also makes the computer noticeably slower, which gets in the way of getting anything done.
Off-the-shelf computers aren’t made with silence in mind, nor with the needs of a creative in mind. They use sub-par components that make the computer loud – a real problem when you need to record that perfect vocal cut. And many of those components are proprietary, so you can’t purchase replacement parts.
What about Support? If you have an issue with your music software, you can’t call a big box computer manufacturer and ask them for help – they simply won’t help you. That can be pretty frustrating when you need answers.
Finally, off-the-shelf computers rarely fit the needs of the creative – literally. They’re not rackmountable, they don’t have the motherboard slots you need (like legacy PCI slots, for instance), and their case sizes can be limiting at the very least. All of this leads to a less-than-stellar experience with a computer you paid good money for, expecting it to be great for audio production.
These are just a few reasons you should look at obtaining a new computer which has been certified for music production. If you answered yes to even one of these points, you might want to consider getting a new music computer so you can truly get back to being creative with your computer.
Granted, it was hard to narrow it down to five. But these goodies have stood out over the past year as being essentials for my own studio, and they can contribute much to any studio makeover.
Uninterruptible Power Supply
I first became aware of the power of the UPS with ADATs. My ADATs used to do weird things, but stopped doing weird things after I bought a UPS. My friends with ADATs who didn’t have a UPS experienced weird things. Anecdotal evidence? Sure. But the first time a UPS keeps your project alive when some idiot drunk driver slams into a power pole and you lose your electricity, or you live where lightning is a frequent visitor, you’ll be glad you paid attention to this article and got a UPS. Just make sure you find one with sufficient power for your super-duper multi-core wonder box (and your monitor)—a lot of UPS devices in office supply stores are for little old ladies who use Pentium 4 computers only on Sundays to cruise the internet for recipes.
ACT (Active Controller Technology; in SONAR) is a powerful protocol, and its complexity can be sufficiently daunting that some people never take advantage of it. However, one of the rarely-considered advantages of a powerful protocol is that it’s often powerful enough to be used in a more basic way. So if you’ve wanted to take advantage of ACT without having to reach for the aspirin, you’re in the right place.
The conventional approach to ACT is using templates that let you apply hands-on control to various instruments and effects. This usually implies having a dedicated controller, spending some time setting up assignments and creating templates, and so on. However, you can also treat ACT more like a “controller scratch pad” that’s easy, efficient, and works with just about any MIDI controller. It’s the ideal solution for when you simply want some hands-on control without having to venture very far into left-brain territory.
Step 1: Choose Your Controller
One of my favorite ACT controllers is Native Instruments’ discontinued Kore 2 controller. The industrial design is first-class, it’s built solidly, and there’s enough functionality for what we need. Another advantage is that when NI stopped supporting Kore, the eBay prices took a major tumble. Although the examples in this article are based on Kore, please note that the same principles apply to virtually any MIDI controller.
Step 2: Grab Your Software
Many controllers have dedicated drivers, so if needed, make sure you have the latest. NI still offers the 32/64-bit Kore 2 Controller Driver 3.0.0 and the latest NI Controller Editor, which you can download for free from their site. Follow the instructions when installing, or you’ll wonder why the controller doesn’t work.
(Note: With the Kore 2 controller, you may first be greeted with an unusable bright red display. No worries: Hit Kore 2’s F2 button, navigate to Set, hit Enter, and use the navigation buttons and data wheel to control the Contrast and Backlight parameter values.)
The Controller Editor for NI’s Kore lets you specify various characteristics of the Kore 2 controller. In this picture, a button is being assigned to output a trigger when pushed down.
Various controllers may have options—such as assigning buttons to a latch, toggle, or trigger mode. Many of them have editors; Kore 2’s is somewhat more sophisticated than many others, but again, the principles are the same. In the case of Kore you open the Editor, select Kore Controller 2 from the drop-down menu, and use the Edit button in the Templates tab to choose New. This creates a general purpose MIDI control template. (While you’re at it, I recommend assigning the eight main buttons associated with the pots to Trigger, and action on Down. For a shift button, assign the monitor [speaker icon] button to Gate, again with action on down. Go to the file menu, and save the configuration as “Sonar ACT.ncc.”)
Ask songwriters about writing on a computer, and many of them will tell you it’s a creativity killer—as they reach for an acoustic guitar or piano to get their ideas down. But it doesn’t have to be that way. Although DAWs are thought of traditionally as being all about recording, editing, and mixing, for reasons we’ll cover here I’d rather boot up Sonar for songwriting as well.
Approaches to songwriting vary considerably, from those who strum some chords on a guitar for ideas, to those who start with beats, to those who seem to draw inspiration out of nowhere, and want to record what they hear quickly—before the inspiration fades. As a result, this article isn’t about what you should do to write songs, but rather, describes some particular Sonar tools in depth—some (or all) of which might be very helpful if you’re into songwriting.
Although songwriting styles are very personal, I think we can nonetheless agree on a few general points: While songwriting, you want your tools to stay out of the way and be transparent. You want a smooth-flowing, efficient, simple process; songwriting isn’t about endlessly tweaking a synth bass patch, but about coming up with a great bass part—thanks to the fluid nature of digital recording, just about anything can be replaced or refined at a later date. You want an environment that can simplify turning your abstract ideas into something tangible, while losing as little as possible in the translation. So, let’s look at some Sonar techniques that can help you accomplish that goal.
THE MIDI QUICK START
Normally you need to arm a MIDI track before you can record on it, but it’s possible to defeat this so that recording starts on any selected MIDI track as soon as you click on the transport’s Record button. I realize the default setting is there to prevent accidental overwriting of MIDI tracks, but personally, I find not having to arm a track liberating—it saves time and makes the recording process flow faster. To do this:
Go Edit > Preferences > MIDI > Playback and Recording.
Check the box for “Allow MIDI Recording without an Armed Track” (the 1st box under Record).
Click Apply then OK to close preferences.
It’s possible to record MIDI tracks without having to arm them first, which can be a real time-saver over the course of a song.
Sometimes EQ is more about “sonic sculpture” than anything else
by Craig Anderton
One of the most important aspects of mixing is using EQ to “carve out” a specific frequency range for instruments so they don’t conflict with each other. If instruments have their own sonic space, it’s easier to hear each instrument’s unique contribution, which increases the mix’s clarity.
Dan Gonzalez did a great series for the Cakewalk blog on subtractive EQ, and how cutting frequencies can help create a better mix; this is more of a complementary article about how I carved out EQ for various instruments in a cover version of the song “Black Market Daydreams” (by UK songwriter Mark Longworth). All the displays are set for +/-6 dB.
Choirs: Using a low-shelf response to cut everything from the midrange on down gives the choir more brightness and “air.” This way it sort of floats over the mix. The same approach works well for ethereal pads, and lets you mix them a little lower to make space for other sounds. Also note that I couldn’t resist throwing a little Gloss in there…