Basics: Five Questions about Effects Placement

By Craig Anderton

There are plenty of places in SONAR where you can process the audio signal, but you need to know how to choose the right one.

What’s an “insert” effect? Don’t you always “insert” an effect? You indeed “insert” effects, but there’s a specific effect type usually called an Insert effect that inserts into an individual mixer channel. In SONAR, this inserts into a channel’s FX Bin or the ProChannel (Fig. 1).

Fig. 1: The FX bins for two channels contain insert effects, as does the ProChannel for the Vocals channel.

Insert effects affect only the channel into which they are inserted. Typical insert effects include dynamics processors, distortion, EQ (because of EQ’s importance, it’s a permanent ProChannel insert effect), flanging, and other effects that apply to a specific sound in a specific channel.

Then what’s a “send” effect? Also called an …

Basics: Five Questions About Panning Laws

By Craig Anderton

It’s not just a good idea, it’s the law…panning law, that is. Let’s dispel the confusion surrounding this often-misunderstood topic.

What does a panning law govern? When a mono input feeds a stereo bus, the panning law determines the apparent and actual sound level as you sweep from one side of the stereo field to the other.

But why is a “law” needed? Doesn’t the level just stay the same as you pan? Not necessarily. Panning laws date back to analog consoles. If a pan control had a linear taper (in other words, a constant rate of resistance change as you turned it), then the sound was louder when panned to center. To compensate, hardware mixers used non-linear resistance tapers to drop the level at the center, typically by 3 dB RMS. This gave an apparent level that stayed constant as you panned across the stereo soundstage. If that doesn’t make sense…just take my word for it, and keep reading.
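If you’d rather see the math than take my word for it, here’s a minimal sketch in Python (a generic constant-power sin/cos law, not SONAR’s actual code) showing where that 3 dB center drop comes from:

```python
import math

def constant_power_pan(pan):
    """Constant-power (sin/cos) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain) as linear amplitude factors."""
    angle = (pan + 1.0) * math.pi / 4.0   # map -1..+1 onto 0..90 degrees
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)     # centered
print(round(left, 3), round(right, 3))    # 0.707 0.707
print(round(20 * math.log10(left), 1))    # -3.0  -> the "-3 dB center" drop

left, right = constant_power_pan(1.0)     # hard right
print(round(left, 3), round(right, 3))    # 0.0 1.0 -> full level in one channel
# left**2 + right**2 equals 1 at every pan position (constant power),
# which is why the apparent level stays even as you sweep across the field.
```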

Okay, then there’s a law. Isn’t that the end of it? Well, it wasn’t really a “law,” or a standard. Come to think of it, it wasn’t a specification or even a “recommendation.” Some engineers dropped the center level a little more to let the sides “pop” more, or to have mixes seem less “monoized” and therefore create more space for vocalists who were panned to center. Some didn’t drop the center level at all, and some did custom tweaks.

Why does this matter to a DAW like SONAR, which doesn’t have a hardware mixer? Different DAWs default to different panning laws. This is why duplicating a mix on different DAWs can yield different results, and lead to foolish online discussions about how one DAW sounds “punchier” or “wimpier” than another if someone brings in straight audio files and sets the panning and faders identically.

A mono signal of the same level feeds each fader pair, and each pair is subject to a different SONAR panning law. Note the difference in levels with the panpot panned to one side or centered. The tracks are in the same order as the descriptions in SONAR’s panning laws documentation and the listing in Preferences. Although the sin/cos and square root versions may seem to produce the same results, the taper differs across the soundstage between the hard pans and center.

This sounds complicated, and is making my head explode—can you just tell me what I need to do so I can go back to making music? SONAR provides six different panning law options under Preferences, so not only can you choose the law you want, but the odds of being able to match a different DAW’s law are also excellent. The online help describes how each panning law affects the sound. So there are really only two crucial concepts:

  • The pan law you choose can affect a mix’s overall sound if you have a lot of mono sound sources (panpots with stereo channels are balance controls, which is a whole other topic). So try mixes with different laws, choose a law you like, and stick with it. I prefer -3 dB center, sin/cos taper, and constant power; the signal level stays at 0 dB when panned hard right or left, but drops by 3 dB in each channel when centered. This is how I built hardware mixers, so it’s familiar territory. It’s also available in many DAWs. But use what you like…after all, I’m not choosing what’s “right,” I’m simply choosing what I like.
  • If you import an OMF file from another DAW or need to duplicate a mix from another DAW, ask what panning law was used in creating the file. One of SONAR’s many cool features is that it will likely be able to match it.

There, that wasn’t so bad. Ignorance of the law is no excuse, and now you have answers to five questions about panning laws.

 

Basics: Five Questions About Using Stompboxes with SONAR

by Craig Anderton

Plug-in signal processors are a great feature of computer-based recording programs like SONAR, but you may have some favorite stompboxes with no plug-in equivalents—like that cool fuzz pedal you love, or the ancient analog delay you scored on eBay. Fortunately, with just a little bit of effort you can make SONAR think external hardware effects are actually plug-ins.

1. What do I need to interface stompboxes with SONAR? You’ll need a low-latency audio interface with an unused analog output and an unused analog input (or two of each for stereo effects), and cords to patch these audio interface connections to the stompbox. We’ll use the TASCAM US-4×4 interface because it has extra I/O and low latency, but the same principles apply to other audio interfaces.

2. How do I hook up the effect and the interface? SONAR’s External Insert plug-in inserts in an FX bin, and diverts the signal to the assigned audio interface output. You patch the audio interface output to a hardware effect’s input, then patch the hardware effect’s output to the assigned audio interface input. This input returns the signal to the External Insert plug-in, where it continues on its way through the mixer. For this example, we’ll assume a stompbox with a mono input and stereo output.

3. What are correct settings for the External Insert plug-in parameters? When you insert the External Insert into the FX bin, a window appears that provides all the controls needed to set up the external hardware.

  • Send. This section’s drop-down menu assigns the send output to the audio interface. In this example, the send feeds the US-4×4’s output 3. Patch this audio interface output to your effect’s input. (Note that if an output is already assigned, it won’t appear in the drop-down menu.)
  • Output level control. The level coming out of the computer will be much higher than what most stompboxes want, so in this example the output level control cuts the signal by about 12 dB to avoid overloading the effect.
  • Return. Assign this section’s drop-down menu to the audio interface input through which the stompbox signal returns (in this example, the US-4×4’s stereo inputs 3 and 4). Patch the hardware effect output(s) to this input or inputs.
  • Return level control. Because the stompbox will usually have a low-level output, this slider brings the gain back up for compatibility with the rest of the system. In this example, the slider shows about +10 dB of gain. (Note: You can invert the signal phase in the Return section if needed.)

4. Is it necessary to compensate for the delay caused …

Multi-Track Drum Editing – Identifying & Splitting Drum Hits

You need to start with a great performance

Before you begin to edit drum stems, make sure you are working with tracks that were recorded reasonably close to a click; they need to be consistent. Tightening up a performance is invasive and requires a lot of time. If the drummer can’t put in the time to learn the parts, wait until they are ready to record their parts properly. Keeping this in mind will make your life easier, and it’s something to think about during the preproduction stages of any record.

 

A note about the editing process.

The purpose of this type of editing is to identify the strong hits of the drum beat, split them into tiny parts, and then crop and align those small parts. Where you make the splits will depend on which drum falls on each downbeat.

In this tutorial, kicks happen on every 1/4 note, snares on every 2nd and 4th beat, and high hats on every 1/8th note. This continues for about 20 measures with various fills here and there, and then it switches to a different pattern. We’ll move in measure-by-measure increments so that we don’t bite off more than we can chew at first.

Engage the metronome so that you can hear the pulse. This will help you check your work as you edit. Download the project files here (if you didn’t download them from our previous post) to get started:

Multi-Track Drum Editing Tutorial

Multi-track drum editing requires you to listen intently to the audio you’re editing. I recommend using headphones for this tutorial so that you can hear subtle edits. Erroneous edits are most exposed in the overheads, high hat, and cymbal frequencies, so we’ll need to solo those as well as the kick and snare tracks while we work through this project.

As we work through the session, the high hat and ride will need to be soloed because of the spot mics placed on them. Everything else will follow suit with your editing.

You can also adjust your Track Height in the Track View by dragging the borders of the …

Multi-Track Drum Editing – DLC and Basic Tools

The need for perfect drum production is at an all-time high.

In today’s world there is a huge need for all types of drum production, and everything from VST instruments to advanced drum replacement software has been growing in popularity. Records that require tracking live drums almost always have some sort of drum editing applied. The process is meticulous, long, and can be frustrating if you have never done this much in-depth editing before.

Downloadable Content:

Let’s start by getting you the files you need to follow along with this tutorial.

Multi-Track Drum Editing Tutorial

Once downloaded, they should open just fine in SONAR X3.

Understanding the basics.

Before diving in, let’s take a look at some essential tools that we’ll be using for major drum editing. These tools may be basic to some, but they are exactly the functions we’ll need in SONAR to edit these drums.

Creating selection groups

The first step in editing multi-track drums is making selection groups. Once grouped, the clips are synced to one another for batch editing tasks. During the course of this tutorial we’ll be relying heavily on splitting clips, and grouping makes this faster and more efficient.

To create these, press CTRL+A in the Track View to select everything, then right-click on your clips. Near the bottom of the menu there will be an option that says Create Selection Group from selected clips. Select this, and a number will appear in the header of each clip indicating that your clips are all in a group.

As we work through the song, the different Split edits will cause the group number to increase, indicating that a new group has been made. You can change whether or not this occurs within the Preferences here:

 

Tab to Transients

Tabbing to transients locates strong transients and moves …
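SONAR handles this for you, but if you’re curious what “locating strong transients” means under the hood, here’s a rough, hypothetical sketch (not SONAR’s algorithm) that flags spots where the short-term energy of a drum track suddenly jumps:

```python
import numpy as np

def find_transients(audio, sample_rate, window_ms=5.0, threshold=4.0):
    """Naive onset detector: flag points where short-term energy jumps.
    audio: mono float array; threshold: energy ratio that counts as a hit."""
    win = max(1, int(sample_rate * window_ms / 1000))
    energy = np.convolve(audio ** 2, np.ones(win) / win, mode="same")
    hits = []
    for i in range(win, len(energy) - win, win):
        before = energy[i - win:i].mean() + 1e-12
        after = energy[i:i + win].mean()
        if after / before > threshold:
            hits.append(i)            # sample index of a likely drum hit
    return hits

# Example: a quiet noise bed with two loud "hits" dropped in.
sr = 44100
audio = np.random.randn(sr) * 0.01
audio[10000:10100] += 0.8
audio[30000:30100] += 0.8
print(find_transients(audio, sr))     # indices near 10000 and 30000
```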

Basics: Five Questions about Filter Response

By Craig Anderton 

You can think of filters as combining amplification and attenuation—they make some frequencies louder, and some frequencies softer. Filters are the primary elements in equalizers, the most common signal processors used in recording. Equalization can make dull sounds bright, tighten up “muddy” sounds by reducing the bass frequencies, reduce vocal or instrument resonances, and more. 

Too many people adjust equalization with their eyes, not their ears. For example, once after doing a mix I noticed the client writing down all the EQ settings I’d used. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on those instruments in future mixes.

While specific EQ settings can certainly be a good point of departure, EQ is a part of the mixing process. Just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. Part of this involves knowing how to find the magic EQ frequencies for particular types of musical material, and that requires knowing the various types of filter responses used in equalizers.

What’s a lowpass response? A filter with a lowpass response passes all frequencies below a certain frequency (called the cutoff or rolloff frequency), while rejecting frequencies above the cutoff frequency (Fig. 1). In real-world filters, this rejection is not total. Instead, past the cutoff frequency, the high-frequency response rolls off gently. The rate at which it rolls off is called the slope. The slope’s spec represents how much the response drops per octave; higher slopes mean a steeper drop past the cutoff. A lowpass filter is sometimes called a high-cut filter.

 Fig. 1: This lowpass filter response has a cutoff of 1100 Hz, and a moderate 24 dB per octave slope.
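To put numbers on “cutoff” and “slope,” here’s a quick sketch (assuming an idealized Butterworth-style response rather than any particular SONAR EQ). Each filter pole contributes about 6 dB per octave, so the 24 dB/octave slope in Fig. 1 corresponds to a four-pole design, and the response is about 3 dB down at the cutoff frequency itself:

```python
import math

def lowpass_response_db(freq, cutoff, poles=4):
    """Idealized Butterworth lowpass magnitude at 'freq' (Hz), in dB.
    Slope beyond the cutoff approaches poles * 6 dB per octave."""
    magnitude = 1.0 / math.sqrt(1.0 + (freq / cutoff) ** (2 * poles))
    return 20.0 * math.log10(magnitude)

cutoff = 1100.0                          # Hz, as in Fig. 1
for freq in (550, 1100, 2200, 4400):     # an octave below, cutoff, 1 and 2 octaves above
    print(freq, round(lowpass_response_db(freq, cutoff), 1))
# 550   -0.0   (passband: essentially untouched)
# 1100  -3.0   (down 3 dB at the cutoff frequency)
# 2200  -24.1  (one octave above: roughly the 24 dB/octave slope)
# 4400  -48.2  (two octaves above: another 24 dB down)
```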

What’s a highpass response? This is the inverse of a lowpass response. It passes frequencies above the cutoff frequency, while rejecting frequencies below the cutoff (Fig. 2). It also Continue reading “Basics: Five Questions about Filter Response”

Basics: Five Questions about Audio Specs

By Craig Anderton 

Specifications don’t have to be the domain of geeks—they’re not that hard to understand, and can guide you when choosing audio gear. Let’s look at five important specs, and provide a real-world context by referencing them to TASCAM’s new US-2×2 and US-4×4 audio interfaces. 

First, we need to understand the decibel (dB). This is a unit of measurement for audio levels (like an inch or meter is a unit of measurement for length). A 1 dB change is approximately the smallest audio level difference a human can hear. A dB spec can also have a – or + sign. For example, a signal with a level of -20 dB sounds softer than one with a level of -10 dB, but both are softer than one with a level of +2 dB. 
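For the curious, the underlying arithmetic is simple: for amplitude (voltage) ratios, dB = 20 × log10(ratio). Here’s a quick sketch of the standard conversions (nothing here is specific to the TASCAM interfaces):

```python
import math

def ratio_to_db(amplitude_ratio):
    """Convert an amplitude (voltage) ratio to decibels."""
    return 20.0 * math.log10(amplitude_ratio)

def db_to_ratio(db):
    """Convert decibels back to an amplitude ratio."""
    return 10.0 ** (db / 20.0)

print(round(ratio_to_db(2.0), 1))      # 6.0   -> doubling the amplitude is about +6 dB
print(round(ratio_to_db(10.0), 1))     # 20.0  -> ten times the amplitude is +20 dB
print(round(db_to_ratio(-20.0), 3))    # 0.1   -> a -20 dB signal has 1/10 the amplitude
print(round(db_to_ratio(-10.0), 3))    # 0.316 -> so -10 dB is louder than -20 dB
```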

1. What’s frequency response? Ideally, audio gear designed for maximum accuracy should reproduce all audible frequencies equally—bass shouldn’t be louder than treble, or vice-versa. A frequency response graph shows what happens when you feed test frequencies at the same level into a device’s input and then measure the output to see if there are any variations. You want a response that’s flat (even) from 20 Hz to 20 kHz, because that’s the audible range for humans with good hearing. Here’s the frequency response graph for TASCAM’s US-2×2 interface (in all examples, the US-4×4 has the same specs).

This shows the response is essentially “flat” from 50 Hz to 20 kHz, and down 1 dB at 20 Hz. Response typically goes down even further below 20 Hz; this is deliberate, because there’s no need to reproduce signals we can’t really hear. The bottom line is this graph shows that the interface reproduces everything from the lowest note on a bass guitar to a cymbal’s high frequencies equally well. 
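Here’s a small sketch of how such a measurement works in principle: feed sine waves at the same level into a “device” (a simple one-pole highpass stands in for the interface here, since it too rolls off only the extreme lows), and compare output level to input level in dB. The stand-in filter and its 10 Hz cutoff are made up for illustration; the method is the point.

```python
import math

def one_pole_highpass(samples, sample_rate, cutoff_hz):
    """Stand-in 'device under test': a one-pole highpass that rolls off the extreme lows."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

def response_db(freq, sample_rate=48000, seconds=1.0):
    """Feed a unit-level sine at 'freq' through the device; compare RMS out vs. in."""
    n = int(sample_rate * seconds)
    test_tone = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
    out = one_pole_highpass(test_tone, sample_rate, cutoff_hz=10.0)
    rms_in = math.sqrt(sum(x * x for x in test_tone) / n)
    rms_out = math.sqrt(sum(y * y for y in out) / n)
    return 20.0 * math.log10(rms_out / rms_in)

for f in (20, 50, 1000, 20000):
    print(f, round(response_db(f), 1))   # the lows are slightly down; the rest is flat
```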

2. What’s Signal-to-Noise Ratio? All electronic circuits generate …

Basics: Five Questions about Latency and Computer Recording

Get the lowdown on low latency, and what it means to you

By Craig Anderton 

Recording with computers has brought incredible power to musicians at amazing prices. However, there are some compromises—such as latency. Let’s find out what causes it, how it affects you, and how to minimize it.  

1. What is latency? When recording, a computer is often busy doing other tasks and may ignore the incoming audio for short amounts of time. This can result in audio dropouts, clicks, excessive distortion, and sometimes program crashes. To compensate, recording software like SONAR dedicates some memory (called a sample buffer) to store incoming audio temporarily—sort of like an “audio savings account.” If needed, your recording program can make a “withdrawal” from the buffer to keep the audio stream flowing. 

Latency is “geek speak” for the delay that occurs between when you play or sing a note, and what you hear when you monitor your playing through your computer’s output. Latency has three main causes: 

  • The sample buffer. For example, storing 5 milliseconds (abbreviated ms, which equals 1/1000th of a second) of audio adds 5 ms of latency (Fig. 1). Most buffer sizes are specified in samples, although some specify this in ms; the sketch after this list shows the arithmetic for converting between the two.

 Fig. 1: The control panel for TASCAM’s US-2×2 and US-4×4 audio interfaces, showing the sample buffer set to 64 samples.

  • Other hardware. Converting analog signals into digital and back again takes some time, and the USB port that connects to your interface has additional buffers of its own. These delays involve the audio interface that connects to your computer, which converts audio signals into digital data your computer can understand (and vice-versa—it also converts computer data back into audio).
  • Delays within the recording software itself. A full explanation would require another article, but in short, this usually involves inserting certain types of processors within your recording software. 
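Converting a buffer size in samples into milliseconds of latency is just division by the sample rate. Here’s a quick sketch of that arithmetic (generic math, not tied to any particular driver):

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """Latency contributed by one buffer of the given size, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

# The 64-sample buffer shown in Fig. 1, at common sample rates:
for rate in (44100, 48000, 96000):
    print(rate, round(buffer_latency_ms(64, rate), 2), "ms")
# 44100 1.45 ms, 48000 1.33 ms, 96000 0.67 ms -> shrinking the buffer or raising
# the sample rate lowers latency, at the cost of working the CPU harder.
# Note that the full round trip (input buffer + output buffer + converters + USB)
# is larger than this single-buffer figure.
```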

2. Why does latency matter? …

Optimizing Vocals with DSP

Optimizing tracks with DSP, then judiciously applying the DSP-laden VX-64 Vocal Strip, offers very flexible vocal processing.

By Craig Anderton 

This is kind of a “twofer” article about DSP—first we’ll look at some DSP menu items, then apply some signal processing courtesy of the VX-64—all with the intention of creating some great vocal sounds.

PREPPING A VOCAL WITH “MENU” DSP 

“Prepping” a vocal with DSP before processing can make the processing more effective. For example, if you want to compress your vocal and there are significant level variations, you may end up adding lots of compression to accommodate quiet parts. But then when loud parts kick in, the compression starts pumping. 

Here’s another example. A lot of people use low-cut filters to banish rogue plosives (e.g., a popping “b” or “p” sound). However, it’s often better to add a fade-in to get rid of the plosive; this retains some of the plosive sound, and avoids affecting frequency response. 

Adding a fade-in to a plosive can get rid of the objectionable section while leaving the vocal timbre untouched. 

Also check whether any levels need to be evened out, because there will usually be some places where the peaks are considerably higher than the rest of the vocal, and you don’t want these pumping the compressor either. The easiest fix is to select a track, drag in the timeline above the area you want to edit, then choose Process > Apply Effect > Gain and drop the level by a dB or two.

This peak is considerably louder than the rest of the vocal, but reducing it a few dB will bring it into line. 

Also note that if you have Melodyne Editor, you can use the Percussive algorithm with the volume tool to level out words visually. This is really fast and effective. 

While you’re playing around with DSP, this is also a good time to cut out silences, then add fade-outs into silence and fade-ins up from silence. Do this with the vocal soloed, so you can hear any little issues that might come back to haunt you later. Also, it’s sometimes a good idea to normalize individual vocal clips up to –3 dB or so (leave some headroom) so that the compressor sees a more consistent signal.

The clip on the left has been normalized and faded out. The silence between clips has been cut away. The clip on the right fades in, but has not been normalized. 
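If you’re wondering what those operations actually do to the samples, here’s a minimal sketch with hypothetical helper functions (not SONAR’s DSP): peak-normalize a clip to –3 dB and apply a short linear fade-in of the kind that tames a plosive.

```python
import numpy as np

def fade_in(clip, sample_rate, fade_ms=30.0):
    """Apply a short linear fade-in, e.g. to soften a plosive at the clip start."""
    out = clip.copy()
    n = min(len(out), int(sample_rate * fade_ms / 1000))
    out[:n] *= np.linspace(0.0, 1.0, n)
    return out

def normalize(clip, target_db=-3.0):
    """Scale the clip so its peak sits at target_db (dBFS), leaving headroom."""
    peak = np.max(np.abs(clip))
    if peak == 0:
        return clip
    return clip * (10.0 ** (target_db / 20.0) / peak)

sr = 44100
vocal = np.random.randn(sr) * 0.2              # stand-in for a one-second vocal clip
vocal = normalize(fade_in(vocal, sr))
print(round(20 * np.log10(np.max(np.abs(vocal))), 1))   # -3.0
```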

With DSP processing, it’s good practice to work on a copy of the vocal, and make the changes permanent as you do them. The simplest way to apply …

The Art of Transient Shaping with the TS-64

Understand this often-misunderstood processor, and your tracks will benefit greatly 

By Craig Anderton 

Transient Shapers are interesting plug-ins. I don’t see them mentioned a lot, but that might be because they’re not necessarily intuitive to use. Nor are they bundled with a lot of DAWs, although SONAR is a welcome exception. 

I’ve used transient shaping on everything from a tom-based drum part to make each hit “pop” a little more, to bass to bring out the attacks and also add “weight” to the decay, to acoustic guitar to tame overly-aggressive attacks. The TS-64 has some pretty sophisticated DSP, so let’s find out how to take advantage of its talents.

But first, a warning: transient shaping requires a “look-ahead” function, as it has to know when transients are coming, analyze them, filter them, and then calculate when and how to apply particular amounts of gain so it can act on the transients as soon as they occur. As a result, simply inserting the TS-64 will increase latency. If this is a problem, either leave it bypassed until it’s time to mix, or render the audio track once you get the sound you want. Keep an original of the audio track in case you end up deciding to change the shaping later on. 

TS-64 TRANSIENT SHAPER BASICS

A Transient Shaper is a dynamics processor that modifies only a signal’s attack characteristics. If there’s no defined transient, the TS-64 won’t do much, or worse yet, will add unpleasant effects.

Transient shapers are not just for drums—guitars, electric pianos, bass, and even some program material are all suitable for TS-64 processing if they have sharp, defined transients. And it’s not just about making transients more percussive; you can also use the TS-64 to “soften” transients, which gives a less percussive effect so a sound can sit further back in a track.
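If you’re wondering how a transient shaper knows what an “attack” is, most designs compare a fast envelope follower against a slow one, and apply gain where the fast envelope leads, which is precisely the attack. Here’s a heavily simplified sketch of that idea (a generic illustration, not the TS-64’s actual DSP, which adds filtering and look-ahead):

```python
import numpy as np

def envelope(signal, sample_rate, attack_ms, release_ms):
    """One-pole envelope follower on the rectified signal."""
    atk = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def shape_transients(signal, sample_rate, amount=1.0):
    """Boost (amount > 0) or soften (amount < 0) attack transients."""
    fast = envelope(signal, sample_rate, attack_ms=1.0, release_ms=50.0)
    slow = envelope(signal, sample_rate, attack_ms=30.0, release_ms=50.0)
    attack = np.clip(fast - slow, 0.0, None)        # positive only during attacks
    gain = 1.0 + amount * attack / (slow + 1e-6)    # more gain where the fast envelope leads
    gain = np.clip(gain, 0.25, 4.0)                 # keep the boost/cut within a sane range
    return signal * gain

# Usage on a mono float array called "drums" (hypothetical):
# punchier = shape_transients(drums, 44100, amount=0.8)
# softer   = shape_transients(drums, 44100, amount=-0.8)
```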

There are two main elements to transient shaping. The first is …