Archive for the ‘Home Audio Recording’ Category

Song: Second Law

April 21, 2009


Second Law (2009) [Download]

I love working with VSTi’s. They provide a cheap (sometimes free) alternative to packing my studio space with a bunch of expensive instruments that I can’t afford and I don’t really know how to play anyway. Sure, the real thing sounds better, and if you have the luxury of easy access to hardware analog synths and grand pianos, you’d be foolish not to use them.

I don’t own a piano. Can’t play piano. But I want to learn.

This song started with a simple idea to piece together a few chords on the piano and see what would come of it. I’d sit at my console with no preconceived notion of a song, and use the DAW as a songwriting tool, starting with the piano — in this case, the TruePiano VSTi that comes with Cakewalk Sonar 8.

TruePiano is modeled (not sample-based) and doesn’t sound great, but it loads quickly and plays lag-free. If I came up with something worth keeping, I could always replace it with a nice-sounding sampled piano like NI’s Akoustik Piano.

This is one of those songs that, because I’m writing it at the console, never comes out as a proper demo. I create the song in layers, first laying the foundation (piano, here), then adding on rhythm or melodic instruments in iterative passes.

After I sufficiently nail the foundation, I’ll start working the rhythm and melodic tracks in an improvisational manner. I usually need 4 or 5 takes before I collect a decent vocabulary of runs, notes, vamps, etc., to piece into a coherent part.

Then, it’s just a matter of iterating over all the tracks to tighten and further define the specifics of the performances.

I pounded out a few repetitions of the chords on the piano and then copied them across 4 minutes of time, so what we’re looking at is a chord structure that doesn’t really change for the length of the song. This could be really boring, or maybe I could get away with crafting different parts by changing up the instrumental arrangement in certain sections. I’m not so hot at improvising at the keyboard, so I decided to just work with what I had.

Next up, I plugged in my Epiphone Dot, direct, and ran it through Amplitube 2, American Tube Clean preset with some Spring Reverb dialed up. I worked on some melodic lines for the intro and got a good taste of what a solo section might sound like.

I’m a big fan of guitar wankery. When it comes to my own playing, I can’t wait to get to the solo, and I have to restrain myself from wedging a solo in every last song.

Coming to have a strong feel for the melodic guitar part so early in the process seemed to predetermine that the song would be split into two symmetrical parts, divided by an instrumental break in the middle. I don’t know why I made this decision, and it only seems like a conscious decision in hindsight. Regardless, it is now an inviolable boundary of the song. Everything must fit into the container as it has been defined.

When it came time to put some lyrics to paper, I had two blocks of lyrics that began to take shape iteratively as I improvised vocals over the piano parts.

The loss of days
Makes you want to be mine
Our love is urgent, now

It’s good to be our generation
Think how the wretches before us
Lost their minds

Time is the issue: metaphorically, in the sense of it slipping away, and literally, in the sense that now is always the latest point in time.

It reminded me of Isaac Asimov’s 1956 short story The Last Question which is concerned with our perception of forever, the universe’s inexorable march toward maximum entropy, and the human preoccupation with reversing the process.

Asimov’s story is told as a series of vignettes, through the eyes of our distant descendants, each one more chronologically advanced than the last. They ask the same questions and struggle with the same inevitabilities, despite the ever expanding scope of their computational powers.

Excerpt from The Last Question:

It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father’s youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship.

…and, a few paragraphs later…

“So many stars, so many planets,” sighed Jerrodine, busy with her own thoughts. “I suppose families will be going out to new planets forever, the way we are now.”

“Not forever,” said Jerrodd, with a smile. “It will all stop someday, but not for billions of years. Many billions. Even the stars run down, you know. Entropy must increase.”

I continued writing lyrics, and as I laid pen to paper (and voice to mic), my focus became ever more narrow, zeroing in on a more literal interpretation of the story. My creative process had been hijacked! A viral infection, I say!

The result is I’m unhappy with the song after the instrumental break — the entire second half seems mediocre. I think the lyrics are far too literal and artless. I need to re-write them.

I could be one of those people
Living my life for a moment
Out of time

The loss of days
Makes you want to be mine
Our love is urgent, now

It’s good to be our generation
Think how the wretches before us
Lost their minds

Souls were never
Meant to be frozen in time
They have all expired, now

We wish that they were around
But now they’re gone
And that is such a shame

I guess I’m one of those people
Never did think that we’d run out
Of our time

Forever’s not
The sort of word to be kind
We’re all convergent, now

Don’t be afraid when the end comes
Entropy’s fated to claim us
In good time

Mother Nature’s not the sort
To be kind
Our love’s emergent, now

We wish we could be around
To watch us fall
And it would be OK

I tracked a first pass at the lead and background vocals after the lyrics were written. I didn’t want to do too much processing at this point because I felt the lyrics may change and my vocal performances are usually the last part to be set in stone. I double tracked the lead and background vocals with some delay on the second tracks of both, to enhance the feel during subsequent performances.

Next up, drums. I used the Toontrack Vintage kit, which is my favorite default for working out a drum part. I have a sneaking suspicion I’ll try out some of the Addictive Drums Vintage presets before all is said and done. I laid a kick and snare in one take and then some hi-hats in a second take, which, you’ll notice, lose the time on several occasions. I must’ve been a fur piece into a bottle of dark, red wine at this point, and the darker the better — the buzz, not the performance. =)

I plugged my Rickenbacker 4001 bass (the one in my blog logo) directly into the Firewire 410 and ran it through the Amplitube Ampeg bass amp sim, though I don’t remember which amp at the moment. I was able to mostly define the notes I wanted to play, but I’ll have to wait until my next pass at the drums to tighten up the groove here. I’m not quite catching that kick a lot of the time, and I don’t know if it’s the kick’s fault or the bass’s.

A good idea at this point might have been to tighten the drums and bass, but I lost interest in the fundamental rhythm tracks momentarily, and moved on, adding some more melodic flavors. I wanted the retro-futuristic, warm sounds of an analog synth.

One of my goals for this album is to spin tales that seem, on the surface, far removed from the concerns of our daily existence, but that are rooted, however obliquely, in some relatable emotional truth. The things that we’ll never leave behind, no matter how far we stray from our biological bootstraps.

I was looking for sounds that are synthetic imitations of the organic, and I had two previous synth touchstones in mind.

Eons of synth coronet carved these dunes

1) The delay-drenched synth coronets (originally an ARP, I believe) from Pink Floyd’s Shine On You Crazy Diamond. I had always imagined the back cover of that album, with the invisible man in the desert, as the backdrop for the beginning of Shine On. The coronets sound of the wind, shaping the dunes on a geological timescale, condensing the eons, bringing us to the point where the narrative begins.

I found mine in Arturia’s Moog Modular V VSTi.

Life forms are full of noxious gasses

2) I really liked the tactile quality of the soupy, gurgling, aquatic synth sounds throughout Ween’s album, The Mollusk, particularly the flatulence of The Golden Eel. A guttural burp, signifying life.

I didn’t quite find the same sound. I latched onto a hivey buzz kind of sound with a bit of the lower gurgling I was looking for. I found it in the z3ta+ synth that comes with Sonar 8.

Next steps: Finalize the second-half lyrics. Tighten the groove. Re-record the vocals.

I’m gonna skip the track-by-track breakdown, unless anyone finds that particularly useful or interesting.

Estimated Song Completion: 60%





Not-So-Common-Sense Home Recording Tips #1

April 19, 2009


First qualifier: I do not speak as an authority or expert in music production, to wit: Your Mileage May Vary.

One of the reasons I started this project was in the hopes that somebody might see how I’m doing things currently and offer some advice on how I can do them better.

I don’t plan on justifying all of my advice with deeply technical explanations. Any information I have to share at this point is simply representative of my own experience and what has worked best for meeting my own standards.

Second qualifier: My entire signal chain stays in-the-box from tracking to mix-down, to wit: if you’re using racks of outboard gear, mixing consoles, and analog equipment, then some of this information will likely not apply to you.

These tips are aimed at the novice-to-intermediate musician and recording hobbyist, the kind of person who may not even be using a computer built specifically to function as a DAW. This is the stuff I wish someone had told me when I first started recording.

Exceptions duly noted, I’d like to share with you some of the not-so-common-sense tips I’ve accumulated. You could say that most of these qualify as common-sense, but for numerous reasons (laziness, obstinacy, ignorance, disbelief) it’s taken me years to integrate them all into my recording routine.

Let’s start with the painful one. You know, the one where I tell you that your gear sucks…


You need not invest a fortune building a recording setup capable of capturing the sounds you want.


Let’s not kid ourselves here. Home audio recording is an expensive hobby. While you should always, as a rule, make the most of what ya got, you’re going to hit a wall eventually and have to throw some cash at it.

The key is knowing where to invest your money, and there are several crucial pieces of equipment on which you should not skimp. I don’t mean you always need to buy the high-end, top-of-the-line models, but you should do your research on these items because in some cases, the difference between $50 and $250 is immeasurable.

Microphones

If you are going to be recording vocals, acoustic instruments, amplified guitars, or drums, do yourself a favor and get a good microphone or two. Or three.

You’ll probably want a Shure SM57 or two. Or three. This mic is the Swiss Army Knife of the home studio. Use it for vocals, use it to track your acoustic guitar or mic your amp. Expect to pay $75 – $100 for one.

Get your hands on one good condenser microphone capable of capturing a large, round sound for vocals.

I picked up the Audio Technica AT4040 for $250 and have been very happy using that for my vocals and often as a second mic for my acoustic guitar. I also use an M-Audio Aries that came free with my Firewire 410 Audio Interface, but I wouldn’t especially recommend that one for a purchase.

Do your research and find out which one is best for you. The sky’s the limit on mic prices, but expect to pay $200 – $300 depending on how big you want to go for your prized “specialty” mic. It seems like this range is where you start to get into the more interesting and capable mics.

Also, pick up a pop screen if you’re going to be recording vocals. This will reduce the amount of time you spend cleaning up the plosive ‘p’ and ‘b’ sounds after tracking. You can get a cheap one for $30, or you can build your own, if you’re handy like that.

…and one or two mic stands, don’t forget those.

Monitors

Having a pair of studio monitors capable of faithfully (and flatly) reproducing the full spectrum of audible frequencies is utterly essential to producing a good mix. These are your windows to the world.

Your multimedia speakers – even the über-deluxe 7.1 set you dropped a few hundred on for your gaming setup – are next-to-useless for understanding what your mix really sounds like and will sound like on other folks’ home stereo systems, car stereos, iPod earbuds, etc.

Making the leap from cheap headphones or computer speakers to a good pair of monitors is a surefire way to improve your mixes without much effort.

Granted, it won’t do the mixing for you, but just being able to hear an accurate, clear representation of your music for the first time will enhance your ability to shape it by 100%.

If your mixes are a muddy, indistinct and dead sounding mess, you may not have bad ears – I always thought I did and still do to some extent – you may just need some good monitors.

I use a pair of M-Audio Studiophile BX8a monitors and can honestly say they changed my world. I got them for $200 each, and again, like mics, this seems to be the range where quality spikes dramatically. Budget $500 for a pair and you’ll probably look back on this as one of the smartest investments you ever made for your studio.

While you’re at it, get a set of isolation pads to dampen the effects of the environment on the audio coming from your monitors. You don’t want them sitting directly on your desk. I use Auralex MOPADs. $40, done.

Audio Interface

I have to admit I don’t really have a horse in this race.

My definition of a good audio interface is one that has an excellent on-board pre-amp with phantom power, a sufficient number and variety of inputs and outputs, and solid driver support for your platform of choice.

You’ll probably drop at least $250 for a decent interface with two mic/line inputs.

I use an M-Audio Firewire 410, which has two microphone / line inputs (w/pre-amp and phantom power) recording @ 24-bit/96kHz, a MIDI I/O, and line-outs for my monitors. The drivers have always been somewhat flaky, though overall, it’s done the job with a minimum of fuss. The price was right at the time and it came with a free condenser mic. Ultimately my decision was economic, which may not be the best criterion, but sometimes it is the only one that matters.

If I were to purchase a new audio interface today, I’d look for one with a minimum of four microphone / direct inputs, possibly eight. Of course, this is based only on my specific needs and is not necessarily a recommendation.

Ask yourself these questions:

  • What kind of music will I be recording?
  • Will I ever be recording more than 1 microphone or line instrument at a time?
  • More than 2?
  • Am I ever going to run my signal back out through a mixer or other hardware and then bring it back into the box?
  • Am I going to use a mic pre-amp before the audio-interface?

The best advice is to select an audio-interface that satisfies your current needs and will likely accommodate your future needs. Your Creative Labs Audigy 2 probably won’t cut it now OR later.


Divide your recording workflow into 3 discrete stages


The power and flexibility of modern DAWs gives us musicians an unprecedented wealth of options for producing the sounds we seek.

With the touch of a single hot-key, we can conjure aux busses from thin air, send multiple tracks to them, and add virtual racks of software compressors, limiters, EQs and pitch correction with impunity. The desire to add something, anything, to your tracks is a constant temptation.

The sheer number of possible actions is sometimes overwhelming, and they’re also dangerous in the sense that with every maneuver, you might be shooting yourself in the foot.

To cop a cliché from old Stan Lee: With great power, there must also come — great responsibility!

Establishing ground rules for what to do and when to do it can provide some valuable structure to your workflow by effectively limiting your options at any given time and, therefore, mitigating the risk of shooting yourself in the foot.

I find that my recordings sound far better if I have the discipline to structure the process into 3 clearly delineated stages: Tracking, Mixing, and Mastering.

These are well-known parts of the recording process to most anyone who’s done minimal reading on the subject. Many a hefty tome has been devoted entirely to each of these stages and the unique (though often wildly divergent) approaches to tackling them.

The problem for even well-educated, but inexperienced engineers working entirely in-the-box with a DAW is that it is not always clear where one stage ends and another begins. From a software standpoint, you have all of your options open at all times.

Certain recording activities obviously belong to a certain stage, for example, exporting a 44.1kHz/16-bit WAV file happens during the Mastering stage – and even that one is questionable if you cut iterative, rough mixes for testing purposes. For other activities, maybe applying a band-pass EQ filter, the line may not be as obvious.

Reality dictates that there will always be some fluidity between stages. During the Mixing stage, for instance, I inevitably find myself needing to track a new performance, or re-track an existing one.

Define your workflow, but give yourself permission to violate it as necessary.

Here’s the break-down of my workflow in Cakewalk Sonar. These rules should apply equally across other DAWs.

Tracking

The singular purpose of the Tracking stage is to capture performance.

First, get some headphones and mute your monitors if you’re going to be recording acoustic signals, as you (usually) don’t want the monitor output to feed back into the mics / pick-ups.

I always set my project to record @ 44.1kHz / 24-bit. You may have your own reasons for using different settings, and that’s ok, but if you do not have a reason to do otherwise, just trust me on this one. Do your own research if you’re curious about the rationale.

Before I record any given track, I’ll calibrate my inputs by playing the instrument at average and peak volume and eyeballing the meters on that incoming track in my DAW.

I’m aiming to get the average around -18dB and the peaks around -10dB, well below the ‘red zone’ @ 0dB that represents a clipping audio signal.

If I need to adjust the volume so I’m falling within that range, I do NOT touch the track slider in my DAW; rather, I use the volume/gain knob on my audio interface directly. This goes only for line-in/mic tracks.

If I’m recording a VSTi (say drums or piano), I’ll see if that plug-in has its own internal “output” control and adjust that. Otherwise, all I’ll have are the track sliders, so I’ll pull those down until the meter is falling into the same -18dB to -10dB range.
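If you’re curious what those meter numbers actually mean, dBFS values are just a logarithmic view of sample amplitude. Here’s a little Python sketch (purely illustrative, not part of my workflow; the function names are made up) that computes peak and average (RMS) levels the same basic way your DAW’s meters do:

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (-1.0..1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """Average (RMS) level in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a 440 Hz sine at 0.25 amplitude: peaks near -12 dBFS,
# with the RMS level about 3 dB lower.
sine = [0.25 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(sine), 1))  # -12.0
print(round(rms_dbfs(sine), 1))   # -15.1
```

Note the roughly 3dB gap between peak and RMS on a steady sine wave; real performances show a much bigger gap, which is why you eyeball both numbers when calibrating.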

Now, you might notice that the output levels are much lower than you’re accustomed to. How can you ever be expected to mix at such a quiet level? There’s an easy solution to your problem. Turn up your monitors! And turn up your headphones if you can’t hear while tracking.

There are many differing opinions on tracking levels, and there is a reason why -18dB is a magic number of sorts. Again, you can do that research on your own.

The short reason I’m recommending this is because recording at a lower level gives you the head-room (below clipping) to play with effects and gain adjustments at the mixing stage (EQ boosts, for example). This technique has had the subtle side-effect of opening up my own mixes. It’s just easier for me to hear how the component tracks should fit together in the mix since I started recording lower and turning my monitors up.
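If you want to see the head-room arithmetic concretely, it comes down to converting dB to linear gain. This is just a sketch to illustrate the numbers (`db_to_gain` is a name I invented, not anything in a DAW):

```python
import math

def db_to_gain(db):
    # A dB change expressed as a linear amplitude multiplier.
    return 10 ** (db / 20)

# A track averaging -18 dBFS sits at about 1/8 of full-scale amplitude,
# so there is plenty of room above it before clipping.
print(round(db_to_gain(-18), 3))  # 0.126
# A +6 dB EQ boost roughly doubles the amplitude; applied to a -18 dBFS
# track, that still only brings it up to around -12 dBFS.
print(round(db_to_gain(6), 2))    # 2.0
```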

Recording at a consistent level across all your tracks has the added benefit of helping you get closer to your desired mix. You’ll be riding the faders a lot less during mixing.

Make it a goal of your Tracking stage to come as close as possible to your target sound before you enter the Mixing stage. If you’re not using outboard dynamics processors, EQs, or pre-amps, your primary tool for achieving that target is your performance.

If you’re tracking electric guitar, this can involve systematically nailing down your pick-up selection and tone knobs and giving serious consideration to your picking technique. Similar technical concerns apply for acoustic guitar, with the added variables of mic selection and placement. For bass tracking, will you be playing with your fingers or a pick?

Think through all the details of your performance before you hit record, and be sure to take notes in case you need to re-track later.

The same concerns apply for vocals. Don’t neglect your ability to “work” the microphone to achieve different effects. Getting in close and whispering your vocal (which may require calibrating the input levels higher) results in a vastly different sound than standing back and wailing the same lines.

It’s much easier to change your sound during tracking than during mixing.

My preference is to record all of my signals dry, with no VST effects (delay, reverb, EQ, compression) applied at this point. The reason being that it forces you to focus on capturing your highest-level technical performance.

I almost always slap a compressor on my bass track by default, but I still do not use one during tracking because my goal is to play the bass line well enough to not need to use one. Capture the correct dynamics in your performance, don’t wait until you hit Mixing.

I make frequent exceptions to the dry tracking rule in cases where an effect is integral to the performance. For instance, if I use an amp simulator like Guitar Rig for electric guitar, I will track with that VST effect already applied. After all, if I was mic’ing a real amp and recording the output, that signal already contains the sound of various effects pedals and the amp itself.

Imagine you’re tracking the guitar part for Pink Floyd’s Run Like Hell in your modern DAW, plugged directly into the audio interface. You’d be hard pressed to nail that performance if you didn’t enable a delay VST during tracking.

Some singers might also find it hard to achieve the right “vibe” for their performance without hearing some reverb. If that’s the case for you, go ahead and slap some reverb on while you’re tracking. It doesn’t even have to be the reverb you end up using in the final mix. If it helps you capture a better performance, use it!

Having some trouble hitting your backing vocals because you can’t hear over the lead vocal? Don’t be afraid to blur the lines between Mixing and Tracking by turning down the levels on your lead and maybe panning it 50%, while panning your incoming backing track 50% the other direction. Just be sure to reset them to zero after you’ve captured the performance and before you start the Mixing proper.

The more time you spend during the Tracking stage, the less time you’ll spend in Mixing and, I find, the happier you’ll be with the final mix itself.

You can always fix problems during Mixing, but it should always be your goal to minimize the need for editing/mixing fixes to the best of your musical ability.

On the other hand, don’t get too hung up shooting for perfection.

There’s an axiom in software engineering: Premature optimization is the root of all evil. I’d argue that this also applies to sound engineering.

Mixing

The purpose of the Mixing stage is to glue your component tracks together so that the resultant sound accurately represents your concept of a complete song.

Mixing requires an entirely different skill set than Tracking.

During Tracking, you’re concerned mostly with your musical ability. A song will only track as well as you’re able to compose and perform it.

With Mixing, you’re entering the realm of sound engineering skills. Honestly, a lack of sound engineering knowledge and experience is the biggest hurdle for musicians just beginning to mix their own songs. There’s really no getting around the time and practice required to become good at mixing. It is difficult, highly subjective work and there are relatively few general-purpose, silver bullets to rely on.

Fortunately, there’s a surfeit of information just waiting for you to absorb. Soak it up, try it out. Browse the deep archives and ask questions on audio community forums like Tape Op Message Board and the Harmony Central Forums.

I’d recommend directing your initial research toward understanding how compression/limiting, EQ, panning/track-placement, and aux busses work. Know exactly when and why you’d use these techniques, before you start employing their VST implementations regularly.

Obviously, you’ll need to experiment with these techniques to figure out their effects on your own tracks, but don’t just blindly send all your vocals to an aux buss with some compressor pre-set because somebody said you always should.

There aren’t many hard rules in the Mixing stage, but there’s a whole lot of opportunity to do something wrong. These are a few of the guard rails I’ve defined for myself, to prevent that:

Only work the individual tracks. Manage your pans, levels, and effects at the track level. Leave the master buss alone and don’t apply any effects to it, yet.

Try to keep the master buss levels around -18dB. Do this by managing the levels on your individual tracks. Slide the faders on the tracks, not the master buss. Remember, if you cannot hear the mix well enough at this level, turn up your monitors!

If you’re looking to sonically glue disparate elements together, think about creating an aux buss to send the tracks to and apply the compression there. You might have a buss for drums or one for vocals. You might also have a common reverb buss that you’re sending vocals and guitar through.

Keep your edits non-destructive. Always save your original tracks from the Tracking stage and if you’re going to be cutting or directly altering the waveform in any way (applying a volume cut on a single note, for instance), make sure you’ve cloned a new track from it and do your work there.

As a matter of habit, I usually save 3 different projects, one for the Tracking, one for the Mixing and one for the Mastering, but that’s just a safety precaution.

Give your ears a rest. If you find the mix getting away from you, then walk away and come back later, refreshed.

Train your ears by listening to other songs you like. Try to hear the mixing work that went into them. Where is the guitar in the mix? Where are the vocals? Is the reverb on the acoustic guitar panned to a different side than the guitar itself?

Use a pre-existing song as a mix target. Suppose you really like the interplay between the acoustic guitars and piano on the Rolling Stones’ Angie and you’d like your own guitar and piano to sit similarly in your mix. Import Angie as a separate track into your project so that you can constantly compare the two and tweak yours to approach the ideal. Apply no processing at all to the imported audio, but be sure to bring the levels down around -18dB, since that’s your own target level.
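If you’d rather calculate that level cut than eyeball it, the arithmetic looks like this. A hypothetical sketch only (`gain_to_target` is my own made-up helper, and real meters average RMS over short windows rather than the whole file):

```python
import math

def gain_to_target(samples, target_dbfs=-18.0):
    """dB of gain needed to bring a track's RMS level to the target."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return target_dbfs - 20 * math.log10(rms)

# A commercial track is usually mastered hot; this stand-in signal sits
# around -6 dBFS RMS, so it needs roughly a 12 dB cut to reach -18.
loud = [0.7 * math.sin(2 * math.pi * n / 100) for n in range(10000)]
print(round(gain_to_target(loud), 1))  # -11.9
```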

Cut a rough mix to test on different speakers. Even if you’re mixing in an acoustically perfect environment, with perfect monitors (both impossible), your song will NOT sound the same on every computer speaker, home stereo, iPod, or car stereo. You’ll need to test the mix on a variety of systems and return to the console later to compensate for the differences you’ve observed.

Before you export the master track to a 44.1kHz/16-bit WAV file for burning to CD (or encoding in MP3), there’s one more thing to consider.

If you’ve hit that -18dB mark across the board and already tried to export your audio, you’ll have discovered that playing back the resulting file outside of your DAW results in a very, very quiet mix.

You’ll need to do some quick pseudo-mastering to reach an acceptable loudness. This is arguably the only time during Mixing when it is acceptable to put an effect on the master buss. Some people apply compression on the master as part of Mixing; others, like myself, prefer to wait until Mastering.

At any rate, for a rough mix, you’ll need to get your master buss as close to 0dB as possible without clipping. That’s one of the goals of mastering; you’re just not going to put as much thought into it at this point.

The easiest way to do this is to throw a mastering limiter on the master buss and maybe some compression or EQ, spend a minimal amount of time tweaking the parameters (don’t squash the dynamics of your song too much), and cut the rough mix. Setting a ceiling on the limiter at -2dB should be a safe bet.
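The gain side of that rough-mix pass is easy to sketch. A real mastering limiter compresses peaks with look-ahead, attack, and release smoothing; this illustrative snippet (my own toy code, not any plug-in’s algorithm) shows only the simplest part, scaling the whole mix so its peak lands at the ceiling:

```python
import math

def normalize_to_ceiling(samples, ceiling_db=-2.0):
    """Scale a mix so its peak lands exactly at the ceiling.
    Unlike a true limiter, no peaks are compressed; this is just
    the overall makeup gain."""
    ceiling = 10 ** (ceiling_db / 20)
    peak = max(abs(s) for s in samples)
    return [s * (ceiling / peak) for s in samples]

# A quiet mix peaking at 0.12 (about -18 dBFS), raised to a -2 dBFS peak.
mix = [0.12 * math.sin(2 * math.pi * n / 80) for n in range(8000)]
louder = normalize_to_ceiling(mix)
peak_db = 20 * math.log10(max(abs(s) for s in louder))
print(round(peak_db, 1))  # -2.0
```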

Mastering

The ultimate goal of the Mastering stage is to get the individually mixed songs prepared to become part of an end product, in most cases an album. I don’t think it’s a gross oversimplification to say that mastering is concerned mostly with issues of loudness and dynamics.

In the same way that Mixing is an art unto itself, Mastering requires its own set of highly-specialized skills completely distinct from Mixing. If you’re going to be releasing a commercial album, you’re probably going to be paying someone else to do the mastering for you. For most hobbyists, however, this is not an option.

So, get to reading. There’s no shortage of opinions on how best to master your tracks. This is probably the area of recording with which I’m least confident, so take my advice with a grain of salt.

If you’re recording an album, wait until you’ve mixed all your songs to begin the mastering process. The idea is that you’re going to master your tracks similarly to produce some kind of creatively cohesive whole. If you’re mastering all your tracks at the same time, it is easier to maintain consistency between them.

Export the (clean) master buss from your Mixing project and create a new project for Mastering. Make sure the final mix is exported to a stereo WAV at the same sampling rate and bit depth at which it was recorded, in my case 44.1kHz/24-bit, and import it into your new Mastering project with the same sampling/bit rate.

Apply your dynamics effects to the imported stereo track. Exactly what to do at this phase is highly subjective, genre-dependent, and easily one of the most hotly contested topics on audio forums.

This is an oversimplification, but the general goal is to get the master buss output close to 0dB without clipping or destroying the dynamic range and play of your song. The quiets should be quiet (but audible) and the louds should be loud where appropriate.

You’re going to want a good, highly configurable compressor, multi-band EQ, and limiter meant to be used specifically for mastering. There are too many mastering VST plugs (some free) to cover in any level of detail.

I use IK Multimedia’s T-RackS3 Mixing and Mastering Suite because of its wide variety of great sounding presets that offer a novice like myself the perfect starting point from which to start tweaking.

Bounce to one last track to perform any final edits. This is where you’ll add beginning and ending silence and perform any track fade-ins and fade-outs to prepare the segues between songs.

Export a 44.1kHz/16-bit stereo WAV file. You have one final choice to make in selecting the dithering algorithm for the reduction from 24 to 16 bits. I usually use Pow-r, but try a few different ones for yourself and see which works best. Use the same algorithm for all your songs.
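Pow-r itself is a proprietary noise-shaped dither, but the core idea behind any dither is simple: add a tiny bit of noise before throwing away bits, so the rounding error doesn’t correlate with the music. A bare-bones TPDF (triangular) dither sketch, for illustration only:

```python
import numpy as np

def tpdf_dither_to_16bit(samples: np.ndarray) -> np.ndarray:
    """Reduce float samples (-1.0 to 1.0) to 16-bit ints with TPDF dither.

    Adding about one LSB of triangular-PDF noise before rounding
    decorrelates the quantization error from the signal; algorithms
    like Pow-r build noise shaping on top of this basic idea.
    """
    lsb = 1.0 / 32768                         # one 16-bit step at full scale
    rng = np.random.default_rng(0)
    # Sum of two uniforms gives a triangular PDF spanning +/- 1 LSB
    noise = (rng.random(samples.shape) - rng.random(samples.shape)) * lsb
    return np.clip(np.round((samples + noise) * 32767), -32768, 32767).astype(np.int16)
```

On pure silence, the output is nothing but that faint dither noise (values of -1, 0, or 1), which is exactly the trade: a whisper of broadband noise in exchange for no quantization distortion.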

Burn the files to CD in the correct order and call it an album.


Learn to play with a metronome


You may take it for granted that this truly IS common sense, but I can tell you from experience that it is not always an easy task.

The longer I’ve played a song without a metronome, the harder it is for me to play with a metronome. The performance has become more of a reflex than a conscious action and the subtle variations in rhythm are taken for granted as intentional and in-time.

The point is not necessarily to be able to hit the same beats as the metronome, but to play on AND around those beats in a predictably consistent manner.

You don’t want to sound like a machine – unless of course that is your specific goal – so forgive yourself the small imperfections while trying to stay in the ballpark.

Even if (and maybe particularly if) you consider yourself a god amongst musicians whose inner sense of rhythm beats with the precision of a hummingbird’s wings, you should still track to a metronome.

Tracking to a metronome isn’t just some arbitrary Rule of Recording And Exemplary Musicianship, though. There’s a reason why you should learn to play with a metronome…


Set your tempo and meter early


Take the time to figure out the tempo of your song and set it correctly in your DAW before you lay down the foundational tracks. This is especially important if you’re going to be using any kind of VSTis or other instruments that may record MIDI data.

My drums are pretty much always MIDI-based, so if I need to change the tempo after I’ve already tracked the drums, assuming I set the tempo correctly in the first place, it’s simply a matter of altering the tempo parameters in my DAW and the drums come right along with it, changing tempo automagically. This goes for all of the MIDI data I recorded for my other VSTi’s as well.

If you cannot count out the beats to determine the tempo and meter (because the meter is strange, your sense of rhythm is shaky, or you’re otherwise too lazy to do the math), you can turn on the metronome in your DAW and estimate the tempo by playing along with it. If the song is fast, start at 120bpm; if it’s slow, start at 90bpm.

Play along to the metronome with your foundational instrument and adjust the DAW’s tempo slower or faster accordingly. It’s also good to have recorded a reference track without the metronome prior to this guessing process, so you can go back and compare with what you originally envisioned.
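If you’d rather tap the tempo out than guess-and-check, the math is trivial. Here’s a sketch of the tap-tempo calculation most DAWs offer as a button:

```python
def bpm_from_taps(tap_times):
    """Estimate tempo from tap timestamps (in seconds): average the gap
    between consecutive taps, then convert seconds-per-beat to
    beats-per-minute."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / (sum(gaps) / len(gaps))
```

Tapping along to your reference track every half second, for example, yields `bpm_from_taps([0.0, 0.5, 1.0, 1.5])` = 120bpm. The more taps you average, the less your own timing wobble matters.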

This reference track need not be exact; it is only crucial that you capture the basic rhythm and tempo accurately. For instance, suppose you have an intricate guitar part with picked arpeggios. You can just strum it out for reference. This is not the only situation in which reference tracks never intended for the final mix can come in handy.


Perform your MIDI-based drums live, with a keyboard


No, not that keyboard, you daffy bastard. The one that looks like a piano.

This is a stylistic choice and only applies if you’re aiming to create the realistic illusion of live, acoustic drums without actually mic’ing and tracking a live, acoustic drum set.

If you’re using loops, groove samples, synthesized percussion, a living, breathing human drummer, or are otherwise shooting for a perfectly locked-in beat, then skip to the next tip.

Myself, I do not own a drum kit and probably wouldn’t know what to do with one if I did. This presents a problem as I’m trying to record, essentially, guitar-based rock, on my own without a band. I want the sound of a real drum kit in my songs. Only a few years ago, this would be a real brick wall for folks in my predicament.

Chin up, laddy! It’s the 21st century. We’ve got iPhones, a convergent web, tweets, retweets, twats, and a failing economy, so why shouldn’t we have realistic sounding fake drums?

We absolutely can. There are a bunch of great-sounding sample-based acoustic drum VSTi’s out there capable of passing for real drums to all but the most discerning console jockey.

My favorites are XLN Audio’s Addictive Drums Retro Pak and Toontrack’s Vintage Rock EZX. I’ll talk about acoustic drum VSTi’s in more detail in the future.

There are a number of equally valid ways to track your fake drums. Some people will go in and program the MIDI notes by hand, basically painting the drum beat. Others may start with a pre-existing MIDI “groove” track selected from a library and edit from there. That’s all fine and dandy.

You may be aware that there are numerous parameters in your DAW or drum VSTi that can be tweaked to “humanize” a MIDI performance. This usually includes some changes in velocity and minuscule variations in timing and note length, as well as broader stylistic parameters like how much “swing” to inject.
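Under the hood, those humanize knobs boil down to small random nudges. A rough Python sketch of the idea (the parameter names here are my own invention, not any DAW’s):

```python
import random

def humanize(notes, timing_jitter=0.01, velocity_jitter=8, swing=0.0, seed=None):
    """Apply typical 'humanize' tweaks to a list of MIDI-style notes.

    notes: dicts with 'time' (in beats) and 'velocity' (1-127).
    timing_jitter: max random offset, in beats, in either direction.
    velocity_jitter: max random change to velocity.
    swing: extra delay, in beats, pushed onto offbeat eighth notes.
    """
    rng = random.Random(seed)
    out = []
    for n in notes:
        t = n["time"] + rng.uniform(-timing_jitter, timing_jitter)
        if abs((n["time"] % 1.0) - 0.5) < 1e-9:   # offbeat eighth: swing it
            t += swing
        v = n["velocity"] + rng.randint(-velocity_jitter, velocity_jitter)
        out.append({"time": t, "velocity": max(1, min(127, v))})
    return out
```

Notice the problem, though: every nudge is statistically independent. A human drummer’s “errors” are correlated — they push and drag in phrases — which is exactly the “consistency in the inconsistency” that pure randomization misses.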

Either I haven’t figured out the right tweaks to systematically “humanize” a drum track yet (totally possible), or there’s just some palpable bit of life missing from them. There’s not enough consistency in the inconsistency.

Here’s what I do to overcome that synthetic feel.

After recording my fundamental tracks to a metronome, usually a guitar or piano, I either program one or two measures of a very basic drum prototype track (kick and snare) or select a pre-existing groove track that closely matches what I’m imagining for the drums (if anything, yet).

Then, if the original instrumental track sounds out-of-whack with the new drums, I might go back and re-record it to better match. Next up, I’ll lay down a demo of the bass track or other instruments fulfilling a rhythm role, if I feel confident with the interplay between the fundamental track and the drum prototype. Now the fun begins.

Wipe out the drum track. Go ahead, do it (non-destructively, of course).

I own a Korg padKontrol drum pad, but I just don’t find myself using it very much. I’m much more comfortable playing my trusty old Kawai K11 Digital Synthesizer as a MIDI drum controller.

Figure out where each of the drums is on the keys – the mappings aren’t always standardized between VSTi’s – and start playing.
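Many (though not all) drum VSTi’s default to the General MIDI percussion map, so it’s a reasonable first guess when you’re hunting for the kick and snare on your keyboard. A few of the staples:

```python
# General MIDI percussion key map (channel 10) -- a common default,
# though individual drum VSTi's are free to remap any of these.
GM_DRUMS = {
    36: "Bass Drum 1",
    38: "Acoustic Snare",
    42: "Closed Hi-Hat",
    46: "Open Hi-Hat",
    49: "Crash Cymbal 1",
    51: "Ride Cymbal 1",
}

def describe_note(note):
    return GM_DRUMS.get(note, "unmapped -- check your VSTi's key map")
```

On a 61-key controller, note 36 is the C two octaves below middle C, which puts the kick and snare comfortably under one hand.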

Don’t expect it to sound amazing right away. It does take practice.

As you’re first learning to play drums with a keyboard, concentrate on tracking the kick and snare for the first pass. Use both hands, even if you’re only playing two different keys and let your arms and wrist relax. Move with it. Spaz out.

You can add your hi-hat work during a later pass, but it won’t be long before you’re able to handle the kick, snare, hats, crashes, rides, and toms simultaneously. Maybe you’ll turn on the metronome if you’re having trouble.

The beauty of playing your MIDI drums live is that you get the “humanize” component for free and you don’t have to bail on a take for every little flub. Miss a kick? Go back and fix it afterwards. Find the MIDI note and just slide the flubbed hit right back into place where it belongs.

Don’t try to tackle too much in a single live take. If you can’t do the fills (I usually cannot), just keep playing your main drum line and plan to do a fill run later. Once you’ve firmly established the basic line, and the set-in-stone structure of the song, cut out the areas where the fills should go and punch-in to record each fill, also live, one at a time.

Playing your MIDI drums live on a keyboard is part of an Iterative Recording process, a concept that deserves its own attention as a discrete blog post.

I record the fundamental rhythm track, record the drums, re-record the rhythm instrument, re-record the drums, on and on, until I hone in on the groove I’m seeking.

Like folding a paper in half, ad infinitum: each pass brings you ever more compact and closer to zero, but you never quite arrive.

Somewhere, halfway to zero, that’s where you’ll find the organic feel of a live human pounding skins.

Or you could just have a drummer friend lug their kit to your basement.


Stick with whatever works for you.


Don’t let me or anyone else tell you how to do things. As long as you’re getting the sound that you want, then you’re NOT doing it wrong.

The latest issue of Tape Op (oh yeah, there’s another tip: read Tape Op; the subscription is free!) has an interview with Sufjan Stevens in which he admits to committing a slew of recording no-no’s. Home Studio Essentials writes of the interview:

I find it amazing how many things he did “wrong” and still ended up with good sounding recordings. Check out this list of things he did “wrong” when recording 2003’s Michigan.

1. Used 32 kHz sampling rate (instead of the usual 44.1 kHz.)
2. Mics: Only used two SM57s and one C 1000. No mic preamps.
3. Mixed the album on his headphones. He doesn’t even own monitors.

What does this tell us? I think a lot of us (including myself) spend too much time worrying that we don’t have the “perfect” studio setup. So what! Work with what you have. A lot of us have much better setups than Sufjan Stevens had for Michigan and I think that album sounds great. We have no excuses.

The ultimate artifact of recording music is the sound that comes out of your speakers. If that sound makes you happy and is a reasonable translation of the music you originally conceived, then you’ve done it correctly.

These techniques have helped me to better translate the music in my head. I hope they’ll be of similar use to you.

Stay tuned for Not-So-Common-Sense Home Recording Tips #2. I have enough planned material for 2 or 3 more installments, if the interest is there. All comments and suggestions are appreciated.




My Clavinet Fetish

April 13, 2009


Hohner Clavinet D6

I am obsessed with the Hohner Clavinet. Some might call it a fetish.

Ok, I might call it a fetish.

Yes, I have a clavinet fetish. If I can’t work one into a song, frankly, I’m just not trying hard enough, and in theory, I am not opposed to any song employing a Clavinet.

For the benefit of those uninitiated in the Cult of the Clav, I will explain.

The Clavinet is a keyboard instrument that is essentially an electric guitar in a box. When you press the keys, a rubber hammer strikes a set of strings that pass over a couple of pick-ups.

If a Jew’s Harp married a vibraslap, their genetically-enhanced offspring (with spliced-in piano and guitar genes) would sound something like a Clavinet.

Sample from Scarbee F.E.P. (Funky Electric Piano)

That signature biting sound can function as both percussion and melody. It is my favorite way to add some slight melodic undertones while contributing a nice and spicy percussive texture that can lift an otherwise bland song just above the level of mediocrity (speaking for my own music, of course). It’s great for weaving in and out of the pocket or even creating a pocket out of thin air.

A Clavinet can lend a certain mood to a song and because I soooooo love Tag Clouds, here’s a totally non-functional one with all of the adjectives I’d use to describe the clav in different musical contexts.


Mysterious Funky Spidery Menacing Dark Sarcastic
Taunting Arrogant Spastic Relaxed Taut Joking Whimsical


If you’ve listened to that broad-genre called classic rock, you’ve heard the clavinet. Some of my all-time favorites:

Stevie Wonder – Superstition

This is pretty much the Platonic ideal of all clavinet songs, and the first thing my mind conjures when somebody says clavinet.

I have a pet theory that this song is really about OCD.

Very superstitious, wash your face and hands

Wash your face and hands and scrub ya butt, while you’re at it. I love you, Stevie, I do, but your junk is so filthy in that video I can’t believe you’re still walkin’ the streets. Don’t you know it ain’t legal to sling that kind of hash? The lowest notes sound like raspberries.

You may be surprised, as I was, to learn that Stevie’s classic clavinet line is built from 8 different tracks. Funkscribe dissects the multi-track masters in this fascinating video.

Bill Withers – Use Me

Dig that drummer’s grin @ 0:58. These guys are having fun.

The Band – Up on Cripple Creek

This is what happens when you plug a wah pedal into a clav.

Led Zeppelin – Trampled Under Foot

See also, Custard Pie.

Steely Dan – Kid Charlemagne

This song is directly responsible for my association of the clavinet with sarcasm.

The Commodores – Machine Gun

The clavinet was widely employed during the disco era.

Peter Tosh – Stepping Razor

And very popular in reggae, too.

Herbie Hancock – Spank A Lee

Really, have a listen to anything from his 70s Headhunters period. I’d recommend the excellent Thrust album.

Virtual Clavinets

Unfortunately, I’ve never laid hands on a real clavinet. If the opportunity arose, I’d probably consider buying one, though my skill with keyed instruments is not sufficient to justify putting a lot of effort (or cash) into the search.

No. It’s much easier for me to concentrate on finding a decent virtual representation of a clavinet. That means getting hold of the right VSTi, and there are a few good ones out there for folks like me.

My favorite is Native Instruments’ Elektrik Piano. This is a sample-based VSTi consisting of 4 different models of electric piano, one of which is the Hohner E7 Clavinet.

Native Instruments Elektrik Piano

When I say that Elektrik Piano is sample-based, it means that individual notes were recorded – at differing attack and velocity levels – and stored as raw audio data. When you press a key on your MIDI controller, the sampling engine is triggered, playing back the audio sample for the corresponding note and velocity.
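That trigger-and-playback step is, at heart, a table lookup. Here’s a simplified sketch of how a sampler might pick which recording to fire for a given key-press (the layer boundaries and filenames are hypothetical, not Elektrik Piano’s actual data):

```python
import bisect

def pick_sample(velocity, layers):
    """Choose the recorded sample whose velocity layer covers an
    incoming MIDI velocity (1-127).

    layers: list of (upper_velocity_bound, sample_name) tuples,
    sorted by bound. This lookup -- not synthesis -- is the heart
    of a sample-based instrument.
    """
    bounds = [upper for upper, _ in layers]
    i = bisect.bisect_left(bounds, velocity)
    return layers[min(i, len(layers) - 1)][1]

# Hypothetical three-layer sample set for a single key
LAYERS = [(40, "clav_C4_soft.wav"), (90, "clav_C4_med.wav"), (127, "clav_C4_hard.wav")]
```

Playing the key gently (`pick_sample(30, LAYERS)`) fetches the soft recording; digging in (`pick_sample(110, LAYERS)`) fetches the hard one. More layers mean smoother dynamics but more gigabytes, which is exactly the storage trade-off described below.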

Like any other audio track, you could then run the output into other VSTs in your DAW. I like to put the E7 through another Native Instruments product, Guitar Rig 3, simulating the effect of a Leslie Rotating Speaker or using the Guitar Rig foot controller as a Wah pedal.

The upside of a sampled clavinet is that the resulting sound is, in actuality, the sound of a real clavinet (an E7 in this case). The downside is, well, you’re locked into the sound of the particular instrument used as the basis for the samples, and if you don’t like that sound, there’s little you can do to improve things. Also, sampled instruments can take up multiple gigabytes of hard-drive space and may feel bloated and laggy if you don’t have the processing power to handle them.

If space or sonic flexibility are your concerns, you may be interested in playing a modeled clavinet.

A modeled VSTi uses a software model of the instrument to generate the sounds of that instrument being played in real-time. Unlike a sampled VSTi, it is not based on a series of pre-recorded audio files. Instead, the software acts as a full simulation of the physical characteristics of the instrument. The resultant sound is generated from scratch each time based on the characteristics of the input from your MIDI controller. You’ll also have access to many of the parameters of the simulation, so you can tweak the instrument to produce sounds more to your liking.
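To make “simulation of the physical characteristics” concrete, here’s a toy version of the classic Karplus-Strong string model in Python. It is nowhere near what a commercial clav model does, but it shows the principle: the sound is computed from a physical idea, not played back from disk.

```python
import random
from collections import deque

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Toy Karplus-Strong string model.

    A burst of noise (the 'hammer strike') circulates through a delay
    line whose length sets the pitch; averaging adjacent samples acts
    as a lowpass filter, so each pass around the loop darkens and
    decays the tone, crudely mimicking a vibrating string losing energy.
    """
    period = int(sample_rate / freq_hz)       # delay length sets the pitch
    buf = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(int(duration_s * sample_rate)):
        first = buf.popleft()
        out.append(first)
        buf.append(damping * 0.5 * (first + buf[0]))
    return out
```

Because the tone is generated fresh each time, every parameter (damping, the filter, the excitation noise) is a knob you can expose to the player, which is exactly why modeled instruments offer so much more tweakability than sampled ones.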

Die Funky Maschine ZD6

There are a few decent modeled clavinet VSTis out there. I’ve used Die Funky Maschine ZD6 and can speak to its high quality. The ZD6 is a simulation of a D6 Clav and it comes with some useful built-in effects like Wah, Overdrive, and Phaser. Some folks prefer Ticky Clav, and while I’m not a huge fan, its price can’t be beat (FREE).

The range of sounds you can get from a modeled instrument is more diverse, but to my ear, the sampled Elektrik Piano just sounds better. Often for the sake of speed and performance, I’ll record using the modeled VST, and then at mix-down, I’ll replace my clavinet track with the better-sounding sampled instrument.

My First Clavinet

I’m not very far into recording Transhuman Highway and though I haven’t tracked a clav yet, considering my irresistible attraction to and history with the instrument, I’d be surprised if it didn’t pop up on a couple of songs.

Digging through the archives, I found my first recorded use of a clavinet. I’m guessing I used a soundfont-based clavinet, but it was so long ago, I don’t remember specifics. The clav line starting @ 2:28 kind of reminds me of the one from Showdown by ELO, and the disco drums only reinforce the likelihood of that inspiration.

No Wonder by Jonathan Griggs (2000) [Download]

Time and again, I’ve returned to the clavinet, most often in a reggae context, just to add a bit of texture to the songs. These two songs, for instance, are very similar in their use of clavinet (NI’s Elektrik Piano). Disclaimer: These songs are unfinished and unmixed. Almost everything I pull from the archives will be in such an imperfect state.

Alt-0246 by Jonathan Griggs (2003) [Download]

Turing Test by Jonathan Griggs (2005) [Download]

Long live the clavinet!

