
Accidental Influences and Unintended Plagiarisms

April 30, 2009


“Good artists copy. Great artists steal.” – Pablo Picasso

Chances are you’ve heard this or one of the abundance of quotes, clichés, truisms, axioms, or maxims describing the fine line between homage and theft, borrowing and copying, and imitation and impersonation.

And if you’ve been writing songs for any length of time, chances are even better that you’ve more than once written an amazing composition, sat down with your guitar to play it for your best friend and heard something like this…

“Oh. That’s ‘Lay Lady Lay,’ by Bob Dylan.”

No, really, it’s not…no…wait…hmm. I guess it is.

Bob, I'm glad I didn't have to pay you every time I tried to re-write your song

If this has happened to you, don’t feel bad. I’m here to commiserate.

I’ve conjured the chord progression of “Lay Lady Lay”, independently, at least four or five times. If I had a dime for every time I attempted to write “Sittin’ On The Dock of The Bay”, well, I’d have enough to get me drunk and foot-taxied out to the dock itself.

And don’t get me started on Felice and Boudleaux Bryant’s “Love Hurts”.

Really, though, neither of us should feel bad. We can’t help it.

The point is not that these songs we accidentally pilfer were necessarily easy to write — though the best often seem simple, and therein lies no small part of their genius. The point is that, as music aficionados, we’ve consumed and digested these songs so completely, they’ve seeped into our subconscious mind and forever infected us with their viral essences.

Further, it is not necessary that we’ve directly ingested the influence in question. We could just as easily have absorbed these influences from the culture at large, in the same way that every musician who has written a modern pop song owes a debt of gratitude to The Beatles, even if they have never listened to the band first hand.

Sometimes when we’re trying to create something wholly original, these little bits of pre-digested influence spew forth. If we’re lucky, the pieces come out rearranged, mixed with our own contributions, in some fresh and exciting new configuration.

Other times, the bits of unintentional plagiarism plop out fully assembled, whole hog, and cause our hearts to sink when (if) we finally realize our mistake. It can all make you feel a bit like the man in Searle’s Chinese Room, who takes his input and spits out a perfect response without ever truly understanding the processing that has occurred.

The most famous real-world example of unintentional plagiarism can be found in George Harrison’s song “My Sweet Lord”, from his classic 1970 triple album All Things Must Pass.

Exhibit A: George Harrison’s “My Sweet Lord”

My Sweet Lord (1970) – George Harrison

“My Sweet Lord” was first conceived in December 1969 when George Harrison “slipped away after a show in Copenhagen from a press conference and began vamping some guitar chords, setting the chords to the words ‘Hallelujah’ and ‘Hare Krishna.'”

He further developed the song’s music and lyrics with musicians in his band and, in the following week, Billy Preston, for whom he was supervising the production of an album. The song was recorded for Preston’s album and the sheet music printed.

Harrison recorded it himself in late 1970 as the first single from his long-awaited solo debut, All Things Must Pass. It shot to number one, and soon after, the holder of the copyright for The Chiffons’ 1963 hit “He’s So Fine”, Bright Tunes, filed suit against Harrison for copyright infringement.

Exhibit B: The Chiffons’ “He’s So Fine”

He’s So Fine (1963) – The Chiffons / Ronnie Mack

Excerpt from a reprint of Joseph C. Self’s 910 Magazine article (1993) detailing the court case:

The Court noted that HSF incorporated two basic musical phrases, which were called “motif A” and “motif B”. Motif A consisted of four repetitions of the notes “G-E-D” or “sol-mi-re”; B was “G-A-C-A-C” or “sol-la-do-la-do”, and in the second use of motif B, a grace note was inserted after the second A, making the phrase “sol-la-do-la-re-do”. The experts for each party agreed that this was a highly unusual pattern.

Harrison’s own expert testified that although the individual motifs were common enough to be in the public domain, the combination here was so unique that he had never come across another piece of music that used this particular sequence, and certainly not one that inserted a grace note as described above.

Harrison’s composition used the same motif A four times, which was then followed by motif B, but only three times, not four. Instead of a fourth repetition of motif B, there was a transitional phrase of the same approximate length. The original composition as performed by Billy Preston also contained the grace note after the second repetition of the line in motif B, but Harrison’s version did not have this grace note.

Harrison’s experts could not contest the basic findings of the Court, but did attempt to point out differences in the two songs. However, the judge found that while there may have been modest alterations to accommodate different words with a different number of syllables, the essential musical piece was not changed significantly. The experts also pointed out that Harrison’s version of MSL omitted the grace note, but the judge ruled that this minor change did not change the genesis of the song as that which previously occurred in HSF.

With all the evidence pointing out the similarities between the two songs, the judge said it was “perfectly obvious . . . the two songs are virtually identical”. The judge was convinced that neither Harrison nor Preston consciously set out to appropriate the melody of HSF for their own use, but such was not a defense.

Harrison conceded that he had heard HSF prior to writing MSL, and therefore, his subconscious knew the combination of sounds he put to the words of MSL would work, because they had already done so. Terming what occurred as subconscious plagiarism, the judge found that the case should be re-set for a trial on the issue of damages.

The judge ruled in favor of Bright Tunes. To the tune of $260,103.

I don’t disagree with the Court’s assessment of the similarities in the least. But I do take exception to the decision in favor of the copyright holder.

Clearly, the song “My Sweet Lord” sounds very much like the song “He’s So Fine”.

Clearly, the song “My Sweet Lord” is not the same song as “He’s So Fine”.

George Harrison conceded that he’d heard “He’s So Fine” before recording “My Sweet Lord” but admitted to no more conscious influence than the Edwin Hawkins Singers’ “Oh Happy Day” during its composition.

This case set the precedent for nearly every musical copyright infringement suit to come during the sample-happy ’80s and ’90s, cases that mostly had little to do with songwriting and more to do with repurposing existing recordings of old performances in new contexts.

To me, now, in 2009, both songs sound like equally distinct and distant members of the pop-culture canon. I could not confuse the two as the same song, and in a case where the judge has essentially ruled out intent to infringe, what other criterion is there?

Here, on the precipice of the Tweens (oh, yah, they’ll be calling them that more quickly than we reckoned our Oughts), we ingest and regurgitate our pop culture so fast that no court in the same situation would be able to justify a similar ruling. Not in these early days of convergent, mash-up, YouTube euphoria.

I didn’t veer into legal territory for a cheap scare. I only mean to demonstrate that accidental influence can affect even the most distinguished of songwriters. You shouldn’t make your songwriting decisions based on fear of getting sued. That’s not going to happen to you, unless you should someday become bigger than Jesus.

While there are many valid artistic reasons (outside of an explicit cover) for appropriating parts of a song – composition, arrangement, lyrics, samples of the actual recording – I don’t think anybody wants to write the same song that someone else wrote 30 years ago. That defeats the whole purpose of songwriting, in my mind.

So, how can we protect ourselves from the possibility of accidentally pinching someone else’s tune?

Here’s what I do.

1) Use your ears.

Be honest with yourself. Does your song sound a lot like another song?

Listen to the chord progression, rhythm, and melody. More importantly, consider the three of those together. You can get away with any one of those sounding vaguely familiar, but the more aspects your song has in common with another, the harder it will be to avoid comparisons.

Don’t spend too much time on this exercise; it’s usually a pretty immediate recognition once you’ve committed to looking for similarities as its own activity.

If you do happen to think yours resembles another song, take a breath and don’t worry yet. You’re probably just being overly self-critical. Move on to step 2.

2) Use someone else’s ears.

Play your music for a trusted, knowledgeable third party. Don’t prime their opinion by mentioning your suspected doppelgänger at this point. Just play the song and ask for feedback.

Ask them if it sounds like any other songs they can think of. Ask the question even if you think you have invented an entirely new genre with this one.

If your friend does not immediately identify a song you’re subconsciously ripping off, you’re probably in good shape. Repeat the exercise for another friend or two and be done with it.

If your friend DOES call you out on the similarities, all hope is not necessarily lost. Sometimes a minor tweak to the melody is all that’s necessary to erase the similarities. Sometimes it is merely a single yet crucial note suggesting the earlier tune.

Always get a second opinion. It is possible your trusted critics are simply unreliable sources. You may also get a sort of wishy-washy this-sounds-a-little-like-that response that may not be good for much.

For example, here’s a song that my friend Dan wrote and we recorded in 2005. I played it for my Mom and the first thing she said was “that sounds like a Simon and Garfunkel song”. Not a specific song, just like a song that should’ve been theirs.

Palisades (2005) – Dan & Jon

You’ll probably get that a lot. Your song sounds like another song or another artist. Know that there’s a fine line between your intent and accidentally going too far. Check out the supposed similarities and if you disagree, then disregard their opinion.

In the case of our song, I took my Mom’s feedback as a compliment and validation of our intent.

3) Use the cold, indifferent, mechanical ears of the internet.

You could try a music search like Midomi that takes a sound (a hum / whistle / or recording of a song) and sets that as the criteria for a search. This is commonly used for identifying a song for which you have no name or artist information, but why not use it to see if it thinks your song sounds like another?

I’m not sure how accurate or consistent the searches are, but it worked for me twice on the same song last weekend.

I was strumming a jaunty D – C with some sing-songy playground melody. It sounded very familiar, but I couldn’t place it. I pulled out the iPhone version of Midomi and played the chorus into the phone for about 10 seconds.

“Gigolo Aunt” by Syd Barrett.

Of course! That made perfect sense. I had been listening to Wouldn’t You Miss Me? The Best of Syd Barrett just the week before. That should’ve been obvious.

I changed up the melody a little and added a chord change. Played it back to Midomi.

“Handshake Drugs” by Wilco.

Damn it! It was, it really was. Time to throw that one out.

4) Don’t worry about it.

If you’re happy with the song, who cares? Unless you’re selling millions of albums, nobody else will care, I can tell you that much.

Just do us all a favor and think twice before unleashing another “What’s Up?” by 4 Non Blondes on the world.

I pledge to do the same.

For the love of all that is holy, please spare us another “What’s Up”



Song: Second Law

April 21, 2009


Second Law (2009) [Download]

I love working with VSTis. They provide a cheap (sometimes free) alternative to packing my studio space with a bunch of expensive instruments that I can’t afford and don’t really know how to play anyway. Sure, the real thing sounds better, and if you have the luxury of easy access to hardware analog synths and grand pianos, you’d be foolish not to use them.

I don’t own a piano. Can’t play piano. But I want to learn.

This song started with a simple idea to piece together a few chords on the piano and see what would come of it. I’d sit at my console with no preconceived notion of a song, and use the DAW as a songwriting tool, starting with the piano — in this case, the TruePiano VSTi that comes with Cakewalk Sonar 8.

TruePiano is modeled (not sample-based) and doesn’t sound great, but it loads quickly and plays lag-free. If I came up with something worth keeping, I could always replace it with a nice-sounding sampled piano like NI’s Akoustik Piano.

This is one of those songs that, because I’m writing it at the console, never comes out as a proper demo. I create the song in layers, first laying the foundation (piano, here), then adding on rhythm or melodic instruments in iterative passes.

After I sufficiently nail the foundation, I’ll start working the rhythm and melodic tracks in an improvisational manner. I usually need 4 or 5 takes before I collect a decent vocabulary of runs, notes, vamps, etc., to piece into a coherent part.

Then, it’s just a matter of iterating over all the tracks to tighten and further define the specifics of the performances.

I pounded out a few repetitions of the chords on the piano and then copied them across 4 minutes of time, so what we’re looking at is a chord structure that doesn’t really change for the length of the song. This could be really boring, or maybe I could get away with crafting different parts by changing up the instrumental arrangement in certain sections. I’m not so hot at improvising at the keyboard, so I decided to just work with what I had.

Next up, I plugged in my Epiphone Dot, direct, and ran it through Amplitube 2, American Tube Clean preset with some Spring Reverb dialed up. I worked on some melodic lines for the intro and got a good taste of what a solo section might sound like.

I’m a big fan of guitar wankery. When it comes to my own playing, I can’t wait to get to the solo, and I have to restrain myself from wedging a solo in every last song.

Coming to have a strong feel for the melodic guitar part so early in the process seemed to predetermine that the song would be split into two symmetrical parts, divided by an instrumental break in the middle. I don’t know why I made this decision, and it only seems like a conscious decision in hindsight. Regardless, it is now an inviolable boundary of the song. Everything must fit into the container as it has been defined.

When it came time to put some lyrics to paper, I had two blocks of lyrics that began to take shape iteratively as I improvised vocals over the piano parts.

The loss of days
Makes you want to be mine
Our love is urgent, now

It’s good to be our generation
Think how the wretches before us
Lost their minds

Time is the issue. Metaphorically in the sense of it slipping away, and literally in the sense that now is always the latest point in time.

It reminded me of Isaac Asimov’s 1956 short story The Last Question which is concerned with our perception of forever, the universe’s inexorable march toward maximum entropy, and the human preoccupation with reversing the process.

Asimov’s story is told as a series of vignettes, through the eyes of our distant descendants, each one more chronologically advanced than the last. They ask the same questions and struggle with the same inevitabilities, despite the ever expanding scope of their computational powers.

Excerpt from The Last Question:

It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father’s youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship.

…and, a few paragraphs later…

“So many stars, so many planets,” sighed Jerrodine, busy with her own thoughts. “I suppose families will be going out to new planets forever, the way we are now.”

“Not forever,” said Jerrodd, with a smile. “It will all stop someday, but not for billions of years. Many billions. Even the stars run down, you know. Entropy must increase.”

I continued writing lyrics, and as I laid pen to paper (and voice to mic), my focus became ever more narrow, zeroing in on a more literal interpretation of the story. My creative process had been hijacked! A viral infection, I say!

The result is I’m unhappy with the song after the instrumental break — the entire second half seems mediocre. I think the lyrics are far too literal and artless. I need to re-write them.

I could be one of those people
Living my life for a moment
Out of time

The loss of days
Makes you want to be mine
Our love is urgent, now

It’s good to be our generation
Think how the wretches before us
Lost their minds

Souls were never
Meant to be frozen in time
They have all expired, now

We wish that they were around
But now they’re gone
And that is such a shame

I guess I’m one of those people
Never did think that we’d run out
Of our time

Forever’s not
The sort of word to be kind
We’re all convergent, now

Don’t be afraid when the end comes
Entropy’s fated to claim us
In good time

Mother Nature’s not the sort
To be kind
Our love’s emergent, now

We wish we could be around
To watch us fall
And it would be OK

I tracked a first pass at the lead and background vocals after the lyrics were written. I didn’t want to do too much processing at this point because I felt the lyrics may change and my vocal performances are usually the last part to be set in stone. I double tracked the lead and background vocals with some delay on the second tracks of both, to enhance the feel during subsequent performances.

Next up, drums. I used the Toontrack Vintage kit, which is my favorite default for working out a drum part. I have a sneaking suspicion I’ll try out some of the Addictive Drums Vintage presets before all is said and done. I laid a kick and snare in one take and then some hi-hats in a second take, which, you’ll notice, lose the time on several occasions. I must’ve been a fur piece into a bottle of dark, red wine at this point, and the darker the better — the buzz, not the performance. =)

I plugged my Rickenbacker 4001 bass (the one in my blog logo) directly into the Firewire 410, and ran it through the Amplitube Ampeg bass amp sim (don’t remember which amp at the moment). I was able to mostly define the notes I want to play, but I’ll have to wait until my next pass at the drums to tighten up the groove here. I’m not quite catching that kick a lot of the time and I don’t know if it’s the kick’s fault or the bass.

A good idea at this point might have been to tighten the drums and bass, but I lost interest in the fundamental rhythm tracks momentarily, and moved on, adding some more melodic flavors. I wanted the retro-futuristic, warm sounds of an analog synth.

One of my goals for this album is to spin tales that seem, on the surface, far removed from the concerns of our daily existence, but that are rooted, however obliquely, to some relatable emotional truth. The things that we’ll never leave behind, no matter how far we stray from our biological bootstraps.

I was looking for sounds that are synthetic imitations of the organic, and I had two previous synth touchstones in mind.

Eons of synth cornet carved these dunes

1) The delay-drenched synth cornets (originally an ARP, I believe) from Pink Floyd’s Shine On You Crazy Diamond. I had always imagined the back cover of that album, with the invisible man in the desert, as the backdrop for the beginning of Shine On. The cornets sound of the wind, shaping the dunes on a geological timescale, condensing the eons, bringing us to the point where the narrative begins.

I found mine in Arturia’s Moog Modular V VSTi.

Life forms are full of noxious gases

2) I really liked the tactile quality of the soupy, gurgling, aquatic synth sounds throughout Ween’s album, The Mollusk, particularly the flatulence of The Golden Eel. A guttural burp, signifying life.

I didn’t quite find the same sound. I latched onto a hivey buzz kind of sound with a bit of the lower gurgling I was looking for. I found it in the z3ta+ synth that comes with Sonar 8.

Next steps: Finalize the second-half lyrics. Tighten the groove. Re-record the vocals.

I’m gonna skip the track-by-track breakdown, unless anyone finds that particularly useful or interesting.

Estimated Song Completion: 60%


Not-So-Common-Sense Home Recording Tips #1

April 19, 2009


First qualifier: I do not speak as an authority or expert in music production, to wit: Your Mileage May Vary.

One of the reasons I started this project was in the hopes that somebody might see how I’m doing things currently and offer some advice on how I can do them better.

I don’t plan on justifying all of my advice with deeply technical explanations. Any information I have to share at this point is simply representative of my own experience and what has worked best for meeting my own standards.

Second qualifier: My entire signal chain stays in-the-box from tracking to mix-down, to wit: if you’re using racks of outboard gear, mixing consoles, and analog equipment, then some of this information will likely not apply to you.

These tips are aimed at the novice-to-intermediate musician and recording hobbyist, the kind of person who may not even be using a computer built specifically to function as a DAW. This is the stuff I wish someone had told me when I first started recording.

Exceptions duly noted, I’d like to share with you some of the not-so-common-sense tips I’ve accumulated. You could say that most of these qualify as common sense, but for numerous reasons (laziness, obstinacy, ignorance, disbelief) it’s taken me years to integrate them all into my recording routine.

Let’s start with the painful one. You know, the one where I tell you that your gear sucks…

You need not invest a fortune building a recording setup capable of capturing the sounds you want.

Let’s not kid ourselves here. Home audio recording is an expensive hobby. While you should always, as a rule, make the most of what ya got, you’re going to hit a wall eventually and have to throw some cash at it.

The key is knowing where to invest your money, and there are several crucial pieces of equipment on which you should not skimp. I don’t mean you always need to buy the high-end, top-of-the-line models, but you should do your research on these items, because in some cases the difference between $50 and $250 is immeasurable.


Microphones

If you are going to be recording vocals, acoustic instruments, amplified guitars, or drums, do yourself a favor and get a good microphone or two. Or three.

You’ll probably want a Shure SM57 or two. Or three. This mic is the Swiss Army Knife of the home studio. Use it for vocals, use it to track your acoustic guitar or mic your amp. Expect to pay $75 – $100 for one.

Get your hands on one good condenser microphone capable of capturing a large, round sound for vocals.

I picked up the Audio Technica AT4040 for $250 and have been very happy using that for my vocals and often as a second mic for my acoustic guitar. I also use an M-Audio Aries that came free with my Firewire 410 Audio Interface, but I wouldn’t especially recommend that one for a purchase.

Do your research and find out which one is best for you. The sky’s the limit on mic prices, but expect to pay $200 – $300 depending on how big you want to go for your prized “specialty” mic. It seems like this range is where you start to get into the more interesting and capable mics.

Also, pick up a pop screen if you’re going to be recording vocals. This will reduce the amount of time you spend cleaning up the plosive ‘p’ and ‘b’ sounds after tracking. You can get a cheap one for $30, or you can build your own, if you’re handy like that.

…and one or two mic stands, don’t forget those.


Studio Monitors

Having a pair of studio monitors capable of faithfully (and flatly) reproducing the full spectrum of audible frequencies is utterly essential to producing a good mix. These are your windows to the world.

Your multimedia speakers – even the über-deluxe 7.1 set you dropped a few hundred on for your gaming setup – are next-to-useless for understanding what your mix really sounds like and will sound like on other folks’ home stereo systems, car stereos, iPod earbuds, etc.

Making the leap from cheap headphones or computer speakers to a good pair of monitors is a surefire way to improve your mixes without much effort.

Granted, it won’t do the mixing for you, but just being able to hear an accurate, clear representation of your music for the first time will dramatically enhance your ability to shape it.

If your mixes are a muddy, indistinct and dead sounding mess, you may not have bad ears – I always thought I did and still do to some extent – you may just need some good monitors.

I use a pair of M-Audio Studiophile BX8a monitors and can honestly say they changed my world. I got them for $200 each, and again, like mics, this seems to be the range where quality spikes dramatically. Budget $500 for a pair and you’ll probably look back on this as one of the smartest investments you ever made for your studio.

While you’re at it, get a set of isolation pads to dampen the effects of the environment on the audio coming from your monitors. You don’t want them sitting directly on your desk. I use Auralex MOPADs. $40, done.

Audio Interface

I have to admit I don’t really have a horse in this race.

My definition of a good audio interface is one that has an excellent on-board pre-amp with phantom power, a sufficient number and variety of inputs and outputs, and solid driver support for your platform of choice.

You’ll probably drop at least $250 for a decent interface with two mic/line inputs.

I use an M-Audio Firewire 410, which has two microphone / line inputs (w/pre-amp and phantom power) recording @ 24-bit/96kHz, a MIDI I/O, and line-outs for my monitors. The drivers have always been somewhat flaky, though overall, it’s done the job with a minimum of fuss. The price was right at the time and it came with a free condenser mic. Ultimately my decision was economic, which may not be the best criterion, but sometimes it is the only one that matters.

If I were to purchase a new audio interface today, I’d look for one with a minimum of four microphone / direct inputs, possibly eight. Of course, this is based only on my specific needs and is not necessarily a recommendation.

Ask yourself these questions:

  • What kind of music will I be recording?
  • Will I ever be recording more than 1 microphone or line instrument at a time?
  • More than 2?
  • Am I ever going to run my signal back out through a mixer or other hardware and then bring it back into the box?
  • Am I going to use a mic pre-amp before the audio-interface?

The best advice is to select an audio interface that satisfies your current needs and will likely accommodate your future needs. Your Creative Labs Audigy 2 probably won’t cut it now OR later.

Divide your recording workflow into 3 discrete stages

The power and flexibility of modern DAWs gives us musicians an unprecedented wealth of options for producing the sounds we seek.

With the touch of a single hot-key, we can conjure aux busses from thin air, send multiple tracks to them, and add virtual racks of software compressors, limiters, EQs and pitch correction with utter impunity. The desire to add something, anything, to your tracks is a constant temptation.

Not only is the sheer number of possible actions sometimes overwhelming, but those actions are also dangerous in the sense that with every maneuver, you might be shooting yourself in the foot.

To cop a cliché from old Stan Lee: With great power, there must also come — great responsibility!

Establishing ground rules for what to do and when to do it can provide some valuable structure to your workflow by effectively limiting your options at any given time and, therefore, mitigating the risk of shooting yourself in the foot.

I find that my recordings sound far better if I have the discipline to structure the process into 3 clearly delineated stages: Tracking, Mixing, and Mastering.

These are well-known parts of the recording process to most anyone who’s done minimal reading on the subject. Many a hefty tome has been devoted entirely to each of these stages and the unique (though often wildly divergent) approaches to tackling them.

The problem for even well-educated but inexperienced engineers working entirely in-the-box with a DAW is that it is not always clear where one stage ends and another begins. From a software standpoint, you have all of your options open at all times.

Certain recording activities obviously belong to a certain stage. Exporting a 44.1kHz/16-bit WAV file, for example, happens during the Mastering stage (and even that one is questionable if you cut iterative rough mixes for testing purposes). For other activities, maybe applying a band-pass EQ filter, the line may not be as obvious.

Reality dictates that there will always be some fluidity between stages. During the Mixing stage, for instance, I inevitably find myself needing to track a new performance, or re-track an existing one.

Define your workflow, but give yourself permission to violate it as necessary.

Here’s the breakdown of my workflow in Cakewalk Sonar. These rules should apply equally across other DAW software.


Tracking

The singular purpose of the Tracking stage is to capture performance.

First, get some headphones and mute your monitors if you’re going to be recording acoustic signals, as you (usually) don’t want the monitor output to feed back into the mics / pick-ups.

I always set my project to record at 44.1kHz / 24-bit. You may have your own reasons for using different settings, and that’s ok, but if you do not have a reason to do otherwise, just trust me on this one. Do your own research if you’re curious about the rationale.

Before I record any given track, I’ll calibrate my inputs by playing the instrument at average and peak volume and eyeballing the meters on that incoming track in my DAW.

I’m aiming to get the average around -18 dB and the peaks around -10 dB, well below the ‘red zone’ at 0 dB that represents a clipping audio signal.
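Those meter readings are easy to reason about numerically. As a rough sketch (plain Python, assuming float samples normalized to ±1.0, which is how most DAWs represent audio internally), peak and average (RMS) levels convert to dB full scale like so:

```python
import math

def peak_dbfs(samples):
    """Peak level in dB full scale, for float samples in [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """Average (RMS) level in dB full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a 440 Hz sine peaking at 0.316 (about -10 dBFS);
# a pure sine's RMS sits roughly 3 dB below its peak.
samples = [0.316 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(samples), 1))  # -10.0
print(round(rms_dbfs(samples), 1))   # -13.0
```

Real program material has a much bigger gap between peak and average than a sine wave does, which is exactly why you calibrate against both.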

If I need to adjust the volume so I’m falling within that range, I do NOT touch the track slider in my DAW; rather, I use the volume/gain knob on my audio interface directly. This goes only for line-in/mic tracks.

If I’m recording a VSTi (say drums or piano), I’ll see if that plug-in has its own internal “output” control and adjust that. Otherwise, all I’ll have are the track sliders, so I’ll pull those down until the meter is falling into the same -18 dB to -10 dB range.

Now, you might notice that the output levels are much lower than you’re accustomed to. How can you ever be expected to mix at such a quiet level? There’s an easy solution to your problem. Turn up your monitors! And turn up your headphones if you can’t hear while tracking.

There are many differing opinions on tracking levels, and there’s a reason why -18 dB is a magic number of sorts. Again, you can do that research on your own.

The short reason I’m recommending this is that recording at a lower level gives you the headroom (below clipping) to play with effects and gain adjustments at the mixing stage (EQ boosts, for example). This technique has had the subtle side-effect of opening up my own mixes. It’s just easier for me to hear how the component tracks should fit together in the mix since I started recording lower and turning my monitors up.

Recording at a consistent level across all your tracks has the added benefit of helping you get closer to your desired mix. You’ll be riding the faders a lot less during mixing.
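For the curious, those dB figures are just logarithms of linear amplitude, and the arithmetic behind the headroom claim is simple. A quick sketch (plain Python, purely illustrative; the function names are my own):

```python
def db_to_gain(db):
    """Convert a dBFS value to linear amplitude (0 dBFS = 1.0 = full scale)."""
    return 10 ** (db / 20.0)

def headroom_db(peak_db):
    """dB of boost available before a signal peaking at peak_db would clip."""
    return 0.0 - peak_db

# Peaks at -10 dB leave 10 dB of boost before clipping, and a -18 dB
# average signal sits at roughly 1/8 of full scale.
```

In other words, a track peaking at -10 dB can take a 10 dB EQ boost during mixing without clipping; that is the headroom the lower tracking levels buy you.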

Make it a goal of your Tracking stage to come as close as possible to your target sound before you enter the Mixing stage. If you’re not using outboard dynamics processors, EQs, or pre-amps your primary tool for achieving that target is your performance.

If you’re tracking electric guitar, this can involve systematically nailing down your pick-up selection and tone knobs and giving serious consideration to your picking technique. Similar technical concerns apply for acoustic guitar, with the added variables of mic selection and placement. For bass tracking, will you be playing with your fingers or a pick?

Think through all the details of your performance before you hit record, and be sure to take notes in case you need to re-track later.

The same concerns apply to vocals. Don’t neglect your ability to “work” the microphone to achieve different effects. Getting in close and whispering your vocal (which may require calibrating the input levels higher) results in a vastly different sound than standing back and wailing the same lines.

It’s much easier to change your sound during tracking than during mixing.

My preference is to record all of my signals dry, with no VST effects (delay, reverb, EQ, compression) applied at this point. The reason: it forces you to focus on capturing your highest-level technical performance.

I almost always end up slapping a compressor on my bass track in the mix, but I still don’t use one during tracking, because my goal is to play the bass line well enough not to need it. Capture the correct dynamics in your performance; don’t wait until you hit Mixing.

I make frequent exceptions to the dry tracking rule in cases where an effect is integral to the performance. For instance, if I use an amp simulator like Guitar Rig for electric guitar, I will track with that VST effect already applied. After all, if I were mic’ing a real amp and recording the output, that signal would already contain the sound of various effects pedals and the amp itself.

Imagine you’re tracking the guitar part for Pink Floyd’s Run Like Hell in your modern DAW, plugged directly into the audio interface. You’d be hard pressed to nail that performance if you didn’t enable a delay VST during tracking.

Some singers might also find it hard to achieve the right “vibe” for their performance without hearing some reverb. If that’s the case for you, go ahead and slap some reverb on while you’re tracking. It doesn’t even have to be the reverb you end up using in the final mix. If it helps you capture a better performance, use it!

Having trouble hitting your backing vocals because you can’t hear over the lead vocal? Don’t be afraid to blur the lines between Mixing and Tracking by turning down the levels on your lead and maybe panning it 50% one direction, while panning your incoming backing track 50% the other. Just be sure to reset them after you’ve captured the performance and before you start Mixing proper.

The more time you spend during the Tracking stage, the less time you’ll spend in Mixing and, I find, the happier you’ll be with the final mix itself.

You can always fix problems during Mixing, but it should always be your goal to minimize the need for editing/mixing fixes to the best of your musical ability.

On the other hand, don’t get too hung up shooting for perfection.

There’s an axiom in software engineering: Premature optimization is the root of all evil. I’d argue that this also applies to sound engineering.


The purpose of the Mixing stage is to glue your component tracks together so that the resultant sound accurately represents your concept of a complete song.

Mixing requires an entirely different skill set than Tracking.

During Tracking, you’re concerned mostly with your musical ability. A song will only track as well as you’re able to compose and perform it.

With Mixing, you’re entering the realm of sound engineering skills. Honestly, a lack of sound engineering knowledge and experience is the biggest hurdle for musicians just beginning to mix their own songs. There’s really no getting around the time and practice required to become good at mixing. It is difficult, highly subjective work, and there are relatively few general-purpose silver bullets to rely on.

Fortunately, there’s a surfeit of information just waiting for you to absorb. Soak it up, try it out. Browse the deep archives and ask questions on audio community forums like Tape Op Message Board and the Harmony Central Forums.

I’d recommend directing your initial research toward understanding how compression/limiting, EQ, panning/track-placement, and aux busses work. Know exactly when and why you’d use these techniques, before you start employing their VST implementations regularly.

Obviously, you’ll need to experiment with these techniques to figure out their effects on your own tracks, but don’t just blindly send all your vocals to an aux buss with some compressor pre-set because somebody said you always should.

There aren’t many hard rules in the Mixing stage, but there’s a whole lot of opportunity to do something wrong. These are a few of the guard rails I’ve defined for myself, to prevent that:

Only work the individual tracks. Manage your pans, levels, and effects at the track level. Leave the master buss alone and don’t apply any effects to it, yet.

Try to keep the master buss levels around -18 dB. Do this by managing the levels on your individual tracks. Slide the faders on the tracks, not the master buss. Remember, if you cannot hear the mix well enough at this level, turn up your monitors!

If you’re looking to sonically glue disparate elements together, think about creating an aux buss to send the tracks to and apply the compression there. You might have a buss for drums or one for vocals. You might also have a common reverb buss that you’re sending vocals and guitar through.

Keep your edits non-destructive. Always save your original tracks from the Tracking stage and if you’re going to be cutting or directly altering the waveform in any way (applying a volume cut on a single note, for instance), make sure you’ve cloned a new track from it and do your work there.

As a matter of habit, I usually save 3 different projects, one for the Tracking, one for the Mixing and one for the Mastering, but that’s just a safety precaution.

Give your ears a rest. If you find the mix getting away from you, then walk away and come back later, refreshed.

Train your ears by listening to other songs you like. Try to hear the mixing work that went into them. Where is the guitar in the mix? Where are the vocals? Is the reverb on the acoustic guitar panned to a different side than the guitar itself?

Use a pre-existing song as a mix target. Suppose you really like the interplay between the acoustic guitars and piano on the Rolling Stones’ Angie and you’d like your own guitar and piano to sit similarly in your mix. Import Angie as a separate track into your project so that you can constantly compare the two and tweak yours to approach the ideal. Apply no processing at all to the imported audio, but be sure to bring its levels down around -18 dB, since that’s your own target level.

Cut a rough mix to test on different speakers. Even if you’re mixing in an acoustically perfect environment, with perfect monitors (impossible situations), your song will NOT sound the same on every computer speaker, home stereo, iPod, or car stereo. You’ll need to test the mix on a variety of systems and return to the console later to compensate for the differences you’ve observed.

Before you export the master track to a 44.1 kHz/16-bit WAV file for burning to CD (or encoding as MP3), there’s one more thing to consider.

If you’ve hit that -18 dB mark across the board and already tried to export your audio, you’ll have discovered that playing the resulting file back outside of your DAW results in a very, very quiet mix.

You’ll need to do some quick pseudo-mastering to reach an acceptable loudness. This is arguably the only time during Mixing when it’s acceptable to put an effect on the master buss. Some people apply compression on the master as part of Mixing; others, like myself, prefer to wait until Mastering.

At any rate, for a rough mix, you’ll need to get your master buss as close to 0 dB as possible without clipping. That’s one of the goals of mastering; you’re just not going to put as much thought into it at this point.

The easiest way to do this is to throw a mastering limiter on the master buss, and maybe some compression or EQ, spend a minimal amount of time tweaking the parameters (don’t squash the dynamics of your song too much), and cut the rough mix. Setting the ceiling on the limiter at -2 dB should be a safe bet.
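Numerically, that ceiling is easy to picture. Here’s a naive peak-normalization sketch in Python (illustrative only; a real mastering limiter also compresses peaks above a threshold rather than just scaling, and the function name is made up):

```python
def normalize_to_ceiling(samples, ceiling_db=-2.0):
    """Scale a track so its loudest peak sits at ceiling_db dBFS."""
    ceiling = 10 ** (ceiling_db / 20.0)     # -2 dB is about 0.794 of full scale
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)                # silence stays silence
    gain = ceiling / peak
    return [s * gain for s in samples]
```

This is why the rough mix suddenly sounds loud enough: the whole signal gets pulled up until the peaks brush the ceiling, while staying safely under 0 dB.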


The ultimate goal of the Mastering stage is to get the individually mixed songs prepared to become part of an end product, in most cases an album. I don’t think it’s a gross oversimplification to say that mastering is concerned mostly with issues of loudness and dynamics.

In the same way that Mixing is an art unto itself, Mastering requires its own set of highly-specialized skills completely distinct from Mixing. If you’re going to be releasing a commercial album, you’re probably going to be paying someone else to do the mastering for you. For most hobbyists, however, this is not an option.

So, get to reading. There’s no shortage of opinions on how best to master your tracks. This is probably the area of recording with which I’m least confident, so take my advice with a grain of salt.

If you’re recording an album, wait until you’ve mixed all your songs to begin the mastering process. The idea is that you’re going to master your tracks similarly to produce some kind of creatively cohesive whole. If you’re mastering all your tracks at the same time, it is easier to maintain consistency between them.

Export the (clean) master buss from your Mixing project and create a new project for Mastering. Make sure the final mix is exported to a stereo WAV at the same sampling rate and bit depth as it was recorded, in my case 44.1 kHz/24-bit, and import it into your new Mastering project at that same rate and depth.

Apply your dynamics effects to the imported stereo track. Exactly what to do at this phase is highly subjective, genre-dependent, and easily one of the most hotly contested topics on audio forums.

This is an oversimplification, but the general goal is to get the master buss output close to 0 dB without clipping or destroying the dynamic range and play of your song. The quiets should be quiet (but audible) and the louds should be loud where appropriate.

You’re going to want a good, highly configurable compressor, multi-band EQ, and limiter designed specifically for mastering. There are too many mastering VST plugs (some free) to cover in any level of detail.

I use IK Multimedia’s T-RackS3 Mixing and Mastering Suite because its wide variety of great-sounding presets offers a novice like myself the perfect starting point for tweaking.

Bounce to one last track to perform any final edits. This is where you’ll add beginning and ending silence and perform any fade-ins and fade-outs to prepare the segues between songs.

Export a 44.1 kHz/16-bit stereo WAV file. You have one final choice to make in selecting the dithering algorithm for the reduction from 24 to 16 bits. I usually use Pow-r, but try a few different ones for yourself and see which works best. Use the same algorithm for all your songs.
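If you’re wondering what a dithering algorithm actually does: it adds a tiny bit of noise before truncating from 24 to 16 bits, so the quantization error doesn’t correlate with your signal. Pow-r is far more sophisticated, but a basic TPDF (triangular) dither, sketched here in illustrative Python with a made-up function name, captures the idea:

```python
import random

def dither_to_16bit(samples):
    """Quantize float samples in -1.0..1.0 to 16-bit ints with TPDF dither."""
    lsb = 1.0 / 32768                       # one 16-bit quantization step
    out = []
    for s in samples:
        # Triangular noise: the difference of two uniforms spans (-1, +1) LSB
        noise = (random.random() - random.random()) * lsb
        q = int(round((s + noise) * 32767))
        out.append(max(-32768, min(32767, q)))  # clamp to the 16-bit range
    return out
```

The payoff is that low-level detail, like reverb tails, decays smoothly into noise instead of turning into grainy distortion.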

Burn the files to CD in the correct order and call it an album.

Learn to play with a metronome

You may take it for granted that this truly IS common sense, but I can tell you from experience that it is not always an easy task.

The longer I’ve played a song without a metronome, the harder it is for me to play with a metronome. The performance has become more of a reflex than a conscious action and the subtle variations in rhythm are taken for granted as intentional and in-time.

The point is not necessarily to be able to hit the same beats as the metronome, but to play on AND around those beats in a predictably consistent manner.

You don’t want to sound like a machine – unless of course that is your specific goal – so forgive yourself the small imperfections while trying to stay in the ballpark.

Even if (and maybe particularly if) you consider yourself a god amongst musicians whose inner sense of rhythm beats with the precision of a hummingbird’s wings, you should still track to a metronome.

Tracking to a metronome isn’t just some arbitrary Rule of Recording And Exemplary Musicianship, though. There’s a reason why you should learn to play with a metronome…

Set your tempo and meter early

Take the time to figure out the tempo of your song and set it correctly in your DAW before you lay down the foundational tracks. This is especially important if you’re going to be using any kind of VSTis or other instruments that may record MIDI data.

My drums are pretty much always MIDI-based, so if I need to change the tempo after I’ve already tracked the drums, assuming I set the tempo correctly in the first place, it’s simply a matter of altering the tempo parameters in my DAW and the drums come right along with it, changing tempo automagically. This goes for all of the MIDI data I recorded for my other VSTi’s as well.

If you cannot count out the beats to determine the tempo and meter (because the meter is strange, your sense of rhythm fails you, or you’re otherwise too lazy to do the math), you can turn on the metronome in your DAW and try to estimate the tempo by playing along with it. If the song is fast, start at 120 bpm; if it’s slow, start at 90 bpm.

Play along to the metronome with your foundational instrument and adjust the DAW’s tempo slower or faster accordingly. It’s also good to have recorded a reference track without the metronome prior to this guessing process, so you can go back and compare with what you originally envisioned.

This reference track need not be exact; it’s only crucial that you capture the basic rhythm and tempo accurately. For instance, suppose you have an intricate guitar part with picked arpeggios. You can just strum it out for reference. This isn’t the only situation in which reference tracks never intended for the final mix can come in handy.
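If you’d rather compute than guess, the play-along estimate reduces to averaging the intervals between beats. A hypothetical sketch (the function name is my own):

```python
def bpm_from_taps(tap_times):
    """Estimate tempo from a list of beat timestamps in seconds."""
    if len(tap_times) < 2:
        raise ValueError("need at least two taps")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg = sum(intervals) / len(intervals)   # average seconds per beat
    return 60.0 / avg
```

Tap along with your reference track, note the timestamps, and you have a starting tempo to plug into the DAW (taps half a second apart come out to 120 bpm).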

Perform your MIDI-based drums live, with a keyboard

No, not that keyboard, you daffy bastard. The one that looks like a piano.

This is a stylistic choice and only applies if you’re aiming to create the realistic illusion of live, acoustic drums without actually mic’ing and tracking a live, acoustic drum set.

If you’re using loops, groove samples, synthesized percussion, a living, breathing human drummer, or are otherwise shooting for a perfectly locked-in beat, then skip to the next tip.

Myself, I do not own a drum kit and probably wouldn’t know what to do with one if I did. This presents a problem as I’m trying to record, essentially, guitar-based rock, on my own without a band. I want the sound of a real drum kit in my songs. Only a few years ago, this would be a real brick wall for folks in my predicament.

Chin up, laddy! It’s the 21st century. We’ve got iPhones, a convergent web, tweets, retweets, twats, and a failing economy, so why shouldn’t we have realistic sounding fake drums?

We absolutely can. There are a bunch of great-sounding, sample-based acoustic drum VSTi’s out there capable of passing for real drums to all but the most discerning console jockey.

My favorites are XLN Audio’s Addictive Drums Retro Pak and Toontrack’s Vintage Rock EZX. I’ll talk about acoustic drum VSTi’s in more detail in the future.

There are a number of equally valid ways to track your fake drums. Some people will go in and program the MIDI notes by hand, basically painting the drum beat. Others may start with a pre-existing MIDI “groove” track selected from a library and edit from there. That’s all fine and dandy.

You may be aware that there are numerous parameters in your DAW or drum VSTi that can be tweaked to “humanize” a MIDI performance. This usually includes some changes in velocity and minuscule variations in timing and note length, as well as broader stylistic parameters like how much “swing” to inject.
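Under the hood, these humanize features mostly amount to small random offsets applied to each note’s timing and velocity. A toy sketch (my own, not any DAW’s actual algorithm):

```python
import random

def humanize(notes, timing_ms=8.0, vel_range=10):
    """Jitter a list of (time_ms, velocity) MIDI notes in time and loudness."""
    out = []
    for t, v in notes:
        t2 = max(0.0, t + random.uniform(-timing_ms, timing_ms))   # nudge timing
        v2 = max(1, min(127, v + random.randint(-vel_range, vel_range)))  # MIDI 1-127
        out.append((t2, v2))
    return out
```

Jitter like this is uniform and memoryless, which may be exactly why it never quite feels alive.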

Either I haven’t figured out the right tweaks to systematically “humanize” a drum track yet (totally possible), or there’s just some palpable bit of life missing from them. There’s not enough consistency in the inconsistency.

Here’s what I do to overcome that synthetic feel.

After recording my fundamental tracks to a metronome, usually a guitar or piano, I either program one or two measures of a very basic drum prototype track (kick and snare) or select a pre-existing groove track that closely matches what I’m imagining for the drums (if anything, yet).

Then, if the original instrumental track sounds out-of-whack with the new drums, I might go back and re-record it to better match. Next up, I’ll lay down a demo of the bass track or other instruments fulfilling a rhythm role, if I feel confident with the interplay between the fundamental track and the drum prototype. Now the fun begins.

Wipe out the drum track. Go ahead, do it (non-destructively, of course).

I own a Korg padKontrol drum pad, but I just don’t find myself using it very much. I’m much more comfortable playing my trusty old Kawai K11 Digital Synthesizer as a MIDI drum controller.

Figure out where each of the drums are on the keys – the mappings aren’t always standardized between VSTi’s – and start playing.

Don’t expect it to sound amazing right away. It does take practice.

As you’re first learning to play drums with a keyboard, concentrate on tracking the kick and snare for the first pass. Use both hands, even if you’re only playing two different keys, and let your arms and wrists relax. Move with it. Spaz out.

You can add your high-hat work during a later pass, but it won’t be long before you’re able to handle the kick, snare, hats, crash, rides and toms simultaneously. Maybe you’ll turn on the metronome if you’re having trouble.

The beauty of playing your MIDI drums live is that you get the “humanize” component for free and you don’t have to bail on a take for every little flub. Miss a kick? Go back and fix it afterwards. Find the MIDI note and just slide the flubbed hit right back into place where it belongs.
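Fixing a flubbed hit is just snapping its timestamp to the nearest grid line. Conceptually (illustrative Python; your DAW does this for you when you drag or quantize a note):

```python
def snap_to_grid(time_ms, grid_ms=125):
    """Snap a MIDI event to the nearest grid line.

    125 ms is a 16th note at 120 bpm (quarter note = 500 ms).
    """
    return round(time_ms / grid_ms) * grid_ms
```

A kick that landed at 130 ms snaps back to 125 ms; the rest of the take is untouched, which is exactly why live MIDI drumming is so forgiving.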

Don’t try to tackle too much in a single live take. If you can’t do the fills (I usually cannot), just keep playing your main drum line and plan to do a fill run later. Once you’ve firmly established the basic line, and the set-in-stone structure of the song, cut out the areas where the fills should go and punch in to record each fill, also live, one at a time.

Playing your MIDI drums live on a keyboard is part of an Iterative Recording process, a concept that deserves its own attention as a discrete blog post.

I record the fundamental rhythm track, record the drums, re-record the rhythm instrument, re-record the drums, on and on, until I home in on the groove I’m seeking.

Like folding a paper in half, ad infinitum, becoming ever more compact and ever closer to zero, but never quite reaching it.

Somewhere, half-way to zero, that’s where you’ll find the organic feel of a live human pounding skins.

Or you could just have a drummer friend lug their kit to your basement.

Stick with whatever works for you.

Don’t let me or anyone else tell you how to do things. As long as you’re getting the sound that you want, then you’re NOT doing it wrong.

The latest issue of Tape Op (oh yeah, there’s another tip, Read Tape Op, the subscription is free!) has an interview with Sufjan Stevens in which he admits to committing a slew of recording no-no’s. Home Studio Essentials writes of the interview:

I find it amazing how many things he did “wrong” and still ended up with good sounding recordings. Check out this list of things he did “wrong” when recording 2003’s Michigan.

1. Used 32 kHz sampling rate (instead of the usual 44.1 kHz.)
2. Mics: Only used two SM57s and one C 1000. No mic preamps.
3. Mixed the album on his headphones. He doesn’t even own monitors.

What does this tell us? I think a lot of us (including myself) spend too much time worrying that we don’t have the “perfect” studio setup. So what! Work with what you have. A lot of us have much better setups than Sufjan Stevens had for Michigan and I think that album sounds great. We have no excuses.

The ultimate artifact of recording music is the sound that comes out of your speakers. If that sound makes you happy and is a reasonable translation of the music you originally conceived, then you’ve done it correctly.

These techniques have helped me to better translate the music in my head. I hope they’ll be of similar use to you.

Stay tuned for Not-So-Common-Sense Home Recording Tips #2. I have enough planned material for 2 or 3 more installments, if the interest is there. All comments and suggestions are appreciated.


Song: The Engineer

April 14, 2009


The Engineer Demo (2009) [Download]

The seeds for this song, and the entire album, were planted almost a year ago. My experimentation at the time was tending toward dense, heavily electronic and synthesized music. I was writing most of my songs at the console, piece-by-piece inside of Cakewalk Sonar, and starting to feel tapped out creatively. The music was missing something. It all just felt so…cold and synthetic…and though that fit the nature of the lyrics I was writing, I didn’t find it all that interesting to listen to.

It was time for a change in direction. Time to strip it down and get back to what I knew best, using the guitar as my primary songwriting tool.

I’d restrung my acoustic 6-string after a long period of disuse and started strumming the open strings. There’s something about the drone of an open E-string that I’ve always loved. A hypnotic quality that I adore despite the repetitive nature inherent in a droning sound. Something organic, like a pulse.

Fooling around with open strings and some arpeggios in 6/8 time (I think), I came up with 3 different parts that felt like they fit together nicely. The first trickle of lyrics and melody flowed quickly and easily soon afterward.

The Engineer Demo – Deconstructed (2009) [Download]

She was made of polymers
She came along and broke my heart

Her icicle stare turned my blood cold
It tore me apart

Just let me have a crack at it
Oh, let me into your skull

A silly proposition;
That emotions come from the heart

She’s a fountain of life
Sprung from the minds of men
She’s a real work of art
She belongs in a museum

(Round Two. Fight!)

It’s a barometer of our relationship, girl
That we should constantly cuss and fight
For the world

Knowing is half of the battle, like GI Joe says
Let’s get together and conquer the world

Songs usually start out as music for me. Rarely with a melody, more often with the chords, the rhythm, and the changes, and almost never with the lyrics. After I get a skeleton of the song, I’ll start working on a melody and lyrics, filling in the blanks with nonsense words and whatever stream-of-consciousness profanities might dribble out.

Sometimes, it’s like pulling teeth fitting words to music. This one was different. The words came easy. I can’t say that I understood them, but that seems the case with almost everything I write. It starts with instinct, and flows with feeling. To realize any sort of meaning, that must wait until I go back and edit With Intent.

I had already been in a sci-fi headspace, slinging robots and spaceships like a bad episode of The Outer Limits. At first blush, this song seemed to be exploring a similar space.

I put the song down, incomplete, and wrote more music over the next 6 – 8 months.

Now, with some distance, I can see how this one song marked a definite change in direction for me, both musically and lyrically. All of the songs that were to follow seem to have grown naturally from this single starting point.

The Engineer – as I came to call it – was cut from a different cloth than the songs before it, though it shared certain thematic elements with those dense and tedious sci-fi expeditions I’d been lost on. The music had an earthier, folky quality and the lyrics, while still rooted in some far-off future, conveyed a very distinct humanistic perspective that had been missing.

I’d rather not over-explain any particular meaning or concept behind a song because a) it just sounds pretentious and b) it undermines any potential emotional connection I might have with the song. I find it dangerous to know too much about the song when I haven’t yet finished the lyrics. Things may just get a bit too literal, as they have for another song I’m nursing and will write about soon.

The important thing for me to take away is that The Engineer is the prototype for the album, Transhuman Highway.

Next steps are to finalize the structure of the song and finish the lyrics. After that I will find an acceptable metronomic drum beat and start tracking the guitar parts.

Session Notes

Since this is just a demo, there’s nothing particularly exciting going on here. The chorus vocals get a little bit pitchy, but I’m OK with that right now. Let it serve as a lesson that you must know precisely which notes you’re trying to sing before the noise leaves your throat. This sounds like common sense, but I struggle with it daily.

I recorded 2 tracks live, one each for the vocals and guitar, so there will be some bleed-through on the mics. Most of the time I prefer this as long as the signals are kept in-phase. The vibe on the vocals is usually much better for me if I can play my main instrument while I’m singing. My time-keeping sometimes suffers, though, so it’s a tradeoff and really depends on the song.

The microphones were run directly into a Firewire 410 audio interface. No compression or any effects were applied beyond a low-shelf EQ on the vocals to roll-off some boom from the low-end.

  • Track 1: Rhythm Guitar – Tacoma 6-String Acoustic via M-Audio Aries condenser mic – panned 35% left
  • Track 2: Vocals via Audio Technica AT4040 Cardioid Condenser mic – panned 5% right
  • Track 3: Guitar Fills (overdub) – Tacoma 6-String Acoustic via M-Audio Aries condenser mic – panned 35% right

The Deconstruction track was recorded with the same setup, except I used my girlfriend’s Little Martin LXM for the guitar track. It’s a fine instrument made for cute little girl hands. I like to strum around on it when I’m sitting on the couch or out on the front porch. This will be used frequently for quick and dirty demos, I predict.
