NOTE: This post was originally published on the now-defunct Beemo blog on July 7, 2018
Neanderthals assembling an album, continued.
I. Tracking

Part II
II. Mixing
III. Mastering

II. Mixing
Just having the parts recorded does not mean it's finished. There's some sifting through them and seeing if there's anything you want to adjust or throw out. It's very easy to take parts out so you have the benefit of throwing mud at the wall while recording and seeing what sticks. I classify this as part of the arrangement and editing process: pick the parts you want to keep and choose the takes or parts of takes that are the best performance, with "best" being subjective. It might be the most technically precise take, or the one with the most expression.
Recording software is really powerful. You can adjust the timing of individual notes, copy and paste measures or vocal phrases from one place to another. If you record to a click grid (metronome) you can hit a quantize button and everything will automatically align with the grid (in theory). We generally don't quantize because a little wander in the timing makes it sound closer to what a human would actually play and that fits our particular sound better.
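If a little code makes that concrete, here's a rough sketch of what quantizing note onsets to a grid amounts to. The function, the note times, and the strength knob are all made up for illustration; this isn't anything out of our actual recording software.

```python
def quantize(onsets, bpm, subdivision=4, strength=1.0):
    """Nudge each onset (in seconds) toward the nearest grid line.

    subdivision=4 gives a sixteenth-note grid at the given tempo.
    strength=1.0 snaps all the way; smaller values keep some human wander.
    """
    grid = 60.0 / bpm / subdivision              # grid spacing in seconds
    result = []
    for t in onsets:
        nearest = round(t / grid) * grid         # closest grid line
        result.append(t + strength * (nearest - t))
    return result

# e.g. quantize([0.02, 0.49, 1.03], bpm=120, strength=0.5)
```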
You can also add different effects via the plugins I mentioned before. There's a lot more that can be done than just reverb, but the subtleties are a little beyond me.
The software can do a lot but it isn't a magic wand. Getting a great take is way better (and faster) in the long run than getting a mediocre one and trying to fix it later.
The element of positioning is really important in a mix. A track will sit in a certain position in left-right space, forward-back space, and a sonic space as you listen. (Close your eyes when you listen to something with headphones; things will sound like they're coming more from the right or the left, or more from the front or back of your skull.) Right/left balance comes from how much you pan the track into each side: fully right makes the sound come only from the right side, equal parts right and left makes it sound like it's coming from the middle, and everything in between is possible.
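For the curious, here's roughly what a pan knob does to a mono track, sketched in Python. It assumes the audio is just an array of samples and uses a common constant-power pan law; I'm not claiming this is exactly what our engineer's software does.

```python
import numpy as np

def pan_mono(samples, pan=0.0):
    """Place a mono signal in the stereo field.

    pan = -1.0 is hard left, 0.0 is center, +1.0 is hard right.
    A constant-power curve keeps the center from sounding quieter.
    """
    angle = (pan + 1.0) * np.pi / 4.0           # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * samples              # all signal at pan = -1
    right = np.sin(angle) * samples             # all signal at pan = +1
    return np.stack([left, right], axis=-1)     # (num_samples, 2) stereo
```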
Lead vocals are usually right up the middle, as is a single rhythm instrument, assuming it isn't fighting for frequency space with the vocals (more on that later). If you have two rhythm instruments going at once, it's often best to put them on opposite sides so the recording doesn't feel unbalanced.*
*On Amanda Lyn's song "Greatest Night," the original mix had her guitar, the only rhythm element at the time, coming out of the center. That worked until, as an afterthought, we had me double it on the mandolin. When the guitar, mandolin, and vocals all came out of the center, it sounded muddy and undifferentiated. The final mix has the guitar and mandolin separated left and right.
Forward-back spacing is done with the volume and dryness of a part. The drier a part is (i.e. the less reverb that is on it), the more it punches through the mix. Adding reverb will pull it back into the mix. You have to be careful about melodic instruments fighting with each other or with the vocal line if they're all too far forward at the same time, though simply not having more than one instrument play a lead at once also keeps them out of each other's way.
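Here's a toy version of that dry/wet balance, again assuming the track is a plain numpy array of samples. The "reverb" is just a crude feedback echo standing in for a real reverb plugin, so treat it as a sketch of the idea rather than how any actual plugin works.

```python
import numpy as np

def push_back(dry, sample_rate, wet_amount=0.3, delay_ms=80.0, feedback=0.4):
    """Blend a dry track with a crude echo; more wet_amount pulls it back."""
    delay = int(sample_rate * delay_ms / 1000.0)
    wet = dry.astype(float)
    for i in range(delay, len(wet)):
        wet[i] += feedback * wet[i - delay]     # echoes pile up over time
    wet /= (np.max(np.abs(wet)) + 1e-9)         # keep the echo tail from clipping
    return (1.0 - wet_amount) * dry + wet_amount * wet
```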
Sonic space has to do with the frequencies of the track. A note is a vibration of the air at a certain frequency, and every instrument has a frequency profile that gives it its characteristic sound; a saxophone sounds different from a cello or a guitar. Tracks can fight with each other if they occupy the same space. As an example, on "Back Seat Down" Sean had to rework his solo because his first cut was in an octave that put it right in the frequency space of the vocals and acoustic guitar. Because of this, the dobro solo completely disappeared in the mix until he took it into a different octave.
The frequency content of an instrument is a spectrum; a sound basically contains all possible audible frequencies, just very little of some and a lot of others. The engineer can EQ tracks to adjust those characteristics, bringing some frequencies out more prominently or damping others down, and has to balance making each track sound its best in isolation against how it sits in context with the others. Tweaking these to get the best combination is part of the mixing process and is, frankly, something I don't have a great ear for. From a mix perspective, if it's in tune, in time, and not in spatial conflict with anything else, I'm mostly good.
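If it helps, here's a small sketch of the kind of frequency picture an engineer looks at before reaching for EQ. The `vocal_take` and `guitar_take` arrays in the comments are hypothetical, not exports from our sessions.

```python
import numpy as np

def spectrum(samples, sample_rate):
    """Return (frequencies in Hz, magnitudes) for a chunk of audio."""
    window = np.hanning(len(samples))           # soften the edges of the chunk
    mags = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs, mags

# Two parts "fight" when their energy piles up in the same region, e.g.:
# f, vox = spectrum(vocal_take, 44100)
# _, gtr = spectrum(guitar_take, 44100)
# overlap = np.minimum(vox, gtr).sum()          # crude measure of how much they collide
```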
III. Mastering
This last step is applied to the final mixes. It gets all the individual songs up to the same volume and smooths out differences in dynamic range and overall EQ between songs so the album sounds like a coherent whole. As I understand it, EQ and compression* are the bread and butter of mastering.
*Audio compression or dynamic range compression basically makes the loudest sounds in a song quieter and the quietest sounds louder, reducing the difference between the loudest and quietest parts. This compresses the volume range, hence "compression."
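A bare-bones sketch of that idea, assuming the track is a numpy array of samples between -1 and 1. Real compressors add attack, release, and makeup gain, which I've left out here.

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce anything above the threshold by the ratio (downward compression)."""
    out = samples.copy()
    loud = np.abs(out) > threshold
    excess = np.abs(out[loud]) - threshold      # how far past the threshold
    out[loud] = np.sign(out[loud]) * (threshold + excess / ratio)
    return out
```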
We should be looking at a finished product in the coming weeks, at which point we're going to start working on a release strategy. The temptation is to push it out into the ether as soon as it's done, but that generally isn't wise. We need to figure out what supporting content (e.g. videos) we're going to make, how we're going to distribute it, and what the album art is going to be.
My vote is going to be cover art that is scrimshawed onto a mastodon femur, in an homage to the era of hominid evolution that saw the beginning of this project. I'm not sure the rest of the band will sign on though.
Stay tuned
M