
How to Mix a Track: Mixing Basics and Essentials Part 1

Mixing and composition are the two halves of a successful track; to have a track worthy of any recognition, you have to nail both. Traditionally, music was written and jammed out by a band, then recorded and handed to a mix engineer. Nowadays, the massive shift to one-person compositions and computer music means that mixing is an art undertaken by the artist too. This series will guide you towards making better mixes so your music really stands out.

What is Mixing?

Essentially, mixing is balancing the components of a song to make the fullest yet clearest-sounding music. Imagine a song as a recipe: you have all your ingredients, and together they will make a nice batch of pancakes, but you don't want to add equal amounts of flour and salt, as that would ruin the batter.
Much like a recipe, you can't throw every instrument into the mix at the same volume; each one needs to be carefully balanced. Of course, even with the right ratios of ingredients, simply stirring them together in a bowl doesn't magically make a pancake.
There has to be a level of processing to turn the song (or pancake mix) into the final product. This is the other side of mixing, where you add sonic effects and processing to the carefully balanced instruments so they truly blend together.
The human range of hearing spans from 20 Hz to 20 kHz, and an ideally mixed song will have a roughly even representation of sound across that spectrum. If you mix a song with very prominent bass frequencies, it will sound like it is being heard from outside a club. Conversely, if the mix is extremely mid-range heavy, it will sound like music played over a telephone.
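To get an intuition for what "even representation" means, here is a minimal sketch of measuring the energy in a few broad frequency bands of a finished mix. The filename, the band edges, and the three-band split are my own illustrative choices, not a standard:

    import numpy as np
    from scipy.io import wavfile

    # Assumes a mono WAV file called "mix.wav" (hypothetical filename).
    rate, samples = wavfile.read("mix.wav")
    samples = samples.astype(np.float64)

    # Magnitude spectrum of the whole file.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    # Rough bass / mid / treble bands within the 20 Hz - 20 kHz hearing range.
    bands = {"bass": (20, 250), "mids": (250, 4000), "highs": (4000, 20000)}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        energy = np.sum(spectrum[mask] ** 2)
        print(f"{name}: {10 * np.log10(energy):.1f} dB (relative)")

If one band's figure towers over the others, that is the "outside the club" or "telephone" effect in numbers.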
Another thing that really makes a mix shine is the stereo image of the sound. Most songs are in stereo, which means each ear hears a slightly different version of the sound. Mixing allows you to create a sense of space and to position sounds in front of the listener.
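One common way to position a sound is a constant-power pan law, which keeps perceived loudness roughly even as a sound moves across the image. This is only a sketch of that idea, assuming a mono signal stored as a numpy array; it is not any particular DAW's implementation:

    import numpy as np

    def constant_power_pan(mono, position):
        # position runs from -1.0 (hard left) to +1.0 (hard right).
        # The sine/cosine curves keep left^2 + right^2 constant, so the
        # perceived loudness stays steady as the sound moves.
        angle = (position + 1.0) * np.pi / 4.0  # maps to 0..pi/2
        left = mono * np.cos(angle)
        right = mono * np.sin(angle)
        return np.stack([left, right], axis=-1)

    # Example: place a 440 Hz tone halfway to the right.
    t = np.arange(44100) / 44100
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)
    stereo = constant_power_pan(tone, 0.5)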
One term to note when mixing is "headroom". It is probably the concept that deserves the most respect when mixing a song, as the consequences of running out of it are very bad for your track.
The easiest way I can describe headroom is to imagine a trampoline in a room. You are bouncing on the trampoline, having fun. We all like loud music, just as we like jumping high on a trampoline, so let's compare the big jumps with the volume peaks.
If the trampoline sits just off the floor, you will be able to jump, but not very high. Raise the trampoline halfway up the room and your jumps carry you very high, but if you try to go any higher, you'll bang your head on the ceiling. Hitting the ceiling in music is about as much fun as hitting it with your head: it produces distortion.
Long story short, you do not want to run out of headroom, because doing so will destroy the mix. This analogy brings us to the difference between mixing and mastering: mixing deals with the relative levels and the sound, while mastering brings the track up to commercial volume levels (the space just below where you'd run out of headroom). This means that when you are mixing, you absolutely CAN turn the volume of everything down.
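To make that concrete, here is a minimal sketch of what running out of digital headroom looks like, and why trimming everything down fixes it. The two stems, their 0.8 levels, and the 6 dB trim are made-up values for illustration:

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate

    # Two made-up stems at healthy-looking individual levels.
    kick = 0.8 * np.sin(2 * np.pi * 60 * t)
    synth_bass = 0.8 * np.sin(2 * np.pi * 60 * t)

    mix = kick + synth_bass
    print(round(mix.max(), 2))  # 1.6: past full scale, so the output clips

    # Turning BOTH stems down by 6 dB restores headroom without
    # changing the balance between them at all.
    trim = 10 ** (-6 / 20)
    mix_trimmed = kick * trim + synth_bass * trim
    print(round(mix_trimmed.max(), 2))  # ~0.8: back under the ceiling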

The First Stage of Mixing

I would say your very first stage of mixing is the composition itself. This matters because, as discussed above, more and more musicians these days are making music on their computers and mixing and mastering it themselves.
If the composer and the mixer are the same person, you can respect the limits and abilities of both processes. Consider how the best mixes have an even representation across the spectrum: you don't want to write ten different parts for trumpet and only a small amount for the bass.
Ideally, you will write and design/choose your instruments so they don't share the same frequencies to any great extent. Say you have a simple song with really powerful vocals and a synth; you may want the main frequencies of the synth to sit an octave above the voice. This works much better than putting the synth in the same octave as the voice, where the two will mask each other.
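An octave is a doubling of frequency, so shifting a part up an octave moves its fundamental well clear of the original. A quick sketch using standard equal-temperament tuning (the note choices are just an example):

    def midi_to_hz(note: int) -> float:
        # Standard equal temperament around A4 (MIDI note 69) = 440 Hz.
        return 440.0 * 2 ** ((note - 69) / 12)

    vocal_note = 57               # A3, a comfortable vocal register
    synth_note = vocal_note + 12  # same pitch class, one octave up

    print(midi_to_hz(vocal_note))  # 220.0 Hz
    print(midi_to_hz(synth_note))  # 440.0 Hz: fundamentals no longer collide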
Another thing to realise is that sounds add up. If a singer sings a note and a synth plays that exact same note, their volumes sum, and you will have to turn them both down so that frequency isn't overly prominent or, worse, eating into your headroom.
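You can see this summing directly. In this small sketch (the amplitudes and the choice of A4 are made up), two identical in-phase notes double in amplitude, a boost of about 6 dB:

    import numpy as np

    t = np.arange(44100) / 44100
    voice = 0.5 * np.sin(2 * np.pi * 440 * t)  # singer on A4
    synth = 0.5 * np.sin(2 * np.pi * 440 * t)  # synth on the exact same note

    combined = voice + synth
    print(round(combined.max(), 2))  # ~1.0: the amplitude has doubled
    print(round(20 * np.log10(combined.max() / voice.max()), 2))  # 6.02 dB louder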
Light, just like sound, is made of waves, so we can imagine a well-balanced instrumentation as a good photo, where every colour is represented in proportion to the intended result. If we make a picture with an unbalanced representation of frequencies, it will turn out like the one below.
[Photo: here, I have over-emphasised the low-frequency light (red) while under-representing the mid and high colours (green and blue).]

Just as the photo above looks far from natural, a composition would sound unnatural if it were that unbalanced. If the photo represents the melodies and movement of the song, then arranging the instruments and frequencies to spread evenly gives a much nicer result.
As well as the notes you are writing, consider the tones of the instruments: a guitar could get away with sharing some of the frequency range of a male voice if, say, its brightness were turned up.
A synth pad may share the same fundamental as the lead synth, but if the pad's harmonics are programmed to be prominent and cover a much broader spectrum than the lead, the clashing frequencies can be EQ'ed out of the pad, as the note is still held up by its other frequencies.
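Here is a minimal sketch of that idea, cutting a clashing fundamental out of a stand-in "pad" with a narrow notch filter. The 440 Hz clash, the harmonic amplitudes, and the Q value are all assumptions for illustration; a real EQ move would usually be a gentler cut:

    import numpy as np
    from scipy import signal

    rate = 44100
    t = np.arange(rate) / rate

    # A stand-in pad: the clashing 440 Hz fundamental plus broader harmonics.
    pad = (0.5 * np.sin(2 * np.pi * 440 * t)
           + 0.3 * np.sin(2 * np.pi * 880 * t)
           + 0.2 * np.sin(2 * np.pi * 1320 * t))

    # Narrow notch at the lead's fundamental.
    b, a = signal.iirnotch(w0=440.0, Q=8.0, fs=rate)
    pad_cut = signal.filtfilt(b, a, pad)
    # The 440 Hz component is heavily attenuated, but the 880 Hz and
    # 1320 Hz harmonics still carry the note.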

