Audio Ordeal

Music Production, Podcast, and DJ Tutorials

8 Questions To Help Troubleshoot Your Mix

8 min read

Mixing a song involves many different tweaks to the tracks. There is no fixed order of working beyond what the specific song requires, so a definitive quick-mix guide would be pointless. The point of this post, however, is to highlight some of the issues I frequently find in my tracks during the mixing process. If you are a beginner, or just can’t figure out what your mix needs, save this article for reference.

The first thing you should set up is a reference track. I normally put it on track 1 in my DAW so I can reference it whenever I need to. If you are struggling to decide which track to use, consider songs in the same genre: perhaps one that heavily influenced your track, or just one you want to sound as good as. Now that we have a track to compare against, let’s get on with the checklist:

1. Can You Tidy Up Your Project?

This is an essential transition in the process of making a song. You no doubt have many, many tracks full of short clips of MIDI, audio, and perhaps even notes. Chances are everything looks cluttered and there is a lot going on in view. What you want to do is tidy up your workspace. Imagine you are sending your mix to another professional: you’d want to send them a clear project to mix.

Dragging all your muted tracks to a folder at the bottom is great for decluttering!

One reason to do this is to set the boundary between the song-writing stage and the mixing stage. Another is that with so much going on, if you accidentally nudge a retake out of line, you might not notice. The best thing to do is render every track (and remember to avoid clipping, without exception, at this stage).

Rendering also turns MIDI tracks into audio, meaning the computer no longer has to process the synths. This frees up CPU for the mixing plugins and reduces latency and crashes. It also gives you the audio waveform instead of the MIDI roll, which is much more useful to see during mixing.

Most DAWs allow you to group your tracks however you like. Group the drums together and place the drum group next to the bass. Group your vocals together and place them next to your lead synth. The idea is that any groups that will, or may, clash sit right next to each other and are easier to work with.

Finally, if you have any discarded tracks, sound-design clips, or muted items, drag them all down to the bottom of the project and leave them in a muted folder. That way you can go back to the early stages if you ever remix the track, and there will be unused ideas and motifs that can be reused in other tracks.

2. Have You High-Passed Everything?

Everything, yes, everything. 

Humans have a limited bandwidth of hearing (20Hz–20kHz). There is no point reproducing sound outside that band, as it will go unheard yet still take up headroom. Bass frequencies especially eat a lot of headroom, so it is important to have control over them and to remove them where possible.

Every instrument has a lowest note: on a guitar, it’s the open low E string. When a microphone records an instrument, it records every sound, including background and room noise.

There is little need for these sounds, so if any sit below the pitch of the instrument, we can remove them. Let’s keep the guitar example: its lowest note is E, or 82.41Hz (we know this from a quick Google search).

From this information, we can safely say that any sound picked up below that frequency is unwanted noise, so everything below it can be removed.
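As a sanity check, that 82.41Hz figure falls straight out of the standard equal-temperament formula. Here’s a quick Python sketch (MIDI note 40 is the guitar’s open low E; this is just an illustration, not part of any mixing workflow):

```python
# Equal-temperament pitch-to-frequency conversion, tuned to A4 = 440 Hz (MIDI note 69).
def midi_to_hz(note: int) -> float:
    return 440.0 * 2 ** ((note - 69) / 12)

# MIDI note 40 is E2, the open low E string on a standard-tuned guitar.
print(round(midi_to_hz(40), 2))  # → 82.41
```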

A high-pass filter (which only lets the high frequencies “pass” through) is set just below the frequency needed. In this case, I’d set the high-pass just below 80Hz, removing all of the sound below the guitar’s lowest pitch.

If you don’t know the lowest frequency, perhaps because the source is a singer, set a high-pass and move it up in frequency until you hear it affecting the quality of the voice. Bring it back down just below that point and you have the right frequency. Make sure to do this on a part where the full range of the voice is used.
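If you want to experiment outside a DAW, this kind of high-pass can be sketched with a Butterworth filter in SciPy. A rough illustration only: the 80Hz cutoff and 4th-order slope here are example choices, not a recommendation for your material.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio, cutoff_hz, sample_rate=44100, order=4):
    """Apply a Butterworth high-pass filter just below the instrument's lowest note."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example: a 50 Hz hum mixed with a 440 Hz tone; high-passing at 80 Hz
# attenuates the hum while leaving the tone essentially untouched.
t = np.linspace(0, 1, 44100, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 440 * t)
filtered = highpass(signal, 80)
```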

Should you decide to use a bass sound that goes super “subby”, remember the limit of human hearing is 20Hz, and most good speakers struggle below ~30Hz. So there is no point keeping anything below 20Hz, and you will hear little difference even at 30Hz.

You can safely high-pass at 20Hz on each track. I do it on all tracks just to get the absolute best amount of headroom. Bear in mind that headphones do reach down to 20Hz, but in my honest opinion there is very little musical value at that frequency. Remember, in the context of a busy mix, you can remove more than you think.

Don’t just make your judgement on a solo’ed instrument, as taking that little bit extra out may sound better when everything is playing at once.

3. Is It Still Muddy?

I don’t want this article to turn into an EQ cheat sheet, but one area that always seems to be an issue is the 200–250Hz range. This is a low-mid region that can make things very muddy.

At first, all I ever did was EQ the ~250Hz range down on the master track, as it was always an issue, but as time went by I realised it had to be done on a track-by-track basis.

Rational Acoustics have this great chart for sound systems; it equally applies to instruments.

The difficult part is that the track doesn’t sound great with that frequency removed, nor does it sound good when more than one instrument occupies it. The best thing to do is figure out which instrument needs to carry the weight and allow that instrument to keep the frequency. All the other instruments can have it turned down.

Continuing the above point about high-passing, you may be able to get away with doing it quite high up the frequency scale, especially on background instruments. Try high-passing a few above 250Hz and see if that clears things up.
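For the curious, the kind of cut described above can be modelled as a peaking-EQ biquad using the well-known RBJ Audio EQ Cookbook formulas. This is a sketch to show the shape of the move, not a preset: the 4dB cut and Q of 1 are made-up example values.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook formulas)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

# A gentle 4 dB cut centred at 250 Hz, as you might apply to a background instrument:
b, a = peaking_eq(fs=44100, f0=250, gain_db=-4.0)
w, h = freqz(b, a, worN=8192, fs=44100)  # frequency response for inspection
```

Plotting `20 * log10(abs(h))` against `w` shows the cut dipping to −4dB at 250Hz and returning to flat well away from it.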

EQing is often a case of trial and error: take advice, but don’t stick to a formula.

4. Are the Main Elements Clear?

Vocals especially can get lost in the mix, as they sit squarely in the range of most lead instruments. The same applies if you have a synth lead getting lost in dense background pads. To fix this you need to stop the frequencies clashing and masking each other.

Imagine someone playing a triangle while they speak. You will most likely hear both the triangle and their voice very clearly. Now imagine they are playing a piano note at the same pitch as their voice: things become harder to hear.

In the context of a mix, removing a narrow band of frequencies from one instrument is often barely noticeable. If you want to make your vocals sound clearer over a piano, try cutting the fundamental frequencies of the vocal from the piano’s EQ.

The brain will pick out the vocals better, as they are the only sound occupying their frequency range. Another option is to side-chain a compressor on the piano to your vocals: set the vocals up to trigger the compressor, which reduces the volume of the piano every time the vocalist sings.

Don’t compress too hard here, as we don’t want the piano to bounce back up the moment the vocals end. Try low ratios and longer releases.
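The ducking behaviour can be sketched as a toy envelope-follower compressor. This is a simplified illustration, not any particular plugin; the threshold, ratio, and time constants are made-up example values.

```python
import numpy as np

def sidechain_duck(piano, vocal, threshold=0.1, ratio=2.0,
                   attack=0.005, release=0.3, fs=44100):
    """Duck the piano using the vocal level: an envelope follower on the vocal
    drives gain reduction on the piano. A low ratio and long release keep the
    piano from pumping straight back up between vocal phrases."""
    atk = np.exp(-1 / (attack * fs))   # per-sample attack smoothing coefficient
    rel = np.exp(-1 / (release * fs))  # per-sample release smoothing coefficient
    env = 0.0
    gain = np.ones_like(piano)
    for i, v in enumerate(np.abs(vocal)):
        coef = atk if v > env else rel
        env = coef * env + (1 - coef) * v
        if env > threshold:
            # Reduce the overshoot above threshold according to the ratio.
            over_db = 20 * np.log10(env / threshold)
            gain[i] = 10 ** (-over_db * (1 - 1 / ratio) / 20)
    return piano * gain
```

With a loud vocal the piano’s gain drops below unity, and after the vocal stops the long release lets it glide back up rather than jump.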

5. Have You Overcompressed Everything?

Throughout the mixing stage, I like to render the track and look at the waveform. While this is by no means the best way to check for over-compression, a beginner may find it useful.


Any quick Google search will give you examples of heavily vs lightly compressed audio. The more the waveform resembles a uniform rectangle (or sausage), the more compressed it is.

Now, this doesn’t mean anything in itself; plenty of good mixing engineers get their tracks sounding great with a large amount of compression. However, if you are in the first stages of the mix and the track already resembles a censor bar, you should probably dial it back.
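If you’d rather measure than squint at the waveform, the peak-to-RMS ratio (crest factor) is a rough numeric stand-in for the “sausage” test. A simple sketch; the thresholds in the comments are rules of thumb, not hard standards.

```python
import numpy as np

def crest_factor_db(audio):
    """Peak-to-RMS ratio in dB, a rough over-compression indicator.
    A pure sine sits at ~3 dB and a square wave at 0 dB; heavily limited
    'sausage' material drifts down toward those figures, while uncompressed
    recordings usually measure well above 10 dB."""
    peak = np.max(np.abs(audio))
    rms = np.sqrt(np.mean(audio ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)   # crest factor ≈ 3.01 dB
clipped = np.clip(3 * sine, -1, 1)   # hard limiting pushes it toward 0 dB
```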

A compressed waveform.

6. Does It Hurt At Loud Volume?

Before everyone flips their shit at me for suggesting they deafen themselves, there’s a safe way to do this. Play your reference track and turn up the volume to a moderately loud level that is still comfortable to listen to. Don’t play it at an unsafe volume, as you risk damaging your hearing; for this brief test, as loud as is comfortable will do.

Now play your track at the same volume. If it hurts or sounds harsh, you know there are things that need to be fixed. You don’t need to play the music loud for long at all; just make sure to pick the busiest part of the song.

Tinnitus sucks, don’t blow your hearing.

If you are an electronic dance producer especially, your music is written to be played loud in clubs, so making sure it isn’t a painful experience for your listener is essential.

I have always found 2.5kHz to be a harsh frequency and tend to dip it in most of my tracks at some point. If you feel your track is too harsh, EQ isn’t the only tool.

Multi-band compressors are great at de-harshing certain frequencies. Most good multi-band VST compressors have a de-harsh preset, so if you are nervous about doing it yourself, see if the preset helps the issue and note down the settings so you know how to replicate it.
7. Is It Clear At Low Volume?

Obviously the converse of the loudness test also applies: the mix must stay clear during quiet playback. Make sure your mix is intelligible on a quiet speaker (try turning down your monitors, and also test on a poor-quality phone speaker).

Identify the parts that aren’t coming through (note that small speakers may not reproduce the bass well) and compare with your normal listening level. It may help you hear a problem frequency that didn’t seem like an issue at the time.

Quiet playback can often tell you if an instrument is too loud or too quiet; do this stage a few times for best results.
8. Is Your Mix Actually OK, And You Just Need A Break?

Not hard to check: make a coffee, watch an episode of something on TV, and come back to the mix after a break.


Copyright © Tom Jarvis 2020. All rights reserved.