Mixing audio can seem like a black art. There are millions of tips around the internet on how to do this or that to create a great mix. And I'm sure you've seen just as many tutorials by the masters as I have. The problem is that 99.9% of these are built on some minute trick for handling one tiny situation, or on gear you don't have fixing problems you don't have either.
The truth of the matter is that there's far too much information on the specifics, but not enough on the generals. That's what we're going to focus on today: the big picture of mixing. I'll include a few specifics along the way, as examples and to spur your creativity.
So what is the view from 10,000 feet? Where do we begin?
Like it or not, the core of a mix comes down to the levels of each track in the mix. Getting the levels right is far more important than some magical plugin chain you like to use on this or that instrument.
What do we aim for? Two things:
First, make sure every instrument can be heard. Even in dense mixes with a lot of layers, you should be able to pick out each specific instrument and hear what it's doing. To do this, you'll naturally have to mix percussive elements louder than steady elements: if the drums are the same volume as the organ or pad, you're going to hear all organ/pad and no drums. It's good practice to mix steady instruments like strings and pads low in the mix, since they can be heard between drum beats, while the drums come in loud to carry the song. When mixing, make sure you can hear each and every instrument. Not every instrument needs to be bold or to carry the interest all the time. In fact, most listeners can't easily keep track of more than three elements at a time, so it's okay for smaller sounds introduced earlier to fade into the background of the mix. But they should still be audible to you and others listening closely.
Second, you want to let the powerful instruments speak when they need to. Are the vocals popping out during the chorus? Is the guitar carrying the song during the solo, or the synth hook carrying the song during the drop? They should be. While you want everything to be heard, you want the most important parts to be heard the loudest.
- Tip: it can be a lot easier to hear the balance of instruments when things are really quiet. Turn your speakers down very, very low. You know your mix is sitting right when you can still hear everything when the volume is low, but the most important parts still stick out of the mix and sound special. If the song sounds good quiet, it will sound great loud.
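Under the hood, level balance is just multiplication: each fader applies a linear gain derived from its dB setting, and the mix bus sums the scaled tracks. Here's a minimal Python sketch of that idea; the function names and the particular levels are mine, purely for illustration:

```python
import math

def db_to_gain(db):
    """Convert a fader level in dB to a linear gain multiplier."""
    return 10.0 ** (db / 20.0)

def mix_tracks(tracks, levels_db):
    """Sum several same-length tracks, each scaled by its fader level."""
    gains = [db_to_gain(db) for db in levels_db]
    return [sum(g * t[n] for g, t in zip(gains, tracks))
            for n in range(len(tracks[0]))]

# Hypothetical balance: drums at unity, pad tucked 12 dB underneath so it
# lives between the drum hits instead of fighting them.
fs = 48_000
drums = [math.sin(2 * math.pi * 80 * n / fs) for n in range(fs // 100)]
pad = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs // 100)]
mix = mix_tracks([drums, pad], levels_db=[0.0, -12.0])
```

Nothing magic here, but it's worth internalizing that every 6 dB you pull a fader down roughly halves the track's amplitude.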
I'll be honest: I don't believe in trademark EQ curves for specific instruments, or in using EQ to stamp on some signature sound. That all happens elsewhere. If you're using EQ right, it's more of a cleanup tool than a space for creativity. Though it sometimes takes creativity to best clean up a mess with EQ.
What is EQ for? Two things:
First, make sure each track doesn't have any problems when soloed (or when not-soloed, if you're an experienced mixer). If a voice is honky or a hi-hat is sizzly or a kick drum is boomy, these are things you can clean up with EQ. You want to make sure each instrument and layer in your mix sounds good in isolation. Though boosting what sounds good can seem easy, usually cutting what isn't needed is the better and faster route to fixing your problems.
If I hear a problem but don't know where to begin, I insert a big boost with my parametric EQ and sweep around the frequency until I find the heart of the frequency range that's giving me problems. Then I cut where that big boost is, removing the problem I was looking for. Special use cases aside, I strongly recommend using a parametric EQ over a graphic EQ. You just have more control. And it can be a big help if your parametric EQ has a spectrum analyzer built in: it makes it super easy to see where unwanted peaks live, and also to know at a glance if there's too much very high or very low energy in your song.
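If you like seeing the machinery, here's roughly what that boost-and-cut move does under the hood. This is a bare-bones peaking (bell) filter in Python built from the widely published RBJ biquad formulas; the function name and parameters are mine, for illustration only:

```python
import math

def peaking_eq(samples, sample_rate, center_hz, gain_db, q=1.0):
    """One peaking (bell) EQ band, per the RBJ 'Audio EQ Cookbook' biquad."""
    a_lin = 10.0 ** (gain_db / 40.0)        # square root of the linear peak gain
    w0 = 2.0 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a_lin
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a_lin
    a0 = 1.0 + alpha / a_lin
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a_lin
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:                        # direct form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# The sweep trick in miniature: a narrow +12 dB boost makes a problem
# frequency jump out; flipping the gain sign at that frequency cuts it.
fs = 48_000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(4800)]
boosted = peaking_eq(tone, fs, center_hz=1000, gain_db=+12, q=4.0)
cut = peaking_eq(tone, fs, center_hz=1000, gain_db=-12, q=4.0)
```

A high `q` gives the narrow, surgical bell you want for sweeping; once you've found the problem, you can usually widen the band and cut less.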
Second, EQ is a great tool for getting the many layers in a song to sit well together. Sure, an electric guitar might sound great played full-spectrum while soloed. But there's a better than good chance it will blend with the low end and high end of the song better if you band-limit it to the frequencies relevant to where it sits in the mix: if you cut the low end out of the electric guitar, the synth bass or electric bass will sound that much cleaner and more powerful because of it. And that's a worthwhile trade. For the same reason, make sure no two dominant instruments are masking each other in frequency, and that no background instruments are masking dominant instruments.
Another example of using EQ to help instruments fit nicely with each other is to roll off unneeded frequencies. If my cymbals or drums are sounding a bit too shrill, I like to add a low-pass filter at the very top of the frequency spectrum, just to tame the extreme highs a little. And it's common practice for me to roll off the lows on most every instrument that has them. I even use a high-pass filter on my kick drums and bass synths to cut out the extreme lows: getting rid of the stuff below 30-50 Hz (depending on the song) not only frees up headroom to give the song more perceived volume, but it just makes the bass sound punchier and cleaner, even when played on a system that can produce evenly down to 20 Hz. Oftentimes those lowest frequencies just aren't adding anything to the song.
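To make the low-cut idea concrete, here's a toy first-order high-pass in Python. A mixing high-pass is usually steeper (12 or 24 dB per octave), so treat this as an illustration of the principle rather than a production filter; the names and the example signal are mine:

```python
import math

def high_pass(samples, sample_rate, cutoff_hz):
    """First-order (6 dB/octave) high-pass: blocks DC and rumble below cutoff."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = []
    y_prev = x_prev = 0.0
    for x in samples:
        y = a * (y_prev + x - x_prev)        # classic RC high-pass recurrence
        out.append(y)
        y_prev, x_prev = y, x
    return out

fs = 48_000
# A kick-drum-ish signal: a useful 60 Hz thump plus useless 15 Hz rumble.
kick = [math.sin(2 * math.pi * 60 * n / fs) + math.sin(2 * math.pi * 15 * n / fs)
        for n in range(fs)]
cleaned = high_pass(kick, fs, cutoff_hz=40)
```

The sub-bass rumble eats headroom without adding anything audible on most systems, which is exactly why the peak level drops after filtering even though the kick still sounds like a kick.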
- Tip: EQing in mono is a fantastic trick to help get your mix sounding right. Not only does it help your song's mono-compatibility, but it gives you an edge on instrument separation. Stereo sound is a magnificent thing, and of course, your mix will sound its best when it's good and wide, full of exciting stereo content. But if your speakers are set up even halfway right, you're mixing with far more stereo separation than the average listener will ever hear. All that separation can make frequency carving for specific instruments seem less valuable, but that simply isn't true. Set your monitor controller to mono, or put a plugin on the end of your master fader that converts your mix to mono, then do your EQing for separation. When you turn stereo back on, your mix will sound better than ever.
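The fold-down itself is nothing fancy: a mono switch just averages the two channels. Here's a tiny Python sketch (the names are mine, purely illustrative) that also shows why the mono check matters, using a fake-wide layer made by flipping polarity on one side, which cancels itself out when summed:

```python
import math

def to_mono(left, right):
    """Fold a stereo pair down to mono the way a mono switch does."""
    return [0.5 * (l + r) for l, r in zip(left, right)]

fs = 48_000
tone = [math.sin(2 * math.pi * 220 * n / fs) for n in range(fs // 10)]

# A healthy layer: the same signal on both sides survives the fold-down.
mono_ok = to_mono(tone, tone)

# A fake-wide layer: one side polarity-flipped vanishes entirely in mono.
mono_bad = to_mono(tone, [-v for v in tone])
```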
As I mentioned, the invention of stereo is an incredible thing. Nothing beats having a wide, clean mix that just sings from the speakers. This might be second-nature to some of you, but maybe others could use a little direction here.
Generally speaking, the lead elements of the song and the bassy elements of the song are panned center. There's a very good chance you want your lead vocal, your snare drum, your kick drum, and your bass panned to the middle. And if there's a lead guitar solo, for example, that would probably sound good centered too.
Auxiliary instruments usually sound good spread out. Maybe a piano layer is panned off to one side, and a rhythm instrument to the other. Against the lead instruments in the center, the mix will start to sound bigger with these less important instruments panned out. There are two ways to do this: the natural way and the artificial way. The natural way is to pan the instruments as if you were looking at the musicians on a stage: the lead vocalist stands center, the keyboard player is off to one side, the rhythm guitarist is off to the other, and so on. Try to replicate how you see bands arranged on stage. The artificial way is more common for largely digital music, where the song may be many layers of synths, all of which are stereo. Lead elements should sit in the middle, important wide elements can be panned as wide as the sky, and less important stereo layers can still be panned directionally. For example, a stereo synth could have its left channel panned 90% left and its right channel panned 20% left, giving the whole sound a left-side bias while keeping some width. Then pan a different synth toward the opposite side to balance out the mix.
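For the curious, the math behind a DAW pan knob is simple. Here's a Python sketch of a standard equal-power pan law; the function names are mine, and real DAWs offer a few different pan-law flavors (-3 dB, -4.5 dB, -6 dB at center), so consider this one representative example:

```python
import math

def pan_gains(pan):
    """Equal-power pan law: pan in [-1, 1], -1 = hard left, +1 = hard right.

    The left/right gains trace a quarter circle, so left**2 + right**2 == 1
    and perceived loudness stays constant as a sound moves across the field.
    """
    angle = (pan + 1.0) * math.pi / 4.0    # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

def pan_mono(samples, pan):
    """Place a mono track in the stereo field; returns (left, right) lists."""
    gl, gr = pan_gains(pan)
    return [gl * s for s in samples], [gr * s for s in samples]

# A centered lead lands equally in both speakers at about -3 dB each,
# while a "90% left" auxiliary layer leans hard into the left channel.
center_l, center_r = pan_gains(0.0)
aux_l, aux_r = pan_gains(-0.9)
```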
Generally speaking, the mix will sound best when the volume from left and right are about equal. My meters show that a lot of my mixes aren't exactly balanced, especially not all the time. But as long as both sides of the mix sound balanced to the ear, it all works out.
Also, doubling is a powerful tool for adding width and dimension to a track. Recording two layers of a rhythm guitar part and panning them 100% left and 100% right is the oldest trick in the book for adding stereo power: if the performances are tight, it will sound like one guitar part, but your ear hears the two are different and perceives it as one instrument with incredible space and size. Doubling vocals and panning them is also a great trick to add weight and power and width to the vocals, especially with harmonies and layers additional to the lead vocal.
- Tip: don't forget about stereo effects. You might be accidentally routing an instrument to mono reverb when stereo reverb likely sounds better. Guitar effects emulators can sound a lot different in stereo too, even when fed a mono guitar signal. You can add a lot of perceived space by sending a left-panned instrument to a right-panned reverb. And adding ping-pong delay can really amp up the width and space of your mix.
The last job of the mixing engineer is to make the track exciting. One might say this is optional, but all of the best mixes add a little spice to keep the interest flowing.
In modern electronic music, sidechain compression or volume-shaping can add a lot of excitement to the mix. Dialed in correctly, you get a core element or the entire song pumping and moving to the rhythm of the song. The classic technique is to put a compressor on a lead element or a bus of multiple elements sidechained to the kick drum, so the other instruments duck out of the way each time the kick drum is heard. I prefer using a volume-shaping plugin instead: it's easier to implement than setting up a sidechain, it's far faster to get a desirable shape dialed in, and you can intentionally use a volume-shaping pattern separate from the kick drum. For example, if you have the kick hitting every quarter note, using volume shaping on the pad synth or upper bass synth layers set to dotted quarter notes could sound really interesting.
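If you're wondering what a volume-shaping curve actually does, it's just a repeating gain pattern multiplied onto the track. Below is a bare-bones Python sketch with a simple duck-and-recover shape; the parameters and names are made up for illustration, and a real volume-shaping plugin gives you a drawable curve instead of a hard-coded ramp:

```python
def volume_shape(samples, sample_rate, bpm, beats_per_cycle=1.0, depth=0.8):
    """Duck the gain at the start of each cycle, then ramp back to unity.

    depth=0.8 means the gain dips to 0.2 at each hit; beats_per_cycle=1.5
    would give the dotted-quarter pattern mentioned above.
    """
    cycle = sample_rate * 60.0 / bpm * beats_per_cycle   # cycle length, samples
    out = []
    for n, x in enumerate(samples):
        phase = (n % cycle) / cycle          # 0.0 at the hit, rising toward 1.0
        gain = 1.0 - depth * (1.0 - phase)   # linear recovery back to unity
        out.append(gain * x)
    return out

fs = 48_000
pad = [1.0] * fs                             # a steady pad held for one second
pumped = volume_shape(pad, fs, bpm=120, beats_per_cycle=1.0, depth=0.8)
```

Because the shape is decoupled from the kick, you can run it at any rhythmic interval you like, which is exactly the flexibility a sidechained compressor doesn't give you.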
Reverb is a staple tool in mixing. Use it to add dimension and space to your mix. This might take the mix from a dull, tight room to a small hall or a large hall, depending on your preference. Used subtly, it can add glue to the mix and presence to the vocals and key layers while still sounding dry. Used moderately, it can make lead synths and guitars sound huge and anthemic. Used aggressively, and you can make instruments sound muffled and in the background, contributing to a vintage, lo-fi sound.
If you want to learn more about reverb, check out my post on maximizing reverb. And be creative: remember that you can put effect plugins on a reverb bus: maybe distortion, maybe amp emulation, maybe volume shaping, maybe sidechain compression to the source so the reverb swells only when the source goes quiet.
Delay is another staple tool in mixing. Used subtly, it bolsters the strength and warmth of vocals and keyboard instruments. Used more aggressively, it can fill up holes in the mix: for example, vocal delay is a great way to add interest to a pause after a vocal phrase. Get creative and see what options your delay plugin gives you. I love the flexibility and control my favorite delay plugin offers.
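Under the hood, a delay effect is little more than a buffer with feedback. Here's a minimal mono feedback delay in Python; the names and defaults are mine, just to show the structure:

```python
def feedback_delay(samples, sample_rate, delay_ms, feedback=0.5, mix=0.5):
    """Blend the dry signal with delayed copies; each repeat is scaled by `feedback`."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    buf = [0.0] * delay_samples              # circular delay line
    pos = 0
    out = []
    for x in samples:
        delayed = buf[pos]
        buf[pos] = x + feedback * delayed    # feed the echo back into the line
        pos = (pos + 1) % delay_samples
        out.append((1.0 - mix) * x + mix * delayed)
    return out

fs = 48_000
# A single click, so the decaying echo train is easy to see.
click = [1.0] + [0.0] * (fs - 1)
echoed = feedback_delay(click, fs, delay_ms=250, feedback=0.5, mix=0.5)
```

Everything else a delay plugin offers (tempo sync, filtering in the feedback path, ping-pong routing) is decoration on top of this loop.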
These are just examples. But if the instruments in the mix aren't enough to make the song exciting, then reach into your toolbox to find an effect that can help. Which effect you use and how you use it is up to you. Just be sure that it doesn't significantly alter the levels, EQ balance, and panning that you worked so hard to achieve.
- Tip: the output of a virtual instrument or the recorded track from a physical instrument doesn't have to be the final sound. Get creative by throwing amp emulators or distortion plugins or filtering plugins or multi-effects plugins onto instruments. A lot of what you try won't sound good, but once in a while, you'll stumble across a killer effect that adds incredible character to the instrument. In my own music, often the bite and character of the hook owe all of their interest to the happy accident of finding the perfect preset in an effects plugin added after the instrument was recorded. If this interests you, check out my guide on adding character to your tracks.
If you've made it this far, you know the core of mixing. Get the levels sounding right, subtly shape with EQ to solve problems and create space, pan things around for width and separation, and add excitement through effects. I can't promise your mixes will sound 100% better after reading this. After all, a beginner drummer can't suddenly become a master after reading a single how-to article. Learning takes time, and it almost entirely comes down to how much experience you have: how many hours you've spent mixing, how many mixes you've made, and how proactive you are in learning from pro mixes as you hear them.
It's quite possible that in your experimentation with mixing, you've developed some bad habits. Approaching the mix from a minimalist's perspective can free you from those bad habits.
Also, you'll probably need to check your mix against reference tracks to hear your song in perspective.
That said, the purpose of today's article is to largely bypass the subtle tricks here and there in order to focus on the heart of mixing. And if you get these elements right, your mixes will sound really strong.
Now that we've covered the basics, is there anything that you'd like to add? Any favorite tricks you'd like me or my followers to know? Please write them in the comments below.
P.S. You'll notice I didn't include a section for compression. I don't believe compression plays a major role in the fundamentals of mixing. Sure, it can even out the volume of a dynamic instrument to better keep the levels of your mix sounding consistent, or it can add sustain to percussion or pumping to the mix for extra character. But all of these uses fall under the Levels and Excitement components of mixing. Compression is just another tool to be used only when it's needed and otherwise ignored. Not a core component of mixing.