EQ, short for ‘equalisation’, is responsible for controlling the tone of a sound and it plays a crucial role throughout the production process. To understand how EQ can be used, we first need to understand what ‘tone’ means, before moving on to examine the controls on offer to a producer via a hardware or software EQ.
The classic Neve 1073 EQ and preamp
Every sound you hear is constructed from a combination of harmonics, sitting atop what’s called the ‘fundamental frequency’. If you play middle-C on a piano (or C3, if you prefer), the reason you recognise that note and can sing it back is that the fundamental frequency – C in this case – is the loudest and most prominent ‘event’ in the experience of hearing that note. However, it’s by no means the only frequency you can hear when you strike that key. In addition to the fundamental, a series of frequencies mathematically related to middle-C will also sound – these are known as harmonics, and they’re produced by a series of related vibrations.
Pitch and Frequency
As musicians, we talk about ‘pitch’ a lot as it’s a key part of the way we learn music. You only have to read the paragraph above to see that I’ve used ‘middle-C’ as an example of describing a pitch, rather than its related frequency. But in terms of understanding fundamental frequencies, harmonics and the mathematical relationship between them, it’s much easier to switch from talking about pitch to talking about frequency, as the two are directly linked: every pitch can be measured as a frequency, and every frequency relates to a specific musical pitch, whether or not it’s in tune.
Every note of a piano can be correlated to a specific frequency
To understand this more easily, let’s abandon middle-C and instead imagine a note whose fundamental frequency is 100Hz. This means that if you could see a graphical analysis of the note as played, you’d see that its waveform, as a rising and falling shape, would ‘cycle’, or occur, 100 times per second. Cycles per second are measured in Hertz, or ‘Hz’ for short.
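To make ‘cycles per second’ concrete, here’s a minimal sketch (assuming a 48kHz sample rate, a common studio setting) that generates one second of a 100Hz sine wave and counts how many times the waveform rises through zero – once per complete cycle:

```python
import math

def sine_wave(freq_hz, duration_s=1.0, sample_rate=48000):
    """Generate samples of a sine wave at the given frequency."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def count_cycles(samples):
    """Count rising zero-crossings -- one per complete cycle."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a <= 0 < b)

wave = sine_wave(100)      # one second of a 100Hz tone
print(count_cycles(wave))  # 100 cycles in one second = 100Hz
```

The same check works for any frequency: one second of a 440Hz tone contains 440 cycles.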
As we’ve already understood, a note played with a fundamental frequency of 100Hz would produce a dominant pitch at that frequency but to understand harmonics, 100Hz is a useful starting point. That’s because harmonics occur at ‘multiplications’ of the fundamental frequency. This is where the relationship between our scientific understanding of frequencies and our musical interpretation of these as ‘pitch’ unites, as ‘doubled frequencies’ of the fundamental sound at octaves; at 200Hz, 400Hz, 800Hz, 1600Hz etc, in this example.
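The harmonic series above, and which of its members land on octaves, can be sketched in a few lines (the function names here are just illustrative):

```python
import math

def harmonic_series(fundamental_hz, count=8):
    """Harmonics sit at whole-number multiples of the fundamental."""
    return [fundamental_hz * n for n in range(1, count + 1)]

def is_octave_of(freq_hz, fundamental_hz):
    """Octaves occur where the ratio to the fundamental is a power of two."""
    ratio = freq_hz / fundamental_hz
    return ratio > 1 and ratio == 2 ** round(math.log2(ratio))

for h in harmonic_series(100):
    print(f"{h}Hz" + (" (octave)" if is_octave_of(h, 100) else ""))
```

Note that only the 2nd, 4th and 8th harmonics (200Hz, 400Hz, 800Hz) fall on octaves; the others (300Hz, 500Hz and so on) correspond to different pitches in between.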
A3 or 440Hz is often used as a reference frequency to tune synths to the Western scale
Perhaps the most ‘celebrated’ frequency is that of the A which occurs just above middle-C – A3 to keyboard players, otherwise known as A440. The ‘440’ part is A3’s frequency in Hertz, so it follows that the octave above this, at A4, will have a frequency of 880Hz, the frequency at A5 will be 1760Hz, just as the frequency of A2 will be 220Hz. For every octave jump, double the frequency; for every octave drop, halve it.
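The octave rule is simple enough to express directly, doubling for each octave up and halving for each octave down (a sketch using the article’s A3 = 440Hz naming):

```python
def octave_shift(freq_hz, octaves):
    """Double the frequency for every octave up; halve it for every octave down."""
    return freq_hz * 2 ** octaves

A3 = 440.0                   # the reference pitch, A440
print(octave_shift(A3, 1))   # A4 -> 880.0Hz
print(octave_shift(A3, 2))   # A5 -> 1760.0Hz
print(octave_shift(A3, -1))  # A2 -> 220.0Hz
```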
So, to go back to our pianist playing C3: the hammer strikes the string and sets it vibrating. As well as vibrating along its whole length at the fundamental frequency, the string vibrates in sections, producing the mathematically related frequencies – the harmonics – in sympathy. These harmonics become an essential part of our understanding of the sound of the instruments we’re hearing. Indeed, so important are harmonics that if you were able to strip the harmonic content away from a piano completely, so that only the fundamental frequency was heard and, for example, do the same thing with a violin, the two instruments would sound virtually identical.
Pure sine waves have no harmonic content and can therefore not be produced by any acoustic instrument
Sounds without harmonic content are sine waves – those pure, rounded rising and falling wave shapes you’ll know from hardware and software synthesizers. No acoustic instrument can produce a pure sine wave, and it’s worth remembering that the tonal differences between every acoustic sound you can hear – not just pianos and violins but trumpets, snare drums, human voices, the sound of cars driving past or of a football being kicked – are due in large part to the relationship between the fundamental frequency of a sound and the harmonics that are allied to it.
Why should this make such a difference? Simply because the number of harmonics, and their relative volumes, is different in every sound we hear. Some sounds prioritise the volume of relatively few harmonics to produce pure, hollow sounds. Others are so rich that they’re clearly a combination of several thousand harmonics, all playing a part in a much more complex tone. Other sounds, like bells, might feature a loud fundamental, skip the first few harmonics altogether, then produce a clanging cluster of harmonics, or overtones, at higher frequencies.
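This idea – that timbre comes from the recipe of harmonic levels over the same fundamental – can be sketched with simple additive synthesis. The two ‘recipes’ below are illustrative, not measurements of any real instrument: one includes every harmonic at a level of 1/n (a bright, saw-like tone), the other only odd harmonics (a hollower, clarinet-like tone):

```python
import math

def additive_tone(fundamental_hz, partial_amps, t):
    """Sum weighted sine partials: partial_amps[n-1] is the level of harmonic n."""
    return sum(amp * math.sin(2 * math.pi * fundamental_hz * n * t)
               for n, amp in enumerate(partial_amps, start=1))

# Two hypothetical harmonic 'recipes' over the same 100Hz fundamental:
saw_like    = [1 / n for n in range(1, 9)]                  # every harmonic at 1/n
hollow_like = [1 / n if n % 2 else 0 for n in range(1, 9)]  # odd harmonics only

# Sampled at the same instant, the waveforms differ -- that difference is timbre.
t = 0.001
print(additive_tone(100, saw_like, t) != additive_tone(100, hollow_like, t))  # True
```

Both tones would register as the same 100Hz pitch, yet they sound distinctly different.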
Software EQs like Pro-Q from FabFilter offer almost limitless sound-shaping options
Producers have long experimented with mixing sounds together and it’s frequently the case that we make judgments about which sound to add next to a track in progress on frequency-related grounds. For instance, if we’re adding leads, pads, percussion and keyboard parts to a track, it won’t take long before it feels like it needs ‘rooting’ with a kick drum or bass part (or both). However, it’s also the case that if you’re working with lots of instruments, all of the accumulated harmonic content will need to be addressed before your production is complete; otherwise, you’ll run the risk of overloading some frequency areas while leaving others under-employed.
The Role of EQ
The role of EQ is to address exactly this issue and it’s not uncommon for all (or at least the vast majority of) sounds within a track to feature their own EQ plug-in, so that the tone of the sound on each channel can be configured as a mix progresses. Some sounds will need little or no tone control, whereas others might need much more radical settings which, when played back alone, can even sound ‘wrong’. As part of the whole track, however, those more radical changes may be exactly what’s required to stop the track becoming overblown or bloated in certain frequency bands.
Frequency analysers give us a visual representation of a sound, allowing us to spot any unexpected peaks or troughs
Before we look at some practical examples of how EQ can be used to shape the tone of individual sounds in the context of a production, let’s start by saying that while there are some useful rules to observe about tone balance, no two producers or mix engineers would use EQ in exactly the same way. The most useful tools you have at your disposal when adjusting EQ settings are your ears. A great place to start is to import your favourite tracks into your DAW and to listen to them carefully through your studio monitors. Make some notes – how full and rich is the bass? Does it sound thin and pure, or does it sound richer, fuller and louder? Do any frequencies ‘hurt’ your ears, either because they’re too piercing or harsh?
These are just three of dozens of tone-related questions you can pose, and you can find answers not only with your ears but also with your eyes, if you have an EQ plug-in with a built-in frequency analysis tool. These give an interesting ‘picture’ of which frequencies are most active and how loud they are in the context of the whole track, all of which becomes useful study as you begin your own EQ experiments.
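Under the hood, an analyser measures how much energy sits at each frequency. Here’s a toy sketch of the idea using a naive discrete Fourier transform (real analysers use the much faster FFT, and the 1kHz sample rate here is chosen purely to keep the example small):

```python
import cmath
import math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT: magnitude of each frequency bin up to the Nyquist limit."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n) for i, x in enumerate(samples))
        mags.append((k * sample_rate / n, abs(s)))
    return mags

# A 200Hz sine sampled for 0.1s at 1kHz -- the analyser should spot the 200Hz peak.
rate = 1000
samples = [math.sin(2 * math.pi * 200 * i / rate) for i in range(100)]
peak_freq, _ = max(dft_magnitudes(samples, rate), key=lambda fm: fm[1])
print(peak_freq)  # 200.0
```

Plot those magnitudes against frequency and you have the ‘picture’ an analyser draws in real time.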
The bass drum and bassline are two classic elements that tend to clash in a mix
The best place to start analysing frequency build-up and how to address its issues is to program a looping one-bar phrase for just two instruments. On the first track, program a four-to-the-floor kick pattern so that a kick sound plays on beats one, two, three and four of a single bar. Choose a rich, open-sounding kick as this will contain a collection of harmonics on top of the fundamental frequency. Then, program a bass part on top of the kick with notes playing on beat one and beat three but with syncopated notes playing in the gaps between, so that these other bass notes don’t coincide with the kicks on beats two and four.
Again, choose a saw or square wave bass sound that contains some harmonic content. Then, put a loop around this bar and listen carefully as it plays back. What you’ll hear is that the kicks on beats two and four will sound out clearly, as will the bass notes that fall between the beats. However, where the bass and kicks coincide on beats one and three, the sound may well be smudged or slightly ‘flammed’, as your speakers struggle to reproduce the massed low-end content produced by the kick and bass at the same time.
Some EQs use chromatic keyboards to relate frequency to specific notes and keys
This is the crucial point: your monitors or headphones are responsible for turning the frequency content you put into your tracks into a physical experience, with the speaker cones translating the electrical signals they receive into sound. Any frequency overload won’t translate well as this process occurs, producing anything from subtle sonic artifacts through to sounds which are simply ‘wrong’ – piercing frequencies that are unpleasant to the ear, or unwanted distortion, as a monitor struggles to reproduce the overloaded frequency range.
Parameters in EQ
So how might you resolve the kick and bass example described above? One answer is to identify the most dominant frequency shared by the two sounds and tame it, using a narrow EQ band to dip the volume of the problem frequency in one or both sources. Alternatively, you might decide to prioritise one sound over the other, performing a more radical frequency cut on the bass sound, for instance, in order to preserve the fullness of the kick. Experimentation will allow you to discover which method produces the best results.
Bell band EQs offer the ability to cut or boost only a specific range of frequencies
This raises the question of which parameters are included in EQs; while there is some variation between certain hardware and software designs, most EQ units feature the same three parameters, per individual band. Firstly, EQs allow you to choose a centre frequency, which will determine the point at which tone control will take place. Secondly, they allow you to decide on the ‘width’ of each band, with either a wide range affecting a larger group of frequencies around the centre one, or a narrow, more targeted band. The third common parameter is the amount of volume cut or boost to be applied; remember that as well as being able to bring up the volume of a group of frequencies, individual bands can be cut in level and frequently, employing this latter approach leaves more room in the mix for other sounds.
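Those three parameters – centre frequency, bandwidth (usually expressed as ‘Q’) and cut/boost in dB – map directly onto the maths of a bell band. Here’s a minimal sketch of one, using the widely referenced ‘Audio EQ Cookbook’ biquad formulation (one common design among many; the function names and the 48kHz sample rate are this example’s assumptions):

```python
import cmath
import math

def peaking_eq_coeffs(f0_hz, q, gain_db, sample_rate=48000):
    """Biquad coefficients for a bell band (Audio EQ Cookbook formulation)."""
    A = 10 ** (gain_db / 40)         # amplitude factor from the dB cut/boost
    w0 = 2 * math.pi * f0_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)   # bandwidth control via Q
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def gain_at(b, a, freq_hz, sample_rate=48000):
    """Magnitude response of the biquad at a given frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq_hz / sample_rate)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_eq_coeffs(f0_hz=1000, q=2.0, gain_db=6.0)
print(round(gain_at(b, a, 1000), 2))  # 6.0 -- the full boost at the centre frequency
print(round(gain_at(b, a, 100), 2))   # far below the band, close to 0dB
```

A negative `gain_db` turns the same band into a cut, which, as noted above, often serves a mix better than a boost.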
It’s not a bad rule of thumb that, within busier mixes, for every EQ boost you apply to one sound, you employ a cut to the same frequency group of another. This won’t always be required but, without question, boosting the same frequency areas in several sounds will create more problems than it solves. The three parameters above describe a single-band EQ, but the vast majority of EQs allow you to work with several bands, with the same three controls – frequency, bandwidth and volume cut/boost – available to each. The most common ‘type’ of EQ band is a ‘bell’ shape, but at the ‘outer limits’ of an EQ, both at the treble and bass ends, don’t be surprised to find shelf shapes instead.
Unlike bells, which boost a targeted group of frequencies but leave others outside of that band alone, high and low shelf EQs continue to ‘slope’, so that high frequencies are progressively pushed upwards or downwards by a high shelf band, with similar behaviour at the bottom end via a low shelf.
A shelf band can cut or boost everything below or above a certain frequency, with a severity determined by the slope
Some EQs also provide low- and high-pass filters. Unlike the resonant filters more commonly associated with synths and dedicated filter plug-ins, filters within EQs are usually there to let you shape a sound more radically, removing unwanted sonic artifacts captured at the recording stage. Hum, room noise and unwanted high-frequency reflections can all be a factor while recording, and filters at either end of the frequency spectrum allow for a more radical approach to dealing with such problems than high and low shelf bands allow.
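As a sketch of the high-pass case, here’s a simple one-pole high-pass filter clearing low-frequency rumble from a signal. This is a deliberately basic design (real EQ filters usually offer steeper slopes), and the 400Hz cutoff and 48kHz sample rate are illustrative choices:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=48000):
    """Simple one-pole high-pass: progressively attenuates content below the cutoff."""
    rc = 1 / (2 * math.pi * cutoff_hz)
    dt = 1 / sample_rate
    a = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)  # standard one-pole high-pass recurrence
        out.append(y)
        prev_x, prev_y = x, y
    return out

# 50Hz mains hum plus a 2kHz tone: the filter strips most of the hum
# while leaving the 2kHz content largely untouched.
rate = 48000
sig = [math.sin(2 * math.pi * 50 * i / rate) + math.sin(2 * math.pi * 2000 * i / rate)
       for i in range(4800)]
filtered = high_pass(sig, cutoff_hz=400)
```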
To finish, let’s look at three practical examples of how EQ can be employed to improve your mixes. Firstly, if your track lacks overall energy and drive, you may well find that it’s deficient in the low mid-range, between roughly 120Hz and 260Hz. Particularly with tracks which employ quite sub-heavy basslines, it’s not uncommon for a frequency ‘hole’ to occur in this area, which can leave a track lacking the power of others. To address this, choose a sound that is dominant in this frequency area; either a busy sequenced synth part, the lower end of a pad, or even the bottom of a snare drum, if you’re using a deeper one.
Try an EQ on this chosen sound, select the frequency area stated above and employ a boost of a few dB, with a bandwidth wide enough to cover this whole area. See whether or not this brings more power and energy to the mix and, if so, revise the bandwidth to focus on a narrower frequency group, to avoid bloating the mix. You can even add an EQ to the output channel to produce a boost in this area if the whole track is lacking some energy in this frequency range.
The lo-mid range (250-750Hz) can often cloud a mix
A second common example is removing ‘whine’ from vocal recordings. It’s not uncommon for the frequency group between 1kHz and 3kHz (1000Hz to 3000Hz) to become dominant as a singer performs loudly. While some of the vocal quality in this area will be required for character, this isn’t an attractive group of frequencies to the ear, so backing this area down – again with the aid of a frequency analyser to target particularly offending frequencies – can be hugely advantageous in avoiding a piercing vocal tone.
Some EQs use fixed bands – that means it’s not possible to change the frequency which they affect
Lastly, whenever you hear mix engineers talking about ‘needing more air’ at the mix stage, they tend to be referring to ultra-high frequencies above 10kHz, which bring a ‘sheen’ to mixes, giving them ‘light’ and making them sound shinier. If you seek this quality in your own tracks, high shelf EQs can help, as they’ll continue to push up the volume of frequencies as they head up towards 20kHz, which is, for most people, the upper audible frequency threshold.
Successful use of EQ comes from trial and error; remember that no stock settings will work every time, as the individual requirements of each track you make will differ from the last. Keep experimenting and your tracks will be all the stronger for your tonal tweaks.