17 March 2013

Home Studio: Mixing - Part 2

Introduction

In part 1 we talked about the basic ideas behind Digital Audio, explaining what the sampling frequency and bit depth are and how they affect our work.

In this part, we will describe the tasks that make up the mixing phase, trying to draw the boundary between it and the mastering phase.

Mixing tasks

The final goal of the mixing phase is to obtain a mix where every instrument is clearly distinguishable from the rest. The personal touch adds some spice to the mix and makes it appealing to other people.

Let's break down the different tasks that take place in the mixing process.

Track cleansing

The first task of mixing is to clean the recorded takes. The amount of noise that mics pick up is impressive. We aren't aware of all that noise until we listen to the track carefully: small clicks (pickup selection), background noises (electrical hums or buzzes), distant sounds (door slams, children playing, ...).

If we have some good restoration software, such as Sonnox Restore, it can be of great help to remove unwanted noises from our tracks and, very especially, clicks and pops.
If we don't have such tools, we will need to remove each click by hand, editing the track.
We will first delete the portions of the track where there is no musical information, just "silence" or "noise". We want to keep only the parts that are needed to create the music, nothing else.
At the very least, we should clean the start and end of our song, where we go from silence to music and from music back to silence, because that is where such noises are easiest to detect.
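
To make that idea concrete, here is a minimal sketch (in Python with NumPy; a hypothetical helper, not any editor's built-in function) of how the "silence" before the first and after the last useful sample of a take could be trimmed automatically. The -60 dB threshold is an arbitrary assumption:

    import numpy as np

    def trim_silence(signal, threshold_db=-60.0):
        """Cut everything before the first and after the last sample above the threshold."""
        threshold = 10.0 ** (threshold_db / 20.0)          # dBFS -> linear amplitude
        loud = np.flatnonzero(np.abs(signal) > threshold)  # indices with real content
        if loud.size == 0:
            return signal[:0]                              # the take is pure noise/silence
        return signal[loud[0]:loud[-1] + 1]

In a real session you would still check the result by ear, because a soft fade-in could fall under the threshold.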

The Bass Guitar is an instrument that generates high energy in the low frequencies. A speaker needs a lot of energy to reproduce a low frequency and, therefore, if it must handle low and high frequencies at the same time, the lows will dominate, using up practically all the available energy, and the range of frequencies that interests the human ear most will probably come out weak.

Therefore, some engineers trim the Bass Guitar track to the exact length of the note being played at every instant. This means a ton of chunks in the bass track but, that way, they free up unnecessary energy that can be used by the rest of the instruments.

When we have no other solution, we can try a Noise Gate on that track, to help control how much signal is allowed to cross the gate, filtering out the noise floor. The issue with Noise Gates is that they can ruin the musicality of the track. If you are a guitarist with very dynamic playing (big differences between the quietest and the loudest notes), it's possible that part of the quietest sounds will be lost or that the note tails will be cut. Trying to recover those parts by lowering the threshold of the Gate will bring back the unwanted noise, possibly in an ugly, intermittent way that is worse than leaving the noise there.
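
If you are curious about what a gate actually does, here is a toy noise gate in Python/NumPy; it is illustrative pseudocode, not the algorithm of any particular plugin, and the threshold and window length are arbitrary assumptions:

    import numpy as np

    def noise_gate(signal, sample_rate, threshold_db=-48.0, window_ms=10.0):
        """Mute the blocks whose RMS level stays below threshold_db."""
        window = max(1, int(sample_rate * window_ms / 1000.0))
        gated = signal.copy()
        for start in range(0, len(signal), window):
            chunk = signal[start:start + window]
            rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12       # avoid log(0)
            if 20.0 * np.log10(rms) < threshold_db:
                gated[start:start + window] = 0.0            # close the gate
        return gated

Real gates also smooth the transitions with attack, hold and release times; that smoothing is exactly what keeps them from chopping the note tails the way this crude version would.
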
Why is it important to clean tracks?

During mixing, you will need to raise the average perceived volume of the mix. Sound processors, such as compressors, will boost that sound but, as a drawback, they will also raise the noise floor present in the track. So, the less noise in the track, the cleaner the final mix will sound.

Also, cleaning the parts of the track without useful information will leave more room for the rest of the instruments to be represented with a clearer image.

Equalizing

One of the main tasks of the mixing phase is the equalization of the individual tracks, to distinguish each instrument and make it more present in its natural frequency range. There are two kinds of equalization: corrective and cosmetic. We will discuss each one later.

Panning

One more basic task of mixing is panning. It consists of placing each instrument at a certain position within the stereo image, where we would naturally find that instrument in a real situation (concert, gig, ...).
We will go into the details later.
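
Just to give a taste of what the pan knob computes, here is a small sketch of a constant-power pan law in Python/NumPy; the convention of -1.0 for hard left and +1.0 for hard right is my own assumption for the example:

    import numpy as np

    def pan_mono_track(mono, pan):
        """Constant-power panning: pan = -1.0 (left) ... 0.0 (center) ... +1.0 (right)."""
        angle = (pan + 1.0) * np.pi / 4.0          # map [-1, 1] onto [0, pi/2]
        left = np.cos(angle) * mono                # cos^2 + sin^2 = 1, so total power stays constant
        right = np.sin(angle) * mono
        return np.stack([left, right], axis=-1)    # stereo buffer of shape (n_samples, 2)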


Modify dynamics

We modify the dynamics of the sound by using dynamics processors, such as gates, compressors, expanders and limiters.
These tools change the perceived average volume, tame peaks and can modify some basic characteristics of the sound, such as the attack or the release.
We will develop this aspect in depth in another blog entry.
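
As a small preview, here is a toy downward compressor in Python/NumPy: every sample above the threshold is scaled down according to the ratio. The attack and release smoothing that real compressors apply is deliberately left out, and the threshold and ratio values are arbitrary:

    import numpy as np

    def compress(signal, threshold_db=-18.0, ratio=4.0):
        """Static downward compression, without an attack/release envelope."""
        eps = 1e-12
        level_db = 20.0 * np.log10(np.abs(signal) + eps)     # per-sample level in dBFS
        over_db = np.maximum(level_db - threshold_db, 0.0)   # how far above the threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)             # shrink the excess by the ratio
        return signal * (10.0 ** (gain_db / 20.0))

After compressing you would normally apply some make-up gain, which is precisely what raises the noise floor mentioned earlier.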


Restoring the spatial information

Most studio recordings are made in what is called a dry ambience. Those rooms are designed to minimize sound reflections, removing reverberation and echoes. Otherwise, the geometry of the room would make some frequencies resonate (an excessive reinforcement of certain frequencies). These resonant frequencies are known as Room Modes. Room Modes distort the original sound, exaggeratedly reinforcing some frequencies, usually in the bass and sub-bass range. In such a room, we would end up recording something different from the original source.

The drawback is that those dry takes aren't musical. So, once we have a good image of the original sound source, we need to re-create the ambience in which we would like it to sound.

In the real world, every sound bounces off the objects around it and, therefore, the sonic information that reaches us is the mix of the original source with the sum of the different reflections off those objects. For this reason, studios use controlled ways to reproduce such a reflective ambience, with the help of reverbs and echoes whose parameters are under our control and don't depend on the Room Modes.
This also allows a take recorded in a very small room to sound as if it had been recorded live at a concert.
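
To see the principle at work, here is a minimal feedback-delay (echo) sketch in Python/NumPy: the dry signal is blended with delayed, progressively quieter copies of itself. The delay time, feedback and wet/dry amounts are arbitrary example values:

    import numpy as np

    def add_echo(signal, sample_rate, delay_ms=350.0, feedback=0.35, wet=0.3):
        """Feedback delay: blend the dry signal with recirculating, attenuated copies."""
        delay = int(sample_rate * delay_ms / 1000.0)
        out = np.concatenate([signal, np.zeros(4 * delay)])   # leave room for the tail
        for i in range(delay, len(out)):
            out[i] += feedback * out[i - delay]               # each pass adds a quieter echo
        dry = np.concatenate([signal, np.zeros(4 * delay)])
        return (1.0 - wet) * dry + wet * (out - dry)          # (out - dry) keeps only the echoes

A reverb works on the same idea, but with thousands of much shorter, denser reflections instead of a few audible repeats.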

We will discuss this subject later.


Apply effects

Here is where the Art begins, where the Sound Engineer adds his or her own tricks and experience to enhance the nuances of the mix and make it distinct: from effect pedals or effect racks, up to external manipulations with equalizers, dynamics processors, filters, etc.


The 3 Dimensions of Mixing

This concept was introduced by Tyschmeyer in his books and videos, and I find it really easy to understand and very illustrative, so I am glad to share it with you.

Imagine your mix as a three-dimensional cube. The X-axis runs along the front face of the cube and represents the panning of the tracks. The Y-axis is vertical, the height, and represents the equalization of the tracks. The Z-axis is the depth and represents how deep each instrument sits in the mix.


Y-Axis: Equalization

After cleansing the tracks, the most important task is equalization.
With all the tracks unpanned and without any kind of additional processing, we should be able to clearly distinguish every single instrument.

In the worst case, our song will be reproduced in a mono environment, where the stereo panorama will not help. A correct equalization will keep every instrument distinguishable even in that worst case.

Imagine that we have a picture of each instrument and that we throw them all onto a table. If there is no transparency in parts of each picture, we will only be able to distinguish the instrument in the topmost picture.

Or imagine a building full of windows. What we want to achieve is to see just one instrument through each window.

With the help of corrective equalization, we will try to boost or cut certain frequency ranges, so that each instrument is clearly represented in its natural frequencies and any possible conflict with other instruments is removed.

For corrective equalization, we need a "surgical" tool that doesn't introduce any kind of digital artifact. A good example is the Sonnox equalizer.
Usually, for each instrument, we can choose two regions in which to enhance its own character: the range of its fundamental frequencies (the set of frequencies corresponding to its scale) and the range of its harmonics.

To do good corrective equalization, we need some knowledge of the frequencies that are natural to the instrument on the one hand and, on the other, of how each frequency range (sub-lows, lows, low-mids, high-mids, highs, super-highs) affects the instrument's timbre.

Boosting or cutting the perceived volume in each of those areas of a specific instrument can enhance its intelligibility or totally ruin its sound.

One of the first equalization tasks consists of removing the content in the sub-low frequencies (below 50 Hz and, very especially, below 30 Hz). Why? We already explained that speakers waste a lot of energy reproducing low frequencies, and we need that energy to reproduce the rest of the spectrum.

If your instrument isn't one of those that are rich in low frequencies (Electric Bass Guitar, Kick, Contrabass, etc.), its low-frequency content adds nothing to the mix and, therefore, you can cut the lows below 90 Hz (in some cases even below a higher frequency).
This gives some air to the low-frequency instruments and benefits their representation.
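
A simple way to apply such a low cut in software is a high-pass filter; here is a small sketch using SciPy's Butterworth design (the 90 Hz cutoff is just the example value discussed above, to be chosen per instrument):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def high_pass(signal, sample_rate, cutoff_hz=90.0, order=4):
        """Remove content below cutoff_hz (rumble that only wastes speaker energy)."""
        sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
        return sosfiltfilt(sos, signal)

In a DAW you would do the same with the high-pass band of any equalizer plugin.
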
There is no magic recipe. Each instrument has its own representative frequencies and even two units of the same instrument can differ slightly in their timbre. But, all right, I will share with you some notes (taken from other places) that I find truly helpful.
Kick Drum

Any apparent muddiness can be removed by cutting around 300 Hz.
You can try a slight boost between 5 and 7 kHz, to add some high frequencies.

50-100Hz ~ Adds bottom
100-250Hz ~ Adds roundness
250-800Hz ~ Muddiness area
5-8kHz ~ Adds presence
8-12kHz ~ Adds air

Snare Drum

You can try a slight boost between 60 and 120 Hz, if the sound is really snappy.
Try boosting around 6 kHz to get that "pap" sound.
100-250Hz ~ Adds fullness
6-8kHz ~ Adds presence

Hi-Hat and Cymbals

Any muddiness can be fixed by cutting around 300 Hz.
To add some brightness, try a slight boost around 3 kHz.
250-800Hz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

Bass Guitar

You can try boosting around 60 Hz, to add some body.
Any muddiness can be removed by cutting around 300 Hz.
If you need more presence, try boosting around 6 kHz.
50-100Hz ~ Adds bottom
100-250Hz ~ Adds roundness
250-800Hz ~ Muddiness area
800Hz-1kHz ~ Fattens the sound on small speakers
1-6kHz ~ Adds presence
6-8kHz ~ Adds presence in highs
8-12kHz ~ Adds air

Vocals

This is the most complex equalization. It depends a lot on the mic used to record the voice.
Anyway, try a boost or a cut around 300 Hz, depending on the mic and the song.
Apply a slight boost around 6 kHz, to add some clarity.

100-250Hz ~ Adds "in your face" presence
250-800Hz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity but also sibilance
8-12kHz ~ Adds brightness

Piano

You can remove any muddiness by cutting around 300 Hz.
Try applying a slight boost around 6 kHz, to add clarity.
50-100Hz ~ Adds bottom
100-250Hz ~ Adds roundness
250-1kHz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity
8-12kHz ~ Adds air

Electric Guitar

Also a complex equalization. It depends again on the mic used and on the song itself.
Try cutting or boosting around 300 Hz, depending on the song and the target sound.
Try boosting around 3 kHz to add some edge to the sound, or cut here to add some transparency.
Try boosting around 6 kHz to add some presence.
Boost around 10 kHz to add brightness.
100-250Hz ~ Adds body
250-800Hz ~ Muddiness area
1-6kHz ~ Cuts through the mix
6-8kHz ~ Adds clarity
8-12kHz ~ Adds air

Acoustic Guitar

Any apparent muddiness can be removed by cutting between 100 and 300 Hz.
Apply a gentle cut around 1 to 3 kHz, to push the image back.
Apply a gentle boost around 5 kHz, to add some presence.
100-250Hz ~ Adds body
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

Strings

Completely depends on the song and the target sound.

50-100Hz ~ Adds bottom
100-250Hz ~ Adds body
250-800Hz ~ Muddiness area
1-6kHz ~ Adds crunchiness
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

This was just a small example of how the different frequency ranges can affect each instrument.
When we listen to our mix, we need to ask ourselves what is missing and what is excessive in each instrument, so we can perform the right corrections in its equalization.

There are several diagrams available on the Internet, like the interactive frequency chart that you can find at this link: (http://www.independentrecording.net/irn/resources/freqchart/main_display.htm), which helps a lot to understand the frequency ranges in which each instrument moves, as well as what happens with an excess or a lack of representation in each range.

The quickest way to understand how each region affects an instrument is to try exaggerated boosts or cuts in each area. By exaggerating the effect, we will easily understand how that particular range helps define the timbre of the instrument.
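
One way to practice this exercise in software is to sweep a deliberately exaggerated narrow boost across the spectrum and listen to what each band does to the timbre. Here is a rough sketch using SciPy's resonant peak filter; the centre frequencies, Q and gain are arbitrary choices for the exercise, not mixing settings:

    import numpy as np
    from scipy.signal import iirpeak, lfilter

    def exaggerated_boost(signal, sample_rate, center_hz, q=2.0, gain=4.0):
        """Isolate a narrow band with a resonant peak filter and add it back, loud."""
        b, a = iirpeak(center_hz, q, fs=sample_rate)
        band = lfilter(b, a, signal)        # only the band around center_hz survives
        return signal + gain * band         # crude, very audible boost of that band

    # Listen to the same track with the boost swept through the ranges listed above, e.g.:
    # for center in (80, 300, 1500, 3000, 6000, 10000):
    #     boosted = exaggerated_boost(track, 44100, center)

Once the role of each range is clear to your ear, go back to gentle, surgical moves in the real mix.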

You should be able to find many other articles and tricks related to the corrective equalization of the different instruments but, in the end, your ear gets schooled with practice. In the beginning, it isn't easy to evaluate the impact of each correction. Be patient; it is worth every minute you spend training your ears.

To finish this introduction to corrective equalization, let me just point out that there are certain well-known competitions between instruments that share representative frequencies.
In the lows, the Kick drum competes with the Bass Guitar.
In the mids, the vocals compete with the Snare drum.

It's a good idea to draw a map of frequency corrections, so that we avoid reinforcing the same range for two different instruments that compete for it. What we enhance in one instrument should be cut in the other, for that particular range.
Most engineers prefer to cut rather than boost ranges. By cutting a certain range in one instrument, you are automatically freeing that range for the others, so you don't need to boost it in the competing instrument to make the two distinguishable.

So, on the Y-axis, we are stacking the instruments "vertically" in the frequency spectrum, in such a way that each one can stick "its head" out of one of the "windows".

Here you have an example of a corrective equalization I did for one of my songs. Click on the image to see it at full size.



Left to right and top to bottom: Kick drum, Snare drum, Hi-Hat, Timbale, Overheads, Bass Guitar, Electric Guitar and Vocals.

It makes no sense to simply copy it, because the corrections have to work in your mix, not in mine.
Just try the guidelines above and find what works best in your mix.


To be continued...

Well, I wrote about the Y-axis more than I expected, so I will explain the rest of the dimensions in the next entries. But I honestly think this part is a very good starting point for anyone who feels lost in the mixing tasks.
