
Basic Mixing I

Mixing, or making a mix, is an art in itself, much like music. The name says exactly what it is about: adjusting all the different instruments or individual tracks so that they sound well together, both composition-wise and mix-wise. Starting a mix is a simple task once you understand what to do and what not to do. The Basic Mixing chapters explain common mixing standards according to some widely used rules, and also cover general sound subjects.

The Starter Mix, Static Mix and Dynamic Mix. Mixing can be broken down into three basic steps. When starting a mix you will usually have some previously recorded tracks that need further mixing. We will explain how to set up all tracks quickly, so you have a default setup and can progress to the static mix. The starter mix can usually be set up in less than an hour of working time. The static mix takes a bit longer, about four hours or so. The dynamic mix can take from 4 to 12 hours of working time, and finishing off the mix can take one to two days or more, depending on creativity, style and experience. So the total working time divides into three standard parts: first the Starter Mix, then the Static Mix, then the Dynamic Mix, followed by a fourth part, finishing off, which is simply working until the mix is done. Before we discuss these subjects, we will start with some more detail about sound and audio.

Overall Loudness while mixing. The first mistake is thinking that how loud the mix sounds is important; many beginners will try to get their mix as loud as they can, pushing up all the faders until they reach a desired overall loudness level. Don't do that.
The master VU meter looks attractive when it shows all green and red lights, and you might be fooled into thinking that louder is better. It is not: loudness belongs to the mastering stage, not the mixing stage. In the mixing stage we try to achieve balance in the three dimensions of mixing, creating separation and togetherness at the same time. Though separation and togetherness might seem contradictory, every instrument needs its own place on the stage, and together they sound as a mix. So mixing is mostly about balancing (adjusting) individual tracks so they sound well together. As a general rule on digital systems, do not pass 0 dB on the master track. Keeping a gap of about 6 dB between your peaks and 0 dB helps keep your mix free of distortion. Some like to place a limiter on the master track and mix louder; maybe it works for them, but we do not recommend it until you are experienced with a plain dry mix that stays under 0 dB. If you need your mix to be louder, simply raise the volume of your speakers instead; that is the normal way of doing it. We will explain later what to do with the master track of your mixer. While mixing, do not place anything else on the master fader: no plugins, reverb, maximizers, etc. At most, use a brickwall limiter with a threshold of -0.3 dB, reducing only 1 or 2 dB and only when peaks occur. For real beginners and the less experienced, we recommend nothing at all on the master fader, set at 0 dB.

Volume or Level. Because the human ear can detect sounds over a very wide range of amplitudes, sound pressure is usually measured as a level on a logarithmic decibel scale, in dB. The most common level controls are the faders of a mixer or the single volume knob of any stereo audio system. Because volume is so commonly known simply as level, beginning users might overlook its possibilities.
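The dB figures above are easy to compute. Here is a minimal sketch (function names are my own, not from any particular DAW) that measures the peak level of a track in dBFS, where 0 dBFS is digital full scale, and checks how much headroom remains below a chosen ceiling such as -6 dB:

```python
import math

def peak_dbfs(samples):
    """Peak level of a track in dBFS (0 dBFS = full scale, amplitude 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak)

def headroom(samples, ceiling_db=-6.0):
    """How many dB of headroom remain below the chosen ceiling."""
    return ceiling_db - peak_dbfs(samples)

# A track peaking at half of full scale sits at about -6 dBFS.
track = [0.0, 0.25, -0.5, 0.1]
print(round(peak_dbfs(track), 1))   # -6.0
```

Halving the amplitude costs about 6 dB, which is why the recommended gap between your mix peaks and 0 dB corresponds to peaking around half of full scale.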
The volume faders of a mixer sum all track levels towards the master fader as a mix: the levels of the individual tracks are added up on the master bus. When a sound or note is played, its frequency and amplitude (level, volume) allow our ears to capture it and our brains to understand its information. As you can guess, our hearing reacts differently at different frequencies and amplitudes, letting us perceive loud or soft, left, centre or right, distance and environment. Our hearing is a wonderful natural device.

The Fletcher-Munson chart shows how the sensitivity of our hearing varies with frequency at different loudness levels. As you can see, how loud a note is played affects how we perceive its frequency content. From frequency and volume (amplitude, loudness) we also get a sense of direction and distance (depth). Our brains always try to make sense of sounds as if they were naturally produced. Music, and mixing, is mostly unnatural (or less natural), but our brains understand music better when it is mixed for our natural hearing in a natural way: mixing so that natural elements (dry signal, reverberation, effects, the summing towards the master bus) are perceived correctly. For separation and togetherness, the first thing to look at is the volume of the sound, instrument, track or mix that is playing. Like balance or pan, volume is an easily overlooked part of a mix. You might prefer to fiddle with effects or other more interesting things, but volume is the most important control. In fact volume and pan (balance) are the first things to set when starting a mix, and they remain important throughout the mixing process. Fader level and panning matter not only for the mix; composition-wise, volume is also a first tool, for instance when you use the mute button.

Balance or Pan. On a single-speaker (mono) system, where only frequency and volume apply, we would not have to worry about pan or balance: all sound comes from the centre. With a pair of speakers (stereo) it is possible to pan or balance from left through centre to right. We call this the panorama, and it lets us perceive direction from left to right. Although just as important to our hearing as volume or level, panning or balance is often overlooked by beginning users. What can be difficult about setting two knobs, fader and balance? It sounds easy, but planning what you're doing can avoid a muddy or fuzzy mix later on, keeping things natural to our hearing.
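Panning a mono source between two speakers is usually done with a pan law. The sketch below uses the common sine/cosine "equal power" law (one of several laws in use; the function name is my own), which keeps the perceived loudness roughly constant as a sound moves across the panorama:

```python
import math

def equal_power_pan(sample, pan):
    """Pan a mono sample across the stereo panorama.

    pan ranges from -1.0 (hard left) through 0.0 (centre) to +1.0 (hard right).
    The sine/cosine "equal power" law keeps perceived loudness roughly
    constant as a sound moves across the stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# At centre, both channels carry the sample attenuated by about 3 dB (x 0.707).
l, r = equal_power_pan(1.0, 0.0)
```

At hard left the right channel is silent and the left channel carries the full signal; at centre both channels carry about 70.7% of it, which sums back to roughly the original loudness.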
Pan (panorama) and balance are the same thing. The panorama determines where instruments are placed; it is the first sense of direction. As a common rule, volume faders and balance knobs are the first things to set up, and to refer back to, when building a mix. Beginning users who set up volume and panning without a plan, or without understanding dimensional mixing, are quite often lost and struggle to finish a complete mix.

Dimensional Mixing. As a concept, dimensional mixing has to do with 3D (three dimensions). Frequency, amplitude and direction together let the listener understand (hearing with the ears, interpreting with the brain) the 3D spatial information. When mixing a dry signal towards a naturally understandable one, we need some effects as well as basic mixer settings to accomplish a natural perception. Setting the pan to the left makes the listener believe the sound comes from the left; pan to centre, from the centre; pan to the right, from the right. All very easy to understand. Focusing on frequency, we can also influence how the listener perceives depth: sounds with a lot of treble (higher frequencies) are perceived as close, while a more muffled sound (with less treble) is perceived as more distant (further back). Next, our brain understands reverberation, for instance when we clap our hands inside a

room. The dry clap sound (the transients) from our hands is heard accompanied by reverberation coming back from the walls (early reflections). Reverberation, especially the delay between the dry clap and the first reflections, makes our brain believe there is distance and depth, because we first hear the original transient signal of the clap and then the reverberation. The more natural, the more understandable. So quite a few things influence what our hearing accepts as 3D spatial information. Make the listener believe the mix is true. Our hearing likes natural, believable sounds, sometimes referred to as stage depth. With the controls of a mixer you can influence how this 3D spatial information is transmitted to the listener. Volume (fader or level), panorama (balance or pan), frequency (fundamental frequency range) and reverberation (reverb or delay) are the tools you can use to make the listener understand the mix you are trying to transmit. We will discuss dimensional mixing later on; for now, let's head to the frequency, or frequency range, of a sound. We perceive distance, direction, space, etc. through clues such as volume, frequency, the difference in time and level with which a sound enters both ears (whether it hits the left ear louder and sooner than the right) and reverberation.

The Frequency Spectrum. The normal frequency spectrum ranges from 0 Hz to 22000 Hz; all normal human hearing fits within this range. Every instrument plays somewhere in this range, so the spectrum is filled with the sounds of all the instruments or tracks the mix contains. On a normal two-way speaker system these frequencies are presented in stereo: one speaker for left hearing and one for right. So on a stereo system there are two frequency spectrums playing (left speaker and right speaker).
Basically, the sound from the left and right speakers together makes up the stereo frequency spectrum, as presented below. Combined left and right (stereo) makes centre (mono).

This chart shows a commercial recording, a finished song or mix. The x-axis shows the frequency range of the spectrum, 0 Hz to 22 kHz. The y-axis shows level in dB. On digital systems nowadays we go from 0 dB (loudest) down to about -100 dB (soft or quiet). In this chart (the AAMS Analyzer spectrum display) you can see that the lower frequency range, below 1 kHz, is much louder in level than the higher frequencies above 1 kHz. The loudest levels are around 64 Hz at about -35 dB, while the softest levels, from 4 kHz to 22 kHz, sit around -65 dB. The difference is 65 dB - 35 dB = 30 dB! Since roughly every 10 dB of level reduction halves the perceived volume for human hearing, a 30 dB difference amounts to about three halvings of loudness. Instruments like bass or bass drum, whose ranges contain more low frequencies, generate far more power (level) than the hi-hat or other high-frequency instruments. Even though we may perceive a hi-hat clearly when listening, the hi-hat itself produces mainly higher frequencies and generates far less volume (amplitude, power, level) than a bass drum or bass. This is the natural way our hearing works. So although a master VU meter of a mix only displays loudness, you are mostly watching the lower frequencies respond. From left to right, roughly from 120 Hz up to 22 kHz, the frequency levels all slope downwards. Speakers show more movement when playing lower frequencies and less when playing higher frequencies. This chart is taken from AAMS Auto Audio Mastering System; this software package is for mastering audio, but it can also show the spectrum and give suggestions based on

source and reference calculations for mixing. This can be handy for investigating the sound of finished mixes or tracks, showing frequencies and levels.

Human Hearing. Human hearing is perceptive and difficult to explain; it is logarithmic. Lower frequencies measure as loud and higher frequencies measure as soft, yet both are heard well (perceived naturally) at their own independent levels. Not only is human hearing good at understanding frequencies, perceiving them logarithmically; room acoustics and reverberation also play a great part in understanding the direction of sound. Generally, a natural mix is more understandable to the listener.

The Basic Frequency Rule. The rule for mixing is that the bottom end, the lower frequencies, matters most, because it takes away so much headroom and has the loudest effect on the VU meters (dynamic level). The lower frequencies fill up a mix and are the main portion to look after. The VU meter mainly gives you a feel for how the lowest fundamental frequencies behave: it responds strongly to lower frequencies and much less to higher ones. The fundamentals of a mix's loudness mainly range from 0 Hz to about 1 kHz; these show up well on a VU meter. The range from 0 Hz to 4 kHz is what the VU meters show as loudness, and it is the range where you must pay attention to detail. If you can see the difference in loudness between a bass drum and a hi-hat, you will understand that the hi-hat, though clearly audible, carries far less power than the bass drum. A beginner's mistake is mixing the bass drum and bass loud and then trying to add more instruments into the mix; this leaves you limited headroom (dynamic level). The most common tool for adjusting frequency is the EQ or equalizer, but as we will learn later on, there are quite a few more tools for shaping the frequency spectrum.
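The "every 10 dB halves the perceived loudness" rule of thumb used above is easy to turn into a formula. This is a rough psychoacoustic approximation, not an exact law:

```python
def perceived_loudness_ratio(db_difference):
    """Rule of thumb: every 10 dB of level reduction roughly halves
    perceived loudness, so a difference of d dB scales loudness
    by 0.5 ** (d / 10)."""
    return 0.5 ** (db_difference / 10.0)

# The 30 dB gap between the lows (-35 dB) and highs (-65 dB) of the
# example spectrum amounts to three halvings of perceived loudness.
print(perceived_loudness_ratio(30))   # 0.125
```

So the high end of that example spectrum is perceived at roughly one eighth the loudness of the low end, even though both are clearly audible.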
As explained before, volume (amplitude), panorama (pan or balance) and frequency range (EQ, compression, limiter, gate) are the main components, the dimensions, of mixing. Before we add reverberation, we must get a mix that is dry and uses only these components; we call this a starter mix.

Notes and Frequencies. To make frequencies more understandable, imagine a single instrument playing all sorts of notes and melodies in time on a timeline. To get a feel for where notes sit in the frequency spectrum and how to range them, the chart below shows a keyboard, some instruments, and the range of notes (frequency range) they can normally play. Every note from C1 to C7 on a keyboard has its own fundamental frequency. You can see bass, tuba, piano, etc. in the lower range, and violin, piccolo, and again piano, which can play high notes.
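The fundamental frequency of each keyboard note follows from twelve-tone equal temperament with A4 tuned to 440 Hz. A small sketch (the helper and table are my own, purely illustrative):

```python
# Semitone offsets of each note name relative to A within the same octave.
NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def note_frequency(name, octave):
    """Fundamental frequency of a note in Hz, 12-tone equal temperament,
    relative to A4 = 440 Hz (each semitone is a factor of 2 ** (1/12))."""
    semitones = NOTE_OFFSETS[name] + (octave - 4) * 12
    return 440.0 * 2 ** (semitones / 12.0)

print(round(note_frequency("A", 4)))   # 440
print(round(note_frequency("C", 1)))   # 33 -- down in the bass range
```

Middle C (C4) lands at about 261.6 Hz, and each octave up doubles the frequency, which is why the instrument ranges in the chart climb so quickly.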

It is important to know every instrument's range, but when mixing it is more important to give each instrument its place within the available spectrum. The coloured areas are the fundamental frequency ranges. When we need to do something about the quality of an instrument, we will most likely look inside its fundamental frequency range: boosting or cutting in these areas changes the quality of the instrument's playing. More interesting are the black areas of the chart above; these represent the frequencies that are not fundamental. Since these non-fundamental frequencies contribute little, we are likely to cut heavily in these areas with EQ, saving headroom and gaining clarity (separation). Most of the hidden mix headroom is taken up by the first and second bass octaves (0 Hz - 120 Hz). Most notes played, and most instrument sounds, have a fundamental frequency below 4 kHz. Looking at the fundamentals of a mix, the range from 50 Hz to 500 Hz is where it really fills up: almost every instrument plays part of its range here, so it gets crowded. The misery area between 120 Hz and 350 Hz is especially crowded and is the second frequency range to look after (the first being 0 Hz - 120 Hz). The headroom required for the proper mixing of any frequency is inversely proportional to its audibility or overall level: the lower you go in frequency, the more hidden energy of the mix, or headroom (dynamic level), it costs. This is why the first two frequency ranges need to be the most efficiently negotiated parts of any mix (the foundation of the house), and they are the parts most often fiddled with by the inexperienced. Decide which instruments will sit in this range and where their fundamental notes are played. Keeping what is needed and removing what is not (reduction) works better than just making everything louder (boosting). To hear all instruments inside a mix you need to separate them, using volume, panorama and frequency range. For example, you can get more clarity by cutting the higher frequencies out of the bass and playing a piano on top that has its lower frequencies cut.
By this frequency rule the two no longer interfere, and the mix sounds less muddy and clearer (separation). Both bass and piano have found their own place inside the available frequency spectrum of the mix; you will hear them both, together and clean, by following the fundamental frequency range rules. For most instruments a cut from 0 Hz up to around 120 Hz is not uncommon; in fact, cutting low frequencies is the most common move of all. Apart from the bass drum and bass, which really need their low information present, we are likely to save headroom on all other instruments or tracks by cutting some of their low range, anywhere up to 120 Hz. The lower mid range, the misery area between 120 Hz and 350 Hz, is the second pillar of warmth in a song, but it can turn unpleasant when energy is distributed unevenly there. Pay attention to this range, because almost all instruments are present in it.

Fundamental Frequencies and their Harmonics. When notes are played, you expect their fundamental frequency to sound each time, but you will hear much more than just that fundamental. A playing instrument has a fundamental frequency range you can expect it to occupy. Recorded instruments such as vocals also contain reverb and delay from the room they were recorded in, and quite a few instruments come with body, snare or string noises as well (even those nasty popping sounds). The whole frequency range of an instrument is made up of its fundamental frequencies, its harmonics and several other sounds. When mixing we like to think in frequency ranges: we can expect the instrument or track to play inside its fundamental frequency range. That tells us what is important (the frequency range of the instrument or track) and what is less important (the frequencies that fall outside this range).

Harmonics.
A harmonic of a wave is a component frequency of the signal that is an integer multiple of the fundamental frequency. For example, if f is the fundamental frequency, then two times f is the second harmonic, three times f is the third harmonic, and so on. The harmonics are all periodic at the fundamental frequency, and they also drop in level as they go higher.
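The integer-multiple relationship can be sketched in one line (a trivial illustration, with a function name of my own):

```python
def harmonic_series(fundamental, count=5):
    """List the fundamental and its harmonics: f, 2f, 3f, ... as
    integer multiples of the fundamental frequency."""
    return [fundamental * n for n in range(1, count + 1)]

# An A at 440 Hz: the second harmonic sits at 880 Hz, the third at 1320 Hz.
print(harmonic_series(440, 4))   # [440, 880, 1320, 1760]
```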

Harmonics double in frequency, so for a note at 440 Hz the second harmonic sits at 440 x 2 = 880 Hz. Harmonics multiply very quickly across the whole frequency spectrum, so you can expect the range from 4 kHz to 8 kHz to be filled with harmonics. If you are looking for some sparkle, the 4 kHz to 8 kHz range is the place to be. Above 8 kHz towards 16 kHz, expect all the fizzle and sizzle (air). The hi-hat sounds in the 8 kHz to 16 kHz range, and this is where the crispness of your mix resides. As the harmonics climb in frequency, their amplitude or volume also gets softer: the fundamental plays loud, while the harmonics decrease in amplitude each step up.

Here are some instruments with their fundamental ranges and harmonic ranges.

In this chart you can see that the highest fundamental frequency (the violin's) is 3136 Hz. As a general rule, all fundamental frequencies stop somewhere below 4 kHz, and for most instruments the common notes are played below 1 kHz. You can also see that the lowest range of a bass drum reaches below 50 Hz, and of a bass down to about 30 Hz. This means the area from 0 Hz to 30 Hz is normally not used by playing instruments; it contains mostly rumble and popping noises and is therefore unwanted. Cutting heavily with EQ in this area takes the strain of unwanted power out of your mix, leaving more headroom and a clearer mix (use the steepest cutoff filter you can find). Try to think in ranges when building a mix inside the whole frequency spectrum: expect where to place instruments and what you can cut from them to make headroom (space) for others. Need more punch? Search in the lower range of the instrument, up to 1 kHz (4 kHz at most). Need more crispness? Search in the higher ranges, 4 kHz to 12 kHz, where the harmonics are situated. Once you can predict where things happen in the spectrum, you can decide how to EQ a mix, or use compression, gating, limiting and effects to correct it. Cutting out what is not needed and keeping what is, is how a mix starts: get a clean mix as a whole before adding more into it. Effects like reverb or delay are added later (in the static mix); first focus on what is recorded, and get that clean and sounding good.
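The low-cut (high-pass) filtering described above can be illustrated with a minimal first-order filter. This is only a sketch: a one-pole filter rolls off at about 6 dB per octave, far gentler than the steep cutoff filters recommended for removing sub-30 Hz rumble in a real DAW:

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """Minimal first-order high-pass filter (about 6 dB/octave).

    Real DAW high-pass filters are much steeper; this sketch only
    illustrates the idea of draining away energy below the cutoff.
    """
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Feed in a constant offset (0 Hz "rumble"): the filter drains it to zero.
dc = [1.0] * 2000
filtered = one_pole_highpass(dc, cutoff_hz=30)
```

A pure DC input decays towards zero at the output, while content well above the cutoff passes largely unchanged; cascading several such stages is one way steeper filters are built.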

Recorded Sound. First and foremost, composition-wise and recording-wise, all instruments and tracks need to be recorded clean and clear. Use the best equipment you have when recording tracks; even when playing with MIDI instruments, all recordings need to be clean, clear and crisp. The recorded sound matters, so recording as well as you can is a good thing. In mixing, the recorded sound can then be adjusted towards what we find pleasant to hear. Knowing where an instrument or track will fit into the mix gives you an idea of how to adjust it, and also of how to record it. Getting a mix where you hear each instrument play (separation) while keeping some togetherness as a whole also means thinking composition-wise and recording-wise.

Cutting / Removing is better than Adding / Gaining. Throwing in reverb or delay too early spices up the sound of instruments, and most beginners will start by adding these kinds of effects, trying to make more of the sound they like. Well, just don't! You don't have to add effects at first; you have to decide what will stay and what must go. As well as aiming for togetherness of all combined tracks, you will need headroom for the creative things added to the mix later. It is quite easy to fill your mix with mud: a reverb or two will do it, as will piling up effects or boosting the EQ into a booming sound. Taking mud away once you have added it is a hell of a job. Starting with a nice clean mix that keeps only the important sounds (without adding) is far better and gives less chance of muddiness. Remember: do more cutting than boosting or gaining. Manual editing is the first task, deciding what must be removed and what can stay, leaving headroom for further mixing. This is quite a task. In most cases EQ (equalization) is the tool for working on the frequency spectrum as a whole, but in a DAW you can also delete or mute what is not needed. You might decide to cut all the low frequencies out of a hi-hat, simply because you expect them to be useless there, leaving low-frequency space (headroom) for other instruments to play in. This kind of cutting, leaving part of the spectrum unaffected, is how every instrument gets its own place inside the whole frequency spectrum of the mix. Level (fader), balance, EQ and compression (with limiting and gating) are good tools to start a basic mix setup with. A good start means better results later on, when you are adding more to the mix to make it sound better and more together.
Starting with a clean mix is starting with a clean slate. With EQ, for instance, cutting or lowering can be done with a steep bell filter, while raising is better done with a wider bell filter.

The Master Fader. While mixing, do not adjust the master fader every time you need to correct the overall level of your track; keep the master fader at 0 dB. (Only when you are using the master fader to adjust the main volume of your monitor speakers, headphones or listening system is it acceptable to adjust that single master fader of your desk while mixing.) All other master faders (soundcard, recording program, sequencer, etc.) must stay at the same 0 dB position while mixing. The same goes for the master balance (master pan) of the summed mix: keep it centred. The main reason is simple: the master fader is not for mixing, so leave it alone. When you set the main master bus (summing) fader below 0 dB you lower the overall volume, which may seem plausible, but especially on digital systems you can then push the instrument faders upwards without hearing the distortion you are causing. Lowering the master fader also costs dynamic range: internal mixing can go over 0 dB (creating internal distortion) without it showing on the VU meter or lighting the limit LED, so you get no warning that you are going over 0 dB. When a signal goes over 0 dB on a digital system, the signal distorts (set your DAW to 32-bit float processing), but you will not necessarily notice when it happens internally. Audible or not, this is (mostly) not allowed. Keep all master faders and the master balance in the same position while mixing, preferably at 0 dB. Also, the human ear hears frequencies differently at different volumes (loudness).
Listening at low volume reveals the mix to your hearing in one way; raise the volume and it will sound slightly different. Loud and soft listening are close, but they differ. So if you like it loud, play your mix softly and see what happens to the sound (does anything disappear?). It is a good check to see whether your mix stands up when played both loud and soft. How human hearing responds is shown in this chart.

This chart shows different loudness levels. You can see that the frequency range between 250 Hz and 5 kHz is fairly unaffected by playing loud or soft, but the 20 Hz to 250 Hz range differs greatly in loudness between loud and soft playback. The higher frequencies also translate differently when played loud or soft. This is how human hearing perceives loudness.

Instruments. Everything you record on a track is likely to be an instrument. Common instruments are drums, bass, guitar, keyboard, percussion, vocals, etc. So when we talk about instruments, we mean the full range of available instruments or sounds, each placed on its own single track.

Instrument Faders. When you mix, you adjust only the instrument faders to set the volumes (levels) of the different instruments or single recorded tracks (don't touch that master fader). Hopefully you have recorded every instrument separately (drums, bass, guitar, keyboard, vocals, etc.) on single tracks, and on your mixer they are labelled from left to right. Each fader adjusts the volume (level) of a single instrument or track, and the total is summed by the master bus fader. It is wise to start with drums on the first fader and then bass; the remaining faders can be guitar, keyboard, vocals, etc., whatever instruments you have recorded.

Separation and Planning, Labelling and Placement on a Mixer. Most likely you will start with the bass drum on fader 1 and work upwards with snare, claps, hi-hat, toms, etc., each on their own fader (2, 3, 4, 5, 6, and so on), so the whole drum kit sits on the first faders. Then place bass, guitar, piano, keyboard, organ, brass, strings, background vocals, vocals, etc. on the next faders. You can use any kind of system. If you have send tracks, place them far right on the mixer, just next to the master fader. Be sure to label all tracks and to set each mixer track's fader at 0 dB and pan at centre. Labelling the names of tracks (instruments) on a mixer keeps things visible; most digital sequencers allow naming a track on the mixer. It is also good to work from the loudest instruments (drums, bass, etc.) towards the softer ones. Plan this on your mixer from left to right, faders 1, 2, 3, 4 and upwards. Most likely the bass drum will be the loudest peaking sound, so place it first, on the left.
Maybe you have no drums on your tracks; then just work out which sounds will be mixed and heard the loudest and which will be heard more softly. To make things easier to understand, we use labelling the drums as an example. Keeping things separated when recording drums is a must: you can do much more in drum mixing when bass drum, snare, claps, hi-hats, toms, etc. are each recorded on their own track (separately). This means using more tracks on the mixer, but you are rewarded with flexibility while mixing. Nowadays, with digital recording, sequencing and sampled instruments, the drums often come from a sampling device, a drum synth, or are recorded

with multiple microphone setups. As long as your recording technique allows you to separate tracks or instruments, you will profit from it while mixing. For sampled instruments or synthesizers that can output over several multitracks, it is also rewarding to separate each sound, giving each its own track on the mixer. Again, spreading and separation work best and are the most common mixing technique. Deep sounds spread all across the panorama are not a good thing: the fundamental instruments (bass drum, snare, bass, main vocals) must have a centre placement, and any variation off-centre will be noticeable. Follow the panning laws for fundamental and non-fundamental instruments: fundamental, lower-frequency instruments centred and higher frequencies further outwards; non-fundamental lower instruments more towards the centre, higher instruments more outwards. Use a goniometer and a correlation meter. When working on DAWs (digital audio workstations), keep a goniometer, correlation meter, level meters and a spectrum display available as constant checking tools; maybe even dedicate a second monitor, or another computer, to this job.

Sound Systems. As with many questions about sound systems, there is no single right answer. A well designed mono system will satisfy more people than a poorly designed or implemented two-channel sound system. The important thing to keep in mind is that the best loudspeaker design for any facility is the one that works effectively within the programmatic, architectural and acoustical constraints of the room. That means (to paraphrase the Rolling Stones): "You can't always get the system that you want, but you find sometimes that you get the system that you need." If the facility design (or budget) won't support an effective stereo playback or reinforcement system, then it is important that the sound system be designed to be as effective as possible. Preferred is a room with no acoustics for recording, and a room with some acoustics (room reverberation) for monitoring.
Quality is an assurance, but when on a budget, at least choose equipment with little or no background noise.

Mono or Stereo. This question is asked and debated. For me and many others, all tracks should be stereo, so I do not like to record in mono at all. But the fundamental instruments (bass drum, snare and vocals) should be panned straight to the centre and be upfront; these can be recorded, or have their original signal converted, in mono, which ensures the left and right speakers play exactly the same signal and makes them appear dead centre, where they should be. Most of the time I convert mono tracks to stereo (left and right identical) or simply record in stereo even when the source is a mono signal. So it's no mono for me, though this can be debated; of course I still keep the fundamental instruments dead centre at all times. Especially on a computer or digital system with recording and sequencing software, working in stereo all the time lets you keep all effects and channels in stereo. Most digital mixers and effects (delay, reverb, phaser, flanger, etc.) work in stereo and need to sound in stereo anyway. Some digital systems do not perform as well on a mono signal, so stereo creates fewer problems on digital systems. Of course, working completely in mono avoids correlation problems, but we mix in stereo, with two speakers. It is better to have all tracks in stereo even when a bass or guitar was actually recorded in mono; I always convert from mono to stereo or record in stereo from the start, but this is just advice. As long as the original signal is exactly the same left and right, you can work with a mono signal in stereo mode. Knowing that your tracks are all stereo, you never have to worry about mono versus stereo tracks again (or whether an effect or plugin is outputting correctly). You just know it's stereo, all the time!
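Converting a mono track into "dual mono" stereo, with identical left and right channels, is trivial to sketch (an illustration only, using sample lists rather than real audio files):

```python
def mono_to_stereo(mono_samples):
    """Convert a mono track to "dual mono" stereo: identical left and
    right channels, so the sound appears dead centre between the speakers."""
    return [(s, s) for s in mono_samples]

bass = [0.1, -0.2, 0.3]
stereo_bass = mono_to_stereo(bass)
print(stereo_bass[0])   # (0.1, 0.1)
```

Because both channels are sample-for-sample identical, the source stays perfectly correlated and dead centre, exactly what is wanted for the fundamental instruments.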
This can help with setting up and makes things easy. A well-recorded mono sound source, on the other hand (recorded in mono, or in stereo with both channels the same), can be placed with relative ease onto the sound stage, allowing you to much better decide what effects should be applied and how, with regard to your neighboring instruments and their positions and frequencies in the mix. Stereo sounds that sway around the panorama, like synths, can be hard to handle, especially when you have a bunch of these swaying instruments inside your mix. In the natural world, a dry signal is effectively transmitted as mono, with reverberation added by the surroundings and perceived as stereo by our two ears. In steady mixing, mono signals also work best: even when they fill up a stereo track, both channels playing the same signal gives a more steady and natural mix. Remember that you can always add an effect later to make instruments sway around, so recording a dry and clean signal pays off when mixing needs to be free and creative. If two mono sound parts share the same frequency range, simply pan one slightly to the right and the other to the left; a couple of notches either side is usually enough. When you record in stereo, use two mono channels to capture left and right respectively. Test your mix in mono mode as well as in stereo mode: use the mono button on the mixing desk to sum the channels together into one mono channel. This will put all the sounds into the centre. Listen for phasing or any sounds that disappear, so you can correct them. Keep a correlation meter, goniometer, spectrum analyzer and level meter on the master bus, so checking tools are available when needed. Basic Mixing. This is going to be hard to explain, but an example will help you get started mixing. Say you have recorded a pop, rock, house or ballad song, and now you have finished recording it (composition-wise and recording-wise, in audio or MIDI); you will need to mix it to make it sound better and more together. First, separation is needed: cleaning and clearing the single tracks. Second, quality and togetherness of the mix is what you are aiming for: mixing it up (groups towards the master bus, summing up). What you are not aiming for is loudness or level; how loud your mix sounds is of less importance than having your mix sound well together. Togetherness is what you are aiming for. So watching the VU meter hit maximal levels is not important while mixing; pushing all faders upwards all the time will get you nowhere. Forget how loud your mix is sounding: that is called mastering, which is a whole different subject and comes after you have finished mixing. Mixing is what you are doing here, and that is why it is called mixing, for it means cleaning, cutting and separation as well as togetherness. Mixing steps. We have three stages to complete when mixing from beginning to end. First the Starter Mix, where we set up a mix and start working inside dimensions 1 and 2. Then the Static Mix, where we apply dimensions 1 and 2 and introduce dimension 3 as the final three-dimensional mixing stage plan. Finished, the Starter and Static mix together give a basic reference static mix for later use, and need to be worked on until the static mix stands as a house stands on its foundation. Then finally the Dynamic Mix, where we introduce automated or timeline events. Make progress in mixing: plan on finishing your projects within a predetermined period of time. This is the only way to see your development over time.
Don't fiddle around with DAW functions; be concrete, improve your mixing skills and decision-making capabilities, then learn to trust them. Give yourself a limited amount of time per mix. A static mix should be 80% done after some hours of work; the rest is fine-tuning and takes the largest amount of time. Building confidence in rhythmic hearing. Trust your ears when listening for rhythmic precision and keep it natural. A DAW and its graphic interface let you see everything you need, but learn to trust your ears, not the display. When rhythmic timing matters, your ears will decide whether something is early, late, or spot on. Trust your ears. When you are not happy with the results, make a copy of your project, remove all insert and send effects and set all panning to center. Start right from the beginning and redefine your stage plan with a clear mixing strategy: reset levels, pans and EQ to zero, remove all effects and plugins, and start over. The key to obtaining a good mix lies in intelligently distributing all events across the three spatial dimensions: width, height and depth. The Starter Mix. For the starter mix we stay inside dimensions 1 and 2. We will explain the dimensions later on, but for a starter mix we only use fader, level, balance, pan, EQ, compression, and sometimes a few more tools like a gate or limiter. Our main goal is togetherness, but, contradictory as it sounds, we will explain why we need to separate first. A starter mix will only start off well when we first separate the bad from the good; rushing towards togetherness never does any good, so that comes second in line. To understand what we must do (our goal for the starter mix), we now need to explain the stage and the three dimensions. Panning Laws. Crucial to understanding the first dimension of mixing are the panning laws. Frequency ranges or instruments/events with a low range are placed more in the center; high ranges are placed more outwards to the left or right.
This means that the bassdrum, snare, bass and main vocals (the fundamentals) are always in the dead center, especially with their low frequency content. All other instruments or events (the unfundamentals) are placed more outwards; even if they contain lows, when they are not part of the bassdrum, snare, bass or main vocals they are placed outwards to the left or right. Lows more centered, highs more outwards. Also keep in mind that send effects placed more in the center will draw outward instruments towards the center, so the placement of a delay or reverb must be considered against the instrument (fundamental or unfundamental) it is used for. Regarding the masking effect, the time and effort of using separate left/right effects is only justified when the reverb part becomes too large to convey all the spatial information as a result of masking. The more complex a mix, the more time and effort is required for placing all events accurately within the three dimensions. Start off with panning in the first dimension. Before mixing starts, make a sketch of your panning strategy (the stage plan). Anything that is not bass, bassdrum, snare or lead vocals should not be in the center. Instruments present in the same or overlapping frequency sectors should be placed at opposite ends of the panorama, complementing each other. Careful panning, and carefully automated panning, often creates greater clarity in the mix than the use of EQ, and is much better than unnecessary EQing. If the mix sounds like mush, your first step is panning; only then resort to EQ. Be courageous, try extreme panorama settings, and keep the center free for the fundamental instruments. Never control panning through groups, only on the individual channel. Never automate hard panning or expanding; use only small panning and expanding moves for clearing up a mix temporarily. The Stage. With an orchestra or a live band playing (we are going a little ancient here), there is always a stage to play on. Back in the old days people could only listen to music played by real performing players or artists; there was no electricity, nor amplified sound coming from speakers. Furthermore, a human always hears natural sounds in life. Listening to music simply appeals most when the instruments are staged and naturally arranged. We humans have listened to music in this fashion for ages, and by now the pattern is ingrained in us. Human ears like hearing things naturally and dislike unnatural hearing. When music plays we hear volume, panorama, frequency, distance and depth; therefore we talk about the musical stage. Mixing is the art of making a stage. This is called orchestral placement, and it assigns every player a defined space on the stage where they are expected to play. For any listener it is more convenient to listen as naturally as possible, so a stage is more appealing for the human brain to recognize and understand.
A live concert of an orchestra, as in the picture below, may reveal the stage better.

No matter what stage is set, what you are trying to accomplish is stage depth. The next chart displays a setup plan for recording and mixing a whole orchestra. We call this orchestral placement.

In this chart we present a whole orchestra of instruments. The x-axis shows panorama, pan or balance (left, centre and right). The y-axis shows depth (stage depth). As listeners we like to hear where instruments are: some are upfront, some are more in the back of the stage. A mix would be quite boring and unappealing to the human ear if all sounds seemed to come from one direction only (mono). As humans we can perceive volume (level), direction (panorama, pan or balance), frequency spectrum and depth; together these make up the three dimensions of mixing, taking into account that we are using two (or more) speakers. It is quite common to think in stage depth when mixing. Even when your material is modern funky house music, thinking in stage depth can still help you mix a good, understandable mix and give you some idea of where to go and what to accomplish.

Stage Planning. It is better to have some kind of system and planning before starting a mix: knowing where to place instruments or single tracks inside the three dimensions. All parts of the dimensions (we explain the dimensions later on) are easily overcrowded, so we must use a system that gives every instrument a place inside the dimensions, just to un-crowd them. Making a rough sketch can simplify and visualize the mix, so you have some pre-definition before you actually start mixing. You will know what you are doing and what you are after (your goal in mixing). We begin with a basic approach, placing the most crucial or fundamental instruments first.

The bassdrum is the most fundamental instrument: first because it keeps the rhythm, and second because its fundamental frequency range is mainly in the lower or bottom end (at a dynamically high level). All main fundamental instruments are placed dead centre. The snare is important for the rhythm, but does not play as many lower frequencies as the bassdrum. The bass is fundamental because almost all of its notes play in the fundamental lower frequency range. Vocals must be understood and upfront, and are therefore fundamental to the whole mix. As you can see, all the important fundamental instruments are planned in the centre of dimension 1 (panorama). All fundamental instruments playing lower frequencies must be centered, because two speakers, left and right, playing at the same time give more loudness and can therefore play and represent lower frequencies best (a center placement comes out evenly on the left and right speakers). The centre position is now a bit crowded with the fundamentals: bassdrum, snare, bass and main vocals. To give them some more space from each other (separation), dimension 1 (panning), dimension 2 (frequency spectrum or frequency range) and dimension 3 (depth) are used to separate them and give some idea of what is in front of what. Most likely you would like the main vocals clear and upfront. Think of it as a stage setup. The bass (or bass player) would stand behind the vocals; on a real stage the bass player might move around a bit, but for modern mixing the bass stays dead centered (because of transmission problems in the lower frequency range or bottom end it is only placed centre, and we are still busy with the starter or static mix, so no automation can be used). As the drums would be furthest back on the stage, we place them in the back, but still dead centre.
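The placements described so far can be written down as a small stage plan before a single fader is touched. A hypothetical sketch in Python, where pan runs from -1.0 (hard left) to +1.0 (hard right) and depth from 0.0 (upfront) to 1.0 (back of the stage); the exact numbers are illustrative, not prescribed values:

```python
# Hypothetical stage plan: pan in [-1.0, 1.0], depth in [0.0, 1.0].
STAGE_PLAN = {
    # fundamentals: all dead centre, ordered front to back
    "main_vocals": {"pan": 0.0, "depth": 0.0},
    "snare":       {"pan": 0.0, "depth": 0.4},
    "bass":        {"pan": 0.0, "depth": 0.5},
    "bass_drum":   {"pan": 0.0, "depth": 0.7},
}

# every fundamental stays dead centre
assert all(v["pan"] == 0.0 for v in STAGE_PLAN.values())
```

Unfundamental instruments would later be added with non-zero pan values, leaving the centre column untouched.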
Placing these fundamental instruments in the centre gives them definition and clearness, without interfering instruments overlapping. Especially the bassdrum and bass must be centered to make the most out of your speakers. As the spectrum fills up in the centre with bassdrum, snare, bass and vocals (the fundamentals), leave this area alone (off limits) for any other instruments (the unfundamentals). Other instruments can be placed in dimension 1 (panorama) and panned or balanced more left or right. This is common practice in many mixes, but a beginner will hesitate to do it (panning). Think of how guitars and keyboards on stage are always placed left and right, simply because the stage would be crowded in the centre if all players took the same position. Imagining where an instrument or player should be placed also takes some creativity and experience, adding to what a human perceives as natural while keeping it all understandable for the listener (finding the clear spots). Keep in mind that lower frequencies play better when reproduced by both speakers (centered), and that higher frequencies can therefore be panned more left or right (outwards). Fundamental instruments with bottom end or lower frequency ranges must be placed more towards the center, while higher frequency range instruments must be panned more outwards. Next we will place the other drum sounds.

As a decision we place the hihat next to the snare, panning the hihat a bit to the right. Planning the stage or dimensions is a creative aspect: the hihat is placed to the right of the snare, but it could also be placed left. This depends on the natural position of the hihat; for setting the stage we can look at real-life drum placement and take that into account while planning, and in real life the hihat mostly sits to the right. Now the right speaker plays more highs than the left, because we placed the hihat to the right. To counteract this and give the left speaker some more highs, we can place an existing shaker to the left. This counteracting gives a nicely balanced feel between left and right, because we generally like the whole mix to play balanced throughout. The toms are only played sparsely in time (just once in a while), so they matter less in planning, but we still place them to show where they are: hi-tom far out on one side, low-tom far out on the other, and the mid-toms in between. The overheads are placed behind, and with some stereo expanding or widening this gives some room and sounds more natural. The main vocals are upfront. The rear can be used for background vocals (choirs) and strings, bongos, congas, etc. Next we place some other instruments, looking for less crowded places to put them in. Separating more and more.

See that Guitar 1 and Guitar 2 are placed right and left (these could also be a guitar and a keyboard), so they compensate for each other and keep a nice balance. The synths and strings also compensate and stay in balance, though with some more distance (we use the strings as a counterweight here). Strings can also be placed at the back of the stage with a stereo expander to widen the sound and act as a single sound filler. Remember that when you place an instrument, it will likely need to be counteracted by another instrument on the opposite side. Also keep in mind that instruments playing in the same frequency range can be used to counteract and balance the stereo field. So we can say the hihat and shaker complement each other (togetherness), as do Guitar 1 and Guitar 2, and the synth with the strings. This way we keep a balance across left, centre and right. Don't be afraid to place unfundamental instruments more left or more right, keeping them out of the already crowded center. Unbalanced mixes sound uneven; when the whole outcome of the mix is centered we hear the setup (stage plan) better and more naturally. When the left speaker plays louder than the right, it makes for unpleasant (unbalanced) listening. The total balance of your stage planning should be centered. Adjusting the master balance for this purpose is not recommended: keep the master balance centered, the master fader at 0 dB, and no effects on the master bus; we always try to correct things inside the mix, not on the master bus fader. Whenever you have an unbalanced panorama, go back to each instrument or single track and re-check your stage planning, as stage panning or balancing in the first dimension is one of the first tools, setting up everything else. With the help of dimension 2 (boosting trebles for close sounds, or cutting higher frequencies for sounds further away) and dimension 3 (reverberation, room, ambience) we can create a sense of distance and depth. A final mix or mixing plan should account for all of this, depending on the musical style and what you want to accomplish as a final product. And do not hesitate to use the panorama; beginners are often reluctant to do so.
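Whether the overall balance really is centered can be checked numerically by comparing the RMS level of the two channels. A minimal sketch, assuming plain lists of float samples (the function names are illustrative):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def balance_db(left, right):
    """Positive result: left louder; negative: right louder (in dB)."""
    return 20.0 * math.log10(rms(left) / rms(right))

# identical channels give 0 dB, i.e. a centred overall balance
print(balance_db([0.5, -0.5, 0.25], [0.5, -0.5, 0.25]))  # -> 0.0
```

A reading consistently away from 0 dB means going back to the stage plan, not reaching for the master balance.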

Although this looks a bit crowded when all instruments play at the same time, it is likely you will not have all instruments inside the mix, or playing together all the time, anyway (composition, muting). It would be quite boring if all instruments were audible throughout the whole mix. We do fill in our stage plan with all our instruments; it gives an indication of a general setup and a good starting point. Planning where instruments play and giving them a place defines your mix: a foundation to build your mix on. This planning is called stage depth because almost any mix has some relation to what the human ear likes to visualize in our brains. Natural placement is most likely the way to go and is most common, but you can be creative and come up with any kind of planning or setup. Remember that instruments which need a bottom end tend to stay more to the centre (especially the fundamentals). All other instruments that do not need a lower bottom end (the unfundamentals) can be placed more to the left or right (apart from the dead centered and upfront main vocals). Decide what your fundamental instruments are, then set up panorama and depth (distance) accordingly. 3D - Three Dimensional Mixing. Strangely, creating togetherness means separating more, rather than overlapping: you will have to separate first. What most beginners do not know about is the masking effect, where two instruments playing in the same range mask each other. Try it: take two guitars in mono mode, then drop one guitar's level by 15 dB or more. You cannot hear that guitar anymore, can you? Now pan that guitar to the left; you can hear it again, even though it is still 15 dB lower than the other guitar. When every instrument is simply left centered (no panorama), the centre position gets quite crowded and boring (and enhances the masking effect). Masking is so common in mixing that we are in a constant struggle to avoid it.
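The 15 dB figure in the guitar example above translates into a linear amplitude factor via the standard decibel formula, gain = 10^(dB/20). A quick sketch:

```python
def db_to_gain(db):
    """Convert a decibel change to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

# A guitar dropped by 15 dB plays at roughly 18% of its original
# amplitude: easily buried (masked) by a same-range guitar at the
# same panorama position, yet clearly audible once panned away.
print(round(db_to_gain(-15.0), 3))  # -> 0.178
```

So masking is not about the signal being gone; nearly a fifth of the amplitude is still there, it just cannot be picked out while it shares the same spot on the stage.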
By avoiding masking we gain more dynamics, or to say it the other way around: each instrument has more room to play and be heard at a lower volume level, which in turn leaves more room for the others to be heard. Therefore every instrument gets its own place inside the three dimensions. Below is an example of the three dimensions.

The three dimensions. 1. Width (left + center + right): panorama, panning, widening and expanding. 2. Height: frequency, level, EQ, compression (gate, mute, etc.). 3. Depth (front-to-back space): reverb and delay, EQing reverb and delay. Dimension 1 - Panorama. Panorama is mostly achieved by setting pan or balance for each instrument on each independent single track. Setting the panning to the left plays the sound from the left speaker; setting it to the right plays the sound from the right speaker; setting it to center plays the sound from both speakers. Think of dimension 1 as left, center and right: three spectral places in dimension 1, the panorama. When more precision matters to you, you can also use five positions for naming panorama when mixing or planning stage depth: 9:00 (nine o'clock), 10:30 (ten thirty), 12:00 (twelve o'clock), 1:30 (one thirty) and 3:00 (three o'clock). Panorama is a most underestimated effect in mixing (because of the masking effect), simply because turning a pan or balance knob is so easy to set up. Panorama is in fact a most important design tool and the first step in defining a mix (apart from the fader level). Use panning first, before setting the fader level; apply the panning law, and note that the relative volume of a signal changes when it is panned. Even when you are well on your way with a mix, turning all effects off (bypass) and listening to the panorama is an often-used check that the mix is placed correctly.
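The panning law mentioned above (the relative volume of a signal changing as it is panned) is commonly implemented as a constant-power curve. A sketch under that assumption; the function name and the mapping of pan values to an angle are illustrative:

```python
import math

def equal_power_pan(pan):
    """Constant-power pan law: pan in [-1, 1] -> (left_gain, right_gain).

    Total power left**2 + right**2 stays 1.0 at every position, so a
    signal keeps roughly the same perceived loudness as it moves across
    the panorama; at centre each channel sits at -3 dB (about 0.707).
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

l, r = equal_power_pan(0.0)               # dead centre
print(round(l, 3), round(r, 3))           # -> 0.707 0.707
```

Hard left (pan = -1.0) yields gains of 1.0 and 0.0, hard right the reverse; every position in between trades power smoothly between the speakers.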

There is a mixing solution for deciding which instruments stay centered and which go outside of center. Instruments that are crucial or fundamental to your mix, like bassdrum, snare, bass and vocals, all stay in the centre (fundamentals). Any other instruments (unfundamentals) will be more or less panned left or right. The most common place for bassdrum and bass is the center, because two speakers playing at the same time at centre position reproduce lower frequency signals better; panning or balancing lower fundamental instruments left or right is therefore not recommended at all. Even effects like delay or stereo delay can move instruments more left or right over time, so watch out when using these kinds of effects on fundamental instruments. And since automation is not a part of the static mix, we do not use it. The main pathway is dead centre, so even when using a stereo delay, the main information of fundamental instruments should stay dead centered. The snare and vocals are just as important, because the snare combines with the bassdrum rhythmically and the vocals must always be heard clearly (so we also place them dead centre and upfront). With the bassdrum, snare, bass and vocals in the center (fundamentals), there is not much centre panorama and spectral room (dimensions 1 and 2) left over for other instruments to play in the center. For more widening of the stereo sound (outside left and outside right), a stereo expander or widening effect (delay, etc.) can stretch the stereo field beyond 180 degrees and widen the panorama even more, giving more space inside dimension 1 and more room to spread the unfundamentals around. Be courageous!

Do take into account that correlation problems (signals cancelling each other out in mono mode) increase the more you widen or pan, so check for mono compatibility. Use a correlation meter or goniometer to check. You may have to reduce the stereo field to prevent a mono mix from cancelling out instruments. Bassdrum and bass can also carry signals that fill the spectrum left or right and need to be reduced; cutting these keeps them centered over time and keeps them from swaying around. As a general rule, lower frequency range instruments or tracks are placed at center, while higher frequency range instruments or tracks are panned more outwards. There are basically two ways of perceiving the dimensions. First, panning from left to right in front of you, like a stage. Second, the ambient effect: moving sounds right around your body, rather than just from left to right in front of you, meaning you are in the center of the sound (ambient or surround sound). This is, apart from the stage planning, the listener's position. We like the listener's position to be straight in the middle of the two speakers, hearing an equally divided sound from both speakers overall (RMS, left + center + right, LCR spectrums). Dimension 2 - Frequency Spectrum. The frequency spectrum or frequency distribution of a single instrument or a whole mix is the second dimension. It is understood that a bass is a low frequency instrument and will sound mostly in the lower frequency range of 30 Hz to 120 Hz (bottom end). The frequency spectrum of a mix is especially crowded in the lower 'misery' range of 120 Hz to 350 Hz (500 Hz), or second bottom end, where almost all instruments play in some way. From 1 kHz to 4 kHz we find most nasal sounds and tend to find harmonics starting to build up. The 4 kHz to 8 kHz range can contain some crispiness and can sound clearer when boosted, but also unnatural. A hihat plays mostly in the higher frequency range of 8 kHz to 16 kHz (trebles).
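Returning briefly to the mono-compatibility check mentioned above: the number a correlation meter shows is essentially a normalized cross-correlation of the two channels. A minimal sketch, assuming plain lists of float samples (the function name is illustrative):

```python
def correlation(left, right):
    """Phase-correlation estimate between two channels, in [-1, 1].

    +1: channels identical (fully mono compatible);
     0: unrelated (very wide stereo);
    -1: opposite polarity (the mono sum cancels out).
    """
    num = sum(l * r for l, r in zip(left, right))
    den = (sum(l * l for l in left) * sum(r * r for r in right)) ** 0.5
    return num / den if den else 0.0

left = [0.5, -0.25, 0.75]
print(correlation(left, left))                  # -> 1.0
print(correlation(left, [-s for s in left]))    # -> -1.0
```

Values hovering near or below zero are the cue to reduce widening before the mono sum starts swallowing instruments.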
Giving each instrument the place in the second dimension where it belongs is important for filling up the frequency spectrum. We tend to talk in frequency ranges, so words like lows, mids or highs are common in the mixing department. Likewise, words like bottom end, lows, misery area, mids and trebles are only indications of where to find the main frequency range. The main tools for working with the frequency spectrum and making the sound of an instrument fit inside a mix are EQ, compression and level. Tools like gates and limiters can also prevent unwanted events from passing. These tools serve two purposes. First, to affect quality: boosting or cutting frequencies that lie inside the frequency range of the instrument. Second, to reduce unwanted frequencies, mostly lying outside the instrumental frequency range: cutting what is not needed to play. Most instruments have two frequency ranges that are important, like the bassdrum with its bottom and its skin; the bassdrum must convey its rhythmic qualities, for instance. When a bass instrument plays a note, it has its own main frequency, its harmonics, and instrument sounds around it, like body and string attack sounds. This is the frequency range the instrument is playing in: its main sound. For bass this means a lot: we expect that the range 0 Hz to 30 Hz can be cut, while leaving 30 Hz to 120 Hz (180 Hz) intact (the first fundamental range of the bass). Higher frequencies can be cut or shelved out, because this will separate the bass and give it its place (space, headroom), leaving dynamic room for the rest of the instruments. Using EQ on the bass this way, to make the sound more beautiful (quality) and to leave room for other instruments by cutting out what is not needed (reduction), creates headroom and separates instruments. So basically we boost or cut when mixing for quality, and we mostly cut when we are reducing. As a result, we are likely to cut more and boost less. We tend to cut with a steep EQ filter and to boost with a wide EQ filter. The bass now has a clear pathway from 30 Hz to 120 Hz (180 Hz); maybe the bassdrum sits inside the bass range (60 - 100 Hz), but we try to keep all other instruments away from the bass range (0 - 120 Hz). The range 30 to 120 Hz (180 Hz) is mainly for bassdrum and bass (especially in the center of the spectrum). As this part of the spectrum fills up easily, it is better to cut what is not needed on all other instruments. You might think it unnecessary to cut the lows out of the hihat, but since the hihat plays in the higher frequency range, you can use a low cut with EQ here as well to remove all lower range frequencies.
So now you have separated the bass and the hihat from each other and have given each a place inside the whole spectrum (tunneling, separation). The same applies to all other instruments that make up the mix, even the effects used. Knowing where the range of each instrument lies, and having planned the panorama and frequency spectrum, helps you understand how separation works when mixing. This builds the starting basis of a mix, the foundation of the house (the reference or static mix).

The spectrum of a finished mix could look like the figure on the left (we have shown this before): you can see a good loud 30 Hz - 120 Hz section, the range where the bassdrum and bass play with each other, and the roll-down towards 22 kHz. Though the sub bass from 0 Hz to 30 Hz is still quite loud in this spectrum, it is a good bit lower than the 30 - 120 Hz range. In the figure on the left you can visualize the ranges of instruments and their frequencies; refer to it whenever you need to decide an instrument's frequency range and what to cut out (reduction) and what to leave intact (quality). We have discussed these subjects before. Dimensions 1 and 2 are the most important for creating a starter/static/reference mix, so do not overlook them. Return to these dimensions when your mix is not correctly placed, or sounds muddy or fuzzy (masking). The volume fader and the balance or pan knobs must be your best friends in mixing and your first starting and reference points; then turn to EQ or compression as a second measure (a gate or limiter is also allowed). Knowing where instruments must be placed according to plan works out best in dimensions 1 and 2. Dimension 2, the frequency spectrum, also works a bit inside dimension 3: we perceive a sound as close when its trebles (high frequencies) are loud and upfront, but as further back in depth when its trebles are quieter. Use an enhancer to brighten dull sounds and keep them upfront. When working with trebles above 8 kHz, always be sure to use quality/oversampling EQ and effects. Separating instruments in dimension 2, the frequency range. EQ can do a good job by cutting the bottom end out of all instruments, whether panned left or right (unfundamental) or panned dead centre (fundamental). That is why we discuss some effects like EQ now, even though there is a dedicated EQ section later on. The low bottom cut for the bassdrum, for instance, is a decision you make when combining bassdrum and bass together.
It is most likely that a 0 Hz to 30 Hz cut can be applied to all instruments and tracks, even the bassdrum and bass. You can start off using a low bottom cut from 0 Hz up to about 30 Hz; this is most common.

The cutoff figure shown above would be a good cut for the most fundamental instruments, like bassdrum and bass, but it really applies to all instruments or tracks, fundamental or unfundamental. Cutting from 0 Hz to about 30 Hz (50 Hz) removes some of the sub bass range as well as pops, low clicks and low rumble on every instrument. The range from 0 Hz to 30 Hz is really sub bass level, so you do not actually hear much of it at all; it is more a matter of feeling than hearing. If you want sub bass frequencies in your music, know that most speakers do not even reproduce them. When beginners believe a bassdrum gains more power by raising the whole 30 - 120 Hz range with EQ, please do not: you cannot hear the sub bass in the first place, and even with a big woofer it is not heard much (it just fills up your headroom without being heard correctly). Even in a club or at a live event the bassdrum has its effect around 60 - 90 Hz. In general, most household stereo systems do not play bottom end frequencies below 50 Hz, or even below 100 Hz, at all (depending on the quality of the system and speaker set). Thinking that sub bass (0 - 30 Hz) will enhance your mix, by boosting it or leaving it unaffected, is a beginner's mistake; leaving it intact on unfundamental instruments is also a mistake. Do not hesitate to cut the 0 Hz to 30 Hz frequency range out of all instruments, fundamental or unfundamental. We have now removed some really low frequencies from all instruments or tracks with a steep low-cut EQ filter, and have therefore removed some unwanted loudness, leaving precious headroom; this will unmuddy your mix (masking) and make it clearer (dynamically, rhythmically).
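The steep 30 Hz low cut described above can be sketched as a single biquad high-pass stage, using the widely published Audio EQ Cookbook coefficient formulas. The sample rate, cutoff and Q values below are illustrative defaults, and a real low cut would usually cascade several stages for a steeper slope:

```python
import math

def highpass_biquad(samples, cutoff_hz=30.0, fs=44100.0, q=0.707):
    """One RBJ-cookbook biquad high-pass stage (12 dB/octave low cut)."""
    w0 = 2.0 * math.pi * cutoff_hz / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw0 = math.cos(w0)
    b0 = (1.0 + cosw0) / 2.0
    b1 = -(1.0 + cosw0)
    b2 = (1.0 + cosw0) / 2.0
    a0 = 1.0 + alpha
    a1 = -2.0 * cosw0
    a2 = 1.0 - alpha
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:  # direct form I, normalized by a0
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# DC (0 Hz) lies far below the cutoff, so a constant input decays to ~0
settled = highpass_biquad([1.0] * 44100)[-1]
print(abs(settled) < 1e-3)  # -> True
```

Run on a real track, everything below the cutoff (rumble, pops, sub bass) is attenuated while the 30 Hz and up content passes largely untouched.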

The above figure shows a bottom cut and a highs cut, for a more distantly placed instrument. We need our bass to play without being overcrowded, and likewise the bassdrum, keeping 30 Hz to about 120 Hz (150 Hz) free for bassdrum and bass only. This means we create a clear, dead centre blast of lower frequencies (L + R = C power), free for bassdrum and bass alone. Even fundamental instruments like the snare and vocals cause headroom problems and play somewhat inside the bassdrum and bass range; cut them all.

A low bottom cut for all other fundamental instruments (snare and main vocals) is shown in the above chart. The snare and main vocals play somewhat in the lower end of the frequency spectrum, but do not actually play in the bottom end range (where bass and bass drum are already playing). So maybe we can do some more cutting, from 0 Hz up to 120 Hz (180 Hz). Second, the bottom end 0 Hz to 30 Hz range is filled mostly with rumble, pops and other unwanted events, so cutting with a steep EQ filter is quite understandable, to be sure these elements are removed and to keep the lower fundamentals, bass drum and bass, free in their own 30 - 120 Hz range. To avoid overcrowding we can cut out the bottom end of all other unfundamental instruments, leaving more space (headroom) for the fundamental instruments to shine and separate, avoiding muddiness and overcrowding (masking).

Don't be afraid to cut more out of a synth or guitar; anywhere from 100 Hz to even 250 Hz is quite understandable. This is where most beginners will hesitate. It is better to do a bottom end cut on all other instruments, just to unmuddy the lower frequencies and make a clear path for the bass drum and bass to play unaffected. For unfundamental (all other) instruments, you can cut more or fewer lower frequencies with a steep low-cut filter or some good cutting EQ. This keeps pops, low clicks and rumble out of our mix and keeps the lower frequency range free. If there is any information left in the sub bass range at all, it would be bass: bass is the only instrument that can reach this low. So we do not cut off the bass, we cut off all the other playing instruments. Normally, that is; sometimes a piano can reach this low, but it still does not contain a relevant sub bass range. Do not hesitate to use quite a lot of EQ cutoff shelving on all instruments; better to cut more than less. Apart from bass drum and bass, a roll-off at 120 - 150 Hz is a good starting point, set higher until you affect the main frequency range of the instrument. You can always adjust the cutoff frequency later for better results once you have placed it. Unfundamental instruments can be cut anywhere from 0 Hz to 180 Hz; they basically almost never play in the C1 note range (octave). To find the lowest note played by an instrument, listen to it solo throughout the whole mix, find the lowest note and its frequency. You can decide where the cutoff frequency lies, but remember the bass drum and bass need room to shine, and their main range runs from 30 Hz up to about 120 Hz (180 Hz). Any other instrument playing in this range will crowd it, which is better avoided (muddiness and masking). So leaving the lower frequencies to bass drum and bass will have you deciding to make cut-offs or roll-offs on all other interfering instruments.
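Finding the lowest note an instrument plays, as suggested above, maps directly to the equal-tempered pitch formula. A small helper, assuming standard A4 = 440 Hz tuning:

```python
def note_frequency(midi_note):
    """Frequency in Hz of a MIDI note number, equal temperament, A4 (note 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# C1 is MIDI note 24; its fundamental sits right at the edge of the
# 30 Hz low-cut range discussed in the text.
c1 = note_frequency(24)           # about 32.7 Hz
e1_bass_low = note_frequency(28)  # low E of a 4-string bass, about 41.2 Hz
```

This supports the rule of thumb above: only bass (and the lowest piano notes) reach the C1 octave, so low-cutting everything else below that region is safe.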

The cutoff figure shown above would be a good cut for unfundamental instruments like keyboards, synths, guitars, organ, vocals, etc. The depth of the low cut depends on the instrument's intent in the mix; distance depends on controlling the highs. By listening to each instrument you can decide where the cutoff frequencies lie exactly. This can only be done if you understand what the frequency range of the playing instrument is, and decide what is needed and what does not need to be heard. Most drums (all drums in the drum set) have two main frequency ranges, as do most instruments. Remember our stage planning: we now have to decide how our separation plans work out in each different instrument or track. Use more cutoffs on unfundamental instruments. Subs (0 Hz to 30 Hz) can mostly be removed. The lower frequency range (30 Hz to 120 Hz, 180 Hz) is mainly for bass drum and bass. The range between 180 Hz and 500 Hz is overcrowded anyway, as most instruments play here; you can make a difference by paying attention and spending time to get this region sounding correct. The lower frequency range, from 30 Hz to 500 Hz and upwards to 1000 Hz, generates most of the loudness of your whole mix and will show up on the VU-meter. Especially the lower frequencies of the bass drum and bass are fundamental for rhythmic content, power and clearness, and generate the most loudness, so keep them separated by giving them a free frequency range of 0 Hz to 120 Hz. Remember: the lower the frequency, the more power; you can save headroom (power) by cutting out all unwanted frequency ranges. Quality and Reduction. Basically, for a good starter mix we try to achieve quality as well as reduction of unwanted events. Quality involves boosting with EQ (wide) and cutting with EQ (small), likely inside the main range of frequencies the instrument produces while playing its range of notes.
Quality can be boosted, but counteracting cuts can avoid the need for boosting (which is better). Quality relies on how good an instrument sounds. Reduction mostly means cutting some lower frequencies (0 Hz to 250 Hz depending on the instrument) and cutting high trebles for distance. Where the cutoff frequency is placed depends on the instrument and the mix decision (stage plan). Apart from this, it can also mean a cutoff in the higher frequencies, for instance on bass or bass drum, just to separate. By using reduction methods we try to separate instruments and give each of them headroom to play inside the frequency spectrum. Compression, like EQ, has quality and reduction features: compression can raise transients (quality) or sustain (quality), but can reduce peaks as well (reduction). For reduction, a gate keeps out unwanted events, or we can mute manually. Maybe a limiter can scrape off some peaks (or a peak compressor; reduction). Anyway, these two purposes (quality and reduction) are the main tools for a starter mix. Separation. Making separation and headroom. In dimension 1, as we explained, panorama separates instruments and spreads them from left to center to right. In dimension 2, we adjust the frequency spectrum. Both combined are the basics of a good starter mix, and it can take up to four hours to accomplish a mix that is dry, according to your planned stage, and still has some headroom for further mixing purposes. If you are not fully trained and experienced, spend a great deal of time inside dimensions 1 and 2. Stepping too fast into dimension 3 might set you up for trouble you cannot fix otherwise. Understanding what is going on inside each dimension, and where to place instruments according to natural human hearing (your stage plan), is the key to successful mixing. Swapping left and right, for instance, is of course OK, as long as you understand that placing a high frequency range instrument (hihat) on the right will affect the total balance of the mix; to compensate we add another high frequency instrument (shaker) on the left. This kind of thinking also goes for the mids and lows. As long as you counteract your actions, you are doing fine. Counteracting is one of the most common methods of mixing. However your planning of the dimensions unfolds, the final mix will have to be balanced (meaning the combined sound of your mix must be centered over two speakers). We as humans dislike it when the left speaker plays louder than the right, or the other way around. Artistic licence and creativity may defy the rules and still have a good outcome. Generally, fundamental instruments are centered and less fundamental ones are placed more left and more right. Dimension 3 - Depth. Spatial depth is a more perceptive kind of sound, giving space and room to each instrument, single track or mix.
The most common tools are reverb and delay. Reverberation is a common depth (dimension 3) tool. When a note or sound is first played, the transients (from the original sound event) are an important factor: the transients make our brain understand what sound is played and let us recognize the instrument. This we call the dry signal. After the dry signal, a room presents reverberation after some milliseconds; mostly the early reflections make our hearing understand distance and placement. The pre-delay before the first reverberations/early reflections makes our brain understand depth or distance. When pre-delay and reverberation are naturally understandable to our brain, we perceive depth. Because a reverb (and, to a lesser degree, a delay) will muddy up the mix (masking), careful attention must be applied here. With reverb or delay it is common to cut the lower bottom frequencies, because this clears up the mix and wipes away some muddiness (it separates the reverb from fundamentals like bass drum and bass). Also, when you apply the rules of dimensions 1 and 2 correctly, the panorama and spectrum of each instrument will create a place on the stage for each instrument. On top of that we can cut or raise the trebles of the reverb to place a sound close up front or more distant. Now that reverberation is making our brain believe there is some distance, dimension 3 is a fact. Separation is the key to successful mixing: balance unfundamental instruments more left or right and do not overpump the frequency spectrum as a whole. The lower frequency range of a mix is where all instruments play their main ranges, so filling it with reverb or delay will only add muddiness and unclear (fuzzy) sounds and enhance the masking effect.
Especially bass drum and bass are instruments you want to hear straightforward, so they must be separated at all times from the rest by controlling all lower frequencies that play in their range (use an ambience, drum booth or small room reverb). Depth is most interesting when applied to a clear and dry starter mix, making it sound more natural and less fabricated. Also, reverb and delay are not the only factors for depth. Instruments do not play all the time; it would be boring to hear them all throughout the whole mix. Most likely you have some kind of composition going on, and the timed events of instruments create depth as well. The level (volume or amplitude) of a played note creates depth by itself, as we perceive louder sounds as closer and softer sounds as further away. We also perceive sounds as close when the higher frequencies are more present; the further away in the background, the fewer high frequencies can be heard (dimension 2). These are good starting points to address when mixing (in dimensions 1 and 2) before adding any delay or reverb (in dimension 3). So when you need background vocals to be heard as if at some distance, roll off some higher frequencies in dimension 2 first, before adding delay or reverb to create depth or distance inside dimension 3. Even when adding delay or reverb, you can decide what distance or depth will be perceived by rolling off (or cutting) some high frequencies from the effect output or input. A good parameter for setting depth or distance is the pre-delay of any delay or reverb (or any effect). Reverb can only do a good job when it is of really good quality and set up correctly. For fundamental instruments like bass drum, bass and vocals we can mostly use an ambience room or drum booth reverb type; these have more early reflections and less reverb tail, and are therefore less fuzzy and more up front. On the vocals, use no treble cutoff, to keep them at the front of the stage. Bass drum and bass inherently have fewer trebles, so with an ambient small room or drum booth reverb they automatically fall in behind the vocals. For unfundamental instruments placed at the back of the stage we can use much more reverb, like a hall or large room, and cut their trebles more to set distance. To make our stage plan come true, we can prepare the dry signal and/or adjust the reverb accordingly. Delay can do a good job too, but with percussive instruments (drums, percussion) the rhythm can be influenced, so timing the delay to the beat or the notes can be important. Especially a stereo delay, with its movement, can avoid masking. So for drums and percussive elements we try to stay in tempo and set almost no pre-delay. For vocals, delay can give more depth and placement inside a mix without moving them backwards, keeping them up front. Reverb is a good tool for creating depth, but can be processor hungry on digital systems. A good reverb does not get muddy fast, stays inside the mix, and does not have to be loud to be perceived as depth. Depth is the last dimension, so work out your starter mix in dimension 1 (panorama) and dimension 2 (frequency range) first, before working on dimension 3 (depth). The static mix contains dimensions 1, 2 and 3. Use a brighter ambience, small room or drum booth reverb for up-front sounds and a duller, larger reverb for distant sounds. A short pre-delay or no pre-delay can help prevent the reverb from pushing the sound back into the mix. Give the reverb a wide spread for up-front sounds; use narrowly panned or even mono reverbs with longer reverb times for distant sounds. The three dimensions together make up any static reference mix. For stereo mixing the three dimensions are Panorama (1), Frequency Spectrum (2) and Depth (3). Panorama is mostly controlled by pan or balance, and sometimes a stereo expander or widener.
The frequency spectrum is controlled by amplitude, level, volume and EQ (plus compression, limiter, gate). Depth is perceptive and can be controlled by high frequencies (trebles), delay (pre-delay) and reverberation or reverb. There are quite a few other effects that generate some kind of reverberation or can be perceived as depth or distance by human hearing; we will not discuss them all. A sense of direction for each individual instrument can be found in all dimensions. The three dimensions also influence each other: by rolling off some highs in the frequency spectrum (dimension 2) of a single instrument, track or group, you affect depth (dimension 3). Coexistence and placing instruments inside the three dimensions can be a fiddly job and you might like to rush it; pre-planning is a better idea. Also, we cannot use a lot of reverbs on processor hungry systems, so we choose a few and use them mostly on groups. Of course mixing is creative, but bypassing the dimensions without some thought and planning, throwing in effects and mixing carelessly, will soon give muddy, unclear, fuzzy results (masking, correlation, etc). Maybe you have ended up in this situation before? Then it is time to get some understanding of the three dimensions, quality, reduction, overcrowding, making headroom, masking, separation and togetherness. Restart with a clean slate: set all levels to 0 dB and panning to center, remove all plugins, and restart from the dry mono mix.
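Timing a delay to the beat, as recommended earlier for drums and percussive elements, is simple arithmetic on the song tempo. A sketch (the tempo and note divisions are example values):

```python
def delay_time_ms(bpm, division=0.25):
    """Delay time in milliseconds for a note division at a given tempo.

    division is the note length as a fraction of a whole note:
    0.25 = quarter note, 0.125 = eighth note, 0.0625 = sixteenth note.
    """
    quarter_ms = 60000.0 / bpm            # one beat (quarter note) in ms
    return quarter_ms * (division / 0.25)

# At 120 BPM a quarter-note delay is 500 ms and an eighth-note delay 250 ms.
quarter = delay_time_ms(120)         # 500.0
eighth = delay_time_ms(120, 0.125)   # 250.0
```

Dotted and triplet feels follow from scaling the division (multiply by 1.5 for dotted, 2/3 for triplet), which keeps the delay repeats locked to the groove instead of smearing it.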

The chart above shows how the three dimensions can be adjusted using common mixing tools. To sum up,

dimension 1 is controlled by the panorama (pan or balance, and maybe some widening/expanding), dimension 2 is controlled by the frequency spectrum (EQ, compression, mutes, gates and limiters), and dimension 3 is controlled by dimensions 1 and 2 as well as reverberation/early reflection effects (reverb, delay, etc). Making use of 3D visualization or a 2D stage visualization can help improve your mixing skills. Some like to write down a plan (stage plan); some (the experienced) just remember and visualize it in their head. The easiest dimension is dimension 1: set pan and we hear left, center or right (but it is easily underestimated). Dimension 2 is more complicated, because we are working inside the frequency spectrum of each instrument to create a whole spectrum for the mix. Composition-wise, muting, level, amplitude, transients and balance are good tools to start with before reverting to EQ. Compression can be a hassle to master; mostly, when we can hear the compression, we know we have gone too far. Rather use a more even amount of compression; compressing only the peaks very hard produces pumping. Dimension 3 is all about quality reverberation and needs skill and very good ears, as well as an understanding of how human hearing reacts. The difficulty of mixing progresses with the dimensions, so we start with dimension 1 and progress towards dimension 3. When we need to adjust an event, we first resort to dimension 1 and progress towards dimensions 2 and 3, hunting for quality and reduction (boost wide, cut small). Changing an event or instrument in one dimension means a change in the other dimensions as well. So careful planning and preparation are a must; it is better to know what you're doing while mixing. Knowing what you want out of a mix beforehand can make mixing easy and keep you from struggling towards the end.
Understanding the three dimensions is crucial, so do not hesitate to apply them; this is a common way of mixing and generally very much accepted. To keep it all acceptable to our naturally hearing ears and brains, we mostly apply the natural rules and laws.

3D Mixing. Mixing as if the listener is listening to a stage is common practice; it seems more natural. The more natural a mix sounds, the better the human brain can receive the 3D spatial information. Unnatural placement can make a listener feel unpleasant, so only use it when you need it. Most likely bass drum, snare, bass and main vocals are more centered and fundamental, and all other instruments are placed further out of the centre field, more left or more right. Lower frequency unfundamental instruments stay more or less centered, while unfundamental instruments playing a higher frequency range are placed further outwards. The main vocals are up front and drums more in the back; sometimes a choir would stand behind the drummer, even further back. Just experiment with a mix and play with the dimensions; make some different plans for where you place the instruments. Experimenting with 3D Mixing. Do some mix setups and learn from the differences; learn from your mistakes, and when you make progress, take note of what you did correctly. A good start of a mix can take hours to reach a completed static reference mix, and your ears may not listen very well after mixing this long, so returning later with fresh ears can do wonders. Visualizing helps too, especially when working on the whole frequency spectrum or planning your staged mix, so any metering you do here with a spectrum analyzer visualizes what you hear. Also use a correlation meter to watch for the masking effect and to check mono compatibility, and use a goniometer to catch unwanted left- or right-side events that correlate. You can rely on visualization for much of a whole mix, but remember that listening without all of these tools is important too. After all, the heard mix is the end result you are trying to accomplish, and what your eyes see can interfere with what you hear.
Sit down, relax and only listen (do not look at any metering). For the listening experience to be true to a normal listener of your music, maybe close your eyes. Do listen on multiple speakers, home audio sets, in your car, on a walkman, almost anywhere possible, to get a good view of what your mix is doing. Stereo and Mono. Mono is a single speaker system. Stereo is left and right speakers only (still the most common way of playing music authentically). A mono speaker setup, as on TVs and small radios, is still quite common. As we explain mixing in stereo, mono compatibility can still be an issue. Below we have a common stereo speaker setup. Even with the availability of surround sound with multiple speakers, humans nowadays are quite familiar with the stereo sound. We have been listening in stereo for so long, it is kind of baked into our DNA. It is so common that adding more speakers (directions) might influence the way it is perceived.
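The mono-compatibility check mentioned above can be approximated numerically: a correlation near +1 means the left and right channels sum well to mono, while values near -1 mean cancellation when the channels are combined. A minimal sketch (real correlation meters work on short running windows; this computes a single overall figure):

```python
import math

def stereo_correlation(left, right):
    """Zero-lag correlation between left and right channels, in -1..+1.

    +1: identical channels (fully mono-compatible), 0: unrelated,
    -1: phase-inverted (cancels when summed to mono).
    """
    energy_l = math.sqrt(sum(x * x for x in left))
    energy_r = math.sqrt(sum(x * x for x in right))
    if energy_l == 0 or energy_r == 0:
        return 0.0
    return sum(l * r for l, r in zip(left, right)) / (energy_l * energy_r)

sr = 44100
sine = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
inverted = [-x for x in sine]
same = stereo_correlation(sine, sine)          # close to +1: safe in mono
opposed = stereo_correlation(sine, inverted)   # close to -1: cancels in mono
```

A mix hovering near zero or negative correlation is the warning sign: wide stereo tricks that rely on phase inversion will thin out or vanish on a mono speaker.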

The most direct sound is a single mono speaker, and the more speakers you add, the more you can control the dimensions (3D spatial information). Adding more speakers can widen dimensions or separate frequencies more, yet stereo remains closest to human hearing. Stereo offers a lesser degree of dimensions (compared to surround sound systems), but it still sounds close to what we hear or perceive as natural; our brain is not as confused by its dimensions as with surround sound. Multiple speaker setups are more difficult to perceive straightforwardly, especially since each room is filled differently and speaker placement varies. You can imagine a household surround system being placed differently every time, as each living room is set up differently. With only two speakers for stereo, most households know where to place them to get a good sound. Where a user can place multiple speakers affects the way your music is perceived in the dimensions. Of course they should all theoretically be set up the same way, according to the operation manual's instructions, but in real life every user or listener will have their own speaker placement.

As we explain stereo mixing here, surround sound applies almost the same mixing rules, although more speakers give more opportunities for 3D spatial placement, and therefore more room for instruments to play and be clearly heard. Above is a figure containing surround with more than two speakers. For this kind of mixing a different set of rules applies to the number of dimensions, and we will not explain it any further; we concentrate on conventional stereo mixing (and check mono compatibility). When mixing in stereo we try to accomplish a sound that compares to natural human hearing and try to accomplish our stage plan, so the mix transmits the 3D spatial information very well. For stereo mixing we might be more persuasive and project the 3D spatial information onto the ears of the listener; sometimes this means using a little more force than is natural, to get the listener to hear it as it would naturally be perceived. Preparing a Mix, Starter to Static mix. You can set all faders to 0 dB and all pan or balance to center position. Set all EQ to its defaults. Basically no effects are used; otherwise turn all effects off (dry, bypass), or even better, remove them. As a start of mixing it is best to clean up all single tracks by listening solo and removing everything that is not needed (unwanted). Do this by listening to every track in solo mode, listening through all parts until the end and removing anything that does not need to be heard. Functions you can use are audio track or sample based editing, or MIDI event editing. This is more a recording thing, composition-wise, but removing clicks, pops and any other unwanted material is crucial and can be done now. Listen to every track or instrument from start to end; they should all sound clear and unaffected before going any further in mixing.
This can be a tedious job, removing all unwanted material, but you would not like hearing it in the mix (and not being able to figure out where it is coming from). Any listener easily hears clicks, so take care of this problem first and foremost, maybe using a gate, or just delete all unwanted audio parts. Sometimes at vocal level breaths or 'sss' and 'tss' sounds are taken care of (removed) using a de-esser or simple audio cutting/muting. Remove background noise while an event is not playing (manual edit or gates). You cannot overlook anything here; check and re-check when you need to. All tracks and instruments must be clean and only play what you need to be played; the rest can be cut out. Time-consuming it is, but it is better to work on this beforehand, before you actually start mixing. Noise is difficult to remove once recorded.
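The gate mentioned above can be sketched as a simple threshold on signal level. Real gates add attack, hold and release envelopes to avoid clicks; this crude block-based version (threshold and window size are example values) only shows the idea:

```python
import math

def db_to_amplitude(db):
    """Convert a dB value to a linear amplitude (0 dB -> 1.0)."""
    return 10 ** (db / 20)

def simple_gate(samples, threshold_db=-48.0, window=64):
    """Mute any block of samples whose peak level falls below the threshold.

    A crude noise gate: real gates use attack/hold/release smoothing,
    omitted here for clarity.
    """
    threshold = db_to_amplitude(threshold_db)
    out = []
    for i in range(0, len(samples), window):
        block = samples[i:i + window]
        if max(abs(x) for x in block) < threshold:
            out.extend(0.0 for _ in block)   # below threshold: gate closed
        else:
            out.extend(block)                # above threshold: gate open
    return out

# Quiet hiss (around -60 dB) is muted; a loud tone passes through untouched.
hiss = [0.001 * math.sin(2 * math.pi * 5000 * n / 1000) for n in range(1000)]
tone = [0.8 * math.sin(2 * math.pi * 440 * n / 1000) for n in range(1000)]
gated = simple_gate(hiss + tone)
```

This also illustrates why a gate only helps between events: noise that plays underneath a loud note stays in, which is why clean recordings beat any repair tool.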

We would like to remove noise, but we cannot really do this effectively, so once it is recorded we try to cut, delete and mute. Maybe a steep EQ cut can help, or some noise reduction tools, but they will muddy or fuzz the sound and still not remove all the noise. So noise should be avoided in the first place: each recording of a track needs to be noise free or almost noise free. White or pink noise and humming sounds are to be avoided at all times. When you need EQ to remove background noise, use quality EQ or oversampling EQ, especially when working in the higher treble ranges, and cut with a small, steep filter. Clear up before going any further in mixing. Make sure the audio files and samples you are using are at a decent level, so the levels don't have to be boosted and the noise floor does not rise. Starting to Mix. Provided you have prepared a mix (see above), have labeled all tracks from left to right, and have cleaned them up, you are ready for mixing. Again, set all faders to 0 dB, all pan or balance to centre position and all EQ to its defaults. Set the faders and pots so they are around unity. Zero everything on your onboard and outboard equipment, mixing desk, etc. Basically no effects are used; otherwise turn all effects off (dry, bypass) or remove them. Even when you are not mixing your own material, when you have received a mix for mixing or re-mixing purposes, we can reset to defaults. We start from defaults, keeping it basic. This is a good saving point on digital systems: if you save your project now, you can always return to the default starter mix. Starting a Mix (Example). Only by example can we try to explain what we are after. Provided that you have recorded drums, the bass drum will be the loudest of them all (fundamentally the loudest). So a good start is to listen to the track you have recorded the bass drum on.
Listen to the bass drum track solo and adjust the fader until the VU-meter shows levels of about -6 dB to -10 dB. Since you are soloing the bass drum, the track VU-meter and master VU-meter should look the same. Somewhere in the range of -6 dB to -10 dB is a good start. You are now creating headroom for the other instruments to fit in (when added later on) while not going over 0 dB; setting the bass drum this way on the VU-meter gives back some headroom for other tracks to play. It is a good thing to hear the bass drum solo and adjust EQ, faders and balance, looking for quality and reduction. Do some lower frequency cutoff, from 0 Hz to 30 / 50 Hz or so. Roll off some highs; drums sit behind the main vocals and bass. Just remember to set the level of the bass drum back to -6 dB to -10 dB afterwards; it will have changed because of the EQ, reverb, delay or anything else you did to make the bass drum sound better. When the bass drum is a sampled instrument, you could work on its sound beforehand. You have to reposition the track fader level again each time you adjust the bass drum sound. Keep the balance straight in the middle; do not let the bass drum sway out of the center position. When using send effects or an effect group that shows up on sends or another track, keep doing the same thing: keep the bass drum level steady on the master VU-meter, advised between -6 dB and -10 dB, and in the center at all times. When you do not have a bass drum or drums recorded, seek the nearest loudest (fundamental) recorded track as your reference starting point (solo it); especially choose an instrument that plays in the center, has lots of lower frequencies and has a good part throughout the whole composition (rhythmically). Whenever you adjust this bass drum or loudest track while mixing, you must repeat the same rules and check the master VU-meter again: solo the bass drum and set it back to -6 dB to -10 dB.
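Bringing a track's peak into the advised -6 dB to -10 dB region is a dB-to-gain calculation. A sketch (the -8 dBFS target is just an example value inside that range):

```python
import math

def amplitude_to_db(amplitude):
    """Convert a linear amplitude (1.0 = 0 dBFS) to decibels."""
    return 20 * math.log10(amplitude)

def gain_to_reach(peak_amplitude, target_db):
    """Linear gain factor that moves the given peak to the target dBFS level."""
    target_amplitude = 10 ** (target_db / 20)
    return target_amplitude / peak_amplitude

# A bass drum peaking at full scale (0 dBFS) pulled down to -8 dBFS:
gain = gain_to_reach(1.0, -8.0)            # about 0.398, i.e. fader pulled down 8 dB
new_peak_db = amplitude_to_db(1.0 * gain)  # about -8.0
```

The same arithmetic explains why the fader has to be re-checked after every EQ or effect change: any processing that alters the peak amplitude shifts the dB reading, so the gain must be recomputed.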
This bass drum (or loudest) track is your starting reference track (most fundamental track) for headroom purposes and the main focus of your mix. It is far better to be happy with the way the bass drum sounds and really make it sound good beforehand; you will be glad to have a finished drum kit before starting on other instruments. Each time you adjust the bass drum (or your reference instrument) later on inside the mix, you have to adjust the whole mix again accordingly (repeat the operation with the master VU-meter). Because you are using the bass drum as a static reference, it is better not to change it once set. Set it at the start, be satisfied with the bass drum sound, then leave it alone, at least until you have set up all tracks; you may need some adjustments, but keeping your reference headroom (bass drum) start track steady is best. So, you have adjusted the bass drum and you're happy with the sound and the VU-meter levels? Let's go to the snare. Keep listening to the bass drum and turn on the snare; listen to bass drum and snare together. Now adjust the snare fader level until you are satisfied with the combined bass drum + snare sound and levels. Do not touch the bass drum fader; only adjust the snare until it sounds correct together (using fader, pan, balance, EQ, etc). Whenever you need EQ or compression, apply it while listening to the snare solo and to the combined bass drum + snare. It is wise to cut off the snare in its lower frequency range, below 120 Hz, so it does not interfere with the bass drum. Whenever you apply effects or change the snare (quality or reduction,
separation), you need to check the levels again and recreate the togetherness. So it is best not to apply any further effects at this time; leave that for later mixing. For the bass drum we should have used an ambience reverb or small room booth (on the drum set group); for the snare we can use a larger reverb, sending it back into the ambience reverb of the drum set group to give it the same properties (coherence, ambience). Only touch the snare fader at this time; do not touch anything on the bass drum track. When you're happy with the combination of bass drum and snare sounding together, in the center, the same rule applies: do not change these faders anymore while mixing further. If you have to change them later on, you must go back to the start and re-check all your work. So again, once set, it is better to leave it alone and go to the next instrument or next drum kit item. This might sound a bit tedious, but remember we are building the fundamentals of the mix here (starting a mix); if you lose attention here, you might lose the mix. We progress by finishing off the drum set/drum kit. At this point you could work on the hihat and mix it together with the bass drum and snare. Remember that the hihat can take quite a heavy low EQ cut (reduction) to make some headroom for other instruments. Finish off the rest of the drum set by adding each single drum track (un-mute), panned more to the right as it is more unfundamental (but rhythmically inclined). Take placement in the dimensions, quality and reduction into consideration. When finished, maybe assign all single drum tracks to a group track for later mixing purposes (we have the ambience reverb on the send/group anyway). At this point you can do a lot of stage planning on the drum set, keeping snare and bass drum in the center and panning the rest of the drum set further out.
We explain each instrument later on and give exact instructions for each one; first we finish off the drum kit with the available tools in dimensions 1, 2 and 3. Now turn on the bass track. On the bass track you can apply a low cut below 30 Hz and roll off some highs. According to your stage plan, place the bass in the center, behind the vocals; rolling off the highs will make it more distant, but bass does not have a lot of highs anyway. Maybe boost some 30 Hz to 120 Hz frequencies for quality. Solo the bass drum and bass, and adjust the bass until they sound good together (do not adjust the bass drum). Turn on the rest of the drum set and compare; keep adjusting the bass until it sounds correct. Keep introducing new tracks or instruments, each time looking for quality and reduction, separation and togetherness. Basically, working from left to right on your mixer is building the mix: you set the faders and effects, then move on to the next nearest track and repeat the same. This goes for all tracks on your mixer until you have finished them all and are at the right side of your mixer. When you start with drums and bass sounding well together, this is a good starting point for a mix, basically placing them dead center. Then work on the snare and main vocals, also dead centered. Then introduce the hihat and the rest of the drum kit, then the bass, then the rest of all unfundamental instruments, placing them more left or right, keeping them out of the already crowded center. Once you have worked on all tracks and are satisfied, try not to adjust too much afterwards. Listen to it for a while, and save your mixer settings (or save the song on a computer or digital system). Once you have the starter mix running, with drums, bass, guitar and keyboards sounding well together, this routine becomes more free. You can now adjust faders like guitar, keyboard, vocals, etc more freely, and add some more EQ, compression, delay or reverb; any effect will do.
What you will notice while working is that you have created some headroom: you still have a good level on the master VU-meter (output) and room to work before hitting 0 dB. This is a good start and keeps further mixing possible (freedom) without having to readjust every time to regain headroom. Stay within the boundaries of dimensions 1 and 2, applying fader, balance, EQ and compression (gate, limiter) but not adding effects. Then work out dimension 3. Digital Distortion. Remember to keep track of the master VU-meter; if it goes over 0 dB on a digital system you will get distortion in the signal as an additional unwanted effect. Depending on the bit depth your digital system runs internally, internal distortion is not easy to spot. When you go over 0 dB, do not adjust the master fader for loudness; adjust all other faders by the same amount of gain. So each track fader can be set 1 dB lower (or whatever amount you think is needed to bring the master VU-meter under 0 dB). This can be a hassle and you must be precise, but it is better to lower all faders by the same amount and keep the master fader at 0 dB at all times. Some digital mixers make this job easier by letting you grab all faders and correct them with the same amount of gain. You will be tempted to touch the master fader anyway because it is the easiest solution, but it will not work for your mixing purposes; keeping the signal internally healthy means adjusting single track faders. That is why you need to create some headroom from the start. Even on 32-bit float or higher (64-bit) digital systems, which address the 0 dB problem better and can handle signals above 0 dB, it is better to stay below 0 dB. On integer 32/24 and 16-bit digital systems, do not go over 0 dB at any time; this will surely add distortion and unwanted artifacts. Sometimes we add a little distortion as a feature, but when starting a mix towards a static mix we most likely do not need it; we keep distortion away for now. Limiters are good for just scraping the peaks, with the threshold set at -0.3 dB or at peak reduction levels of -1 dB to -2 dB, thus affecting only signals that would otherwise jump shortly over 0 dB. Though limiters are not a first solution and are to be avoided, they are sometimes needed. For mixing, use a brickwall limiter only on the master fader (for starters, and even try to avoid this). When your mix goes over 0 dB, be sure the metering you are watching is fast enough to intercept (spot) peaks that exceed 0 dB. Otherwise the limiter on the master track will tell you when this happens, by showing the reduced amount in dB or with its warning (red) lights. Sometimes with a brickwall limiter or a digital mixing console, two red lights (left and right signal) will tell you when you're passing over 0 dB. Try to lower your group tracks or individual tracks by the same amount to get back some headroom, keeping the master fader at that same 0 dB position. Sometimes an instrument or track is unbalanced; even a whole mix can sound unbalanced. This can cause the left or right signal to sit at uneven levels and sway around. Single Track Mixing. Adjusting individual instruments is commonly done with level, balance, EQ, compression, muting, gating and limiting. Within the three dimensions some planning can be done before or while you mix: stage planning. Most single or multitrack mixers have some EQ bands, and some even have compression settings per track. By Single Track Mixing we mean the fader, level, gain, balance and all other buttons and knobs on a single track, and likewise all effects we apply to single tracks or instruments.
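The advice above to lower every track fader by the same amount, rather than touching the master fader, is plain dB arithmetic: a level change in dB maps to a linear gain multiplier of 10^(dB/20). A minimal sketch (the track names and fader values are hypothetical):

```python
def db_to_gain(db):
    """Convert a level change in dB to a linear gain multiplier."""
    return 10 ** (db / 20.0)

# Hypothetical fader positions in dB for a few tracks.
faders = {"bassdrum": -3.0, "snare": -4.5, "bass": -2.0}

# Pull the whole mix down 1 dB instead of touching the master fader:
# every track moves by the same offset, so the balance is preserved.
trim = -1.0
faders = {name: level + trim for name, level in faders.items()}

# The often-quoted headroom gap: -6 dB is roughly half the linear amplitude.
half_amplitude = db_to_gain(-6.0)  # ~0.501
```

Because the same dB offset is added to every fader, all level relationships between the tracks stay intact while the master VU-meter drops back under 0 dB.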

On digital systems we can add effects as inserts. Refer to your mixer manual for how a track is built up technically; some insert effects can be placed before the track fader and panning (pre-fader). This processes the signal with the effect first, before track EQ, fader and panning are applied. Other insert effects can be added after the track fader (post-fader): level, panning, EQ and track compression are processed first, before the signal goes through the effect inserts. Deciding where to place an effect insert (pre-fader or post-fader) can depend on the equipment you are using or the decisions you make while mixing. In general we place EQ, compression, gating and limiting in front of the fader (pre-fader), simply because we like to adjust the sound before it travels further through the mixer. Reverb and delay we place post-fader, or on sends and groups, as a second-in-line feature. What happens on single tracks is the individual instruments, so whenever you need to change something that applies to a single instrument, do it on that single track only. First fiddle with level, balance, EQ, compression, gate, mute or limiter. Look first for reduction, keeping the planned balance panorama; use EQ cuts for separation and dynamic headroom. Control level or transients with a compressor. For composition and reduction/separation, use manual editing or the mute button, cuts and limits. Then enhance the quality of the instruments in dimensions 2 and 3. The group tracks explained below are for combining tracks as a group, and therefore control the 'layer' of combined instruments. Group Track Mixing. Routing single tracks to a group gives you more flexibility in handling the mix as a whole; for instance, you can route all drum tracks (bassdrum, snare, hihat, drumset, etc.) to a single group track.
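The pre-fader versus post-fader distinction comes down to the order of operations in the channel signal path. A toy sketch makes it concrete (the "insert effect" here is a hypothetical stand-in, a simple hard clip, not any real plugin):

```python
def fader(sample, gain):
    """Apply the channel fader as a linear gain."""
    return sample * gain

def insert_effect(sample):
    # Hypothetical insert effect: a hard clip standing in for any processor.
    return max(-0.8, min(0.8, sample))

def pre_fader(sample, gain):
    # Pre-fader: the effect sees the raw signal, the fader rides the result.
    return fader(insert_effect(sample), gain)

def post_fader(sample, gain):
    # Post-fader: the fader level decides how hard the effect is driven.
    return insert_effect(fader(sample, gain))

# The same input and fader setting give different results depending on order.
a = pre_fader(1.0, 0.5)   # clipped to 0.8 first, then halved
b = post_fader(1.0, 0.5)  # halved to 0.5 first, passes the clip untouched
```

This is why moving an insert from pre- to post-fader can audibly change a channel even though no knob was touched: only the order of the two operations changed.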
Now you can control each single track individually and at the same time control all of them with the group track (as a rule we place an ambient room or drum-booth reverb on a group or send anyway, for the complete drumset to convey). It is common to route all drum sounds to one group track. This group could also include the bass; that is a matter of mixing purpose or decision. The single bass instrument or track could also be routed to its own group (but mostly we like to use the ambient reverb on the drumset group or send anyway). If you have multiple groups available (as a digital mixing system can handle) you can create layers of groups. By combining the Drums group and the Bass group and routing them to a new group, you can control both drums and bass with this group. Combining into groups like this is called welding, and it forms a layer. By welding instruments together we tend to get some togetherness, so grouping towards the master mix is layering (summing). Building layers of instruments that combine as a group (welding) gives control over the different sound sets of a mix. A digital system with different mixer setups can show a mixer view with only the group tracks and the master left over. With the group track mixer you can more easily control the layering of your mix, and therefore adjust the welding process and your planning of the three dimensions for each layer. For digital summing (emulating analog summing), we can even add a tube amp or analog tape deck simulator, to get some of that analog summing feeling. So when mixing, we tend to use single tracks to adjust each instrument (separation), and group tracks to combine instruments (togetherness). When you need to affect a single instrument, use its single track; when you need to adjust a whole layer of instruments, use the group. So now we know where to adjust level and balance, mute or edit manually, place EQ, compression, gating and limiting, or place delay and reverberation effects, and we can decide to use them on groups or single tracks, depending on what we need to adjust.

Each group track combines single tracks, so we can call a group track a layer. With the Drums group, for instance, you have combined all drum sounds together (a layer) and can control them as one. When you have a guitar on the left and one on the right, their combined coexistence in a guitar group track adds another layer to your mix. If you have already combined the drums group with the bass group, you can now control the drums, bass and guitars with only two group tracks. When you have, for instance, an organ and a piano, group them when they coexist within the three dimensions of your planned mix. Deciding when to make a group of combined single tracks is a matter of taste, planning and a creative mind. It is likely that if tracks coexist and form togetherness as a layer of your mix, you can combine them into a group. The last step is to route all groups towards the master track (the output of your mixer).
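The routing described above, single tracks summing into groups and groups summing into the master, is just nested gain staging. A sketch of one instant of that signal flow (the track names, sample values and routing are all hypothetical):

```python
def db_to_gain(db):
    """Convert a fader position in dB to a linear gain multiplier."""
    return 10 ** (db / 20.0)

# Hypothetical per-track sample values (one instant) and fader settings in dB.
tracks = {"bassdrum": (0.5, -3.0), "snare": (0.4, -4.0), "bass": (0.6, -2.0)}

# Hypothetical routing: which single tracks feed which group, plus the
# group fader in dB.
groups = {"drums": (["bassdrum", "snare"], -1.0), "bass_grp": (["bass"], 0.0)}

def sum_group(members, group_db):
    """A group track sums its member tracks, then applies its own fader."""
    mixed = sum(tracks[m][0] * db_to_gain(tracks[m][1]) for m in members)
    return mixed * db_to_gain(group_db)

# The master bus sums all group outputs (master fader kept at 0 dB).
master = sum(sum_group(members, g_db) for members, g_db in groups.values())
```

Pulling the "drums" group fader down moves the bassdrum and snare together while leaving their internal balance untouched, which is exactly why layers make a mix easier to control.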

The figure above shows how the final grouping could look; you now have three ways to adjust the mix. At single track level you can control all individual instruments separately. The welding groups contain the groups of individual tracks and therefore control the first layer of your mix (some togetherness). The second layer and the master control the final mix for further welding and layering, summing to emulate an analog feeling (some more togetherness). Depending on the instruments at hand, pre-planning and labeling all tracks and groups can help you keep a whole picture of your mix design. Most DAWs have labels, and some even have a notepad per track, to keep track of things for the days when we no longer remember what we did to achieve a result. How you arrange is a matter of coexistence and a creative mind, but mostly follow the rules of our hearing and the laws of the dimensions, starter and static mix. In most cases a mix design starts from the left side of the mixer, adding the most fundamental instruments first, building up a stage and separating instruments as single tracks. We start with the fundamental centered instruments, then the lower unfundamental instruments, then at the right-hand side the higher unfundamental instruments. As you progress with adding groups, look at your dimensional planning as you combine; looking for instruments that coexist (counteract) in your planning can make decisions easier. This layering and welding is common, but artistic and creative matters will be discussed further later on; for now we are designing and planning the staged mix. Layering and Welding. Using compression on groups can weld instruments or tracks together, making a more coexisting sound. Even placing an EQ to correct the sound can have welding purposes. Each group that combines individual instruments or tracks as one is called a layer.
(Summing up into the later groups before entering the master bus, we can do some analog-style summing by placing a tube amp or analog tape effect to create that analog together-feeling. Summing up analog style affects all the settings we made before, so we tend not to use it while mixing. You can decide to use analog summing on a digital system or not; right now we do not recommend it at all, as it will affect the mix we have so painstakingly been putting together.) Design. Most of the togetherness of a mix can be found in a well set up design for dimensions and layering, ending up at the master bus of your mixing console. The togetherness of your mix is all combined instruments sounding together, through each single track and grouped towards the master bus fader (output). As for planning your mix and starting off: first adjust individual instruments and tracks, then weld them together with groups that coexist, towards the master track. When you have to control the mix, or have an idea to change it, you must know at what level you can best do this, resorting to single tracks first and remembering the dimensions. Placing a cutting EQ or compressor will affect the behavior of the layers or single instruments. Place effects only when and where they are needed. Deciding what you need and where to place it means understanding at what level each element is adjusted. This searching for separation as well as togetherness, as we work towards a nice clean starter mix and then a static mix, is the only way to create more headroom and leave space for design purposes and issues later on. Be sparing with adding (effects, reverberation); it is better to first remove what is not needed (quality and reduction), cleaning up the mix as well as individual instruments and sounds. Design a stage plan, deciding where all instruments have their space or location. First find a balanced mix in level, panorama, frequency spectrum and depth with the faders, balance, pan, EQ or compression (gate and limiter). Only then add some more depth in the last dimension, 3. This kind of mixing is quite common, but dimension 1 is the most overlooked in setting up, and dimension 2 is at least as important and can be difficult to hear or understand. Combining dimension 1 with dimension 2 and then dimension 3 is the best progression for clarity, and you will not have to fight and go back to correct as much later on. If you add a reverb before finishing off dimensions 1 and 2, you might end up with a muddy or fuzzy sound (masking, correlation), mostly from EQing and compensating for the reverb overblowing the other instruments or the mix. So: first the instruments, then the layers, then the mix, then the master. First dimension 1, then 2. Then 3! Effect Tracks or Send Effects. Common effects can be used on send tracks, and this makes the effect available to all tracks/instruments, also when placed on groups.
On a DAW we can use sends or groups depending on how we want to sum levels towards the master bus fader. The normal way for a mixer is to route send effects towards the master bus, but routing sends to groups can also be done. Most likely the default configuration for a send track is to end up at the master bus; sometimes a send track can be routed otherwise. So if you need to route to a special effect group, create some new groups and place insert effects on them. Now you're able to route anything to the effect groups.

Send effects that end up directly at the master bus are for adjusting the final mix as a whole (summing). But remember you also have group tracks, single tracks and sends to place effects on, so be a bit more sparing with effect sends and with effects on single tracks. It is customary to place send (effect) tracks at the far right of the mixer: drums start at the left, and the send effects sit last on the right, after the last vocals; then, last of all, the master track. Remember you can assign the outputs of the send effects to return to any track or group, so be creative. Some mixers in the digital domain do not allow you to return to previous tracks, for feedback reasons, and therefore only allow assigning to higher tracks or groups. By default, send effect tracks are routed to the master bus; it is up to you to assign them differently according to your needs. Also, if you're using a send effect, consider groups instead and place an insert effect inside the group; this can be clearer for the overview of your mix and can give better mixing results. The fewer send effect tracks, the better: the more controlled and adjustable your mix will be for later use. EQ or Equalization. EQ or equalization is referred to as a dynamic processing tool, not an effect. The equalizer comes in all forms and shapes and works in the vertical dimension 2. The frequency range mostly goes from 0 Hz to about 22 kHz. All EQ is produced by a filter or some kind of filtering. For adjusting how an instrument will sound, EQ is the best starting point (quality or reduction); equalizers are probably among the most important tools in the mastering engineer's toolbox. When we cut, we do it with a small, steep filter; when we boost, we do it with a wide filter. We tend to cut more than we boost. We tend to use fader level and panning before using any EQ; then EQ; then compression, limiting or gating. Don't hastily overlook fader level and balance or panorama as first-dimension tools. Most beginners understand what equalizing is; they know it from home stereo systems or have some experience already. Most understand that when they adjust the lower frequencies, the sound of a bass becomes more or less heavy, and when they adjust the higher frequency range of a hihat it sounds more or less bright (trebles). Mostly we talk about cutting or boosting, lowering or raising the EQ amount. The most common types are the parametric EQ and the graphic EQ. Remember that pushing EQ frequency levels upwards (raising, boosting) adds level, which can leave you with less headroom or push you over 0 dB on the master VU-meter. Cut more than you boost, that is a fact. Lowering levels with EQ is better than pumping or boosting levels upwards; it is better to take away than to add while EQing (for quality and reduction).
Giving each instrument a place in the frequency spectrum is what you're looking for (quality, reduction, dimensions). Almost all instruments play in the range of 120 Hz to 350 Hz (up to 500 Hz, the misery range) and are represented here; this range can get crowded and must be watched carefully.

So whenever you can, make a plan and make way for other instruments to have a place in the field (stage). When two instruments play in the same frequency range (masking), like two guitars playing, it is likely you will not want to cut frequencies from either of them, so balancing one left and one right can solve the overcrowding at first hand; this is the first solution, in dimension 1, panorama. Most engineers place them off-center anyway, keeping a clear path for the fundamental instruments. You must decide what sounds best and when to use EQ, but leaving space in the frequency spectrum across left, center and right, by cutting frequencies of instruments where you do not need them, is the more common and recommended EQ style. Instead of raising the bass because you think it is not being heard, check whether other instruments muddy up the lower frequency range of your mix, or just lower all of them instead (cutting everything from 0 to 120 Hz out of the unfundamentals). Boosting frequencies can mean you enter the main frequency zone of another instrument or track, and the sound of them playing together combines. This can muddy or fuzz up your mix, and a low-quality EQ will produce artifacts (use a quality or oversampling EQ). There is a twist, however: it does not mean that two sounds in the same frequency range can never sound good together; that is just how you listen to it, and that is called mixing. Yes, we have some mixing freedom. Remember that balancing can separate instruments and must be done first (dimension 1): with two guitars that sound just the same, balancing guitar 1 to the left and guitar 2 to the right might solve the problem. Most of the time the frequency range from 30 Hz to 22 kHz is filled with all instruments layered, sounding together as one mix. A second rule is that lower-frequency fundamental instruments stay more centered, while higher-frequency unfundamental instruments are panned more outwards, more left or more right.
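Spreading two similar guitars left and right relies on a pan law. A common choice is the constant-power law, which keeps the perceived loudness steady as a sound moves across the panorama; a sketch:

```python
import math

def constant_power_pan(position):
    """Constant-power pan law.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain) such that the summed power
    left**2 + right**2 is always 1.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

# At center both channels sit at about 0.707 (-3 dB each),
# so the total power stays constant while the image moves.
left, right = constant_power_pan(0.0)
```

Hard left gives (1.0, 0.0), hard right gives (0.0, 1.0), and every position in between preserves the overall level, which is why panning one guitar left and the other right separates them without changing how loud either one feels.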
Just remember: cutting is better, and spreading is better. Make room and plan the frequency range. Place instruments inside the frequency range, spreading them, balancing them. Use EQ only where needed. First, EQ on a single instrument track can help create a better instrument sound (quality, and composition-wise/rhythmical intent). Second, by cutting out frequencies you leave open space for other instruments (reduction) to play clearly. For lower-frequency-range instruments you can use a high cut to also control distance. All instruments can use some kind of low cut: this way we can be sure no rumble or noise enters the mix, and we leave headroom in the whole frequency spectrum. Remember you almost always need a steep EQ cut from 0 Hz to 30 Hz on all instruments except maybe the bass. This way, more or less all instruments need EQ on their own single track (quality and reduction), just to make these kinds of corrections so every instrument sounds clear and sits at its defined placement inside the three dimensions. When using sampling, you could process the EQ offline, or use offline EQ inside digital sequencers (digital audio tracks); be sure you can always revert to the original file (without EQ). Some digital systems have unlimited undo functions. Processing everything in real time instead, you can adjust the mix more easily without reloading or undoing (a timesaver). This means you can always adjust the EQ settings. Of course, the more you process online, the more computing power you need, but it keeps everything adjustable for later purposes. Latency can be a problem when processing power is low; you might hear clicks or unwanted audio signals in your mix when this happens. Use an oversampling EQ for high-frequency instruments and when working above the 8 kHz range; at the very least you should know your EQ does not produce artifacts in any range, especially the high ranges. First remove, then add. Removing/lowering can be done with a narrow (high-Q) band filter, adding/raising with a wide filter. Remember L+C+R and the panning laws. Know the sweet-spot frequencies of different instruments. First lower, then raise. Lower steeply, raise broadband. Almost any change in one band will affect the sound in other bands. Remember the level and panning concepts: clear and logical panorama mixing, balanced frequency distribution across Left + Center + Right, and frequency ranges in which each instrument can fulfill its role inside the mix. Many instruments have two main frequency spots; others operate within a single frequency band. A mix requires at least as many low-cut filters as there are tracks. A frequency component between 0 and 1 Hz is called DC offset and must be eliminated; use a DC removal tool for this purpose.
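The DC offset mentioned above can be removed with the textbook one-pole DC-blocking high-pass recursion (this is the standard general-purpose filter, not any particular plugin's implementation):

```python
def dc_block(samples, r=0.995):
    """One-pole DC blocker: y[n] = x[n] - x[n-1] + r * y[n-1].

    Passes audible frequencies essentially untouched while driving
    any constant (0 Hz) component of the signal to zero.
    """
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = x - prev_x + r * prev_y
        prev_x, prev_y = x, y
        out.append(y)
    return out

# A signal stuck at a constant +0.3 offset decays toward zero:
settled = dc_block([0.3] * 2000)[-1]
```

The coefficient r (just below 1.0) sets how fast the offset decays; closer to 1.0 means a lower corner frequency and a gentler effect on the deep bass.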
The misery area between 120 and 350 Hz is the second pillar of warmth in a song after 0-120 Hz, but it has the potential to be unpleasant when distributed unevenly (L+C+R, panning laws). Pay attention to this range, because almost all instruments are present here at a dynamic level. Graphic Equalizer. A common type of equalizer is the graphic equalizer, which consists of a bank of sliders for boosting and cutting different bands (frequency ranges) that progress upwards in frequency. Normally these bands are tight enough to give at most 3 dB or 6 dB of effect on neighboring bands, and they cover the range from 20 Hz to 20 kHz (the full frequency spectrum). A typical equalizer for sound reinforcement might have as many as 24 or 31 bands. A 31-band equalizer is also called a 1/3-octave equalizer because the center frequencies of its sliders are spaced one third of an octave apart. Any graphic EQ becomes more adjustable with more EQ bands.
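The one-third-octave spacing of a 31-band graphic EQ can be generated directly: successive band centers sit a factor of 2^(1/3) apart, conventionally anchored at the 1 kHz reference. A sketch:

```python
# 31 one-third-octave band centers around the 1 kHz reference:
# band k sits at 1000 * 2**(k/3) Hz, with k running from -17 to +13,
# spanning roughly 20 Hz up to 20 kHz.
centers = [1000 * 2 ** (k / 3) for k in range(-17, 14)]

lowest, highest = centers[0], centers[-1]  # ~19.7 Hz and ~20.2 kHz
```

(Commercial units label these sliders with rounded nominal values such as 25, 31.5, 40, 50 Hz and so on, but the underlying spacing is exactly this geometric series.)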

A graphic equalizer uses a predetermined Q factor, and each frequency band is equally spaced according to musical intervals, such as the octave (12-band graphic EQ) or one third of an octave (31-band graphic EQ). Each of these frequency bands can boost or cut. This type of EQ is often used for live applications such as concerts, because it is simple and fast to set up. For mixing, the graphic EQ is not precise, because the EQ bands cross over into each other's neighboring range and affect it, and it mostly uses a single type of filter. However, a graphic EQ with more than 20 bands can do a good job, because it is fast and easy; on the whole, the more EQ bands, the more precise the graphic EQ becomes. For the overall setting of a track, and for instruments that just need a bit of correction, the graphic EQ is best when you need to set up fast and can afford to be less accurate. Because the graphic EQ is well defined, it will give you a feel of understanding and commitment. Once you know what you can do with a graphic EQ, as you get more experienced, you might not need so much peaking or parametric EQ. Again, the more EQ bands the better, say 30 or more. Because it ranges from 0 Hz to 22 kHz, it can also give a view of the spectrum once you look at the whole EQ band picture. Working with the same brand or make of graphic EQ may give a steadier outcome each time, compared to a peaking EQ. For quality and reduction purposes the graphic EQ is a good all-rounder. For removal of frequency ranges, use a parametric filter with a high Q factor and a strong boost, sweep towards the problem area, and then cut; mostly we use parametric EQ for this more exact and precise job. Parametric EQ or Peaking EQ. A parametric equalizer or peaking EQ uses independent parameters for Q, frequency, and boost or cut. Any frequency or range of frequencies can be selected and then processed. This is the most powerful EQ because it allows full control over all three variables. The parametric (or shelving) EQ is predominantly used in recording and mixing. You can easily hear what is going on when raising or lowering a frequency band. You can hunt down where the nasty and the good parts are, finding out what to cut and what to boost. Very precise EQing can be done using a narrow, steep filter: like a scalpel you can cut or boost adjustable frequency ranges and be a sound doctor in EQing. Just remember, more cuts than boosts is the main key to getting doors open. Cut what is not needed; boost only when necessary. Watch out when using narrow frequency bands for EQ: depending on the quality and natural behavior of the EQ filters, there can be nasty side effects (such as a harsh sound, artifacts). Likewise, when boosting high frequencies (use an oversampling quality EQ) we can create a harsh sound and artifacts. Generally with most EQing we try to use medium or wide frequency bands for boosting; this means we use low Q factors more than high Q factors. For cutting we use steep low cuts and steep filters, just to remove what we need to. For quality and reduction purposes the parametric EQ is an outstanding tool, though depending on the features (brand, manufacturer) they need to be very flexible to set up; some are outstanding for bassdrum and bass, while others focus on vocals, strings, highs, etc.
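The interaction of the three parametric parameters, frequency, Q and gain, can be made concrete with the widely published "Audio EQ Cookbook" (RBJ) peaking-filter coefficients; by construction, the filter delivers exactly the requested gain at its center frequency. A sketch (the sample rate and band settings are arbitrary examples):

```python
import cmath
import math

def peaking_coeffs(f0, q, gain_db, fs=48000.0):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a), normalized so a[0] = 1."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(f, b, a, fs=48000.0):
    """Magnitude response of the biquad in dB at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A wide (low-Q) +3 dB boost at 250 Hz really measures +3 dB at 250 Hz.
b, a = peaking_coeffs(250.0, 0.7, 3.0)
```

Raising Q narrows the bell around 250 Hz without changing the peak gain, which is exactly the scalpel-versus-broad-brush trade-off described above.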

F - Frequency. Most equalizers are built on peaking filters using a bell curve, which allows the equalizer to operate smoothly across a range of frequencies. The center frequency occurs at the top of the bell curve and is the frequency most affected by equalization. It is often notated as fc and measured in Hz. With a cut-off filter, the signal is cut below or above this frequency. Q - The variable quality factor, which refers to the width of the bell curve, that is, the affected frequency range. The higher the Q, the narrower the bandwidth or frequency range, and the more scalpel-like the filter (removing, cutting, lowering). A high Q means only a few frequencies are affected, whereas a low Q affects many frequencies (boosting, raising, being gentle). Staying with a low Q safeguards the EQ quality, as most equalizers do not perform as well at higher Q values. Likewise, the higher the frequencies we need to EQ, the more we tend to use a quality or oversampling EQ. The quality of the equalizer matters, especially at high Q, so use the best and leave the rest. G - Gain (level, amplitude). This determines how much of the selected frequencies should be present. A boost means those frequencies will be louder after equalization, whereas a cut will soften them. The amount of boost or cut (gain) is measured in decibels, such as +3 dB or -6 dB. A boost of +10 dB generally amounts to the sound being perceived as twice as loud after equalization. Boosting above +6 dB can create some nasty sounds, so use a quality EQ. Generally for boosting we tend to use less and go wide, so anything up to +3 dB (+5 dB max) is great; when boosting more, nasty side effects tend to creep into the sound, so we use a wide filter and a quality EQ. Shelving EQ. Shelving filters boost or cut from a determined frequency until they reach a preset level, which is then applied to the rest of the frequency spectrum.
This kind of EQ filter is usually found in the treble and bass controls of home audio units and mixers. High-pass and low-pass filters cut frequencies below or above a selected frequency, called the cutoff frequency. A high-pass filter allows only frequencies above the cutoff frequency to pass through unaffected.

In this chart two shelving EQs are used, one to cut the lower frequencies and a second to raise the highs. With shelving, frequencies beyond the corner frequency are attenuated (or boosted) at a constant rate per octave. Low-pass filters cut off all frequencies above the cutoff frequency; all lower frequencies are allowed to pass through unaffected. High-pass filters cut off all frequencies below the cutoff frequency; all higher frequencies are allowed to pass through unaffected. Common attenuation rates are 6 dB, 12 dB and 18 dB per octave. These filters are used to reduce noise and hiss, eliminate pops, and remove rumble (reduction). It is common to use a high-pass filter (at about 60 to 80 Hz) when recording vocals to eliminate rumble. Best used as a reduction or separation tool, shelving EQ is used to separate instruments, to give each a place in the spectral dimension (2). EQ and dimension 2. The bassdrum and bass are most prominent in the lower frequency range, 30 Hz to 120 Hz (180 Hz). Keeping the lower frequencies and lowering or cutting the higher ones makes headroom for all other instruments to sound clearly. You are trying to give each instrument a place in the frequency spectrum (instrument ranges) and give each an open pathway (unmasking). The hihat works and sounds better when other instruments are not in the same frequency range: the bass or bassdrum will not interfere with the hihat's higher frequencies once they are cut off in the higher range. How much you cut or adjust is a creative factor, but keeping bass and bassdrum separated (dominating the lower frequency range, 30 Hz to 120 Hz) and keeping other instruments or tracks away from this range is common practice. This gives the fundamental instruments a clear path to play in the lower range of frequencies and stay at center, where speakers do their best job of reproducing low-frequency events, without other instruments or tracks playing in this range or center position.
Also, all instruments that share similar panorama settings, like the bassdrum, snare, bass and main vocals (at dead center), can be set apart in distance by using EQ to roll off the trebles. Thus, while all are played at center position, you can still adjust their perceived depth (dimension 3) to separate them a bit. You can of course make adjustments to make the bass sound better (quality, boosting), but remember that when other instruments play in the same range, the combined sound results in a muddy bass range, 30-120 Hz. You are aiming for each sound or instrument to be heard, and heard the way you want it; leaving open space (headroom) for all instruments is better than just layering all instruments on top of each other (a muddy, fuzzy mix). The placement of instruments is heard best when you run a clean mix without effects, so keeping effects away as long as you can and mixing dry is best for sorting out placements. For quality, often two frequency ranges are boosted; for reduction, mostly a steep low-cut filter on single tracks, groups, etc.; for distance, we tend to cut off more of the high trebles. EQ Example. Every instrument must be clearly heard; progress from the fundamental instruments towards the unfundamental instruments. Using EQ cuts on lower or higher frequencies can free up space (headroom) for other instruments to play, making clear pathways. Muddiness happens very fast when you do not pay attention to the mix (separation, reduction) or do not align it with your stage plan. Especially the misery range, 120 Hz to 350 Hz (500 Hz), is the second range we need to pay attention to (quality); you can make a real difference here while EQing. Adding a reverb will clutter things up very fast, so it is better to start by listening to a clean mix and concentrating on that for a while (dimensions 1 and 2).
Be sparing with adding effects until you are quite sure your clean mix (starter mix towards static reference mix) is running well and can be heard well. Again, anything you add or raise will muddy up the mix; anything you cut or lower will unmuddy it. Still, you cannot prevent muddiness altogether (masking), so don't get stuck on it; setting up a mix must be a bit of routine (planning the dimensions and having a stage plan ready made). Starting clean is best and works fast as a routine; later on you can work more freely and add more. A good clean start according to these rules means better end results. Even when adding effects we tend to use EQ to control the signals and keep everything according to stage planning (dimensions, quality, reduction, headroom, etc.). EQ is the first effect or tool to reach for after fader levels and balances in the panorama are set up. You can be (almost) sure that you will use some EQ on each track; it is most common, and especially use as many low cuts as there are single tracks. Again, how your instrument will sound comes down to adjusting EQ until you are happy with the sound. Remember there are two ways we can use EQing as a tool: quality and reduction. A guitar can sound thin when played in solo mode yet sound very good inside a mix. When a sound is recorded badly and sounds unattractive, it is likely you cannot change a lot with EQ or any other correction, so it is better to record the best sound you can. EQ can bring out any instrument's quality, but with the same EQ you can also make headroom inside a mix by cutting out what is not needed, and at the same time make the fundamental sound ranges heard more clearly. A less muddy and clearer mix (in the lower frequency range) starts with separating what you really need to hear and cutting out what you do not need. The lower frequencies give more power and are really the focus of the mix, and they must stay in the centre all the time, so when using a stereo EQ watch out for swaying to the left or right. The higher frequencies are also important to watch, but they do not really add to the overall power of your mix; they mainly carry rhythmical and compositional intent and are a good measure for the distance of individual instruments. Another requirement is being fitted with good sounding speakers or monitors while adjusting EQ. Even headphones need to be of pure quality. Remember that on small monitors the frequency range from 0 Hz to 50 Hz may not be heard at all. This means you will not hear those frequencies as loudly as your mix is really putting them out, only because they do not come through your speakers. Not hearing the lower frequencies correctly out of your speakers can lead you to counteract this failure by pumping up the lower frequencies. When listening on good speakers that play the lower frequencies well, you can avoid this mistake and not add more than you need. A bigger bass speaker, or a better frequency range from your speakers, will improve your mixing and your ability to hear correctly what is being played. Monitor speakers also tend to sound more natural when their whole frequency range is linear. The room you listen in is important as well. For monitor speakers to really shine, they need to have a flat frequency spectrum. You can't EQ what you cannot hear played correctly. Get good monitor speakers, or when you listen on headphones get good ones. This can be costly, but the best equipment is needed. Headphones are cheaper and for EQing they have a better frequency range. Though headphones can be less effective at playing reverberation, as they sit close to our ears and do not include the room's reverberation sound, they can be a good tool for EQ and compression, unmasking, correlation and balance, dimensions 1 and 2.
Listening on good speakers is important; when you listen on a home stereo set you are missing out on hearing the correct amount of frequencies played. Get good monitor speakers instead. Good equipment starts with good monitor speakers that represent frequencies well from low to high and are as flat as can be. EQing is almost impossible when you can't hear what you're doing. Investing in speakers and a good soundcard or mixer helps you hear what is being played. Investing in noise-free, quality equipment helps you hear what your mix is about, without interference. Only then can you hear what you are doing, and use quality or reduction without compromise. Common Frequency Ranges.
Highs: above 3.5 kHz.
Mids: 250 Hz - 3.5 kHz.
Lows: below 250 Hz.
Brilliance: above 6 kHz.
Presence: 3.5 - 6 kHz.
Upper Mids: 1.5 - 3.5 kHz.
Lower Mids: 250 Hz - 1.5 kHz.
Bass: 60 - 250 Hz.
Sub Bass: 0 - 60 Hz.
EQ Frequency Range. Just to cut the frequency spectrum into sections, here is a chart that can help.
Low Low: 0 - 60 Hz.
Low Mid: 60 - 120 Hz.
Low High: 120 - 350 Hz.
Mids Low: 350 - 1000 Hz.
Mids Mid: 1000 - 4000 Hz.
Mids High: 4000 - 7000 Hz.
High Low: 7000 - 10000 Hz.
High Mid: 10000 - 14000 Hz.
High High: 14000 - 22000 Hz.
Compression. Supporting transients and sustain, increasing the level of quieter sections. Compression is referred to as a dynamic processing tool, not an effect. A compressor reduces the dynamic range of an audio signal when the amplitude exceeds the threshold. The amount of gain reduction is determined by the Attack, Release, Threshold and Ratio settings. The compressor works like an automatic volume fader: any signal going above the threshold is affected. It is better to compress frequently and gently rather than rarely and hard.
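The EQ chart above can be turned into a small lookup, handy when you want to label where an instrument sits in the spectrum. The band edges and names below are taken directly from the chart; the function itself is our own illustration.

```python
# Upper edge of each band in Hz, with the chart's own band names.
EQ_BANDS = [
    (60, "Low Low"), (120, "Low Mid"), (350, "Low High"),
    (1000, "Mids Low"), (4000, "Mids Mid"), (7000, "Mids High"),
    (10000, "High Low"), (14000, "High Mid"), (22000, "High High"),
]

def band_of(freq_hz):
    """Return the chart's band name for a given frequency in Hz."""
    for upper, name in EQ_BANDS:
        if freq_hz <= upper:
            return name
    return "High High"  # anything above 22 kHz is off the chart

print(band_of(80))    # Low Mid  (basedrum / bass territory)
print(band_of(200))   # Low High (the "misery range")
print(band_of(5000))  # Mids High
```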

A compressor is a good tool to reduce instrument peaks and give some dynamics (headroom) back to the mix (reduction). The major issue with a compressor is pumping (quality). We humans like our music to pump, just as we like our hearts to keep pumping and beating, and just as we like to pump it loud. Pumping can be achieved with single band or even multiband compressors to decent effect. The only time we actually hear a compressor at work is when it is hitting hard at its threshold level; most likely you have then gone too far and must be more subtle. The compressor is a subtle effect that only becomes clearly audible when pumping starts to sound. We tend to compress evenly with a low ratio, and to a lesser degree scrape off peaks with a limiter (which is a compressor with higher settings for ratio, etc.).

The setting of the Threshold level is important: it sets the level above which the signal is reduced by a certain amount. This reduction is progressive and increases as the input level goes further over the threshold. By setting the Attack and Release times of the compressor, you control how fast the compressor acts in applying the reduction, and how fast it releases the reduction after the signal drops below the threshold level. By setting attack and release we can affect transients or sustaining sounds. By setting ratio we can adjust the amount of compression.

This is simple ADSR volume compression. Sometimes an envelope effect can work out great for instruments, so refer to your instrument's settings first. With the envelope from the instrument's ADSR we can achieve a good sound before even using compression. A peak compressor with a threshold of -10 dB, an attack time of 10 ms and a release of 100 ms will reduce any signal that goes over -10 dB, with the reduction fading in over the 10 ms attack time; after the signal drops below -10 dB the reduction is gradually released over 100 ms. The same procedure follows each time the threshold level is reached again.
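That behaviour can be sketched in a few lines of Python. This is a deliberately simplified gain computer of our own (no knee, no lookahead, per-sample peak detection), not any specific plugin, but it shows the threshold, ratio, attack and release interacting just as described above.

```python
import math

def compress(samples, sample_rate, threshold_db=-10.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    """Toy peak compressor: reduction fades in over the attack time and
    fades out over the release time."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    gain_red_db = 0.0  # smoothed gain reduction in dB (negative = quieter)
    out = []
    for x in samples:
        level_db = 20.0 * math.log10(max(abs(x), 1e-9))
        over = level_db - threshold_db
        # Static curve: above the threshold, keep only 1/ratio of the overshoot.
        target = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        # Move towards the target: attack speed when reducing more,
        # release speed when recovering.
        coeff = atk if target < gain_red_db else rel
        gain_red_db = coeff * gain_red_db + (1.0 - coeff) * target
        out.append(x * 10.0 ** (gain_red_db / 20.0))
    return out

# A full-scale burst is 10 dB over the -10 dB threshold; at 4:1 only
# 2.5 dB of that overshoot survives, so the settled reduction is 7.5 dB.
sr = 44100
burst = [1.0] * (sr // 10)          # 100 ms at full scale
squeezed = compress(burst, sr)
print(squeezed[0])                  # near 1.0: the attack has only just begun
print(squeezed[-1])                 # about 0.42, i.e. roughly -7.5 dB
```

Notice how the first samples pass almost untouched: that is the attack time letting the transient through, exactly the effect described for the snare drum example later on.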

Most compressors have the following controls, though they may be labeled slightly differently. Mostly used on a general instrument RMS level, a general compressor setting is subtle: just try to remove some hard signals and create some headroom again for other instruments, adjusting the transients or sustain of the original sound, the RMS level, or the peaks. Threshold - The level at which gain reduction begins to happen. Usually measured in dB. Lower threshold values increase the amount of compression, as a smaller signal is required for gain reduction to occur. Ratio - The ratio of change between input level and output level once the threshold is reached. For example, a ratio of 4:1 means that an input level increase of 4 dB only results in an output level increase of 1 dB; the compression result is a reduction of 3 dB. The ratio is the amount of reduction. When the ratio is set at 1:1 there is no reduction when the threshold is passed; the compressor is bypassed. With 2:1, each 1 dB of signal over the threshold is reduced by half and compressed to 0.5 dB, and so on. The higher the ratio, the more compression and reduction is done. A limiter is a compressor with high ratio settings, such as 10:1 to 50:1 or infinite. From a brickwall limiter you would expect everything that goes over the threshold level to be reduced to the threshold level; because the ratio is so high, the output stays close to the threshold level. A compressor with ratios between 1:1 and 5:1 is more subtle than a limiter. Attack Time - The amount of time it takes for gain reduction to take place once the threshold is reached. The ratio is not applied instantaneously but over a period of time (the attack time), usually measured in microseconds or milliseconds. Use longer attack times when you want more of the transient information to pass through without being reduced (for example, allowing the initial attack of a snare drum).
Especially for keeping the transients, the attack can be set to 10 ms or more. This can enhance rhythmic and compositional intent, and enhance the quality of our stage plan. Release Time - The amount of time it takes for gain to return to normal when the signal drops below the threshold. Usually measured in microseconds or milliseconds. With a fast attack and a fast release you will sustain the end part of a note more (sustaining a bass note or bassline, to bring out longer standing bass notes), thus reducing the transients and therefore boosting the parts that sound after the transients (sustain). Makeup Gain - Brings the level of the whole signal back up to a decent level after it has been reduced by the compressor. This also has the effect of making quiet parts (that are not being compressed) louder (see Release). For mixing purposes, when compression has reduced the original level, we can boost with make-up gain to get the signal back up to its original level. Sometimes a compressor has automatic make-up gain. For mastering purposes we tend to stay away from make-up gain.
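The threshold and ratio arithmetic above can be captured in a tiny static-curve function. This is a sketch of our own (real compressors add a soft or hard knee around the threshold, as discussed below), but the numbers match the 4:1 example given earlier.

```python
def compressed_level_db(input_db, threshold_db, ratio):
    """Static compressor curve: below the threshold the level is untouched;
    above it, every `ratio` dB of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# 4 dB over the threshold at 4:1 comes out 1 dB over: 3 dB of reduction.
print(compressed_level_db(-6.0, -10.0, 4.0))   # -9.0
# At 1:1 the compressor is effectively bypassed.
print(compressed_level_db(-6.0, -10.0, 1.0))   # -6.0
# A brickwall-style 50:1 ratio pins the output near the threshold.
print(compressed_level_db(10.0, -10.0, 50.0))  # -9.6
```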

Hard knee and soft knee describe the way reduction takes place around and above the threshold. Soft knee is more curved, and hard knee bends at a sharp angle. Soft knee tends to sound more natural/analog and hard knee tends to be more aggressive/digital. Opto or RMS: Opto behaviour is more digital and straightforward, suited to percussive instruments and drums (fast); RMS for the rest (slower). Side chain compressors. Side chain compression can solve mixing problems when two sounds are played together on two different tracks inside a mix (masking, for instance when a bass note and basedrum sound together in the same frequency range). Split-mode side chain compression is the most scalpel-like dynamic shaping tool to ever exist: it compresses dynamically according to a key input, and you can choose which frequency range is compressed by your keying value. On vocals, for instance, compression can reduce the difference between loud and soft parts, correcting sudden louder parts that jump out. Maybe you need to compress the acoustic guitar part only when the vocalist sings? To create some headroom and unmasking, you would like the loudness of a part to be reduced for a short instant whenever it goes over a set loudness level. Sometimes a bass note and the Basedrum appear at the same moment, so the bass note overcrowds the basedrum for a short while. A nice trick is reducing the Bass only when the basedrum and bass play at the same moment; this makes the Basedrum clearer and does not affect the bassline as much. This can be done manually by editing, muting or cutting out bass notes, or with a side chain compressor trick. In this instance we could use a side chain compressor to correct the problem by reducing the bass note whenever the basedrum goes over a certain threshold, thus temporarily reducing the bass note. This keeps the boom of your basedrum audible and unaffected, as it is the fundamental reference sound (frequency-wise and rhythmically) that can be crucial to your mix.
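The basedrum-keyed bass trick can be sketched very crudely. This toy version (names and numbers are our own) switches the reduction on and off per sample; a real side chain compressor would instead feed the kick into its detector and apply its attack and release times.

```python
def duck_bass(bass, kick, threshold=0.5, reduction=0.5):
    """Crude side chain sketch: wherever the kick's level exceeds the
    threshold, the bass is reduced; elsewhere it passes untouched."""
    return [b * reduction if abs(k) > threshold else b
            for b, k in zip(bass, kick)]

bass = [0.8, 0.8, 0.8, 0.8, 0.8, 0.8]
kick = [0.0, 0.9, 0.9, 0.0, 0.0, 0.9]  # kick hits on samples 1, 2 and 5
print(duck_bass(bass, kick))  # [0.8, 0.4, 0.4, 0.8, 0.8, 0.4]
```

The bass only dips while the kick sounds, so the boom of the basedrum comes through while the bassline is barely disturbed.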

Multiband Compressors. This compressor is mainly used at the mastering stage but can also come in handy while mixing. Most multiband compressors have 4 bands. Each band has its own frequency range, and the reduction of each band can be set up separately. For instance, when controlling the basedrum or bass, we can adjust low, mid and high with different compression settings. Normal multiband default settings:
Band 1, 0 - 120 Hz, Power.
Band 2, 120 Hz - 2 kHz, Warmth.
Band 3, 2 kHz - 10 kHz, Treble, Upper Harmonics.
Band 4, 10 kHz - 20 kHz, Air.
Adjust the bands when needed, for instance:
Band 1, 0 - 120 Hz, Power, first low band.
Band 2, 120 - 350 Hz, Misery range, second low band.
Band 3, 350 Hz - 8 kHz, Mid range.
Band 4, 8 - 20 kHz, Air, Trebles.
Each band acts the same as a single band or normal compressor, except that the spectrum can be adjusted in multiband ranges. Now you can control the bottom end without affecting the higher frequencies while compressing. Each band crosses over into the next. You can understand that with vocals, which must be handled carefully, maybe only the mids can be compressed a bit, without harming the crispy highs or lows. For mixing purposes the multiband compressor can come in handy, but setting up a 4-band compressor can be a fiddly job. With 4 compressors running at the same time, you might not hear as well what you are doing. Because of this complexity, multiband compressors are mostly used for mastering and scarcely for mixing, but they can become a handy tool when resorting to a trick to solve problems, especially when you need split signals to be controlled but do not want copied instruments. For use on single instruments, avoid them except as a last resort. For use on groups, use them only when they have the desired effect without much fiddling around. Multiband compressors tend to show less pumping, but this depends solely on which frequency band or instruments you are working on. To control pumping, better use a single band compressor instead; controlling 4 bands can be a hassle. Compressing. Compressors on individual instruments or tracks are almost always used as an insert effect (pre-fader) and (almost) never as a send effect, because their main function is to change the signal directly. Compressors can be inserted at single instrument track level or as an insert on groups or sends. What we try to achieve is a cleaned-up and better sound (better transients, sustain and RMS levels than before), so make sure what goes into the compressor is as clean as can be. Prior to compression we can place an EQ for cleaning purposes, and use manual editing. Popping sounds and air noises are best rolled off with a low cut, 0 Hz - 35/50 Hz, up to 120 Hz for unfundamentals. A gate can also help clear up the input signal, as can automated or manual muting. When recording you can use compression just to scrape off some peaks; the real compression can be done later inside the mix. Maybe you already placed an EQ for cleaning up (quality, reduction); then place the compressor behind the EQ (all pre-fader). If you are working on a digital system, you have more places to insert an effect: on a track or instrument, a send or a group.
When you place a compressor as an insert effect, do this in effect slot 2, so effect slot 1 stays free for EQ (all pre-fader). Compression is highly dependent on the source material, and as such there is no preset amount of compression that will work for any given material. Some compressors have presets for certain types of audio, and these can be a good starting point for the inexperienced, but remember that you will still have to adjust the input and threshold for the preset to work properly. Because every recording is done with different headroom and dynamics, every compressor also has its own sound and main purpose. The main purpose of the compressor in mixing is to give some structure and dynamics to the sound passing through it. Compression is done by controlling the dynamics (level) of the input by compressing the output. Basically there are some good reasons to use a compressor. For controlling the transients (the start of each note, 0 - 25 ms) and controlling the sustain (30 ms and beyond), a compressor can do a good job of making certain instruments clearer and working them into the dimensions you need (quality). Also, compressing a loud part gives softer parts more relative volume (level). This is why we need to clean the input signal of unwanted noise, or else the compressor will only make that noise louder. Pops and clicks in the lower frequencies can make the compressor react when you do not want it to. So make sure you are delivering a good signal into the compressor; otherwise try to remove problems with EQ up front, a gate, or even edit the audio manually (removing pops, clicks, etc.). The ratio setting for individual instruments runs from about 4:1 to 10:1, don't be shy. Setting the ratio lower will make you use the threshold more. Set the ratio too high and the compressor almost starts to act as a limiter.
Usually the only limiter used in a mix is on the master bus (a brickwall limiter for scraping off some peaks), so ratios like this are out of order on group tracks and individual tracks or instruments. We can use general RMS compression on a group track to weld the individual tracks together even more (also use some compression on the sends), as well as summing. With a ratio setting from 1:1 to 4:1 (less than when working on individual instrument tracks), the compressor will be more subtle and weld (blend) the group into a layer. For mastering purposes a ratio from 1.5:1 to 3:1 is commonly used. Very short release times emphasize the quieter sounds after the transients have passed. This is handy with Bass, Guitar or any other instrument that does not hold its sustain very well; you can get each note to sound straight through to its end this way (sustain). Set the release time for rhythmical content to the tempo, a measure or a beat. When you reduce the peaks of a signal and then add the same relative amount of make-up gain, you are raising not only the instrument by x amount of dB, but the noise floor as well. This is why we need cleaned-up material. While usually not an issue in quality recordings, it can become apparent when compressing quiet acoustic recordings or recordings with a low signal-to-noise ratio. That computer running in the background while recording suddenly becomes more apparent, or you forgot to turn off the ventilator in your living room. Sounds can go from being unnoticeable to being an annoying hum when you compress and raise the make-up gain, even when using EQ. That is why the input must be as clean as possible and cleared of unwanted sounds. The pumping sound you might hear occurs when the compressor kicks in but has too fast a release, and the rest of the mix comes up too fast after the hit (lesser transients and more sustain). To fix this, use a slower release, lower ratio, slower attack or higher threshold. They all have a different effect, so listen and decide what sounds best and gives you what you are trying to achieve. When pumping occurs, it is likely we have gone too far; once it is noticeable, after a while it becomes apparent. If you train your ear, pretty much all radio signals have a certain "acceptable" amount of pumping. Once the compressor is set, do not change the input signal, because this will affect the threshold placement, which then needs to be set again. This is why we first make use of level, balance and EQ before adding a compressor. Hunt down and up to hear the correct setting of a compressor. Listen and go extreme before backing down to a good sound; it is the only way to really hear the reduction well while setting up a compressor. Do not fiddle around with a 5 dB change of threshold: go way lower or way higher, crank or lower the ratio, and listen to the difference (pumping or not). A good rule: when you hear a compressor start to work, you have gone too far. Experiment. Generally you will get better results by learning to use compression and understanding how the controls affect the audio signal. Experiment, listen and visualize, then apply. When compression is not working to adjust levels, use event fader level or balance automation (unmasking), even after the compressor. Automation of level (the fader) is a kind of manual compression, maybe the first choice in line when overall compression does not seem to work out. Using the mute button, for instance.
Compression is easily available, but the original audio must have some good, even sound before entering the compressor. In most cases MIDI notes can be raised or lowered in volume/level by manual editing. Samples can be manually adjusted. Audio on a track can also be edited, and maybe you take the time to do this note by note, level by level. The more even in level or controlled the original audio is when it enters the compressor (RMS, peaks, noise, artifacts, etc.), the less work the compressor has to do (fewer artifacts and less pumping), and the better the result. Limiter. A limiter is nothing more than an automated volume fader. Commonly a limiter will top off (scrape) the signals. Unlike its big brother the compressor, the limiter has fewer buttons and knobs to play with; compared to a compressor, a limiter has a high ratio setting, so its compressing power is high. Limiters work well on a whole mix on the master track. A good in-between version is the peak compressor, combining the functions of a compressor and a limiter. A limiter basically reduces all signals that come over the set threshold. It is mostly used to scrape off some peaks on the master track, uncommonly used on groups or single tracks, but used for the same purpose on the master bus fader to prevent overs on the main mix. For scraping the peaks, set the threshold to -0.3 dB or aim for a reduction of 1 to 2 dB; this does not hurt the transients. Limiters can also serve artistic and creative purposes that are uncommon. Gate. A gate basically cuts all signals that fall below the set threshold. A gate can be compared to a compressor, but instead of reducing the signal by measuring it, the gate cuts signals below the threshold to inaudible. For removing unwanted material (cleaning and reduction) a gate can make a difference. For rhythmical sound content (drum set, percussion, etc.) a gate can cut off the reverb or any other effect according to tempo.
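A hard gate is simple enough to sketch. This toy version is our own illustration: it has no attack, hold or release smoothing, which real gates need to avoid clicks, but it shows the basic idea of muting everything under the threshold.

```python
def gate(samples, threshold=0.1):
    """Hard gate sketch: anything below the threshold is cut to silence."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A snare hit followed by a quiet room tail: the gate keeps the transient
# and mutes the tail, leaving a drier snare.
snare = [0.9, 0.6, 0.3, 0.08, 0.05, 0.02]
print(gate(snare))  # [0.9, 0.6, 0.3, 0.0, 0.0, 0.0]
```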
A gate can also cut off sustaining sounds. For instance, when a pre-recorded snare has room sound or sustaining sound recorded into it, a gate can clean or clear those reverberation or sustaining sounds by only passing the first transient sound. After the gate you will have a drier snare, and you can now create the room by adding a reverb that fits the dry snare signal. Endless creative quality and reduction possibilities here. Delays and gates are often synced to the tempo of the track. Use the mute button for composition-wise intent or manual gating. Finishing a first starter mix. We have now discussed all the features for starting a mix towards a static reference mix. Once you get the hang of starting a mix, this will be a good basic setup. Mixing is more than just setting up all faders and knobs, but for starting a mix towards the static mix we can only give some guidelines and proven togetherness. Starting a mix, we like to stay in dimensions 1 and 2 and use the common tools available. We avoid dimension 3 for now. Keep on mixing with the tools for dimensions 1 and 2 until satisfied; then we will discuss dimension 3, as we also need depth to make our stage plan true. The Static Mix Reference. Most likely you want the best out of your mix and will be adding more effects later on. Do anything to make the whole sound better. Using EQ, Compression, Delay, Reverb (discussed later on), a Limiter or any other device or effect will change the way your mix sounds (the three dimensions, your stage plan). Remember that whenever you add something to your mix, you are changing the levels. So check, adjust and re-check whenever you can. It is quite OK to mix freely and set faders and knobs however you like. As long as it sounds good, it must be good. But keeping headroom (open space for adding) and keeping the VU meter below 0 dB is important. It is common for beginning mixers to pump all levels as loud as they can; this is not what you're looking for. Loudness can seem better, but the sound is actually the same, and we pay attention to overall loudness while mastering. Keeping the total levels (summing) on the master fader VU meter in check keeps your mix ready for later use. If you are happy with the togetherness of your mix, you can raise all track faders so the VU meter sits closer to 0 dB, but remember that this changes only the level, not the sound (and raising too high will produce more artifacts; you will just lose headroom). Keeping headroom anywhere from -4 dB to -14 dB is allowed and well accepted in mixing. Because in the mastering stage there is plenty of power to get your mix to sound as loud as can be, care less about loudness levels when mixing; care about how your mix sounds as a whole. Use quality and reduction first (apply the dimensions in order).
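Checking that headroom is easy to do numerically. Here is a small sketch; the 110 Hz test tone and the 0.35 amplitude are arbitrary illustrations, not values from the text.

```python
import math

def peak_dbfs(samples):
    """Peak level of a mix bus relative to digital full scale (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

# A summed mix peaking at 0.35 of full scale sits at about -9 dBFS,
# comfortably inside the -4 to -14 dB headroom range suggested above.
mix = [0.35 * math.sin(2 * math.pi * 110 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(mix), 1))  # about -9.1
```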
Care about how your stage planning is perceived. So once again, to hammer it down: you are mixing now, so separation as well as togetherness is what matters. Loudness waits until we have finished the mix and go for mastering. As a rule, for a good starter mix we tend to stay inside dimensions 1 and 2. We only add dimension 3 when we are satisfied with finishing off the earlier dimensions (the static mix). Resort first to panning, level, EQ, compression, gates, mutes and limiters, then reverb, delay and overall effects, in the correct order. Review of our start. In mixing, an EQ and a Compressor, Limiter and Gate are good tools to adjust the mix before throwing in more effects and more sounds. Together with fader level and balance, EQ and compression are the most used carving tools for a mix (starter mix towards a static reference mix). Basically EQ does a good job of reducing or boosting frequencies over the whole part or frequency spectrum. Compression, limiting and gating give you something an EQ can't: they affect only certain signals, when they pass a defined border, thus controlling transients and sustain. Take into account that for overall level you use the level faders first, plus manual editing and muting, and pan the panorama first (separation). Use EQ when you need to cut (separation) or raise overall instrumental frequency ranges (quality). Use compression when some parts of instruments peak at certain times and need to be lowered or reduced to give more dynamic range back, keeping things tidy and together (headroom). Use a compressor for transients and sustain (quality). Use a gate to really cut unwanted events. Use a limiter to scrape off some peaks. Use manual editing for removing pops, clicks, etc. (sometimes breathing noises on vocals). A good start gives each track or instrument a place in the available spectrum (stage planning). These are good tools to get some headroom back by reducing or scraping peaks.
Try to imagine what the whole mix can sound like; after you have set up a mix a few times, you will get the hang of it. Remember to get some separation/togetherness out of your mix and reduce frequencies that are not needed per instrument. Try to stay natural and close to the original sounds, but keep what is needed and wipe away what is not to be heard (wipe away more, raise less). Try to transmit natural signals to the listener, so our brain does not get confused (dimensions, 3D spatial information, stage planning). This sometimes means using EQ to simply cut off the outside ranges of an instrument with shelving low or high cuts (reduction). Sometimes the internal range of the instrument needs to sound better (quality): use EQ for overall editing of the sound, while using compression (gating or limiting also) for the time- and loudness-related peaks you need to correct (transients, sustain). Do not forget to balance each instrument from left to right and to keep track of the VU meter, correlation meter, goniometer and spectrum analyzer. Do some checks and re-checks on your reference tracks, like the Basedrum or whichever track you choose as the loudest reference track. Solo tracks as well as listen through the mix summed up towards the master bus fader, towards the last output. Take into account that mixing is always debated and can be explained in different ways, because mixing is a creative thing, but having some guidelines and working by them will increase effectiveness. Especially knowing panning laws, stage planning, where and what to cut, masking and unmasking, dimensions and 3D spatial informational hearing: the more natural, the better. Understanding how to do things takes time and is a repeated learning process; in the end it is pure experience that determines the speed and time needed for mixing towards a starter, static and dynamic mix. This means you will sometimes mix well and sometimes badly, but you will continue to learn from doing so. The human brain also needs time to take in all this information, learning and processing it, ordering it into something you can understand later on. We get tired when listening to loud music for longer times. Taking in too much information and working too hard will not get you there any faster. Take some time off and give your fatigued ears a good rest; this will help you hear your mix on another day sounding different than before, making better decisions. Each time you will learn for a while, and then some realization will set in afterwards. Then you will understand the whole picture.

What you're aiming for is separation while still having some togetherness!
Remember it is better to reduce than to add: cut away what is not needed, and the headroom you create will be rewarded when you need to add things to the mix later on. Getting things to sound louder each time you mix is not important; that we do later on while mastering. Relatively, we have now worked more on dimensions 1 and 2 and avoided dimension 3; although we have discussed it, we did not really apply dimension 3 as an example. This is the end of Basic Mixing part I! You can read more about mixing and what to do next in: Basic Mixing II. There we introduce dimension 3 and some more effects, being less restricted and more creative with the mix (Static Mixing). End of 'Basic Mixing I'.
