Another question about EQ’ing

I have another question for you about EQ’ing… you answered an earlier one for me with great detail and I learned a lot from it.

I was experimenting with sweep EQing after watching a video of George Massenberg demonstrating a parametric EQ he’d designed. Anyway, after reading up on it and trying it, I found I was able to get my vocals sitting much better in the mix by identifying and diminishing “artifacts” or funny frequencies.

I also read that a properly recorded vocal did not need any EQing (which probably means that I still have a long way to go in learning how to record).

So my first question: before you record a vocal, do you have the vocalist record, say, a short segment in order to “ferret out the artifact frequencies,” and then set up the EQ to account for them? In other words, do you drop, say, 1 kHz a few decibels when recording a vocal if that frequency had been identified as a problem frequency? Or do you record it without any EQ and then just EQ the recorded track?

Second question: after recording all your tracks and then assembling them into the finished song, do you go back through the mixed song with all the tracks playing and begin EQing the whole thing? Wouldn’t combining various tracks, say two tracks that had already been EQ’d, create the need to EQ the whole thing again because of combining all those tracks with their frequencies? It seems that you could end up chasing your tail in this kind of process.

Could you share your thoughts on this process?

As always, there is no right or wrong answer, and no one way to do anything.  It’s all about what sounds good and fits the particular song you are working on.

I’m not sure where you read that remark about a properly recorded vocal not needing any EQ, but even with a “properly” recorded vocal, most modern pop and rock mixes will still have some EQ used on the vocals during the mixing process.  However, there are certainly some styles of music where you wouldn’t need much, or any, EQ on the vocal, especially if the arrangement is sparse with lots of room for the vocal, and you have a big selection of microphones to choose from.

Part of it is matching the right microphone to the singer.  If you have a decent microphone collection, and know the response characteristics of each, you should be able to take a pretty good guess at what microphone will work well for certain types of voices and styles of music.  That’s half the battle right there.  If you’ve got enough microphones to choose from, and the time to experiment, you can find a microphone that perfectly complements the singer’s voice and also fits the particular song you are recording.  In that way, you are actually using the built-in frequency response of the microphone (its own “EQ”, if you will) to pre-EQ the vocal to make it sound the way you want.  If you have that luxury of time to experiment, and enough different vocal microphones, then you might be able to get away with little to no EQ during mixing.

However, things often change once you start adding lots of other instruments and sounds to the mix.  So, what may have sounded good when you initially recorded the vocals, might not work when the rest of the instruments are in the mix, and you need to use EQ to make the vocals sit better.

When I worked at the major studio with a nice analog board and some really nice analog compressors and EQs, we would often track with a bit of compression and EQ, but only because we knew what we were doing and the sound we were after… and, we wouldn’t go overboard with it (unless intentionally to create an effect).

However, in my own studio these days, I don’t own any analog EQ.  I still do a bit of compression on the way in, as I have several really nice analog compressors, but I can’t do any EQ on the way in.  Adding digital EQ while recording makes no sense at all, since you are doing it after the conversion to digital anyway, and you might as well save it for the mixing stage.  Don’t put a digital EQ on the input channel in your DAW and record through it unless you have a REALLY good reason to do so, because you won’t be able to undo that later… better to preserve the raw audio exactly the way it came into the computer instead of running it through digital EQ twice.

So, the answer to your first question, in summary, is, NO.  These days I don’t EQ anything on the way into the computer.  Plug-in EQs are pretty damn good these days, so I save EQ for mixdown, and have not bothered to purchase any analog EQ devices for my studio.

I sort of answered your second question already, although it’s not totally clear what you are asking.  But, even if you did EQ stuff on the way in, you usually do have to do more EQ during the mixing stage to get everything to fit together better… that’s why it’s often best to save all EQ duties until mixdown, so you aren’t adding those EQ phase shifts to the audio more than once and over-processing things.  However, back in the analog days, it was very common to EQ on the way in, and also again during mixing.  Many times, the EQ on the way in was not only to fix some issues with the source, but also to compensate for the high frequency loss you would get from analog tape.  You would often boost the high end a bit on the way to tape to compensate for what you knew you would lose from the tape itself, and also so you hopefully wouldn’t have to boost the high end again during mixing, which would also raise the level of the tape hiss.

Another example, though, is that for most rock, pop, and metal type stuff, where you want punchy drums, I know that I’m usually going to EQ out a lot of the “boxy” frequencies from the kick drum, which are around 400 Hz, and then boost some low end thump (around 60 Hz) and some upper mids for the attack (around 3 kHz to 6 kHz, depending on the beater and drum).  So, with a good analog board and good analog EQ, we would often do that type of EQ on the way to tape… again, being careful not to overdo it.  Then, during mixing, once we had the final bass and guitars all in the mix, you would usually add even more EQ to all the drums to get them even punchier (as needed)… so, we may be adding a bit more to what we already did.  Definitely a common occurrence.  Not really chasing our tails, just making additional tweaks to what we already did.
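To make that kick-drum EQ concrete, here’s a rough sketch of it as a chain of peaking filters, using the well-known RBJ “Audio EQ Cookbook” biquad formulas in Python. The specific gains and Q values here are just illustrative numbers I picked for the example, not a recipe from this article, and numpy/scipy are assumed to be available:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ biquad; returns normalized (b, a) coefficients."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# cut the "boxy" 400 Hz region, boost 60 Hz thump and some upper-mid attack
centers = (400, 60, 4000)
stages = [peaking_eq(400, -6.0, 1.4, fs),
          peaking_eq(60, 3.0, 1.0, fs),
          peaking_eq(4000, 2.0, 1.2, fs)]

for (b, a), f0 in zip(stages, centers):
    w = 2 * np.pi * f0 / fs
    mag_db = 20 * np.log10(abs(freqz(b, a, worN=[w])[1][0]))
    print(f"{f0:>5} Hz stage gain at center: {mag_db:+.1f} dB")
```

A handy property of this biquad is that the response at each stage’s center frequency is exactly its specified dB gain, so the printout confirms the cut and boosts land where intended.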

However, these days, even when recording drums in my own home-based studio, I can get very close to the sound I want with proper choice of microphones and very careful placement, and NO EQ on the way in (I don’t have any analog EQ).  Of course, that’s only with a great drummer who has a great sounding kit to begin with, and who knows how to set it up and tune it correctly.  I can get a really good drum sound that stands on its own without any EQ at all.  But, if it’s a very dense and heavy mix, once those guitars and bass are in, I almost always have to do some EQ to the drums during mixing to get them where I need them to be with everything else in the mix.

BTW… started a new site for my “internet mixing” service:
http://www.stephensherrardmixing.com

Hope this helps!

Yes, it does help. Thanks for the information.

What I meant by “chasing your tail” was that every time you mix 2 or more tracks together, it seems the potential exists for introducing phase anomalies or clashing frequencies. Therefore, even if I did a “sweep EQ” of a track, removed a few problem frequencies, and then mixed it with another track or tracks, I could suddenly find myself having to EQ again and again because of newly introduced phase anomalies created by the new tracks being mixed together. So then you EQ and EQ again and again… thus chasing your tail. Perhaps it doesn’t really work this way, because eventually you run out of EQs… or, most probably, you hit the biggest phase problems and move on.

So your thought of EQing primarily during mixdown would help with this “chasing your tail” or having to EQ tracks already EQ’d.

Many of us here in the “home studio sector” do not have a multitude of microphones; I only have 3 to work with: a large diaphragm condenser, a small diaphragm condenser, and a dynamic microphone.  Therefore, I have to make them all work as well as I can.

But anyway, you did answer my questions as usual and I appreciate your input.

OK… I understand what you are trying to say by “chasing your tail”.  However, audio doesn’t really work that way.  If you have two different tracks, different instruments, or even different vocal takes, summing them together via mixing doesn’t have any kind of cumulative EQ effect.  Even if they share a lot of the same frequencies, the signals are not correlated, so adding EQ in one frequency range on one track isn’t going to affect the frequencies on the other track at all.  Separate EQs on separate tracks don’t add together or subtract in any way.  So, there isn’t any kind of chasing in that regard.
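That linearity is easy to demonstrate numerically. In this toy sketch (two sine waves standing in for two tracks; numpy assumed), cutting one “track” by 6 dB before summing changes its own frequency bin in the mix but leaves the other track’s bin completely untouched:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                        # 1 second of audio → 1 Hz FFT bins
vocal = np.sin(2 * np.pi * 1000 * t)          # pretend "vocal" track at 1 kHz
guitar = np.sin(2 * np.pi * 3000 * t)         # pretend "guitar" track at 3 kHz

gain = 0.5                                    # ~6 dB cut applied only to the vocal
mix_before = vocal + guitar
mix_after = gain * vocal + guitar

spec_before = np.abs(np.fft.rfft(mix_before))
spec_after = np.abs(np.fft.rfft(mix_after))

print(spec_after[1000] / spec_before[1000])   # vocal bin is cut in half
print(spec_after[3000] / spec_before[3000])   # guitar bin is unchanged
```

Because mixing is just addition, the spectrum of the sum is the sum of the spectra, so an EQ move on one track never alters another track’s content — it only changes how they sit next to each other.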

However, there are masking effects that are more of a psycho-acoustic thing than anything to do with frequencies or math.  The presence of certain frequencies at certain volume levels will mask out other frequencies at other volume levels.  I don’t know how to explain this very well, but it is the fundamental principle of lossy forms of audio compression, such as MP3.  MP3 encoders use these principles to figure out what frequency content our ears won’t be able to hear because it is being masked out by other content, and then they throw that data away so they can reduce the file size.  Obviously, there are artifacts and the more info you try to throw away (with lower bit rates) the more noticeable it becomes, but that’s an entirely different topic.

When mixing, one of the most obvious examples of this type of masking is bass guitar in a guitar and drum heavy mix.  If you solo the bass guitar and make it sound really deep and big and full, and then put it in the mix with the drums and all the other guitars, all of a sudden, your bass becomes pure mud and loses all definition and is very hard to hear.  Our ears are tuned for mid to high frequencies where the human voice lives, so it’s very hard for us to hear bass properly when there is a lot of mid-range and high frequency content to get in the way.

To combat this effect with bass guitar, you usually need to take away a HUGE chunk of low end, and boost the mid-range to the point where the bass guitar would sound really thin and “bitey” or even “wimpy” on its own.  But, when you put it in the mix, it sounds big and full, like there is a lot of bottom there, even if you removed all the bottom with a filter.  This is another psycho-acoustic effect where our brain can fill in the missing fundamental frequency if it hears the harmonics.  Thus, you can completely filter out the fundamental frequency of the bass, but still “hear” it in your head as long as all the harmonics are still present.
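Here’s a small sketch of that missing-fundamental trick (scipy assumed; a 40 Hz tone stands in for a bass note, chosen so everything lands on exact FFT bins). A steep high-pass above the fundamental removes almost all of it while leaving the second harmonic, which our ears use to infer the pitch, essentially untouched:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs                       # 1 second → 1 Hz FFT bins
f0 = 40                                      # stand-in for a bass low E (~41 Hz)
# a crude "bass note": fundamental plus four harmonics at falling levels
note = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))

# steep high-pass above the fundamental but below the 2nd harmonic
sos = butter(8, 60, btype="highpass", fs=fs, output="sos")
filtered = sosfilt(sos, note)

spec = lambda x: np.abs(np.fft.rfft(x))
fund = spec(filtered)[f0] / spec(note)[f0]            # fundamental: mostly gone
harm2 = spec(filtered)[2 * f0] / spec(note)[2 * f0]   # 2nd harmonic: nearly intact
print(f"fundamental kept: {fund:.2f}, 2nd harmonic kept: {harm2:.2f}")
```

Even with the 40 Hz fundamental almost entirely filtered out, the harmonics at 80, 120, 160, and 200 Hz still imply a 40 Hz pitch to the listener, which is why the thinned-out bass still reads as “full” in a dense mix.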

Basically, though, the point I’m trying to make is that sounds do affect each other, but it’s not the effect of EQs on different tracks interacting with the frequencies of other tracks.  It’s just our brain selectively hearing some things while other things are masked out.  That’s why it’s always important to EQ, or at least check your EQ, with everything in the mix so you can hear what it sounds like with everything together.  The biggest mistake beginning mixing engineers make is to solo every instrument and work on EQ and compression until the track sounds really big and full and great on its own.  They do this with every track, and then when they put everything together, they are left with a big muddy mess!

Anyway, I could go on, but I already wrote a lot of this in an article already on this site called “3D Mixing and the Art Of Equalization” (or something like that).

Posted in Ask MusicTECH!