Resources for the Recording Musician
August 5, 2007

Phasing and phase correction


Doctor: I've recently become aware of phasing and attempts to correct it.  It's a bit tricky.  An out-of-phase track or instrument really disappears when you listen to the final product.

First of all, is it good to have some amount of phase differential in a song?  It seems like phasing can add a bit of perceived depth to a track.

Also, can virtual instruments get out of phase? It seems they can.  It also seems that an individual track can be in phase when you meter it, but when you solo that track or instrument in the final mix, it can be less in phase.  Why might that be?

As far as correcting the problem, do you have any tips?  I have Ozone3, which I like.  It also has a phase meter and correction tools within the program.  Those have been helpful.

Any explanations or tips are always appreciated.

Thanks again!

Phase can indeed be a complicated thing to understand, but, in practice, it all just comes down to listening and adjusting things until it sounds best.

To read up more on the technical details of phase, you might want to do a Google search for phrases such as "understanding phase" or "phase relationships".  There is a Mix magazine article that will show up near the top of the results if you type in "understanding phase" which does a decent job of explaining it, even though it seems to emphasize live applications a bit.

The most basic thing you need to understand is that when you are talking about phase as it relates to recording and mixing, you are generally talking about the phase relationship of two signals to each other... usually two different signals from the same sound/audio source.  This can be two (or more) different microphones used to record the same sound/instrument, or it can be your two speakers playing back a stereo mix.  You can also have a single microphone that picks up the direct sound of a source, but then also the sound reflected off of nearby surfaces, which will have a different phase relationship to the direct sound.  Also, most equalizers are going to add some sort of phase shift to your signal, unless they are "linear phase" equalizers.  Plenty of other types of processing affect phase as well.  In short, there are many things that affect the phase of a signal (many more than the few examples listed here).
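To make the idea of a phase relationship a bit more concrete, here is a rough Python/NumPy sketch of my own (not tied to any particular DAW or plug-in, and the sample rate and frequencies are just arbitrary choices).  It sums a tone with a delayed copy of itself, the way a reflection or a second microphone would, and shows how the amount of delay determines whether the two copies reinforce or cancel:

```python
import numpy as np

fs = 48000                      # assumed sample rate in Hz
t = np.arange(fs) / fs          # one second of time
f = 100.0                       # test tone frequency in Hz

direct = np.sin(2 * np.pi * f * t)

# A copy of the same tone arriving later (e.g. a reflection or a second mic).
# A delay of half a period puts it 180 degrees out of phase at this frequency.
delay = 1 / (2 * f)             # seconds
delayed = np.sin(2 * np.pi * f * (t - delay))

print("peak of direct tone:", np.max(np.abs(direct)))            # ~1.0
print("peak with 180 deg shift:", np.max(np.abs(direct + delayed)))  # ~0.0, cancellation

# A quarter-period delay (90 degrees) neither fully reinforces nor cancels.
delayed_90 = np.sin(2 * np.pi * f * (t - 1 / (4 * f)))
print("peak with 90 deg shift:", np.max(np.abs(direct + delayed_90)))  # ~1.41
```

The same fixed delay gives a different phase shift at every other frequency, which is why real-world phase problems rarely sound like a simple volume change.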

On many mixers (software and hardware) and microphone pre-amps, you'll often find a "polarity" switch, which is often mislabeled as a "phase" switch.  In correct terms it is a polarity switch as it simply swaps the positive and negative.  It would be the same thing as switching the positive and negative wires on your speakers... you aren't really switching the "phase", you are just switching polarity.  However, if you flip the polarity of a signal and then add it back to the original version at an equal amount, you'll get perfect cancellation.  People often say that this is "out of phase" when more correctly the polarity is just reversed.  Phase also relates to time, and a 180 degree phase shift would also imply a time shift and would not necessarily totally cancel out a signal if added back to the original signal, unless it was a steady state signal (such as a pure sine wave) that didn't change in amplitude or frequency over time.
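Here's a tiny sketch of that difference, again just illustrative Python/NumPy: flipping polarity (multiplying by -1) nulls the signal perfectly when summed with the original, while a time shift on anything other than a steady tone does not:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(48000)   # an arbitrary, changing signal, not a pure tone

flipped = -signal                     # what the "phase"/polarity switch really does
print(np.max(np.abs(signal + flipped)))   # 0.0 -- perfect cancellation

# A true phase shift implies a time shift, which only nulls a steady-state tone.
# Shifting this noise by ~5 ms (circularly, for simplicity) does not cancel it:
shifted = np.roll(signal, 240)            # 240 samples at 48 kHz
print(np.max(np.abs(signal + shifted)))   # nowhere near zero
```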

You mentioned that phasing can add a bit of perceived depth to a track.... well, yes... in a way.  Many of those stereo widening type plug-ins basically use "phase" or polarity tricks to make audio seem wider.  The most basic ones simply reverse the polarity of one of the two channels of a stereo signal and then add that back in at different amounts depending on how wide of an effect you want.  The problem with those is that if you sum those two channels back to mono (such as on AM radio or a mono TV, or even FM radio where the reception is poor and the receiver switches to mono), then the signal that's been widened with the simple polarity switch method will be greatly lowered in volume or even disappear completely, depending on how much of the reversed polarity signal was used.  More advanced stereo widening tools will use other methods to make it more mono compatible.
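As a rough sketch of why that simple polarity-based widening loses level in mono (this is a generic version of the trick, not how any specific plug-in is implemented, and the "width" amount is just an example parameter):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48000
left = rng.standard_normal(n)      # stand-ins for the two channels of a mix
right = rng.standard_normal(n)

width = 0.5   # how much of the opposite channel, polarity reversed, to blend in

# Naive widener: feed each channel some of the other channel with flipped polarity.
wide_left = left - width * right
wide_right = right - width * left

# Mono fold-down: the widening terms cancel against the original channels.
# wide_left + wide_right == (1 - width) * (left + right)
mono_plain = 0.5 * (left + right)
mono_wide = 0.5 * (wide_left + wide_right)

print("mono level without widening:", np.std(mono_plain))
print("mono level with widening:   ", np.std(mono_wide))   # noticeably lower
```

At width = 1 the mono fold-down disappears completely, which is exactly the AM-radio / mono-TV problem described above.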

You also ask if it's good to have some amount of phase differential in a song.  That question by itself really doesn't make sense.  What phase difference are you talking about?  The phase difference between the left and right channel of a stereo mix?  Or between a couple of different microphones used to record one instrument?  Or using a stereo widening program on the whole mix?

Unless you pan every signal dead center in the mix, i.e., make everything mono (no stereo signals), you will always have a phase difference between the left and right channels of your stereo mix (if you are looking at a phase meter) by the simple fact that the two channels are different.  If you want a nice sounding and wide stereo mix, you'll typically pan different instruments to different sides of the stereo mix, or you may have some stereo signals that were recorded with two microphones, or that have stereo effects applied to them, or whatever.  So, for a typical stereo mix, there is really no such thing as having the whole mix "in phase" or "out of phase"... the two channels are simply different.  However, there will be some signals that are mono, or mono to some degree, such as a dry vocal recorded with one microphone and panned right in the center, or the bass guitar, or the kick and snare (if they come from a mono drum sample, or a single microphone, without any overhead or room microphones panned out).  But, if your whole mix was mono, that would be pretty boring for someone listening in stereo and used to wide sounding stereo mixes.
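A phase/correlation meter like the one in Ozone is essentially measuring how similar the left and right channels are.  Here's a very rough version of that idea in Python (a plain correlation coefficient over a whole buffer; real meters are time-windowed and ballistically smoothed, so treat this strictly as a sketch):

```python
import numpy as np

def phase_correlation(left, right):
    """Rough stereo correlation: +1 = identical channels (dead-center mono),
    0 = unrelated channels, -1 = one channel is a polarity flip of the other."""
    left = left - np.mean(left)
    right = right - np.mean(right)
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    return np.sum(left * right) / denom if denom else 0.0

rng = np.random.default_rng(2)
mono = rng.standard_normal(48000)
other = rng.standard_normal(48000)

print(phase_correlation(mono, mono))    # +1.0: mono content panned center
print(phase_correlation(mono, -mono))   # -1.0: polarity-reversed channels
print(phase_correlation(mono, other))   # near 0: fully independent channels
```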

Really, the main time that we as recording engineers worry much about phase is when we are recording something using more than one microphone.  When you do that, you need to listen to the sound of the microphones mixed together to see if it is pleasing or not.  There will always be some phase differences between the microphones simply because of the distance differences between the microphones and the source, as well as differences in the response characteristics of the microphones themselves.  It's not an "all or nothing" type of deal, though.  If you have two microphones at different distances from the source, the phase relationship between those two microphones will vary over the course of the frequency spectrum, since each frequency has a different wavelength, and thus the amount of phase shift at each frequency will vary given a fixed distance difference.  So, when you add the sound of those two microphones together, certain frequencies will be emphasized while others will be attenuated.  The trick, then, is to move the microphones around until you get the most pleasing combination when adding them together.

Now, as you add more microphones, it becomes even more complex!  Recording a live drummer with modern techniques usually requires many microphones, and they each pick up sounds from the whole kit (not just the drum they are pointed at), so adjusting positions of the microphones to get a good overall sound can require a lot of time and patience.  Even moving a microphone by an inch or less can sometimes make a big difference in the sound.  You'll also want to use that polarity reversal switch on your microphone pre-amp or mixer while listening and moving microphones to see if one way sounds better than the other... For example, if you have one microphone on top of the snare, and another underneath pointing up, typically the combination will sound better if you reverse the polarity of one of the microphones (but not always).
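If you want to see why a fixed distance difference treats each frequency differently, here's a small sketch that estimates the cancellation (notch) frequencies you'd get from summing two equal-level mics at different distances from the same source, assuming roughly 343 m/s for the speed of sound (the function name and numbers are just my own illustration):

```python
SPEED_OF_SOUND = 343.0          # m/s, roughly, at room temperature

def comb_notches(path_difference_m, f_max=20000.0):
    """Frequencies (Hz) where two equal-level mics at different distances from
    the same source cancel when summed: odd multiples of 1 / (2 * delay)."""
    delay = path_difference_m / SPEED_OF_SOUND      # extra travel time, seconds
    first_notch = 1.0 / (2.0 * delay)
    notches = []
    f = first_notch
    while f <= f_max:
        notches.append(round(f, 1))
        f += 2.0 * first_notch                      # next odd multiple
    return notches

# One mic 30 cm farther from the source than the other:
print(comb_notches(0.30)[:5])   # first few cancellation frequencies in Hz
```

Move the farther mic by an inch and all of those notches shift, which is why such small position changes can audibly change the combined sound.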

Then, the next time you worry about phase is when you are mixing.  Once you start doing a lot of EQ adjustments to tracks, you are also messing with the phase of the signal.  If it's just a single track for an instrument, it really doesn't matter as much, since that track isn't being mixed in with another track of the same instrument, so there is nothing for it to have a phase difference with.  Again, mixing a live drummer that was recorded with multiple microphones to multiple tracks is generally where you start to run into trouble... EQ or other processing on one of the drum tracks can change the phase relationship of that track with regard to the other tracks, and you have to listen to how they all sound together, possibly even using that polarity reverse switch again to see which sounds best... or trying alternate types of EQs to see which one sounds the best.
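To see the kind of phase shift an ordinary (minimum-phase) EQ boost introduces, here's a sketch using a standard textbook biquad peaking filter (the classic "audio EQ cookbook" design, not any particular plug-in's EQ; the 1 kHz / +6 dB settings are just examples):

```python
import numpy as np
from scipy import signal

def peaking_eq(f0, gain_db, q, fs):
    """Standard second-order peaking-EQ (bell) biquad coefficients."""
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(f0=1000.0, gain_db=6.0, q=1.0, fs=fs)   # +6 dB bell at 1 kHz

# Inspect the phase response of the filter across the spectrum.
w, h = signal.freqz(b, a, worN=4096, fs=fs)
phase_deg = np.degrees(np.angle(h))

idx = np.argmin(np.abs(w - 700.0))
print(f"phase shift at ~700 Hz: {phase_deg[idx]:.1f} degrees")   # clearly non-zero
```

That non-zero phase shift is harmless on an isolated track, but on one mic of a multi-mic drum recording it changes how that track sums with the others, which is exactly the situation described above.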
