4_.phrantek._4 random title
| 5 Oct 2010 14:39 xxx.xxx.xxx.xxx |
Re: Would really love some feedback on this mixdown...
My pleasure, and thanks for the nod.
As far as your process goes, I would actually suggest bouncing your individual tracks so you can work with the audio BEFORE you do any EQing with the Reason EQs, unless a particular EQ is absolutely essential to the sound. IMO Reason is a prosumer product unless it's used in tandem with a host that can run professional-quality processors, so you'll get better results if you save the EQing for Sonar. For the record, EQ and dynamics plugins are usually more effective (and more consistent) when they process rendered audio rather than running during realtime playback.
I'm very much of the school of thought that you should pack a sound with as much relevant frequency content as possible before you start taking stuff away from it. Obviously, if cutting a notch out of one sound helps the layering process, that's another story entirely. But if you work with good quality samples from the outset, it shouldn't be too big a deal. I hardly ever do any EQing on my individual drum hits before the individual elements are summed. But I've also run most of my favorite breaks through analog EQs / gain stages and re-recorded and re-sliced them in Recycle, and I tend to strive for drum sounds composed of no more than three breaks/samples.
As far as the Stereo Space goes, I did plan on doing more with that for the finished product. I was thinking of having the mid highs fade in and pan with the bass line, and other things like that.
I'm not sure quite what you're saying, but I wasn't necessarily suggesting automated panning (although for certain elements, like the aforementioned swells/stingers/FX, it could be really sweet). Rather, I was suggesting that you use widening/narrowing to create a fuller stereo image.
One of my instructors at Full Sail said he was always fascinated by the mixes a friend of his pulled off - using gestures, he described the mix as an inverted triangle. What I took that to mean is that sub bass (the point of the inverted triangle) was rendered in mono, and the higher the frequency, the more of the stereo field it could utilize.
A lot of drum and bass engineers are pretty obsessed with mono (Tech Itch, for one, although I'm not sure if he still does strictly mono mixes), and understandably so; it helps you avoid a lot of phasing issues. But you sacrifice production depth considerably. I'm of the opinion that it is okay to have mid-bass elements in stereo, provided you take extra care not to overdo the width. I've had good luck limiting the stereo width of mids to something like 30-50% or less. Jo-s once recommended that I pan no further than the 10 o'clock and 2 o'clock positions. Generally, I agree. But...
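If you're curious what that width limiting actually does under the hood, here's a rough mid-side sketch in Python (the test signal and the 40% figure are just stand-ins; real widener plugins layer delay and phase tricks on top of this basic math):

```python
import numpy as np

def set_stereo_width(left, right, width):
    """Scale stereo width via mid-side encoding.

    width = 0.0 collapses the image to mono, 1.0 leaves it
    unchanged, and values in between narrow it (e.g. 0.4 for
    the ~40% mid-range width discussed above).
    """
    mid = (left + right) * 0.5
    side = (left - right) * 0.5 * width
    return mid + side, mid - side

# Hypothetical stereo signal: the same sine in both channels
# at different levels, i.e. an off-center image.
t = np.linspace(0, 1, 44100, endpoint=False)
left = np.sin(2 * np.pi * 220 * t)
right = 0.5 * np.sin(2 * np.pi * 220 * t)

# Narrow the mids to 40% width.
l_narrow, r_narrow = set_stereo_width(left, right, 0.4)
```

At width 0 both outputs are identical (pure mono), which is also a handy way to check a mix for phase cancellation.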
With the drums I didn't add any compression because I believe the samples were already compressed. I have been messing with a stereo widener on drums for the past couple of days and that seems to help with things, so I may try that for this track.
...for drums I make something of an exception, which is largely due to my processing methodology. It's okay to compress drum samples multiple times, but you have to have a game plan. If you're working with compressed samples that need work, I would highly recommend using an expander plugin on your first summing bus (i.e., where you output all drum tracks to a single stereo channel) to restore some of the lost dynamics.
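For the record, downward expansion is basically compression in reverse: anything under the threshold gets pushed further down, stretching the dynamics back out. A toy version (no attack/release smoothing, and the threshold/ratio values are arbitrary, not a recommendation) looks like:

```python
import numpy as np

def downward_expander(signal, threshold_db=-30.0, ratio=2.0):
    """Very simplified downward expander.

    Samples whose level falls below the threshold are pushed
    further down: with ratio=2, every dB under the threshold
    becomes 2 dB under it.
    """
    eps = 1e-12  # avoid log(0)
    level_db = 20 * np.log10(np.abs(signal) + eps)
    below = level_db < threshold_db
    gain_db = np.where(below, (level_db - threshold_db) * (ratio - 1), 0.0)
    return signal * 10 ** (gain_db / 20)
```

A real expander plugin tracks a smoothed envelope instead of acting per-sample, but the gain math is the same idea.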
At that point, I would start prepping for parallel compression. First, you need to mult the summed drum track - meaning you need to duplicate the signal somehow. In Reason, you would use a Spider Audio Splitter. But you could also just bounce your stereo drum track to audio (I recommend keeping crash cymbals and percussion, like congas, separate) and run it on multiple audio tracks.
I usually use three instances of my drum track. One I run in mono and add no effects whatsoever. I run the other two in stereo and use different compression / harmonic maximization plugins (my favorite pairing of late is CamelCrusher and Stillwell Audio's Rocket Compressor) and stereo width settings for each. As a general rule, I keep the distorted signal narrow (less than 100% stereo width) and the compressed signal wide (200%). Keeping in mind that the wider the image, the quieter the signal will sound, I balance the three signals together. Standard parallel compression methodology suggests leveling the effected signals slightly below the un-effected signal, but play around with it until it sounds right to you. Then output all three tracks to a single bus again (which is where you will adjust the level of the signal in your mix).
The reason this trick sounds so good is that you preserve signal dynamics which might otherwise be lost to compression, without forgoing the benefits of compression. Furthermore, none of the signals compete for gain in the stereo field, because each one occupies it complementarily. Your mono track fulfills the need for a strong center image, and your stereo tracks keep your drums from sounding too narrow.
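If it helps to see the routing spelled out, here's a toy numerical sketch of the three-way blend. The compressor, the soft clipper, the width figures, and the -3 dB offset on the effected copies are all simplified stand-ins for whatever plugins and settings you'd actually dial in:

```python
import numpy as np

def simple_compressor(x, threshold_db=-20.0, ratio=4.0):
    """Crude static compressor: level over the threshold is
    reduced by the ratio (no envelope smoothing)."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    return x * 10 ** (-over * (1 - 1 / ratio) / 20)

def soft_clip(x, drive=3.0):
    """Crude harmonic saturation via tanh soft clipping."""
    return np.tanh(x * drive) / np.tanh(drive)

def set_width(l, r, width):
    """Mid-side width scaling (see the widening discussion above)."""
    mid, side = (l + r) * 0.5, (l - r) * 0.5 * width
    return mid + side, mid - side

def parallel_drum_bus(left, right):
    # 1) Dry copy, summed to mono, no effects: strong center image.
    mono = (left + right) * 0.5
    # 2) Compressed copy, widened (~200% width).
    comp_l, comp_r = set_width(simple_compressor(left),
                               simple_compressor(right), 2.0)
    # 3) Distorted copy, kept narrow (~50% width).
    dist_l, dist_r = set_width(soft_clip(left), soft_clip(right), 0.5)
    # Blend, with the effected copies roughly 3 dB below the dry copy.
    wet = 10 ** (-3 / 20)
    return (mono + wet * (comp_l + dist_l),
            mono + wet * (comp_r + dist_r))
```

The output pair is what you'd then route to a single bus and level against the rest of the mix.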
Another note on drums - engineers may mix drums from either the player's perspective or the audience's. Supposing you were seated on the throne behind a drum kit, your hi-hat would be over to the left, your ride would be over to the right, and (assuming it wasn't a stripped-down punk trap kit) each of your toms would sit at a specific spot in the field. From the audience's perspective, each of those positions is reversed. And in recording, panning is sorted out by the placement of a pair of microphones, which themselves define the stereo field of their sources. Consider this when you're setting up your drums, but remember that it's by no means the only way. These are just tried-and-true, logical methods. But never pan your snare!
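And when you do place kit pieces around the field, the standard math behind the pan knob is an equal-power pan law. A bare-bones version (purely illustrative; the "two-thirds left" figure for 10 o'clock is my rough guess, not a standard):

```python
import numpy as np

def equal_power_pan(mono, position):
    """Equal-power (constant-power) pan law.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    L^2 + R^2 stays constant, so perceived loudness doesn't dip
    or bump as a source moves across the field.
    """
    theta = (position + 1.0) * np.pi / 4.0
    return mono * np.cos(theta), mono * np.sin(theta)

# Roughly a "10 o'clock" hi-hat: about two-thirds of the way left.
hat = np.array([0.5, -0.25, 0.9])
hat_l, hat_r = equal_power_pan(hat, -0.66)
```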
Let me know if I can offer any additional explanation. Also, I'd be totally happy to work with you on the mix, but ideally there would be a (small) fee involved for my time and training. I did just have to move back to L.A. because I'm broke and apparently unemployable (at least in Portland).