Composing : Convolution/Reverb Wet/Dry/somewhere in the middle? by Joel Irwin

Joel Irwin


This is mostly an opinion question since there is likely no single solution/consensus. For those of you who score/output electronically, what do you do for convolution reverb? I am particularly interested in those using Kontakt-based samples, though this discussion is not limited to that. Also keep in mind that what you use could depend on how your soundtrack is mixed - mixed high so the music is clearly heard, or soft so it is barely heard.

In some studio films, the soundtrack is mixed completely or nearly dry. In others, there is comparatively much more depth, sounding as if the music was played on a soundstage or in a concert hall.

Many/most commercial instrument samples come with their own convolution/reverb options. Some composers simply use what comes with each instrument sample. However, if instrument samples are mixed from many different vendors/sources, the 'space' each instrument plays in could sound different - which may be what the composer wants, or perhaps not (if they want consistent space definitions).

Samplers like Kontakt often allow the composer to create (for each output line) an additional convolution/reverb definition (with or without the one that comes with the sample itself).
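For anyone less familiar with the terms: a convolution reverb convolves the dry signal with a recorded impulse response of a real space, and the wet/dry control blends that result back against the untouched signal. Here is a minimal numpy sketch of that blend - the function name and the toy four-sample impulse response are mine, purely for illustration, not anything from Kontakt or any plugin:

```python
import numpy as np

def convolve_reverb(dry, impulse_response, wet=0.3):
    """Convolve a dry signal with a room impulse response and
    blend the result with the untouched dry signal.
    wet=0.0 is fully dry; wet=1.0 is fully wet."""
    tail = np.convolve(dry, impulse_response)   # dry signal smeared through the room
    out = np.zeros_like(tail)
    out[: len(dry)] = (1.0 - wet) * dry         # dry portion, zero-padded to tail length
    out += wet * tail                           # wet portion
    return out

# Toy impulse response: direct sound plus one quiet late reflection.
ir = np.array([1.0, 0.0, 0.0, 0.25])
click = np.array([1.0, 0.0, 0.0, 0.0])
mixed = convolve_reverb(click, ir, wet=0.5)
```

"Somewhere in the middle" between completely dry and soundstage-wet is, in this picture, just a different value of `wet`.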

On my last score, I left the convolution/reverb definitions intact for the samples I used (for example, for my Cinesamples instruments, I left the default of "Dennis Sands" on max). I then added, for everything but the piano, a convolution preset of "Concert Hall A". I used less convolution for the piano.

When the score was done, the filmmaker loved it (including the convolution setting), but the faculty in the composition class I take were very emphatic that I had used way too much space/convolution and insisted it be made much drier.

So far I have been unable to come up with a drier setting that I like that does not sound like it has no space at all.

So for those of you who work with Kontakt: do you use a convolution preset, create your own, use a combination, or not use Kontakt at all for convolution?

Simon Lambros

Hi Joel, I have started to use the EW Spaces reverb for my mixes, and reduce the reverb in the VIs. This way, I feel I am "placing" all the instruments in the same hall in their respective places! However, making the music sound great in the studio is not necessarily good for the film. Because the music is mixed with both dialogue and sound effects/atmos, different frequencies can be masked, and the dubbing mixer will want to add reverb to the music so that it sits well in the completed sound dub. Reverb can get swallowed up when added to a sound mix, but too much reverb will make the music sound muffled and muddy when mixed.

Also, the dubbing mixers I work with tend to want separate stems: either Brass/Perc/WW/Strgs/Electronics, or sometimes in two or three frequency bands, i.e. low, mid, high. This is because different instruments need to be "boosted" relative to the others when added to the sound mix - an oboe solo over strings can sound great in the studio, but when placed under dialogue you may lose the sound of the strings and just hear the oboe (which is not what you had in mind when you mixed it!). I speak from experience, as this happened to me once!!

Having said that, iZotope's Neutron may well change working practises! Don't know whether I have helped or just gone off at a tangent. Happy experimenting!!

Joel Irwin

Great feedback, thanks to all. The only thing to keep in mind is that 'at the low end', the person doing the mixing typically just merges all the tracks together, defines some settings for the tracks (especially the music track) with keyframe values, and goes with it. To the extent that I have been at the mixing sessions, I have not yet seen the more sophisticated 'audio engineering' techniques you described above being used. In fact, oftentimes there is no qualified audio engineer available, and the mixing is done by either the video editor or the filmmaker themselves (who may or may not be versed in the areas you describe). So whatever reverb/convolution is in the music mix is generally what gets mixed.

Simon Lambros

In that case, always try to mix your music at the volume you think it will be played at: if it sits under dialogue, make sure you mix the music so that it doesn't drown out the words. The amount of reverb you will want will then vary depending on the volume. You can simply bus your music tracks to a separate bus and pull the volume down on that bus. You could even insert a "master" reverb on the bus, unless you want to use different reverbs for different effects.

Alternatively, you can use a send from each of your VIs to a master reverb bus, which will allow you to balance the different amounts of reverb in the different sample libraries. Then you can send that bus output to the other bus with the "dry mix". All methods will work. The main thing to remember is that you are best off with two versions of your mix - one for the film, and a different one for your "album"!
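The send-bus routing described above can be sketched in a few lines of numpy. This is only an illustration of the signal flow (per-track send levels feeding one shared reverb, then summed with the dry bus); the send levels and the two-sample impulse response are invented for the example:

```python
import numpy as np

def mix_with_send_reverb(tracks, sends, ir):
    """Sum dry tracks on one bus, feed per-track sends into a shared
    reverb bus, convolve that bus once, then sum the two buses."""
    dry_bus = np.sum(tracks, axis=0)
    send_bus = np.sum([level * t for level, t in zip(sends, tracks)], axis=0)
    wet_bus = np.convolve(send_bus, ir)      # one reverb instance for everything
    out = np.zeros_like(wet_bus)
    out[: len(dry_bus)] += dry_bus
    out += wet_bus
    return out

# A dry library gets a generous send; an already-wet library gets a small one.
dry_lib = np.array([1.0, 0.0, 0.0, 0.0])
wet_lib = np.array([0.0, 1.0, 0.0, 0.0])
mix = mix_with_send_reverb([dry_lib, wet_lib], sends=[0.8, 0.2],
                           ir=np.array([1.0, 0.5]))
```

The point of the shared send reverb is exactly what Simon describes: every library, however wet or dry it ships, ends up hearing the same room, with the send level controlling how far into that room each one sits.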

Joel Irwin

That's what happened to me on Monday. I had a moderately wet soundtrack which I played for some academics (I've been taking classes for the last 14 years and use them, among other things, to have my scores reviewed/critiqued) without the film (which is still in post). I wanted them to hear it at full volume by itself. It sounds great in the film but was much too wet standing on its own (and that was the first critique). Mixing at the volume it will actually be played at was exactly what I forgot to do.

Jonathan Price

I emulate the live recording process:

1) If I were in a recording studio, the orchestra would have a natural room sound. If the library I'm using has ambient mics and a wet sound, like the Spitfire libraries, I don't do anything at this step because the natural room sound is already there. If the library I'm using is dry, like Sample Modeling, I'll use a convolution reverb to approximate the placement/sound of a studio (not a church or concert hall, mind you...a recording studio like Sony or Abbey Road). For this, I use MIR Pro with the Studios soundpack: Teldex (wide). If I didn't have that, I'd probably use Spaces (which is great, but you don't have as much control over placement) or you could check out Ircam SPAT or Altiverb if your budget allows. The idea is to emulate the studio's natural reverb: no more/no less. If you get a chance to sit in on a recording at a studio like Sony, or Abbey Road, or Air, let your ears get used to the natural room sound. That's my goal with a convo reverb: just get the space and the instrument placement right.

2) The second step in a live recording is that the audio engineer will use a reverb (traditionally algorithmic, traditionally a Lexicon 480L) to extend the tails of the natural reverb. Something like a 24 ms pre-delay with the top end rolled off. So, for a virtual mix, I'll add an algo reverb to whatever artificial room (or baked-in room) I created in step 1. This is to taste...not a lot. You can listen to soundtrack recordings for this balance. There are reverbs out there that emulate the 480L (like NI's Reverb Classics), but there are a lot of algo reverbs to choose from. I use the inexpensive, but awesome sounding, Valhalla VintageVerb. It has a low CPU footprint, which is great for stems where I need a reverb instance on each of the stems.
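This second step - a pre-delayed algorithmic tail with the top end rolled off - can be caricatured with a feedback comb plus a one-pole lowpass. To be clear, this is nowhere near a 480L or VintageVerb: the comb delay, feedback amount, and filter coefficient below are arbitrary choices of mine; only the 24 ms pre-delay figure comes from the text.

```python
import numpy as np

def algo_tail(signal, sr=48000, pre_delay_ms=24.0, feedback=0.5, alpha=0.3):
    """Crude algorithmic 'tail extender': pre-delay the input so the
    direct sound stays clear, build a tail with a feedback comb, and
    darken it with a one-pole lowpass (the 'top end rolled off')."""
    pre = int(sr * pre_delay_ms / 1000.0)    # 24 ms pre-delay in samples
    delay = int(sr * 0.05)                   # 50 ms comb delay (arbitrary)
    out = np.zeros(len(signal) + pre + delay * 8)
    out[pre : pre + len(signal)] += signal
    for i in range(delay, len(out)):         # feedback comb builds the tail
        out[i] += feedback * out[i - delay]
    lp = np.zeros_like(out)                  # one-pole lowpass rolls off the highs
    for i in range(1, len(out)):
        lp[i] = alpha * out[i] + (1.0 - alpha) * lp[i - 1]
    return lp
```

As in Jonathan's workflow, something like this would sit lightly on top of the room from step 1, blended to taste rather than replacing it.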

That's my process. It's certainly subjective, but there is a "sound" you can hear in soundtrack recordings that is more or less in a certain bandwidth of taste. If you're shooting for that sound, A/B your tracks against something in a similar style that was mixed for a studio release.
