Convolution Reverb
The Convolution Reverb engine in the Editor works by multiplying two audio signals in the frequency domain, which is equivalent to convolving them in the time domain (in effect, a moving weighted average). A reverb effect is obtained when an 'impulse' file is convolved with the original wave file.
What is a Convolution reverb?
Convolution reverbs can be used to enhance an entire mix or bring out specific instruments within a mix. As with algorithmic reverbs, you can adjust a wide variety of parameters, including the decay time, wet and dry mix, and more. It is important to note that convolution reverbs utilize impulse responses of all kinds; many convolution reverb plug-ins come with at least one or two outdoor spaces as well as indoor ones. Altiverb is the most popular convolution reverb out there, and worth checking out. Rematrix Solo is a great convolution reverb with many usable presets to get you started, as is Waves IR1. If you would like to make your own 'impulses' for convolution reverbs, 'Acoustic Mirror', a Sony Sound Forge exclusive, is the ticket. Convolution reverb plug-ins also tend to include all the standard adjustable reverb parameters (decay time, adjustable levels for early and late reflections, reverb frequency response, etc.). Reverb settings can be customized even after loading an impulse response, allowing for extreme fine-tuning.
Convolution Reverb Plugin
Convolution reverb digitally simulates the reverberation characteristics of an acoustic space in software. It can recreate a realistic simulation of almost any acoustic space: cathedrals, symphony halls, your bedroom, etc.
Brief History:
The development of this sort of reverb is only about two decades old. Since the dawn of digital reverb in 1976, people had been using digital reverbs based on multiple feedback delay lines. These provided workable sounds, but they were not as realistic as a convolution reverb. In 1999 Sony came out with the DRE-S777, the first real-time convolution processor. It was a game changer for its time.
The earlier processors could take a day to process the impulse responses. Since then we have come a long way to DSP-based convolution processors, which is pretty amazing.
How does it work?
An audio sample such as a gunshot, snare hit or sine-sweep tone is played in a real space, and a microphone captures the impulse response of the place. This recording is then fed into the convolution processor, whose calculations treat any incoming signal to replicate the characteristic sound of that space. In simple words: an acoustic space is excited with a sound, the resulting impulse response of that space is recorded, and it is then used to process an entirely different signal.
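The core operation described above can be sketched in a few lines. This is a minimal, hypothetical example assuming NumPy and SciPy are available; both the "dry" signal and the impulse response here are synthetic stand-ins, not a real recording:

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
# Stand-in dry signal: a short decaying 440 Hz tone.
t = np.arange(0, 0.5, 1 / sr)
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Stand-in impulse response: exponentially decaying noise,
# mimicking the diffuse tail of a small room.
ir = np.random.randn(sr // 2) * np.exp(-6 * np.arange(sr // 2) / sr)

# Convolve dry signal with the IR; fftconvolve does the
# frequency-domain multiplication mentioned above efficiently.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))            # normalize the wet signal

# Blend wet and dry; the output is len(dry) + len(ir) - 1 samples long,
# because the reverb tail rings on after the dry signal ends.
mix = 0.7 * np.pad(dry, (0, len(wet) - len(dry))) + 0.3 * wet
```

Swapping the synthetic `ir` for a real recorded impulse response is all it takes to make this "sound like" an actual space.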
In terms of processing, the software handles two forms of reflection:
- Early Reflections:
As the name suggests, these are the reflections that are captured first. They are discrete and appear as distinct peaks.
- Diffused Reflections:
Sound strikes various surfaces, creating the first reflections. These reflections in turn bounce off other surfaces, creating further reflections, and because there are many surfaces and objects, the process repeats. The resulting reflections are dense and appear as continuous noise.
What is an Impulse Response?
A sample recorded by exciting an acoustic space is called an Impulse Response (IR). It contains all the information regarding the reverberation of the space. In the context of acoustical analysis, you might also think of an impulse response as the acoustical "signature" of a system. The IR contains a wealth of information about an acoustical system, including arrival times and frequency content of the direct sound and discrete reflections, reverberant decay characteristics, signal-to-noise ratio, clues to its ability to reproduce intelligible human speech, and even its overall frequency response.
Most Common methods to create Impulse Responses:
- Sine Sweep method:
A long sine-sweep tone covers all audible frequencies.
This means it excites reverberation at every frequency, producing a high-quality simulation of the space.
- Short Impulse method:
Sound engineers create a very short impulse like a clap, balloon burst or gunshot in an acoustic space and record the resulting impulse response. This method is very common and is used by companies making such reverbs.
Generally speaking, unless you’re recording IRs inside a concert hall, it’s more rewarding to seek something that sounds good than something that’s ‘correct’. You can also push this principle further: put mics near walls, under sheets, under seats, in sinks. Experimentation is the whole point of making IRs yourself.
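The sine-sweep method described above can be sketched numerically. In this toy example (NumPy assumed) the "room" is a hypothetical impulse response with two echoes, and the recording is simulated rather than captured with a microphone; real measurements typically use an exponential sweep with a matched inverse filter, but a linear sweep plus regularized spectral division keeps the idea short:

```python
import numpy as np

sr = 8000                      # small sample rate to keep the toy example fast
dur = 1.0
t = np.arange(0, dur, 1 / sr)
f0, f1 = 20, 3000
# Linear sine sweep from f0 to f1 Hz.
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t**2))

# Hypothetical "room": a direct path plus two discrete echoes.
# In practice this would be whatever the microphone picks up in the space.
true_ir = np.zeros(sr // 4)
true_ir[0] = 1.0
true_ir[int(0.05 * sr)] = 0.4      # early reflection at 50 ms
true_ir[int(0.12 * sr)] = 0.2      # later reflection at 120 ms
recorded = np.convolve(sweep, true_ir)   # simulated room recording

# Recover the IR by deconvolution: divide the recording's spectrum by
# the sweep's spectrum, with a tiny regularization term to avoid
# blowing up at near-zero frequency bins.
n = len(recorded)
S = np.fft.rfft(sweep, n)
R = np.fft.rfft(recorded, n)
ir_est = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + 1e-8), n)[: len(true_ir)]
```

The recovered `ir_est` closely matches `true_ir`: a strong peak at time zero followed by the two echoes.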
Commonly used Convolution Reverbs:
- Space Designer – Logic Pro
- Altiverb – Audio Ease
- TL Space Native Edition – Trillium Lane Labs
- IR1 – Waves
Additional Resources & Source Texts
Convolution reverb is said to be the go-to tool for realistic artificial reverberation. But what needs to be true so it’s really authentic?
At first sight, convolution reverb technology looks quite fool-proof. In theory, you can practically sample the exact behavior of an acoustic space or reverb algorithm. Or can you? Let’s look a bit closer at how it works.
Convolution Reverb Basics
The general concept behind convolution reverb is to capture the impulse response of an acoustic space. The impulse response is the signal you get if you feed any linear and time-invariant (“LTI”) system with a perfect impulse. In digital signal processing, such an impulse is a single full-scale sample and only zeros otherwise. Please understand that I’d rather not confuse you with the definition of such an impulse in the analog world right now.
If you know the impulse response, you know the exact behavior of an LTI system. At least in the digital world it is obvious why that is. Every digital recording is just a series of many impulses, and an LTI system responds to each one in the same way: by outputting a scaled version of its impulse response. It’s thus trivial to reproduce this behavior artificially using digital signal processing. It gets a little harder if you want to do it in real time, latency-free and with a reasonable amount of CPU. But the basic technique is trivial.
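This "sum of scaled impulse responses" view is exactly what convolution computes. A minimal sketch with arbitrary synthetic signals (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
ir = rng.standard_normal(32)          # arbitrary impulse response
x = rng.standard_normal(16)           # arbitrary input signal

# Treat the input as a series of impulses: for each input sample,
# the system emits a scaled, shifted copy of its impulse response.
y = np.zeros(len(x) + len(ir) - 1)
for n, sample in enumerate(x):
    y[n:n + len(ir)] += sample * ir

# Summing those copies is, by definition, convolution.
assert np.allclose(y, np.convolve(x, ir))
```

The loop above is the conceptually simple (but slow) version; real-time convolution reverbs use FFT-based partitioned convolution to do the same computation with low latency.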
A practical problem with this approach is also immediately obvious: an impulse response is just a static measurement. Your chances of changing some properties like reverberation time or source position after the fact are very limited, compared to other artificial reverb techniques. To realistically place orchestral instruments in a concert hall, you need lots and lots of impulse response measurements. Also, things like the sound source and microphones used for impulse response measurement affect the outcome. Especially their directivity. However, that’s not what I want to focus on today.
Let’s revisit one sentence from above again: If you know the impulse response, you know the exact behavior of an LTI system. I highlighted the important parts here, where two questions arise:
- Does the assumption of an LTI system hold for reverb?
- Can we exactly determine the impulse response?
Today we’ll focus on the first question. Next week we’ll look at the second one.
Is Reverb Linear and Time-Invariant?
For now I want to focus on the original meaning of a reverb: reverberation in an acoustic space such as a room, a cathedral or whatever place in the real world comes to your mind. It is generally assumed that these environments behave – acoustically – in a linear and time-invariant way.
Linearity means that it doesn’t matter if you reverberate a mixture of several signals together, or reverberate the individual signals one after another and then mix the results. The result will be the same. Similarly, the reverberation is the same – relatively – no matter how loud the signals are.
Time-invariance means that the result of feeding a signal through the system is the same at any instant in time; the system’s characteristics don’t change over time.
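Both properties can be verified numerically for the convolution model itself. A small sketch with synthetic signals (NumPy assumed); a real room would only satisfy these checks approximately, which is the whole point of the discussion that follows:

```python
import numpy as np

rng = np.random.default_rng(1)
ir = rng.standard_normal(64)          # stand-in impulse response

def reverb(x):
    # Ideal LTI "room": plain convolution with the IR.
    return np.convolve(x, ir)

a = rng.standard_normal(256)
b = rng.standard_normal(256)

# Linearity: reverberating a mix equals mixing the reverberated parts,
# and scaling the input just scales the output.
assert np.allclose(reverb(a + 0.5 * b), reverb(a) + 0.5 * reverb(b))

# Time-invariance: delaying the input merely delays the output.
delay = 10
shifted = np.concatenate([np.zeros(delay), a])
assert np.allclose(reverb(shifted)[delay:delay + len(reverb(a))], reverb(a))
```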
So is this true for an acoustic environment?
Generally, sound is rapidly moving air causing small local changes in air pressure, which in turn set other air molecules in motion, and so on. To get from moving air and air pressure to a set of wave equations describing a linear sound field, some assumptions and prerequisites need to be met:
- Magnitude of sound pressure changes are small compared to the static air pressure.
- Consequently, static air pressure must be constant.
- The sound field must be irrotational, which means air doesn’t move around in circles.
Only if these criteria are true can we assume the sound field to be linear.
Under Pressure
The first criterion is easily met as long as we are talking about non-lethal sound pressure levels, from the peak of Mount Everest on down. At sea level, 120 dB SPL is still only around 1/5000th of the static air pressure. Lucky us!
Speaking of static air pressure: we know from the weather report that it is not constant. So there’s that. But at least it changes only very slowly. Nevertheless, we have a problem here, because the Wiener Musikverein might sound different depending on the current weather. Anyone in for a blind listening study?
Similarly, the air in the acoustic space should generally be at rest. Larger air movements cause drastic changes in sound radiation properties. At any open-air concert, when you’re a few hundred meters from the stage, you’ll notice that during gusts of wind the sound becomes louder or softer for a short time, and high frequencies are damped more or less depending on wind direction.
So it looks like we already have a problem here. But what about this irrotational thing?
You Spin Me Right ‘Round
In the free field, a sound field with only air and no other obstacles or boundaries, the sound field is indeed irrotational. The same is still true if boundaries and obstacles are present, but large compared to the wavelengths of sound.
In real-life situations however, this is not the case. There are objects, walls and edges of all sizes (which is actually a good thing – perceptually). What happens is that when air moves in the vicinity of sharp edges, it often creates little whirlwinds, which invalidate our irrotationality criterion. You can notice the problem for example with closed loudspeaker designs when the enclosure is not perfectly airtight. Bass frequencies can then sometimes produce subtle wind noises (some kind of pumping white noise).
The same goes for easily moving objects that collide with other objects. A typical example are door panels in your car that start rattling when the subwoofer goes to work.
Of course, these rattling and blowing issues are extreme examples that you don’t want anyway, so there’s no point in whining about the fact that impulse responses cannot accurately capture it. But still there are very subtle nonlinear effects in real acoustic spaces, especially with very long reverb times. The result is a certain amount of chaos and time-variance in real rooms that will get lost in impulse response measurements.
Some Perspective
Unfortunately I don’t know about any studies to quantify how strong the influence of these small chaotic ingredients actually is to a realistic reverb tail. The experimental design would be very difficult anyway. And then it’s still unclear how important this would be perceptually.
The thing is, human perception doesn’t care so much about the finest details of reverberation. In fact, it goes to some lengths to ignore most of this information. We recognize a space by very basic characteristics such as the sequence and coloration of early reflections and the frequency-dependent buildup and decay of the reverb tail. Very fine differences in the exact reflections don’t matter much once they are sufficiently dense.
In non-huge spaces, the rather academic effects described above likely don’t build up sufficiently before the reverb tail decays into inaudibility. As reverberation is all about small effects that are repeated over and over in chaotic feedback loops, it would take some time for such small effects to become dominant. The additional decorrelation however can have an impact on perceived spaciousness and audibility of late reverberation in conjunction with long sustained tones (more on that next week!).
Conclusions
Although acoustic reverberation can be largely considered LTI, there are sources for time-variance and nonlinear effects. The larger and more complex the space, the more these may (subtly) matter. Nevertheless, for the most important audible features, convolution reverb is a great way to convincingly simulate real acoustic spaces. And it’s no doubt the most exact and realistic method available to us.
Next week we’ll start out with something we omitted so far: algorithmic reverbs. And afterwards we’ll have a look at some methods to actually measure impulse responses. And at how the results are affected by noise, time-variance and other disturbances.
Head right on to Part 2!