Understanding Audio Effects
By properly understanding audio effects, you can add flavorful spice and unique character to your recordings. They can add warmth to acoustic instruments, breathe life into your virtual instruments, or soften a brittle-sounding sample. They can also strengthen an electric guitar track, or create the illusion that a vocal was performed in a great-sounding cathedral.
In This Article You Will Learn:
What audio processors can do to a recorded audio signal.
What the difference between an effects processor and a signal processor is.
Practical uses for modifying a recorded signal.
The unique functions of popular processors.
The difference between hardware and software audio processors.
The two root categories of audio processors are:
Audio Effects (FX) Processors and Signal Processors.
Audio processors edit the waveform and then send the modified signal to the output. They can do this destructively, by recording a new audio file over the old one with the effects printed in, or they can preview the effects in real time from within the multitrack recorder. Real-time previewing lets you monitor the changes and compare the processed signal against the original before the recording is permanently rewritten. Effects processors manipulate the waveforms they process, but in a different way than signal processors do.
Effects processors can be understood as artificial additions to a recording. For example, adding chorus duplicates a track, delays the copy slightly, and shifts its pitch a little; the result is a fuller, doubled, more pronounced instrument section. Signal processors can be understood as natural modifications that control the signal's output. For example, equalization adjusts the level of specific frequencies. A modified frequency response can be used to boost a hi-hat by 6 dB at 15 kHz, giving it a brighter, more sizzling sound.
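To put that 6 dB figure in perspective, decibels map to linear amplitude by a factor of 10^(dB/20), so a +6 dB boost roughly doubles the amplitude in that band. Here is a tiny Python sketch of the conversion; the function name and example values are mine, for illustration only, and not part of any particular plugin:

```python
def db_to_gain(db):
    """Convert a decibel change into a linear amplitude factor."""
    return 10 ** (db / 20.0)

print(db_to_gain(6))    # ~1.995 -- a +6 dB boost roughly doubles the amplitude
print(db_to_gain(-6))   # ~0.501 -- a -6 dB cut roughly halves it
```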
The audible frequency spectrum is the range of frequencies heard by the human ear, generally understood to be 20 Hz – 20 kHz.
Note: Audio processors only process audio. (There is now such a thing as MIDI effects, but they are much different and are not covered in this section.) This means that every software audio effect you use within your home recording environment will demand additional resources from your computer.
Manipulating audio demands far more DSP power than handling MIDI data. FX processors and audio tracks can be taxing on your CPU, so use discretion according to your digital audio workstation's processing power. Hardware processors can be integrated units or standalone modules; both are versatile, far from outdated, and very capable audio processors. Hardware units are usually more expensive than competing music software, but they can be more reliable because they do not depend on a computer. Each has its advantages and disadvantages.
There are hardware FX processing units and software FX processors.
Software processors, also known as plugins, are typically designed to emulate hardware units. Just as hardware ranges from decent to state of the art, so do software processors. Both do the same thing; they simply offer different forms of interface and control. In most cases, since a software FX processor replicates a hardware model, the only practical difference is that instead of turning knobs with your hands, you adjust them on screen with your mouse or a control surface. With that in mind, the following covers the general functions and results of popular audio processors used in the modern pro or home recording studio.
Signal Processors
Compressor – Acts as an automatic volume control that regulates the signal when it gets too loud. (A minimal code sketch of this idea, along with a noise gate, follows this list.)
Limiter – Prevents the audio signal from peaking above a preset level that you determine. Limiting helps keep an overall balance, as inconsistent vocal takes and DAT overloads are common in all recording studios.
Noise Gate – Gates are used to clean up the pauses between active parts of your recordings. They automatically reduce the gain when the signal falls below your preset level (e.g., the breathing or moving around that gets recorded between vocal sections). Gates are very helpful and are best learned through hands-on tweaking.
Equalizer – Equalization is the process of adjusting how a frequency range responds (adjusting frequencies to affect their placement in the mix). It is generally used to accentuate a frequency range or cut a certain frequency, depending on the producer's vision and recording techniques.
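As referenced above, here is a minimal, hypothetical Python sketch of the compressor and noise gate ideas, assuming the audio is a NumPy array of samples in the -1.0 to 1.0 range. The threshold and ratio values are made-up examples, and real units smooth their response with attack and release times rather than acting on each sample independently:

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Simplified compressor: above the threshold, only 1/ratio of the
    excess level passes through to the output."""
    out = samples.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

def noise_gate(samples, threshold=0.05):
    """Simplified noise gate: mute anything quieter than the threshold,
    e.g. breaths or rustling between vocal phrases."""
    out = samples.copy()
    out[np.abs(out) < threshold] = 0.0
    return out
```

A limiter behaves like the compressor above with a very high ratio, so almost nothing gets past the preset ceiling.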
Effects Processors
Chorus – Produces a slightly delayed, pitch-shifted copy of the signal that makes the recording sound fuller. Chorus gives the illusion of additional performers or accompaniment and subtly highlights a given section.
Flange – Flanging replicates the original sound and then delays the copy by a small, constantly varying amount. Mixed back with the original, this creates a swirling, swooshing sound. Again, these effects are audible, so it is much easier to understand them by using them.
Reverb – Is simply a wash of sound reflections that slowly fades away. Reverb mimics what you hear when you shout into an auditorium; that slowly fading sound is the decay of the reverberation.
Delay – A delay processor simply makes a copy of the signal lag behind the original. This lag, the interval between the original signal and the delayed one, is what you hear.
Echo – Whereas reverb is the decay of the original sound, echo can be explained as distinct repetitions of it. For example, shout into the Grand Canyon and you hear a loud reflection of what you shouted repeated a few times; that is the echo, while the slow decay of the sound is reverb. (A simple echo/delay sketch follows this list.)
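As mentioned under Echo, here is a minimal, hypothetical Python sketch of a delay/echo, again assuming a mono NumPy array of samples. The delay time, feedback amount, and repeat count are made-up example values; real plugins add filtering, tempo sync, and continuous feedback paths:

```python
import numpy as np

def echo(samples, sample_rate=44100, delay_seconds=0.35, feedback=0.4, repeats=4):
    """Simplified echo: mix progressively quieter, delayed copies of the
    signal back in. Each repeat arrives delay_seconds later and is scaled
    by the feedback amount, like the trailing repeats in a canyon."""
    delay_samples = int(delay_seconds * sample_rate)
    out = np.concatenate([samples, np.zeros(delay_samples * repeats)])
    for n in range(1, repeats + 1):
        start = n * delay_samples
        out[start:start + len(samples)] += samples * (feedback ** n)
    return out
```

Shrink the delay down to a few milliseconds and modulate it, and you are in the territory of the chorus and flange effects described above.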
Audio processors are available for any home or pro recording studio as hardware units or software plugins. These tools can help lift a second-rate mix toward commercial quality. The first step is understanding what audio processors do; the next is developing techniques for applying the right effects at the right time to add dynamic, realistic character to your audio tracks.