Compared with live recordings, MIDI sequences often lack warmth and expressiveness. That is in part because many electronic musicians regard MIDI sequencers merely as recording devices with a few tools for correcting notes, timing, and basic dynamics. I can't count the number of times I've seen MIDI musicians play the notes, tighten up the parts, and simply move on to the next track.
In fact, MIDI recording offers a deep synergy between the sequencer and the inner workings of the synthesizer's sound-shaping capabilities. The ability to change virtually any aspect of a performance at any phase of the creative process is an immensely powerful creative tool.
Expressive sequencing is achieved using three main elements: the synth architecture, the sequencer, and the controller. All too often, articles about MIDI sequencing focus on just one of those aspects. For a truly animated musical performance, it's vital to consider all three components as a whole.
To that end, I enlisted the help of artists whose work reveals a deep understanding of MIDI and sound design coupled with stylistic know-how. The result is a wide-ranging pool of ideas from the standpoints of synthesizer programming, sequencing, and control options.
Inside Your Synth
Whether your synthesizer is a sample-playback unit, a physical-modeling synth, or something else, it shares features common to virtually all synths, including envelopes, low-frequency oscillators (LFOs), and other modulation capabilities. Those features primarily control timbre, loudness, and pitch. Modulation sources such as LFOs and envelope generators (EGs) can run free, but your best option for lively, nonrepetitive sequencing is to bring them under real-time control.
For example, LFOs are great candidates for modulation with Aftertouch. You can supplant the periodic effect of LFOs with a more humanized effect by controlling their depth or speed in real time. Many late-model synths offer knobs, sliders, and other controls that govern a variety of modulation features. Those controls often transmit Control Change (CC) messages instead of less efficient, bandwidth-consuming System Exclusive (SysEx) messages. If your synth offers such controls, you can capture and manipulate them in your sequencer.
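To make the CC idea concrete, here is a minimal sketch of building the raw three-byte Control Change message a synth's front-panel knob might transmit, and of rescaling incoming Aftertouch (channel pressure) into a CC that governs LFO depth. The choice of CC 1 (the mod wheel) and channel 0 is an assumption for illustration; the controller number your synth actually maps to LFO depth will be listed in its MIDI implementation chart.

```python
# Sketch: raw MIDI Control Change bytes, and a hypothetical mapping from
# channel Aftertouch pressure to an LFO-depth CC. CC number 1 (mod wheel)
# and channel 0 are illustrative assumptions, not universal mappings.

def control_change(channel: int, controller: int, value: int) -> bytes:
    """Return the 3-byte MIDI CC message: status byte, controller, value."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("channel must be 0-15; controller/value 0-127")
    return bytes([0xB0 | channel, controller, value])  # 0xBn = CC on channel n

def aftertouch_to_lfo_depth(pressure: int) -> bytes:
    """Turn incoming channel pressure (0-127) into an LFO-depth CC message."""
    return control_change(channel=0, controller=1, value=pressure)
```

A three-byte CC like this is far lighter on MIDI bandwidth than a SysEx packet, which is why captured knob moves sequence so cleanly.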
Once you've grasped the capabilities of your synth's sound-shaping tools, what do you want to do with them? Whether his synth sounds are emulative or not, Rob Mounsey looks for elements that evoke acoustic instruments. “I always try to make sounds that suggest that they could be some sort of real instrument that you haven't run into,” he says. “I try to create the illusion that you've found an unusual instrument that people haven't heard yet — one that could actually happen in an acoustic space with acoustic materials. The way to get there is to carefully analyze acoustic instruments that you like to hear.”
Lyle Mays also finds inspiration in the behavior of acoustic instruments. One of his signature sounds is a swooping, ocarina-like synth patch. Mays explains the acoustic orientation of that sound: “It reflects the way pitch responds when a string is plucked; the harder you pluck it, the more out of tune it is at first before it settles. The other acoustic principle is the way ensembles, especially young children, start things out of tune and then gradually end up more in tune. I was thinking specifically of a grade-school choir of ocarinas, and the pitch attacks are just all over the place. The kids are listening, so they eventually get closer in tune with each other.
“That's an oversimplified version of what I'm talking about. It's much subtler in the synth sound, but one of the oscillators does start sharp and then comes down in pitch, and the other one hits the pitch. There's pitch information on every attack.” Routing Velocity to control oscillator pitch adds yet another acoustic behavior: plucked strings, in particular, stretch and drift further out of tune the harder they are struck.
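Mays's Velocity-to-pitch idea can be sketched as a simple envelope: the detuned oscillator starts sharp by an amount proportional to Velocity and settles exponentially toward true pitch. The 30-cent maximum and 80 ms settling time below are illustrative values I chose, not figures from the article.

```python
import math

# Sketch of a velocity-scaled pitch-settle envelope: harder notes start
# further sharp, then decay to the true pitch. MAX_DETUNE_CENTS and
# DECAY_SECONDS are assumed values for illustration only.

MAX_DETUNE_CENTS = 30.0   # detune at Velocity 127, time zero
DECAY_SECONDS = 0.08      # time constant of the settle

def detune_cents(velocity: int, t: float) -> float:
    """Pitch offset in cents, t seconds after the note attack."""
    initial = MAX_DETUNE_CENTS * (velocity / 127.0)
    return initial * math.exp(-t / DECAY_SECONDS)
```

On a real synth you would build this from a pitch EG whose level is modulated by Velocity; the function just shows the shape such a routing produces.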
Sampled instruments often provide a superficial realism, but sustained listening can be boring. The static nature of samples often works against a natural feel and sound, but using sampled instruments doesn't have to be a sonic dead end. If you understand your synth's architecture reasonably well, you can find ways to imbue samples with new life and realism.
Sometimes all that's missing are the imperfections that occur naturally in acoustic sounds. George Whitty enhances the realism of his sounds by using waveforms from unrelated instruments. “I used to create my Hammond sounds by putting the [frequency modulation (FM)] part of a Yamaha SY99 through a SansAmp to dirty it up, but that messed around with the bottom end too much,” he says. “The most suitable thing to create Leslie grit is a highpassed alto-saxophone wave. The gritty grunge of a real Hammond through a Leslie cabinet creates an aggregate effect that's not just a bunch of sine waves added up, but a kind of dirty, tubey thing. In trying to simulate that dirt, the high end of the saxophone samples works great; I filter out most everything below. I can make a sampled string section play more expressively by assigning a bit of bandpassed distorted guitar to the expression pedal to add some bite as things get more intense.”
Out of Range
Occasionally, the right sound exists in the outer regions of a wholly unrelated instrument. Jimi Tunnell carefully tests his sounds outside as well as within the usual playing range suggested by a patch. He finds that the categories suggested by preset titles can often lead you to overlook material that's viable for completely different applications.
“Don't look at the name of the sound,” Tunnell says. “Just because a patch is named ‘Flaming Gibbons’ doesn't mean its only possible use is to imply monkeys on fire. Forget the names and listen first to the general shape and timbre of the sound.”
I have a background in bluegrass and country music, and I've often sought the perfect pedal-steel-guitar sound. I've heard patches that approximate the instrument's slow, weepy characteristics, but I've rarely heard a patch that captures its higher registers or one that conveys the fast staccato soloing techniques I've heard from some steel players. However, when I accidentally sent the wrong Program Change message to my Roland Sound Canvas, I heard just the right sound from its fretless-bass patch. To help complete the country tune, I found an effective Telecaster-like sound in the General MIDI (GM) Clavinet patch; it was perfectly nasal, though a tad synthetic sounding. With a bit of adjustment to the filter's cutoff frequency, I found just what I needed.
David Battino takes his cue from movie sound design. “Often, technically accurate samples sound wimpy and unrealistic in context, so you need to exaggerate them, subtly adding timbres the mind expects to hear,” he says. “For a movie soundtrack, I had to create an electric-bass solo for an actor to match during filming. I set up a layer in a Korg T3 to trigger a fret-squeak sound in a very limited Velocity range — something like 55 to 64 out of 127 possible values. That meant the squeaks appeared almost randomly.
“When I saw the final cut of the movie months later, I initially thought they'd replaced my performance with a real player. I doubt a real bass would have produced those squeaks, but they lent a certain organic realism to the performance. The Roland SC-8850 Sound Canvas and the Yamaha Motif, among other synths, include numerous performance artifacts such as scrapes and breath noises that you can use to desterilize a track.”
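Battino's narrow-Velocity-window trick boils down to a simple layer rule: the artifact layer sounds only when a note's Velocity lands inside the band, so squeaks appear quasi-randomly as playing dynamics drift through it. The 55-to-64 window is from his account; the layer names here are made up for the sketch.

```python
# Sketch of a velocity-window layer: a fret-squeak sample that triggers
# only when Velocity falls in a narrow band (55-64, per the article).
# Layer names ("bass", "fret_squeak") are hypothetical.

SQUEAK_LOW, SQUEAK_HIGH = 55, 64

def layers_for_velocity(velocity: int) -> list[str]:
    """Return the layers a note at this Velocity triggers."""
    layers = ["bass"]                     # main layer always sounds
    if SQUEAK_LOW <= velocity <= SQUEAK_HIGH:
        layers.append("fret_squeak")      # artifact layer, fires sporadically
    return layers
```

Because expressive playing rarely sits at one Velocity, only some notes cross the window, which is what makes the artifacts feel organic rather than programmed.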
Mounsey likes to beef up sampled sounds with analog synthesizer waveforms. “People have been layering samples with analog stuff for a while; that's an old trick. You can hide deficiencies in the sample that way and make it more even or full. I like to take a sampled sound and mix it in with something different that's filling in certain holes, maybe rounding out frequency ranges that I miss or coloring the sound a little differently.”
One reason that sampled single-instrument sounds usually fall short is that they don't evince the complex timbral changes that acoustic instruments go through. Simply layering another waveform with the original isn't going to do the job; you need to continuously vary the balance between one layer and the other. More importantly, you need to do it in a way that the sequencer can capture.
Stock fretless-bass samples sound a bit too muddy and static for my taste, for example. Instead of relying on those samples, I use a dual-oscillator patch with a sampled, fingered electric bass on one oscillator and a tuba sample on the other. (Other sampled brass instruments such as French horn also work.) I control the second oscillator's amplitude (and to a lesser extent, its filter frequency) with Aftertouch. Bearing down on the keys brings up the tuba waveform, producing that hornlike Jaco Pastorius tone. You can also use Aftertouch or Modulation to bring in a light, slow LFO to get that characteristic slow, wide vibrato, but be careful not to overdo it.
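The Aftertouch crossfade described above can be sketched as two gain curves: the bass layer stays at full level while key pressure fades in the tuba layer and, to a lesser extent, opens its filter. The scaling factors below are illustrative assumptions, not settings from any particular patch.

```python
# Sketch of the fretless-bass layering: channel pressure (0-127) fades in
# the tuba oscillator over a constant bass layer, and opens the tuba's
# filter more gently. Base cutoff and 50% filter range are assumed values.

def layer_gains(pressure: int) -> tuple[float, float]:
    """Return (bass_gain, tuba_gain) for a given channel pressure."""
    tuba = pressure / 127.0          # tuba fades in as you bear down
    return 1.0, tuba                 # bass layer stays at full level

def tuba_filter_cutoff(pressure: int, base_hz: float = 800.0) -> float:
    """Open the tuba layer's filter 'to a lesser extent' with pressure."""
    return base_hz * (1.0 + 0.5 * pressure / 127.0)
```

Riding the keys then morphs the composite tone continuously, rather than switching between two static samples.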
Even if your goal is a replica of an acoustic instrument, don't forget to listen carefully to unabashedly synthetic waveforms; you never know when a little fine-tuning with filters or envelopes will yield the basis for a perfect instrumental sound. For example, to imitate the nasal qualities of a fingered electric bass, I've had great success using pulse waves at roughly 25 percent pulse width. By subtly modulating pulse width, you can vary the virtual picking hand's distance from the bridge; as pulse width approaches 50 percent, you can simulate the rounder, more hollow tone achieved by playing a bass closer to the neck.
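A naive (non-band-limited) pulse oscillator shows the pulse-width idea directly: at a 25 percent duty cycle the waveform is asymmetric and nasal, and as the width approaches 50 percent it becomes a square wave with the rounder, hollower tone described above. This is a bare sketch for listening experiments, not a production oscillator.

```python
# Sketch: a naive pulse oscillator. width is the duty cycle -- 0.25 gives
# the nasal, fingered-bass character; 0.5 is a square wave (rounder tone).
# Non-band-limited, so it will alias if rendered at audio rates as-is.

def pulse_sample(phase: float, width: float = 0.25) -> float:
    """One sample of a pulse wave; phase in cycles, width in (0, 1)."""
    return 1.0 if (phase % 1.0) < width else -1.0

def pulse_cycle(n: int = 8, width: float = 0.25) -> list[float]:
    """One full cycle rendered at n samples per period."""
    return [pulse_sample(i / n, width) for i in range(n)]
```

Modulating `width` slowly from a sequencer-recorded controller is one way to animate the "virtual picking hand" the text describes.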
It's a good idea to become acquainted with your synth's raw, unprocessed waveforms. Familiarity with your palette of waveforms can suggest new sounds or offer alternatives to old favorites.

This is an excerpt from the article "Sequencing with Style."