Inventor Roger Linn shared this demonstration of using the LinnStrument as a controller with Sample Modeling’s The Viola virtual instrument.
The video demonstrates the range of expression possible with a Multidimensional Polyphonic Expression (MPE) MIDI controller, especially when paired with a virtual instrument that can make use of the MPE data.
What is Multidimensional Polyphonic Expression?
MPE is a new specification for communicating musical performance gestures, polyphonically, that is compatible with MIDI 1.0.
Traditional MIDI keyboard controllers capture one dimension of movement – your finger moving up and down on a key. More advanced controllers capture and communicate three dimensions of finger movement.
For example, the LinnStrument is designed to capture these three dimensions of expression, for each note played:
- Velocity and finger pressure (Z axis) are typically used to vary note loudness, as on traditional MIDI controllers. Most MIDI controllers capture velocity, and some capture channel aftertouch, but the LinnStrument captures finger pressure polyphonically, for each note.
- Finger left-right movement (X axis) is used to vary pitch, both discretely, like a traditional control keyboard, and continuously, for bends and vibrato.
- Finger forward-backward movement (Y axis) is used to vary timbre. Most control keyboards are limited to a mod wheel, which modulates all voices at once; the LinnStrument allows per-note modulation.
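Under the hood, MPE achieves per-note expression by placing each sounding note on its own MIDI channel, so that channel-wide messages (pitch bend, channel pressure, CC 74) apply to one note at a time. The sketch below, with illustrative channel and value choices, shows the raw MIDI 1.0 bytes that carry the three axes described above:

```python
# A minimal sketch of the MIDI 1.0 messages behind MPE's three axes.
# In MPE, each note is assigned its own "member" channel, so these
# per-channel messages become per-note gestures. Channel and value
# numbers below are illustrative, not taken from any specific device.

def note_on(channel, note, velocity):
    """Note On: the initial strike velocity (part of the Z axis)."""
    return [0x90 | channel, note, velocity]

def channel_pressure(channel, pressure):
    """Channel Pressure (aftertouch): continuous finger pressure (Z axis)."""
    return [0xD0 | channel, pressure]

def pitch_bend(channel, bend):
    """Pitch Bend: left-right finger movement (X axis).
    `bend` is a 14-bit value; 0x2000 means no bend."""
    return [0xE0 | channel, bend & 0x7F, (bend >> 7) & 0x7F]

def timbre(channel, value):
    """CC 74: forward-backward finger movement (Y axis) in MPE."""
    return [0xB0 | channel, 74, value]

# One note, on member channel 1, bent slightly sharp while pressure
# and timbre vary - all without affecting any other sounding note:
messages = [
    note_on(1, 60, 100),          # middle C, firm strike
    channel_pressure(1, 90),      # pressing harder after the strike
    pitch_bend(1, 0x2000 + 512),  # sliding the finger to the right
    timbre(1, 96),                # sliding the finger forward
]
```

Because a conventional (non-MPE) synth sends all notes on one channel, a pitch bend like the one above would sweep every held note at once; routing each note to its own channel is what makes the bend, pressure, and timbre messages independent per finger.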
Beyond Sample Playback
Capturing this richer range of expression is only useful if paired with software or an electronic instrument that can make use of it.
Sample Modeling’s virtual instruments go beyond sample playback, modeling the waveforms and how they are affected by physical gestures. Here’s how they describe it:
Identification of the “fingerprints” of high quality instruments has been carried out by state-of-the-art recordings of chromatically sampled notes, typical articulations, and expressive phrases, played by excellent professionals in an anechoic environment.
An “adaptive model”, based on the physical properties of the instrument and exploiting knowledge of its performance characteristics, was then constructed. The purpose of the model was to minimize the differences between the real phrases and those played by the virtual instrument. Sophisticated technologies, including proprietary “harmonic alignment” (ref.1), de/reconvolution with modal resonances (ref.2), innovative techniques for sample modulation, along with advanced AI MIDI processing, are used for real time construction of all articulations and morphing across dynamics, vibrato, legato and portamento.
The result is a user-friendly virtual instrument with few MIDI controllers, which can be played in real time or from a sequencer, in standalone mode or as a plugin, for PC or Mac.