This video captures a performance of Tokyo Lick, by Jeffrey Stolet, using custom software and infrared sensors. The system converts his hand gestures into complex piano music.
Stolet describes his system as a “new paradigm for virtuoso music performance”.
Simple Input, Complex Output: Performance and Data Mapping in Tokyo Lick
Challenges in the conceptual design and implementation of human/musical-instrument interfaces have a rich and nuanced history. Generally, a musical instrument has thrived when it could reliably produce the desired musical outcome. Traditional instruments typically exhibit a simple one-to-one relationship between input and output (e.g., one piano key is depressed, one note is sounded). Current technologies release us from the shackles of such one-to-one input-output models and permit the creation of new types of musical generation. At the University of Oregon we have been involved with projects in which musical robots perform music, eye-movement data control sound and video, and infrared sensing devices control sonic and video events.
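The contrast between the two mapping models can be sketched in a few lines of code. This is a minimal illustration, not a representation of Stolet's actual software; the function names and the arpeggiated-chord response are hypothetical.

```python
# Sketch of the two input/output models described above.
# MIDI note numbers are used for both input and output (60 = middle C).

def one_to_one(key: int) -> list[int]:
    """Traditional instrument model: one key press sounds one note."""
    return [key]

def one_to_many(key: int) -> list[int]:
    """Software mapping: a single input can trigger a whole gesture,
    here an arpeggiated major chord built on the incoming note."""
    return [key, key + 4, key + 7, key + 12]

print(one_to_one(60))   # [60]
print(one_to_many(60))  # [60, 64, 67, 72]
```

The point is that once the mapping lives in software, the relationship between a performer's action and the resulting sound is a design decision rather than a physical constraint.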
In his program, Mr. Stolet will focus on the technology and the human-performance elements in Tokyo Lick, his composition for infrared sensors, custom interactive software, and MIDI piano. He performs Tokyo Lick by moving his hands through two invisible infrared spheres and directing the data derived from those motions to algorithms residing in customized interactive software created in the Max multimedia programming environment.
Tokyo Lick contains no sequences or pre-recorded material; Mr. Stolet performs every note in real time. Using a technique he refers to as “algorithm flipping,” he can rapidly change the specific algorithm or algorithms governing the response to the incoming MIDI control data. He actuates these algorithmic changes through pre-composed schedules, musical contexts, or explicit intervention. Taken together, these techniques provide a conceptual framework for practical input/output mapping (action → specified outcome) and for control and performance flexibility, while offering a truly new paradigm for virtuoso music performance.
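The idea of algorithm flipping can be sketched as a dispatcher whose active mapping function is swapped mid-performance. Everything here is a hypothetical illustration under stated assumptions: the algorithm names, the note logic, and the control-value ranges are invented, not taken from Tokyo Lick.

```python
# Hypothetical sketch of "algorithm flipping": the function that maps
# an incoming MIDI control value (0-127) to notes can be replaced at
# any moment, so the same gesture yields different musical responses.

def sparse(cc: int) -> list[int]:
    """Map the control value to a single low note."""
    return [36 + cc // 16]

def dense(cc: int) -> list[int]:
    """Map the same control value to a four-note cluster."""
    base = 48 + cc // 8
    return [base + i * 3 for i in range(4)]

class Performer:
    def __init__(self):
        self.algorithm = sparse          # currently governing algorithm

    def flip(self, algorithm):
        """Swap the active mapping (by schedule, context, or cue)."""
        self.algorithm = algorithm

    def handle(self, cc: int) -> list[int]:
        return self.algorithm(cc)

p = Performer()
print(p.handle(64))   # [40] — one note under the sparse mapping
p.flip(dense)
print(p.handle(64))   # [56, 59, 62, 65] — same input, richer output
```

The flip itself costs almost nothing at runtime, which is what makes rapid, performance-time switching between mappings practical.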