On May 7, 2014, I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic the audiovisual objects. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
I am very pleased with the result of the performance and the quality of the audio recording in the video. The documentation produced by Dan Senn contains the entire 45-minute performance. Trafačka Arena, the venue for the performance, is a decommissioned power station. The room was a reverberant cement rectangle with incredible acoustics. You may recognize some of the video used in the performance from other projects. The majority of the video was recorded specifically for Vocalise Sintetica, but I also used video from Machine Machine and Voice Lessons, as well as two short clips of found video in the first movement.
Technically, the piece centers on a Max patch that handles the audiovisual granular synthesis. The patch is controlled by an iPad running MIRA by Cycling '74. MIRA allows Max developers to create iPad interfaces within the Max patch using standard Max objects. One of the key functions new to this patch is the ability to record and loop gestures. This feature allows the performer to let a sequence of audiovisual content loop while adding layers from other instruments. For this performance I used a Novation Bass Station II running through a Moog Minifooger Delay, and a Korg Volca Keys. I modified the Volca Keys with a MIDI out jack to provide a synced clock to the Bass Station II.
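The record-and-loop idea behind the gesture feature can be sketched in a few lines: capture timestamped control values while the performer touches the interface, then replay them cyclically. This is a minimal Python sketch of the concept under my own assumptions, not the actual Max/MIRA implementation; the class and method names are hypothetical.

```python
class GestureLooper:
    """Records timestamped gesture values and replays them in a loop.

    A conceptual sketch of gesture looping: values recorded during a
    pass through the interface repeat indefinitely, freeing the
    performer's hands for other instruments.
    """

    def __init__(self):
        self.events = []        # list of (time_offset, value) pairs
        self.recording = False
        self._start = 0.0
        self.loop_length = 0.0

    def start_recording(self, now):
        """Begin a new recording pass at time `now` (seconds)."""
        self.events = []
        self.recording = True
        self._start = now

    def record(self, now, value):
        """Store a gesture value (e.g. a touch position) with its offset."""
        if self.recording:
            self.events.append((now - self._start, value))

    def stop_recording(self, now):
        """End the pass; the elapsed time becomes the loop length."""
        self.recording = False
        self.loop_length = now - self._start

    def value_at(self, now):
        """Return the most recently recorded value at time `now`, wrapping
        around the loop length so the gesture repeats indefinitely."""
        if not self.events or self.loop_length <= 0:
            return None
        t = (now - self._start) % self.loop_length
        current = self.events[0][1]
        for offset, value in self.events:
            if offset <= t:
                current = value
            else:
                break
        return current
```

In use, the looped value would drive a granulation parameter (grain position, playback rate, etc.) each video frame or audio vector:

```python
looper = GestureLooper()
looper.start_recording(0.0)
looper.record(0.0, 0.2)   # touch at position 0.2
looper.record(1.0, 0.8)   # slide to 0.8 one second later
looper.stop_recording(2.0) # two-second loop
looper.value_at(2.5)       # wraps: same value as at 0.5 seconds
```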