Voice Lessons is an electronic audio device that interrogates the popular myth that every musical instrument imitates the human voice. Touching the screen allows the participant to manipulate the visuals and vocalizations of the “voice teacher” as he recites vocal warm-up exercises.
The piece resides in the space between a musical instrument and a voice lesson. Move the touch point left, right, up, and down to explore the visual and auditory possibilities. Rapid, high-pitched loops occur when touching near the top of the screen, while longer, lower-pitched loops are heard near the bottom.
The actor, also named John Keston, is the artist’s retired father, who became a voice teacher after a long career on stage in plays, operas, and musicals with the Royal Shakespeare Company in his native England and abroad.
My project Ostracon (John Keston and Graham O’Brien) was accepted and performed at the In/Out Festival of Digital Performance in New York in September 2010. Ostracon performs generative, improvisational compositions using my custom software, the GMS (Gestural Music Sequencer), which converts video input into musical phrases. I capture, layer, loop, and process melodic segments in real time out of the stream of notes created by my gestural input, and tailor them with probability distribution algorithms. O’Brien accompanies these angular, electronic structures with dynamic playing that, at times, verges on the chaotic.
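To illustrate the idea of converting video input into musical phrases, here is a minimal sketch in Python. It assumes one plausible mapping (reduce each frame to its brightest pixel, then map the pixel’s x-position to pitch and y-position to velocity); the function names, ranges, and the mapping itself are illustrative, not the GMS’s actual implementation.

```python
def brightest_pixel(frame):
    """Return (x, y) of the brightest pixel in a 2-D list of luma values."""
    best, best_xy = -1, (0, 0)
    for y, row in enumerate(frame):
        for x, luma in enumerate(row):
            if luma > best:
                best, best_xy = luma, (x, y)
    return best_xy

def frame_to_note(frame, width, height, low=36, high=84):
    """Map the bright point's x-position to a MIDI pitch and its
    y-position to a velocity (top of frame = loudest)."""
    x, y = brightest_pixel(frame)
    pitch = low + round(x / (width - 1) * (high - low))
    velocity = 1 + round((1 - y / (height - 1)) * 126)
    return pitch, velocity

# Tiny 4x4 test frame with a single bright pixel at (2, 1).
frame = [[0, 0, 0, 0],
         [0, 0, 255, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(frame_to_note(frame, 4, 4))
```

Feeding one note per frame into a sequencer is what turns a moving gesture in front of the camera into a stream of melodic material.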
In/Out is an annual festival that features leading performers, developers, artists, and tinkerers of the digital design community in hopes of bridging the gap between the forum-based world and the stage. The festival seeks to bring digitally driven performances into the limelight with two full days of workshops and performances.
The video above is a live studio piece shot by Ai student Josh Clos and recorded at Ai Minnesota by John Keston and Graham O’Brien. It’s representative of the music we generate during our live performances. For more, visit the Ostracon tag on AudioCookbook.org, or visit Unearthed Music.
Ostracon at the In / Out Festival of Digital Performance.
On Wednesday, July 7, 2010 my piece Chromatic Textures was shown at 6X6 #5: Mystery, an exhibition at Ciné Lab in Athens, Georgia. My work was accepted along with five other artists, “…including Denton Crawford’s eyeballs, Aaron Oldenburg’s plunge into asphyxia, and a performance streamed live over the Internet from California.” Here’s my abstract for Chromatic Textures.
Chromatic Textures is a study on the synesthetic nature of our senses of sound and sight. Video input is used to produce generative musical phrases. The visual media is analyzed by the GMS (Gestural Music Sequencer) to create the musical forms in real time. The software includes adjustable probability distribution maps for the scale and rhythm. Adjusting these settings allows familiar structures to emerge. The settings chosen for this piece cause notes within a particular scale to play more frequently; however, it is still possible for any note within the twelve-tone chromatic system to occur. As a result, dissonant or blue notes can be heard on rare occasions throughout the piece.
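The probability-distribution idea described above can be sketched in a few lines of Python: notes in the chosen scale carry heavy weights, but every pitch class in the twelve-tone chromatic set keeps a small nonzero weight, so dissonant notes can still occur on rare occasions. The specific weights here are invented for illustration and are not the GMS’s actual settings.

```python
import random

SCALE = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of a major scale, as an example

def chromatic_weights(scale, in_scale=20, out_of_scale=1):
    """One weight per pitch class 0-11; scale tones are far more likely,
    but no pitch class ever has zero probability."""
    return [in_scale if pc in scale else out_of_scale for pc in range(12)]

def next_pitch_class(weights):
    """Draw one pitch class from the weighted distribution."""
    return random.choices(range(12), weights=weights, k=1)[0]

weights = chromatic_weights(SCALE)
sequence = [next_pitch_class(weights) for _ in range(32)]
print(sequence)
```

With these weights an out-of-scale note appears roughly once in every thirty draws, which matches the “rare dissonant or blue notes” behavior the abstract describes.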
On Tuesday, December 7, 2009 I presented a sound art installation titled Fives at the University of Minnesota. The subtitle of the work is Five Movements for Five Sampled Sounds in Five Loud Speakers. To produce the sound for the work I developed an instrument designed to explore granular interpretations of digitized waveforms. The instrument was controlled over a wireless network with a multi-touch device. The sound objects generated were amplified through five distinct loudspeakers arranged on pedestals at about chest height in a pentagonal configuration.
I will be presenting and performing at the Minneapolis Ableton Live Users Group on December 8, 2009 at the Nomad in Minneapolis, Minnesota. In my presentation I’ll be showing what I do with custom-built applications and Ableton Live, including the GMS and my new Wavetable Glitch Machine. Currently I interface my custom-built applications with Live using MIDI via the IAC drivers in Mac OS X, and Soundflower for audio. Soon I’ll be converting my audio-based Max patches over to Max for Live, so I can use them in Live directly.
Also appearing is Ali Momeni, who’ll be showing some of his Max for Live patches, and JP Hungelmann, who also organizes the event. The last time the group met it was held at IPR and there was an excellent turnout. The speakers were terrific, and Ableton demo discs and t-shirts were given away at the end of the event. If you use Live, or have any interest in it or in electronic music in general, I highly recommend attending.
This piece, titled Forgotten Complex, was originally exhibited on audiocookbook.org and came about as a side effect of my contributions to the One Sound Every Day project. Rather than a percussion-driven piece, Forgotten Complex relies on ambient Rhodes and other processed sound effects to create the lonely atmosphere of an abandoned warehouse. The piece is included on my solo album, Precambrian Resonance by Ostraka (Unearthed Music, 2009).
This second part of “Chromatic Currents” was produced with the GMS by placing a string of lights into a large glass vase. I moved the camera around the vase with one hand to direct the flow of musical phrases while adjusting transposition and note duration settings in the sequencer with the other.
You might notice that the video stimulus does not resemble lights in a vase. This is because I applied a negative filter to the video after capturing the performance. I used a pentatonic scale interspersed with rare dissonant notes, along with probability distributions on the note durations, to give the piece an eerie awkwardness.
This sound was generated using an instrument I developed in MaxMSP, tentatively titled the Wave Table Glitch Machine. The instrument uses TouchOSC, running on an iPod Touch, as a controller. I interfaced the iPod’s accelerometer to a filter so that, when enabled with a toggle, tilting the device on the y axis applies a lowpass filter to the sound. With a threshold set on the z axis, giving the iPod Touch a brisk shake causes the patch to loop a randomly selected grain of random length from a randomly selected buffer, played back at a randomly selected rate. The variety of sounds possible with five short samples is expansive. Here’s a selection of sound produced with just one sample selected.
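The “shake” behavior can be sketched as pseudocode in Python: pick a random buffer, a random grain within it, and a random playback rate, then loop that grain. The buffers here are plain lists of samples, and the function name and grain-length ranges are invented for illustration; they are not taken from the actual Max patch.

```python
import random

def on_shake(buffers, sample_rate=44100):
    """Choose a random grain and playback rate, as triggered by a
    brisk shake crossing the z-axis accelerometer threshold."""
    buf = random.choice(buffers)                 # randomly selected buffer
    length = random.randint(sample_rate // 100,  # grain of 10 ms ...
                            sample_rate // 4)    # ... up to 250 ms
    length = min(length, len(buf))
    start = random.randint(0, len(buf) - length)
    rate = random.uniform(0.25, 2.0)             # randomly selected rate
    return buf[start:start + length], rate       # loop this grain at `rate`

# Five one-second "samples" of silence stand in for the loaded buffers.
buffers = [[0.0] * 44100 for _ in range(5)]
grain, rate = on_shake(buffers)
print(len(grain), round(rate, 2))
```

Because every shake re-rolls the buffer, grain position, grain length, and rate, even five short samples yield a very large space of possible loops.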
Chromatic Currents Part I is a generative music piece driven by particles floating in a liquid. No intervention in particle behavior occurred while the piece was being performed with the GMS. The scale was strongly C minor pentatonic, weighted toward a Dorian mode by adding lower probabilities for D and A. However, every note outside the scale still had a small possibility of occurring, which led to occasional blue or dissonant pitches in the sequence. The possibility of any note within the twelve-tone chromatic scale occurring led me to the title.