The first-ever Eyeo Festival took place last June, and the second iteration looks to be just as amazing as the first. Here’s a video of a presentation I gave at Eyeo last year on using digital imagery to generate sound. I also have the HTML5 slideshow available (use the left and right arrow keys to navigate). A big thanks goes out to Dave Schroeder for creating Eyeo and sharing these videos.
Recent
Midnight Playground
Midnight Playground is an interactive, kinetic installation by Peng Wu, Jack Pavlik, John Keston, and Analaura Juarez. Peng initiated and directed the idea, Jack built the jump-rope robot, and Analaura helped refine the concept and promote the piece. My role was to produce the music and track it to the still images that Peng had selected. I ended up making a one-hour video: thirty minutes featuring an image of the moon, followed by a four-second transition into another thirty minutes featuring an image of Mars. To produce the sound, I gave Peng a list of audio excerpts that had all been previously posted on AudioCookbook in One Synthesizer Sound Every Day. He picked the two that he thought would work best, and I went back to my original recordings and processed them specifically for the piece, adding reverb and delay to enhance the spatial properties of the music. The piece will be on display in Gallery 148 at the Minneapolis College of Art and Design through January 29, 2012.
Voice Lessons
Voice Lessons is an electronic audio device that interrogates the popular myth that every musical instrument imitates the human voice. Touching the screen allows the participant to manipulate the visuals and vocalizations of the “voice teacher” as he recites vocal warm-up exercises.
The piece resides in the space between a musical instrument and a voice lesson. Move the touch point left, right, up, and down to explore the visual and auditory possibilities. Rapid, high-pitched loops occur while touching near the top of the screen, while longer, lower-pitched loops are heard near the bottom.
The actor, also named John Keston, is the artist’s retired father, who became a voice teacher after a long career on stage in plays, operas, and musicals with the Royal Shakespeare Company in his native England and abroad.
Voice Lessons
32” interactive touch screen installation
2011
Continue reading
In/Out Festival of Digital Performance, New York, September 2010
My project Ostracon (John Keston and Graham O’Brien) was accepted and performed at the In/Out Festival of Digital Performance in New York in September 2010. Ostracon performs generative, improvisational compositions using my custom software, the GMS (Gestural Music Sequencer), which converts video input into musical phrases. I capture, layer, loop, and process melodic segments in real time out of the stream of notes created by my gestural input, and tailor them with probability distribution algorithms. O’Brien accompanies these angular, electronic structures with dynamic playing that, at times, verges on the chaotic.
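To give a sense of the kind of video-to-music mapping described above, here is a minimal sketch in Python. This is an illustrative reconstruction, not the actual GMS source: the idea of tracking the brightest point in each frame is drawn from how the GMS analyzes video, but the specific scale, note range, and velocity mapping below are my assumptions for the example.

```python
# Hypothetical sketch: map the brightest pixel in a video frame to a note.
# A frame is modeled as a 2D list of grayscale brightness values (0-255).
def frame_to_note(frame, scale=(0, 2, 4, 5, 7, 9, 11), low_note=48):
    """Return a (MIDI note, velocity) pair for the brightest pixel.

    Horizontal position selects the pitch (two octaves of the given
    scale across the frame width); vertical position sets the velocity,
    so gestures higher in the frame play louder.
    """
    height, width = len(frame), len(frame[0])
    # Locate the brightest pixel (first one wins on ties).
    by, bx = max(
        ((y, x) for y in range(height) for x in range(width)),
        key=lambda p: frame[p[0]][p[1]],
    )
    degree = int(bx / width * len(scale) * 2)   # scale degree, 2 octaves
    octave, step = divmod(degree, len(scale))
    note = low_note + 12 * octave + scale[step]
    velocity = int((1 - by / height) * 127)     # nearer the top = louder
    return note, velocity
```

Feeding this function a stream of frames would yield the raw stream of notes that could then be looped, layered, and filtered through probability distributions downstream.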
The lineup this year included Monome creator tehn (Brian Crabtree) and Peter Kirn of Create Digital Music. From the In/Out Festival website:
In/Out is an annual festival that features leading performers, developers, artists, and tinkerers of the digital design community in hopes of bridging the gap between the forum-based world and the stage. The festival seeks to bring digitally driven performances into the limelight with two full days of workshops and performances.
The video above is a live studio piece shot by Ai student Josh Clos and recorded at Ai Minnesota by John Keston and Graham O’Brien. It’s representative of the music that we generate during our live performances. For more, visit the Ostracon tag on AudioCookbook.org, or visit Unearthed Music.
Ostracon at the In/Out Festival of Digital Performance.
Chromatic Textures Shown at 6X6 #5: Mystery
On Wednesday, July 7, 2010, my piece Chromatic Textures was shown at 6X6 #5: Mystery, an exhibition at Ciné Lab in Athens, Georgia. My work was accepted along with that of five other artists, “…including Denton Crawford’s eyeballs, Aaron Oldenburg’s plunge into asphyxia, and a performance streamed live over the Internet from California.” Here’s my abstract for Chromatic Textures:
Chromatic Textures is a study of the synesthetic relationship between our senses of sound and sight. Video input is used to produce generative musical phrases. The visual media is analyzed by the GMS (Gestural Music Sequencer) to create the musical forms in real time. The software includes adjustable probability distribution maps for scale and rhythm, and adjusting these settings allows familiar structures to emerge. The settings chosen for this piece cause notes within a particular scale to play more frequently; however, it is still possible for any note within the twelve-tone chromatic system to occur. As a result, dissonant or blue notes can be heard at rare moments throughout the piece.
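The probability weighting described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the GMS code; the particular weight values are assumptions chosen so that in-scale pitch classes dominate while every chromatic pitch class keeps a small, nonzero chance of sounding.

```python
import random

# Pitch classes of the chosen scale (C major here, as an assumption).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def build_weights(scale=C_MAJOR, in_scale=20, out_scale=1):
    """Probability weight for each of the twelve chromatic pitch classes.

    In-scale classes are weighted heavily; out-of-scale classes keep a
    small nonzero weight, so dissonant "blue" notes surface rarely.
    """
    return [in_scale if pc in scale else out_scale for pc in range(12)]

def next_pitch_class(weights, rng=random):
    """Draw one pitch class (0-11) from the weighted distribution."""
    return rng.choices(range(12), weights=weights, k=1)[0]

weights = build_weights()
# With these weights, in-scale mass is 7 * 20 = 140 and out-of-scale
# mass is 5 * 1 = 5, so roughly 3.4% of notes fall outside the scale.
```

Raising `out_scale` toward `in_scale` would flatten the distribution toward fully chromatic output, while setting it to zero would lock the output to the scale entirely.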
Continue reading