INTERACTIVE VISUALS, LIGHTING AND SOUND FOR LIVE PERFORMANCE DUET.
Heartfelt thanks to the STEIM team! I am also grateful for the support of the Canada Council for the Arts, which funded my flight and a per diem for the duration of the residency. Coming to STEIM presented me with the opportunity to explore several technical and artistic possibilities for an interactive performance collaboration with Berlin (Potsdam) composer Alex Nowitz, to which we have given the working title ‘Hybrids’. I have worked with interactive visual performance (aka ‘Visual Music’) since 2001, using various approaches for opera, orchestra, new music and dance: interactive software (Isadora and Max), hacking into C++ screensavers, MIDI controls, video tracking, motion capture systems, pitch and volume triggers, the Lanbox (hardware) for interactive control of lighting, and a custom-designed interactive video spotlight called the ‘Videmote’.
Since Alex has built his vocal/music performance tool at STEIM using JunXion, LiSa and Wii controllers, it made sense for me to explore our connectivity, initially through a similar approach, in order to understand his system and refine my own existing methods. I had already begun successfully using a Wii remote & Nunchuk with Isadora software, in conjunction with OSCulator to map MIDI commands into Isadora from the Wii buttons and accelerometer, so I was very curious to investigate JunXion and LiSa as well. Within a very short time, we were able to smoothly send and share all data from our individual Wii systems into each other’s software. I also used a video camera to ‘watch’ Alex, and took audio line inputs from the mixer board into Isadora, so that I could analyse pitch and volume (left and right channels individually).
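The core of the Wii-to-Isadora routing described above is a simple range mapping: a normalized accelerometer reading becomes a 0–127 MIDI controller value that Isadora can listen for. As a rough sketch of that kind of scaling (the function name, and the assumption that the sensor value arrives normalized to 0.0–1.0, are mine, not OSCulator’s actual internals):

```python
def accel_to_midi_cc(value, lo=0.0, hi=1.0):
    """Scale a normalized accelerometer reading into the 0-127 MIDI CC range.

    `lo`/`hi` define the expected input range; readings outside it are
    clamped so sensor jitter never produces an out-of-range MIDI value.
    """
    value = max(lo, min(hi, value))
    return round((value - lo) / (hi - lo) * 127)


# Illustrative use: resting position maps to CC 0, full tilt to CC 127.
print(accel_to_midi_cc(0.0))   # 0
print(accel_to_midi_cc(1.0))   # 127
```

In practice a tool like OSCulator also lets you invert, smooth, or curve this mapping per axis; the linear clamp above is only the simplest case.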
LIGHTING My pre-existing lighting patch in Isadora/Lanbox gave me control of the lights in the STEIM performance studio within ten minutes of setting up my computer. This was a nice confirmation that my system is now stable and ‘tourable’ into any theatre. One goal of the STEIM residency for me was to explore reasons for interactive lighting with sound performers, and to develop a tool for Wii triggering of lights, as a performative tool for myself and a musician at the same time, while also performing video projections. When to use lights as well as video? How much ‘interactive’ lighting can an audience tolerate, for instance? How obvious should or could it be without becoming predictable? If the triggers are so subtle as to be invisible to the audience, does this matter? Do they still enjoy it, or does it become an indecipherable mystery at that point? Is interactive lighting interesting as an artistic tool in itself, or is it best used to create appropriate lighting onstage for an improvised performance, giving full lighting control to the on-stage performers as they make their creative choices in their artistic mediums… These were interesting questions for me to explore with Alex.
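One building block behind Wii-triggered lighting of this kind is edge detection: a light should fire once when a gesture crosses a threshold, not continuously while the sensor sits above it. A minimal sketch of that logic, assuming DMX-style (channel, level) pairs like those a Lanbox patch works with (the class name, channel numbers and levels here are illustrative, not a real rig or the Lanbox API):

```python
class LightTrigger:
    """Edge-detects a threshold crossing on a sensor value and emits a
    DMX-style (channel, level) command once per crossing."""

    def __init__(self, channel, threshold=0.8, on_level=255, off_level=0):
        self.channel = channel        # DMX channel this trigger drives
        self.threshold = threshold    # sensor level that fires the light
        self.on_level = on_level      # level sent on the rising edge
        self.off_level = off_level    # level sent when the gesture releases
        self._armed = True            # re-arms only after value falls back

    def update(self, value):
        """Feed one sensor reading; return a (channel, level) pair on an
        edge, or None while the state is unchanged."""
        if self._armed and value >= self.threshold:
            self._armed = False
            return (self.channel, self.on_level)
        if not self._armed and value < self.threshold:
            self._armed = True
            return (self.channel, self.off_level)
        return None
```

The artistic questions above then become parameter choices: a high threshold makes the link obvious to the audience, a low one with a soft fade makes it nearly invisible.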
We were also lucky to have two opportunities for an audience to watch the work, on Nov 12th and Nov 19th. The feedback and Q&A gave us valuable information and raised many questions that would take far too long to write about in this blog; for it I am grateful to the STEIM advisory team, especially Daniel Schorno & Nico Bes (from STEIM), Benjamin Thigpen (visiting musician) and Florian (conceptual artist from Germany).
VIDEO Using pitch and volume to control some parameters of the video, as well as mapping some of Alex’s Wii commands to visuals, was technically successful, but perhaps only partially successful artistically, since it seemed to create some confusion for the audience as to which physical gesture or action, by which performer, was creating the visual content and effects. The important question arises: how literal does the mapping of a gesture to a visual result in the performance need to be? If the gestures translate directly, in a meaningful way, to the images, sound and music, is this a more satisfying result than if it happens in a more abstract way? Should the mapping be constructed in such a way as to enhance this translation?
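The pitch and volume analysis feeding the video can be thought of as two small transfer functions. As a sketch of the kind of mapping involved (the function names, ranges and the hue/opacity targets are my assumptions for illustration, not the actual Isadora patch):

```python
import math

def pitch_to_hue(freq_hz, lo_hz=80.0, hi_hz=1000.0):
    """Map a detected vocal pitch onto a 0-360 hue wheel, log-scaled so
    equal musical intervals move the colour by equal amounts."""
    freq_hz = max(lo_hz, min(hi_hz, freq_hz))
    t = math.log(freq_hz / lo_hz) / math.log(hi_hz / lo_hz)
    return t * 360.0

def volume_to_opacity(level, floor=0.05):
    """Gate out room noise below `floor`, then scale a 0-1 input level
    to a 0-1 video-layer opacity."""
    if level < floor:
        return 0.0
    return (level - floor) / (1.0 - floor)
```

A log scale for pitch and a noise gate on volume are the two choices that most affect how “literal” the gesture-to-image translation feels: linear pitch mapping compresses the low voice range, and an ungated volume channel makes the image flicker with every breath.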
MUSIC Although we were able to send my Wii data to Alex’s computer and into LiSa, the question also arose as to what to do with that information. We decided to let that rest for the time being, and focused on lighting and visuals. It is a question requiring more time, more mapping in JunXion and into LiSa on Alex’s system, and certainly some thought about the purpose and intention behind it in a live performance duet. How connected do we wish to be, and how ‘hybridized’ do we wish to make the duet?
Once we connect to each other’s tools, have we then created a single (yet two-brained) performance instrument? Or is it still inherently a dual system, with some open channels?
Many questions remain about the nature of the performance aspect as well, with regard to concepts and character development. Some of these questions come directly from the results of the tool-making experiments. What kinds of physical gestures work well for controlling video? Are these different for audio? Are we performing as a visual artist and a musician/composer, or are we creating a mini-opera, requiring character development? If we have wireless instruments in our hands, how does this limit the performance? How do we deal with differences in physical style and ‘gestur-ality’?
And so much more! Thanks again to STEIM for the opportunity to be here and to develop our tools, concepts and research ideas.