This blog post offers some thoughts following the STEIM Orientation of 29 Sept.–1 Oct. 2009. It is an updated and final post about the Orientation.
The first phase of the STEIM Orientation offered a definition of the aesthetic point of reference, the social mission, and the products developed by STEIM. The aesthetic may be described as a ‘high-touch’, human-performance foundation and motivation for the creation of electro-acoustic music. Computers have come to dominate electronic music, yet the best musical performance, leading to the most ‘human-oriented’ musical composition, requires a much more flexible and ‘human-specific’ interface than the keyboard/mouse/visual-display standard provides. Further, the interfaces developed by the music industry either emulate acoustic instruments without substantially extending their electro-acoustic potential, or primarily facilitate the operation of standard, more utilitarian music software, which tends to support ‘performance of the software’ more than intrinsically musical performance. The overriding question behind interface design may be put as follows:
Does the design of the interface look back from the design of the software to the human being as an efficient operator, or does it begin with the shape of human performance and then design for its potential expression?
The STEIM aesthetic clearly follows the latter path. Indeed, ‘Electro-Instrumental Music’ gives the studio its name, and unlike many institutes that hold onto their founding mission for only a short period, STEIM has preserved its raison d’être, which has only grown more relevant to the international scene over the course of its 40-year existence.
It is equally impressive that STEIM has such a broad aesthetic-social reach, which may realistically define its ethical position. Its artist residencies and the design of its products address an impressively wide range of musical styles and performance situations, and therefore reach a broad audience. There is also a healthy emphasis on experimentation and innovation, which fulfills an extremely important role in art and society: it gives artists an opportunity to be released from commercially defined and industry-driven interests for a period of freer advancement. This leads to new aesthetic opportunities for all, and the most successful of these will eventually be embraced by the music industry, thereby extending the effect of what is nurtured at STEIM.
My reasons for coming to STEIM are twofold. The first is the potential to close the gap between two poles of my musical life: (a) composition with electro-acoustic material and (b) musical performance. For me these have never directly joined. I have been primarily a studio-based, ‘fixed-media’ composer, for whom transformations of sound have always been crafted outside real time, the sound ‘orchestrated’ into a prescriptive mix, and the composition presented in a mostly fixed sequence. Separately, my performance experience has been with Early Music (viola da gamba), Classical (cello), and acoustic avant-garde techniques (notably extended string-instrument techniques and bowed piano). The missing link, as it were, is that I have not been directly involved with the practice of live electronic performance (beyond live sequencing control), primarily because the only interfaces available to me seemed incapable of creating a meaningful teleology of sound transformation, especially at the edges of subtlety and extremity of sonic expression.
I have been thinking about what is missing in the ‘virtuoso programmer with standard controllers’ model, where, in theory, an array of standard controllers using keys/buttons, sliders/knobs, trackpads, and the like to control banks of automated signal processing may indeed achieve meaningful sonic complexity. Such an effort could achieve the equivalent of what one may accomplish with one bow on one string of an acoustic string instrument. But something is missing from this virtuosic programming and, presumably, Olympiad-level multiple-controller performance: the meaningful tracking of expressive motions in the physical performance to the motions in the musical sound. This tracking is important for the audience’s apprehension of musical values, but even more to the point, it directly affects the quality of the performance itself. Along this line, the inspiring historical promise of the Theremin, performance on which one most definitely could track significant motions to sound, has been advanced outside the mainstream of the music industry, and artists and developers working at STEIM have played a significant role in that advancement. So for some years now, in addition to following NIME, I have looked forward specifically to visiting STEIM as an ideal place to start bringing my experience with electro-acoustic composition and performance together.
The second reason I have been interested to work at STEIM follows from my research in applications of haptic-enabled geometric virtual environments to sound synthesis using a stylus-style ‘probe’ (VE-SoundSynth). The purpose of this interdisciplinary project is to provide aural feedback for virtual engineering practice, and to create a tool for experimental electronic music composition, where the design of objects in the environment acts as both a score and a live sound controller. You might think of folding your 2D trackpad into a 3D multifaceted polyhedral tracking object, and doing this in several different shapes, moving from one to the other as best suits the desired control configuration. I am especially interested in the work done at STEIM in terms of studying the types and degrees of freedom in their interface designs, founded so directly on human musical performance. VE-SoundSynth has a very analytic origin and, as mentioned, is very geometric in design, but this need not necessarily be so. Virtual objects may be very expressively shaped and textured. To get an idea of what I mean, imagine a controller with a pattern of variable resistance to motion along its surface, say, moving from relatively slippery to relatively sticky across its extent. Then, imagine what shape would be best for this object in terms of (virtually) ‘playing’ upon it with the stylus. More experience with the design of virtual objects/environments defined by and for human-musical motion is another thing I hope to gain in further work with STEIM.
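To make the friction-surface idea concrete, here is a minimal Python sketch — my own illustration, not VE-SoundSynth code — of how a stylus probe’s position and speed on a virtual surface with a friction texture might map to synthesis parameters. All function names, ranges, and mappings here are hypothetical choices:

```python
import math

def surface_friction(u, v):
    """Hypothetical friction texture over a virtual surface: varies smoothly
    from slippery (0.0) to sticky (1.0) across (u, v) in the unit square."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)

def probe_to_params(u, v, speed):
    """Map stylus state to synthesis parameters (all mappings illustrative):
    - pitch follows position across the surface,
    - amplitude follows probe speed, damped by local 'stickiness',
    - brightness (filter cutoff) falls in sticky regions, so they sound duller.
    """
    friction = surface_friction(u, v)
    pitch_hz = 110.0 * 2 ** (3 * u + v)              # a few octaves of range
    amplitude = min(1.0, speed) * (1.0 - 0.7 * friction)
    cutoff_hz = 400.0 + 4000.0 * (1.0 - friction)
    return {"pitch_hz": pitch_hz, "amp": amplitude, "cutoff_hz": cutoff_hz}

params = probe_to_params(0.25, 0.0, 1.5)   # probing a sticky spot at speed
```

The point of the sketch is only that the same gesture yields different sound in different regions of the object, which is what makes the object itself a kind of score.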
After three days of orientation, I find everything very encouraging in relation to my reasons for being here: the explication of STEIM’s goals, the examples of work shown, the discussions with practicing resident artists, the demonstrations of current software development, and, not least, simply getting to know some very fine people working here. Robert van Heumen, Taku Mizuta Lippit, Frank Baldé, and Daniel Schorno each gave presentations that I found very valuable, combining clear concepts and theoretical underpinnings, salient demonstrations, and inspiring musical examples.
The software, although designed for people with less programming experience than I have (which is a good thing), still offers a different point of view. I like the LiSa software: it serves its intended use in live sampling control well, and it has some features I have not found in other software. Since I already use other programming environments (MaxMSP, Reaktor, Kontakt scripts), I am not sure how much I will use LiSa. I do see that it is a very good tool, and I can certainly see some of my students wanting to use it, which in itself would have me learn it better. Then, who knows? The junXion software, however, I am more confident I will use soon, as it will facilitate access to affordable devices with which I can experiment. Its design for use is very clear, and, especially if it is extended to include ‘sub-patches’, I am sure I will use it more in the future to develop ideas for various freely-motive wireless devices (like the Wii, Real Play batons, and OSC-based iTouch apps).
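As a small illustration of the kind of device-to-MIDI mapping junXion handles, here is a Python sketch — entirely my own hypothetical example, not junXion code — that scales a normalized accelerometer axis (as a Wii-style controller might report) to a MIDI continuous-controller value, with a deadzone so resting-hand tremor is ignored:

```python
def axis_to_cc(axis, deadzone=0.05):
    """Map a normalized accelerometer axis (-1.0 .. 1.0) to a MIDI CC value (0 .. 127).
    Readings inside the deadzone are treated as rest, so small tremors are ignored.
    The deadzone width is an arbitrary, illustrative choice."""
    if abs(axis) < deadzone:
        axis = 0.0
    axis = max(-1.0, min(1.0, axis))          # clamp out-of-range sensor readings
    return round((axis + 1.0) / 2.0 * 127)    # rescale -1..1 onto 0..127

# Tilting fully one way or the other reaches the ends of the CC range;
# a tiny tilt inside the deadzone stays at the resting value.
low, rest, high = axis_to_cc(-1.0), axis_to_cc(0.02), axis_to_cc(1.0)
```

In practice, of course, the interest is in shaping such mappings musically (curves, thresholds, combinations) rather than in the arithmetic itself, and that shaping is exactly what a tool like junXion is for.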
I also expect that further experience with such interfaces will prepare me to consider designing some new and unique instruments that address specific musical challenges. In turn, I expect idioms that develop through improvisation with these new instruments will lead to new possibilities in composition.
On this final day we saw a presentation by Hans Leeuw on his Electrumpet design. The essential message of his work is the full integration of electronics control with the idioms of playing a particular instrument. He adapts the design of the hybrid electronics to the experience of playing the trumpet, rather than adapting the trumpet to the electronics. Notably, everything is placed within the idiomatic reach of normal trumpet playing. Further, the action and positions of the controllers were very much attuned to a trumpeter’s hands, or at least this is a continuing goal: he noted some redesigns to come. We worked beyond his prepared introduction, from NIME 2009, and examined his MaxMSP patches. We learned the value of a distributed design using three instances of MaxMSP, each with a different combination of scheduling rate and vector size, to match resources to different overall functions; for example, the first-line instance is set to a high scheduling rate to process sensor data more accurately, without any sound processing. This turns out to be a characteristically STEIM concern, as it addresses issues in the programming background of live performance situations.
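The division of labour in that distributed design can be sketched outside Max. Below is a small Python illustration — my own, not Hans’s patch — of the principle: a high-rate stage smooths sensor readings on every control tick, while the audio stage picks up the latest control value only once per signal vector. The class names, the one-pole smoother, and the 8:1 rate ratio are all hypothetical:

```python
class SensorStage:
    """High-rate stage: smooths raw sensor readings with a one-pole filter,
    standing in for the MaxMSP instance run at a high scheduling rate."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.value = 0.0

    def tick(self, raw):
        self.value = self.smoothing * self.value + (1 - self.smoothing) * raw
        return self.value

class AudioStage:
    """Block-rate stage: reads the control value once per vector of samples,
    standing in for an instance tuned for sound processing."""
    def __init__(self, vector_size=64):
        self.vector_size = vector_size

    def process_block(self, control_value):
        # Apply one gain across the whole block, as a block-rate
        # parameter update would in a real signal chain.
        return [control_value] * self.vector_size

sensor = SensorStage()
audio = AudioStage(vector_size=4)

# The sensor stage runs 8 ticks for every audio block (8x higher control rate),
# so the audio stage always sees a well-smoothed, up-to-date value.
for raw in [1.0] * 8:
    control = sensor.tick(raw)
block = audio.process_block(control)
```

The benefit mirrors what Hans described: sensor data is handled at a rate suited to gesture, audio at a rate suited to signal vectors, and neither starves the other.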
In the afternoon I met with Robert to discuss several ideas for future work. I hope to return. This is a nexus for a very important kind of work in electronic music.