
Session 3: On Mapping - Techniques

This afternoon’s session was on mapping. Robert introduced the topic, explaining that it is a very large and diverse field. This was reflected in the large number of speakers and the range of subjects they covered. Because of this, and the enthusiasm of the speakers, the session ran far beyond its allotted time.

First up was Atau Tanaka, former artistic director of STEIM, who talked about and demonstrated his instrument, the BioMuse. In its original version the BioMuse was based strictly on muscle sensors, sensing tension in the muscles of Atau’s lower arms. The data from these was sent over Bluetooth to Max/MSP, where it was interpreted, scaled and smoothed. Atau explained how this interface led to “gesture without moving” and how it enabled him to get away from the computer, which he finds a distraction to both himself and the audience. In his new version he has added gloves, using accelerometers and buttons. Because moving his arms in certain ways will both use the muscles in his lower arms and change the orientation of his hands, Atau explained that the signals from these two parts of the instrument are actually closely related. His mapping strategies reflect this and make sure the parameters these controllers are assigned to are related as well, which leads to an “organic” experience of the complete instrument. While playing, Atau came across to me as a bit of a “cyborg”; this could be my naïve experience of the visual effect of him being “all wired up”, but it might also have to do with him having played this instrument for a long time now.
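I didn’t see Atau’s actual patches, but the “interpreted, scaled and smoothed” step he described is a familiar sensor-processing idiom. A minimal sketch of the general idea (all names and the 10-bit value range are my own assumptions, not Atau’s setup):

```python
# Hypothetical sketch of scaling and smoothing raw sensor readings:
# map the value to a 0..1 control range, then low-pass it with a
# simple exponential moving average to take the jitter out.

def scale(value, in_min, in_max):
    """Clamp a raw reading to [in_min, in_max] and rescale it to 0..1."""
    value = max(in_min, min(in_max, value))
    return (value - in_min) / (in_max - in_min)

class Smoother:
    """One-pole smoother: higher alpha reacts faster, lower alpha is smoother."""
    def __init__(self, alpha=0.25):
        self.alpha = alpha
        self.state = 0.0

    def step(self, x):
        self.state += self.alpha * (x - self.state)
        return self.state

smoother = Smoother(alpha=0.25)
for raw in [512, 520, 900, 880, 300]:  # made-up 10-bit sensor readings
    control = smoother.step(scale(raw, 0, 1023))
```

The smoothing coefficient is the interesting trade-off: too little and the muscle jitter gets through, too much and the instrument feels sluggish.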

The second speaker was Manuel Poletti, who talked about his “MoP.Tools” library for Max/MSP, a library aimed at simplifying the mapping process. In particular, Manuel works at IRCAM with composers who want to integrate electronic sounds into otherwise acoustic performances. This creates difficulties because the score for such performances may be very detailed as well as complex. Manuel’s solution is to work with an analysis of the sound of the instruments while they are played, and to apply effects according to the results of this analysis. This works using a matrix of routings between microphones and various kinds of processing: not just electronic effects but also the spatial placement of the sound in the room. To me (as somebody who has never used Max/MSP) MoP.Tools looked like a very polished and indeed friendly set of patches and tools, quite unlike the “spiderwebs” one sometimes sees. Manuel seemed quite pleased and proud that “his” library is now being worked on by a larger group of programmers, not just himself.
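This is not MoP.Tools itself, just my own toy illustration of the routing-matrix idea: a gain matrix decides how much of each microphone feeds each processing destination, whether that is an effect or a position in the room (all the names here are invented for the example):

```python
# A minimal routing matrix between microphone inputs and processing
# destinations; matrix[mic][dest] is a send level from 0 to 1.

mics = ["flute", "cello", "voice"]
destinations = ["reverb", "harmonizer", "front_left", "rear_right"]

# start fully disconnected
matrix = {m: {d: 0.0 for d in destinations} for m in mics}

def connect(mic, dest, level=1.0):
    """Open one crosspoint in the matrix at the given send level."""
    matrix[mic][dest] = level

def route(inputs):
    """Mix one frame of samples {mic: sample} into {destination: sample}."""
    return {
        d: sum(inputs[m] * matrix[m][d] for m in mics)
        for d in destinations
    }

connect("flute", "reverb", 0.8)
connect("voice", "front_left", 1.0)
out = route({"flute": 0.5, "cello": 0.2, "voice": -0.3})
```

The appeal of the matrix formulation is that a score event only has to change a few crosspoint levels to completely re-wire which instruments feed which processes.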

Next up was Tim Groeneboom, who talked about (and demonstrated) his perspective on being a “Wii-J”: a DJ who uses a Nintendo Wii controller, in his case to control Ableton’s Live software. He very quickly (Tim seems to do everything quickly: moving, talking…) went over his reasons to perform in this way. He wants a more physical way of mixing the tracks he is playing, relating to the dynamic performance style of rock music more than the traditional modes of electronic music performance, which he finds “boring”. Tim listed three ways of using the accelerometers (his favorite part of the Wii-mote): direct mapping, orientation, and gesture recognition, then demonstrated all three. Direct mapping, as he explained, is very useful for establishing a connection, clear to the audience, between his movements and the manipulations performed on the music. Gesture recognition (performed using more traditional programming, with the rest of the link between the Wii-mote and Live taken care of by Max/MSP) can make use of a large range of gestures, yet creates considerable latency since you have to complete the whole gesture first. Tim performed a short set to illustrate this method of mixing, then briefly spoke of his Guitar-Hero-guitar-controlled sampler. I feared he’d leave it at that, but after an audience question he performed on his guitar as well, manipulating breakbeats and playing bass. To me this was the best part of his demonstration, perhaps because it seemed more “direct”. At least it proved the Guitar Hero controller can be an expressive interface beyond that game, something I have been wondering about. Maybe when the game goes out of fashion and the price drops…
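Of Tim’s three approaches, direct mapping is the easiest to write down. A sketch of what I understood by it (my own code, not Tim’s patch; the -1..1 input range and MIDI-style output range are assumptions): one accelerometer axis drives one mix parameter with nothing in between, which is exactly why the audience can see the cause and effect.

```python
# "Direct mapping": one accelerometer axis, clamped to -1..1,
# linearly rescaled to a MIDI-style 0..127 control value.

def direct_map(accel_x, out_min=0.0, out_max=127.0):
    """Map an accelerometer reading in -1..1 straight onto out_min..out_max."""
    accel_x = max(-1.0, min(1.0, accel_x))
    return out_min + (accel_x + 1.0) / 2.0 * (out_max - out_min)
```

Orientation and gesture recognition add layers of interpretation on top of readings like these, which is where the latency Tim mentioned comes from.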

Robin Price presented an entirely different aspect of mapping: he talked about mapping a database of sounds to visual patterns, as well as some advanced mappings from the controls of a gutted radio to the selection of entries in his database. What Robin did is record samples from radio signals, store them, visualize the database, then allow visitors to his installation to explore these sounds by manipulating the controls on his “radio”. His aim was to give the user a sense of aimless browsing, of exploration. To this end he used several ways of interpreting the signals coming from the controls, not just directly but also considering, for example, the speed of the change, using heat caused by friction as an analogy for the higher-level mappings he made. It struck me as somewhat ironic that Robin’s mapping strategies seemed among the most advanced of those presented this afternoon, while his installation was the least clear in its purpose; in fact Robin seemed to want to encourage a sense of wonder more than straightforward utility. Fascinating.
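The friction analogy stuck with me, so here is how I imagine it could work (entirely my own reconstruction, not Robin’s code): fast knob movement accumulates virtual “heat”, which then cools down over time, giving the installation a higher-level signal beyond the knob’s mere position.

```python
# Virtual friction heat: each step adds heat proportional to how fast
# the control moved, while the existing heat decays towards zero.

class FrictionHeat:
    def __init__(self, gain=1.0, cooling=0.9):
        self.prev = None      # previous control position
        self.heat = 0.0
        self.gain = gain      # heat added per unit of movement speed
        self.cooling = cooling  # fraction of heat retained each step

    def step(self, position):
        speed = 0.0 if self.prev is None else abs(position - self.prev)
        self.prev = position
        self.heat = self.heat * self.cooling + self.gain * speed
        return self.heat
```

A mapping like this rewards frantic twiddling differently from slow browsing, which fits the sense of exploration Robin was after.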

Jo Kazuhiro presented the next subject: “Inaudible Computing”. This term wasn’t completely clear to me; this take on computing (and on communication between devices) seemed a lot more “audible” to me than regular computing… Jo’s focus at the start of his talk was the use of audio signals, through a regular soundcard, to get values from buttons, sliders and so on into the computer. He demonstrated this by connecting a signal from his audio output to a button, then back into the input, and detecting the result, then demonstrated the same with values from a slider. Next he talked about a more advanced application: encoding an image into an audio stream, then playing and re-recording it to reconstruct the image. This train of experiments is leading towards applications like using Skype (a system for voice over IP) as a way of sending controller data over phones, enabling the use of mobile phones for portable data transmission; he also speculated about audio recordings as data storage and voice-recognition systems as gesture recognition. It struck me that this may be a step backwards if taken literally; the C64 seemed to be doing fine using audio cassettes for data storage, but perhaps the true aim of Jo’s research is to arrive at new ideas by creatively subverting the traditional usage of devices. For one thing, traditional multi-effects could become a lot more interesting in this context.
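To make the idea concrete for myself, here is a toy version of sending a control value through the soundcard (my construction, not Jo’s actual patch; the frequency range and burst length are arbitrary): encode a 0..1 slider value as the frequency of a short sine burst, then recover it on the “input” side by counting zero crossings.

```python
import math

SR = 44100  # sample rate in Hz

def encode(value, duration=0.1, f_min=440.0, f_max=880.0):
    """Turn a 0..1 slider value into a short sine burst at a matching frequency."""
    freq = f_min + value * (f_max - f_min)
    n = int(SR * duration)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def decode(samples, duration=0.1, f_min=440.0, f_max=880.0):
    """Estimate the frequency from upward zero crossings, map it back to 0..1."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0.0 <= b)
    freq = crossings / duration
    return (freq - f_min) / (f_max - f_min)

decoded = decode(encode(0.5))
```

Zero-crossing counting is crude (the round trip here is only accurate to within a few percent), but it survives the kind of distortion a real speaker-to-microphone path, or Skype’s voice codec, would add.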

Finally Alex Nowitz demonstrated his use of the Wii-mote, a use quite different from Tim’s demonstration earlier. Alex was using two Wii-motes (one for each hand) to control STEIM’s LiSa, which he used to sample his own voice, extending its range and enabling him to re-use previously made vocal sounds. While his talk was light on technical details, Alex did mention a few aspects of his way of working that were quite interesting to me. He mentioned how he found it important to use his two Wii-motes in a symmetrical way, so that an outward movement of the left hand corresponded to an outward movement of the right, instead of maintaining an absolute reference where “left is left” on both sides. Another point that interested me was his explanation of how working in theater (where he was expected to be able to repeat the same performance the next night) led him to learn his instrument in more detail than completely free improvisation might have.
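The symmetry point is simple to express in code. A small sketch of my interpretation (not Alex’s actual mapping; I assume outward motion of the left hand shows up as a negative x value): negating the left hand’s axis makes both hands report “outward” the same way, so one gesture means one thing regardless of which hand performs it.

```python
# Mirrored two-hand mapping: both hands report outward motion as positive.

def outward(left_x, right_x):
    """Per-hand outward-motion values under the mirrored convention."""
    return (-left_x, right_x)

def spread(left_x, right_x):
    """Average outward motion of both hands, e.g. to drive one sampler parameter."""
    l, r = outward(left_x, right_x)
    return (l + r) / 2.0
```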

Quite a lot of material to attempt to fit into two hours, I’m thinking as I sit in a largely closed STEIM building attempting to summarize it, but very interesting material. I also learned how to type an “ï” on a Macintosh keyboard, only to have the spell check notify me that that wasn’t the correct spelling of “naive”. I don’t trust that; I also don’t trust the number of red lines below words in this window. A text by me typically has a lot more, but I want something to eat now, so I’m off.

