Analog mush in this piece comes from several self-built piezo contact microphones that pick up the ‘found sounds’, or mechanical sounds, created through using my instruments; this is as far as one can get from technological complexity. The contact microphones will be strategically placed to pick up specific sounds created by:
- button presses on the Monome computer interface
- fader and knob movement on the midi controller
- furniture noise caused by equipment moved during a performance: hand drumming on table/chair/floor, chairs screeching from body movements, accidental or otherwise
This ‘technological enhancement’ will be the only hardware added to the current performance setup. The other elements the performance consists of:
- playing with a given space and audience
physically moving through a watching audience, thus dissolving the line between performer and audience.
- manipulating psychoacoustic perception
the use of voice with and without microphone, amplifying or muting sounds picked up by piezo microphones.
exploring the risk in undetermined form, implying new notions of chance in its relation to art.
- using the environment as an instrument
with the environment at my disposal, I have used tables, chairs, drums, percussion, dolls, cymbals, shoes, sticks, necklaces and an endless array of found objects at a given venue as sound sources.
Related to Andy Schmeder’s talk about micro-OSC yesterday, during which he showed his multitouch pressure fabric prototype, I was reminded of a couple of similar projects with slightly different approaches. The “UnMouse” prototype from Microsoft Research uses custom FSRs to do the sensing, and Randy Jones’s “multitouch prototype 2” just uses copper strips and a multi-channel audio interface for input. Randy’s post brings up a point for discussion:
“You need a 1 kilohertz sampling rate to capture the nuance of live musical performance.”
This of course depends on what type of mapping is employed, and how a controller is used musically - but audio-rate sensor input clearly lends itself to certain situations.
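One way to see why sampling rate matters for gestural nuance: a fast percussive gesture lasting only a few milliseconds can fall entirely between the samples of a slow control stream. The sketch below models such a gesture as a Gaussian pressure spike (a hypothetical illustration, not from Randy’s post) and compares the peak captured at 1 kHz against a more typical 100 Hz control rate.

```python
import math

def gesture(t):
    # Hypothetical percussive gesture: a ~5 ms pressure spike centred at
    # t = 55.3 ms, modelled as a Gaussian pulse (an assumption for illustration).
    return math.exp(-((t - 0.0553) / 0.002) ** 2)

def peak_at_rate(rate_hz, duration=0.1):
    # Sample the gesture at the given rate and return the largest value seen.
    n = int(duration * rate_hz)
    return max(gesture(i / rate_hz) for i in range(n))

# At 1 kHz a sample lands close to the spike's peak; at 100 Hz the
# nearest samples sit milliseconds away and the spike is nearly invisible.
fast = peak_at_rate(1000)
slow = peak_at_rate(100)
```

The exact numbers depend on where the spike falls relative to the sample grid, which is precisely the point: at low control rates, capturing a transient is a matter of luck.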
At the end of the Jamboree opening talk with David Zicarelli, there was a short discussion about Max/MSP’s overtly and obviously masculine user base - and how this concerns David. This is something I think we’re all somewhat conscious of - in interactive art work, computer music, and technologically implicated arts of all kinds. Diversity is key to the advancement of a form, in my mind, as it helps develop voice as well as integration. Also - Atau and I are currently recruiting for the second year of an interactive digital media master’s course we run out of the Culture Lab, and so have become very interested in questions of heterogeneity amongst our student population.
Yolande Harris - one of four women in the room when David brought this point up on Monday - put up a good post addressing the issue a bit on her blog scorescapes.net. I thought it might be good to try and move the discussion over here, if anyone’s up for it! (See my first comments on Yolande’s blog.)
Hope you all are enjoying the Jamboree so far! Later today, I will be giving a presentation on Rhythm and Sequencing for Live Performances.
I’m going to talk about blurring the lines between the use of live and concrete sounds in the context of a live performance, followed by electronic rhythm creation and sequencing using Ableton Live and Pure Data.
First Part:
Having played with a Monome and a MIDI controller over the past year, I am trying to push the envelope further in regards to live performance. Recently I have augmented my new live sets with the inclusion of live table banging and foot stomping going hand in hand with concrete sounds.
Second Part:
As a happy Ableton user, the only thing I find missing in the program is modulation. By bridging Ableton with Pure Data (PD) using MIDI-OX, I’m able to use the Pandora’s box of modulation sources that PD is to shift and shake the rhythms in Ableton!
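The core of this kind of bridge is simple: PD generates a slow modulation signal, quantizes it to the 7-bit range of a MIDI Control Change message, and the MIDI stream is routed (here via MIDI-OX) into Ableton, where the CC is mapped to any parameter. As a minimal sketch of that signal path, and not the author’s actual patch, here is a sine LFO quantized to CC values in Python; the rates and function names are illustrative assumptions.

```python
import math

def lfo_cc_values(lfo_hz, steps, control_rate_hz=50):
    # Sketch of a PD-style modulation source: a sine LFO sampled at a
    # control rate and quantized to 7-bit MIDI CC values (0-127),
    # the kind of stream a MIDI bridge would carry into Ableton Live.
    values = []
    for i in range(steps):
        t = i / control_rate_hz
        v = (math.sin(2 * math.pi * lfo_hz * t) + 1) / 2  # normalize to 0..1
        values.append(int(round(v * 127)))                # 7-bit CC range
    return values

# One second of a 2 Hz LFO at a 50 Hz control rate:
ccs = lfo_cc_values(2.0, 50)
```

In practice each value would be wrapped in a Control Change message (status byte, CC number, value) and sent out a virtual MIDI port; inside PD the same job is done with objects like `osc~` or `phasor~` feeding `ctlout`.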
A real sense of anticipation for this Jamboree. I had felt very apprehensive walking from the Metro to STEIM on my last visit back in November, my first time back since Michel’s passing. It was enormously reassuring to see the place well, and just keepin’ on going. Then the news that financial support had been reinstated came as a joyful relief. The Jamboree affirms the continued livelihood and liveliness of the place, and of the community.
I rushed up on an early train Monday to be able to work w/ Jorgen to finish my instrument, and as I worked away in the workshop, the afternoon leading up to the first session was a wonderful reunion. David Z walked in, first visit to STEIM, went out for lunch w/ Frank and Taku. Then there was Dan Overholt in the software room, tinkering around w/ his new touch-joystick instrument. As if he had never gone away. Then I turned around and Sukandar had walked in -
David Zicarelli, founder and CEO of Cycling ’74, the company behind Max/MSP, introduced the design decisions behind the very different Max 5. The design improvements really reflect an attempt to incorporate creative users’ needs, aiming for a fluency in programming that allows the creative process to take the foreground.
I particularly liked the presentation mode feature, which recognizes the different processes of building/programming and testing, allowing two different views of the same patch to be open and worked on at once. This division of the programming process and the interface starts to recognize the conceptual mismatch between the process of technical design and the creative goals. Why is it that many Max users play around with their patches without having a predefined idea of the outcome that’s needed? Why does Zicarelli express surprise at this? Musically, how close is this kind of working to improvisation rather than composition? Does Max/MSP facilitate a way of working that allows fluency of conceptual musical ideas and not just technical fluency?
In using Max/MSP I tend to push the material around until I get the sonic results I want. I don’t try to define the outcome before embarking on it, I concentrate on the process. I aim to let the ‘material’ - the data I’m processing for example - find characteristics audible in sound that I wouldn’t be able to predict before I hear it. Sound is not programming! There needs to be a possibility to play with the sound while building, to play with the programming, to change, to tweak, to experiment, to allow mistakes, to allow the possibility of discovering new ideas, and to ultimately shape sound in ways that are not limited to the conceptual thinking of the programming language that it’s built in.
This is why I like Max and still work with it, after 10 years, as the program of choice for the work that I make. But, as was pointed out by Zicarelli, I am one of a shocking minority of women in the Max/MSP community of users. Why? Why do I like it? Why am I one of only four women in the room? It points to a larger problem visible in the minority of women in electronic music.
David’s attention to the male dominance visible in the Max community is at least an acknowledgement of the problem. But it cannot be taken seriously when followed by jokes about the ‘female nature’ of the roundness of buttons/bangs in the graphical interface: a joke that made all the men in the room laugh, alienated the women, and continued the conversation on a male level.