“Noise Trumpet,” as some of my friends call it, means using extended techniques on the trumpet to make the instrument sound, to most ears, un-trumpet-like. These techniques include the use of air as noise, valve sounds, split-lip multiphonics, sung multiphonics, throat growls, tongue stops and modulations, changing the resonant shape of the inside of your mouth, playing more with the inside of your lips, playing more with the outside of your lips, playing with less lip pressure, more lip pressure, bouncing the lips off the mouthpiece, opening your teeth more, closing your teeth more, more air, less air, letting air escape, playing very quietly, playing very loudly, very low pitches, very high pitches, and circular breathing to make constant harmonic/melodic/rhythmic textures beyond the traditional idiomatic uses of the instrument. Not to mention physically reconfiguring your instrument: taking it apart, adding mutes, tubes and more.
When mixing recordings of myself and others playing trumpet, I noticed while using an FFT visualizer during the EQ stage that, unsurprisingly, notes played in a traditional manner produce resonant spikes in a fairly predictable pattern of overtones. Equally unsurprisingly, in extended modes of playing, the patterns of resonant frequencies become more dynamic and complex. Yet even though it was unsurprising, this was somehow interesting to me, and it made me wonder: is there a way this can be used in my software patch in an improvised-music setting? That question is what I explored during my STEIM residency in September 2009.
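If you want to see that observation in code rather than on a visualizer, here is a minimal Python sketch (mine as an illustration, not part of the patch). It synthesizes a "clean" harmonic tone and measures the energy at, and between, integer multiples of the fundamental using the Goertzel algorithm; the sample rate, window length and pitch are just illustrative choices.

```python
import math

SR = 44100          # sample rate in Hz (illustrative)
N = 4096            # analysis window length (illustrative)

def goertzel_mag(samples, freq, sr=SR):
    """Magnitude of a single frequency component (Goertzel algorithm)."""
    w = 2.0 * math.pi * freq / sr
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return math.sqrt(s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2)

# A "clean" tone: fundamental plus decaying overtones. 233 Hz is roughly
# a Bb below middle C, a comfortable trumpet register -- an example pitch.
f0 = 233.0
win = [0.5 - 0.5 * math.cos(2 * math.pi * t / N) for t in range(N)]  # Hann window
tone = [win[t] * sum((1.0 / n) * math.sin(2 * math.pi * n * f0 * t / SR)
                     for n in range(1, 8))
        for t in range(N)]

# Energy sits at integer multiples of f0, and almost nothing in between --
# the predictable spike pattern of traditionally played notes.
for n in range(1, 5):
    on = goertzel_mag(tone, n * f0)
    off = goertzel_mag(tone, (n + 0.5) * f0)
    print(f"harmonic {n}: {on:8.1f}   halfway between: {off:8.1f}")
```

With a noisy extended-technique sound in place of the clean tone, the "between" measurements would no longer stay near zero, which is exactly the difference the visualizer showed.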
The exploration was somewhat frustrating at first, and I won’t go into the tedious and somewhat academic details, but I did come up with a way of spatializing the resonant bands that I found interesting and was able to incorporate into my Max/MSP rig.
The first technical question was answered by Miller Puckette’s “sigmund~” object, which I would use to track the fundamental frequency of what I was playing. This base pitch would then set a series of seven biquad filters to divide my trumpet sound into discrete frequency bands. (sigmund~ is a great tool, and I was very impressed with the speed at which it would track even noisy trumpet sounds.) These discrete bands were routed into seven channels of audio, then sent into ICST’s Ambisonics externals for spatialization; in other words, I’m able to locate each channel in three-dimensional audio space. (These tools are free, so know that I’m not trying to sell you something when I say they are truly fantastic.) The spatial placement was at first predetermined by sending patterns to the ambisonic controller, but by running amplitude and pitch following on the seven audio channels, the software can now decide at certain points where the sounds are placed, whether in a predetermined, random or otherwise generated manner.
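The band-splitting idea can be sketched outside of Max. The following Python sketch is an illustration, not the actual patch: it stands in a fixed fundamental for sigmund~’s pitch tracking and uses standard RBJ-cookbook bandpass biquads centered on the harmonics; the Q value and the toy input signal are my assumptions.

```python
import math

SR = 44100  # sample rate in Hz

def bandpass_biquad(center, q=8.0, sr=SR):
    """RBJ audio-EQ-cookbook bandpass (constant 0 dB peak gain),
    returned as normalized (b0, b1, b2, a1, a2)."""
    w0 = 2.0 * math.pi * center / sr
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def run_biquad(coeffs, signal):
    """Direct Form I biquad over a whole signal."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def split_bands(signal, fundamental, n_bands=7):
    """Divide a signal into bands centered on harmonics of the tracked
    fundamental -- the role the pitch tracker plays in the patch."""
    return [run_biquad(bandpass_biquad(fundamental * (n + 1)), signal)
            for n in range(n_bands)]

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

# Toy input: a fundamental (233 Hz) plus its 3rd harmonic only.
f0 = 233.0
sig = [math.sin(2 * math.pi * f0 * t / SR) + math.sin(2 * math.pi * 3 * f0 * t / SR)
       for t in range(SR // 10)]

bands = split_bands(sig, f0)
# Bands 1 and 3 carry energy; band 2 (no 2nd harmonic present) stays quiet.
for i, b in enumerate(bands, start=1):
    print(f"band {i} ({i * f0:.0f} Hz): rms {rms(b):.3f}")
```

Each of those seven band outputs then corresponds to one audio channel headed for the spatializer; in the real rig the center frequencies move continuously as the tracked pitch moves.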
The idea at first was that this would be something I could use in an electro-acoustic improvised audio environment: something that would respond quickly to my sound just by my playing, without my needing to exert external control over the software with pedals, buttons, faders and so on.
But this became so enjoyable to work with that I’m now working on routing the band data into areas to be controlled and modified by other patches.
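As a rough illustration of the kind of band data that can drive other patches, here is a simple one-pole envelope follower in Python. This is a common technique for turning a band’s amplitude into a slowly varying control signal; the azimuth mapping at the end is a hypothetical example of a control destination, not the actual routing in my rig.

```python
import math

class EnvelopeFollower:
    """One-pole envelope follower: instant attack, exponential release.
    Turns an audio band into a slowly varying control value."""
    def __init__(self, sr=44100, release_ms=50.0):
        self.coeff = math.exp(-1.0 / (sr * release_ms / 1000.0))
        self.env = 0.0

    def process(self, sample):
        rect = abs(sample)
        if rect > self.env:
            self.env = rect  # instant attack
        else:
            self.env = self.coeff * self.env + (1.0 - self.coeff) * rect
        return self.env

# Hypothetical mapping: a band's envelope steers an azimuth angle,
# louder bands swinging further around the listener.
def envelope_to_azimuth(env, max_deg=180.0):
    return max_deg * min(env, 1.0)

sr = 44100
follower = EnvelopeFollower(sr=sr)
# A short 440 Hz burst followed by silence.
burst = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 100)]
burst += [0.0] * (sr // 100)
envs = [follower.process(s) for s in burst]
print(f"peak envelope {max(envs):.2f}, envelope after silence {envs[-1]:.2f}")
```

The envelope rises immediately with the burst and decays smoothly through the silence, which is what makes it usable as a control stream rather than raw audio.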
My residency at STEIM was a very productive and inspiring time on multiple levels. In addition to the creative/technical work mentioned above and grant writing, I was able to perform a concert with Michael Moore and Michael Vatcher (see video) and had a very nice recording session with Anne LaBerge. I also had some wonderful visitors during my residency, including violinist Johnny Chang and composer Trevor Grahl, and was able to spend time working with some young Amsterdam artists, assisting them in developing their own software and talking with them about what I was working on. Dinner with the ICP at BIMHUIS two nights in a row was a real treat: hanging with Mary Oliver, Han Bennink, Susanna Von Cannon, Thomas Heberer, Carolyn Muntz, Jodi Gilbert, and lots of time with Michael Moore. Also: big thanks to Esther Roschar for helping me get my grant out, among other things; Takuro Mizuta Lippit for software and creative dialogue; Vivian Wenli Lin for the great video; Alex Nowitz for the hang-time and for walking long distances to eat vegetarian with me; and Nico Bes for everything he does (which is everything). So many more people to thank for great discussions and food and music: John Dikeman, Mike Straus, Dana Jessen, Taylan Susam. Only one regret: not enough chess time with Michael Vatcher…!