Mark IJzerman – Audiovisual live-set development residency


During my STEIM residency in the first 2.5 weeks of January 2016, I worked on my audiovisual live set. For quite a few years I've been performing a live set built around tape loops on reel-to-reel recorders and 4-track cassette recorders, together with small instruments and guest musicians.
I hadn't performed in a while when FIBER asked me to play at their 5-year anniversary party, a very informal night for friends and family, which turned out to be a lot of fun. After the performance, I was asked if I wanted to play Spektrum in Berlin at the end of January, at a pre-CTM gig where I would be one of the artists representing the Netherlands under the FIBER flag.
Around the same time Tijs from STEIM asked me if I wanted to do a residency. I was supposed to work on an installation, and was very much looking forward to that. But combining the two meant being able to play a gig in Berlin, and I’d been wanting to showcase my work within a FIBER context.

During my two week (short!) residency, I had a few goals:
– I wanted a live set that was still very tape-centered but possible to travel with
– I wanted to create new material for this which I could play with
– I wanted to create a visual counterpart.

During my residency, I spent about 60% of my time working on the visual counterpart and 40% on creating a lightweight tape instrument and the audio material for the performance. I'd never REALLY done audio-reactive graphics programming I'd been happy with. After a short "research" period on my parents' couch during the Christmas holidays leading up to my residency, I sought out some "organic" visual algorithms that could fit the organic tape aesthetic of my performance.
I also started going through audio recordings I had made in Guangzhou, China, as well as old historic footage from the same place, which I wanted to combine with whatever visual algorithm I would end up creating.

MAKING A SMALL FRIPPERTRONICS MACHINE

frippertronics
(a frippertronics schematic)
During the first days, I transported all the instruments and tape machines I might be using to the STEIM studio, and started out re-creating a process (Frippertronics) that I normally do with two big reel-to-reel machines, but on a cassette-based system. After trying to build a working prototype and talking to Frank Balde, I figured this would be very fiddly.
I decided to find a way to get my RE-201 Space Echo modded so it could function as a Frippertronics machine. I was looking for a person who'd be able to help me achieve this, and found that person in Ohm Dirk. Ohm Dirk is a guy called Dirk, who has his workspace in Noord and mods or repairs old electronic musical instruments. With him I was able to devise a system to turn off the erase head on my Space Echo (by putting a switch on the bias), creating a Frippertronics-like machine. What we didn't manage was to make the amount of erasure controllable with a pot: we hooked a pot up to the bias of the erase head, but this didn't give us good control. For the future, our plan is to install an extra playback head before the record heads.
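The principle behind the mod can be sketched digitally: the tape loop is a circular buffer, the record head sums new input onto it, and the erase head scales down whatever is already there before recording. Switching the erase head off means old layers are never wiped and sound-on-sound accumulates indefinitely. A minimal Python sketch (function and parameter names are my own, and this is an idealization, not the actual Space Echo circuit):

```python
def frippertronics(dry, loop_len, erase=1.0):
    """Tape-loop delay with a controllable erase head.

    erase=1.0: the erase head wipes the tape before the record head
    writes, giving a single-repeat echo. erase=0.0 (erase head
    switched off, as in the Space Echo mod): old layers are never
    wiped, so sound-on-sound accumulates indefinitely.
    """
    tape = [0.0] * loop_len                        # the endless tape loop
    out = []
    for i, x in enumerate(dry):
        pos = i % loop_len                         # loop position under the heads
        out.append(tape[pos])                      # playback head reads the old layer
        tape[pos] = (1.0 - erase) * tape[pos] + x  # record head writes on top
    return out
```

With the erase head off, a single impulse keeps returning on every pass of the loop instead of disappearing after one repeat.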

IMG_0569
(the setup)

 

CREATING A VISUAL COUNTERPART

IMG_0398
(very early test of the algorithm & testing it with the video feed)

 
Often I don't really like visuals that accompany music, so my pre-residency research was crucial to getting off to a good start. I wanted an algorithm that would not react immediately, but had a life of its own, while still being able to tell a story. After finding out about reaction-diffusion algorithms, I started looking into ways to actually code these in Max. Pixel shaders have been around in Max for quite some time, but as I'd never looked into them, they seemed like magic to me. I found out there was an example patch of the r-d algorithm within Max (Help > Examples > Jitter-examples > gen > reaction.diffusion.color.world), and started fiddling with this to create something that would fit aesthetically. I also found another Max/Jitter user named Paul Fennel who was doing similar stuff. His patches helped me understand the system, and because of this I was quickly able to build a framework within which I could make something that would make sense in a performance context.
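For reference, the kind of reaction-diffusion system the Jitter example implements can be sketched on the CPU. This is the Gray-Scott variant with commonly used parameter values; an illustrative NumPy sketch, not the gen patch itself:

```python
import numpy as np

def gray_scott(u, v, du=0.16, dv=0.08, feed=0.055, kill=0.062, steps=100):
    """Iterate the Gray-Scott reaction-diffusion system on a torus.

    u is the "substrate" chemical, v the "activator": patterns grow
    where v consumes u. feed/kill values here give coral-like growth.
    """
    def laplacian(a):
        # 4-neighbour Laplacian with wrap-around edges
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    for _ in range(steps):
        uvv = u * v * v
        u = u + du * laplacian(u) - uvv + feed * (1.0 - u)
        v = v + dv * laplacian(v) + uvv - (feed + kill) * v
    return u, v
```

Seeding a small patch of v in a field of u and iterating makes the pattern spread outward organically, which is exactly the "life of its own" quality I was after.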

(one of the early examples I recorded when the final visual algorithm was in place)

On the other side, I researched realtime audio descriptors that could drive the algorithm's parameters. I ended up using several, from absolute amplitude, to the square root of the signal for event detection, to a number of descriptors from IRCAM's FTM package for things like pitch, mean energy, MFCCs, et cetera.
Using these, I constructed my patch so that I could swap out the way the audio descriptors were driving the algorithm's parameters by using a poly~ object. I then spent about 5 days making different "presets": behaviors of the r-d algorithm in response to the incoming audio descriptors. These are performed live, just by pressing a number (which corresponds to a preset) on the keyboard. Some of these include "construct a circle in which video is playing, and the noisier the signal, the bigger parameter X will be". Or "the higher the mean pitch, the bigger parameter X". Or a combination of these.
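The mapping layer can be sketched as a table of small functions, one per keyboard number, each turning descriptor values into an algorithm parameter. Descriptor names and scaling factors below are hypothetical; in the actual patch this logic lives inside swappable poly~ voices:

```python
import math

def rms(frame):
    """Root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# One entry per keyboard number: a "preset" mapping descriptor values
# to a reaction-diffusion parameter (names and scales are illustrative).
PRESETS = {
    1: lambda d: 0.02 + 0.08 * d["rms"],          # noisier signal -> bigger parameter
    2: lambda d: 0.02 + 0.08 * d["pitch_norm"],   # higher mean pitch -> bigger
    3: lambda d: 0.02 + 0.04 * (d["rms"] + d["pitch_norm"]),  # a combination
}

def feed_parameter(preset, frame, pitch_norm=0.0):
    """Compute the parameter for the currently selected preset."""
    descriptors = {"rms": rms(frame), "pitch_norm": pitch_norm}
    return PRESETS[preset](descriptors)
```

Swapping behaviors is then just a lookup by number, mirroring how pressing a key on the keyboard selects the active mapping.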
Following this, I worked on a way to combine the algorithm with video material I collected last year. Using white pixels from the video as "active" pixels for the r-d system allowed me to use video in a uniquely fluid way.
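That video-coupling step amounts to thresholding each frame: near-white pixels become "active" cells seeding the reaction-diffusion field. A minimal sketch on a grayscale frame with values in 0..1 (the function name and threshold are illustrative):

```python
def seed_mask(frame, threshold=0.9):
    """Turn a grayscale video frame into a seed mask: pixels at or
    above the threshold count as 'active' cells for the r-d system."""
    return [[1.0 if px >= threshold else 0.0 for px in row] for row in frame]
```

The pattern then grows out of the bright parts of the footage, so the video steers where the algorithm lives rather than just playing underneath it.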

FINISHING THE PERFORMANCE

IMG_0719
(hard at work; picture by Frank Balde)

 

In the last week I mainly worked on the performance itself. This meant putting new audio material on the endless-loop cassette tapes I use, getting to know my new synthesizer, tuning the visual algorithms to the sounds and vice versa, and just rehearsing and reflecting a LOT.

THE PREMIERE

Doing the performance at Spektrum in Berlin was amazing. FIBER did a great job organizing an artist meetup beforehand, where we were able to meet several artists from Berlin. After the performance I got such good feedback that both FIBER and LIMA are willing to commission a new piece from me. I still have to do the performance at STEIM though; would love to!

CONCLUSION
I loved having the time to do a residency. Normally I'm often rushed, as I've got multiple projects going on. The residency and the performance deadline forced me to spend almost all of my time on a two-week sprint to get a performance in place, and it worked.
Having the STEIM personnel there helped me greatly: discussions with Frank, Tijs, Marije and Dick helped me in my thought process. Also meeting some of the other artists working there, like Gabey Tjon a Tham who was in the studio next to mine, sparked lots of inspiration.
Seeing Gabey at work made me want to make a physical installation piece though… maybe another time! Thanks STEIM for the residency!
