Zeno van den Broek & Robin Koek – Raumklang Residency

This summer we spent two weeks in residency at STEIM working on Raumklang, a project focused on creating site-specific sound in physical space. Based on acoustic features, we develop interactive binaural sound sculptures with architectonic dimensions.

Objectives:

The residency was a follow-up to initial empirical research done at the V2 institute in Rotterdam, where we had the opportunity to work with the Usomo system that is installed locally at the institute. This allowed us to prototype the interactive environment without yet having to address the sensor technology of the project.

In this initial phase we developed the primary virtual architecture for the sound engine in Max/MSP and the visual environment for designing the sculptures in TouchDesigner. This was a great first step in building the virtual foundation of Raumklang. There was, however, a major caveat in the long run: we could not continue developing with this sensing technology, as Usomo maintains a closed system as part of its business strategy.

This led to our ambition for the initial part of the STEIM residency: to integrate a spatial tracking system of sustainable quality, one we could keep advancing in further stages of the project and that would hold up when scaling to many users. After conducting research into potential alternative tracking solutions and gathering recommendations from peers, we found the Belgian company Pozyx, which offers several options for highly accurate position tracking based on a wireless radio technology called ultra-wideband (UWB). Within the scope of this initial prototype phase we acquired their developer's kit, which allows tracking of up to five users simultaneously. The system works with anchors, which both deploy and measure the radio field, and an individual tag for each user, equipped with an antenna.

One of the ways to work with the Pozyx system is to read the tag's position directly on an Arduino or Raspberry Pi, using a Python script that gathers the tracking data from the anchors. Wanting extremely low-latency sonic interaction in physical space, we decided to move forward with a configuration running each tag on an individual Raspberry Pi, as this would allow us to give each user their own sound engine. This new system had a number of practical implications:

  • The sound engine had to be re-developed in Pure Data, as we preferred to keep running on Linux (Raspbian) for low-latency optimization
  • Communication between each Pozyx tag and TouchDesigner had to be established, reading from the Pi over OpenSoundControl (OSC) via wifi, so that all the calculations based on the position in the room relative to the sculpture could be performed and the results distributed back into Pure Data running on the Pi (a sketch of this per-tag reading follows the list below)
  • The environment in TouchDesigner had to be restructured for the new set-up
  • A wearable integration of the individual Pis and sensors had to be designed for the users
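
As a sketch of that per-tag reading: the snippet below shows how position and head rotation could be read from a Pozyx tag on the Pi and forwarded to TouchDesigner over OSC. This is a minimal sketch, not our actual script; it assumes the pypozyx and python-osc libraries, and the IP address, port and OSC address names are placeholders.

```python
# Minimal sketch: read position and head rotation from a Pozyx tag on
# the Pi and forward both to TouchDesigner over OSC. The IP, port and
# OSC addresses are placeholders, not the actual Raumklang configuration.
from pypozyx import (PozyxSerial, get_first_pozyx_serial_port,
                     Coordinates, EulerAngles, PozyxConstants, POZYX_SUCCESS)
from pythonosc.udp_client import SimpleUDPClient

TD_IP, TD_PORT = "192.168.1.10", 9000   # machine running TouchDesigner

pozyx = PozyxSerial(get_first_pozyx_serial_port())
client = SimpleUDPClient(TD_IP, TD_PORT)

position = Coordinates()
rotation = EulerAngles()

while True:
    # UWB positioning against the anchor network (coordinates in mm)
    if pozyx.doPositioning(position, PozyxConstants.DIMENSION_3D) == POZYX_SUCCESS:
        client.send_message("/raumklang/pos", [position.x, position.y, position.z])
    # The tag's on-board IMU provides the wearer's head rotation
    if pozyx.getEulerAngles_deg(rotation) == POZYX_SUCCESS:
        client.send_message("/raumklang/heading", rotation.heading)
```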

The second part of our objectives was less technically oriented and could only be taken up once the interactive system was running: to engage with the physical space and make a site-specific sound composition to be implemented in the sculpture. We were grateful to be able to work in the large room at STEIM, which allowed for this acoustic exploration.

Progress

The first week got off to a smooth start: on the first day we rapidly achieved quite decent tracking results with the Pozyx system following their cloud method, which uses one central tag to collect the data and communicate with the anchors via the MQTT machine-to-machine connectivity protocol. The company provides a cloud-based environment in which the settings of the overall tracking system can be tweaked.
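
For illustration, reading that MQTT stream in Python could look like the sketch below. It assumes the paho-mqtt library; the broker host and topic are placeholders, and the payload field names are an assumption based on Pozyx's published examples, so they may differ per setup.

```python
# Minimal sketch of subscribing to the Pozyx MQTT stream of tag updates.
# Broker host and topic are placeholders; payload field names are an
# assumption based on Pozyx's examples and may differ per firmware.
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "localhost", "tags"   # hypothetical local gateway

def on_message(client, userdata, msg):
    # Each message carries a JSON list of tag updates
    for update in json.loads(msg.payload):
        coords = update["data"]["coordinates"]
        print(update["tagId"], coords["x"], coords["y"], coords["z"])

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```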

The next step was to read the position data directly from the Raspberry Pi, to increase processing speed and make it possible for each user to have an autonomous data flow. Here we ran into the first obstacles: getting the Pozyx Python library to work with the version of Python we were running on the Pi. After losing quite some time on this we managed to get a script running that read the sensor data directly on the Pi. The next hurdles were integrating OSC communication and adapting the script to also read the head rotation from the data package. Several factors accumulated (our lack of thorough knowledge of Python and inadequate customer support from Pozyx) into a real setback in this area, one that would not be resolved until the second residency week in August.

Meanwhile, in search of alternative ways of working with the sensor data, Zeno focused on reading the MQTT stream into the patch in TouchDesigner, while Robin explored different granular synthesis models for the Pure Data sound engine. By the end of the first residency week the MQTT solution was working in Touch, and communication with Pd was established over OSC. Some goals were met, but we were far behind our original plan, which led us to add one more week of development at STEIM.

In the second week of the residency we were able to make big steps in both sound quality and responsiveness. We got the direct reading of both position and orientation running straight from Python on the Pi, flowing over OSC into Touch and back with all the applied calculations, mapping and scaling for interaction with the sound engine. By this time we had also invested in a strong wifi router so we could perform all network communication over a closed 5 GHz network, which had a tremendous impact. Other tweaks for optimal performance included adding a fan to cool the Pi and adding a pHAT DAC to take over the digital-to-analog conversion of all the sound processes. Another element that took quite a bit of research was finding a USB cable with the right gauge (AWG) to meet the rate at which the Pi draws power; in the end the Anker PowerLine micro-USB cables gave the best results.
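
To give an idea of that mapping and scaling stage, the sketch below derives the listener's distance to a virtual sound wall and scales it to sound-engine parameters sent to Pure Data over OSC. The wall position, parameter ranges and OSC addresses are purely illustrative, not the actual Raumklang mapping.

```python
# Hypothetical sketch of the mapping stage: distance to a virtual wall
# is scaled to engine parameters and sent to Pure Data over OSC.
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9001)   # Pd patch listening for OSC

WALL_Y_MM = 3000.0   # virtual wall placed 3 m into the room (illustrative)

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear scaling with clamping, as one would patch it in TouchDesigner."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def on_position(x_mm, y_mm, z_mm):
    distance = abs(y_mm - WALL_Y_MM)                        # mm to the wall
    gain = scale(distance, 0.0, 5000.0, 1.0, 0.0)           # closer -> louder
    grain_ms = scale(distance, 0.0, 5000.0, 20.0, 200.0)    # closer -> shorter grains
    pd.send_message("/engine/gain", gain)
    pd.send_message("/engine/grainsize", grain_ms)
```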

In the middle of the second week we got into the exciting phase of creating a site-specific composition, recording it and placing it in the virtual sculpture. The composition was written based on recordings of electromagnetic fields, clusters of sine waves and small rhythmical exponential pulses: a wide range of frequencies and different sources to support the audible navigation of the wall in the sculpture.

The results were really satisfying, and navigating the sculpture in the darkened room at STEIM was a liberating feeling: a proof of concept on a completely mobile system, with the sculpture freely experienceable in space.

Prototyping Raumklang

Tweaking the granular engine at STEIM: https://www.instagram.com/p/BmL2_TIHlqN/?taken-by=raum_klang

Nevertheless, on the last day we ran into an unforeseen obstacle. Having focused on single-user interaction the entire time, optimizing the sound and mapping on a single Pi (with a precision and resolution we had never experienced before!), we stumbled on lags when switching to a multi-user configuration with multiple Pis running simultaneously. This was due to the lack of TDMA (time-division multiple access) in our setup: TDMA divides the incoming data from each device over time slots at a given frequency, a kind of high-frequency, schedule-based priority management. We had somewhat naively assumed we could bypass this by reading directly from the Pi, based on misinterpreting information given by Pozyx, but in the end there has to be some packet management to avoid interference between data streams. This leads to a severe decrease in the response rate of the interaction: essentially, the available bandwidth is divided by the number of users, a division by four in our system, so each user added to the experience gets a less direct response and additional latency.
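
The arithmetic of that division is simple but unforgiving. As a worked example (the 60 Hz aggregate rate is a hypothetical figure, not a measured one):

```python
# Illustrative arithmetic only: with TDMA, the aggregate positioning
# rate is shared between tags, so the per-user update rate (and with it
# the responsiveness) degrades linearly with the number of users.
AGGREGATE_HZ = 60.0   # hypothetical total update rate of the UWB network

for users in (1, 2, 4):
    per_user_hz = AGGREGATE_HZ / users
    gap_ms = 1000.0 / per_user_hz   # time between successive updates per user
    print(f"{users} user(s): {per_user_hz:.0f} Hz each, ~{gap_ms:.0f} ms between updates")
```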

In the end we reverted to the MQTT method to have a stable solution for the upcoming presentation at Gaudeamus. The conclusion of the residency also focused on the final details for that presentation, including the casings to be designed, which Zeno developed and finished in Copenhagen. On the left side of the image is the tag, which will be mounted on the headphones (a vertical position proved optimal for signal quality, giving the antenna the best connection); the box on the right contains the Pi and a power bank.

Casing Raumklang

All in all, we had two extremely valuable weeks of residency at STEIM, in which we were able to create a first raw prototype of Raumklang. It was a great experience working at STEIM and we are very grateful for their support!

 
