29 Mar

Meeting Notes March 24, 2011

Posted by Jon

Attending: Taku, Georgios, Dick, Berit, Jon

* NIME plan, who will be going and what is our agenda

Taku spoke with Wouter about how much money we would need. At this point he thinks it is definite that Georgios, Jon, Berit, and Daniel will go: Georgios, Jon, and Berit to represent SRG through a workshop, Jon and Berit to present their papers, and Daniel for his performance and demo session. Kristina and Joel had their magic machines workshop accepted. Taku, Alex, and Richard Scott had performances accepted as well. It would be good to send Alex in order to test the shells in a live setting prior to the festival. Taku could go either way. Kristina could also go either way, and she’d rather see a single solid workshop come from STEIM than have them overlap in material.

* NIME workshops

Georgios, Jon, and Berit would like some advice from Kristina and Joel, and would even like to participate in their magic machines workshop in order to get some insight into how to conduct a workshop.

We (Georgios/Jon/Berit) will test drive our workshop during the orientation May 3rd – May 10th.

Kristina is fine with not going; she will help us develop this workshop after April 10 (when Jon gets back into town). Taku submitted the workshop as a way of getting us there. We can take ideas from the SRG and the rejected papers and turn them into a workshop. Dick and Taku also suggested they would help us.

Some workshop ideas coming from Instruments from Instruments:
Q: Defining style. Defining the instrument. Cross mapping. People mapping. Describing an instrument as a system and a practice. Think of instruments which comprise complex interrelationships of systems. Dick says: the idea of a workshop is to give participants something to work on.
The idea is to use this workshop as a platform to present ideas we have been working with in SRG. Develop a workshop from this paper, or even from Georgios’ paper?

We meet next Thursday to brainstorm for the workshop.

* Workshop registration and lodgings in Oslo, attendance, etc.

Jon is looking into housing with NOTAM. Daniel and Jon both found cheap flight tickets. Let’s book travel arrangements by early April at the latest.

Someone (?) will ask the NIME people if we get into the workshops for free as workshop teachers. Otherwise we register individually.

* Paper resubmission.

For the experiment paper, we’ll wait until we have the experiment data then resubmit to LMJ, CMJ, or elsewhere.

* Jon is giving an oral presentation

They are mostly free-format, about 15 minutes + 5 minutes questions. A demo is also possible.

Categories: NIME, Workshops
10 Mar

SRG Meeting March 3rd

Posted by berit

Present: Taku, Dick, Georgios, Jon, Berit

Wekinator: machine-learning mapping software based on Weka, a Java machine-learning library

Experiment: We have software that can record, play back, and eventually extract features. We need gesture descriptors to understand how different controllers have different musical applications.
Questions that were raised: Is it a good idea to do an experiment based on “lab conditions”? Isn’t STEIM about artistic practice, and shouldn’t the software research be based on that?
Consensus: the experiment is a proof of concept, which should not consume too much time. In the long run, interviews with artists and annotation of musical passages together with them should be the way to go for further development.

Other issues raised by this discussion:
Phrasing and time: is that information on the interaction in itself? Cf. animation: keypoints from human motion applied to different objects retain a “human” quality. Is that true for music performance as well?
Expressive gestures: how should we deal with “expressive gestures”, i.e. gestures which are theatrical, and do not trigger anything (e.g. Alex doing an expressive hand gesture while pressing a button)?

10 Mar

Mini-Instrument Spec for Experiment Next Week

Posted by Jon

Today Berit and I worked a bit further on the initial Instruments from Instruments cross-mapping experiment. I’m talking to Ivo Bol at the moment to see if he’s available to be our performer for the experiment. I’ll let you all know when/if that comes to fruition; otherwise we’ll just go ahead and do the recording next Thursday afternoon (Mar 17) with someone at STEIM. The plan is to finish the experiment next Thursday, and then meet formally as a group the Thursday after that (Mar 24) to discuss next steps and NIME. By then we’ll know which of our papers/performances/etc. were accepted. :)

We tested everything out and it looks like the performance recorder is working well in conjunction with the OSC listener/broadcaster in SuperCollider.

The recorder program, along with some example data, an OSCulator setup, and a Max/MSP testing patch, is downloadable here (choose “load state” to load the example performance data along with the input/output configuration).

Plans for next week: Berit will finish her instrument, and Georgios and I will each create a small synthesis engine to use in the experiment, which can be controlled by the replayed performance data. Here are the specs for these instruments:

1) They should be able to receive OSC from the performance tracer, listening on port 57120 if possible to make configuration easier.

2) The instruments should be capable of recording their own audio output to a file, when signaled to do so from the performance tracer. These files will be the audio recordings we use for the experiment.

3) The instruments should respond to the following OSC control parameters:

/roll f

/pitch f

/yaw f    (not very reliable)

Continuous roll, pitch, and yaw from the Wiimote’s accelerometer; each sends one float argument from 0.0 to 1.0. It’s not a good idea to depend too much on yaw here, because without the gyroscope addition the Wiimote isn’t able to interpret orientation around the z-axis unless you point it upwards (really awkward).

/toggleB i

Momentary toggle, the “B” button from the Wiimote, an integer either 1 (pressed) or 0 (released).

/playback i

This message is sent from the performance tracer to signal when playback of the performance has begun and ended, for the purpose of knowing when to start and stop recording audio output. Integer 1 means the playback has begun; integer 0 means it has ended.
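For anyone wiring up an instrument outside SuperCollider or Max (which handle OSC natively), the messages above are plain OSC on the wire: a null-terminated address pattern, a type-tag string beginning with “,”, and big-endian 32-bit arguments, each field padded to a 4-byte boundary. The sketch below shows that encoding in Python; the helper names are made up for illustration and are not part of the performance tracer.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad a byte string to a 4-byte boundary, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def encode_osc(address, *args):
    """Encode a simple OSC message such as /roll 0.5 or /toggleB 1."""
    msg = osc_pad(address.encode())
    # Type-tag string: 'f' for floats, 'i' for ints, prefixed by ','.
    tags = "," + "".join("f" if isinstance(a, float) else "i" for a in args)
    msg += osc_pad(tags.encode())
    for a in args:
        msg += struct.pack(">f" if isinstance(a, float) else ">i", a)
    return msg

def decode_osc(data):
    """Decode a simple (non-bundle) OSC message back into (address, args)."""
    end = data.index(b"\x00")
    address = data[:end].decode()
    offset = (end // 4 + 1) * 4          # skip padding after the address
    tag_end = data.index(b"\x00", offset)
    tags = data[offset + 1:tag_end].decode()  # drop the leading ','
    offset = (tag_end // 4 + 1) * 4      # skip padding after the type tags
    args = []
    for t in tags:
        fmt = ">f" if t == "f" else ">i"
        args.append(struct.unpack(fmt, data[offset:offset + 4])[0])
        offset += 4
    return address, args
```

For example, `encode_osc("/roll", 0.5)` yields a packet that any OSC listener on port 57120 could parse back to `("/roll", [0.5])`.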

10 Feb

SRG Meeting Thursday February 10th

Posted by berit

Present: Dick, Daniel, Taku, Georgios, Jon, Berit

Papers & Presentation
Authorship: Thanks by acknowledgement, not co-authorship
Literature: Let us put all our references, with short notes on the key issues they deal with, on the STEIM blog. NB: Don’t make more than six categories, as that gets messy.
Presentation (if accepted): Demos are in separate rooms and timeslots, posters in the foyer to be visited during breaks. For the oral presentations, should we think of a visual identity?


Different aspects of the festival:
-Instruments from Instruments: we need to further develop the cross-mapping concept and have a better idea and a model of what we imagine this tool to be like
-Launching a tool for resynthesis based on audio content analysis as done by EchoNest, in collaboration with the National Archive
-Symposium. Discussions, talks. Discussion about improvisation (organized by Dick). Large pool of researchers from different fields needed.

Technologies to (not?) investigate
Kinect: combines a camera and depth camera (infra-red rays are sent into the room, their reflections measured), and an accelerometer for orientation (the cameras can adjust themselves with a motor)
->30 fps time resolution may be too slow for dance purposes
->is the Kinect easily implementable in JunXion? -Yes, open source libraries are stable and available
->Relevance of the Kinect for the research group: the technology is useful to have at STEIM, but it is not going to be a focus of research. More important: what does the Kinect, or what do related technologies, allow us to achieve in terms of mapping?
Can we spend time with the Kinect and do something special that other places don’t do?
Should we still invest in camera tracking?
Can we achieve a sound interface like the one that Ostertag describes in one of his papers: a mouldable piece of clay that controls sound? The clay as the real representation of the control done by tracking?
But is a “limitless” interface desirable at all for music?
Free movement without anything attached to the body is most interesting for dancers, not so much for musicians
Musical control is mostly done by movements of hands and fingers, not by bigger scale movement of arms, torso or legs.
Do sensors in themselves still need to be at the center of STEIM’s identity?

Open Frameworks: a new DSP library is going to be included in the next release -> workshop at STEIM later this year
The strength of Open Frameworks is visualization: STEIM shouldn’t interfere too much with workshops for this as given at Mediamatic or V2

Miniaturization: Daniel could set up a presentation by Alderik Pander of ExSilent on miniaturizing wireless audio transmission for low-power devices with on-board DSP (e.g. hearing aids)

Wekinator: machine learning tool based on ChucK and Weka

18 Jan

Meeting notes January 13th

Posted by berit

Some general information on the papers
-sharing on Google Documents
-use Word or LaTeX template later (the LaTeX template has ready-made tags, so it might not be too work-intensive even without prior knowledge of the language)
-do the papers need to be anonymized? -> I did not find any request to do so, so probably not! -B.
-primary reviewers: Taku and Kristina
-Joel and Dick’s opinion would be useful for the “Instruments from Instruments” paper
-Feedback is communicated via comments on Google Docs or by email

Status of the papers
-with all papers, reading consumed a lot of time

Instruments from Instruments
-controller gesture classification needed. Advice: do not focus so much on the what, but the why
-“Gestural Music” by IRCAM and some touchscreen research might be interesting to consider – yet the SRG paper will consider all interfaces, which is an important difference
-Experiment: even though it has not been performed yet, it can be performed before the final submission deadline (April 26th), and should be included in the paper

iPhone paper
-Literature research was problematic, since some of the papers are not easily available

Other NIME related stuff
-Taku is proposing workshop(s)
-it/they should be tied around a “common denominator” -> mapping, instrument, metaphor
-Georgios proposed an analogue modular sound-manipulator building workshop
-a quick brainstorm gave the idea “represent your instrument (existing or idealized) in a physical or motional model” (cf. mapping of the STEIM building with Kristina)

Next meeting: Monday, January 24th
-We will discuss the final format for the papers, amongst other things

Categories: Meeting Notes
12 Dec

9 Dec 2010 // Plans for 2011 and Reevaluating Core Research Paper

Posted by Jon

Attending: Georgios, Taku, Jon

SRG Plan for Next Year

We began by discussing general plans for next year. We’d like to plan a meeting, either next week or next year, involving everyone who is a part of the research group (Pinar, Georgios, Berit, Jon, Taku, Kristina, Joel, etc.) in order to refocus. We are concerned that the meetings have drifted as far as scheduling goes, with people not showing up regularly.

Taku would like to see Georgios and Pinar run the research group next year, organizing the meetings, keeping track of who is coming, setting a schedule for what should be done/shown each week, keeping notes, and directing the discussions with guidance from other STEIM staff.
Also, we will consider making another call for researchers depending on the needs of the research and the dedication of our current group.

Next year we should be ready to rebound from the break and get into tune as quickly as possible. The submission deadline will come up very quickly, and we need to be working hard to get the papers submitted. Once we do that, we can begin discussing the SRG’s involvement at the festival in September, and what future research we will do.

Discussion of our Core Research

Next we began discussing the “Making Instruments from Instruments” paper, questioning what it will be focused upon.
Looking back over the course of our research group, we seem to have narrowed down to a few specific technologies that look interesting for live performance. Some of them lie in the domain of audio analysis: beat following, spectral similarity (for samples).
And some in the domain of gesture analysis: giving semantic meaning to performance data, recognizing gestures and qualities of motion.

At one meeting we discussed the idea of creating a toolkit for building smart, responsive computer performance systems. Featuring:

  • frequency (beat following), rhythm detection
  • spectral similarity (retrieving samples and pieces of samples based on perception)
  • gestural data -> mapping the gestural data of a performance to sound

What the Paper Should Cover

Taku also stressed that this research paper shouldn’t be about the experiment. The experiment’s purpose is to give a factual backbone to the paper. The paper should be mostly focused on the thought process which led us to this experiment, e.g.: from this premise we tried this and this, and got these results. The premise is the meat of the paper. It should sum up the discussions of our research group, how we got to this point, and why we’re interested in it.

The experiment itself is about coming up with a mental model for naming the various behaviors of sensor data as they relate to the physical motions that created them. We are trying to create a language for giving semantic meaning to performance data, thinking about the types of physical qualities we would assign to different gestural data. To this end, it makes sense to record a video of the performance along with the sensor data and audio output.

Using Gestural Data to Give Physical Meaning to Audio Recordings

Jon suggested a possible next research step: that we could try associating gestural data of a performance with the music it produces. Using the DJ metaphor, it’s like taking a record and adding extra grooves that follow the audio but carry information about the physical gestures of the performers that made the sound. This extra information could be used as a way of making sense of the sound. Kind of like the way Echonest analyzes recordings based upon rhythm and pitches, we would be able to analyze the sound recording based on the “physicality” of it. “Physicality” being a catch-all term for the qualities we invent to describe gestural behaviors in our experiment. This would be a completely different way of analyzing music based upon its physical origins, the actual kinetic motion that created the music.

Taku thought this was in-line with our research. He thought this could prove to be a common way of understanding electronic music.
What connects electronic instruments? It’s not notes, it’s not rhythm. We can try to prove that it is the physical motions.

This gives our current experiment more context. As Berit said, in a way, we are also examining if we can look at just gestural data. Or if there needs to be a combination of gestural analysis as well as audio analysis.

06 Dec

2 Dec 2010 // Review of Paper Abstracts

Posted by Jon

2 Dec 2010 SRG Meeting Notes

Attending: Georgios, Daniel, Dick, Luuk, Jon, Taku

Paper Titles

Taku begins with a comment on the titles of our papers, “When you’re looking through NIME papers, the titles are rather boring.”
He would like us to think of less banal paper titles to make our work stand out a bit more from the bevy of more techy NIME papers. The titles we have now are good for subtitles, but they aren’t enough to ignite that spark of interest.

From there we went around the table, taking the papers one at a time.

Penta-Digit Music // Daniel

The first question Taku and Dick have about this paper is: What is the premise? Daniel’s response is that it is a paper dealing with mapping without calling it mapping, by taking a bottom-up, musician-centric approach. The paper describes Daniel’s setup, where one instrument is used as a remapping system for the rest. It is a very Cartesian system, but there is an interlinking system going on: the left hand changes the behavior of the right hand on the fly.

In the abstract, Daniel explains this as a “fresh look”. Taku suggests another two lines explaining what the fresh part is. Daniel explains that he wants to explain things like mapping in musical terms. This paper comes from practice, from the musician’s perspective, which is nice. But this needs to be laid out clearly in the abstract. For example: “This paper presents a collection of instrumental devices” … approaches mapping from a practitioner’s view.

We discussed what the best format for this paper would be. One line of logic that Dick suggests: Daniel has a bunch of devices and they constitute a class. What they do constitutes a certain method for dealing with objects of that class. The method is strong because there is a class of devices. The method is not only instrumental but also teachable.

An important point to take away from this paper is that we can extract a teachable practice from a musical pursuit.

iPhone cracklebox // Jon

Taku and Dick both think the framing here is essential to the success of the paper. What is the thesis?

Jon’s first thoughts on what the thesis is: taking a physical and artistic object and abstracting it to apply to a virtual platform.
For Daniel, through discussions he’s had independently with Jon, the dynamic mapping approach and parabolic edge behavior were most interesting. So the question is how these things can fit within a larger thesis.

For the abstract itself, Taku suggests taking out the first sentence, paring down the section on context, and diving deeper into the development process and details. Both the iPhone and the Cracklebox already have a story behind them. A new story doesn’t need to be written; the context is already there.

For Jon the issue is taking a physical interactive object and adapting it to a platform, which is a framework for objects and has its own context and characteristics. Dick doesn’t like the word virtual, due to its myriad connotations. He suggests instead using the term “on screen”. Taku suggests Jon look into something Joel had written about “physical handles on virtual models”.

Jon sees the element of tangible computing here. Where do you draw the line on a platform like the iPhone between the tangible aspect and the virtual aspect, especially when trying to emulate a tangible object like the Cracklebox? Where are the lines between tangible and virtual drawn?

Dick asks: “What part of the cracklebox is the paper taking into consideration?” Because the cracklebox is many things. It’s an interactive sound object. It’s a piece of art which addresses the connection between the human body and the underlying circuits. The paper must be very specific on how the cracklebox is being taken into context. Jon is looking at it primarily as a musical experience. His analysis takes place inside the interactive experience of the original object: the interactive paradigm of the cracklebox, which is a process of exploration, of navigating, of finding places you like and staying there. An unstable sound source that is chaotic by nature of the environment and the physiology of the person operating it, via circumstantial and environmental forces. The artistic connotations of connecting the human body to the underlying circuits are lost on the iPhone platform.

Taku suggests including these ideas in the keywords: the original definition of what the cracklebox is dealing with, as an art object.
Jon is trying to replicate the whole experience of the cracklebox. He tried a literal translation, but soon realized that the context of the iPhone, both technically and conceptually, doesn’t allow that.

Juggling Balls // Luuk

Luuk wants a good name for the balls.

Dick suggests that what is most relevant here is the development of the balls: the restrictions imposed by Tom and the jugglers, and how it was approached. Dick also suggests replacing the term “algorithmic composition” with “pattern-based composition”.

The paper can lay out the criteria of what is important to the jugglers.
We were presented with a challenge of what the minimum of the interaction should be.

The Wii Years // Georgios

Georgios began by describing his interviews with Alex and Richard Scott. With Alex he analyzed technical and ergonomic issues; pressure was important. Richard Scott was only using the Wiimotes as wireless buttons.

Dick asks the question: Why is he not using acceleration?
Georgios says it’s because he’s fine with the motion detection of the Lightning. Dick thinks that if interviews with Richard Scott are included, they should cover why he’s not using acceleration.

Georgios considered discussing the ergonomics and physical design of the Wii controllers in his paper, e.g. the buttons are quiet, there is free motion (wireless), essentially with 5 degrees of freedom. Daniel suggests he be clear about what those degrees of freedom are.

He would also talk about the problems with Bluetooth in a performance context, the Wii not being useful for pitches, the problems with detecting force peaks, and how to deal with the gravitational offsets of the accelerometers. Georgios found a group of residents (dancers) in 2007 who found solutions for all of these problems. Funnily enough, Alex, Richard, and Frank haven’t found solutions to these problems.

Taku questions the format of the paper and tries to nail down its premise. Do we want to create a how-to of designing a Wii instrument? STEIM has been using the Wiis for three years. On the surface level there are practical issues, and there is also a question of why we have been supporting artists like Alex who use the Wii controllers. What is the cohesiveness here? The cohesive element will be the premise of the paper.

Dick suggests that, as a researcher, Georgios use the facts to support his own conclusions, and to think about framing the paper through those conclusions. Georgios thinks nobody will be using Wiis again in the future. Even Alex conceded that the Wii was just a game controller.

Daniel thinks that it’s easy to say that in retrospect, but for Georgios, he has to focus on the mindset Alex was in when he first discovered the Wii controllers and a whole new world of musical expression opened up to him.

What does the Wii represent? Why did we use them at STEIM? Everyone eventually agrees that, at its core, it’s not about the Wii itself, but about acceleration mapping: about using the Wii as a prototyping tool for acceleration-based instruments. Accelerometers were unknown to the mass populace before the Wii/iPhone; they were expensive, and the Wiis brought acceleration instruments to the forefront.

STEIM used Wii controllers because it’s a quick accelerometer-based system. By analyzing the projects done at STEIM you can analyze the uses of acceleration in instruments. With Alex, while using the Wiis he learned what it was he wanted. Now many performers at STEIM, like Alex or Andreas Otto, are moving towards dedicated wireless accelerometer systems.

Perceptual Reverb Instrument // Berit

Dick really likes that this paper is focused on the listening experience. He thinks all of our papers should try to focus on experience rather than start with the technical stuff, e.g. listening experience, interaction experience, performance experience. He also questions the use of the word “cognitive” in the abstract. Otherwise, it looks good.

Making Instruments from Instruments // Everyone, mostly Berit & Jon

We’ll have a meeting next week specifically to discuss this paper.

Taku thinks the title is a little long. Dick thinks it’s missing a good name for what we will call this “performance recording” … some names thrown around were “signature” and “trace”. Taku brings up that mapping sensors has been covered extensively, and automation of parameters via recorded data streams is available in most commercial DAWs. What differentiates our research?

Jon suggests that we are mapping qualities and sections of an annotated and analyzed performance to sound parameters, but questions whether we are directly addressing that aspect of the research in this preliminary experiment.

Daniel brings up that in the abstract we use words like “instruments” and “performances” and “performance gestures”, but we need to be clear about what these words mean in our given context. What Taku thinks is especially interesting is the multi-dimensional nature of the data. The instruments we are looking at are multi-modal interfaces. The way the multiple data streams interact creates a kind of signature.

In our paper we need to be clear about what the goal is and what we are trying to achieve. Is it just research? Is it to try to extract what a performer does and use it in an interesting way? Daniel posits that it could be a learning tool for artists to understand what they are doing, to understand their instruments, and to compare themselves to other artists. It’s a way of dealing with multi-dimensional gestures, going back to them and naming them. For Daniel it’s a sort of notational system as well, a kind of figured bass for electroacoustic music.

With this paper and experiment we still need to come to an agreement on the hypothesis. In an earlier discussion, Berit suggested that we are trying to see if the data is musically meaningful on its own. As Jon sees it, the supposition is: there is a way to extract a signature of a performance from data, that there is a way of attributing semantic meaning to the performance in a way that defines it uniquely and sets it up for reuse and comparison.

Taku still isn’t convinced. He asks, why do we want it? What do we want to offer from it?
Some answers:
Extracting “words” from a performance. Giving meaning to performances and gestures in order to understand them.
Building an instrument from a performance signature.
Creating a pool of gestures and sensor utterances that come from artists, creating an “impulse response” for a performer, which can then be combined in performance.
These all pertain to the question of whether the gestural data is musically meaningful, and if so, under what conditions?

To get it into the conference we need a strong point. Taku suggests we distance ourselves from the current experiment a little bit, by defining the context, method, and projection of the experiment.

Additional Thoughts

At this point the meeting ended, but Pinar and I continued discussing the purpose of this preliminary research. I add it here because I think it’s relevant to next week’s meeting, which will specifically address this research paper.

Pinar brought up points from the discussion we had at Kristina’s a few meetings ago, where we came to the conclusion that the key point of this experiment is in devising names, categories, and qualities by which to annotate performance data.

Put another way, when we see something interesting, what do we call it … based on the physical actions which caused the data changes, the way the data looks through visualizations, and the sound which results?

In pattern recognition parlance we are at the feature generation stage, which is a process of creating descriptors, a set of features by which to classify gestural utterances. These features will form the basis of a structure for analyzing a performance based upon gestural data.
Until we have these features and give them names we are forced to speak in a language we don’t know yet.
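To make the feature-generation stage a little more concrete, a descriptor set for a single stream of normalized sensor values (e.g. the /roll floats from the Wiimote) might look something like the sketch below. The descriptor names here ("spread", "energy", "jitter") are stand-ins invented for illustration, not the vocabulary we are trying to arrive at.

```python
import statistics

def gesture_descriptors(samples):
    """Compute a few candidate descriptors for a 1-D stream of sensor
    values in 0.0-1.0. These are illustrative features only."""
    # Frame-to-frame differences stand in for gestural velocity.
    velocities = [b - a for a, b in zip(samples, samples[1:])]
    return {
        "mean": statistics.mean(samples),        # where the gesture sits overall
        "spread": statistics.pstdev(samples),    # how much of the range is used
        "energy": statistics.mean(abs(v) for v in velocities),  # average speed
        "jitter": sum(1 for a, b in zip(velocities, velocities[1:])
                      if (a > 0) != (b > 0)),    # number of direction reversals
    }
```

A sweep up and back down, `gesture_descriptors([0.0, 0.5, 1.0, 0.5, 0.0])`, would score one direction reversal, whereas a tremolo-like wiggle would score many; that kind of contrast is what named features would let us talk about.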

02 Dec

The Wii Remote as a Musical Interface – Things We Learned

Posted by Georgios

This paper examines cases and principles of using the Wii Remote as a musical instrument interface, as used by performers at STEIM and elsewhere. It draws from interviews with and practices of Wii instrumentalists and a review of related work to evaluate the device’s capabilities and restrictions. The aim of this paper is to help acceleration-based instrument developers gain a deeper understanding of the device, by referencing technical, aesthetic, and mapping issues and analyzing how they were resolved.

Keywords: Wii Remote, Gestural musical instruments, STEIM

Categories: Abstract, Paper Submissions
02 Dec

Making new mobile music paradigms from old ones: designing the iPhone Cracklebox

Posted by Jon

In this paper the process of using the iPhone as a platform for building playful interactive music objects is explored. The questions of design and interaction which arise are examined through the lens of reimagining a classic icon of mobile touch-based music, the Cracklebox, on the iPhone. The design process is discussed in detail, in particular the challenges and shortcomings of using virtual representations of physical controls. Interface-to-sound-source mapping strategies are discussed which utilize the discrepancies between virtual and physical to create expressive control over the underlying sound source. To give the same sense of exploration and improvisation embodied by the Cracklebox, a dynamic mapping strategy is employed. The paper encourages an approach to developing for mobile touch-screen devices that emphasizes physical and musical feedback over visual feedback.

Keywords: touchscreen, interface, design, mapping, metaphor, mobile music, iPhone, cracklebox

Categories: Abstract, NIME, Paper Submissions
29 Nov

A Reverberation Instrument Based on Cognitive Mapping

Posted by berit

The present article describes a reverberation instrument which is based on cognitive categorization of reverberating spaces. A multidimensional scaling experiment was conducted on impulse responses in order to determine how humans acoustically perceive spatiality. This research seems to indicate that the perceptual dimensions are related to early energy decay and timbral qualities. These results are applied to a reverberation instrument based on delay lines. It can be contended that such an instrument can be controlled more intuitively than other delay-line reverberation tools, which often provide a confusing range of parameters that have a physical rather than a cognitive meaning.

Keywords: Reverberation; Cognitive Musicology; Multidimensional Scaling; Realtime Control