Matt Brown from Berg has made this rather interesting little half-day experiment, Music for Shuffle. As with most things in that building, one person suggests an idea somewhat absent-mindedly, and then someone else just goes along and makes the damn thing. What else do you expect from the most interesting group of people in London? But instead of going on a downward spiral of Berg/RIG praise, I thought I'd put in my two cents.
But during this little experiment, I didn’t want to spend my time just writing rules to generate music – I wanted to compose it. To author it. Is authoring a system the same as writing the notes? Need to think harder about this.
The distinction here is between modularity and generativity (the dictionary insists this isn't a real word; I disagree). At surface level it's quite easy to confuse the two, if it's done well enough. Which, in this instance, it most certainly is. The only problem is the iTunes interface: you know it's just a series of short tracks set to shuffle. Unless it's possible to hack iTunes to generate music on the fly (which I doubt), that's not going to change any time soon.
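To make that terminology concrete, here's a toy sketch of the two approaches (the clip names and scale are hypothetical, nothing to do with Matt's actual files): modular playback shuffles pre-authored clips, while a generative rule writes the notes itself.

```python
import random

# Modular: the composer authors short clips; playback just reorders them.
# Any ordering holds together because each clip was written to follow any other.
clips = ["intro.mp3", "motif_a.mp3", "motif_b.mp3", "outro.mp3"]  # hypothetical

def modular_playlist(clips, length=8, seed=None):
    """Shuffle-style playback: draw authored clips at random."""
    rng = random.Random(seed)
    return [rng.choice(clips) for _ in range(length)]

# Generative: no fixed clips; the notes themselves come from a rule.
# Here, a random walk over a pentatonic scale (a classic toy rule).
PENTATONIC = [60, 62, 64, 67, 69]  # MIDI notes, C major pentatonic

def generative_melody(length=8, seed=None):
    """Write notes directly: step up, down, or stay, within the scale."""
    rng = random.Random(seed)
    idx = rng.randrange(len(PENTATONIC))
    notes = []
    for _ in range(length):
        idx = max(0, min(len(PENTATONIC) - 1, idx + rng.choice([-1, 0, 1])))
        notes.append(PENTATONIC[idx])
    return notes

print(modular_playlist(clips, seed=1))
print(generative_melody(seed=1))
```

The first function never produces a note that wasn't authored; the second never plays anything that was. Done well, a listener can't easily tell which one they're hearing.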
Moving away from the iTunes interface, or the idea of a static song file as a musical object, consider Kids on DSP, one of the more interesting RJDJ scenes, bundled up as a standalone app. It pulls in your audio surroundings in different ways to create original music, responding to your immediate environment. There's a lot of incredibly clever coding inside this thing: creating music based on the level of ambient noise, using temporal shifts and reverb to play with the sound, and sampling the environment in such a way as to create music.
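As I understand it, RJDJ scenes are built in Pure Data, but the ambient-noise idea is easy to sketch in plain Python: measure the room's level and map it onto a musical parameter. The mapping curve and thresholds below are my own guesses, not anything from the actual scene.

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def reverb_wet_from_level(level, floor=0.01, ceiling=0.3):
    """Map ambient level to a 0..1 reverb mix: quiet rooms get a lush, wet
    reverb, loud rooms stay dry so the music isn't buried.
    The floor/ceiling thresholds here are illustrative guesses."""
    level = max(floor, min(ceiling, level))
    norm = (level - floor) / (ceiling - floor)  # 0 = quiet, 1 = loud
    return 1.0 - norm

# Synthetic stand-ins for microphone frames.
quiet = [0.005 * math.sin(i / 10) for i in range(512)]
loud = [0.4 * math.sin(i / 10) for i in range(512)]
print(reverb_wet_from_level(rms(quiet)))  # 1.0: quiet room, fully wet
print(reverb_wet_from_level(rms(loud)))   # small value: loud room, mostly dry
```

A real scene would run this per audio callback and smooth the parameter over time; the point is just that "music based on the level of ambient noise" can bottom out in a one-line mapping.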
However, these can feel like the audio equivalent of a Photoshop filter (a damn good one at that, mind you). It's been a while since I've listened to any of them because the stupid microphone on my headphones has been broken for some time. But the first week was a revelation, trying out different sonic environments to see what the responding music would be. Some environments worked better than others (as is to be expected), and different augmentations were clearly suited to different environments.
Lots of wind wasn't a good thing: it was all the microphone could pick up, and slowing down and reverberating wind doesn't make for an interesting sound. In quieter areas with no background sound, they pick up foreground sounds like a charm. At times I even caught myself talking aloud, user-generated content for an audience of one.
One thing that kept frustrating me was the lack of semantic distinction. I wanted some sounds to be sampled and others to be ignored. But for that to work, there'd have to be a lot of spooky stuff under the hood. Something I don't think I'm comfortable with just yet.
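For what it's worth, a very crude version of that gate can be faked with a single signal feature, no spooky stuff required. The heuristic below is my own toy example, not anything RJDJ ships: pitched, tonal sounds cross zero far less often than broadband noise like wind, so a zero-crossing-rate threshold can decide which frames get sampled.

```python
import math
import random

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def should_sample(frame, max_zcr=0.2):
    """Gate: keep tonal, pitched frames; ignore broadband noise like wind."""
    return zero_crossing_rate(frame) < max_zcr

# A 220 Hz tone at 44.1 kHz versus white noise.
tone = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(1024)]
rng = random.Random(0)
noise = [rng.uniform(-1, 1) for _ in range(1024)]
print(should_sample(tone))   # True: pitched sound gets sampled
print(should_sample(noise))  # False: noise gets ignored
```

It would happily sample a car horn and ignore a whisper, which is exactly the problem: "semantic" distinction needs far more than one feature.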
Moving back towards the land of the plausible, the idea of authoring modular music is an interesting one. It provides a grounding, an artistic input that isn't possible with purely generative works. That's not to say that generativity isn't a creative process - far from it - but the rules need to be kept fairly tight to create something cohesive. Which Music for Shuffle is, despite the really simple music Matt has created.
One thing I had trouble discerning while writing my thesis was the difference between a design that uses the trick of modularity to appear generative and truly generative work. I never came across something that was truly generative, or if I did it bored me. All of the good stuff seemed to live at the intersection of the two: using the authoring process to create a basic kit of ideas, and having that mix with some network input to create constantly new work. I'm interested to see how far he will take this thing, especially if he confines it within the boundaries of the mp3.
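That intersection is easy to caricature in code. A sketch, with entirely hypothetical phrase names: the composer authors a small kit, outside input chooses the pool, and chance does the ordering.

```python
import random

# An authored "kit": short phrases the composer actually wrote
# (the names here are entirely made up).
KIT = {
    "calm": ["drone_a", "soft_bells", "slow_motif"],
    "busy": ["pulse_a", "stutter_b", "fast_motif"],
}

def arrange(context, length=6, seed=None):
    """Authored material meets outside input: the context reading (a fake
    'busyness' value from the environment, network, weather, whatever)
    picks which authored pool to draw from, and chance orders it."""
    rng = random.Random(seed)
    pool = KIT["busy"] if context > 0.5 else KIT["calm"]
    return [rng.choice(pool) for _ in range(length)]

print(arrange(context=0.2, seed=7))  # calm phrases, freshly ordered
print(arrange(context=0.9, seed=7))  # busy phrases, same chance process
```

Every phrase a listener hears was authored, so the result stays cohesive; but the arrangement is never the same twice, and never fully under the composer's control.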
I forgot to discuss the use of album cover art as a way of not only visualising the music (I think it shows the chord structures; someone correct me?) but also helping to punctuate the different tracks. It makes the whole project work much better than if it were just the music.
Update: A few days after posting, Rob over at RJDJ left a rather illuminating comment regarding the use of semantic variables in the Inception app, which he worked on. It should be noted that this app is sitting on my phone, untouched until I borrow some headphones from someone. I'd really like to chime in here regarding the app, about which I've heard rather interesting things, but natch. Over to him:
Great post :) I'm Rob from RjDj. I look after a lot of the music production here. Music for Shuffle is a great project. The point that the interesting stuff seems to happen at the intersection of generative and modularity is very true I think. During creation of the Kids on DSP app and scenes that was definitely true.

Greater semantic distinction would have been really good for Kids on DSP, however at that time the devices were simply not powerful enough to run the audio analysis needed, plus also run all the synthesis and DSP. These days however iOS devices are much more powerful and we have explored quite a few different, more detailed, analysis methods. Two of them are in the Inception app we recently created with Hans Zimmer and Chris Nolan. The Travelling Dream does a pitch analysis of the mic and derives melodic content from it, which then plays within the composition in realtime. The Airport Dream listens out for the chimes that happen before most airport announcements, and uses that as a trigger to make certain audio events happen. Inception the App also expands the 'reactiveness' of the music beyond the instant, to the general conditions or situation of the user. For instance if the weather is sunny at their location they will enter the Sunshine Dream, or if it is Full Moon they will enter that dream.

The point you make about the album art in this project is also very true. I think in the early days of the work we did at RjDj we tended to focus purely on the audio, but having visual feedback to reinforce the musical changes is something we have grown to regard as critical.

Rob, RjDj.me