ICMC 2011 Review: Audiovisual works, Thursday 4th August

By Andrew Connor

It is the fourth day of the main conference, and my last, as I have a prior commitment requiring me to leave early tomorrow. But it’s a great day for audiovisual work, with a fine example on show in the listening room, and visual elements cropping up in a swath of the concert pieces.

Diego Garro’s Patah, in Listening Room 4b, carries me on from the high of yesterday’s Sinus Aestum. This is another work I’ve encountered before, but the opportunity to experience it with a good screen and set of speakers is not to be missed.

Patah is an Indonesian word for fractures, and Garro notes that in this work he is exploring how the sonic material interacts with the fractures in the visual material. The sound is introduced over a title sequence, but immediately sets a rich, textural scene, laden with dissonances. The visuals also add to this impression of rich texture, with intricate interlacing lines creating shifting entities of colour on screen, interacting with the sonic movements to amplify the eerie and slightly uncomfortable world being created. Every so often a discernible voice will break free of some background whispering, adding to the unease. The visual equivalent, a flash of the underlying source video being manipulated by the animation process, will also break through on occasion.

While this work has much to impress the audience, the most attractive part of it to me is the space it gives both sound and video to develop and feature their own pathways, as well as interact and reinforce each other. As a single example, at one point a central, slightly oval shape begins to develop, in tandem with an expanding textural sonic shape. A single lens flare suddenly flashes across the screen and is gone – there is no direct sonic match, but the flare adds that extra dimension to the animation. Highly recommended.

The lunchtime Concert 9 offered another three pieces with a visual component. In Alo Allik’s mikro:strukt, the sounds produced by Satoshi Shiraishi’s bespoke instrument, the e-clambone, were augmented by processing based on signals from integral haptic sensors. Allik took the incoming audio and used it as a further source to trigger changes in the accompanying visuals. The initial impression was of a screen full of regular cells, mostly green, which started to pulse in sympathy with the audio. As the sound developed in depth, texture and complexity, so did the structures on screen, with more colour and variation, moving from cells to a dot matrix, with the regular spacing deforming as the sound gained granularity and texture.

The direct correlation between the sound and visuals tied this piece together well, and I was impressed by the sense of complexity afforded by quite simple structural elements on screen. I was a bit less satisfied with the sonic element – in some ways, it would have been good to see the instrument and its operation on its own at some point, as the undoubted skill it took to play was lost in the darkness surrounding the projected images.

Shawn Greenlee’s Endolith definitely took the idea of combining audio and visuals and played with it convincingly. His starting point was a paper multimedia score, scanned and interpreted on screen in expanded pixels. The images were also used to feed a sonic synthesis process. As with mikro:strukt, this immediate correlation between sound and image created a strong bond between the two, one that could be manipulated further by the performer using trackpads and other sensors.

The close match between sound and image worked well for this piece, and the performance element was visible as Greenlee was illuminated by the lower part of the projected image. The pixellated images worked well with the sonic interpretations, and I particularly liked the moments where the scanning lines produced images reminiscent of the stacked paper edges of books lying on their sides. The piece’s duration was also nicely judged – enough to illustrate the concept and develop it, but not so long that it became overly repetitive.

The final audiovisual work in this concert was Jordan Munson’s Those That I Fight I Do Not Hate, a combination of live bodhran, processed sound and accompanying video. The instrument and its player, Scott Deal, were highlighted on stage, allowing his movements and concentration to be seen clearly while the images played out across the screen behind him. The source sound from the instrument was clear within the processed sound, which added some pitched material and reverb. The images were from battlefields, showing soldiers marching to the front, the squalor of the trenches, and the aftermath: broken men and corpses.

The use of the bodhran was very effective, and the light but appropriate processing complemented the sound well. However, the visuals just didn’t quite work for me – while I appreciate the inspiration the composer quotes in his notes and I could see the connections he was making, there was little true synthesis between sound and image. I ended up only glancing at the screen every so often, as the bodhran caught much more of my attention.

The evening Concert 10 continued to feature audiovisual work, mainly in combination with live instruments. The first of these, Ai Kamachi’s 21st Red Line, made use of a laser beam attached to the soundboard of a koto; when the beam was broken, a transformative process was applied to the instrument’s sound. I have a particular fondness for the sound of the koto, so this was always going to appeal sonically. The visual component was a developing field of intersecting lines, flashing red whenever the laser beam was disturbed, and cycling through a series of geometric transformations.

As with the earlier bodhran piece, I found the visuals were possibly an unnecessary addition. In this case, the synchronisation was very close, and it had the unfortunate effect of bringing media player visualisation software to mind. Again, I found myself concentrating much more on the koto and the skill shown in playing it, with only the odd glance up at the screen. I came away feeling the visuals were a bit of an afterthought rather than a key element from the inception of the composition, and they really didn’t add anything.

The start of Se-Lien Chuang and Andreas Weixler’s Momentum Huddersfield had me worried that the same problem would surface again. A collection of excellent musicians was on stage, their playing married to granular synthesis, and it all created a rich sonic texture, very well realised, that made good use of the acoustics of the venue. And against this, a screen where simple pixel interactions led to moiré line interactions, and on to increasing intensity and complexity. However, here the visual realisations felt more in sympathy with the live music, and did appear to be manipulated and crafted in situ as the music progressed, particularly in a quieter, breathier passage which was perfectly captured visually with a blue fractal image. The end came as a slightly abrupt but very effective full stop, and left me wanting a bit more, which is always a good sign.

From the concert notes, I really wasn’t sure if Oli’s Dream by Jaroslaw Kapuściński would achieve its aim of synaesthesia. In the execution, I don’t think it quite managed it, but it was, for me, the highlight of the concert anyway. This collaboration with the poetry of Camille Norton made use of keyboard sounds, both piano and typing, allied to visual manipulation of text on screen. Judicious use of recorded sounds, such as the sound of drips or a baby crying, added to the interplay between audio and visual. The overall effect was impressive, and made excellent use of the juxtaposition of the written word and its sonic – or occasionally silent – accompaniment.

The final audiovisual work was Mike Solomon’s Norman (age 1), which offered up a view of a multimedia score being read and performed live by Heather Roche on the clarinet. This was a nice conceit, particularly as each movement shown grew progressively more intricate and often slightly more confusing. The second of the three movements had a slight problem with dynamics, with the clarinet directed to play so quietly that the sound did not reach the back of the audience. I appreciated the idea behind the work and thought it came across well, although I think it would be hard to create another similar piece, as the surprise and affectionate use of the score would be difficult to replicate.

Unfortunately, I will miss the final day, which looks to have an equally enticing line-up of audiovisual work. Despite the occasional criticism in my reviews, I have really enjoyed the audiovisual work shown at ICMC 2011, and believe it represents a vital, flourishing avenue of creativity. This was my first ICMC, but from conversations with veteran conference attendees, I gather that there has been a great increase in audiovisual work shown at the conference over the last ten years. Long may this increase continue!

-----

Andrew Connor is currently undertaking a PhD in Creative Music Practice at the University of Edinburgh, Scotland. His research and practice examine the intersection of electroacoustic music and abstract animation.

andrew.connor@ed.ac.uk
