A Summary and Transcript of the ICMC 2010 UnConference UnSession on Computer Music Performance

by Identified Participants and Authors: Jeremy C. Baguyos (JB), David B. Wetzel (DW), McGregor Boyle (MB), Bonnie Lander (BL), Scott McLaughlin (SM), Scott Hewitt (SH), Krista Martynes (KM), Dale Parson (DP), Andrew Cole (AC)

Introduction and Rationale

The time has come to reflect upon and assess the role and identity of the specialist computer music performer. In pursuit of creating a forum for those who are interested in the art of computer music performance, an UnConference UnSession on Computer Music Performance was hosted on June 5, 2010 at the International Computer Music Conference in New York to initiate a dialogue regarding the past practices, current state, challenges, and future opportunities of the sub-field of computer music performance. The UnSession on Computer Music Performance was proposed and integrated into the ICMC 2010 Unconference by faculty and alumni of the Peabody Institute of The Johns Hopkins University. To date, the Peabody program in computer music is the only program in the United States (and possibly the world) that grants degrees, both undergraduate and graduate, in the specific area of computer music performance. Reflecting the inherently eclectic make-up of computer music, the unsession attracted a diverse group of performers, composers, researchers, computer scientists, sound engineers, and technicians. The unsession was particularly interesting because a collective of performers drove the content of the discussion within a larger ICMC conference that is normally driven by a collective of researchers and composers.

The Unconference Format

On her web site, unconference.net (http://www.unconference.net), professional unconference facilitator Kaliya Hamlin defines an unconference as “a facilitated participant-driven face-to-face conference around a theme or purpose.” The unconference format has several advantages over the traditional formats of poster, paper presentation, and panel discussion. Its egalitarian, fluid, user-generated approach allows a large swath of participants, from established veterans to promising emerging talents, to spontaneously and collectively introduce and develop ideas, which is not always possible within a traditional conference. Most importantly, this format allows for more informal, direct, and honest dialogue. The format is flexible, open, and interactive, and allows for points of relevant departure as well as tangential discussion. It allows for the crowdsourcing of the collective intellectual capital of the willing attendees and yields ideas that might otherwise be withheld if the focus were only on the prepared paper and structured presentation of a primary investigator. As in the tech sector, which spawned the idea, the unconference and unsession formats can be just as enlightening as the traditional paper/panel/poster formats when applied to academic computer music. Jennifer Howard's article “The ‘Unconference’: Technology Loosens Up the Academic Meeting,” published in the online version of the Chronicle of Higher Education on May 23, 2010 (http://chronicle.com/article/The-Unconference-Technol/65651/), outlines the unconference format and extols its advantages.

Summary

What follows in the main text of this article is an edited transcript of a recording of the active participants in the discussion of issues in computer music performance. Some of the dialogue is not attributed to any specific participant because some speakers could not be identified; the majority of the dialogue, however, was culled from the identified participants. The identified participants were the scheduled presenters and moderators of the UnConference UnSession on Computer Music Performance as organized by Freida Abtan, the ICMC 2010 Unconference chair, and her staff. Although they are not identified by name, some of the other attendees participated actively in the discussion, and many more were in attendance listening intently.

Jeremy Baguyos and David Wetzel delivered introductory remarks at the general introductory presentation session to the large group gathered for the ICMC 2010 Unconference, before the sub-group interested in computer music performance moved to the multimedia lab. Although those introductory remarks are not included in the transcript, they outline the content of the article “An UnConference UnSession On Computer Music Performance” found on p. 397 of the International Computer Music Conference 2010 Proceedings. For purposes of this unsession, it was assumed during the introductory remarks that computer music performance is a separate and distinct sub-discipline within the broader academic area of computer music, which normally places its focus on composition and research. Furthermore, it was assumed that the definition of a computer music performer was inclusive, encompassing performers of all instruments, including alternative, non-traditional controllers.

While participants were taking their seats, the unsession began with a conversation between members of the Huddersfield University Experimental Laptop Orchestra (HELO) and past and current members of Peabody Computer Music Ensembles about HELO's innovative, efficient, and inclusive approaches to the realization of works for laptop ensemble. HELO uses generalized, high-level descriptive instructions to coordinate composition and performance. This segued into a longer discussion about the sustainability of repertoire, since generalized, high-level descriptive instructions that are independent of any specific implementation or platform are a tool in the preservation of interactive computer music works involving live performers. Many successful, battle-tested approaches to sustainability were recounted, along with cross-disciplinary ideas from software engineering. Theoretical and speculative approaches were also introduced, examined, and related to current practices in sustainability. Sustainability itself was questioned: is it worth pursuing in the first place? In pursuit of sustainability and the more general concerns of concert production and the realization of electronics, the roles of performer and composer were compared, and there was general consensus that the role of a technical mediator between composer and performer needs to be created, encouraged, and valued in order to support the creative process through the stages of conceptualization, composition, technical preparation, rehearsal, performance, and preservation. Objectives were introduced to help achieve this aim, along with strategies for facilitating further communication between composers and performers. Two of these objectives were a) better documentation of the technology used in new works by composers and b) a stronger commitment by performers to understand the technology required for a given work. Also related to sustainability, notational systems for classical instruments as well as new notational systems for computer music instruments were discussed, and at the end of the session, members of HELO demonstrated their notation approaches. Throughout the discussion, many useful analogies were offered by participants to clarify assumptions about computer music. For example, many agreed that the person who creates a computer music performance system (hardware and/or software) is the 21st-century equivalent of a 19th-century instrument designer.

Transcript

SH: While I was in the other room, I was thinking that your technical topic of electronic music/electroacoustic music and instrumental computer music performance is quite interesting and very relevant to the work that I’m doing.  We take laptops on stage in a very “everybody has to take responsibility for themselves” approach.

MB: So when you say everybody is responsible for himself or herself that means they’re responsible for the software?  They’re responsible for the programming?

SH: Yes.  We provide nothing at all other than borrowed guitar amps from the popular music course.  That’s all the assistance that we offer.  Everything else is their responsibility.

MB: Is there a composer?  Is there a score? How does that work? Is it all improvisation?

SH: We do have compositions. We discourage composers from writing a piece of software to give out to ensemble members because we don’t have any rules about who can participate in the ensemble. Composers would find themselves in a difficult situation if faced with eight different computing platforms. They would have to write for all the different platforms, which include Mac, PC, and Linux machines, as well as ten years of computing history behind those.

MB: So how do the composers work with that?

SH: We have text scores and graphical scores. Many composers use a very high-level language. For example, the score could read, “I want a filter sweep occurring at x point in time.” According to the instructions, people create a filter sweep from given ranges at the designated point in time.  Rather than telling ensemble members “Here’s a Max patch; it does a filter sweep,” ensemble members have to implement the filter sweep themselves on their platform as per the composer’s high-level instructions and execute the filter sweep at the designated point in time.

MB: That’s interesting. We’ve actually been thinking about that approach for years. David, who does a lot of his own programming, has been successful at reviving some pieces that had been dead for years because they were written with a very specific technology that no longer exists. But with a more generalized approach like the one you are talking about, you don’t run into this problem: you can create a piece that not only travels in space, but can also travel in time. It can last.

JB: Now that more people are here, can you review what you have said so far, and tell us more about how you run your laptop orchestra and how the creative ideas are implemented?

DW: Yes, I’d love to know. I’ve got a lot of students who want to start one.

SH: We allow a very broad spread of equipment, so you can bring any laptop you want. This means if a composer says “I’m going to write a piece of software for the laptop orchestra to run,” I reply, “Well that’s great. But we run Windows, Linux, and Mac, and we have laptops that are ten years old, so you’ll need to write that for Windows 95 as well, please.” Usually they can’t, so this is where they have to move into our territory. This is where we get instructions like, “I want a filter sweep to occur two minutes in,” or “I want a comb filter at this point.” It’s that kind of higher-level descriptive language that we want rather than a composer saying, “Here’s a program, run the program.”
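[Editor’s note: a minimal sketch of what realizing such a high-level instruction might look like. The Python below is hypothetical, not any HELO member’s actual implementation; the instruction text, frequency range, and duration are invented for illustration. Each player would implement the same instruction in whatever environment their own machine supports.]

```python
# Hypothetical realization of a high-level score instruction such as
# "a lowpass filter sweep from 200 Hz to 2 kHz, lasting ten seconds."
# All values are illustrative, not taken from any actual HELO score.
import numpy as np
from scipy.io import wavfile

SR = 44100
DUR = 10.0               # sweep length in seconds (assumed)
F0, F1 = 200.0, 2000.0   # cutoff range named in the instruction (assumed)

noise = np.random.uniform(-1.0, 1.0, int(SR * DUR))
out = np.empty_like(noise)

# One-pole lowpass whose cutoff glides exponentially from F0 to F1.
y = 0.0
for n in range(len(noise)):
    fc = F0 * (F1 / F0) ** (n / len(noise))          # current cutoff
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / SR)     # smoothing coefficient
    y += alpha * (noise[n] - y)
    out[n] = y

# In performance, this buffer would be triggered at the point in time the
# score designates; here it is simply rendered to disk.
wavfile.write("filter_sweep.wav", SR, (out / np.abs(out).max()).astype(np.float32))
```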

DW: That’s where your work merges with what I’m doing. When you have to adapt something that was written ten, fifteen, twenty, or thirty years ago to current technology, higher-level descriptive languages are the only things that make sense. When I first became really serious about doing this kind of work, it turned into my dissertation. I analyzed four works for clarinet and interactive electronics, looking just at the electronic systems. What I decided in that whole process was that more important than simply porting the old system into a new one was doing the full analysis: really understanding what the original system was about, what it was supposed to do, identifying its specific functions, and identifying the musical aims of using those tools. For instance, one of the pieces I analyzed was a piece by Jonathan Kramer. Written in 1974, it’s a piece for clarinet, tape, and tape delay system. Its live processing consists of a long delay. If you’re not familiar with the old-fashioned tape delay methods, you start with one open-reel tape deck recording. Then the tape travels across the physical performance space to another tape deck that plays it back. The amount of space between the two machines determines the delay time. He wanted a long delay. It turns out he wanted a very precise long delay. In a 2/4 time signature with the half note at 100, he wanted thirty-four measures of delay. The first note you play has to come back exactly thirty-four measures later and synchronize with your next eighth note. It had to be absolutely precise. The problem with tape delay is that you can’t be that precise. The machines misalign themselves as soon as you start them, and controlling the gain is ridiculously difficult.
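[Editor’s note: the precision Kramer asks for can be computed directly from the score’s markings, assuming one half note per 2/4 measure:]

```latex
t_{\text{measure}} = \frac{60\,\mathrm{s}}{100} = 0.6\,\mathrm{s},
\qquad
t_{\text{delay}} = 34 \times 0.6\,\mathrm{s} = 20.4\,\mathrm{s}
```

[So the first note must return exactly 20.4 seconds after it is played, a tolerance a mechanical tape transport cannot hold.]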

MB: However, you did a performance of the Kramer piece at Peabody.

DW: I did perform it with “period” instruments several times. The first time I did that piece I was an undergraduate; I did an honors recital in electronic music. That piece is really what got me interested in performing with live electronics. I did perform it a couple of times in Baltimore at Peabody.  When I was working on a DMA in clarinet performance in Arizona, I had a teacher and adviser who allowed me to write a paper on clarinet and interactive electronic music.  I went back to the Kramer piece one more time, only this time after twelve years, it was now a 30-year-old piece. This time I actually contacted the composer, and we had a lot of conversations. One thing I wanted to know about was the pre-recorded tape. How do you reconstruct that?  It consisted of a lot of loops and sounds from the clarinet score. It was a bunch of clarinet sounds looped and processed, but it was done in 1974 and it’s the clarinet playing of Phillip Rehfeldt. Even though I’m a big admirer, it’s still his playing and not mine. Furthermore, it’s an old analog tape, and it sounds very different from my digital delay system.  So I wanted to know how to reconstruct the pre-recorded tape part, as well.  The composer was very gracious and imparted all kinds of secrets about the piece. He was very supportive about the idea of a digital delay instead of a tape delay.  Because, really, what he wanted was precision in the delay.  He did not want the sound of analog decay, necessarily. So the Kramer work was a case where the technology at the time of composition was not really adequate for the musical goals.  It turns out that current technology is much more appropriate for achieving his vision. I wrote the chapter on his piece, and I sent it to him.  He gave me comments, he approved it, and it was all done. Six days later he died. This was one of those cases where, if the composer has not documented everything, and you don’t know what it is that is supposed to go on in the composition, it’s going to be very difficult to reconstruct it later.  My message to composers is that you have to make it very, very clear what the technology is supposed to be doing, what you are after musically, and what those high-level intentions are. What are the signal processing routines?  I don’t want just the code; I want to know why. I think that’s much more important.  I looked at several other pieces of varying levels of complexity.  Thea Musgrave’s Narcissus was another composition at the core of this research.  It’s a piece that if you go to a flute convention, somebody’s going to be trying to play it.  It’s originally for flute.  There’s a clarinet version; I did an analysis of that.  I had to get a hold of the composer’s original machine because there were some knob positions in the score that were undocumented, and I wanted to know precisely what they were and how it worked.  So I tracked down the original machine, did an analysis of it, and came up with my own algorithms and published them.  Now when a flautist or clarinetist wants to play that piece, they google it and find my stuff. I ended up, over the last few years, consulting with dozens of performers around the world who are trying to play Narcissus. The technology is not a difficult hurdle to get over. It’s just that they are primarily performers with limited training in electronics, and they want something quick and easy. 
I also looked at Cort Lippe’s ISPW pieces, crawled through all of his ISPW patches, took them apart, and documented every signal processing routine, every variable, and every connection between devices.  I also looked at a piece by Bruce Pennycook, which is also fairly complex.

JB: You guys are talking about sustainability. It seems performers in general are interested in keeping their repertoire sustainable. Do you think that’s the key to sustainability: keeping everything very high-level, above any specific kind of notation?

DW: I think notation is very helpful. You need block diagrams sometimes. Just plain old text is good.  The English language is flexible, and it’s good for describing these things. Sometimes you need pseudo-code.  Sometimes you need a filter equation.  It depends on how exact things need to be. And that’s really very dependent on the piece and the composer. Again, what were composers after musically?

SH: I think that musical notation is incredibly robust and efficient in that it helps in playing material from hundreds of years ago.  It works perfectly fine.  In the computer music sphere, I think we have yet to really establish a notational repertoire that is that robust. Even with something that is heavily scored, you still have to sometimes go back and ask questions because composers, for example, will make references to dial positions on machines that don’t exist anymore.

DW: This whole last week, I’ve been a featured performer here, playing five pieces. For every single one of them, there have been a lot of questions like, “What exactly did you mean?”

JB: At times, I feel that some composers want performers to answer that question themselves—Greg, were you going to add something?

MB: David was talking about Narcissus, and it just so happened that I was involved with the premiere of that piece. It was written for a very specific delay unit, the Vesta Koza.

DW: Vesta Koza DIG-411.

MB: We looked everywhere to find this unit.  As far as we know, Thea Musgrave owns the only one that was ever made.

DW: Actually, someone e-mailed me after they saw my research, and told me they bought one for $50.

MB: So there were two. Back when these things were still new, back in the late 1980s, we couldn’t find one anywhere. We were calling every music store in the country, and nobody had one. Ms. Musgrave was kind enough to ship her Vesta Koza to us, and we did the premiere with her machine. For a while, that machine traveled around the country with anyone who wanted to perform that piece. If they wanted to perform the piece, they had to get a hold of Thea’s machine. It turned out what she was asking for was really very simple. It’s a modulated delay, but she didn’t know how to specify it in any way other than with a knob position.

DW: The score reads “Turn knob to ‘1’” and it reads “modulation speed remains at zero throughout.” That’s what it says in the score. But the mod speed is the rate of an LFO, and an LFO speed of zero doesn’t make any sense. Then I got the machine in the mail and looked at the front panel, and it said mod speed .1 Hz to 10 Hz. So the first problem was solved just by looking at the front panel. So, again: careful documentation, please.

Unidentified participant #1: May I point something out? This may be offensive, though I hope it’s not. There is an assumption here that music is written to be repeated and saved. I’m a big proponent of disposable music.

DW: I have nothing against that, but there is so much music that is meant to be preserved and repeated.

Unidentified participant #1: Maybe the idea that something needs to be kept and preserved and repeated will just fade and disappear.

DW: Except that as a performer…

Unidentified participant #1: Yes, as a composer, there is a difference.

DW: As a performer you prepare for months, ideally.  Sometimes you only have a week. But you put so much of yourself in learning how to play it, and then to just let it disappear is disappointing.  Other performers can chime in on this.   It’s disappointing if I’ve put a lot of work into it and that’s the only chance I get.  That’s kind of how I feel about the piece I played earlier today. I like the piece. I put a lot of work into it. I’d like to do it again. I think I can get more out of it the next time. I sort of got through this first performance, but I think I don’t know it well enough, yet, and there is more I can pull out of it with subsequent performances. I think with performers, there is a meditative thing that goes on when you play a piece again and again; you start to understand the work on a deeper level. I think that’s why we still play Beethoven.

Unidentified participant #1: To go back to the point where the laptop idea was introduced, where people arrive in a room with mobile phones and all of a sudden they are connected and making music together: do you think there will still be a desire to preserve that as a museum piece? Because that’s how I’m seeing the whole classical tradition. It’s kind of a museum piece.

MB: It is.

Unidentified participant #1: We seem to be so attached to the classical tradition, we don’t want to let go of it.

DW: I don’t look at the classical tradition that way, but I can see how it can be seen that way.  There are so many musicians playing this music over and over. I teach an online Music Appreciation class, and I try to introduce as much contemporary music as possible and teach music as a living art.  I think the reason we play old music is because people want to, not because someone told us we had to.

SM: I don’t think there is any danger of that type of music going away.

DW: I think it coexists beautifully. I don’t see why a classical tradition or even an electronically enhanced classical tradition can’t coexist with spontaneous, ephemeral musical happening that can also be so much fun and rewarding.

SM: Notated music is a blip in human history. The point is that notated music is just one more way humans interact with music, and one more way of making music. It’s been the best way so far to make music persist through time. Every tribal society has its own way of making music persist through time, and we’re just as much a tribal society as anything else. Computer musicians are a tribe. Live electronic musicians are a subset tribe; computer musicians are another subset tribe. We all have our own ways of disseminating it, but in computer music, as Scott was saying earlier, we don’t really have a fixed way to do it yet. We’re still feeling our way.

MB: Computer music is not too far away from where Indian classical music is right now in that it’s an oral tradition, and it’s a very carefully preserved oral tradition.  There are very strict sets of rules that need to be followed. There’s no way to write it down. And right now, while the code that we all use is so constantly changing, I don’t see any way for that to happen unless we develop something that’s analogous to notation or analogous to a more rigorous oral tradition.

SM: The closest thing we have is pseudo-code.

DW: We have pseudo-code, and we have signal processing routines. A millisecond is a millisecond. I’m quoting Gerry Errante.

SM: And an on-off gate is an on-off gate.

JB: Just jump in.

DP: I’d like to throw in a little computer science and software engineering perspective, as opposed to a musician perspective, because I’m a poser on that front. It seems to me that part of the problem being discussed is what a software engineer would call over-coupling of composition and instrument design. Some of the code (if code is the form the work takes, and it certainly is in computer music, though not in all electronic music) is composition, and some of it is instrument design. If you over-couple those two, one of the problems you create is an absolutely unique instrument: if the composition can’t be repeated unless that instrument is reconstructed, then, basically, it’s not going to be performed again. Whereas if you can decouple the design of those two things to some degree, coming up with a class of extensible instruments and then a class of compositions that utilize those instruments, it’s possible to duplicate the instrument and play the composition again.
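[Editor’s note: a toy illustration of the decoupling Parson describes, in hypothetical Python terms; the class and the event data are invented. The instrument is a reusable component, and each composition is reduced to data that any conforming instrument can realize.]

```python
# "Instrument design": a reusable component, deliberately independent of
# any one piece. (Hypothetical names; for illustration only.)
class Harmonizer:
    def play(self, pitch, semitones):
        print(f"harmonize {pitch} by {semitones} semitones")

# "Compositions": plain event lists. Because they are only data, either
# piece can be realized by any instrument honoring the same interface,
# so duplicating the instrument lets the piece be performed again.
piece_a = [("C4", 7), ("E4", 5), ("G4", 3)]
piece_b = [("G3", 12), ("A3", 12)]

instrument = Harmonizer()
for pitch, semitones in piece_a:
    instrument.play(pitch, semitones)
```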

DW: The system I have been working on does that. All these pieces that I’ve analyzed, I’ve broken down into little modules that each do one thing. I’m doing all of this in Max/MSP, but there’s no reason it has to be in Max/MSP. For instance, with Cort Lippe’s ISPW pieces, there’s a spatialization module, there’s a harmonizer, there’s a reverb unit, there’s a flanger, and there’s a granular sampler. I turned each of those into separate modules. The system that I came up with loads each of those as abstractions on-the-fly, and I have a script. It’s a simple text file, just an event list. But it will load all the modules that you need and connect them any way that you want. And then it will play a piece. There are a bunch of standardized instruments, and the piece exists in that little text file. It’s a command-line kind of thing.

MB: That’s a great idea. So if a composer learned your system, he could write a piece for it.

DW: What’s fun about it is that in a recent performance, I used Cort Lippe’s stuff in another piece. So it’s very adaptable. You can take someone else’s very specialized system and start repurposing it. It’s event-driven, so you would think it would be tied to a score with rehearsal numbers, with things happening here and here and here; but I built it in such a way that you can do a lot of branching, too, so you could have an event script that loads another event script. It has a command line, so you can type things in on-the-fly and operate it that way. It has a module for MIDI input; it could have a module for any type of input you want. What I really want is camera input, so I can get rid of my MIDI pedals. I hate MIDI volume pedals. What I really want is something camera-based, so I can move my foot through a field and it turns the volume up or down.

SM: Can you strap an iPhone to your foot?

DW: I’ve thought about it.

SH: To play a little against what you’re saying: is there not a danger that you’re swapping present obsolescence for future obsolescence? There’s a whole body of works that are hard to play now. Composers worked with their systems, and out of ten things, one of them is good. That’s the first approach, isn’t it? “I’m going to develop my toolkit, and I’m going to keep my toolkit up-to-date so people can play it.” Speaking as a programmer myself, I’ve written maybe six environments with the idea that I would write compositions for them. As time has gone by, four of them no longer work; they could probably be made to work eventually if I bothered, but if nobody asks to play those pieces, I’m never going to bother. The interesting thing to me, though, is this idea of a universal text score driving some kind of time-based events, because at least that abstracts things so that I can perhaps interface with the text score in the future.

DW: The way it works: in the event line where you create a module, it just loads an abstraction. I have a main module, so the line is just an event number; it will say MAIN, newmod, the file name, and then you give it a handle; just give it a name, say “delay1.” Later on in the event script, you address your delay module by that handle: event number, delay1, time=1000. It’s a set of very standard parameters and values. That’s how it runs. So then you have to maintain a module that does all of that, has an actual delay in it, and can interpret those keywords. The script itself, the part that’s actually the composition, is very separate from that and very accessible. So when you’re rehearsing one of these pieces and you want to change something, you open up the text file and change a value, and you never have to re-patch anything in Max/MSP. When you’re on stage and you want to adjust something, it’s very simple to do. That’s what I was after. I want to be able to rehearse, rehearse, and rehearse. I have my own system. I can travel with it, set it up somewhere in about twenty minutes, and play a concert of four or five works with different technical requirements on the same system.
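[Editor’s note: a guess at what such an event script and its interpreter might look like, reconstructed from Wetzel’s description above. The syntax, file names, and parameter names are hypothetical, and the real system runs in Max/MSP rather than Python; the point is that the “composition” is an editable text file, separate from the modules that do the signal processing, so a rehearsal change amounts to editing one line rather than re-patching.]

```python
# Each script line is one event: an event number, a target (MAIN to load
# a module, or a module's handle), and arguments. Syntax is guessed.
SCRIPT = """\
1 MAIN newmod delay.maxpat delay1
2 delay1 time=1000
3 delay1 feedback=0.4
"""

modules = {}

def run_event(line):
    number, target, *args = line.split()
    if target == "MAIN" and args[0] == "newmod":
        filename, handle = args[1], args[2]
        modules[handle] = {"file": filename}   # stand-in for loading an abstraction
        print(f"[{number}] loaded {filename} as {handle}")
    else:
        key, value = args[0].split("=")        # stand-in for a parameter message
        modules[target][key] = float(value)
        print(f"[{number}] {target}: {key} = {value}")

for line in SCRIPT.strip().splitlines():
    run_event(line)
```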

SM: This seems like an interesting paradigm for performers who perform electronic music in general.  Obviously it’s a lot of work to set up. Is it something you can pass on and teach to other people, or do you think everyone has to develop their own way of working with the technology and the problems of obsolescence and non-portability?

DW: As we’ve been discussing, it changes so fast.

MB: That’s why we need something that steps outside of the technology, as a representation of it that isn’t necessarily dependent on any technology.

SM: I think there’s also a problem because of a slight difference between composers and performers, and I could be wrong about this, so shoot me down if that’s the case. For the performer, repertoire is important, and you need persistence when you spend six months preparing a piece. Whereas, personally speaking as a composer, I write a piece; it’s done. Next piece. Next piece. I think this is why so many tech pieces lie in obsolescence: composers don’t go back and make those pieces work on new systems. Composers would rather write a new piece. This is an important difference in viewpoint between composers and performers in pieces like this.

BL: You get emotionally attached to the pieces that you play.

SM: Right. Possibly more attached than the composers.

BL: It sort of goes back to the comment about museum pieces. I play Bach. I sing Mozart.

SM: You’re not in a museum.

BL: But you do it because you have a genuine passion for it. It’s never dead to you. This is why you become a musician in the first place. And your audience has to be engaged. There’s a translation of what other people write that you give to other people. That’s why you become a performer. I’ve definitely done pieces before (and my problem is that I’m definitely not a programmer) where, if there is something wrong with a patch, I can approach it marginally on a basic level; but otherwise, if it doesn’t come to me as a package, I need someone there.

DP: So is it the case that there are no instrument designers in the classical sense?  I mean the composers are not the violin designers.

DW: There are, but they are not connected well with the performing community. Or they are working more with the composer.

MB: Or they are the composer.

DP: What I was saying was that classically, the composer was not the one who designed the violin, for example. And neither was the performer. So is the instrument designer missing?

MB: I think you’re right.  And I think that’s where David is trying to plug that gap and find a way to do that.

DW: There have been great players through history who were instrument designers and contributed to the design of their instrument. Sometimes there are people who just focus in on the instrument design itself, but they work typically with performers.  And then sometimes a composer will catch on to what they’re doing. I have my favorite historical analogies.  Mozart wrote his clarinet concerto for a kind of weird hybrid instrument that his drinking buddy came up with.  It was a cross between a basset horn and a clarinet.

BL: You made a point about composers needing to document why they want a certain effect; I think it’s a good idea.  The thing about notated music is that it’s also imperfect. It can be interpreted in so many different ways.  Even folk musicians and jazz musicians have to interpret written rhythms according to varying performance traditions. And that seems to be getting more and more convoluted.

MB: And that’s always been a problem.  You can go all the way back to the French Baroque.

BL: But I think we have an advantage because now we have recordings.

DW: But that’s not always the best…

BL: Yes. If used incorrectly, recordings have their limitations.

DW: I spent the last two weeks intensively listening to a recording of this piece I performed. I finally got together with the composer two days before the performance; he said that the performance on the recording didn’t go right. [This comment evokes big chuckles and nodding agreement from the audience.]

DP: This brings me to my last question. Someone mentioned Indian classical music, which involves a substantial amount of improvisation. I’m wondering if this technology pushes improvisation harder than classical instrument making or composition technology ever did, so that a piece can be composed for a range of instruments and part of the performance is the improvisation over that range of instrument space. An example that comes to mind is the tale of Charlie Parker pawning his saxophone for heroin money and then proceeding to give an amazing performance on some squeaky plastic saxophone. So I don’t know if this technology pushes us further in the direction of improvisation as part of the performance.

SM: You jumped right into the question I was going to bring up. Take the Mozart Clarinet Concerto, for example: you can port it from basset horn to clarinet because it’s note- and rhythm-based. In a lot of electronic music, composers are tied to the specific sounds and timbres they are using. Even the wrong loudspeakers can make some composers reject a performance opportunity. Porting stuff in that way (timbre-based compositions) becomes very problematic, whereas note-and-rhythm music (not that I’m trying to reduce Mozart to only that) is more portable. Another example is the Schubert “Arpeggione” Sonata. There are no arpeggiones today, but it’s quite happily played on cellos and viols and things like that, and it still sounds great.

BL: Maybe it’s a question of asking what it is about your piece that you want to preserve. What do you want to remain consistent, and what is it that you don’t mind changing over the years?

SM: A living will for your pieces!

BL: There are pieces that don’t have any dynamics on the score or the publisher adds dynamics as a suggestion.

Unidentified participant #2: Another thing is the composer should include a sample of the result of the processing with the sheet music. Then it’s quite obvious what kind of reverb, for example, is intended.

DW: Yes. I would say as many kinds of documentation as you can throw at it, even simultaneous documentation of the same thing. For example, a composer could document: “Here is a description of it. Here are my thoughts on it. Here’s a block diagram. And here’s a recording.” All of them. Then you can triangulate the problem.

SM: It becomes a framework for future-proofing the piece.

BL: And then you don’t have the performer fixating on something that isn’t important to the composer.

DW: Fixating is something we do a lot.

BL: We see one staccato and we think the composer really wants a specific sound.

DW: Then we end up missing what’s really important.

BL: For example, just yesterday, Andrew said, “I was told I couldn’t have naked notes and that I have to put dynamics on everything.” He was joking around about it and said I could change those dynamics, even though I had spent all this time trying to figure out how to realize them. But really, it was the interaction with the electronics that was more important.

SM: Can I ask, though: as performers, isn’t there an issue that too much ownership of the work is going toward the composer at that point? If I were to write something for trumpet and demand that it only be played on one make of trumpet, ideally from a factory batch of 1,000 manufactured between two dates, that, in my opinion, would be fairly ridiculous.

DW: I was thinking the same thing, like “This is a piece for a Steinway C.”

SM: If I write stuff that I want people to play, then I make it easy to play. It seems to me that the dialogue we are having here is driving toward a point where everything is dictated absolutely, and it’s starting to feel like the performer would become redundant. If I’m going to record an example of the processing, then why don’t I just keep pushing performers until they make a recording that I think is the best, and then I die, and that’s the best recording that will ever exist.

DW: Then maybe you will have a performance tradition that would sustain it. Beethoven’s dead.  He’s not here to do that (push us to make perfect recordings).  So somebody else has ownership and it’s not just the performers. It’s the listeners, the musicologists, teachers; everybody seems to own a piece of Beethoven.

BL: I have two thoughts on that. First, take two performers looking at, let’s say, Ligeti scores. The notation is so specific: every note has an affectation, every note has the most specific rhythm, which sometimes doesn’t make sense. But if you get two different performers, even if you teach them the same way, the performances are going to be completely different. Secondly, for me as a performer, if I see a score that is heavily, heavily notated and very specific, it becomes clear whether or not the composer has a clear intention behind it. If they don’t have a clear intention of why they are doing it, then it’s not going to hold up in performance. But if it’s there for a valid, interesting musical reason (and I want to avoid discussions of quality of music; that’s just a can of worms), then upholding that tradition becomes a satisfying thing.

DW: My whole focus on trying to find these pieces that are worth sustaining—it really comes down to the opinions of the performers.  We were talking about Thea Musgrave’s Narcissus.  I gravitated toward that one not just because I like the piece, but because so many other people want to do it.  And it just seemed like a problem that needed to be solved because there were a lot of performers out there waiting to do this. They had heard the piece, and they really wanted to do it themselves. There’s something about getting inside a piece that is very different from just listening to it. Getting into it, and performing it, and interpreting it.  You take it into yourself and then send it back out. It transforms you and it transforms the piece. So when there is some dumb obstacle like we don’t know what the modulation setting is, then—sometimes it’s that easy. Sometimes it’s a simple thing to sort out.  Others are far more complicated.

MB: An example of that would be Morton Subotnick’s Ghost pieces.

DW: Those scared me away.  I was going to do Passages of the Beast.  I heard the piece and I thought, “Wow, that was really cool.”   So I looked at the score, and I called the publisher. And I asked about the Ghost Box, and they said, “Well, we could rent that to you, but it’s had mice in it. The mice chewed out the wiring, and now it doesn’t work.”

MB: Mort is now really interested in this.  You should get in touch with him. I think he would be happy to work with you.

JB: According to his web site, he is already in the process of transferring the electronics of some of the Ghost pieces into Max/MSP.

DW: There are these pieces that capture the imagination of players, but the moment they try to access it, they get scared away by these technical problems. And if you are not a really tenacious computer music oriented performer who has programming skills—how many of us are there?

SM: It’s a family that’s slowly increasing.  Give it a couple more generations.

DW: I think there’s a real need for some training. Performers who are interested in this at least need some kind of workshops or tutorials that are really aimed at performers who want to do a broader range of works—and not just at composers who want to do their own work.

BL: If you are writing for someone who isn’t technically proficient, and you can make it simpler, then make it simpler. For example, if you’re just making what amounts to a tape part realized by a live processor, with no processing that actually needs to be live, then just make it a tape part. It’s so hard, however, to say that to a composer.

JB: Yes, for some computer music composers, it would be very awkward to suggest to them that the music that they created with their fancy DSP algorithms could be realized just as effectively by capturing their ideas in a DAW and saving it to a fixed media format that could be simply played back in iTunes.

DW: They have very fond ideas about interaction. This conference has been kind of weird for me. I’ve spent so much time trying to take control of the electronics and perform on my own stuff. Then I come here as a performer for the conference, and I’m playing all these pieces where basically I’m a puppet on stage. The composer is out there in the hall, and I can’t see them because of the lights, and they’re doing something with the electronics, I guess?

SM: Which is the tradition. You are kind of privileged to have built yourself a system that allows you not to have to do it that way.

MB: I think one thing we are interested in is changing that paradigm and getting performers more engaged and more aware of what’s going on.

KM: I’ve had a few pieces where I would see little things in the score—I would play, then wait and hear what happens, and think, “OK, I guess I’ll keep going now.” But when I play with a cellist, I know that when they do this [motion of a bow], a certain sound is going to come out. I know exactly what’s going to happen. I know that it’s going to be a low note. It’s a very simple thing. But with computers and multimedia, I don’t know what’s going to happen. I just did a piece where the score had a bunch of fermatas, and underneath each fermata there was a word. Underneath one of the fermatas was the letter “D.” And I played, and I’m waiting, waiting, and waiting. Then I’m asked, “What are you waiting for?” I respond, “I’m waiting for the delay. I need to hear the delay. Is ‘D’ a delay? I hope it’s a delay.” I don’t know what the effects are. The first thing is for performers to understand aurally, because—as much as composers write things down, and give us fifteen pages of notes and diagrams we can put up all over the house—I trust my ears more than my eyes.

DW: I think when you are on stage, that’s what you have to do.

SM: And it is what your training has brought you up to do.

BL: Would you agree, though, that if you knew why they wanted a specific sound, you would know what things to let go of in a performance?

KM: And for something as simple as a pedal when you’re instructed to put a pedal down, is the pedal stopping something or starting something?

DW: When you’re playing piano music, when you put a pedal down, you know what the pedal is going to do.  When I was trying to figure out Cort Lippe’s ISPW patches, I opened up this thing that’s called a sampler. I didn’t know what was going on, and there was this little subpatch called Trevor, named after Trevor Wishart as it turns out. And it’s doing this strange logic where it’s playing these tiny little snippets of sound. So I go look that up and I go, “Oh, that’s granulation.” I kind of learned all these things that composers have been trained to use.  I really didn’t figure this stuff out until I was analyzing a real piece of music that I wanted to play and understand. I played it actually, and I did the puppet version where I was there on stage and Cort was out in the audience with his NeXTcube and his ISPW card. We flew him in from Buffalo, I played the piece, he left, and that was it. Then six years later, I asked Cort if he was still interested in that piece.  Can you send me that port so I can see what’s going on? I still have not gotten around to performing it again because I’m still analyzing it and trying to resynthesize it and put it back together into a real viable instrument. I learned a lot of tricks just by looking at a piece.  But then, I looked at somebody else’s piece, and then I realize, that’s just the same thing I saw in Cort’s piece. It’s a form of musicology. One of my advisers begged me not to use the term “technomusicology.”
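[Editor’s note: granulation, as Wetzel encounters it in the “Trevor” subpatch, means scattering many tiny, enveloped snippets (“grains”) of a source sound through time. A minimal, hypothetical sketch, with illustrative parameter values throughout:]

```python
# Minimal granulation sketch; parameter values are illustrative.
import numpy as np

SR = 44100

def granulate(source, n_grains=400, grain_ms=40, out_seconds=4.0):
    """Scatter short enveloped snippets of `source` across the output."""
    out = np.zeros(int(SR * out_seconds))
    g = int(SR * grain_ms / 1000)          # grain length in samples
    env = np.hanning(g)                    # smooth each grain's edges (no clicks)
    for _ in range(n_grains):
        read = np.random.randint(0, len(source) - g)   # where to read in the source
        write = np.random.randint(0, len(out) - g)     # where to place it in the output
        out[write:write + g] += env * source[read:read + g]
    return out / max(1.0, np.abs(out).max())
```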

SM: It’s a technique thing.  You learn a piece of Mozart, and you learn a particular fingering and you play another Mozart piece, and you go, oh that’s the same fingering.

DW: And then you play something by Haydn and you go, oh, that’s where Mozart gets it.

SM: But it seems, so many times, that performers are like film actors in front of a blue screen, having to act alongside a Bugs Bunny who isn’t really there. There’s nobody to feed back against.

Would it be worth asking the composers and computer music writers: what are your experiences in the other direction, in writing for performers? At the moment, the onus is on the composers to make more sense to the performers. But is there anything in the other direction? I’m aware that the answer here might be a quiet room, but I thought I’d put it out there.

JB: Why don’t you start, Greg? What do composers expect from performers to help make the repertoire sustainable?

MB: So much of the time, all the composer expects is that they show up, and that they’ve hopefully practiced the piece a little bit. A lot of the time our expectations are very low. The Holy Grail for a composer is that after you ask a performer to play your piece, they want to play it again. And in extremely rare cases you get someone like David, who wanted to play the Kramer piece badly enough to go to so much more trouble. We need more of that. Not only is Beethoven dead; Kramer is dead. If David hadn’t done that work, we wouldn’t have that piece around anymore.

DW: When I went to SPARK in 2006, they let me present the Kramer work. I kind of lied about the presentation of the Kramer work because there was no check-box for performers who were presenting pieces without the composer.  Kramer had already died two years earlier, so I could not bring him with me. There is no forum for performers who want to present pieces that they think are cool.

MB: And there should be more things like that. Part of the problem with computer music is that composers are writing pieces that can’t be performed unless they are in the same room. And that’s something that we really need to find a solution for. I’m as guilty as anyone, but I’m trying to get away from that.

DW: But I think a piece does have to start that way, at least at the premiere.

DP: There is some enabling technology that could help: use a data exchange format that is not only machine- and human-readable but machine- and performer-readable, rather than coding specifically for a Kyma machine or writing code in ChucK, etc. Nowadays the format would probably be XML. You run into this in lots of other application domains. I spent time doing interoperability testing in Asia where the common language was XML. You would spend time pointing at XML on a screen to work out incompatibilities between the people who were generating media signals and the people who were synching the media signals in order to render them. It’s a similar sort of problem, and it boiled down to coming to agreement on a data exchange format that everyone involved could comprehend at the time they needed to.
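[Editor’s note: a sketch of the kind of exchange format Parson suggests. The schema below is invented for illustration, not an existing standard; the point is that the event is described in high-level terms that a machine, a composer, and a performer can all read, rather than coded for any one environment.]

```xml
<!-- Hypothetical event description; element and attribute names are invented. -->
<event number="12" time="120.0s">
  <module type="filter-sweep">
    <param name="shape">lowpass</param>
    <param name="from">200 Hz</param>
    <param name="to">2 kHz</param>
    <param name="duration">10 s</param>
  </module>
</event>
```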

DW: I always thought we could have a big dialogue with our friends in the graphic arts, too.  You’ve got all these MFA students whose portfolios are in Flash. How long is that going to last? Can you even get to your source code anymore for some project you did several years ago?

AC: Another solution is—do you know Jeff Herriot? He travels with a performer and they bill themselves as a duo and they do lots of different pieces.

SM: There are a few English examples.  There’s an English cellist named Neil Heyde who travels with an electronic composer, Paul Archbold, and they do concerts of lots of different music.

DW: A performer who has an engineer sidekick.  That would be a great model.

SM: It’s more of a symbiotic relationship.

AC: I see more and more of that.

SM: So do I.

JB: Earlier today, some of us saw a great example with Krista Martynes and Julien-Robert Legault Salvail.

DW: I’ve often thought of that.  When I do my own concerts and when I’m bringing all the technology, it’s very hard to concentrate and that’s why I rehearse with the electronics so much. I practice everything from shutting down the system to putting it in the bag.  Then I take it out and run it.  And I do that over and over.

BL: It’s like tuning.

DP: Students are good for that sometimes.

DW: I want to be able to do it myself. I want to be able to go to a venue, unpack, and put it together myself.  I can put my clarinet together myself.

MB: It’s like practicing scales.

DW: It’s like that.  This cable goes here. This cable goes here. And I don’t think anymore. I just know where it all goes.  But it’s still difficult to perform and do that at the same time.

SM: You shouldn’t have to be your own roadie.

MB: As much as I like the model Andy is talking about—it’s a much more feasible one—it’s much more rewarding for a performer to understand what’s going on under the hood.

DW: But on the other hand, suppose I could have a technical assistant as well: someone I could talk to, who speaks the same language, but who is actually responsible for setting up and making sure everything is running, and we could be on stage together. I don’t really like the idea of a composer sitting at a desk out in the hall while I’m noodling away on stage and I don’t know what’s going on. I feel like I’m half of an ensemble.

Unidentified Participant #3: Maybe we should approach these pieces less as “a piece that I write for a clarinet player” and more as a collaboration with a performer, one that is going to take three months and will require more of a commitment from the performer. Through that process they will learn how to operate the Max patch and learn about synthesis techniques. Otherwise you’re just playing with structures. I want to hear that particular passage with that particular processing technique and see if it works for me long before the premiere, not at the last rehearsal.

DW: The other thing I think we are getting into, now that we have talked about the onus on composers to document better, is that there just need to be more performers who will make that kind of commitment. I’m not sure how we get there. The program at Peabody—I heard about it as an undergraduate—was where I knew I wanted to go. It was the only place I knew of that did that. That was fifteen years ago, and it’s still the only place I know of.

MB: And there are still not many people like you who want to be computer music performers.

DW: I do hear from them, though: clarinetists and flautists who ask me about Narcissus and tell me that they really want to perform with electronics.

MB: Send them to Peabody.

DW: I coordinate a Music Technology program. It’s not a composition program.  I have students studying an instrument and they are also learning music technology, but more towards the recording side. But occasionally, I get somebody interested in what I do because I talk about it all the time.  Some of them start to get interested and then they decide that maybe they should do something on a recital.

MB: And Jeremy, you’re doing that kind of stuff?

JB: Yes, I am also trying to teach computer music performance within a program that was originally designed for students studying an instrument who lean towards audio recording. I go to computer music conferences, but since I play the double bass, I also go to the big bass conventions. I am routinely approached by younger kids who want to get started doing what I do. Apart from telling them I studied computer music performance at Peabody, I tell them all the things they need to do, but after my spiel, it seems impenetrable to them. The learning curve at first seems a bit overwhelming. Max is easy to use, but not for someone who has never seen it before. The payoff at the beginning is not enough to keep going.

DW: It’s not just Max programming.  It’s understanding granulation. It’s understanding signal processing and filter equations.

SH: As performers, there’s so much preparation that needs to be done just to be able to play the part indicated in the score, properly and in a fixed state, that trying to put the technical layer on top of that can become overwhelming. Meanwhile, composers are constantly chasing grants or commission money, which means that the preservation of a work that has already been paid for, and won’t be paid for again, is of no interest to them. So they just keep driving forward. The gap, the part that’s missing, is the technically dedicated mediator—the role in the middle—the person who solves all of these technical problems and who is driven by the urge to resolve them. As a composer, you could say to this person, “I want to do this; how do I do that?” As a performer, you could refer to these people, who are aware of the technical issues in a composition. Personally, I actually fit the description of that group, and I’ve done a lot of work for a lot of people in that role, but I’ve discovered that it is a role that isn’t really acknowledged. As an illustration: over the last four or five years, there have been works on which I would have wanted to claim some kind of technical consultancy. For example, Scott’s piece, which was played earlier this year. When Scott thought of the idea, he came to me with a description of a proposed project and asked, “Is this doable?” My answer to him was, “Yes, you could do it in six months.” I knew that I could do it, and I knew that I could support him if he had a problem.

DW: And that’s the role of the instrument maker that we were talking about.

SH: But I need to be acknowledged in that process.

DW: And you should be acknowledged. It’s interesting that the composer and the performer would be expected to be there. And then there is a need for the technical role, but people don’t bother doing it because their work doesn’t get acknowledged.

SM: But it’s not about the instrument maker.

DW: I played Hans Peter Stubbe’s bass clarinet piece. They had a technical person as part of the original project, but we didn’t have that person here, and we faced enormous technical problems getting the piece off the ground. If he had been here, things would have gone much more smoothly, and I would have been much more relaxed.

MB: That, historically, has been the role of the instrument builder.

SM: If we take this discussion outside the music sphere and look at, for example, drama: you don’t stage a play without a stage manager, a lighting director, and so on. That’s their job. If a piece of music were produced like that, there would be someone there dedicated to fixing the patch and setting stuff up. That’s what you’re talking about. That role doesn’t exist in music.

BL: How about this? Etude books for electroacoustic music?

DW: I thought about that when I was at Peabody. I was in the studio working with all the equipment, and I was thinking that there really needs to be a method book. For example, Etude No. 1: Exercises you can do with a multi-tap delay. This would not be for the stage.  It would be for performers working in the studio learning how to perform with delay. Or even microphone techniques for computer musicians. So performers know how close they need to be to a mic, know the different types of microphones and when to use them, and know pickup patterns.
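[Editor’s note: for readers unfamiliar with the device such an etude would target, a multi-tap delay returns several delayed, attenuated copies of the input. A minimal, hypothetical sketch, with illustrative tap times and gains:]

```python
# Three-tap delay, no feedback; tap times and gains are illustrative.
import numpy as np

SR = 44100
TAPS = [(0.250, 0.6), (0.500, 0.4), (0.750, 0.25)]   # (delay in seconds, gain)

def multitap(x):
    longest = int(SR * max(t for t, _ in TAPS))
    out = np.concatenate([x, np.zeros(longest)])     # room for the latest tap
    for t, g in TAPS:
        d = int(SR * t)
        out[d:d + len(x)] += g * x                   # each tap: a delayed, scaled copy
    return out
```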

SH: But now you’ve already pulled away from the idea of a dedicated technical mediator. You’re starting to facilitate the performer and give them more tools. I think the problem is that the task of technical mediation is too large for the performer. I don’t think that it’s so big that you couldn’t have one technical person facilitating five or six performances in an evening.

KM: That exists already. There are companies that do that. There are two guys from England who call themselves Sound Intermedia; they tour the world, and at every opera they work on, it’s their responsibility to take care of everything technical. They are financially supported entirely by their business, which is technical support for opera.

Unidentified Participant #4: Does that business have a technical system?

KM: They are composers. They read music. These are the important things about them. They are the intermediators; they are like “performer whisperers.” So these jobs do exist. What doesn’t exist is the money. For example, in Quebec we have a number of grants. On applications, I never put “solo.” There’s another person there, and he should be paid as much as me. There are two people on tour; we need two flights and two of everything. There’s also production. It’s up to us to establish a collaboration process. I try to be clear that I’m not just going to improvise while the composer clicks away. Instead, I want the collaboration to include a composition process with hours devoted to experimentation, development, and finally performance. We’re not going to experiment during the performance; we are going to perform what we worked on during the experimentation and development hours. It’s the collaboration process. It’s what we, as a duo, did for a year as part of our research. We had a microphone session. Then we had a speaker session. Then we devoted time to sound, then to images. Then we changed some of the images. THEN we talked about making a piece. And it still needs more work: movement, lighting. It’s at its most interesting, because we are right at the theater level. Getting the money, that’s the hard part.

JB: We have to convince administrators that the people doing the technical mediation are important. I don’t know how many times I have had to go back to the person making the programs to ask that the crew involved in technical mediation be listed in the printed programs. Their importance to the musical deliverable is obvious to me, but it needs to be made obvious to others outside our area of expertise. At the very least, we have to go back to our home institutions and convince our colleagues of the value of the people who take care of the technical aspects, and maybe the money will start flowing.

KM: Or go to the theater people and talk to them about their lighting and their technical riders. Put yourself in their mindset. They get as rigorous as keeping track of all of their protection laws. We don’t think like that. We come with our tent and campfire, and we try to make a little concert. But we need to get as serious as theater. This music is fantastic music, but if there is no technical support, the music won’t be passed on.

Unidentified Participant #5: But the problem is, how can you support something that’s at the cutting edge of what’s going on? You have to have technicians, and you have to have a training process, which means formalizing the process. It’s a strange circle: if you had a formalized process in the first place, you wouldn’t have the problem at all.

KM: The other problem is the performance practice. When performers perform something, they acknowledge the composer, but they tend not to acknowledge the guys in the back who are clicking away and making sure everything actually happens. When I perform, half the time the composer is not there, and when I try to acknowledge the technical crew, the audience assumes the person I’m pointing to is the composer.

SM: Again, this goes back to the theater model. In music, acknowledgement of the technical crew is not thought of from the ground up; it’s not part of the original conceptualization of the composition. It needs to be part of the composition right from the start: who is going to deal with and mediate the electronics?

DW: The Kramer piece that I started with actually does have a role for a technician. There’s a line in the score for somebody at the mixing console operating a matrix mixer and punching things in and out. And he is actually a performer on stage.
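
The matrix mixer in that role reduces, computationally, to a small gain matrix routing inputs to outputs, with the performer toggling entries in real time. The Python sketch below is a hypothetical illustration; the channel counts and gain values are invented for demonstration and are not taken from the Kramer score.

    import numpy as np

    # A matrix mixer routes N inputs to M outputs through an M x N gain matrix.
    # "Punching in and out" means toggling entries between zero and a gain
    # value as the performance unfolds.
    gains = np.zeros((2, 4))           # 2 outputs x 4 inputs, everything muted
    gains[0, 0] = 1.0                  # punch input 1 into output 1 at unity
    gains[1, 2] = 0.7                  # punch input 3 into output 2, attenuated

    inputs = np.random.randn(4, 512)   # 4 channels of audio, one 512-sample block
    outputs = gains @ inputs           # each output is a weighted sum of the inputs

    gains[0, 0] = 0.0                  # punch input 1 back out for the next block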

SM: Stockhausen pieces have that, as well.

SH: Perhaps the technicians just need to get a little more audacious about it. I have a background as a live sound engineer; I used to work theaters and gigs and all sorts of things. After doing five shows a day for a month and a half without being acknowledged whatsoever, we started playing a prank: we would dress up as random donkeys and stroll across the stage as extras in the middle of scenes. Eventually we got our names in the programs and we got a bow.

Conclusions And Future Activity

This record of the discussion can serve as a reference for establishing a directed academic community engaged in formal discussion and research on the maturing sub-specialty of computer music performance. The content of this document relies solely on the contributed narratives and expertise of the participants in the International Computer Music Association’s International Computer Music Conference 2010 Unconference Unsession on Computer Music Performance; it does not rely on secondary sources. It can be considered a trusted primary document capturing a one-hour discourse among the computer music performance experts in attendance at the 2010 International Computer Music Conference.

In an ideal setting, this discussion would continue and mature beyond the unconference: identifying established performance and technical production practices, codifying a lexicon of terms and techniques, addressing current challenges such as sustainability and notation, and promoting computer music performance as a legitimate artistic and professional endeavor within the academic computer music community, the broader mainstream classical community, the underground experimental community, and the commercial music communities.

Perhaps, at the very least, a regularly scheduled conference (or unconference) of computer music performers could be established. If interest and resources are sustained, an academic society and journal mirroring those that promote computer music composition and research could follow.

Acknowledgements

Special thanks to Freida Abtan, chair of the ICMC 2010 Unconference; Margaret Schedel and Daniel Weymouth, co-chairs of ICMC 2010; Chryssie Nanou; the Stony Brook University Department of Computer Science for use of their facilities; and all unidentified participants in this transcript who made contributions to the discussion.


3 Responses to A Summary and Transcript of the ICMC 2010 UnConference UnSession on Computer Music Performance

  1. Jeremy Baguyos says:

    Here is a timely and relevant article from Chamber Music Today (shared by Jason Bolte) about sustainability of repertoire. http://chambermusictoday.blogspot.com/2011/04/save-music-writing-refactorable.html

  2. Buck says:

    Hi, my name is Buck and I live in West Palm Beach, Florida. Oddly enough, I own one of the very digital delay units spoken of in this piece. I just happened to be doing some research on the unit when I found your conversation. So I guess there are at least 3.

  3. popelulu says:

    I was there — I’m pretty certain Unidentified Participant #1 was Eliot Handelman.
