rhoadley.net music research software blogs
In terms of the compositional process, the development of pSY, and specifically Copenhagen, has in some respects been very similar to my approach to acoustic composing, and in others radically different. Below (see The Musical Interface) I will investigate whether one can draw a real distinction between a composition and the hardware and software that it uses, but first I would like to investigate the related idea that the composition itself can be seen as a form of programming, and vice versa.
There's nothing new in this idea - in terms of an acoustic piece I might have an idea or two and seek to develop, expand or simply investigate these using various 'processes'. This is a very standard musical method, observed as much in Bach's use of fugal/episodic technique, for instance, as in the complex use of matrices and rows in Berg, Messiaen or Peter Maxwell Davies. All of these processes tend to be more or less algorithmic in that, no matter how arcane and esoteric, and no matter how inaudible the resulting structures are, the processes follow generally clear and quite straightforward rules (at least at the outset - these rules or the resulting structures may often be altered or edited later). So, for instance, there is little fundamental difference between Bach's use of a phrase from a fugue subject combined with a chord sequence that might make an 'episode' (for instance, Das Wohltemperierte Klavier Part II Fugue I, bars 13-18; Fugue II, bars 8-10; Fugue III, bars 12-14, etc.), and Messiaen's use of combinations, illogical though it may be, in Chronochromie (Sherlaw Johnson ****). More intriguing is Harrison Birtwistle's use of 'random numbers' in a number of pieces, as, at least on the face of it, there is by definition no logical pattern to the numbers - an intrigue only increased by Birtwistle's own admission that he can no longer remember how he used them (Hall 1984 p45). In another interview, however, Birtwistle at least partially resolves this paradox by emphasising that random elements are, in his opinion, always present, and that we simply determine which are the more or less random parameters in any creative process (BBC 1989).
This is not the place to undertake a full review of how algorithmic various elements of music are - however, it is quite clear that pSY operates (as does all computer software) algorithmically and that Copenhagen is strictly algorithmic music. As with the Birtwistle, the fact that no one knows precisely what will happen in detail at any given point is no more strictly relevant than the fact that in the performance of an acoustic composition no one knows precisely what will be happening at any given point.
In comparison to my experience of acoustic composition, then, there is a clear parallel between the investigation of a particular process (let's see what happens if…) and the construction of a routine. In the best cases these are a combination of logic, irrationality and intuition. As an example, I can take the part of Copenhagen where what I think of as a 'flute' sound alternates between rather scalic patterns and a more arpeggiated form. Each of these contrasting patterns is constructed from functions controlling the MIDI pitch of the events. In addition, there are a number of routines and functions for altering the nature of the sound. One of the above functions is based around 'wondrous' numbers. This is a mathematical concept that I first came across in Douglas Hofstadter's ubiquitous book Gödel, Escher, Bach: an Eternal Golden Braid (Hofstadter 1979), where it is used as an example of a process which is not algorithmically predictable.
Basically, the function takes an arbitrary number. If the number is even, the function halves it; if the number is odd, it multiplies it by three and adds one. In each case, the resulting number becomes the output. Here is a typical example:
x = 13
13 is odd so x = 3 × 13 + 1 = 40
40 is even so x = 40/2 = 20
20 is even so x = 20/2 = 10
10 is even so x = 10/2 = 5
5 is odd so x = 3 × 5 + 1 = 16
16 is even so x = 16/2 = 8
8 is even so x = 8/2 = 4
4 is even so x = 4/2 = 2
2 is even so x = 2/2 = 1
From this we get the following sequence of 'pitches': 13, 40, 20, 10, 5, 16, 8, 4, 2, 1
Similarly, starting with 15 we end up with this sequence: 15, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1
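The rule can be expressed very compactly. Here is an illustrative Python sketch of the process (the piece itself implements it in Basic; the function name wondrous is mine), generating the sequence until it reaches 1:

```python
def wondrous(n):
    """Return the 'wondrous' (hailstone) sequence starting at n.

    Halve even numbers; replace odd numbers with 3n + 1; stop at 1.
    """
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(wondrous(13))  # [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
print(wondrous(15))  # ends ..., 16, 8, 4, 2, 1
```

Note that whether every starting number eventually reaches 1 is famously unproven, which is precisely the 'not algorithmically predictable' quality mentioned above.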
Here is the very straightforward code in Basic:
' If the note pitch has reached 0 or 1 then reinitialise with a random value (1-127):
If X = 1 Or X = 0 Then X = Int((127 - 1 + 1) * Rnd + 1)
' If the note pitch is even then divide by 2, otherwise multiply by three and add one:
If X Mod 2 = 0 Then
    X = X / 2
Else
    X = (X * 3) + 1
End If
' Convert X into MIDI range:
NotePitch = X Mod 127
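For readers more at home in Python, the Basic routine can be transcribed step for step. This is a sketch only - the function name next_note_pitch is mine - and note that, as in the Basic, the final Mod 127 yields values 0-126 rather than the full 0-127 MIDI range:

```python
import random

def next_note_pitch(x, rng=random):
    """One step of the wondrous-number note generator.

    Returns (new_state, midi_pitch)."""
    # If the sequence has bottomed out at 0 or 1, reinitialise (1-127):
    if x in (0, 1):
        x = rng.randint(1, 127)
    # Even: halve; odd: multiply by three and add one:
    if x % 2 == 0:
        x = x // 2
    else:
        x = 3 * x + 1
    # Wrap into range, mirroring the Basic's X Mod 127 (yields 0-126):
    return x, x % 127
```

Iterating this from an arbitrary seed reproduces pitch sequences like those shown above, with the characteristic tail 16, 8, 4, 2, 1 before each reinitialisation.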
As it happens, although I did not use it immediately, I did eventually use the wondrous function quite extensively in two places in Copenhagen. In one, the pitch patterns which result are quite clear; in the other, this is not the case. In neither did I specifically think that I 'must' use the wondrous function - I had constructed it earlier and, in the circumstances, having tried it amongst other resources, the results of its use were satisfactory. In other words, the utilisation of the function was arbitrary to the extent that if I had not been interested in it I would not have implemented it earlier, and if I had not done this I would not have been able to experiment with it during the composition. Presumably its most important feature is that it produces a patterned (yet not entirely predictable) result - although mathematically there is no algorithmic way of predicting whether a number exists that the wondrous function will simply send higher and higher (see Hofstadter 1979 for a more detailed discussion of this intriguing idea). In effect, where pitches are discernible, the final values produced by the function (16, 8, 4, 2, 1) are quite clear, and it is merely the length of each 'loop' and the manner in which these pitch values are approached that make the effect.
From the above it can quite clearly be seen that this is hardly a 'logical' approach to composition! Indeed it is almost entirely arbitrary, each idea often depending on a series of previous ideas and whether or not they had been implemented. Moreover, knowledge of these preceding ideas is itself arbitrary - if I had not read Hofstadter's book and been struck by the possibility of using this function in a musical environment, it would not have happened either. But on this level the sequence of events is not so arbitrary - I am a musician, and yet I am interested in the sort of literature of which Hofstadter's book is representative (although I should say that I do not agree with some of his later conclusions concerning AI). I would probably not have read the book, and certainly would not have considered any possible musical application for wondrousness, had the circumstances not been what they were!
I would further suggest that none of this is any more peculiar than what happens in the creative processes of an acoustic composer. Although all the methods used by Bach, Berg, Messiaen, Maxwell Davies or Birtwistle mentioned above do have an effect on the resulting music, the method is far from the most important consideration - the methods are, from this point of view, the materials with which the piece is made. All of the above composers have a perfectly recognisable musical style which incorporates the various methods they have used in many pieces (just as Stravinsky's use of serialism in his later pieces makes them no less Stravinskian). In this way it could be argued that, far from making an arbitrary use of found materials in a sort of chaotic process, the above composers take various materials, methods and processes which they feel might produce interesting results and incorporate them uniquely into their own style. (Of course, such materials, methods or processes may be more or less successful!) Similarly, I will suggest below that the act of composition is itself in many ways similar to some of the ideas and processes of programming.
There is, however, one difference when using software in this way - the amount of time and effort required to fully implement, test and audition a function or process. This is especially the case where the function is algorithmically quite simple. So, for example, my implementation of the set of pitches using the wondrous function was comparatively much simpler, quicker and more accurate than would have been achieved by 'hand'. However, in terms of detail, this method is considerably less flexible and does not allow for 'insight' within the algorithm, as happens, for instance, in Chronochromie, where the method is most subjectively implemented, as is often the case with Messiaen. Alternatively, there will be many functions whose effects would be very difficult, if not impossible, to ascertain without the use of computers, or where we as composers would not even consider such complexity without computers because of the technicality and detail of the work involved. This is the case with pSY itself, both intellectually and physically (as will be discussed below), and was at the heart of the discovery of fractals and much chaos theory, where the patterns resulting from such algorithms were initially thought to be symptomatic of errors in the programming! Similarly, but in reverse, few electroacoustic composers would consider undertaking the effort involved in recreating the complex details involved in the creation of 'live' acoustic sound without the use of some algorithm - the volume of such detail is so great.
Although I have already considered the importance of the relationship between the performer and his/her instrument's structure or interface, it is, I think, worth considering this also in terms of the composer. As a teacher of acoustic composition as well as electroacoustics, it has become increasingly clear to me that a lack of understanding of the theoretical and practical usage of acoustic instruments - typically illustrated by a belief that one first composes and then orchestrates - is one of the problems for students of composition to overcome, not least so that composers feel comfortable using the full range of resources generally available to them, rather than simply sticking to the few instruments, instrumental groups or instrumental techniques with which they feel comfortable. In terms of 'standard' acoustic instruments, even at a fairly extended level, this means getting acquainted with a limited number of fairly simple interfaces. A clarinet, for instance, is, on one musical level, a fairly standard piece of equipment: you can typify it by stating that its range is such and such, that it sounds like this, that here are some examples of what it can do, that this is its typical character, this a different use, and so on. In reality, things are a little more complex than that, but I think it can be argued that most of the 'accepted' acoustic instruments are accepted precisely because the standard orchestra is perceived as a successful and expressive collection of instruments which has the benefit of a tradition and a large repertoire, not to mention an equivalently large base of performers, by which these individual instruments can be 'justified'. Any attempt to invent a genuinely new instrument not based on an existing design is difficult, as most of the basic methods of acoustic sound production are already used by existing instruments.
An acoustic composer, therefore, has substantial existing resources, supported by a substantial and meaningful heritage, with which to express him or herself - and I mentioned above the huge and generally unconsidered resource that is the individual performer, who will, in their inimitably and predictably unpredictable way, interpret our notes and grant them an individuality that is currently impossible to imitate.
What has all this to do with pSY and/or the hardware and software of music technology? I have principally been thinking of the use of tape or 'press play' music, where both the composition and, if such things can be separated here, the interpretation occur earlier, and the results of these (arguably rather solipsistic) processes are finally recorded on tape. However, there are many examples, and the number is increasing all the time, where composers (and indeed performers) are using live electronics as a part of their compositions. (Bearing in mind the inconvenience and still the general unpredictability of this, it is worth speculating that its increasing popularity is surely an indication of a basic dissatisfaction with the performance element of 'press play' music.)
One of the principal observations to be made of such occasions is a very pragmatic one - the music technologist must arrive at the venue hours in advance to install their equipment: mixing desks, microphones, amplifiers, synths, various black boxes, loudspeakers, increasingly one or more computers and, inevitably, yard after yard of cabling. All this needs checking, taping and marking, and ultimately taking down again and returning whence it came. Of course, some of this fuss can be reduced if the venue has certain features built in, although at the moment there are often so many difficulties in maintaining compatibility between different groups of equipment that it is usually quicker and more convenient simply to bring one's own set-up anyway (see the earlier passage regarding 'ownership' of technology, p2-3!).
Compare these elaborate and time-consuming activities to those of the acoustic musician, who may arrive at the last minute, casually unpack their instrument(s) and begin performing almost immediately. Of course, with more performers the situation becomes more complex, but just thinking of the practical complexity of arranging for the same number of performers each using a synthesiser and requiring lines, power-points, etc., sends shivers of fear down my spine - chairs and music stands seem simple in comparison. Of course, the reply will come that this is surely as it should be - the violinist, for example, can only make violin sounds, and look at how much more we can do with all this technology! On the surface this may seem a valid argument, but is it really true? Let me use a simple example - compare our violinist to a performer 'armed' with a single modern synthesiser (plus appropriate outboard equipment). It doesn't really matter which one, but for argument's sake let me take the SY, as it is complex and potentially quite expressive, and I am intimately acquainted with it. Without going into too much detail, the violinist has a number of resources at his/her disposal:
Our music technologist, meanwhile (I'll assume he or she has set up and everything is working well), will have the following resources:
If we examine each of these lists it is quite clear that the technologist has some clear advantages - specifically in terms of the range of sounds he/she is able to produce. However, the violinist has other clear benefits, most especially, and I would suggest most overlooked, the repertoire developed over many years and the many years of experience on the one instrument. These two factors come together to produce the last - not just a mentality, but a mind specifically geared to this one task. The violinist's factors 1-3 pose a chicken-and-egg question - does the repertoire exist because the violin is an effective instrument, or vice versa? This may be an unanswerable question, but we can see that this situation does not, at least yet, exist for the synthesiser. Of course the synthesiser is much 'younger', but even in cases where synthesisers have existed for some time there is no substantial repertoire in existence or in prospect. This, I would suggest, is because of the very nature of the synthesiser and its interface. I mentioned in my introduction the effects that different implementations of different forms of synthesis, the effects of commercialism, competition, obsolescence, etc., have on the general level of acceptance by performers and composers of various forms of music technology. Are these differences real, or merely differences in development - will the area settle down over the years, or is it a fundamental part of technological development that such equipment will be constantly changing and redeveloping? If the latter is true, is this a good or a bad thing? If we were to judge it a good thing, does this mean that standard acoustic instruments are stagnating - becoming more and more antique-like? Does this matter in terms of performance? **********Cf authors suggesting that 'classical music' is itself dying because of the effects of cultural stagnation/audience apathy.***********
Setting these questions aside for a moment, what of the synthesiser's clear advantage in terms of the range of producible sounds? The violin has certainly been extended in the techniques that may be used and the sounds obtainable this century, but even with the most skilled performer it surely can't match the vast variety to which the synthesiser has access. And in addition the synthesiser has all these controllers and buttons and sliders and things with which to modify those existing sounds if needs be!
Here we face an aesthetic difficulty that possibly lies at the heart of much of this discussion. It is certainly true that in some respects the violin's sound is considerably more restricted, principally in the sense that it has only one fundamental method of creating sound - in this case the resonance of a wooden box by a vibrating string. But what a sound it can be! In the hands of a skilled performer the sound of the instrument can fill a room with subtle and ever-changing colour! In the same way, the violinist has a limited repertoire of 'controllers' - most notably his/her hands (or ten fingers, of which only a few have a direct effect on the string) and a bow. And each of these controllers only controls interactions with up to four strings. In terms of the large number of parameters available to the SY this is quite pathetic! And yet there are few people who would seriously consider comparing the SY to the violin in terms of the quality of the instrument and its sound! What, indeed, is the reason for the existence of synthesisers? Are they designed to replace acoustic instruments or to supplement them? In the commercial music market their general role tends to be to service needs where sound quality or type is less important than the 'zing' of a fashionable effect, and where the very survival of the manufacturers depends on selling units to people reliant on these fashions. I do not feel these points substantially alter the musical argument. The violin, when in the control of a skilled performer, has few (but quality) parameters under the control of few (but extremely subtle, flexible and skilled) controllers. If we take a broad view of the performer, we might say that it is under the control of just one controller - the performer him/herself!
Even if we argue that the synthesiser in general has not had enough time to establish the (methods necessary to establish the) repertoire necessary to enable performers to gain the performance experience that I would argue is necessary to produce truly interesting and expressive performances, I would suggest that the very interface of the synthesiser is neither simple nor direct enough to enable this to happen. Nor does there seem to be any likelihood that this will happen in the near (or even distant) future, as the principal motivation for manufacturers at present tends to be the retention of the complex hierarchic structure typical of virtually all synthesisers *********Compare Csound*************. If a synthesiser (and this applies to software as well as hardware synthesisers) requires all these detailed controllable parameters in order to create interesting sounds, it seems inevitable that the sort of subtlety and imagination required to create them could not be produced in real time by a human performer. As an example of this: I have estimated that, running on my Pentium II 450MHz machine, The Copenhagen Interpretation sends on average between 18000 and 19000 MIDI messages during its fifteen or so minutes' duration. This is an average of about 20 messages per second, and includes, at the piece's peak, three minutes during which an average of 30 MIDI events per second is sent (currently the only real limitations on this are the physical capabilities of MIDI technology itself). Although many of these events are general MIDI (note on, pitch, velocity, etc.), which would not be impossible to produce live (although it would be extremely difficult to produce in detail), many are also system exclusive and alter the construction of the sound as described above. In the above example, this editing is restricted to about forty parameters.
Bearing in mind that to alter just one of these would require navigation via the SY's buttons (I have just performed a required action, and it takes four or five presses of up to three different buttons just to reach the relevant parameter, without even attempting to change the value itself!), this sort of thing would clearly be impossible in real time; even if one had thirty or forty sliders, wheels or joysticks at one's disposal, one would need at least five or so hands to stand any chance of dealing with them. As I say, you could argue that such heavy use of MIDI messages is itself unnecessary and excessive, but, to me at least, this sort of live subtlety is necessary if the instrument is to have any real chance of competing with acoustic instruments.
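The message rates quoted above can be checked with simple arithmetic. This short Python sketch takes only the figures stated in the text (18000-19000 messages over roughly fifteen minutes) and derives the per-second averages:

```python
# Totals and duration as estimated in the text:
low_total, high_total = 18000, 19000   # MIDI messages over the whole piece
duration_s = 15 * 60                   # roughly fifteen minutes, in seconds

# Average message rates implied by those figures:
low_rate = low_total / duration_s      # messages per second
high_rate = high_total / duration_s

print(round(low_rate, 1), round(high_rate, 1))  # → 20.0 21.1
```

Both figures sit around the 'about 20 messages per second' quoted above; the three-minute peak of 30 events per second is, by comparison, half as dense again.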