Computer Technology and Musical Expression

Version 2, November 2000


Introduction

Music has both aesthetic and technical aspects.

In order to express oneself freely in music one has to learn a variety of technical skills to the point where expression is uninhibited.

This balance between technique and expression is a fundamental part of any musical performance.

In this light, is there a difference between a performance on a standard instrument and a 'performance' on a computer or synthesiser? Is there a difference between a live performance and one assembled on tape and then played back 'live'? If a composer develops a computer programme for composing music, who is doing the composing? Can there ever be such a thing as a 'standard' musical instrument that relies on technology - or does one - the computer - currently exist? How does a piano, for instance, compare with a computer as a musical instrument?

Answering these questions fully will involve many disciplines - psychology, artificial intelligence, virtual reality, neurology and others - and will take many years of cross-disciplinary study to reach even a partial resolution of some of them.

Still, I am going to attempt to answer them, or at least to confirm that they are questions that need answering and are worth answering.

1. What, if any, real differences are there between the 'old' and the 'new' technologies?

Many take it for granted that an electronic musical instrument is 'the same' as a traditional one. People may now be 'taught' the electronic keyboard as opposed to the piano. During the last hundred years all sorts of 'new' instruments have emerged - sometimes due to the impact of MIDI, but often attempting to widen the expressive and sonic range of more traditional counterparts. As yet none of these has had any significant effect on the 'traditional' range, except perhaps in popular music. Even in this area, while there are many users who idolise certain manufacturers, configurations and hardware types, there is no single set of standards, except very broadly: the electric guitar, omnipresent in many popular cultures, the non-technological drum-kit and the synthesiser.

If we want to use a synthesiser, how do we specify it? The make, model and settings? Do we insist on these precisely, or do what some composers have done and simply say 'synth' - leaving the details to the performer (would the epithet 'strings' or 'electric piano' help?). In most popular cultures, even this limited definition is missing.

I am defining a 'standard' instrument as one - like a flute or an oboe - that uses physical processes manipulated by a human being to create an audible result. The usual physical processes involve the periodic excitation of strings, membranes, solids or columns of air.

I am defining an instrument involving a 'new' technology as one that uses electricity to create, amplify or otherwise modify a sound.

Most instruments involving 'new technology' could be divided into the following groups:

  1. Instruments which emulate 'traditional' instruments in the manner of their sound production (and may include various methods of electroacoustic alteration). In most cases this restricts to some extent the nature and variety of the sound created. It would include the electric guitar, where many of the methods employed to alter the sound exist in order to overcome the restrictions inherent in the guitar's own interface (it's a fudge).
  2. Instruments which use analogue or digital synthesis to create/express one central sound which cannot be radically altered. Such instruments are usually restricted and are unusual nowadays, but would include the Theremin, the Ondes Martenot and a number of 'novelty' instruments.
  3. Instruments which use analogue electronics to create/emulate sounds.
  4. Instruments which use digital methods to create/emulate sounds.
  5. 'Instruments' which use digital/analogue methods to operate yet are not designed specifically for musical ends.

The latter two groups are effectively computers.

The organ perhaps comes closest to the synthesiser amongst all 'standard' acoustic instruments. It is similar in the complex and mechanical way in which it produces sound. The details of the construction of any particular organ are dependent on the location and maker of the instrument and, as with the synthesiser, there are few absolute specifications. Although the organ is quite capable of a wide range of musical expression, it is, somehow, 'different'. John Deathridge, during a recent radio programme, said:

... the repertoire for the organ is very limited - it's very large but the amount of good music written for the organ is not very great.

The organist Gillian Weir made this hardly robust defence:

...there is a great deal of music written for the organ, but the problem in the twentieth century is that the music arose from many different traditions and ... countries…and to try and play it all on one organ or one kind of organ is very difficult. The music is so much married to the sound of the particular organ…

This is fairly typical of much discussion about the organ, even amongst organists. It seems to reflect an acceptance that there is something different about the instrument.

Similarly, the acoustic organ itself has never become a common part of the concert music environment, perhaps because of this lack of standardisation. Even when a composer uses the organ within a concert environment, they most often require one of a small selection of 'typical' organ sounds. Details are usually and understandably ignored.

All musical instruments, apart from the voice, are technological. In one sense the complex mechanical processes involved in operating a harpsichord or a bassoon are entirely equivalent to a modern computer, although there is a difference in scale. However, there is also, potentially, another difference that the new technologies allow that I think does differentiate these two types of instrument.

2. When is a standard instrument not a standard instrument?

What defines a violin or a piano?

Over the years most standard instruments have changed - pianofortes were developed from fortepianos, harpsichords, claviers. Valves were developed for trumpets, complex machinery for the management of anomalous tones on wind instruments.

This century, instead of changing the structure of the instruments themselves (although there have been attempts at doing this), temporary modifications have been made to the structure or to methods of performance.

Pianos may be prepared, the body of the instrument may be struck, the strings plucked, muffled, beaten, scraped.

Wind instruments may be muted with hats and cloths. Performers can hum and blow into them, strike or only partially compress keys or valves.

On stringed instruments we may bow behind the bridge or on the neck, knock or bow the body, use mutes and harmonics, or completely retune the instrument.

We can play instruments outdoors, in another room, broadcast the results from helicopters, play them over walkie-talkies and the internet. With the introduction of commonly available electronics we can amplify them, chorus them, distort them, modify them in an infinite number of ways.

But with all of these modifications, are we actually changing the instrument? How many of us would deny that a Cage prepared piano piece was for 'piano' - even if prepared? Similarly, many modern pieces making use of the above effects, no matter how alien the sound world created is from the 'original intention', are still written for 'standard' instruments.

[Ironically, we can change the instrument - by recording a performance on a 'standard' instrument and then manipulating it electronically, we can create electroacoustic music.]

In a similar, but more subtle way, as young composers we tend to make the mistake of assuming that all instruments have one particular sound, in spite of the fact that we can quite clearly hear that a clarinet's low notes are radically different in almost every respect from the same instrument's higher notes, and that this disparity is even greater in the case of many other instruments.

All these arguments show that our understanding of the concept of a 'standard' instrument is more complex than we might normally assume. In fact, they would tend to imply that a 'standard instrument' is not simply a structure for creating a particular type of sound, but a body of data that includes performance techniques, an infinite range of sounds based on and limited by the physical structure of the instruments (including any possible additions), the performance situation and the ability and imagination of the performer. In this sense, the question remains unanswered - how far can you disable, modify or in any other way tinker with an acoustic instrument before it ceases to be that instrument?

3. Is there a fundamental difference between a performance on a standard instrument and a 'performance' on a computer or synthesiser? How does a piano, for instance, compare with a computer as a musical instrument?

What happens when someone learns to play a musical instrument - or for that matter, learns to manipulate any arbitrarily complex physical object in order to achieve an arbitrarily complex result?

Someone learning to play a musical instrument must undertake a series of activities each of which can be assigned to one of the two categories mentioned above or both - practical or technical, and aesthetic or expressive.

We might assign to the former anything which pertains specifically to physical ability - the use of exercises, scales and arpeggios to enable manual dexterity on the piano, for instance. The musical material involved may be of little or no aesthetic value. Instead, there is an acceptance of the need for physical fluency if the subject is to be able to make full use of their aesthetic abilities in performance.

How much interaction is there between the 'mind' and the physical processes described above? Our understanding of the nature of this interaction is in no way clear at present. There is evidence that repetitive 'physical' processes physically alter the state of the brain, making any clear distinction between physical practice and mental 'ability' difficult to define.

Apparently [the cerebellum] is responsible for precise coordination and control of the body - its timing, balance, and delicacy of movement. Imagine the flowing artistry of a dancer ... and the sure movements of a painter's or musician's hands.... Without the cerebellum, such precision would not be possible, and all movement would become fumbling and clumsy. It seems that, when one is learning a new skill, be it walking or driving a car, initially one must think through each action in detail, and the cerebrum is in control; but when the skill has been mastered ... it is the cerebellum that takes over. Moreover, it is a familiar experience that if one thinks about one's actions in a skill that has been so mastered, then one's easy control may be temporarily lost. Thinking about it seems to involve the reintroduction of cerebral control and, although a consequent flexibility of activity is thereby introduced, the flowing and precise cerebellar action is lost.

Penrose, The Emperor's New Mind, 1989, OUP, p. 490

There is also confusion as to the nature of a 'conscious act'. Experiments have shown that brain activity occurs some time before we actually make a physical action, even if we think we have only just decided to make it consciously. There would appear to be a delay between our brain's physical activity and our consciousness of it, a delay entirely at odds with our own intuition. For instance, we do not experience delays when we speak to each other, even though our conversation would appear to be a conscious, self-controlled act (Libet et al., 1979; cited in Penrose, ibid., p. 568).

Neither is there felt to be a need for a particularly deep understanding of the physical aspect of the instrument itself by the novice performer. There may be a non-technical understanding of the construction of the instrument, but understanding of the relationship between the construction and the resulting sound is often considered unnecessary, or even damaging.

And what about the understanding of the musical text from which a performer is performing? While it is considered valuable for a performer to understand what is happening musically in a piece, what, if any, are the real benefits or advantages over a performer who understands the notation, even the overriding aesthetic involved, but not the detailed musical syntax? Can someone, or something, be taught precisely which notes to play, in which order, with what force, etc., without an understanding of what they are doing musically? Would they or it pass a musical Turing test?

One of the features of any standard acoustic instrument is that there is a very limited number of physical parameters to control. With a piano, one has the keys (usually 88 of them), two or three foot pedals and potentially some other aspects of the piano's body to control. In the case of a stringed instrument, things are more complex - one has a bow to control as well as the strings and body of the instrument itself. A trumpet has a mouthpiece, three valves and a number of tuning slides for performing minor tuning operations. Of course, we also control our own bodies while controlling these parameters; are these physical operations a part of the instrument?

By any account, there is not an infinite number of parameters to control.

In their development over time, standard instruments have become what they are in order to optimise the balance between ease of use on the one hand and flexibility and depth of expression on the other. However, the range of expression is not limited by this limited range of controllable parameters - there is every chance that it is this very weighting in favour of a small number of fixed parameters which gives the standard musical instrument its potential for depth of expression.

There is evidence that, as mentioned above, for conscious acts with which we are unfamiliar there is a paradoxical delay between brain activity and our perception of action. It is quite clear that a skilful instrumentalist does not need to 'think' while a conscious decision is made to play a particular note. It is equally clear to anyone who has taught beginners that an unskilled performer with the knowledge to distinguish the correct note often needs such a delay. One might suspect, therefore, that at some point during their development a talented instrumentalist achieves the ability to express themselves freely and with conscious control, because many of the more basic processes are being dealt with 'automatically'. As mentioned above, there is evidence that continued and repetitive actions can have a physical effect on the structure of certain parts of the brain, and/or the location of certain information within it.

One of the benefits of electronic instruments is their very flexibility in terms of sound creation - the lack of widespread 'custom' electronic instruments based on any design other than the keyboard or guitar seems to imply that amongst the general population this flexibility is very important - presumably one reason why the Ondes Martenot, the Theremin and the Mellotron are not common musical instruments. In other words, there is a requirement for a standard and well understood interface in order to control an otherwise highly flexible and non-physical sound producing machine, even if those interfaces are themselves irrelevant or even misleading in terms of the actual sounds produced. Students commonly assume that the electronic instrument somehow 'automatically' produces sounds divided into tones and semitones, whereas, just as with many standard instruments such as the violin, this is actually far from the case.
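To make this last point concrete, the following sketch (in Python, purely for illustration; the function names and the nineteen-division tuning are my own examples, not features of any particular instrument) shows that the mapping from a key number to a pitch is a design decision made in software, not something an electronic keyboard does 'automatically':

    # Illustrative only: both functions map the same key numbers to pitches,
    # one in conventional twelve-note equal temperament, the other in a
    # hypothetical nineteen-division tuning. The instrument 'knows' nothing
    # about tones and semitones until a mapping like this is chosen.

    def twelve_tone_hz(key: int, ref_key: int = 69, ref_hz: float = 440.0) -> float:
        """Twelve equal divisions of the octave (the familiar semitone grid)."""
        return ref_hz * 2 ** ((key - ref_key) / 12)

    def nineteen_tone_hz(key: int, ref_key: int = 69, ref_hz: float = 440.0) -> float:
        """The same keyboard mapped onto nineteen equal divisions of the octave."""
        return ref_hz * 2 ** ((key - ref_key) / 19)

    for key in (60, 61, 62):  # three adjacent keys give different pitches in each system
        print(key, round(twelve_tone_hz(key), 2), round(nineteen_tone_hz(key), 2))

The detail is unimportant; the point is that the familiar grid of tones and semitones is simply one of many possible mappings that the designer or programmer happens to have chosen.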

However, an electronic instrument is not like a violin, in that it is not physically connected to a single configuration of sound producing materials. Of course it is possible to create a single configuration - this is the 'standard' way of using it. But the idea that we might 'make' this configuration and then render the instrument incapable of any further change would appear to contradict the point. Is it not possible, however, that it is this very unconfigurability that allows a performer on a standard instrument to develop the unconscious physical and mental abilities to truly 'perform' - is it the fact that such instruments have such a limited set of controllable parameters that allows the performer to control those parameters with depth, detail and speed (and so without self-consciousness or delay)?

Can a computer be a musical instrument?

Can a computer be a musical instrument? I would imagine that many would answer that it could - just as it can 'be' a word processor, a communications device, a scientific analytical tool and so on. However, as has been mentioned, a digital electronic synthesiser is effectively a purpose-built computer; a computer, though, is built for the purpose of being multi-purpose. Just as the main point of a synthesiser is that it is flexible in certain ways, so it is with a computer. The very flexibility that is not just the computer's advantage but its very point prevents it from being a 'standard' musical instrument - and one could argue that the synthesiser has the same problem.

How serious is this problem really? It is certainly the case that people are perfectly able to become extremely fluent and expert at programming or operating computers, and it is usual for those people to specialise in certain fields. Would this not be the equivalent of learning a musical instrument? If one knows one particular programme, one particular platform intimately, is this not the same thing?

What about new controls for computers? Hyper-instruments, data-gloves, sound-beams and joysticks? They enable greater manipulation of certain elements of the data, but they are not tied to the physical source of the sound.

My hypothesis is that computers cannot currently express music in the same way as standard instruments and, more importantly, that in principle they never will. It is based on the following proposition:

A standard instrument is limited in terms of its number of controllable parameters, but infinite in the performer's physical and mental ability to control them. An electronic instrument is (in principle) unlimited in terms of its number of controllable parameters, and because of the implications of this, it is limited by the performer/programmer's physical and mental ability to control them.

Because a computer is by definition non-standard, due to its programmability, it is forever within the performer's or composer's gift to alter the structure of the instrument he or she is using. This makes all programmable machines profoundly different from standard acoustic musical instruments. By definition, the 'instrument' is a 'subset' of the machine, in a way that is simply not the case for an acoustic, physical instrument.
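As a minimal sketch of this claim (in Python, with names invented purely for illustration), one might think of the software 'instrument' as nothing more than a configuration held inside the machine, which the performer or composer may redefine at any moment:

    # Hypothetical sketch: the 'instrument' is only a subset of the machine -
    # a bundle of settings that the machine places no obstacle to changing.
    from dataclasses import dataclass, field

    @dataclass
    class SoftInstrument:
        waveform: str = "sine"                      # the sound source is itself a setting
        controls: dict = field(default_factory=dict)

        def reconfigure(self, **changes):
            """The performer/composer may redefine the instrument at will."""
            self.controls.update(changes)

    # A piano-like configuration today...
    instrument = SoftInstrument(controls={"keys": 88, "pedals": 3})
    # ...and something quite different tomorrow; the machine does not object.
    instrument.reconfigure(keys=1000, pedals=0, pitch_bend_range=24.0)

Nothing in the machine fixes the configuration in the way that wood, strings and a soundboard fix a piano; that is precisely the difference being claimed here.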

The Organ

These ideas put the unusual status of the organ, mentioned above, into a new light. It has a foot in each camp.

It can, in some of its more impressive manifestations, be seen as a first, pre-electronic version of the synthesiser. At the same time, even at its most complex, because of its non-electronic, mechanical nature, and versatile though it is, the number of parameters under the control of any particular organist is still quite limited - it clearly in no way compares to a digital synthesiser or computer - allowing the possibility (in the largest and most elaborate cases probably pushing the dexterity of the organist to the limit) that a musician may learn a particular instrument in some detail. However, it is possible that it is the very differences between individual instruments that are the limiting factor, even accepting that these are differences only of degree.

Virtual Reality

It is far from inconceivable that at some point in the future it will be possible to build a 'virtual' acoustic instrument. Views are divided on the possibilities of this and other applications of virtual reality, but, hypothetically, would a virtual piano, or a virtual flute, whose physical reality we were unable to distinguish from the 'real thing', not disprove this hypothesis?

Perhaps, and there are many who would consider it a certainty. But I would still object that the possibility of 'editing' the instrument would remain. If you wanted, for instance, to compose a piece in which a violin had six strings, in which the performer had an extra hand, or was able to configure a 'virtual' hand into physically impossible formations, what would stop you from doing it? Composers have been pushing the limits of standard instruments for many years and, over time, these extensions have become a part of the 'standard' repertoire.

Unless our virtual performer had similar limitations to those that are imposed by physical reality on acoustic instruments, and unless those limitations were similarly impossible to overcome, I find it impossible to believe that digital electronic instruments can ever be 'performed' in the same way. Nor, I would argue, will it ever be possible to achieve the same subtlety and nuance as is and has been achieved by a highly talented 'acoustic' performer. And if these limitations were somehow put in place, what would be the point of the virtual instrument? If one of its fundamental features were that its fragility and its limited scope for repair, quality or adaptation were 'hardwired' into the virtual reality, it would defy the very point of its existence.

None of this is particularly new - most electroacoustic 'performances', especially those created live, have to be prepared in advance precisely because of these problems. Currently, the 'instrument' must be meticulously pre-programmed and automated processes must be set up - in fact, the 'instrument' itself must be constructed. What is more, due to the very nature of the electroacoustic medium and environment, where technology is continuously changing and knowledge and experience advancing, there is often a clear pressure to ensure that any configuration used previously is not used again (presumably because the nature of the machinery is that it is configurable, and because currently such a large part of the process of composing and performing electroacoustic music involves this configuration).
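A trivial sketch of this situation (again in Python, using only the standard library; the parameters are arbitrary) shows that, before anything is heard at all, the 'instrument' - here no more than a fixed sine-tone generator - has to be constructed and its output rendered in advance:

    # Illustrative only: the 'performance' is wholly determined before playback.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def render_tone(path: str, freq_hz: float = 440.0, seconds: float = 2.0) -> None:
        """Construct a fixed tone and write it to a WAV file ahead of any 'performance'."""
        n_samples = int(SAMPLE_RATE * seconds)
        frames = bytearray()
        for i in range(n_samples):
            value = int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            frames += struct.pack("<h", value)      # 16-bit little-endian sample
        with wave.open(path, "w") as wav:
            wav.setnchannels(1)
            wav.setsampwidth(2)                     # 2 bytes = 16-bit audio
            wav.setframerate(SAMPLE_RATE)
            wav.writeframes(bytes(frames))

    render_tone("tone.wav")  # the result is entirely fixed before it is ever heard

However elaborate the real systems are, the principle is the same: the configuration, and with it much of the musical result, is decided before the 'performance' begins.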

Conclusions

Currently, and for the foreseeable future, music created via any sort of computer, analogue or digital, must be inherently (and in principle) different to music composed for and played by real performers on 'real' instruments. To some, this may well appear obvious, but it is my impression that for many it is not. Until a few years ago I think I would have disagreed. I haven't dealt with a number of this conclusion's ramifications - most importantly from my perspective, those relating to composition for standard instruments, computers or both.

To conclude, then, I would like to list some of these other questions and points, if only to prompt further discussion.

5. Is there a fundamental difference between composing for electroacoustic instruments and composing for standard acoustic instruments?

6. Is there a fundamental difference between a live performance and one assembled on tape and then played back 'live'?

7. What makes an 'interpretation' of a live performance? How different is one interpretation from another - how much difference, and of what nature, is there between the interpretations of one performer and another? Can this be interpreted as simple 'unpredictability'? How might one describe this difference in detail?

8. Can there ever be such a thing as a 'standard' musical instrument that relies on technology - or does one - the computer - currently exist?

9. If a composer develops a computer programme for composing music, who is doing the composing - and of what relevance are different compositions made by the same programme?

10. Human perceptions of pulse and their role in musical organisation and synchronisation

11. Commercial Pressures

12. Customised Instruments

13. The role of complexity within musical interpretation

14. Is there a fundamental difference between music written for live instruments on a computer programme such as Sibelius or Finale, and music written for live instruments by hand?

15. Possible Solutions: New forms of music composing, recording and editing?

Richard Hoadley, April 2000