
Controlling pSY


Bearing in mind that I was due to have a finite concert piece completed in a finite time (that is, by June 21st 1999, and hopefully a little earlier) and that this piece was required to run correctly on the computers and SYs I had available (one Pentium II running Windows 98, three Pentiums running Windows 95, two SY99s and two SY77s), I clearly had no time for (although plenty of interest in) this last question. The simple fact was that by the deadline I had to have a complete and reasonably well-behaved programme, whatever the aesthetic consequences. To this end, as was the case with Arpeggiator, the answer lay in restricting the choices available to pSY so that the chances of it misbehaving were reduced to a level acceptable to me. This, as I fully understand, was a bit of a cheat, but the process of defining methods of recognising aesthetically acceptable, let alone pleasing, sounds will, I am sure, be a lengthy one. It was necessary to control the following parameters in order to avoid disaster:

In order to implement these conditions, I devised two new sub-programmes, subtitled pSing and pScore. Unsurprisingly, the first was principally concerned with organising and sending general MIDI messages, the second with storing global settings and timings. In addition, pSY enables you to make sets of parameter selections, which can be saved as *.sel files.

pScore and pSing




Figure 5: pScore and pSing

As can be seen from the above, these have eventually turned out to be rather complex programmes in their own right. I should emphasise that they have developed according to the same principles as Arpeggiator (as described above), in the sense that functions were added as I felt aesthetically necessary. The apparent complexity is a symptom of the range of functions I felt it necessary to be able to control in order to avoid dullness. I will return to the aesthetic implications of the visual programming part of this, as well as the implications for usability, in my conclusions below.


Figure 6: pScore Information

There is not space here to go into a detailed description of the features of these programmes, but a few examples will suffice to give an indication of the process.

Figure 5 shows the pSing Loop, which is started by the pSing button. pSing controls the number of notes, the duration between each note, the note's pitch and velocity, and whether a pitch or velocity pattern is operating. In other words, it is a cut-down version of Arpeggiator. Files of pSing settings can be saved as *.sng files. pSing and pSY together, then, generate a texture - ideally, a sort of three-dimensional one, where the pitches, velocities and timbre together form more or less recognisable patterns. pScore puts these two notions together and adds some more functionality.
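
By way of illustration, the following sketch (in Python, which is not the language of the original programme; the function send_note and the parameter ranges are invented for the example) shows roughly the kind of loop this implies - a pitch and velocity are chosen for each note, the note is sent, and the programme waits for the chosen duration before the next:

    import random
    import time

    def send_note(pitch, velocity, channel=0):
        # Placeholder: in practice this would transmit a MIDI note-on (and a
        # corresponding note-off) to the synthesiser; here it simply prints.
        print("note on ch %d pitch %d vel %d" % (channel, pitch, velocity))

    def psing_loop(num_notes=32, pitch_range=(36, 96), vel_range=(20, 110),
                   dur_range=(0.05, 0.5), pitch_pattern=None):
        # Hypothetical sketch of a pSing-style loop: choose pitches and
        # velocities within set ranges (or follow a stored pitch pattern)
        # and send each note after a chosen inter-note duration.
        for i in range(num_notes):
            if pitch_pattern:
                pitch = pitch_pattern[i % len(pitch_pattern)]
            else:
                pitch = random.randint(*pitch_range)
            velocity = random.randint(*vel_range)
            send_note(pitch, velocity)
            time.sleep(random.uniform(*dur_range))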

Effectively, pScore arranges all this information for real-time performance. Scores are a series of events. An event may be a Voice file (*.syx), a pSing file (*.sng), a selection file (*.sel), a morph file (that is, a second voice file), a MIDI Event or a combination of these. A MIDI Event can include a number of options, for instance sending an All Notes Off message (in reality, 128 note-on messages with velocity 0), or messages setting the pan, microtune or effect settings of the synthesiser.
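
To make this structure a little more concrete, the following sketch (again hypothetical Python; the field names are invented and do not reflect the original programme's internals) shows one way such an event could be represented, together with an All Notes Off realised, as described above, as 128 note-on messages with velocity 0:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ScoreEvent:
        # A pScore-style event: any combination of files to open and MIDI
        # messages to send at a given point in the piece (names illustrative).
        time_secs: float                       # when the event fires, in seconds
        voice_file: Optional[str] = None       # *.syx voice file
        morph_file: Optional[str] = None       # second voice file to morph towards
        sing_file: Optional[str] = None        # *.sng pSing settings
        sel_file: Optional[str] = None         # *.sel parameter selection
        midi_messages: List[bytes] = field(default_factory=list)

    def all_notes_off(channel=0):
        # All Notes Off realised, as described in the text, as 128 note-on
        # messages with velocity 0 (one for each possible MIDI note number).
        return [bytes([0x90 | channel, note, 0]) for note in range(128)]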

pScore, then, simply keeps track of where the piece is in real time and, when it detects an event, sends whatever messages or opens whichever files the event prescribes. Scores may be saved as *.sco files.
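
A minimal sketch of this dispatching behaviour might look as follows (hypothetical Python again, assuming the ScoreEvent objects sketched above and a handle_event routine that opens the files and sends the messages an event prescribes):

    import time

    def play_score(events, handle_event):
        # Keep track of elapsed time and, when an event's time is reached,
        # hand it over to be acted upon. Events are played in time order.
        start = time.monotonic()
        for event in sorted(events, key=lambda e: e.time_secs):
            wait = event.time_secs - (time.monotonic() - start)
            if wait > 0:
                time.sleep(wait)      # idle until the event is due
            handle_event(event)       # open files / send MIDI as prescribed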

Figure 6 shows a typical event. At time 294.49 seconds, pScore opens the specified voice, sing and selection files, and the programme then proceeds according to the pSing Loop (figure 5), itself reset according to the new *.sng file, until it reaches a new event.

To complete the performance, at the end of the score the screens blank themselves.

Limitations and Future Developments


I shall be investigating in a little detail the general problem of performer-less music in concert performance below but, for reasons also given below, one of the bases of pSY, and indeed of most other forms of electroacoustic music, is that it cannot be performed by people. I shall therefore not deal with this sort of limitation here, although I have plans to make the software partially interactive with a live performer.

I deliberately designed the set of programmes I have called pSY in order to overcome certain problems that I perceived in 'press play' electroacoustic music, and at least to investigate bringing into this medium an element of 'performance'. I also wanted to create a concert piece for live technology, where the details of the piece were decided upon by that technology during the performance. In order to achieve this, I approached the programme from a particular direction (indeed, to a great extent the intentions mentioned above were formed as the software ideas developed), and in taking this direction I avoided certain other approaches and directions. While pSY creates MIDI output with relative ease, it is relatively difficult to specify precise events - for example, a melody with precise durations, or indeed a harmonic sequence. This is a limitation of the current version, and I am investigating whether it is possible, or desirable, to introduce such an improvement.

This is a particular limitation associated with the programming and, I think, will be fairly easily overcome. A far more practical limitation is the hardware required to run the programmes. If one of the principles of the programme is that any score should be different, according to applied criteria, each time it is performed, and that indeed the only real performance is a live one, then the fact that a live performance involves so much hardware and preparation time (as is further discussed below, p19) is itself a serious problem. Along similar lines, the fact that the programme uses the SY77 or 99, now obsolete products, is clearly a problem. The fact that the whole set-up, should the necessary equipment be available, is so expensive is yet another limitation.

As an option for the future, it should not be necessary to use such a complex piece of machinery as the SY. If a similar construction were available as a card to fit into a computer, this would clearly help. I have considered the possibility of using a similar idea to write scores for Csound instruments whose construction would reflect that of the SY. However, this could not then occur in real time, although it would negate the need for a synthesiser altogether. As mentioned in the introduction, it is not absolutely necessary to use four computers and synthesisers - this is something of a luxury. It would even be feasible to use one synthesiser and the multi function to include different versions of the 'same' voices, but then the number of element outputs available to the SY would be compromised during some textures (notes would drop out) and, in addition, the MIDI output flow would be affected if that output had to be directed to multiple MIDI channels. As usual with computers, the effects of economy will be felt sooner or later.

Part 2: Associated Musical Ideas


pSY has brought up, for me at least, a number of ideas concerning the aesthetics of performance, composition, and what I consider to be an important link between the two, the musical interface, whether this is the acoustic instrument itself, the somewhat more manufactured interface designed for a synthesiser to enable performance, or the software interface presented to a composer (or indeed a performer) by a software package. I shall consider these in turn.



pSY and Performance


As I mentioned in the introduction, one of the principal reasons why I became intrigued with the direction in which pSY appeared to be developing was a dissatisfaction concerning the finite nature of much electroacoustic music when compared to acoustic notated material. During performances of the latter, an intermediary in the form of a performer takes on a central and yet often underrated and even neglected role. Certainly, the importance of the input provided by the performer only really becomes apparent when it is no longer there. In terms of the programme itself, absolutely no attempt has been made to encode interpretation in any way. As can be seen from the above, the process is really quite the reverse - taking an almost entirely random set of parameters and imposing an order upon them. However, the aesthetic result seems to have a similar effect. How is this? What do we actually expect when we hear a piece of music? How is this related to who and/or what we are going to hear? What difference does it make if we've never heard the piece before or if we know it in depth (or if we wrote it)? Does it make a difference if we know the performer, or if we know the composer's other output? What is the role of the visual part of musical performance? Is a perfectly recorded and reproduced 'mimed' performance different in substance from a 'genuinely' live performance? To what extent do we applaud the performer(s), the piece and the composer at the end of a live performance? Do any of these considerations affect the way we feel about 'press play' varieties of electroacoustic music?

Many of these questions are too complex to deal with here, although I will try to suggest some solutions to some of them below. I would emphasise, however, that many of their roots can be traced to the instrumental interface and some fundamental differences between a performer's relationship with an acoustic instrument and a performer/composer's relationship with technology. Ultimately, I feel, composition and performance come together at this point.

However, I would like briefly to discuss how the ideas introduced above with regard to levels of predictability might apply to performance as well as to the manner in which Arpeggiator and pSY operate.

When we attend an acoustic concert, what do we expect from the performers we are to see and hear? With the levels of fidelity in sound recording and reproduction currently achievable, it would arguably be feasible for the performer to pre-record a concert and then mime live, and yet our response to this would be that we were being cheated somehow. Without going into the psychological and cultural reasons as to why this might be, is it not likely that at least a part of the reason for our negative reaction is that we prefer to accept the risk of a poor performance and balance this against some sort of interaction that is achieved in a 'live' live performance? Or that we accept the same risk in return for some possibly unique experience which will be ours alone and not something that will be repeated in front of many other audiences? If we were to hear a recording in a concert hall, would we applaud so vigorously at its conclusion, and if not, why not? How does this compare with our acceptance of the validity of recordings outside the concert hall?

Presumably we do not expect or want a direct and precise repeat of a given performance - otherwise a recording would be better, not worse. What we do expect, at least to some extent, is the unexpected - a moment of insight that we could not have imagined previously. I presume that we grade performers (and indeed any creative artist) by their ability not only to deliver a competent performance, but to deliver one that is more or less likely to achieve this sense of insight. I would also suggest that this unpredictability is equally unknown to the performer (who otherwise would presumably build it into every performance!). A performer hopes that in each performance something will happen of which they are unaware beforehand. What exactly this is and in what form it should come are presumably a part of the processes mentioned above, and might well involve other performers and the audience itself. By definition, I would suggest, we cannot know what these insights might be before they occur, and they are therefore, by definition, unpredictable, uncertain and unrepeatable.

I would venture to suggest that the phenomena to which these questions seem to point are directly related to the difficulty with some electroacoustic music mentioned in the introduction, and that this difficulty is itself directly related to the processes, whether psychological, cultural, sociological or musical, that occur when we see a live performance. These processes, I would conjecture, involve both the performer and the audience and, at one remove, the composer of the musical event (although the composer, too, I suppose, will also try to build insight into the music - although with no guarantee of success!). I have the strongest doubts as to whether, with current (or any other) technology, these phenomena will be reducible to algorithms of whatever complexity is necessary to generate them artificially on a computer, but I accept that in the future advances may be made which may allow for such generation (I do not think it will be algorithmic in nature). I will, however, be suggesting below that it may be possible to reflect these phenomena more accurately using 'clusters' of algorithms. As can be seen from the above discussions concerning probabilities, the range of possible musical outcomes can increase enormously using relatively straightforward processes, and levels of unpredictability can be achieved which involve quite subtle changes and communications. I would direct those interested in the views put forward concerning the algorithmic or non-algorithmic nature of our minds to Hofstadter and Penrose.