Whenever I read an engineering article or blog, I reflect on my own learning and practice as an engineer; but also, I can't help but make a mental note of how the perspectives might relate to my other world - art music.
Sometimes the parallels are obvious: teleological thinking and the synthesis of multiple cognitive processes, for instance, are at the heart of both music and coding, and allow many people to traverse both industries comfortably.
But once we get into architectural principles, logic gates (if this, then that), and syntactical structure in software, we musicians find ourselves navigating unfamiliar waters. In many ways, they are calmer, more predictable waters, chiefly because we are not burdened with the great rogue element in music – humans. (Of course, code can also be muddied by the humans who type it, but for now I'm talking about theory, not practice.)
Some of these engineering principles appear so self-evident that they seem like universal truths, so it is baffling that they become so problematic when applied as a theoretical framework to music, despite the myriad similarities between the two fields. I've been mulling over one such principle - Separation of Concerns.
Separation of concerns (or SoC – we coders love our acronyms) has its origins in systems theory and problem-solving, and can be applied at every level of software design, engineering and philosophy. In the software world, it refers to the fundamental principle that a system is more testable, adaptable, readable and scalable if it is broken down into simpler, self-contained parts, each of which can be referenced easily from the other parts of the system.
It is quite simple to demonstrate the idea at the surface level. A self-contained component that, say, displays a pink button on the screen can be replicated infinitely in your app or program, without any need to retype the code, as long as the other parts of the program know that 'prettyButton' is the name (or 'variable', for those new to coding) of the component in question. Calling it by that name invokes the full code for 'prettyButton' wherever it is needed, without moving or removing anything from its own little module of code.
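Here's a minimal sketch of the idea in TypeScript. The blog isn't tied to any particular framework, so I'm assuming nothing fancier than a plain browser environment, and treating the 'component' as a simple function that builds a DOM element:

```typescript
// prettyButton lives in one self-contained module. Change
// BUTTON_COLOUR here, and every button in the app changes.
const BUTTON_COLOUR = 'pink';

export function prettyButton(label: string): HTMLButtonElement {
  const button = document.createElement('button');
  button.textContent = label;
  button.style.backgroundColor = BUTTON_COLOUR;
  return button;
}

// Elsewhere in the program, we call it by name – no retyping:
document.body.appendChild(prettyButton('Play'));
document.body.appendChild(prettyButton('Pause'));
```

The point is that the button's appearance is defined exactly once, no matter how many times it appears on screen.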
We can simply and quickly change the colour of all instances of 'prettyButton' to green by changing the code in the component itself, and the change will be reflected across the entire app, without having to find and amend every instance of the button across the codebase. By containing the button in this way, we can also ensure that modifying it will not break the code in any other part of the system, and we can test it in isolation (unit testing), saving time and leaving the rest of the system alone.
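And a hedged sketch of what that isolated test might look like – assuming, purely for illustration, a runner like Vitest with a DOM test environment configured, and that prettyButton is exported from its own module:

```typescript
// A unit test exercises prettyButton alone – no need to boot
// the rest of the app. (Assumes Vitest with a DOM environment.)
import { describe, it, expect } from 'vitest';
import { prettyButton } from './prettyButton'; // hypothetical module path

describe('prettyButton', () => {
  it('renders its label', () => {
    expect(prettyButton('Play').textContent).toBe('Play');
  });

  it('uses the shared colour', () => {
    expect(prettyButton('Play').style.backgroundColor).toBe('pink');
  });
});
```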
It is such a simple and obvious principle, and can be applied at every level and branch of thinking as a coder. If the individual parts work, the whole will too. How could it not be transferable to the orchestral world?
Because music.
On first parse, SoC seems wholly applicable to music. Orchestras are separated into distinct areas of concern - instrument families, compositional layers, solo players, the conductor... In theory, if all of these separate components are functioning well, the system should work the same every time. Even in a piece for solo instrument, which can most easily be regarded as a cohesive whole, we conceive of separate sections, particular corners to be practised in isolation, even individual notes and phrases, which are - as in code - self-contained entities, interconnected via encoded reference. Perfect each moment, and the entirety will be expressed with ease.
In larger ensembles, the conductor acts as the primary scaffold, supported by certain principal players, to maintain the relationships between the separate components. That could mean ensuring that the individual musicians or instrumental families cohere into one auditory mass at a given moment, or that successive decisions in performance contribute to a cohesive interpretation of the score.
But here we hit a metaphorical snag. One of the quirks of humans making music in ensemble (or even a single human making music alone) is that our separate concerns are almost always contingent. There is no single, repeatable, correct interpretation or performance, neither on the individual level nor as an ensemble (in acoustic, or non-mediated, music, at least).
The unique beauty of live music lies in the performer's ability to manipulate the infinite possibilities of time, timbre and attention in a unified and transporting manner. This means that if we are to do our job as we should, every circumstance has a cumulative impact upon every moment that follows – from the myriad interpretative decisions an artist makes in real-time and rehearsal, to momentary incursions into the performative flow (intentional, environmental, accidental).
It seems nigh impossible to try to capture such scope for variability in a computer system (though, no doubt, AI will eventually change that).
I won't turn this into a trite "computers can't steal our jobs" moan. They can, and they already have, in many cases. They can make music more affordable, accessible, and efficient - none of which should be sneered at, given the current arts funding crises and shifting societal priorities. Computation and digitalisation contribute in powerful ways to people's enjoyment and experience of music.
But the slippage - the subtle, tidal nudge of human action and interaction that incrementally shapes the flow of a performance - is the very thing that makes live performance so uniquely valuable, and so valuably unique. It is a salve for the self-contained modularity of modern life. Expensive, inefficient, unpredictable and profoundly moving – irrational - at its best.