QuaverSeries is a live coding system designed to explore how knowledge of electro-acoustic instruments can be leveraged in live coding. Its pattern expression is initially inspired by music sequencers: an integer denotes a MIDI note, and an underscore denotes a rest. However, instead of writing code such as "60 _ _ _, _ 60 _ _, _ _ 60 _, _ _ _ 60", QuaverSeries also borrows the pattern-dividing idea from TidalCycles to make pattern writing more algorithmic.
The algorithm for interpreting a pattern is as follows:
Split the pattern into parts by space;
Divide the unit of 1 by the total number of parts to get each part's duration;
Count the elements (notes and rests) in each part;
Divide each part's duration by its number of elements to get each element's duration;
Compute each note's event time within the unit of 1.
Here are some examples, followed by a code sketch of this interpretation:
// "1 1" -> [0.0, 0.5]
// "1 _1" -> [0.0, 0.75]
// "1 _ _1 _" -> [0.0, 0.625]
// "1 _ 1" -> [0.0, 0.6666666666666666]
// "_ 1" -> [0.5]
Compared with playing a music sequencer in its physical form, pattern writing in live coding tends to be more disembodied, since embodied timing plays no part in typing out a pattern.
Can a body-movement pattern be converted to a code pattern? For example, given keyboard-striking time points such as [0.0, 0.6256434] (with the first point set to 0), can the machine interpret them as a pattern?
// [0.0, 0.6256434] -> "1 _ _1 _" -> [0.0, 0.625]
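As a rough illustration of the idea only (not the proposed solution in the repository mentioned below), one naive approach is to snap each onset to a fixed grid and keep it when it falls within a threshold of a grid point; timesToPattern, steps, and threshold are all hypothetical names and parameters:

function timesToPattern(times: number[], steps = 16, threshold = 0.03): string {
  const grid: string[] = Array(steps).fill("_");
  for (const t of times) {
    const step = Math.round(t * steps);
    // keep the onset only if it lies within the threshold of a grid point
    if (step < steps && Math.abs(t - step / steps) <= threshold) {
      grid[step] = "1";
    }
  }
  const partLen = steps / 4;  // group into four space-separated parts (assumes steps divisible by 4)
  const parts: string[] = [];
  for (let i = 0; i < steps; i += partLen) {
    parts.push(grid.slice(i, i + partLen).join(""));
  }
  return parts.join(" ");
}

// timesToPattern([0.0, 0.6256434]) -> "1___ ____ __1_ ____"

This fixed-grid sketch yields the same event times as "1 _ _1 _"; a fuller solution would also search over subdivisions to find the most compact pattern.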
The proposed solution is given in this GitHub repository and will continue to be updated. To implement this embodied pattern-writing idea in a live coding system, the key is to find the right threshold, which requires further user studies of QuaverSeries and this algorithm. Still, even when the threshold is not accurate, the algorithm can create some interesting patterns. Another question is: if this pattern-inputting idea were implemented in other languages such as TidalCycles or SuperCollider, how would the experience differ?