A light-based electronic interface that uses gestures to create improvisational audiovisuals
Loglophone is a light-based electronic interface that allows a performer to use augmented drawing gestures to add improvisational audiovisuals to pre-existing live coding practices.
Loglophone is designed to address a gap in artistic control in my live coding audiovisual practice. When working with tightly structured patterns, I often want to mix in audiovisuals that are loosely structured: less structured than explicit patterns, but more structured than patterns based on pseudo-random or noise functions. I also want these audiovisuals to relate to the global clock cycle asynchronously.
You can think of it like an eight-sided saltshaker.
Making art with Loglophone should be procedurally similar to cooking with salt. Food without salt can be bland and salt without food isn’t really a meal, but food with just the right amount of salt can be really tasty. A bit of salt can perfectly balance an entire dish while enhancing individual flavors. The same is true for Loglophone in an audiovisual performance.
Alternatively, you can think of it like an epiphyte. An epiphyte is a non-parasitic plant that grows on another plant; ferns, air plants, and orchids growing on tree trunks in tropical rainforests are common examples. Epiphytes increase ecological biodiversity without disrupting the larger system. The same is true for Loglophone in an audiovisual performance.
Loglophone is designed to operate independently of or in conjunction with existing live coding environments.
WLAN (Wireless Local Area Network)
Eight LEDs: two LEDs per Raspberry Pi (one for each audio channel L|R)
Photocell audio jack sensors
Audio channels piped to the GPIO pins (PWM) on each Raspberry Pi Zero
A distributed system of “headless” PureData patches for the cluster
A controller PureData patch for your personal machine
Optional: p5.js and openFrameworks (oF) interface sketches
Aside: Loglophone is a work in progress. This iteration is designed to be used by an individual performer. The modular design of the cluster architecture allows for easily scaling the number of data channels (LEDs) or receivers (photocell audio jacks) up or down to fit performative needs. Alternative iterations of Loglophone could be designed for groups in interactive audiovisual installations.
When a photocell audio jack (sensor) is physically placed near a light source (LED), it receives the source's signal frequency. The distance between the photocell sensor and the light source determines the amplitude of the signal.
An LED flashes at the same frequency as a speaker cone would vibrate in a traditional audio system. Depending on the CPU speed of the microcontroller, simple tones or complex sounds, like the human voice, can be transmitted.
A photocell audio jack (sensor) translates the light frequency into an electric signal. That signal can be played through a speaker system (in-ear headphones do not require extra amplification) or piped directly back into subsequent software processes as audio buffers for generating audiovisuals.
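The LED-as-speaker idea can be sketched in a few lines: an audio sample stream becomes a stream of PWM duty cycles, with the waveform biased around half brightness because light intensity cannot be negative. This is an illustrative sketch only, not the actual Pi-side code (which lives in the PureData patches); the function name and the 8 kHz rate are assumptions.

```python
import math

def sample_to_duty(sample):
    """Map a signed audio sample in [-1.0, 1.0] to a PWM duty cycle in [0.0, 1.0].

    An LED cannot go "negative", so the waveform is biased around 50%
    brightness; the photocell side recovers the original AC signal once
    the constant DC offset is filtered out.
    """
    clipped = max(-1.0, min(1.0, sample))
    return (clipped + 1.0) / 2.0

# A 440 Hz test tone at an assumed 8 kHz update rate, expressed as duty cycles.
RATE = 8000
duties = [sample_to_duty(math.sin(2 * math.pi * 440 * n / RATE))
          for n in range(RATE // 100)]
```

Silence maps to a steadily half-lit LED, and louder signals swing the brightness further toward fully off and fully on.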
The Loglophone software system is designed as a platform for performers to write and inject their own custom PureData sub-patches into the interface. The basic input message format uses two parameters: a MIDI note or file name (the data) and an index number selecting the desired algorithm (the processing).
Additionally, languages like Tidal and FoxDot can be configured to send data via OSC directly to Loglophone to quickly control the audio channels (LEDs). The performer can then switch back and forth between keyboard-based structured patterns and gesture-based improvisation.
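To make the message format concrete, here is a minimal sketch that encodes such a control message as a raw OSC 1.0 packet, with no OSC library required. The address pattern `/loglophone`, the port, and the specific argument values are illustrative assumptions, not the interface's documented defaults.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes (OSC 1.0 alignment)."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def loglophone_message(note: int, algorithm: int) -> bytes:
    """Encode an OSC message carrying the two Loglophone parameters:
    a MIDI note (data) and an algorithm index (processing).

    The address pattern "/loglophone" is an assumption for illustration.
    """
    return (osc_pad(b"/loglophone")   # address pattern
            + osc_pad(b",ii")         # type tags: two int32 arguments
            + struct.pack(">ii", note, algorithm))  # big-endian payload

# Route MIDI note 60 through algorithm 2 (values chosen arbitrarily).
packet = loglophone_message(60, 2)
# The packet could then be sent over the WLAN as a UDP datagram, e.g. with
# socket.sendto(packet, ("127.0.0.1", 9000)) -- host and port are assumptions.
```

On the PureData side, vanilla objects such as [netreceive -u] and [oscparse] can decode a packet like this into a plain message for the sub-patches.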
Loglophone is a work in progress. I often think about future gestures. There is a gestural déjà vu shared by drawing, using a pipetman in a chemistry lab, and waving a wand to cast a spell. However, I am also experimenting with methods that use other parts of the body (e.g., the feet) and other types of movement (e.g., dance).