These images were not created while I was drinking. Rather, they were created using an algorithm that combines several random walks, which are also known as drunk walks.
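To make the idea concrete, here's a minimal sketch of one way to combine several random walks into a single image. The canvas size, step lengths, and colors are my assumptions for the example, not the settings behind the images above.

```python
import random
from PIL import Image, ImageDraw

WIDTH, HEIGHT, STEPS, WALKERS = 800, 800, 5000, 4

image = Image.new("RGB", (WIDTH, HEIGHT), "white")
draw = ImageDraw.Draw(image)

for _ in range(WALKERS):
    # each walker starts at the center with its own random color
    x, y = WIDTH // 2, HEIGHT // 2
    color = tuple(random.randint(0, 255) for _ in range(3))
    for _ in range(STEPS):
        # stagger a few pixels in a random direction, staying on the canvas
        x2 = min(max(x + random.randint(-3, 3), 0), WIDTH - 1)
        y2 = min(max(y + random.randint(-3, 3), 0), HEIGHT - 1)
        draw.line((x, y, x2, y2), fill=color)
        x, y = x2, y2

image.save("drunk_walks.png")
```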
In the past few days I’ve completed several programs that compose rather nice notated music using cellular automata. Yesterday I posted seven solos generated by cellular automata. Today I am following up with two duets. Like the solos, these pieces were generated using elementary cellular automata.
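For readers who haven't played with them, here's a hedged sketch of how an elementary cellular automaton can yield pitch material. Rule 30 and the live-cell-count pitch mapping are illustrative choices on my part, not necessarily the rules or mapping used in these pieces.

```python
RULE = 30     # illustrative rule choice
WIDTH = 16
STEPS = 32

# decode the rule number into its eight neighborhood outcomes
rule_bits = [(RULE >> i) & 1 for i in range(8)]
row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches

for _ in range(STEPS):
    # map each generation to a pitch, e.g. by counting live cells
    pitch = C_MAJOR[sum(row) % len(C_MAJOR)]
    print(pitch)
    # advance the automaton one generation (wrapping at the edges)
    row = [
        rule_bits[(row[(i - 1) % WIDTH] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH]]
        for i in range(WIDTH)
    ]
```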
All of these pieces look rather naked. In the past I’ve added tempo, dynamics, and articulations to algorithmic pieces where the computer only generated pitches and durations. Lately I feel like it’s best to present the performer with exactly what was generated, and leave the rest up to the performer. So these pieces are a bit more like sketches, in the sense that the performer will fill out some of the details.
I’m pleased to announce the release of Disconnected, an album of algorithmic sound collages generated by pulling sounds from the web.
I prefer to call this album semi-algorithmic because some of the music is purely software-generated, while other pieces are a collaboration between the software and myself. Tracks four and six are purely algorithmic, while the other tracks are a mix of software-generated material and more traditionally composed material.
The software used in the sound collage pieces (1, 3, 4, 6) was inspired by Melissa Schilling’s Small World Network Model of Cognitive Insight. Her theory essentially says that moments of cognitive insight, or creativity, occur when a connection is made between ideas that were previously only distantly related. In graph theory, these types of connections are called bridges, and they have the effect of bringing entire neighborhoods of ideas closer together.
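A toy example shows the effect. The graph below is made up for illustration, not the actual sound graph: two tight neighborhoods of ideas, then one bridge edge that pulls every pair of ideas much closer together.

```python
import networkx as nx

g = nx.Graph()
# two tightly connected neighborhoods of "ideas"
g.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])
g.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])

# a single bridge edge connects the neighborhoods
g.add_edge("a1", "b1")

print(nx.average_shortest_path_length(g))  # small, despite two clusters
print(list(nx.bridges(g)))                 # [('a1', 'b1')]
```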
I applied Schilling’s theory to sounds from freesound.org. My software searches for neighborhoods of sounds that are related by aural similarity and stores them in a graph of sounds. These sounds are then connected with more distant sounds via lexical connections from wordnik.com. These lexical connections are bridges, or moments of creativity. This process is detailed in the paper Composing with All Sound Using the FreeSound and Wordnik APIs.
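In pseudocode-ish form, the graph-building step looks something like the sketch below. The three helper functions are hypothetical stand-ins for the Freesound and Wordnik API calls described in the paper, not real client methods.

```python
import networkx as nx

# Hypothetical stand-ins for the Freesound similarity search and the
# Wordnik lexical lookup; in the real system these are web API calls.
def search_similar_sounds(sound_id):
    return [f"{sound_id}-sim{i}" for i in range(3)]

def related_words(tag):
    return [f"{tag}-rel{i}" for i in range(2)]

def sounds_tagged_with(word):
    return [f"{word}-snd{i}" for i in range(2)]

def build_sound_graph(seed, seed_tag):
    g = nx.Graph()
    # neighborhood edges: sounds related by aural similarity
    for s in search_similar_sounds(seed):
        g.add_edge(seed, s, kind="similarity")
    # bridge edges: lexical connections to more distant sounds
    for word in related_words(seed_tag):
        for s in sounds_tagged_with(word):
            g.add_edge(seed, s, kind="lexical_bridge")
    return g

graph = build_sound_graph("ocean-wave-17", "ocean")
```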
Finally, these sound graphs must be activated to generate sound collages. I used a modified boids algorithm to allow a swarm to move over the sound graph. Sounds were triggered whenever the population on a vertex surpassed a threshold.
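Here's a much simplified sketch of that activation step. A real boids swarm also has alignment, cohesion, and separation rules; here each agent just hops to a random neighbor, and the four-vertex graph is a toy stand-in for the sound graph.

```python
import random
from collections import Counter

graph = {
    "rain":    ["thunder", "wind"],
    "thunder": ["rain", "drum"],
    "wind":    ["rain", "drum"],
    "drum":    ["thunder", "wind"],
}
THRESHOLD = 4
agents = [random.choice(list(graph)) for _ in range(10)]

for step in range(20):
    # each agent moves to a random neighboring vertex
    agents = [random.choice(graph[v]) for v in agents]
    # trigger a vertex's sound when its population crosses the threshold
    for vertex, population in Counter(agents).items():
        if population >= THRESHOLD:
            print(f"step {step}: trigger sound at {vertex}")
```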
Music composed by Evan X. Merz. Sculpture and Stage by Sudhu Tewari. Movement by Nuria Bowart and Shira Yaziv.
A multimedia piece generated by pushing the same buffer to both the speakers and the screen (as a line drawing). The live generative version is slightly better than the YouTube render below.
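A minimal sketch of the same-buffer idea: generate one audio buffer, send it to the speakers, and draw the very same samples as a line. sounddevice and matplotlib are my tool choices for the example; the piece itself may be built on entirely different software.

```python
import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt

SR = 44100
t = np.linspace(0, 2, 2 * SR, endpoint=False)
# a slowly beating sine pair, just to have something to hear and see
buffer = 0.3 * (np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 221.5 * t))

sd.play(buffer, SR)        # the same buffer goes to the speakers...
plt.plot(buffer[:2000])    # ...and to the screen, as a line drawing
plt.axis("off")
plt.show()
sd.wait()
```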