Tagged "software"
Sonifying Processing
Sonifying Processing shows students and artists how to bring sound into their Processing programs. It takes a hands-on approach, incorporating examples into every topic that is discussed. Each section of the book explains a sound programming concept then demonstrates it in code. The examples build from simple synthesizers in the first few chapters, to more complex sound-manglers as the book progresses. Each step of the way is examined at a level that is simple enough for new learners, and comfortable for more experienced programmers.
Topics covered include Additive Synthesis, Frequency Modulation, Sampling, Granular Synthesis, Filters, Compression, Input/Output, MIDI, Analysis and everything else an artist may need to bring their Processing sketches to life.
"Sonifying Processing is a great introduction to sound art in Processing. It's a valuable reference for multimedia artists." - Beads Creator Oliver Bown
Sonifying Processing is available as a free PDF ebook, in a Kindle edition, or in print from Amazon.com.
Downloads
Sonifying Processing: The Beads Tutorial
The Beads Library was created by Oliver Bown, and can be downloaded at http://www.beadsproject.net/.
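If you want to hear Beads make sound before opening the book, here is roughly the smallest Processing sketch that does it. This is my own minimal example, not one from the book, and the frequency and gain values are arbitrary.
import beads.*;

AudioContext ac;

void setup() {
  size(300, 300);
  ac = new AudioContext();
  // a WavePlayer generates a basic waveform at a fixed frequency
  WavePlayer sine = new WavePlayer(ac, 220.0f, Buffer.SINE);
  // a Gain scales the signal down so it isn't painfully loud
  Gain g = new Gain(ac, 1, 0.2f);
  g.addInput(sine);
  ac.out.addInput(g);
  ac.start(); // start the audio processing loop
}

void draw() {
  background(0);
}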
Press
Sonifying Processing on Peter Kirn's wonderful Create Digital Music
Swarm Intelligence in Music in The Signal Culture Cookbook
The Signal Culture Group just published The Signal Culture Cookbook. I contributed a chapter titled "The Mapping Problem", which deals with issues surrounding swarm intelligence in music and the arts. Swarm intelligence is a naturally non-human mode of intelligent behavior, so it presents some unique problems when applied to the uniquely human activity of creating art.
The Signal Culture Cookbook is a collection of techniques and creative practices employed by artists working in the field of media arts. Articles include real-time glitch video processing, direct laser animation on film, transforming your drawing into a fake computer, wi-fi mapping, alternative uses for piezo mics, visualizing earthquakes in real time and using swarm algorithms to compose new musical structures. There's even a great, humorous article on how to use offline technology for enhancing your online sentience – and more!
And here's a quote from the introduction to my chapter.
Some composers have explored the music that arises from mathematical functions, such as fractals. Composers such as myself have tried to use computers not just to imitate the human creative process, but also to simulate the possibility of inhuman creativity. This has involved employing models of intelligence and computation that aren't based on cognition, such as cellular automata, genetic algorithms and the topic of this article, swarm intelligence. The most difficult problem with using any of these systems in music is that they aren't inherently musical. In general, they are inherently unrelated to music. To write music using data from an arbitrary process, the composer must find a way of translating the non-musical data into musical data. The problem of mapping a process from one domain to work in an entirely unrelated domain is called the mapping problem. In this article, the problem is mapping from a virtual swarm to music, however, the problem applies in similar ways to algorithmic art in general. Some algorithms may be easily translated into one type of art or music, while other algorithms may require complex math for even basic art to emerge.
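To make the mapping problem concrete, here is a toy example (not the mapping from the chapter): two Processing functions that translate a swarm agent's position into a pitch and a loudness. The pitch range and loudness curve are arbitrary choices, and that arbitrariness is exactly where the mapping problem lives.
// hypothetical mapping from swarm space to musical data
int positionToMidiPitch(float x, float worldWidth) {
  // map horizontal position onto two octaves above middle C (MIDI 60-84)
  return 60 + (int)map(x, 0, worldWidth, 0, 24);
}

float positionToLoudness(float y, float worldHeight) {
  // agents near the bottom of the world play louder
  return map(y, 0, worldHeight, 0.2, 1.0);
}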
Sound Synthesis in Java
Sound Synthesis in Java introduces sound synthesis concepts using the most widely taught programming language in the world, Java. Using the Beads library, it walks readers through the basics of sound generating programs all the way up through imitations of commercial synthesizers. In eleven chapters the book covers additive synthesis, modulation synthesis, subtractive synthesis, granular synthesis, MIDI keyboard input, rendering to audio files and more. Each chapter includes an explanation of the topic and examples that are as simple as possible so even beginning programmers can follow along. Part two of the book includes six projects that show the reader how to build arpeggiators, imitate an analog synthesizer, and create flowing soundscapes using granular synthesis.
Sound Synthesis in Java is available for free online.
Read Online
Read Sound Synthesis in Java online. The source code is available from links in the text.
SoundCloud, I love you, but you're terrible
I finally started using SoundCloud for a new electro project called Fynix. I casually used it in the past under my own name to share WIP tracks, or just odd stuff that didn't fit on Bandcamp, but I never used it seriously until recently. Now I am using it every day and trying to connect with other artists. I am remixing one track a week, listening to everything on The Upload, and liking/commenting as much as I can.
SoundCloud is the best social network for musicians right now, but it still has a terrible identity crisis. Most of its features seem to be aimed at listeners, or at nobody in particular.
So in this post, I'm going to vent about SoundCloud. It's a good platform, but with a few changes it could be great.
1. I am an artist. Stop treating me like a listener.
Is it really that difficult for you to recognize that I am a musician, and not a listener? I've uploaded 15 tracks. It seems like a pretty simple conditional check to me. So why is my home feed cluttered up with reposts? Why can't I easily find the new tracks by my friends?
This is the core underlying problem with SoundCloud. It has two distinct types of users, and yet it treats all users the same.
2. Your "Who to Follow" recommendations suck. They REALLY suck.
I've basically stopped checking "Who to Follow" even though I want to connect with as many musicians as possible. The recommendations seem arbitrary and just plain stupid.
The main problem is that, as a musician, I want to follow other musicians. I want to follow people who will interact with me, and who will promote my work as much as I promote theirs. Yet, the "Who to Follow" list is full of seemingly random people.
Is this person from the same city as me? No. Do they follow lots of people / will they follow back? No. Are they working in a genre similar to mine? No. Do they like and comment on lots of tracks? No.
So why the heck would I want to follow them?
3. Where are my friends' latest tracks?
This last one is just infuriating. When I log in, I want to see the latest tracks posted by my friends. So I go to my home screen, and it is pure luck if I can find something posted by someone I actually talk to on SoundCloud. It's all reposts. Even if I unfollow all the huge repost accounts, I am stuck looking at reposts by my friends, rather than their new tracks.
Okay, so let's click the dropdown and go to the list of users I am "following". Are they sorted by recent activity? No. They are sorted by the order in which I followed them. To find out if they have new tracks, I must click on them individually and check their profiles. Because that is really practical.
Okay, so maybe there's a playlist of my friends' tracks on the Discover page? Nope. It's all a random collection of garbage.
As far as I can tell, there is no way for me to listen to my friends' recent tracks. This discourages real interactions.
Ultimately, the problem is data and intelligence: SoundCloud has neither. You could blame design for these problems. The website shows a lack of direction, as if committees are pulling the product in lots of different directions. SoundCloud seems to want to focus on listeners, to compete in the same space as Spotify.
But even if that's the case, it should be trivial to see that I don't use the website like a regular listener. I use it like a musician. I want to connect and interact with other musicians.
And this is such a trivial data/analytics problem that I can only think that they aren't led by data at all. Maybe this is just what I see because I lead our data team, but it seems apparent to me that data is either not used, or used poorly in all these features.
For instance, shouldn't the "Who to Follow" list be based on who I have followed in the past? I've followed lots of people who make jazz/electro music, yet no jazz/electro artists are in my "Who to Follow" list. I follow people who like and comment on my tracks, yet I am told to follow people who follow 12 people and have never posted a comment.
The most disappointing thing is that none of this is hard.
4. Oh yeah, and your browser detection sucks.
When I am browsing your site on my tablet, I do not want to use the app. I do not want your very limited mobile site. I just want the regular site (and yes, I know I can get it with a few extra clicks, but it should be the default).
Glitching Images in Processing
This summer I'm going to release a new album of solo electronic music that is heavily influenced by EDM and classic rock. For the past few weeks I've been trying to figure out what to do about the art on the new album.
The album is called "FYNIX Fights Back" so I decided to use images of animals fighting, but I didn't just want to pull up generic images. I wanted to add my own special touch to each image. So I pulled out a tool that I haven't used in years: the Processing programming language.
Processing is great for simple algorithmic art. In this case I wanted to glitch some images interactively, but I wasn't exactly sure what I wanted to do.
So I just started experimenting. I played with colors and shapes and randomness. I like to derive randomness from mouse movement. The more the mouse moves, the more randomness is imparted to whatever the user is doing.
I added image glitching based on mouse speed. The quicker the cursor moves, the more random the generated shapes become, and the more they are offset from their source position in the original image.
Here's the end result.
Here's the source code. Make it even better.
import java.awt.event.KeyEvent;

// relationship between image size and canvas size
float scalingFactor = 2.0;

// image from unsplash.com
String imageFilename = "YOUR_FILE_IN_DATA_FOLDER.jpg";

// image containers
PImage img;
PImage scaledImage;

int minimumVelocity = 40;
int velocityFactorDivisor = 5; // the larger this is, the more vertices you will get
int velocityFactor = 1; // this will be overridden below
float minimumImageSize = 10.0f;
boolean firstDraw = true;
int currentImage = 1;

void settings() {
  // load the source image
  img = loadImage(imageFilename);

  // load the pixel colors into the pixels array
  img.loadPixels();

  // create a canvas that is proportional to the selected image
  size((int)(img.width / scalingFactor), (int)(img.height / scalingFactor));

  // scale the image for the window size
  scaledImage = loadImage(imageFilename);
  scaledImage.resize(width, height);

  // override velocityFactor
  velocityFactor = (int)(width / velocityFactorDivisor);
}

void setup() {
  // disable lines
  noStroke();
}

void keyPressed() {
  if(keyCode == KeyEvent.VK_R) {
    firstDraw = true; // if the user presses R, then reset
  }
}

void draw() {
  if(firstDraw) {
    image(scaledImage, 0, 0);
    firstDraw = false;
  }

  // right click to render to image file
  if(mousePressed && mouseButton == RIGHT) {
    save(imageFilename.replace(".jpg", "") + "_render_" + currentImage + ".tga");
    currentImage++;
  }

  if(mousePressed && mouseButton == LEFT && mouseX >= 0 && mouseX < width && mouseY >= 0 && mouseY < height) {
    int velocityX = minimumVelocity + (3 * velocity(mouseX, pmouseX, width));
    int velocityY = minimumVelocity + (3 * velocity(mouseY, pmouseY, height));

    color c = img.pixels[mousePositionToPixelCoordinate(mouseX, mouseY)];

    int vertexCount = ((3 * velocityFactor) + velocityX + velocityY) / velocityFactor;

    int minimumX = mouseX - (velocityX / 2);
    int maximumX = mouseX + (velocityX / 2);
    int minimumY = mouseY - (velocityY / 2);
    int maximumY = mouseY + (velocityY / 2);

    PGraphics pg = createGraphics(maximumX - minimumX, maximumY - minimumY);
    pg.beginDraw();
    pg.noStroke();
    pg.fill(c);

    // first draw a shape into the buffer
    pg.beginShape();
    for(int i = 0; i < vertexCount; i++) {
      pg.vertex(random(0, pg.width), random(0, pg.height));
    }
    pg.endShape();
    pg.endDraw();
    pg.loadPixels();

    // then copy image pixels into the shape
    // get the upper left coordinate in the source image
    int startingCoordinateInSourceImage = mousePositionToPixelCoordinate(minimumX, minimumY);

    // get the width of the source image
    int sourceImageWidth = (int)(img.width);

    // set the offset from the source image
    int offsetX = velocity(mouseX, pmouseX, width);
    int offsetY = velocity(mouseY, pmouseY, height);

    // flip the offset so it doesn't run off the canvas
    if(mouseX > width / 2) offsetX *= -1;
    if(mouseY > height / 2) offsetY *= -1;

    for(int y = 0; y < pg.height; y++) {
      for(int x = 0; x < pg.width; x++) {
        // calculate the coordinate in the destination buffer
        int newImageY = y * pg.width;
        int newImageX = x;
        int newImageCoordinate = newImageX + newImageY;

        // calculate the corresponding location in the source image
        int sourceImageX = (int)((x + offsetX) * scalingFactor);
        int sourceImageY = (int)(((y + offsetY) * scalingFactor) * sourceImageWidth);
        int sourceImageCoordinate = startingCoordinateInSourceImage + sourceImageX + sourceImageY;

        // ensure the calculated coordinates are within bounds
        if(newImageCoordinate >= 0 && newImageCoordinate < pg.pixels.length
          && sourceImageCoordinate >= 0 && sourceImageCoordinate < img.pixels.length
          && pg.pixels[newImageCoordinate] == c) {
          pg.pixels[newImageCoordinate] = img.pixels[sourceImageCoordinate];
        }
      }
    }

    // write the modified pixels back into the buffer, then draw it to the canvas
    pg.updatePixels();
    image(pg, minimumX, minimumY);
  }
}

// convert the mouse position to a coordinate within the source image
int mousePositionToPixelCoordinate(int mouseX, int mouseY) {
  return (int)(mouseX * scalingFactor) + (int)((mouseY * scalingFactor) * img.width);
}

// mouse speed: effectively abs(x1 - x2), written relative to the canvas size
int velocity(float x1, float x2, float size) {
  int val = (int)(Math.abs((x1 - x2) / size) * size);
  return val;
}
How to render synchronous audio and video in Processing in 2021
About a decade ago I wrote a blog post about rendering synchronous audio and video in Processing. I posted it on my now-defunct blog, computermusicblog.com. Recently, I searched for the same topic, and found that my old post was one of the top hits, but my old blog was gone.
So in this post I want to give searchers an updated guide for rendering synchronous audio and video in Processing.
It's still a headache to render synchronous audio and video in Processing, but with the technique here you should be able to copy my work and create a simple 2-click process that will get you the results you want in under 100 lines of code.
Prerequisites
You must install Processing, Minim, VideoExport, and ffmpeg on your computer. Processing can be installed from processing.org/. Minim and VideoExport are Processing libraries that you can add via Processing menus (Sketch > Import Library > Add Library). You must add ffmpeg to your path. Google how to do that.
The final, crappy prerequisite for this particular tutorial is that you must be working with a pre-rendered wav file. In other words, this will work for generating Processing visuals that are based on an audio file, but not for Processing sketches that synthesize video and audio at the same time.
Overview
Here's what the overall process looks like.
- Run the Processing sketch. Press q to quit and render the video file.
- Run ffmpeg to combine the source audio file with the rendered video.
Source Code
Without further ado, here's the source. This code is a simple audio visualizer that paints the waveform over a background image. Notice the ffmpeg instructions in the long comment at the top.
/*
  This is a basic audio visualizer created using Processing.

  Press q to quit and render the video.

  For more information about Minim, see http://code.compartmental.net/tools/minim/quickstart/
  For more information about VideoExport, see https://timrodenbroeker.de/processing-tutorial-video-export/

  Use ffmpeg to combine the source audio with the rendered video.
  See https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg
  The command will look something like this:
  ffmpeg -i render.mp4 -i data/audio.wav -c:v copy -c:a aac -shortest output.mp4

  I prefer to add ffmpeg to my path (google how to do this), then put the above command
  into a batch file.
*/

// Minim for playing audio files
import ddf.minim.*;

// VideoExport for rendering videos
import com.hamoid.*;

// audio related objects
Minim minim;
AudioPlayer song;
String audioFile = "audio.wav"; // The filename for your music. Must be a 16 bit wav file. Use Audacity to convert.

// image related objects
float scaleFactor = 0.25f; // Multiplied by the image size to set the canvas size. Changing this is how you change the resolution of the sketch.
int middleY = 0; // this will be overridden in setup
PImage background; // the background image
String imageFile = "background.jpg"; // The filename for your background image. The file must be present in the data folder for your sketch.

// video related objects
int frameRate = 24; // This framerate MUST be achievable by your computer. Consider lowering the resolution.
VideoExport videoExport;

public void settings() {
  background = loadImage(imageFile);

  // set the size of the canvas window based on the loaded image
  size((int)(background.width * scaleFactor), (int)(background.height * scaleFactor));
}

void setup() {
  frameRate(frameRate);

  videoExport = new VideoExport(this, "render.mp4");
  videoExport.setFrameRate(frameRate);
  videoExport.startMovie();

  minim = new Minim(this);

  // the second param sets the buffer size to the width of the canvas
  song = minim.loadFile(audioFile, width);

  middleY = height / 2;

  if(song != null) {
    song.play();
  }

  fill(255);
  stroke(255);
  strokeWeight(2);

  // tell Processing to draw images semi-transparent
  tint(255, 255, 255, 80);
}

void draw() {
  image(background, 0, 0, width, height);

  for(int i = 0; i < song.bufferSize() - 1; i++) {
    line(i, middleY + (song.mix.get(i) * middleY), i+1, middleY + (song.mix.get(i+1) * middleY));
  }

  videoExport.saveFrame(); // render a video frame
}

void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie(); // render a silent mp4 video
    exit();
  }
}
Composing with All Sound
I sometimes get asked about my dissertation, so I wanted to write a blog post to explain it. In this post, I describe the work I did for my dissertation, which brought together web APIs and a network model of creativity.
Composing with all sound
When I was in graduate school I was somewhat obsessed with the idea of composing with all sound. When I say "composing with all sound" I don't mean composing with a lot of sounds. I mean literally all sound that exists. Composing with every sound that has ever existed is impossible, of course, but the internet gives us a treasure trove of sound.
So I looked at the largest collections of sound on the internet, and the best one I found in 2011 was freesound.org. Freesound is great for several reasons.
- The library is absolutely massive and constantly growing
- They have a free to use API
- Users can upload tags and descriptions of the sounds
- Freesound analyzes the sounds and gives access to those descriptors via the API
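As a rough illustration, here is what pulling tagged sounds looks like from a Processing sketch using today's Freesound API (v2, which is newer than the API I used in 2011). You would need to request your own API token from Freesound.
// rough sketch: search Freesound for sounds tagged "scream" via the v2 text search
String token = "YOUR_FREESOUND_API_TOKEN";

void setup() {
  String url = "https://freesound.org/apiv2/search/text/?query=scream"
    + "&fields=id,name,tags&token=" + token;
  JSONObject response = loadJSONObject(url);
  JSONArray results = response.getJSONArray("results");
  for (int i = 0; i < results.size(); i++) {
    JSONObject sound = results.getJSONObject(i);
    println(sound.getInt("id") + ": " + sound.getString("name"));
  }
}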
A model of creativity
Once I had a source for sounds, all I needed was a way to connect them. Neural network research was blossoming in 2011, so I tried to find a neural model that I could connect to the data on Freesound. That's when I found Melissa Schilling's network model of cognitive insight.
Schilling's theory essentially sees ideas as networks in the brain. Ideas are connected by various relationships. Ideas can look alike, sound alike, share a similar space, be described by similar words, and so on. In Schilling's model, cognitive insight, or creativity, occurs when two formerly disparate networks of ideas are connected using a bridge.
So to compose with all the sounds on Freesound, all I needed to do was to organize sounds into networks, then find new ways to connect them. But how could I organize sounds into networks, and what would a new connection look like?
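In code, the structure I needed was nothing more exotic than a graph. Here is a minimal sketch of the idea (not my dissertation code): sounds are nodes, edges connect sounds within a tag network, and a creative leap is just a bridge edge between two formerly separate networks.
import java.util.*;

// minimal graph of sound IDs; a "bridge" edge links two formerly disparate networks
HashMap<Integer, ArrayList<Integer>> edges = new HashMap<Integer, ArrayList<Integer>>();

void connect(int soundA, int soundB) {
  if (!edges.containsKey(soundA)) edges.put(soundA, new ArrayList<Integer>());
  if (!edges.containsKey(soundB)) edges.put(soundB, new ArrayList<Integer>());
  edges.get(soundA).add(soundB);
  edges.get(soundB).add(soundA);
}

void setup() {
  connect(101, 102); // two sounds that share the tag "scream"
  connect(201, 202); // two sounds that share the tag "shriek"
  connect(102, 201); // the bridge: an insight linking the two networks
}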
The Wordnik API
I realized that I could make lexical networks of sounds. The tags that users attach to sounds on Freesound give us a way to connect them. For instance, we could find all sounds that have the tag "scream" and form them into one network.
To make creative jumps, I had to bring in a new data source. After all, the sounds that share a tag are already connected.
That's when I incorporated the Wordnik API. Wordnik is an incredibly complete dictionary, thesaurus, and encyclopedia all wrapped into one. And best of all, they expose it through a fast and affordable API.
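For example, fetching related words from a Processing sketch looks roughly like this with the current Wordnik v4 API. You need your own API key from developer.wordnik.com, and the exact parameters may have changed since I did this work.
// rough sketch: ask Wordnik for words related to "scream"
String apiKey = "YOUR_WORDNIK_API_KEY";

void setup() {
  String url = "https://api.wordnik.com/v4/word.json/scream/relatedWords"
    + "?relationshipTypes=synonym,hypernym&limitPerRelationshipType=10&api_key=" + apiKey;
  JSONArray related = loadJSONArray(url);
  for (int i = 0; i < related.size(); i++) {
    JSONObject group = related.getJSONObject(i);
    JSONArray words = group.getJSONArray("words");
    for (int j = 0; j < words.size(); j++) {
      println(group.getString("relationshipType") + ": " + words.getString(j));
    }
  }
}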
Composing with all sound using Freesound, a network model of creativity, and lexical relationships
So the final algorithm looks something like this, although there are many ways to vary it.
- Start with a search term provided by a user
- Build a network of sounds with that search term
- Use Wordnik to find related words
- Build a network around the related words
- Connect the original network to the new one
- Repeat
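In rough Processing code the loop looks something like this. The helper functions are hypothetical stand-ins for the Freesound and Wordnik calls sketched above and for the graph bookkeeping.
// high-level sketch of the composition loop; the helpers are stand-ins
void grow(String seedTerm, int generations) {
  ArrayList<String> terms = new ArrayList<String>();
  terms.add(seedTerm);
  for (int g = 0; g < generations; g++) {
    ArrayList<String> nextTerms = new ArrayList<String>();
    for (String term : terms) {
      buildSoundNetwork(term);                    // network of sounds sharing this tag
      for (String related : relatedWords(term)) { // lexical jump via Wordnik
        buildSoundNetwork(related);
        bridge(term, related);                    // connect the two networks
        nextTerms.add(related);
      }
    }
    terms = nextTerms;
  }
}

// stubs standing in for the real Freesound/Wordnik/graph code
void buildSoundNetwork(String term) { }
void bridge(String termA, String termB) { }
ArrayList<String> relatedWords(String term) { return new ArrayList<String>(); }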
To sonify the resulting networks, I used a simple model of artificial intelligence that is sort of like a cellular automaton. I released a swarm of simple automata on the network and turned on sounds whenever the number of bots on a sound reached a critical mass.
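The trigger rule itself is simple. Here is a minimal version, again a sketch rather than the dissertation code, and the critical mass threshold is arbitrary.
// random-walking bots sit on sound nodes; a sound switches on once enough bots pile onto it
int criticalMass = 5;
HashMap<Integer, Integer> botsPerSound = new HashMap<Integer, Integer>();

void moveBot(int fromSound, int toSound) {
  botsPerSound.put(fromSound, max(0, botsPerSound.getOrDefault(fromSound, 0) - 1));
  int count = botsPerSound.getOrDefault(toSound, 0) + 1;
  botsPerSound.put(toSound, count);
  if (count >= criticalMass) {
    // in the real system this started playback of the corresponding Freesound sample
    println("turn on sound " + toSound);
  }
}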
Results
Here are the results of my dissertation. You can read a paper I presented at a conference, and the dissertation itself. Then you can listen to the music written by this program. It's not Beethoven, but I guarantee that you will find it interesting.
Composing with All Sound Using the FreeSound and Wordnik APIs
Method for Simulating Creativity to Generate Sound Collages from Documents on the Web
Disconnected: Algorithmic music composed using all sounds and a network model of creativity