Evan X. Merz

musician / technologist / human being

Tagged "processing"

Sonifying Processing

Sonifying Processing shows students and artists how to bring sound into their Processing programs. It takes a hands-on approach, incorporating examples into every topic discussed. Each section of the book explains a sound programming concept, then demonstrates it in code. The examples build from simple synthesizers in the first few chapters to more complex sound-manglers as the book progresses. Each step of the way is explained at a level that is simple enough for new learners yet comfortable for more experienced programmers.

Topics covered include Additive Synthesis, Frequency Modulation, Sampling, Granular Synthesis, Filters, Compression, Input/Output, MIDI, Analysis, and everything else an artist may need to bring their Processing sketches to life.
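The book is built around Oliver Bown's Beads library. As a taste of the style, here is roughly the smallest Beads synthesizer you can write in Processing: a sine wave patched through a gain. This sketch is my illustration of the library's basic pattern, not an excerpt from the book.

import beads.*;

AudioContext ac;

void setup() {
  size(300, 300);
  ac = new AudioContext();
  // a WavePlayer generates a repeating waveform at a fixed frequency
  WavePlayer sine = new WavePlayer(ac, 440.0f, Buffer.SINE);
  // a Gain scales the signal; 0.2 keeps the volume comfortable
  Gain g = new Gain(ac, 1, 0.2f);
  g.addInput(sine);
  ac.out.addInput(g);
  ac.start();
}

void draw() {
  background(0);
}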

"Sonifying Processing is a great introduction to sound art in Processing. It's a valuable reference for multimedia artists." - Beads Creator Oliver Bown

Sonifying Processing is available as a free PDF ebook, in a Kindle edition, or in print from Amazon.com.

Downloads

Sonifying Processing: The Beads Tutorial

The Beads Library was created by Oliver Bown, and can be downloaded at http://www.beadsproject.net/.

Press

Sonifying Processing on Peter Kirn's wonderful Create Digital Music

Sonifying Processing on Rekkerd

Sonifying Processing on Mostly Noise

Swarm Intelligence in Music in The Signal Culture Cookbook

The Signal Culture Group just published The Signal Culture Cookbook. I contributed a chapter titled "The Mapping Problem", which deals with issues surrounding swarm intelligence in music and the arts. Swarm intelligence is a naturally non-human mode of intelligent behavior, so it presents some unique problems when being applied to the uniquely human behavior of creating art.

The cover of The Signal Culture Cookbook

The Signal Culture Cookbook is a collection of techniques and creative practices employed by artists working in the field of media arts. Articles include real-time glitch video processing, direct laser animation on film, transforming your drawing into a fake computer, wi-fi mapping, alternative uses for piezo mics, visualizing earthquakes in real time and using swarm algorithms to compose new musical structures. There's even a great, humorous article on how to use offline technology for enhancing your online sentience – and more!

And here's a quote from the introduction to my chapter.

Some composers have explored the music that arises from mathematical functions, such as fractals. Composers such as myself have tried to use computers not just to imitate the human creative process, but also to simulate the possibility of inhuman creativity. This has involved employing models of intelligence and computation that aren't based on cognition, such as cellular automata, genetic algorithms and the topic of this article, swarm intelligence. The most difficult problem with using any of these systems in music is that they aren't inherently musical. In general, they are inherently unrelated to music. To write music using data from an arbitrary process, the composer must find a way of translating the non-musical data into musical data. The problem of mapping a process from one domain to work in an entirely unrelated domain is called the mapping problem. In this article, the problem is mapping from a virtual swarm to music, however, the problem applies in similar ways to algorithmic art in general. Some algorithms may be easily translated into one type of art or music, while other algorithms may require complex math for even basic art to emerge.
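The quote is abstract, so here's a toy example of one such mapping (my illustration, not code from the chapter). The hypothetical helpers positionToPitch and positionToVelocity translate a swarm agent's position in a 2D space into MIDI-style note data, snapping pitches to a pentatonic scale so that arbitrary positions land on musically related notes.

// scale degrees of a major pentatonic scale, spanning one octave
int[] pentatonic = { 0, 2, 4, 7, 9 };

// map an agent's horizontal position to a MIDI pitch snapped to the scale
int positionToPitch(float x, float spaceWidth) {
  int degree = constrain((int)map(x, 0, spaceWidth, 0, 5 * pentatonic.length), 0, (5 * pentatonic.length) - 1);
  int octave = degree / pentatonic.length;
  return 36 + (12 * octave) + pentatonic[degree % pentatonic.length]; // 36 = MIDI C2
}

// map an agent's vertical position to a MIDI velocity
int positionToVelocity(float y, float spaceHeight) {
  return constrain((int)map(y, 0, spaceHeight, 30, 127), 30, 127);
}

Even this tiny example imposes musical assumptions on the swarm: choosing a scale, a range, and a loudness curve is the mapping problem in miniature.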

Donate to the Signal Culture Group to get the full book!

Glitching Images in Processing

This summer I'm going to release a new album of solo electronic music that is heavily influenced by EDM and classic rock. For the past few weeks I've been trying to figure out what to do about the art on the new album.

The album is called "FYNIX Fights Back" so I decided to use images of animals fighting, but I didn't just want to pull up generic images. I wanted to add my own special touch to each image. So I pulled out a tool that I haven't used in years: the Processing programming language.

Processing is great for simple algorithmic art. In this case I wanted to glitch some images interactively, but I wasn't exactly sure what I wanted to do.

So I just started experimenting. I played with colors and shapes and randomness. I like to derive randomness from mouse movement: the more the mouse moves, the more randomness is imparted to whatever the user is doing.

I added image glitching based on mouse speed. The quicker the cursor moves, the more random the generated shapes become, and the more they are offset from their source position in the original image.

Here's the end result.

Fynix Fights Back

Here's the source code. Make it even better.

import java.awt.event.KeyEvent;

// relationship between image size and canvas size
float scalingFactor = 2.0;

// image from unsplash.com
String imageFilename = "YOUR_FILE_IN_DATA_FOLDER.jpg";

// image container
PImage img;
PImage scaledImage;

int minimumVelocity = 40;
int velocityFactorDivisor = 5; // the larger this is, the more vertices you will get
int velocityFactor = 1; // this will be overridden below

float minimumImageSize = 10.0f;

boolean firstDraw = true;

int currentImage = 1;

void settings() {
  // load the source image
  img = loadImage(imageFilename);
  
  // load the pixel colors into the pixels array
  img.loadPixels();
  
  // create a canvas that is proportional to the selected image
  size((int)(img.width / scalingFactor), (int)(img.height / scalingFactor));
  
  // scale the image for the window size
  scaledImage = loadImage(imageFilename);
  scaledImage.resize(width, height);
  
  // override velocityFactor
  velocityFactor = (int)(width / velocityFactorDivisor);
}

void setup() {
  // disable lines
  noStroke();
}

void keyPressed() {
  if(keyCode == KeyEvent.VK_R) {
    firstDraw = true; // if the user presses 'r', reset to the original image
  }
}

void draw() {
  if(firstDraw) {
    image(scaledImage, 0, 0);
    firstDraw = false;
  }
  
  // right click to render to image file
  if(mousePressed && mouseButton == RIGHT) {
    save(imageFilename.replace(".jpg", "") + "_render_" + currentImage + ".tga");
    currentImage++;
  }
  
  if(mousePressed && mouseButton == LEFT && mouseX >= 0 && mouseX < width && mouseY >= 0 && mouseY < height) {
    int velocityX = minimumVelocity + (3 * velocity(mouseX, pmouseX, width));
    int velocityY = minimumVelocity + (3 * velocity(mouseY, pmouseY, height));
    
    color c = img.pixels[mousePositionToPixelCoordinate(mouseX, mouseY)];

    int vertexCount = ((3 * velocityFactor) + velocityX + velocityY) / velocityFactor;
    
    int minimumX = mouseX - (velocityX / 2);
    int maximumX = mouseX + (velocityX / 2);
    int minimumY = mouseY - (velocityY / 2);
    int maximumY = mouseY + (velocityY / 2);
    
    PGraphics pg = createGraphics(maximumX - minimumX, maximumY - minimumY);
    
    pg.beginDraw();
    pg.noStroke();
    pg.fill(c);

    // first draw a shape into the buffer
    pg.beginShape();
    for(int i = 0; i < vertexCount; i++) {
      pg.vertex(random(0, pg.width), random(0, pg.height));
    }
    pg.endShape();
    pg.endDraw();
    
    pg.loadPixels();
    
    // then copy image pixels into the shape
    
    // get the upper left coordinate in the source image
    int startingCoordinateInSourceImage = mousePositionToPixelCoordinate(minimumX, minimumY);
    
    // get the width of the source image
    int sourceImageWidth = (int)(img.width);
    
    // set the offset from the source image
    int offsetX = velocity(mouseX, pmouseX, width);
    int offsetY = velocity(mouseY, pmouseY, height);
    
    // ensure that the offset doesn't go off the canvas
    if(mouseX > width / 2) offsetX *= -1;
    if(mouseY > height / 2) offsetY *= -1;
    
    for(int y = 0; y < pg.height; y++) {
      for(int x = 0; x < pg.width; x++) {
        // calculate the coordinate in the destination image
        int newImageY = y * pg.width;
        int newImageX = x;
        int newImageCoordinate = newImageX + newImageY;
        
        // calculate the location in the source image
        //int sourceImageCoordinate = (int)(startingCoordinateInSourceImage + (x * scalingFactor) + ((y * scalingFactor) * sourceImageWidth));
        //int sourceImageCoordinate = (int)(startingCoordinateInSourceImage + ((x + offsetX) * scalingFactor) + (((y + offsetY) * scalingFactor) * sourceImageWidth));

        int sourceImageX = (int)(((x + offsetX) * scalingFactor));
        int sourceImageY = (int)((((y + offsetY) * scalingFactor) * sourceImageWidth));
        int sourceImageCoordinate = (int)(startingCoordinateInSourceImage + sourceImageX + sourceImageY);
        
        // ensure the calculated coordinates are within bounds
        if(newImageCoordinate >= 0 && newImageCoordinate < pg.pixels.length 
           && sourceImageCoordinate >= 0 && sourceImageCoordinate < img.pixels.length
           && pg.pixels[newImageCoordinate] == c) {
          pg.pixels[newImageCoordinate] = img.pixels[sourceImageCoordinate];
        }
      }
    }
    
    pg.updatePixels(); // write the modified pixels back into the buffer
    image(pg, minimumX, minimumY);
  }
}

// convert the mouse position to a coordinate within the source image
int mousePositionToPixelCoordinate(int mouseX, int mouseY) {
  return (int)(mouseX * scalingFactor) + (int)((mouseY * scalingFactor) * img.width);
}

// calculate mouse velocity as the distance moved since the last frame
// (the normalization by size cancels out, leaving just abs(x1 - x2))
int velocity(float x1, float x2, float size) {
  int val = (int)(Math.abs((x1 - x2) / size) * size);
  return val;
}
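To try it yourself, drop a .jpg into the sketch's data folder and point imageFilename at it. Drag with the left mouse button to glitch the image, right-click to save the canvas to a numbered TGA file, and press r to restore the original image.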

How to render synchronous audio and video in Processing in 2021

About a decade ago I wrote a blog post about rendering synchronous audio and video in Processing. I posted it on my now-defunct blog, computermusicblog.com. Recently I searched for the same topic and found that my old post was one of the top hits, but the blog itself was gone.

So in this post I want to give searchers an updated guide for rendering synchronous audio and video in Processing.

It's still a headache to render synchronous audio and video in Processing, but with the technique shown here you should be able to copy my work and create a simple two-click process that gets you the results you want in under 100 lines of code.

Prerequisites

You must install Processing, Minim, VideoExport, and ffmpeg on your computer. Processing can be installed from processing.org/. Minim and VideoExport are Processing libraries that you can add via the Processing menus (Sketch > Import Library > Add Library). You must also add ffmpeg to your path; Google how to do that for your operating system.

The final, crappy prerequisite for this particular tutorial is that you must be working with a pre-rendered wav file. In other words, this will work for generating Processing visuals that are based on an audio file, but not for Processing sketches that synthesize video and audio at the same time.

Overview

Here's what the overall process looks like.

  1. Run the Processing sketch. Press q to quit and render the video file.
  2. Run ffmpeg to combine the source audio file with the rendered video, as shown below.
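In practice, step 2 is the single ffmpeg command from the header comment of the sketch below. I keep it in a batch file next to the sketch so rendering stays a two-click process:

ffmpeg -i render.mp4 -i data/audio.wav -c:v copy -c:a aac -shortest output.mp4

This copies the rendered video stream untouched, encodes the wav audio as AAC, and trims the output to the shorter of the two streams.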

Source Code

Without further ado, here's the source. This code is a simple audio visualizer that paints the waveform over a background image. Notice the ffmpeg instructions in the long comment at the top.

/*
  This is a basic audio visualizer created using Processing.
  
  Press q to quit and render the video.
  
  For more information about Minim, see http://code.compartmental.net/tools/minim/quickstart/
  
  For more information about VideoExport, see https://timrodenbroeker.de/processing-tutorial-video-export/

  Use ffmpeg to combine the source audio with the rendered video.
  See https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg
  
  The command will look something like this:
  ffmpeg -i render.mp4 -i data/audio.wav -c:v copy -c:a aac -shortest output.mp4
  
  I prefer to add ffmpeg to my path (google how to do this), then put the above command
  into a batch file.
*/

// Minim for playing audio files
import ddf.minim.*;

// VideoExport for rendering videos
import com.hamoid.*;

// Audio related objects
Minim minim;
AudioPlayer song;
String audioFile = "audio.wav"; // The filename for your music. Must be a 16 bit wav file. Use Audacity to convert.

// image related objects
float scaleFactor = 0.25f; // Multiplied by the image size to set the canvas size. Changing this is how you change the resolution of the sketch.
int middleY = 0; // this will be overridden in setup
PImage background; // the background image
String imageFile = "background.jpg"; // The filename for your background image. The file must be present in the data folder for your sketch.

// video related objects
int frameRate = 24; // This framerate MUST be achievable by your computer. Consider lowering the resolution.
VideoExport videoExport;

public void settings() {
  background = loadImage(imageFile);
  
  // set the size of the canvas window based on the loaded image
  size((int)(background.width * scaleFactor), (int)(background.height * scaleFactor));
}

void setup() {
  frameRate(frameRate);
  
  videoExport = new VideoExport(this, "render.mp4");
  videoExport.setFrameRate(frameRate);
  videoExport.startMovie();
  
  minim = new Minim(this);
  
  // the second param sets the buffer size to the width of the canvas
  song = minim.loadFile(audioFile, width);
  
  middleY = height / 2;
  
  if(song != null) {
    song.play();
  }

  fill(255);
  stroke(255);
  strokeWeight(2);
  
  // tell Processing to draw images semi-transparent
  tint(255, 255, 255, 80);
}

void draw() {
  image(background, 0, 0, width, height);
  
  // draw the waveform over the background image
  if(song != null) {
    for(int i = 0; i < song.bufferSize() - 1; i++) {
      line(i, middleY + (song.mix.get(i) * middleY), i+1, middleY + (song.mix.get(i+1) * middleY));
    }
  }
  
  videoExport.saveFrame(); // render a video frame
}

void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie(); // Render a silent mp4 video.
    exit();
  }
}
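When you press q, VideoExport finalizes render.mp4 as a silent video in the sketch folder. Running the ffmpeg command from the header comment then muxes data/audio.wav into it, producing output.mp4 with the audio and visuals in sync.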