In which I complain about the LCD Soundsystem show…

This feels like a blog post from 2004. I want to complain about some super popular thing as if anyone cares about my opinion. Whatever. I’m going to write it anyway.

I went to see LCD Soundsystem at The Bill Graham last night. The auditorium was packed with a concert audience that actually made me feel young for once. The show was generally excellent, at least in the music sense. It was a great performance. Maybe you could complain that some of the performances were virtually identical to London Sessions. Or maybe you could complain that they only played five tracks off the new album. But that’s picking nits. They ended with All My Friends, so I really can’t complain too much about the music.

LCD Soundsystem at The Bill Graham

And here’s the part where I get up on my soapbox about some nonsense.

1. POINT THE FUCKING LIGHTS AT THE BAND

Point the fucking lights at the band. No. NO! Stop your shit. Nobody cares about your art, we just want to see the fucking band. Seriously.

The band was back-lit for 2/3rds of the show. Bright spots were pouring over James Murphy’s shoulders into the audience’s eyes. He looked fabulous in silhouette. At least I think he looked fabulous. It was tough to see him at all. Since the lights were pointed at the fucking audience.

2. YOUR T-SHIRT IDEAS ARE NOT FUNNY

I can’t believe I bought this shirt.

Terrible LCD Soundsystem Shirt

All you had to do was show the picture of James Murphy, and underneath it, write “LCD Soundsystem”. Instead you gave us this monstrosity.

“So Evan, why didn’t you buy the other shirt?”

I did buy it, and it’s a fucking tie-dye.

Terrible Tie Dye LCD Soundsystem T-Shirt

Seriously? SERIOUSLY?!?!? Has anyone who cares about clothing ever actually worn a tie-dyed t-shirt?

Anyway, I know those are pretty minor points. But I was really excited to finally see one of my favorite bands, and these two little things really grated on me.

PS. Yes, there were even more tragic shirt options. In plain white.

16. November 2017 by evan
Categories: Concert

Learning to Talk About Inaccuracy for a New Data Engineer

About a month ago, the engineering team at Mighty Networks was impacted by China’s now-defunct one-child policy. A parent of our data engineer was having health problems, and because there were no other children to help out, our engineer was forced to relocate his family back to China.

It took us around six months to hire him. With a bunch of data projects now in the pipeline, we couldn’t go through the hiring process again. Fortunately, I was excited to step into the breach. I’ve never formally trained as a data engineer, but I built a data warehouse from scratch for another startup, and I’ve always had a passion for numbers.

Still, I’ve definitely struggled a little bit in the new role. One of the things I’ve struggled with most is how to communicate numbers to the business team. It’s fine to make pretty visualizations, but how do I communicate the subtlety in the data? How do I communicate that the numbers are as accurate as we can get them, but that some sources of error are ever-present in the system?

I came up with the following guidelines to help me talk to the business team, and I thought they might be useful to other programmers who are in a similar position.

Sources of Error

There are two broad categories of error in any data analysis system.

  1. Data warehouse replication problems
  2. Bugs and algorithmic errors

Data Replication Issues

Inaccuracy of the first type is unavoidable, and is a universal problem with data warehouses. Data warehouses typically pull in huge amounts of data from many sources, then transform and analyze it. In our case, we have jobs that should pull data hourly, but these jobs can fail due to infrastructural errors, such as an inability to requisition server resources from Amazon. So we have daily jobs as a fallback mechanism, and manually triggered jobs that can re-pull all the data for a given table.

Typically, the data should be no more than an hour off of real time.

When ingestion jobs fail, the data can be recovered by future jobs. Typically, data replication errors do not result in any long-term data loss.

Bugs and Algorithmic Errors

It’s important to remember that the data analysis system is ultimately just software. As with any software project, bugs are inevitable. Bugs can arise in several ways and in several different places.

  1. Instrumentation. The instrumentation can be wrong in many ways. New features may not have been instrumented at all. Instrumentation may be out of date with the assumptions in the latest release. Instrumentation could be conditionally incorrect, leading to omitted data or semi-correct data.
  2. Ingestion. The ingestion occurs in multiple steps. The data has to be correctly propagated from the database, to the replicated database, to the data pipeline, to the data warehouse. Errors in ingestion often occur when only part of this process has been updated. In our case, fields must be added to RedShift, to Kinesis Firehose (for events), to Data Pipeline (for db records), then they must be exposed in Looker.
  3. Transformation and Analysis. The presentation of advanced statistics rests on several layers of analysis and aggregation. A small typo or mistake in one place can lead to a cascade of errors when that mistake affects a huge amount of data.

How to Talk About Inaccuracy

The best way to talk about inaccuracy is to talk about what steps you have taken to validate the data.

  • What did you do to validate the instrumentation? How did you communicate the requirements and purpose of the new events to the developers? Did you review their pull requests and ensure that the events were actually instrumented?
  • What did you do to validate the ingestion? Did you see events coming in on a staging environment? Did you participate in testing the new feature then verify that your tests percolated through to staging analytics? Did you read the monitoring logs?
  • What did you do to validate the analysis? Did you compare the resulting data to the data in another system? Did you talk through the results with a colleague? Did you double-check the calculations that underlie your charts, even when they were created by other or former developers? Did you create an intermediate chart and verify the correctness at that level of analysis? When you look at the data from another angle or table, are the results consistent with your new numbers?

Don’t dwell on the sources of error. Talk about what you have done to minimize the sources of error. In the end, this is software. Software evolves. The first release is always buggy, and we are always working to refine, fix bugs, and improve.

Make a plan to validate each data release the way the rest of the team validates the consumer-facing product. Use unit tests, regression tests, and spot-checks against production to validate your process.
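
For example, a spot-check can be as simple as comparing a row count in the warehouse against the same count in the production database. Here is a minimal sketch in C#; the connection objects and table name are placeholders, and in practice this would live in whatever test framework your team already uses.

using System;
using System.Data;

// A minimal sketch of a production-vs-warehouse spot-check.
// The connections and table names are hypothetical; in practice they would
// come from whatever database provider you already use.
public static class SpotCheck
{
    // count the rows in a table using a plain ADO.NET connection
    public static long CountRows(IDbConnection connection, string tableName)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM " + tableName;
            return Convert.ToInt64(command.ExecuteScalar());
        }
    }

    // compare a top-line count in production to the same count in the warehouse,
    // allowing a small tolerance for replication lag
    public static bool CountsRoughlyMatch(IDbConnection production, IDbConnection warehouse,
                                          string tableName, double tolerance = 0.01)
    {
        long productionCount = CountRows(production, tableName);
        long warehouseCount = CountRows(warehouse, tableName);

        double difference = Math.Abs(productionCount - warehouseCount);
        return difference <= tolerance * Math.Max(productionCount, 1);
    }
}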

Top Line Numbers vs. Other Numbers

In general, you can never be sure that a number is absolutely 100% correct due to the assumptions in the process, and the fact that you must rely on the work of many other developers. Most charts should be used in aggregate to paint a picture of what is happening. No single number should be thought of as absolute. If possible, you should try to present confidence intervals in charts or use other tools that represent the idea of fuzziness.
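
For example, if a chart shows a rate metric computed from some number of events, one rough way to represent that fuzziness is a normal-approximation confidence interval around the rate. Here is a minimal sketch; the 1.96 factor corresponds to roughly 95% confidence, and this is meant as an illustration rather than a full statistical treatment.

using System;

public static class Fuzziness
{
    // A rough 95% confidence interval for a rate (successes / trials),
    // using the normal approximation.
    public static (double low, double high) RateConfidenceInterval(long successes, long trials)
    {
        if (trials == 0)
            return (0.0, 1.0); // no data, no confidence

        double rate = (double)successes / trials;
        double standardError = Math.Sqrt(rate * (1.0 - rate) / trials);
        double margin = 1.96 * standardError;

        return (Math.Max(0.0, rate - margin), Math.Min(1.0, rate + margin));
    }
}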

But as in everything, there are exceptions.

With particularly important numbers, if the amount of data that goes into them is relatively small, then we can manually validate the process by comparing the results with the actual production database. The point is that for the most important, top-line calculations, you should be extremely confident in your process. You should have reviewed each step along the way and ensured to the best of your ability that the number is as close as possible to the real number.

TLDR

When you’re trying to communicate the accuracy of your data to the business team…

  • Focus on what you have done to validate the numbers.
  • Keep in mind that the data analysis process is software that evolves toward correctness as all software does.
  • Validate data analysis like you validate any other software.
  • Where it’s possible and important, do manual validation against production so you can have high confidence in your top line numbers.

12. November 2017 by evan
Categories: Software

I paid my dues to see David Gray live

One of the reasons I am fascinated with both computer science and music is that each is a bit like magic. Each has invisible power to make change.

Yesterday, my daughter woke up with the flu. Actually, we found out today that she has croup, which is apparently going around her school. So Erin stayed home with her, while I went to work. But we also had to cancel our plans for the evening. Instead of going to the David Gray concert together, I would go alone.

At work, I was stuck in a meeting that seemed like it would never end. During this meeting, I got a headache that kept getting worse and worse. When I rubbed my head, I could feel my temperature rising. I could tell that I was getting sick too. The meeting dragged on for four hours, but I pushed through it.

By the end of the day, I was exhausted and feverish. I had driven to work, because I was still going to make it to the concert, even if I was going alone. But in Palo Alto, you have to do a dance with the parking authority if you want to park for free. You have to move your car every two hours, from one colored zone to another. I left work a little early because I knew there would be traffic on the drive, but when I found my car, there was a bright orange envelope on the windshield. I owe Palo Alto $53.

At that point I had paid $70 for the tickets, plus $53 for the parking ticket, so I had invested $123 to see David Gray. The parking ticket only steeled my resolve. I was going to see him come hell or high water.

And this is all sort of silly, because I don’t even like David Gray that much. Mostly, I have a deep sense of nostalgia for his one hit album that came out right before I went to college. I listened to it a lot in college. At the time, he was the only person I knew of who was doing singer-songwriter-plus-drum-machine really well. When I found out that Erin couldn’t come to the concert, I tried to explain this to my younger coworkers who I invited to the concert. They were nonplussed to say the least. A singer-songwriter with a drum machine really doesn’t sound very compelling today. It sounds practically commonplace. But nobody had quite figured out the formula back in 1998. So David Gray felt really fresh to me at the time.

My point is, I’m not a David Gray fanboy. I just respect the amount of time I spent listening to him when I was younger. Unfortunately, this is not enough to convince others to drive all the way up to Oakland for a concert.

The drive was hellish. If you have ever commuted between San Jose and Oakland during rush hour, then you know how this goes. The Greek Theater is only 40 miles from my workplace. The best route that Google could calculate took two and a half hours. I was in traffic for every minute of that drive, with a rising fever. It was extremely painful, and even though I left work fifteen minutes early, I still arrived 10 minutes late.

But when I pulled up to the parking garage, things seemed to turn around. By this point I had a very high fever, the sun had gone down, and it was raining. So I couldn’t see the “Full” sign on the parking garage until I had already pulled in using the wrong lane. Everyone was continuing on to the next lot. At first I tried to back out of the garage, but then I realized that it wasn’t really full. So I pulled into a spot. I’d take my chances.

Then I stepped out into the rain, and started running to the theater. I could hear the music pouring over the hills. I saw a man standing in the rain, asking for extra tickets. I knew he was just going to scalp them, so I almost walked by, but fuck it, who cares. I gave him my extra ticket.

Then I ran up the steps, and breezed through security. I climbed to the top of the hill, and the music hit me.

That’s the moment when you feel the true power of music. I was all alone and feverish, in the rain after a long day of work and an awful drive to the theater, yet the music seemed to heal me. I could feel myself recovering as the sound washed over me.

I didn’t really talk to anyone. I listened to the music, and watched from the top of the grass. David Gray has a good band, and he has a good audience rapport. Even though his music isn’t as fresh today as it was in 1998, it still changed me last night.

I bought a shirt, and felt a lot better on the drive home.

David Gray at the Greek Theater in Berkeley

David Gray North American Tour 2017 Shirt

20. October 2017 by evan
Categories: Music

When Code Duplication is not Code Duplication

Duplicating code is a bad thing. Any engineer worth his salt knows that the more you repeat yourself, the more difficult your code will be to maintain. We’ve enshrined this in the well-known DRY principle: Don’t Repeat Yourself. So code duplication should be avoided at all costs. Right?

At work I recently came across an interesting case of code duplication that merits more thought, and that shows how some subtlety is needed when applying any coding guideline, even the bedrock ones.

Consider the following CSS, which is a simplified version of a common scenario.

.title {
  color: #111111;
}
.text-color-gray-1 {
  color: #111111;
}

This looks like code duplication, right? If both classes are applying the same color, then they do the same thing. If they do the same thing, then they should BE the same thing, right?

But CSS, and markup in general, presents an interesting case. Are these rules really doing the same thing? Are they both responsible for making the text gray? No.

The function of these two rules is different, even though the effect is the same. The first rule styles titles on the website, while the second rule styles any arbitrary div. The first rule is a generalized style, while the second rule is a special case override. The two rules do fundamentally different things.

Imagine that we optimized those two classes by removing the title class and just using the latter class everywhere. Then the designer changes the title color to dark blue. To change the title color, the developer now has to find every occurrence of .text-color-gray-1 that styles a title and replace it. So, by optimizing two things with different purposes, the developer has actually created more work.

It’s important to recognize in this case that code duplication is not always code duplication. Just because these two CSS classes are applying the same color doesn’t mean that they are doing the same thing. In this case, the CSS classes are more like variables than methods. They hold the same value, but that is just a coincidence.

What looks like code duplication is not actually code duplication.

But… what is the correct thing?

There is no right answer here. It’s a complex problem. You could solve it in lots of different ways, and there are probably three or four different approaches that are equally valid, in the sense that they result in the same amount of maintenance.

The important thing is not to insist that there is one right way to solve this problem, but to recognize that blithely applying the DRY principle here may not be the path to less maintenance.

15. March 2017 by evan
Categories: Software Design

Remaking Cool Music, Rebooting a Blog

I spent the weekend remaking the theme from the Netflix show The White Rabbit Project. It’s a pretty rocking little theme. I really admire any composer who can write a good piece of music that lasts only thirty seconds, and the composer of the original theme certainly succeeded in that respect.

In the blog post about that remake, I included the drum presets, synth presets, and audio files that I used.

So why didn’t I post that here? In short, I am trying to revive my defunct blog. I was pretty hardcore about blogging for about six years, from around 2007 to 2013, but since then I have let it slip. Social media seemed to take over and make blogs irrelevant.

Or at least it felt like my efforts were wasted three years ago.

Now I feel like I have more to say about computer music, and I feel like mass social media, where everyone is lumped together into one big feed (aka Twitter and Facebook), is dying. Its flaws are becoming more and more apparent.

So maybe blogs are both the past and the future? Or maybe the future is something we can’t see yet? Either way, I am rebooting computermusicblog.com.

18. December 2016 by evan
Categories: Uncategorized

I love Pandora, but where is the discovery?

I have been a loyal Pandora subscriber since the month they started offering subscriptions. I love the service. I will continue subscribing forever, even if it’s only to keep my perfectly tuned Christmas music station.

But Pandora is not serving its audience very well, and that annoys me.

I probably listen to Pandora for over five hours on each work day, and for an hour or two on days off. When I tell someone I use Pandora, they inevitably ask me, “why don’t you just use Spotify?” More and more, I feel like they have a point.

In the past, I have preferred Pandora because it enabled discovery. It allowed me to create stations that would play music that I liked but had never heard. I have spent decades of my life listening to and studying music, and one of the main things I like about a piece of music is that I’ve never heard it before. In the past two years or so, I feel like this aspect of Pandora has dwindled or disappeared.

More and more, I feel like my Pandora stations primarily play the tracks that I have already voted for. Admittedly, some of my stations have been around for over a decade, so I have voted for a lot of tracks. When I vote for a track, however, it isn’t an indication that I want to hear that track every time I turn on that station. A vote is an indication that I want to hear tracks that are similar to that track.

But this is just too rare lately on Pandora. I hear the same Ellie Goulding tracks that I voted for last year. I hear the same Glitch Mob tracks that I’ve heard for the past six years. I still like that music, but I would prefer to hear something else. Why not play another track off the album that I voted for? Why play the same single track over and over?

“But why not click the ‘Add Variety’ button?” The ‘Add Variety’ button adds a new seed to that station. I don’t want to change the type of music played by the station, I simply want it to play OTHER music that falls within my already-indicated preferences.

What really irritates me is that this doesn’t seem like a hard feature to implement. Why can’t a user tune the amount of new music they hear? Why can’t we have a slider that we can control with our mood? If the slider is set to 1.0, then we are in full discovery mode: every track played will be one that we haven’t voted on. If the slider is set to 0.0, then every track played will be one that we HAVE voted on. In this way, Pandora could act like Spotify for users who like Spotify, and for people like me, it could act as the best shuffle on the planet.

As a programmer who has worked with large datasets, search tools like ElasticSearch, and written lots of web applications, I know that this isn’t a difficult change. It might require one schema change, and less than ten lines of new code. But it should be implementable and testable in under a week. Design might take longer, but here, I will design it for you.

pandora discovery slider
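
To make the idea concrete, here is a rough sketch of the selection logic in C#. Every name in it is hypothetical (I obviously have no idea what Pandora’s code actually looks like); the point is just that the slider reduces to a single probability check per track.

using System;
using System.Collections.Generic;

// a hypothetical sketch of the discovery slider; none of these names come from Pandora's code
public class Track
{
    public string Title;
    public string Artist;
}

public class StationShuffler
{
    private readonly Random random = new Random();

    // discovery = 0.0 -> play only tracks the user has voted on
    // discovery = 1.0 -> play only similar tracks the user has never voted on
    public Track PickNextTrack(List<Track> votedTracks, List<Track> similarUnheardTracks, double discovery)
    {
        if (votedTracks.Count == 0 && similarUnheardTracks.Count == 0)
            return null; // nothing to play

        bool playUnheard = random.NextDouble() < discovery;

        if (playUnheard && similarUnheardTracks.Count > 0)
            return similarUnheardTracks[random.Next(similarUnheardTracks.Count)];

        if (votedTracks.Count > 0)
            return votedTracks[random.Next(votedTracks.Count)];

        return similarUnheardTracks[random.Next(similarUnheardTracks.Count)];
    }
}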

And seriously, Pandora, I will implement this for you if you are that desperate. My current employer will loan me out, and even without knowing your code base, I could get this done in a month.

So come on, Pandora. Serve your audience. Stop making me explain why I prefer Pandora over Spotify. Add a discovery slider. Today.

11. December 2016 by evan
Categories: Criticism

How to Share an Audio File on Android from Unity/C#

Rendering audio to a file is an important feature of an audio synthesizer, but if the user can’t share the file, then it’s not very useful. In my second pass on my synthesizers, I’m adding the ability to share rendered audio files using email or text message.

The code for sharing audio files is tricky. You have to use Unity’s AndroidJavaClass and AndroidJavaObject wrappers to build and launch something called an Intent. So this code basically instantiates the Java objects for the Intent and the File, then starts the activity for the intent.

Figuring out the code is tough, but you also need to change a setting in your player settings. Specifically, I couldn’t get this code to work without Write Access: External (SDCard) enabled in Player Settings. Even if I am writing to internal storage only, I need to tell Unity to request external write access. I’m assuming that the extra privileges are needed for sharing the file.

Here’s the code.

public static void ShareAndroid(string path)
{
    // create the Android/Java Intent objects
    AndroidJavaClass intentClass = new AndroidJavaClass("android.content.Intent");
    AndroidJavaObject intentObject = new AndroidJavaObject("android.content.Intent");

    // set properties of the intent
    intentObject.Call("setAction", intentClass.GetStatic<string>("ACTION_SEND"));
    intentObject.Call("setType", "*/*");

    //instantiate the class Uri
    AndroidJavaClass uriClass = new AndroidJavaClass("android.net.Uri");

    // log the attach path
    Debug.Log("Attempting to attach file://" + path);

    // check if the file exists
    AndroidJavaClass fileClass = new AndroidJavaClass("java.io.File");
    AndroidJavaObject fileObject = new AndroidJavaObject("java.io.File", path); // wrap the file path in a java.io.File object
    // instantiate a Uri object by parsing the file url
    AndroidJavaObject uriObject = uriClass.CallStatic<AndroidJavaObject>("parse", "file://" + path);
    // call the exists method on the File object
    bool fileExist = fileObject.Call<bool>("exists");
    Debug.Log("File exists: " + fileExist);

    // attach the Uri instance to the intent
    intentObject.Call("putExtra", intentClass.GetStatic<string>("EXTRA_STREAM"), uriObject);

    // get a reference to the current Unity activity
    AndroidJavaClass unity = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject currentActivity = unity.GetStatic<AndroidJavaObject>("currentActivity");

    // start the new intent - for this to work, you must have Write Access: External (SDCard) enabled in Player Settings!
    currentActivity.Call("startActivity", intentObject);
}
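
For example, once a performance has been rendered to a wav file, sharing it is just a call to the method above. The path here is only an example, and I’m assuming ShareAndroid lives in a static utility class in your project.

// example usage, e.g. from a share button handler
string path = Application.persistentDataPath + "/performance.wav";
ShareAndroid(path);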

07. October 2016 by evan
Categories: Software

Recording In-Game Audio in Unity

Recently I began doing a second pass on my synthesizers in the Google Play store. I think the core of each of those synths is pretty solid, but they are still missing some key features. For example, if you want to record a performance, you must record the output of the headphone jack.

So I just finished writing a class that renders a Unity audio stream to a wave file, and I wanted to share it here.

The class is called AudioRenderer. It’s a MonoBehaviour that uses the OnAudioFilterRead method to write chunks of data to a stream. When the performance ends, the Save method is used to save to a canonical wav file.

The full AudioRenderer class is pasted here.

using UnityEngine;
using System;
using System.IO;

public class AudioRenderer : MonoBehaviour
{
    #region Fields, Properties, and Inner Classes
    // constants for the wave file header
    private const int HEADER_SIZE = 44;
    private const short BITS_PER_SAMPLE = 16;
    private const int SAMPLE_RATE = 44100;

    // the number of audio channels in the output file
    private int channels = 2;

    // the audio stream instance
    private MemoryStream outputStream;
    private BinaryWriter outputWriter;

    // should this object be rendering to the output stream?
    public bool Rendering = false;

    /// The status of a render
    public enum Status
    {
        UNKNOWN,
        SUCCESS,
        FAIL,
        ASYNC
    }

    /// The result of a render.
    public class Result
    {
        public Status State;
        public string Message;

        public Result(Status newState = Status.UNKNOWN, string newMessage = "")
        {
            this.State = newState;
            this.Message = newMessage;
        }
    }
    #endregion

    public AudioRenderer()
    {
        this.Clear();
    }

    // reset the renderer
    public void Clear()
    {
        this.outputStream = new MemoryStream();
        this.outputWriter = new BinaryWriter(outputStream);
    }

    /// Write a chunk of data to the output stream.
    public void Write(float[] audioData)
    {
        // Convert numeric audio data to bytes
        for (int i = 0; i < audioData.Length; i++)
        {
            // write the short to the stream
            this.outputWriter.Write((short)(audioData[i] * (float)Int16.MaxValue));
        }
    }

    // write the incoming audio to the output stream
    void OnAudioFilterRead(float[] data, int channels)
    {
        if( this.Rendering )
        {
            // store the number of channels we are rendering
            this.channels = channels;

            // store the data stream
            this.Write(data);
        }
            
    }

    #region File I/O
    public AudioRenderer.Result Save(string filename)
    {
        Result result = new AudioRenderer.Result();

        if (outputStream.Length > 0)
        {
            // add a header to the file so we can send it to the SoundPlayer
            this.AddHeader();

            // if a filename was passed in
            if (filename.Length > 0)
            {
                // Save to a file. Print a warning if overwriting a file.
                if (File.Exists(filename))
                    Debug.LogWarning("Overwriting " + filename + "...");

                // reset the stream pointer to the beginning of the stream
                outputStream.Position = 0;

                // write the stream to a file
                FileStream fs = File.OpenWrite(filename);

                this.outputStream.WriteTo(fs);

                fs.Close();

                // for debugging only
                Debug.Log("Finished saving to " + filename + ".");
            }

            result.State = Status.SUCCESS;
        }
        else
        {
            Debug.LogWarning("There is no audio data to save!");

            result.State = Status.FAIL;
            result.Message = "There is no audio data to save!";
        }

        return result;
    }

    /// This generates a simple header for a canonical wave file, 
    /// which is the simplest practical audio file format. It
    /// writes the header and the audio file to a new stream, then
    /// moves the reference to that stream.
    /// 
    /// See this page for details on canonical wave files: 
    /// http://www.lightlink.com/tjweber/StripWav/Canon.html
    private void AddHeader()
    {
        // reset the output stream
        outputStream.Position = 0;

        // calculate the number of sample frames (samples per channel) in the data chunk
        long numberOfSamples = outputStream.Length / ((BITS_PER_SAMPLE / 8) * channels);

        // create a new MemoryStream that will have both the audio data AND the header
        MemoryStream newOutputStream = new MemoryStream();
        BinaryWriter writer = new BinaryWriter(newOutputStream);

        writer.Write(0x46464952); // "RIFF" in ASCII

        // write the number of bytes in the entire file
        writer.Write((int)(HEADER_SIZE + (numberOfSamples * BITS_PER_SAMPLE * channels / 8)) - 8);

        writer.Write(0x45564157); // "WAVE" in ASCII
        writer.Write(0x20746d66); // "fmt " in ASCII
        writer.Write(16); // the size of the fmt chunk in bytes

        // write the format tag. 1 = PCM
        writer.Write((short)1);

        // write the number of channels.
        writer.Write((short)channels);

        // write the sample rate. 44100 in this case. The number of audio samples per second
        writer.Write(SAMPLE_RATE);

        // write the byte rate (bytes of audio data per second)
        writer.Write(SAMPLE_RATE * channels * (BITS_PER_SAMPLE / 8));
        // write the block align (bytes per sample frame, across all channels)
        writer.Write((short)(channels * (BITS_PER_SAMPLE / 8)));

        // 16 bits per sample
        writer.Write(BITS_PER_SAMPLE);

        // "data" in ASCII. Start the data chunk.
        writer.Write(0x61746164);

        // write the number of bytes in the data portion
        writer.Write((int)(numberOfSamples * BITS_PER_SAMPLE * channels / 8));

        // copy over the actual audio data
        this.outputStream.WriteTo(newOutputStream);

        // move the reference to the new stream
        this.outputStream = newOutputStream;
    }
    #endregion
}

As written, it will only work for 16-bit/44.1kHz audio streams, but it should be easy to adapt.
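
For completeness, here is a minimal usage sketch. The AudioRenderer component should live on the GameObject that has the AudioListener, so that OnAudioFilterRead sees the full mix; the output path is just an example.

using UnityEngine;

// minimal usage sketch: toggle Rendering around a performance, then save the result
public class RecordingController : MonoBehaviour
{
    public AudioRenderer audioRenderer;

    public void StartRecording()
    {
        audioRenderer.Clear();
        audioRenderer.Rendering = true;
    }

    public void StopAndSave()
    {
        audioRenderer.Rendering = false;
        audioRenderer.Save(Application.persistentDataPath + "/performance.wav");
    }
}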

07. October 2016 by evan
Categories: Software

Erratum, an Album Made Entirely with Custom Noise Apps

Erratum is an album that has been in gestation for over a year, and even as I release it into the wild I am refining my ideas about it, and apps, and the place of apps in music-making.

Erratum noise music album cover

Every track on the album was made using freely available sound mangling apps of my own creation. This intersects with my current philosophies about music and music-making in a few ways.

First, by making all the apps publicly available, I’m basically open-sourcing the album. Okay, the apps aren’t open source (yet), but other musicians can now very easily make very similar music. I think this is a good thing. I hope people find my apps useful. But this is a significant change from my thinking of just a few years ago, which was dominated by a slightly-more-insular academic perspective. The academic perspective says something like “I put in a lot of work making the software, so why should I let just anyone use it, or copy my algorithms?” This is an attitude displayed often by the old-guard type of guys I learned from, and in my previous art albums like Disconnected, I took the same stance. With the continuing dominance of social media over good old-fashioned blogs, though, I’m starting to think that sharing is more important than building up my own ivory tower, and I tried to do that with this album.

Second, this album is full of short pieces. I’m starting to come around to the idea reflected in Cage’s Sonatas and Interludes, which is that if you’re going to write weird music, it’s better to write many short pieces or movements than to write something monolithic. So each of the pieces on this album is short and unique. The album is held together only by the thread of the mobile apps used to make them.

Finally, this album reflects the increasing pleasure I get from listening to music that is very close to noise. Some listeners might call some of this music noise. One of the apps I used to create this album, Radio Synthesizer, simply adds radio-like noise to an audio file in greater or lesser proportions. When I had my first child I remember putting her to sleep with white noise, and for a while, white noise was 100% effective at putting her to sleep. I think that made me more appreciative of all the different ways that noise can be generated. This album reflects a lot of different ways of getting to and from a noise-like state.

Stream Erratum from evanxmerz.bandcamp.com or check out the apps I used to make it on Google Play.

28. June 2016 by evan
Categories: Music, Software

Granular Synthesis for Android Phones

Granular is a granular synthesizer for Android devices. Play it by dragging your fingers around the waveform of the source audio file. You can load your own audio files, or just play with the sounds that are distributed with the app.



The horizontal position on the waveform controls the location from which grains will be pulled. The vertical position controls the grain size. The leftmost slider controls the amount of frequency modulation applied to the grains. The middle slider controls the time interval between grains. The rightmost slider controls randomness.
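
For the curious, here is roughly how those controls could map to grain parameters. This is a simplified, hypothetical sketch rather than the actual app code, and the ranges are made up for illustration.

using UnityEngine;

// hypothetical sketch of mapping normalized touch and slider values to grain parameters
public class GrainControls : MonoBehaviour
{
    // normalized 0..1 values coming from the UI
    public float touchX, touchY;
    public float fmSlider, intervalSlider, randomSlider;

    // horizontal position picks where in the source file grains are pulled from
    public float GrainStartSeconds(float clipLengthSeconds)
    {
        return touchX * clipLengthSeconds;
    }

    // vertical position scales the grain size, e.g. between 10ms and 500ms
    public float GrainSizeSeconds()
    {
        return Mathf.Lerp(0.01f, 0.5f, touchY);
    }

    // the leftmost slider sets how much frequency modulation is applied to each grain
    public float FmDepth()
    {
        return fmSlider;
    }

    // the middle slider sets the interval between grains, jittered by the randomness slider
    public float SecondsBetweenGrains()
    {
        float baseInterval = Mathf.Lerp(0.01f, 0.25f, intervalSlider);
        float jitter = Random.Range(-baseInterval, baseInterval) * randomSlider;
        return Mathf.Max(0.001f, baseInterval + jitter);
    }
}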

Download Granular from the Google Play Store and start making grainy soundscapes on your phone.

20. June 2016 by evan
Categories: Software
