SoundCloud, I love you, but you’re terrible

I finally started using SoundCloud for a new jazz/electro project called Fynix. I casually used it in the past under my own name to share WIP tracks, or just odd stuff that didn’t fit on Bandcamp, but I never used it seriously until recently. Now I am using it every day and trying to connect with other artists. I am remixing one track a week, listening to everything on The Upload, and liking/commenting as much as I can.

SoundCloud is the best social network for musicians right now. But it still has a terrible identity crisis. Most of the service seems to be aimed at listeners, or at nobody in particular.

So in this post, I’m going to vent about SoundCloud. It’s a good platform, but with a few changes it could be great.

1. I am an artist. Stop treating me like a listener.

Is it really that difficult for you to recognize that I am a musician, and not a listener? I’ve uploaded 15 tracks. It seems like a pretty simple conditional check to me. So why is my home feed cluttered up with reposts? Why can’t I easily find the new tracks by my friends?
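Seriously, the first pass could be a one-liner. Here’s a sketch (the User type and TrackCount field are hypothetical stand-ins for whatever SoundCloud actually stores):

public static bool IsMusician(User user)
{
    // naive first pass: anyone who has uploaded a track is a musician
    return user.TrackCount > 0;
}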

This is the core problem with SoundCloud. It has two distinct types of users, yet it treats them all the same.

2. Your “Who to Follow” recommendations suck. They REALLY suck.

I’ve basically stopped checking “Who to Follow” even though I want to connect with as many musicians as possible. The recommendations seem arbitrary and just plain stupid.

The main problem is that, as a musician, I want to follow other musicians. I want to follow people who will interact with me, and who will promote my work as much as I promote theirs. Yet, the “Who to Follow” list is full of seemingly random people.

Is this person from the same city as me? No. Do they follow lots of people / will they follow back? No. Are they working in a genre similar to mine? No. Do they like and comment on lots of tracks? No.

So why the heck would I want to follow them?

3. Where are my friends’ latest tracks?

This last one is just infuriating. When I log in, I want to see the latest tracks posted by my friends. But when I go to my home screen, it is pure luck if I can find something posted by someone I actually talk to on SoundCloud. It’s all reposts. Even if I unfollow all the huge repost accounts, I am stuck looking at reposts by my friends, rather than their new tracks.

Okay, so let’s click the dropdown and go to the list of users I am “following”. Are they sorted by recent activity? No. They are sorted by the order in which I followed them. To find out if they have new tracks, I must click on them individually and check their profiles. Because that is really practical.

Okay, so maybe there’s a playlist of my friends’ tracks on the Discover page? Nope. It’s all a random collection of garbage.

As far as I can tell, there is no way for me to listen to my friends’ recent tracks. This discourages real interactions.

Ultimately, the problem is data and intelligence. SoundCloud has none.

You could blame design for these problems. The website shows a lack of direction, as if committees are pulling the product in lots of different directions. SoundCloud seems to want to focus on listeners, to compete in the same space as Spotify.

But even if that’s the case, it should be trivial to see that I don’t use the website like a regular listener. I use it like a musician. I want to connect and interact with other musicians.

And this is such a trivial data/analytics problem that I can only conclude that they aren’t led by data at all. Maybe I only see it this way because I lead our data team, but it seems apparent to me that data is either not used, or used poorly, in all of these features.

For instance, shouldn’t the “Who to Follow” list be based on who I have followed in the past? I’ve followed lots of people who make jazz/electro music, yet no jazz/electro artists are in my “Who to Follow” list. I follow people who like and comment on my tracks, yet I am told to follow people who follow 12 people and have never posted a comment.

The most disappointing thing is that none of this is hard.

4. Oh yeah, and your browser detection sucks.

When I am browsing your site on my tablet, I do not want to use the app. I do not want your very limited mobile site. I just want the regular site (and yes, I know I can get it with a few extra clicks, but it should be the default).

Tips for Managing Joins in Looker

Looker is a fantastic product. It really makes data and visualizations much more manageable. The main goal of Looker is to allow people who aren’t data analysts to do some basic data analysis. To some extent, it achieves this, but there are limits to how far this can go. Ultimately, Looker is a big graphical user interface for writing SQL and generating charts. Under the hood, it’s programmable by data engineers, but it’s constrained by the fact that non-technical users are the ones using it.

The major design challenge for Looker is joins. A data engineer writes the joins into what Looker calls “explores”. Explores are rules for how data can be explored, but ultimately they are just containers for joins. When someone creates a new chart, they start by selecting an explore, and thus selecting the joins that will be used in the chart.

They pick the explore from a dropdown under the word “Explore”. This is the main design bottleneck. Such a UI encourages teams to define only as many explores as can fit in the vertical resolution of the screen. This means limiting the number of explores, and hence the ways tables can be joined, which in turn encourages reusing pre-existing joins for new charts.

This creates two problems.

  1. A non-technical user will not understand the implications of choosing an explore. They may not see that the explore they chose limits how the data can be analyzed. In fact, a non-savvy user may pick the wrong explore entirely and create a chart that is simply wrong.
  2. The joins may evolve over time. A programmer might change a join for a new chart, and this may make old charts incorrect.

The problem is that SQL joins are fundamentally interpretations of the data. Unless a join occurs on id fields AND represents a one-to-one relationship, it interprets the data in some way.

So how can you limit the negative impact of re-using joins?

1. Encourage simple charts

Encourage your teammates to make charts as simple as possible. If possible, a chart should show a single quantity as it changes over a single dimension. This should eliminate or minimize the use of joins in the chart, thus making it far more future-proof.

2. Give explores long, verbose names

Make explore names as descriptive as possible. Try to communicate the choice that a user is making when they choose an explore. For instance, you might name one explore “Products Today” and another one “Product Events Over Time”. These names might indicate that the first explore looks at the products table, but the second explore shows events relating to products joined with a time dimension.
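To make that concrete, here is a sketch of what those two explores might look like in LookML (the view names and the join are hypothetical):

explore: products_today {
  from: products
  label: "Products Today"
}

explore: product_events_over_time {
  from: products
  label: "Product Events Over Time"
  join: product_events {
    sql_on: ${products.id} = ${product_events.product_id} ;;
    relationship: one_to_many
  }
}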

One of the mistakes I made when first starting out with Looker was naming the explores with single-word names. I now see that short names create maintenance nightmares. Before assessing the problems with a given chart, I need to know which explore its maker chose, and because the names were selected so poorly, the choice was often incorrect.

I hope these ideas help you find a path to a maintainable data project. To be honest, I have a lot of digging-out to do!

Pride in Software Craftsmanship

As I spend more and more time in Silicon Valley, my views on software management are changing. I read Radical Candor recently, and while I agree with everything in it, I feel like it over-complicates things.

This meditation has been prompted in part by my passion for food. I like going to new restaurants. It brings me joy to try something new, even if it’s not a restaurant that would ever be considered for a Michelin star. Even crappy-looking restaurants can serve great food.

I am often awed by the disconnect between various parts of the restaurant business and the quality of the food. Some restaurants are spotlessly clean, have beautiful decor, and amazing service… but the food is mediocre. The menu is bland and uninspired, and the food itself is prepared with all the zeal that a minimum-wage employee can manage.

Then I’ll go to a dirty-looking Greek joint down the road, and the service will be awful… but the menu is inspired. It’s not the standard “Greek” menu; it’s got little variations on the dishes. And when the food comes out (finally), maybe it isn’t beautiful on the plate, but the flavors come together to make something greater than the ingredients and the recipe.

What seems to distinguish a good restaurant from a crappy one is pride. At restaurants that I return to, there is someone there, maybe a manager, maybe a cook, maybe the chef who designed the menu, who takes great pride in his work.

There’s a diner by my old house, for instance, where the food is … diner food. There’s no reason to go back to the restaurant… except for the manager. The man who runs the floor, seats the patrons, deals with the kitchen, and does all the little things that make a restaurant tick. He manages to make that particular diner worth going to. And for a guy who has two young kids, that’s terrific.

I am starting to think that the same basic principle applies to software engineers. I’ve met brilliant engineers with all sorts of characteristics. Some of them have a lot of education and read all the latest guides. Others have little education, and don’t read at all. The main thing that makes them good engineers is that they take pride in their work. They care about the quality of their work, regardless of how many people are going to use it, or how much time they put into it. They write quality code because their work matters.

So when it comes to managing software projects, I’m starting to think that all of these systems boil down to two basic steps.

  1. Put your engineers in a position to take pride in their work.
  2. Get out of the way.

Obviously, the first step is non-trivial. It’s why there are so many books on the topic. But at the end of the day, pride is what matters.

Sometimes It’s Okay to NOT Write Unit Tests

I recently lost about two-and-a-half days to unit/integration tests. At Mighty Networks, we are pretty proud of our test coverage, and we make writing tests part of the development process. Developers are required to write tests for every feature they implement, but in the past few days I’ve seen that this policy needs to be applied flexibly.

A few months ago we wrote a pretty expansive integration with the iTunes Store. Since we allow people to sell subscriptions through our app, we required a pretty complex integration. Apple’s developer APIs are notoriously crappy, so this required a full team effort. One developer wrote a series of tests for our Apple integration.

In theory, the tests are very thorough. But getting real data for testing is virtually impossible. So the developer faked up a JSON file, then wrote a preprocessor to generate fake data in a format that looked like Apple’s format. Then he wrote tests.

You may see where this is going already.

The tests he wrote essentially tested his preprocessor. Rather than testing the actual methods used in the integration with Apple, the tests looked at the values generated by the preprocessor. Essentially, by writing a clever object to fake Apple data, he removed the actual integration from the tests.

The tests looked correct. They seemed to show that our Apple code worked. But really they were mostly testing the test code itself.
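To make the anti-pattern concrete, here is a minimal sketch (written in C# with hypothetical names; our actual stack is Rails) of a test that never touches the integration it claims to cover:

using System.Collections.Generic;
using NUnit.Framework;

// hypothetical fake-data generator standing in for real Apple receipts
public static class FakeAppleReceipts
{
    public static Dictionary<string, string> Receipt(string productId)
    {
        return new Dictionary<string, string>
        {
            { "product_id", productId },
            { "status", "0" }
        };
    }
}

[TestFixture]
public class AppleIntegrationTests
{
    [Test]
    public void TestSubscriptionPurchase()
    {
        var receipt = FakeAppleReceipts.Receipt("premium_monthly");

        // this assertion only proves that the fake generator works;
        // the real receipt-parsing code is never called
        Assert.AreEqual("premium_monthly", receipt["product_id"]);
    }
}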

So when I modified a related system, and added a few tests, I suddenly saw a massive cascade of failures all over the place. The failures were of different types too. Sometimes there was a null value, or an unexpected ID, or an error seemingly from Apple.

It took me a day to figure out that the Apple integration itself wasn’t failing; the preprocessor just wasn’t set up to work with the rest of the system. Then it took me another day and a half to pull out the worst part of the system and replace the non-tests.

I don’t blame the developer who wrote the tests. It’s a common mistake, and we’ve all made it at least once.

In part I blame Rails, because it encourages black-and-white thinking about software development.

The developer followed the rule that he needs to write tests for every new feature. When he integrated with Apple, he diligently wrote his tests.

The problem arose when he realized that he couldn’t run production code to get real data. He didn’t know how to write a test for the algorithm, so he wrote code that generates Apple-like data, then tested that.

The developer failed to see that writing tests is a guideline, rather than a rule. In this case, it is very difficult to test every part of the integration. It’s acceptable to write tests of the core process, without testing specific return values and specific pieces of data. The tests gave the impression of working code, and full test coverage. But they hid a few problems with the integration by testing for specific values, rather than algorithmic correctness.

So what can we do?

Senior developers need to encourage junior developers to talk about problems that arise when following “the rules.” Senior developers need to encourage an environment where it’s okay to admit that a portion of the code just can’t be tested. Or at least to see that a portion of the code can’t be tested in the same way as most of the code. Senior developers need to encourage critical thinking and analysis in situations where the strict interpretation of the rules may not lead to the best results for the development process.

Learning to Talk About Inaccuracy for a New Data Engineer

About a month ago, the engineering team at Mighty Networks was impacted by China’s now-defunct one-child policy. A parent of our data engineer was having health problems, and because there were no other children to help out, he was forced to relocate his family back to China.

It took us around six months to hire him. With a bunch of data projects now in the pipeline, we couldn’t go through the hiring process again. Fortunately, I was excited to step into the breach. I’ve never formally trained as a data engineer, but I built a data warehouse from scratch for another startup, and I’ve always had a passion for numbers.

Still, I’ve definitely struggled a little bit in the new role. One of the things I’ve struggled with most is how to communicate numbers to the business team. It’s fine to make pretty visualizations, but how do I communicate the subtlety in the data? How do I communicate the fact that the numbers are as accurate as we can get, but there are still some sources of error ever-present in the system?

I came up with the following guidelines to help me talk to the business team, and I thought they might be useful to other programmers who are in a similar position.

Sources of Error

There are two categorical sources of error in any data analysis system.

  1. Data warehouse replication problems
  2. Bugs and algorithmic errors

Data Replication Issues

Inaccuracy of the first type is unavoidable, and is a universal problem with data warehouses. Data warehouses are typically pulling in huge amounts of data from many sources, then transforming it and analyzing it. In our case, we have jobs that should pull data hourly, but these jobs can fail due to infrastructural errors, such as an inability to requisition server resources from Amazon. So we have jobs that run daily as a fallback mechanism, and we have jobs to pull all the data for each table that can be run manually.

Typically, the data should be no more than an hour off of real time.

When ingestion jobs fail, the data can be recovered by future jobs. Typically, data replication errors do not result in any long-term data loss.

Bugs and Algorithmic Errors

It’s important to remember that the data analysis system is ultimately just software. As with any software project, bugs are inevitable. Bugs can arise in several ways and in several different places.

  1. Instrumentation. The instrumentation can be wrong in many ways. New features may not have been instrumented at all. Instrumentation may be out of date with the assumptions in the latest release. Instrumentation could be conditionally incorrect, leading to omitted data or semi-correct data.
  2. Ingestion. The ingestion occurs in multiple steps. The data has to be correctly propagated from the database, to the replicated database, to the data pipeline, to the data warehouse. Errors in ingestion often occur when only part of this process has been updated. In our case, fields must be added to RedShift, to Kinesis Firehose (for events), to Data Pipeline (for db records), then they must be exposed in Looker.
  3. Transformation and Analysis. The presentation of advanced statistics rests on several layers of analysis and aggregation. A small typo or mistake in one place can lead to a cascade of errors when that mistake affects a huge amount of data.

How to Talk About Inaccuracy

The best way to talk about inaccuracy is to talk about what steps you have taken to validate the data.

  • What did you do to validate the instrumentation? How did you communicate the requirements and purpose of the new events to the developers? Did you review their pull requests and ensure that the events were actually instrumented?
  • What did you do to validate the ingestion? Did you see events coming in on a staging environment? Did you participate in testing the new feature then verify that your tests percolated through to staging analytics? Did you read the monitoring logs?
  • What did you do to validate the analysis? Did you compare the resulting data to the data in another system? Did you talk through the results with a colleague? Did you double-check the calculations that underlie your charts? Even when they were created by other/former developers? Did you create an intermediate chart and verify the correctness at that level of analysis? When you look at the data from another angle/table, do the results make sense with your new results?

Don’t dwell on the sources of error. Talk about what you have done to minimize the sources of error. In the end, this is software. Software evolves. The first release is always buggy, and we are always working to refine, fix bugs, and improve.

Make a plan to validate each data release the way the rest of the team validates the consumer-facing product. Use unit tests, regression tests, and spot-checks against production to validate your process.

Top Line Numbers vs. Other Numbers

In general, you can never be sure that a number is absolutely 100% correct due to the assumptions in the process, and the fact that you must rely on the work of many other developers. Most charts should be used in aggregate to paint a picture of what is happening. No single number should be thought of as absolute. If possible, you should try to present confidence intervals in charts or use other tools that represent the idea of fuzziness.

But as in everything, there are exceptions.

With particularly important numbers, if the amount of data that goes into them is relatively small, then we can manually validate the process by comparing the results with the actual production database. The point is that for the most important, top-line calculations, you should be extremely confident in your process. You should have reviewed each step along the way and ensured to the best of your ability that the number is as close as possible to the real number.

TLDR

When you’re trying to communicate the accuracy of your data to the business team…

  • Focus on what you have done to validate the numbers.
  • Keep in mind that the data analysis process is software that evolves toward correctness as all software does.
  • Validate data analysis like you validate any other software.
  • Where it’s possible and important, do manual validation against production so you can have high confidence in your top line numbers.

When Code Duplication is not Code Duplication

Duplicating code is a bad thing. Any engineer worth his salt knows that the more you repeat yourself, the more difficult your code will be to maintain. We’ve enshrined this in a well-known principle: DRY, an acronym for Don’t Repeat Yourself. So code duplication should be avoided at all costs. Right?

At work I recently came across an interesting case of code duplication that merits more thought, and shows how some subtlety is needed when applying every coding guideline, even the bedrock ones.

Consider the following CSS, which is a simplified version of a common scenario.

.title {
  color: #111111;
}
.text-color-gray-1 {
  color: #111111;
}

This looks like code duplication, right? If both classes are applying the same color, then they do the same thing. If they do the same thing, then they should BE the same thing, right?

But CSS, and markup in general, present an interesting case. Are these rules really doing the same thing? Are they both responsible for making the text gray? No.

The function of these two rules is different, even though the effect is the same. The first rule styles titles on the website, while the second rule styles any arbitrary div. The first rule is a generalized style, while the second rule is a special case override. The two rules do fundamentally different things.

Imagine that we optimized those two classes by removing the title class and just using the latter class. Then the designer changes the title color to dark blue. To change the title color, the developer now has to replace each occurrence of .text-color-gray-1 where it styles a title. So, by unifying two things with different purposes, the developer has actually created more work.

It’s important to recognize in this case that code duplication is not always code duplication. Just because these two CSS classes are applying the same color doesn’t mean that they are doing the same thing. In this case, the CSS classes are more like variables than methods. They hold the same value, but that is just a coincidence.

What looks like code duplication is not actually code duplication.

But… what is the correct thing?

There is no right answer here. It’s a complex problem. You could solve it in lots of different ways, and there are probably three or four different approaches that are equally valid, in the sense that they result in the same amount of maintenance.
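For instance, one of those approaches is to keep both classes but pull the shared value into a variable, so the coincidence becomes explicit while each rule keeps its own purpose. A sketch using a CSS custom property:

:root {
  --gray-1: #111111;
}
.title {
  color: var(--gray-1);
}
.text-color-gray-1 {
  color: var(--gray-1);
}

Now if the designer changes titles to dark blue, only the .title rule changes, and the utility class keeps its meaning.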

The important thing is not to insist that there is one right way to solve this problem, but to recognize that blithely applying the DRY principle here may not be the path to less maintenance.

How to Share an Audio File on Android from Unity/C#

Rendering audio to a file is an important feature of an audio synthesizer, but if the user can’t share the file, then it’s not very useful. In my second pass on my synthesizers, I’m adding the ability to share rendered audio files using email or text message.

The code for sharing audio files is tricky. You have to use Unity’s Java interop classes to launch something called an Intent. The code below instantiates the Java classes for the Intent and the file, then starts the activity for the intent.

Figuring out the code is tough, but you also need to change a setting in your player settings. Specifically, I couldn’t get this code to work without Write Access: External (SDCard) enabled in Player Settings. Even if I am writing to internal storage only, I need to tell Unity to request external write access. I’m assuming that the extra privileges are needed for sharing the file.

Here’s the code.

public static void ShareAndroid(string path)
{
    // create the Android/Java Intent objects
    AndroidJavaClass intentClass = new AndroidJavaClass("android.content.Intent");
    AndroidJavaObject intentObject = new AndroidJavaObject("android.content.Intent");

    // set properties of the intent
    intentObject.Call<AndroidJavaObject>("setAction", intentClass.GetStatic<string>("ACTION_SEND"));
    intentObject.Call<AndroidJavaObject>("setType", "*/*");

    // log the attach path
    Debug.Log("Attempting to attach file://" + path);

    // check that the file exists
    AndroidJavaObject fileObject = new AndroidJavaObject("java.io.File", path);
    bool fileExists = fileObject.Call<bool>("exists");
    Debug.Log("File exists: " + fileExists);

    // instantiate a Uri pointing at the file
    // (note: on Android 7.0+, sharing file:// URIs between apps requires a FileProvider instead)
    AndroidJavaClass uriClass = new AndroidJavaClass("android.net.Uri");
    AndroidJavaObject uriObject = uriClass.CallStatic<AndroidJavaObject>("parse", "file://" + path);

    // attach the Uri instance to the intent
    intentObject.Call<AndroidJavaObject>("putExtra", intentClass.GetStatic<string>("EXTRA_STREAM"), uriObject);

    // get the current activity from the Unity player
    AndroidJavaClass unity = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject currentActivity = unity.GetStatic<AndroidJavaObject>("currentActivity");

    // start the new intent - for this to work, you must have Write Access: External (SDCard) enabled in Player Settings!
    currentActivity.Call("startActivity", intentObject);
}
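
And here is a minimal usage sketch (the file name and the button handler are hypothetical; I render to the app’s persistent data path):

// hypothetical handler on a MonoBehaviour wired to a share button
public void OnShareButtonPressed()
{
    string path = System.IO.Path.Combine(Application.persistentDataPath, "render.wav");
    ShareAndroid(path);
}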

Recording In-Game Audio in Unity

Recently I began doing a second pass on my synthesizers in the Google Play store. I think the core of each of those synths is pretty solid, but they are still missing some key features. For example, if you want to record a performance, you must record the output of the headphone jack.

So I just finished writing a class that renders a Unity audio stream to a wave file, and I wanted to share it here.

The class is called AudioRenderer. It’s a MonoBehaviour that uses the OnAudioFilterRead method to write chunks of data to a stream. When the performance ends, the Save method is used to save to a canonical wav file.

The full AudioRenderer class is pasted here.

using UnityEngine;
using System;
using System.IO;

public class AudioRenderer : MonoBehaviour
{
    #region Fields, Properties, and Inner Classes
    // constants for the wave file header
    private const int HEADER_SIZE = 44;
    private const short BITS_PER_SAMPLE = 16;
    private const int SAMPLE_RATE = 44100;

    // the number of audio channels in the output file
    private int channels = 2;

    // the audio stream instance
    private MemoryStream outputStream;
    private BinaryWriter outputWriter;

    // should this object be rendering to the output stream?
    public bool Rendering = false;

    /// The status of a render
    public enum Status
    {
        UNKNOWN,
        SUCCESS,
        FAIL,
        ASYNC
    }

    /// The result of a render.
    public class Result
    {
        public Status State;
        public string Message;

        public Result(Status newState = Status.UNKNOWN, string newMessage = "")
        {
            this.State = newState;
            this.Message = newMessage;
        }
    }
    #endregion

    public AudioRenderer()
    {
        this.Clear();
    }

    // reset the renderer
    public void Clear()
    {
        this.outputStream = new MemoryStream();
        this.outputWriter = new BinaryWriter(outputStream);
    }

    /// Write a chunk of data to the output stream.
    public void Write(float[] audioData)
    {
        // Convert numeric audio data to bytes
        for (int i = 0; i < audioData.Length; i++)
        {
            // write the short to the stream
            this.outputWriter.Write((short)(audioData[i] * (float)Int16.MaxValue));
        }
    }

    // write the incoming audio to the output stream
    void OnAudioFilterRead(float[] data, int channels)
    {
        if (this.Rendering)
        {
            // store the number of channels we are rendering
            this.channels = channels;

            // write the incoming data chunk to the stream
            this.Write(data);
        }
    }

    #region File I/O
    public AudioRenderer.Result Save(string filename)
    {
        Result result = new AudioRenderer.Result();

        if (outputStream.Length > 0)
        {
            // add a header to the file so we can send it to the SoundPlayer
            this.AddHeader();

            // if a filename was passed in
            if (filename.Length > 0)
            {
                // Save to a file. Print a warning if overwriting a file.
                if (File.Exists(filename))
                    Debug.LogWarning("Overwriting " + filename + "...");

                // reset the stream pointer to the beginning of the stream
                outputStream.Position = 0;

                // write the stream to a file
                FileStream fs = File.OpenWrite(filename);

                this.outputStream.WriteTo(fs);

                fs.Close();

                // for debugging only
                Debug.Log("Finished saving to " + filename + ".");
            }

            result.State = Status.SUCCESS;
        }
        else
        {
            Debug.LogWarning("There is no audio data to save!");

            result.State = Status.FAIL;
            result.Message = "There is no audio data to save!";
        }

        return result;
    }

    /// This generates a simple header for a canonical wave file, 
    /// which is the simplest practical audio file format. It
    /// writes the header and the audio file to a new stream, then
    /// moves the reference to that stream.
    /// 
    /// See this page for details on canonical wave files: 
    /// http://www.lightlink.com/tjweber/StripWav/Canon.html
    private void AddHeader()
    {
        // reset the output stream
        outputStream.Position = 0;

        // calculate the number of sample frames in the data chunk
        // (total bytes divided by bytes per sample and by the channel count)
        long numberOfSamples = outputStream.Length / ((BITS_PER_SAMPLE / 8) * channels);

        // create a new MemoryStream that will have both the audio data AND the header
        MemoryStream newOutputStream = new MemoryStream();
        BinaryWriter writer = new BinaryWriter(newOutputStream);

        writer.Write(0x46464952); // "RIFF" in little-endian ASCII

        // write the size of the entire file, minus 8 bytes for the RIFF descriptor
        writer.Write((int)(HEADER_SIZE + (numberOfSamples * BITS_PER_SAMPLE * channels / 8)) - 8);

        writer.Write(0x45564157); // "WAVE" in little-endian ASCII
        writer.Write(0x20746d66); // "fmt " in little-endian ASCII

        // write the size of the fmt chunk: 16 bytes for PCM
        writer.Write(16);

        // write the format tag. 1 = PCM
        writer.Write((short)1);

        // write the number of channels.
        writer.Write((short)channels);

        // write the sample rate. 44100 in this case. The number of audio samples per second
        writer.Write(SAMPLE_RATE);

        // write the byte rate: bytes of audio data per second
        writer.Write(SAMPLE_RATE * channels * (BITS_PER_SAMPLE / 8));

        // write the block align: bytes per sample frame across all channels
        writer.Write((short)(channels * (BITS_PER_SAMPLE / 8)));

        // 16 bits per sample
        writer.Write(BITS_PER_SAMPLE);

        // "data" in ASCII. Start the data chunk.
        writer.Write(0x61746164);

        // write the number of bytes in the data portion
        writer.Write((int)(numberOfSamples * BITS_PER_SAMPLE * channels / 8));

        // copy over the actual audio data
        this.outputStream.WriteTo(newOutputStream);

        // move the reference to the new stream
        this.outputStream = newOutputStream;
    }
    #endregion
}

As written it will only work on 16-bit/44.1kHz audio streams, but it should be easy to adapt.
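
Here is a minimal usage sketch, assuming the AudioRenderer is attached to the same GameObject as the AudioListener (the file name and the trigger method are hypothetical):

using System.IO;
using UnityEngine;

// hypothetical driver script attached next to the AudioRenderer
public class RenderExample : MonoBehaviour
{
    private AudioRenderer audioRenderer;

    void Start()
    {
        audioRenderer = GetComponent<AudioRenderer>();

        // start capturing everything that passes through this GameObject's audio filter chain
        audioRenderer.Rendering = true;
    }

    public void StopAndSave()
    {
        audioRenderer.Rendering = false;

        // write the captured audio to a canonical wav file
        string path = Path.Combine(Application.persistentDataPath, "performance.wav");
        AudioRenderer.Result result = audioRenderer.Save(path);
        Debug.Log("Save finished with status: " + result.State);
    }
}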

Erratum, an Album Made Entirely with Custom Noise Apps

Erratum is an album that has been in gestation for over a year, and even as I release it into the wild I am refining my ideas about it, and apps, and the place of apps in music-making.

[Image: Erratum album cover]

Every track on the album was made using freely available sound mangling apps of my own creation. This intersects with my current philosophies about music and music-making in a few ways.

First, by making all the apps publicly available, I’m basically open-sourcing the album. Okay, the apps aren’t open source (yet), but other musicians can now very easily make very similar music. I think this is a good thing, and I hope people find my apps useful. But this is a significant change from my thinking of just a few years ago, which was dominated by a slightly-more-insular academic perspective. The academic perspective says something like “I put in a lot of work making the software, so why should I let just anyone use it, or copy my algorithms?” This is an attitude often displayed by the old-guard type of guys I learned from, and in my previous art albums, like Disconnected, I took the same stance. With the continuing dominance of social media over good old-fashioned blogs, though, I’m starting to think that sharing is more important than building up my own ivory tower, and I tried to act on that with this album.

Second, this album is full of short pieces. I’m starting to come around to the idea reflected in Cage’s Sonatas and Interludes, which is that if you’re going to write weird music, it’s better to write many short pieces or movements than to write something monolithic. So each of the pieces on this album is short and unique. The album is held together only by the thread of the mobile apps used to make them.

Finally, this album reflects the increasing pleasure I get from listening to music that is very close to noise. Some listeners might call some of this music noise. One of the apps I used to create this album, Radio Synthesizer, simply adds radio-like noise to an audio file in greater or lesser proportions. When I had my first child, I remember putting her to sleep with white noise, and for a while, white noise was 100% effective at putting her to sleep. I think that made me more appreciative of all the different ways that noise can be generated. This album reflects a lot of different ways of getting to and from a noise-like state.

Stream Erratum from evanxmerz.bandcamp.com or check out the apps I used to make it on Google Play.

Granular Synthesis for Android Phones

Granular is a granular synthesizer for Android devices. Play it by dragging your fingers around the waveform of the source audio file. You can upload your own audio files, or just play with the sounds that are distributed with the app.



The horizontal position on the waveform controls the location from which grains will be pulled. The vertical position controls the grain size. The leftmost slider controls the amount of frequency modulation applied to the grains. The middle slider controls the time interval between grains. The rightmost slider controls randomness.

Download Granular from the Google Play Store and start making grainy soundscapes on your phone.