Evan X. Merz

gardener / programmer / creator / human being

How to host a static site on AWS

In this post, I'm going to show you how to put up a simple html webpage on AWS using resources that are usually free. It's probably the cheapest way to set up a website.

Prerequisites

There are a few things you need to do ahead of time.

  1. Sign up for AWS
  2. Create a webpage

For the first one, head on over to https://aws.amazon.com/console/ and create a new account.

For the second one, you will probably need to learn a little coding to make it happen, but I'll give you something simple to start with.

<!DOCTYPE html>
<html>
  <head>
    <title>My Webpage</title>
  </head>
  <body>
    <main>
      <h1>My Webpage</h1>
      <p>
        This is a simple html webpage hosted on AWS S3 and AWS Cloudfront.
      </p>
      <p>
        This was built based on <a href="https://evanxmerz.com/post/how-to-host-a-static-site-on-aws" target="_blank">a tutorial at evanxmerz.com/post/how-to-host-a-static-site-on-aws</a>.
      </p>
    </main>
  </body>
</html>

There are some optional prerequisites that I'm intentionally omitting here. If you want to host your site on your own domain, then you'll need to purchase that domain from a domain registrar, such as AWS Route 53. Then you would also need to get an SSL certificate from a certificate authority such as AWS Certificate Manager.

Create a public S3 bucket

For your site to exist on the internet, it must be served by a computer connected to the internet. To make this possible, we need to upload your html file to "the cloud" somehow. Don't worry, "the cloud" is simply a marketing term for a web server. In this tutorial, our cloud storage service is AWS S3.

First you need to create a bucket.

  1. Browse to the AWS S3 service
  2. Click "Create Bucket". A bucket is a container for files. You might use a different bucket for each of your websites.
  3. Enter a name for your bucket. I name my static site buckets after the site they represent. So empressblog.org is in a bucket called empressblog-org. I named my bucket for this tutorial "static-site-tutorial-1-evanxmerz-com" because I am going to connect it to static-site-tutorial-1.evanxmerz.com.
  4. Next, select the AWS region for your bucket. The region is not really important, but you must write down what you select, because you will need it later. I selected "us-west-1".
  5. Under "Object Ownership" select "ACLs enabled". This will make it easier for us to make this bucket public.
  6. Under "Block Public Access settings for this bucket", unselect "Block all public access", then click the checkbox to acknowledge that you are making the contents of this bucket public.
  7. Then scroll to the bottom and click "Create Bucket".
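
By the way, if you'd rather script these steps than click through the console, here's a rough sketch of the same thing using Python and the boto3 library. The bucket name and region are the ones from this tutorial, so substitute your own. Treat this as a starting point, not a replacement for the console steps above.

import boto3

s3 = boto3.client("s3", region_name="us-west-1")

# create the bucket in your chosen region, with ACLs enabled
s3.create_bucket(
    Bucket="static-site-tutorial-1-evanxmerz-com",
    CreateBucketConfiguration={"LocationConstraint": "us-west-1"},
    ObjectOwnership="ObjectWriter",
)

# turn off "Block all public access" so the bucket can be made public
s3.put_public_access_block(
    Bucket="static-site-tutorial-1-evanxmerz-com",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)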

Next, you need to allow hosting a public website from your bucket.

  1. Locate your bucket in the bucket list in S3, then click it.
  2. You should now be looking at the details for your bucket. Click the "Properties" tab, then scroll down to "Static website hosting" and click "Edit".
  3. Click "Enable", then under "Index document", enter "index.html". Then click "Save changes".
  4. Click the "Permissions" tab for your bucket. Scroll down to "Access control list (ACL)" and click "Edit". Next to "Everyone (public access)", click the box that says "Read". Then click the box that says "I understand the effects of these changes on my objects and buckets" and click "Save changes".

Create your index page

This step assumes that you have already created an html page that you want to make public. You can also upload other pages, css files, and javascript, using this same procedure.

  1. Find the "Objects" tab for your bucket on S3.
  2. Click "Upload".
  3. Click "Add files".
  4. Browse to your index.html
  5. Scroll down to the "Permissions" accordion and expand it.
  6. Click "Grant public-read access" and the checkbox to say that you understand the risk.
  7. Then click "Upload".

Now your page is on the internet. You can go to the objects tab for your bucket, then click on your file. That should display a link called "Object URL". If you click that link, then you should see your page. My link is https://static-site-tutorial-1-evanxmerz-com.s3.us-west-1.amazonaws.com/index.html.
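
If you end up uploading files in a script instead, remember that each object needs public-read access too. Here's a boto3 sketch; the local filename and bucket name are just the ones from this tutorial.

import boto3

s3 = boto3.client("s3")

# upload index.html with a public-read ACL and the right content type,
# so browsers render it instead of downloading it
s3.upload_file(
    "index.html",
    "static-site-tutorial-1-evanxmerz-com",
    "index.html",
    ExtraArgs={"ACL": "public-read", "ContentType": "text/html"},
)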

Why make a CloudFront distribution

Now you have seen your file on the web. Isn't that enough? Isn't this tutorial over? No. There are several problems with using S3 alone.

  1. Your files will be served from S3, which is not optimized for worldwide distribution. Your files have a home on a server in the region you selected. If that region is in the US, then people in Asia are going to have a much slower download experience.
  2. Your site won't handle errors gracefully. Your index page works fine, but what happens if you mistype a link and someone ends up at indec.html? They will get a nasty error message from AWS, rather than being redirected to a page on your site.

The final problem is the URL. This can be solved by setting up a domain in Route 53, but it's much wiser to set up a CloudFront distribution, then connect your domain to that.

Set up a CloudFront distribution

AWS CloudFront is a Content Delivery Network (CDN). A CDN is a network of servers all over the world that are close to where people live. So people in Asia will be served by a copy of your page from a server in Asia.

  1. Find the CloudFront service on AWS.
  2. Click "Create distribution".
  3. In the "Origin domain" field you must enter your S3 bucket's Static Website Hosting Endpoint as the CloudFront origin. You can find this on the "Properties" tab of your S3 bucket. So for me, that was "static-site-tutorial-1-evanxmerz-com.s3-website-us-west-1.amazonaws.com".
  4. Then scroll all the way down to "Alternate domain name (CNAME)". This is where you would enter a custom domain that you purchased from AWS Route 53 or another registrar. For instance, if you want to set up your site on mystore.com, then you would enter "*.mystore.com" and "mystore.com" as custom domains. I entered "static-site-tutorial-1.evanxmerz.com" as my custom domain because that's where I'm putting up this tutorial.
  5. Then go to the "Custom SSL certificate" field. If you do not have your own domain, then you can ignore this. But if you have your own domain, then you will need your own SSL certificate. The SSL certificate is what enables private web browsing using the "https" protocol. Go set up a certificate using AWS Certificate Manager before setting up CloudFront if you want to use a custom domain.
  6. Finally, click "Create distribution".

Then you need to modify the distribution to act more like a normal web server, so we will redirect users to index.html if they request an invalid URL.

  1. Find your distribution on CloudFront and click it.
  2. Click the "Error pages" tab.
  3. Click "Create custom error response".
  4. Under "HTTP error code" select "400: Bad Request".
  5. Under "Customize error response" click "Yes".
  6. Under "Response page path" enter "/index.html".
  7. Under "HTTP Response code" select "200: OK".
  8. Click "Create custom error response".
  9. Repeat these steps for 403 and 404 errors.
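
For the curious: in the CloudFront API these rules are called custom error responses, and they live inside the distribution config. If you ever script this, the structure looks roughly like this Python sketch (note that ResponseCode is a string in the CloudFront API):

# the CustomErrorResponses portion of a CloudFront distribution config
custom_error_responses = {
    "Quantity": 3,
    "Items": [
        {
            "ErrorCode": code,
            "ResponsePagePath": "/index.html",
            "ResponseCode": "200",
            "ErrorCachingMinTTL": 10,
        }
        for code in (400, 403, 404)
    ],
}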

Then, if you have a custom domain, you need to go to AWS Route 53 and enable it.

  1. Go to Route 53 and select the hosted zone for your domain.
  2. Click "Create record".
  3. Create an A record for your domain or subdomain.
  4. Under "Route traffic to" click "Alias".
  5. Click "Alias to CloudFront distribution" and select your distribution.
  6. Click "Create records".

Now if you visit your custom domain, you should see your page. Here's mine: https://static-site-tutorial-1.evanxmerz.com/

Congratulations!

Congratulations! You've put up your first website using AWS S3 and AWS CloudFront! Let's review the basic architecture here.

  1. Files are stored in a cloud file system called AWS S3.
  2. Files are served by a Content Delivery Network (CDN) called AWS CloudFront.
  3. Optionally, domains are routed to your CloudFront distribution by AWS Route 53.
  4. Optionally, CloudFront uses an SSL certificate from AWS Certificate Manager.

This is about the simplest and cheapest architecture for hosting a fast static site on the internet.

It's important to note that this is also the simplest way to host ANY static site. If you generated a static site using React, Gatsby, or Next, then you could host it in the same way.

It's also important to note that this architecture fails as soon as you need to make decisions server-side. It works fine for frontend-only websites, where you don't interact with private data. Once you need private data storage, an API, or custom logic on a page, then you will need a server of some variety. There are multiple solutions in that case, from the so-called "serverless" AWS Lambda to the more old-fashioned AWS EC2, which is simply renting virtual servers.

But you are now ready to start exploring those more complex options.

Composing with All Sound

I sometimes get asked about my dissertation, so I wanted to write a blog post to explain it. In this post, I describe the work I did for my dissertation, which brought together web APIs and a network model of creativity.

Composing with all sound

When I was in graduate school I was somewhat obsessed with the idea of composing with all sound. When I say "composing with all sound" I don't mean composing with a lot of sounds. I mean literally all sound that exists. Composing with all sounds that have ever existed is, of course, impossible, but the internet gives us a treasure trove of sound.

So I looked at the largest collections of sound on the internet, and the best that I found in 2011 was freesound.org. Freesound is great for several reasons.

  • The library is absolutely massive and constantly growing
  • They have a free-to-use API
  • Users can upload tags and descriptions of the sounds
  • Freesound analyzes the sounds and gives access to those descriptors via the API

A model of creativity

Once I had a source for sounds, all I needed was a way to connect them. Neural network research was blossoming in 2011, so I tried to find a neural model that I could connect to the data on Freesound. That's when I found Melissa Schilling's network model of cognitive insight.

Schilling's theory essentially sees ideas as networks in the brain. Ideas are connected by various relationships. Ideas can look alike, sound alike, share a similar space, be described by similar words, and so on. In Schilling's model, cognitive insight, or creativity, occurs when two formerly disparate networks of ideas are connected using a bridge.

So to compose with all the sounds on Freesound, all I needed to do was to organize sounds into networks, then find new ways to connect them. But how could I organize sounds into networks, and what would a new connection look like?

The Wordnik API

I realized that I could make lexical networks of sounds. The tags on sounds on Freesound give us a way to connect sounds. For instance, we could find all sounds that have the tag "scream" and form them into one network.

To make creative jumps, I had to bring in a new data source. After all, the sounds that share a tag are already connected.

That's when I incorporated the Wordnik API. Wordnik is an incredibly complete dictionary, thesaurus, and encyclopedia all wrapped into one. And best of all, they expose it through a fast and affordable API.

Composing with all sound using Freesound, a network model of creativity, and lexical relationships

So the final algorithm looks something like this, although there are many ways to vary it.

  1. Start with a search term provided by a user
  2. Build a network of sounds with that search term
  3. Use Wordnik to find related words
  4. Build a network around the related words
  5. Connect the original network to the new one
  6. Repeat
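
Here's a rough sketch of that loop in Python, using the Freesound and Wordnik REST APIs. This is a reconstruction for illustration, not the code from my dissertation; the endpoints and field names follow the current public docs for Freesound APIv2 and Wordnik v4, so check them before relying on this, and you will need your own API keys.

import requests

FREESOUND_KEY = "YOUR_FREESOUND_API_KEY"
WORDNIK_KEY = "YOUR_WORDNIK_API_KEY"

def sounds_for_term(term):
    # build one network: sounds matching a single search term
    response = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={"query": term, "token": FREESOUND_KEY},
    )
    return [result["name"] for result in response.json()["results"]]

def related_words(term):
    # find candidate bridge words using Wordnik's relatedWords endpoint
    response = requests.get(
        "https://api.wordnik.com/v4/word.json/" + term + "/relatedWords",
        params={"api_key": WORDNIK_KEY},
    )
    return [word for group in response.json() for word in group["words"]]

# start with a user-supplied term and build a network around it, then use
# Wordnik to find related words and connect new networks to the original
networks = {"scream": sounds_for_term("scream")}
for word in related_words("scream"):
    networks[word] = sounds_for_term(word)  # the bridge: "scream" <-> word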

To sonify the resulting networks, I used a simple model of artificial intelligence that is sort of like a cellular automaton. I released a swarm of simple automata on the network and turned on sounds whenever the number of bots on a sound reached a critical mass.
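
A toy version of that sonification step might look like the following: random walkers on a network of sounds, with a sound turned on whenever enough walkers pile up on one node. Again, this is a simplified reconstruction of the idea, not the dissertation code.

import random

# adjacency list: each sound is connected to its neighbors in the network
graph = {
    "scream": ["shout", "yell"],
    "shout": ["scream", "yell"],
    "yell": ["scream", "shout"],
}
walkers = [random.choice(list(graph)) for _ in range(20)]
CRITICAL_MASS = 5

for step in range(100):
    # each walker moves to a random neighboring sound
    walkers = [random.choice(graph[node]) for node in walkers]
    # turn on any sound where the walkers reach critical mass
    for node in set(walkers):
        if walkers.count(node) >= CRITICAL_MASS:
            print("step", step, ": play", node)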

Results

Here are the results of my dissertation. You can read a paper I presented at a conference, and the dissertation itself. Then you can listen to the music written by this program. It's not Beethoven, but I guarantee that you will find it interesting.

Composing with All Sound Using the FreeSound and Wordnik APIs

Method for Simulating Creativity to Generate Sound Collages from Documents on the Web

Disconnected: Algorithmic music composed using all sounds and a network model of creativity

How to render synchronous audio and video in Processing in 2021

About a decade ago I wrote a blog post about rendering synchronous audio and video in Processing. I posted it on my now-defunct blog, computermusicblog.com. Recently, I searched for the same topic and found that my old post was one of the top hits, but my old blog was gone.

So in this post I want to give searchers an updated guide for rendering synchronous audio and video in processing.

It's still a headache to render synchronous audio and video in Processing, but with the technique here you should be able to copy my work and create a simple 2-click process that will get you the results you want in under 100 lines of code.

Prerequisites

You must install Processing, Minim, VideoExport, and ffmpeg on your computer. Processing can be installed from processing.org. Minim and VideoExport are Processing libraries that you can add via Processing menus (Sketch > Import Library > Add Library). You must also add ffmpeg to your path. Google how to do that.

The final, crappy prerequisite for this particular tutorial is that you must be working with a pre-rendered wav file. In other words, this will work for generating Processing visuals that are based on an audio file, but not for Processing sketches that synthesize video and audio at the same time.

Overview

Here's what the overall process looks like.

  1. Run the Processing sketch. Press q to quit and render the video file.
  2. Run ffmpeg to combine the source audio file with the rendered video.

Source Code

Without further ado, here's the source. This code is a simple audio visualizer that paints the waveform over a background image. Notice the ffmpeg instructions in the long comment at the top.

/*
  This is a basic audio visualizer created using Processing.
  
  Press q to quit and render the video.
  
  For more information about Minim, see http://code.compartmental.net/tools/minim/quickstart/
  
  For more information about VideoExport, see https://timrodenbroeker.de/processing-tutorial-video-export/

  Use ffmpeg to combine the source audio with the rendered video.
  See https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg
  
  The command will look something like this:
  ffmpeg -i render.mp4 -i data/audio.wav -c:v copy -c:a aac -shortest output.mp4
  
  I prefer to add ffmpeg to my path (google how to do this), then put the above command
  into a batch file.
*/

// Minim for playing audio files
import ddf.minim.*;

// VideoExport for rendering videos
import com.hamoid.*;

// Audio related objects
Minim minim;
AudioPlayer song;
String audioFile = "audio.wav"; // The filename for your music. Must be a 16 bit wav file. Use Audacity to convert.

// image related objects
float scaleFactor = 0.25f; // Multiplied by the image size to set the canvas size. Changing this is how you change the resolution of the sketch.
int middleY = 0; // this will be overridden in setup
PImage background; // the background image
String imageFile = "background.jpg"; // The filename for your background image. The file must be present in the data folder for your sketch.

// video related objects
int frameRate = 24; // This framerate MUST be achievable by your computer. Consider lowering the resolution.
VideoExport videoExport;

public void settings() {
  background = loadImage(imageFile);
  
  // set the size of the canvas window based on the loaded image
  size((int)(background.width * scaleFactor), (int)(background.height * scaleFactor));
}

void setup() {
  frameRate(frameRate);
  
  videoExport = new VideoExport(this, "render.mp4");
  videoExport.setFrameRate(frameRate);
  videoExport.startMovie();
  
  minim = new Minim(this);
  
  // the second param sets the buffer size to the width of the canvas
  song = minim.loadFile(audioFile, width);
  
  middleY = height / 2;
  
  if(song != null) {
    song.play();
  }

  fill(255);
  stroke(255);
  strokeWeight(2);
  
  // tell Processing to draw images semi-transparent
  tint(255, 255, 255, 80);
}

void draw() {
  image(background, 0, 0, width, height);
  
  if(song != null) {
    // draw the waveform by connecting successive samples in the mix buffer
    for(int i = 0; i < song.bufferSize() - 1; i++) {
      line(i, middleY + (song.mix.get(i) * middleY), i+1, middleY + (song.mix.get(i+1) * middleY));
    }
  }
  
  videoExport.saveFrame(); // render a video frame
}

void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie(); // Render a silent mp4 video.
    exit();
  }
}

Glitching Images in Processing

This summer I'm going to release a new album of solo electronic music that is heavily influenced by EDM and classic rock. For the past few weeks I've been trying to figure out what to do about the art on the new album.

The album is called "FYNIX Fights Back" so I decided to use images of animals fighting, but I didn't just want to pull up generic images. I wanted to add my own special touch to each image. So I pulled out a tool that I haven't used in years: the Processing programming language.

Processing is great for simple algorithmic art. In this case I wanted to glitch some images interactively, but I wasn't exactly sure what I wanted to do.

So I just started experimenting. I played with colors and shapes and randomness. I like to derive randomness based on mouse movement. The more the mouse moves, the more randomness is imparted to whatever the user is doing.

I added image glitching based on mouse speed. The quicker the cursor moves, the more random the generated shapes become, and the more they are offset from their source position in the original image.

Here's the end result.

Fynix Fights Back

Here's the source code. Make it even better.

import java.awt.event.KeyEvent;

// relationship between image size and canvas size
float scalingFactor = 2.0;

// image from unsplash.com
String imageFilename = "YOUR_FILE_IN_DATA_FOLDER.jpg";

// image container
PImage img;
PImage scaledImage;

int minimumVelocity = 40;
int velocityFactorDivisor = 5; // the larger this is, the more vertices you will get
int velocityFactor = 1; // this will be overridden below

float minimumImageSize = 10.0f;

boolean firstDraw = true;

int currentImage = 1;

void settings() {
  // load the source image
  img = loadImage(imageFilename);
  
  // load the pixel colors into the pixels array
  img.loadPixels();
  
  // create a canvas that is proportional to the selected image
  size((int)(img.width / scalingFactor), (int)(img.height / scalingFactor));
  
  // scale the image for the window size
  scaledImage = loadImage(imageFilename);
  scaledImage.resize(width, height);
  
  // override velocityFactor
  velocityFactor = (int)(width / velocityFactorDivisor);
}

void setup() {
  // disable lines
  noStroke();
}

void keyPressed() {
  if(keyCode == KeyEvent.VK_R) {
    firstDraw = true; // if the user presses R, then reset
  }
}

void draw() {
  if(firstDraw) {
    image(scaledImage, 0, 0);
    firstDraw = false;
  }
  
  // right click to render to image file
  if(mousePressed && mouseButton == RIGHT) {
    save(imageFilename.replace(".jpg", "") + "_render_" + currentImage + ".tga");
    currentImage++;
  }
  
  if(mousePressed && mouseButton == LEFT && mouseX >= 0 && mouseX < width && mouseY >= 0 && mouseY < height) {
    int velocityX = minimumVelocity + (3 * velocity(mouseX, pmouseX, width));
    int velocityY = minimumVelocity + (3 * velocity(mouseY, pmouseY, height));
    
    color c = img.pixels[mousePositionToPixelCoordinate(mouseX, mouseY)];

    int vertexCount = ((3 * velocityFactor) + velocityX + velocityY) / velocityFactor;
    
    int minimumX = mouseX - (velocityX / 2);
    int maximumX = mouseX + (velocityX / 2);
    int minimumY = mouseY - (velocityY / 2);
    int maximumY = mouseY + (velocityY / 2);
    
    PGraphics pg = createGraphics(maximumX - minimumX, maximumY - minimumY);
    
    pg.beginDraw();
    pg.noStroke();
    pg.fill(c);

    // first draw a shape into the buffer
    pg.beginShape();
    for(int i = 0; i < vertexCount; i++) {
      pg.vertex(random(0, pg.width), random(0, pg.height));
    }
    pg.endShape();
    pg.endDraw();
    
    pg.loadPixels();
    
    // then copy image pixels into the shape
    
    // get the upper left coordinate in the source image
    int startingCoordinateInSourceImage = mousePositionToPixelCoordinate(minimumX, minimumY);
    
    // get the width of the source image
    int sourceImageWidth = (int)(img.width);
    
    // set the offset from the source image
    int offsetX = velocity(mouseX, pmouseX, width);
    int offsetY = velocity(mouseY, pmouseY, height);
    
    // ensure that the offset doesn't go off the canvas
    if(mouseX > width / 2) offsetX *= -1;
    if(mouseY > height / 2) offsetY *= -1;
    
    for(int y = 0; y < pg.height; y++) {
      for(int x = 0; x < pg.width; x++) {
        // calculate the coordinate in the destination image
        int newImageY = y * pg.width;
        int newImageX = x;
        int newImageCoordinate = newImageX + newImageY;
        
        // calculate the location in the source image

        int sourceImageX = (int)(((x + offsetX) * scalingFactor));
        int sourceImageY = (int)((((y + offsetY) * scalingFactor) * sourceImageWidth));
        int sourceImageCoordinate = (int)(startingCoordinateInSourceImage + sourceImageX + sourceImageY);
        
        // ensure the calculated coordinates are within bounds
        if(newImageCoordinate >= 0 && newImageCoordinate < pg.pixels.length 
           && sourceImageCoordinate >= 0 && sourceImageCoordinate < img.pixels.length
           && pg.pixels[newImageCoordinate] == c) {
          pg.pixels[newImageCoordinate] = img.pixels[sourceImageCoordinate];
        }
      }
    }
    
    image(pg, minimumX, minimumY);
  }
}

// convert the mouse position to a coordinate within the source image
int mousePositionToPixelCoordinate(int mouseX, int mouseY) {
  return (int)(mouseX * scalingFactor) + (int)((mouseY * scalingFactor) * img.width);
}

// This sort of calculates mouse velocity as relative to canvas size
int velocity(float x1, float x2, float size) {
  int val = (int)(Math.abs((x1 - x2) / size) * size);
  return val;
}

The Last Jump by Quincy Tahoma

The Last Jump, a painting by Quincy Tahoma.

Quincy Tahoma painted The Last Jump in 1954. The subject matter and execution issues reflect the problems in Tahoma's life.

The scene should be familiar to any Tahoma fan. The painting shares a title with a famous Tahoma print. In fact this scene is one he painted frequently because it was easy to sell. A man is trying to break a horse, but the horse is startled by a small animal, a bunny in this case, and the man is thrown. We can see him being thrown in the cartouche.

The signature on The Last Jump

The painting has some of the hallmarks of Tahoma's best works. It depicts the most dramatic moment of the effort, where the story could end in success or failure. The horse is nearly flawlessly executed. The poses of the horse and the man are extremely three dimensional, with limbs jutting out at the viewer, and flailing in dramatic ways. The drama is what separates Tahoma's work from his Navajo contemporaries, who often favored flat or static scenes.

Yet when you spend a little more time looking at this painting, you can see that something isn't quite right. It looks like the paintbrush slipped when painting the horse's left nostril and there's a black splotch right next to it. The torso of the man is unfinished, with the outline missing from his right side, and his abs hastily outlined with a liner brush. The green of the man's loin cloth is flat green, as opposed to the bright, multifaceted green that he almost always used for the same piece of clothing in his other paintings.

What happened?

Quincy Tahoma died in 1956 due to complications of alcoholism. In 1954 he drank and painted every day. His work from this time is quite inconsistent. These inconsistencies mar this image. When I look at this image I see a painting that was started in the morning, when he was sober, then hastily finished in the evening when he was drunk. The pose, the horse and the headdress are almost flawlessly planned and executed. This headdress is one of the most spectacular I've ever seen in a Tahoma painting, and the horse reveals Tahoma's skill at capturing the animals.

I imagine Tahoma taking a break after painting most of the picture, continuing to drink, then returning to finish it. He hastily filled in the loin cloth with a basic green straight from the bottle. Then he dabbed some black into the horse's nostrils, but his arm slipped and he splotched it. Then he got angry and decided to do the signature. His arm slipped when making the top of the 5 and he dragged it across where the 4 would go. He took his time and finished the cartouche strong, but then he noticed that he forgot to finish the man's torso. Using the same narrow brush he used for the cartouche, he quickly outlined the man's muscles. Even then, he still forgot one small inner line.

Of course this narrative is pure fancy, but the story told by this painting is plain: the man failed to tame the horse just as Tahoma failed to tame his alcoholism.
