Evan X. Merz

musician / technologist / human being


How to conquer Google Core Web Vitals

In 2021, Google finally launched their long-awaited Core Web Vitals. Core Web Vitals are the metrics that Google uses to measure the user experience (UX) on your website. They track things like how long the page takes to load images, how long it takes to become interactive, and how much it jitters around while loading.

Core Web Vitals are important to all online businesses because they influence where your site appears on Google when users are searching for content related to your site. They are far from the only consideration when it comes to search engine optimization (SEO), but they are increasingly important.

In this article, I'm going to give you a high level overview of how you can get green scores from Google's Core Web Vitals. We're not going to dig into implementation-level details yet, but we will look at the general strategies necessary to optimize the scores from Google.

This article is informed by my experience optimizing Core Web Vitals scores for the websites I manage and develop professionally. It's also informed by my study and experimentation on my own websites and projects.

Prepare for a long battle

Before we begin, I want to set your expectations properly. On any reasonably large website, Core Web Vitals scores cannot be fixed in a night, or a weekend, and maybe not even in a month. It will likely take effort each week, and in each development project, to ensure that your scores get to the highest level and stay there.

This is the same expectation we should have for any SEO efforts. It takes time to get things right in the code, the architecture, and in the content. All three of those things must work together to put your site on the first page of Google search results for high volume search terms.

In my experience, a successful SEO operation requires coordination of web developers, SEO experts, writers, and whoever manages the infrastructure of your website.

1. Understand Core Web Vitals

The first step to conquering Core Web Vitals is to understand what they are measuring. So get access to Google Search Console and start looking at some reports. Then use PageSpeed Insights to see the values that Google is actually calculating for your site.

Don't be scared if your scores are really low at first glance. Most sites on the internet get very low scores because Google's standards are incredibly high.

You should see some acronyms that represent how your site did on key metrics. Four metrics come up again and again in these reports; three of them (LCP, CLS, and FID) are the Core Web Vitals proper, while FCP is a closely related diagnostic metric that Google also reports. A sketch showing how to observe them in the browser follows the list.

  • LCP = Largest Contentful Paint.
    • The LCP of a page is how long it takes for the largest thing to be shown on screen. Typically this is the banner image on a page, but PageSpeed Insights can sometimes get confused if you use header tags in the wrong places.
  • FCP = First Contentful Paint.
    • The FCP of a page is how long it takes for anything to appear on screen.
  • CLS = Cumulative Layout Shift.
    • The CLS is a measure of how much elements on the page jitter around while the page is loading.
  • FID = First Input Delay.
    • FID is how long it takes for the page to begin responding to the user's first interaction, such as a click or a tap.
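
If you want to see roughly where these numbers come from, most of them can be observed directly in the browser with the PerformanceObserver API. The sketch below is a simplified approximation of what tools like PageSpeed Insights and Google's web-vitals library compute; it just logs values to the console, whereas a real setup would report them to an analytics endpoint.

// Simplified sketch: observe approximations of the key metrics in the browser.
// A production setup would send these to an analytics endpoint, not the console.

// LCP: render time of the largest element painted so far.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate (ms):', entries[entries.length - 1].startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// FCP: the first time any content is painted.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });

// CLS: sum of layout shifts that happen without recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
    }
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });

// FID: delay between the first interaction and when the browser could handle it.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0];
  console.log('FID (ms):', first.processingStart - first.startTime);
}).observe({ type: 'first-input', buffered: true });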

2. Make the right architecture decisions when starting a new site

Most people can't go back in time and choose a different platform for their site; they're stuck with the architectural decisions made at the beginning of the project. If you're in that category, then don't fret. The next section will talk about how you can maximize existing infrastructural technologies to get the maximum performance out of your website.

On the other hand, if you are starting from scratch then you need to critically consider your initial architectural decisions. It's important to understand how those architectural decisions will impact your Core Web Vitals score, and your search ranking on Google.

Client side rendering vs. server side rendering

The most important decision you have to make is whether you will serve a site that is fully rendered client side or a site that renders pages on the server side.

I want to encourage you to choose a site architecture that is fully client side rendered, because client side rendering provides huge benefits when it comes to both scaling and SEO, and the drawbacks previously associated with it are no longer relevant.

Imagine the typical render process for a webpage. This process can be broken down into many steps, but it's important to realize that there are 3 phases.

  1. The server fetches data from a data store.
  2. The server renders the page.
  3. The client renders the page.

Steps 1 and 3 are unavoidable. If you have a backend of any complexity, then you must have a data store. If you want users to see your website, then their web browsers must render it.

But what about step 2? Is it necessary? No. Worse, it's often the most time-consuming step of the three.

If you build your site as a client-side js/html/css bundle that talks to an API on the backend, then you can cut out server side rendering entirely. If you want to maximize your site's Core Web Vitals scores, and hence your SEO, cut out the server side rendering step where possible. Adding that step is inherently slower than strategies that eliminate it.
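
To make that concrete, here is a minimal sketch of a fully client side rendered page, assuming a hypothetical /api/products endpoint and an #app container: the server only serves static files and JSON, and the browser builds all of the html.

// Minimal sketch of client side rendering against a backend API.
// The /api/products endpoint and the #app container are hypothetical.
async function renderProducts() {
  const response = await fetch('/api/products');
  const products = await response.json();

  document.getElementById('app').innerHTML = products
    .map((p) => `<div class="product"><h2>${p.name}</h2><p>${p.price}</p></div>`)
    .join('');
}

document.addEventListener('DOMContentLoaded', renderProducts);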

This means that platforms that are used for server side rendering are probably something to avoid in 2022 and beyond. So Ruby on Rails is probably a bad choice, unless you're only using it to build an API. Similarly, Next.js is probably a bad choice, unless you are only using it to generate a static site.

You should also look at benchmarks for various website architectures. You will see that vanilla React/Vue/Javascript sites generally outperform server side rendered sites by a wide margin.

But isn't a client side rendered site bad for SEO?

If someone hasn't done much SEO lately, they may raise the counterargument that client side rendering is bad for SEO. Up until around 2017, Google didn't run the javascript on web pages it was scanning. If the javascript wasn't executed, then a purely client side rendered page would show up as an empty page and never rank on Google. We had to come up with lots of workarounds to avoid that fate.

Since 2017, however, Google does run the javascript on the page. So having a purely client-side rendered site is perfectly fine.

Server side rendered sites are also harder to scale

I've scaled several websites in my career, so I can tell you that it's much more difficult to scale a server side rendered website than one that renders on the client. Scaling on the server side requires a complex balance of caching, parallelization, and optimization that is entirely unnecessary if your site is rendered client side.

It's true that you would still have to scale the API on the server side, but that's true in both cases.

3. Maximize mature web technologies

But what if you can't change the fundamental architecture of your website? Are you stuck? No. There are a number of web technologies that exist to mitigate the performance impact of various architectural decisions.

  1. Use a Content Delivery Network (CDN), especially for images. Take a look at services like ImgIX, which can optimize your images and play the role of a CDN. Other commonly used CDNs include CloudFront and Cloudflare.
  2. Optimize your cache layer. Try to hit the database as rarely as possible. The API that backs your site will get bottlenecked by something, and quite frequently that something is your database. Use a cache layer such as Redis to eliminate database queries where possible (see the sketch after this list).
  3. Use geolocated servers. The CDN should take care of serving content from locations near your users. However, if requests are ever getting back to your web servers, make sure you have servers near your users as well.
  4. Enable automatic horizontal scaling of your servers. You need to be able to add more servers during high traffic times, such as the holidays. Doing so requires parallelizing your server code and handling sessions across multiple servers. There are many ways to mitigate those problems, such as the sticky sessions feature offered by Heroku. You need to be able to scale horizontally at least on your API, if not on the servers that serve your site.
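
As a sketch of the cache layer described in item 2, here is what the cache-aside pattern might look like in a Node.js API using the ioredis client. The getProductFromDatabase function, the key format, and the 10 minute expiry are assumptions to illustrate the pattern; tune them for your own data and traffic.

// Sketch of the cache-aside pattern with Redis, using the ioredis client.
import Redis from 'ioredis';

const redis = new Redis(); // defaults to localhost:6379
const CACHE_TTL_SECONDS = 600; // 10 minutes; adjust for how often your data changes

// Placeholder for your real database query.
async function getProductFromDatabase(productId) {
  return { id: productId, name: 'example product' };
}

async function getProduct(productId) {
  const cacheKey = `product:${productId}`;

  // 1. Try the cache first.
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // 2. Fall back to the database on a cache miss.
  const product = await getProductFromDatabase(productId);

  // 3. Store the result so the next request skips the database.
  await redis.set(cacheKey, JSON.stringify(product), 'EX', CACHE_TTL_SECONDS);

  return product;
}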

4. Monitor the results

Finally, you need to monitor Google Search Console every day. Google Search Console is one of the first things I check every time I open my work computer to begin my day. Make this a habit, because you need to be aware of the trends Google is seeing in the user experience on your site. If there is a sudden issue, or things are trending the wrong way, you need to proactively address the problem.

You should also use other site monitoring tools. Google's scores are aggregated over a rolling 28 day window, so you need a response time faster than 28 days. I encourage you to set up a monitoring service such as Pingdom or StatusCake, which will give you real time monitoring of website performance.

Finally, you should run your own spot checks. Even if everything looks okay at a glance, you should regularly check the scores on your most important pages. Run PageSpeed Insights or WebPageTest on your homepage and landing pages, and make sure to schedule time to take care of the issues that arise.

Conclusion

To conquer Google Core Web Vitals requires coordination of the effort of many people over weeks or months of time, but with a rigorous approach you can get to the first or second page of Google search results for high volume search terms. Make sure to spend time understanding Google Core Web Vitals. Ensure that the critical architectural decisions at the beginning of a project are in line with your long term goals. Maximize your use of mature web technologies to mitigate architectural issues. And monitor your site's performance every day.

With these strategies you can take your site to the next level.

What are common mobile usability issues and how can you solve them?

In this article, I'm going to show you some common mobile usability issues reported by Google, and show you how to fix them using basic css.

Mobile usability issues in Google Search Console

A few days ago I received this rather alarming email from Google.

Email from Google reporting mobile usability issues.

I was surprised to see this because, although my blog isn't the most beautiful blog on the web, I pride myself on how clearly it reads on any device. So I logged into Google Search Console and found the "Mobile Usability" link on the left side to get more information.

Search console gave me the same three issues.

  1. Viewport not set
  2. Text too small to read
  3. Clickable elements too close together

Search console said they were all coming from https://evanxmerz.com/soundsynthjava/Sound_Synth_Java.html. This url points at the file for my ebook Sound Synthesis in Java. This ebook was written using markup compatible with early generations of Kindle devices, so it's no surprise that Google doesn't like it.

When I opened the book on a simulated mobile device, I could see the reported mobile usability issues for myself.

The original mobile experience for my ebook.

Let's go through each of the issues and fix them one by one. This will require some elementary CSS and HTML, and I hope it shows how even basic coding skills can be valuable.

Viewport not set

Viewport is a meta tag that tells mobile devices how to interpret and scale the page. We need to provide it to give web browsers a basis for showing a beautiful page to viewers. In practical terms, this means that without a viewport meta tag, fonts will look very small to viewers on mobile devices because their browsers don't know how to scale the page.

To solve this issue, add this line to the html head section.

<meta name="viewport" content="width=device-width, initial-scale=1">

This may also solve the remaining issues, but we'll add some css for them just to be sure.

Add a CSS file

The following two fixes require us to use some css, so we will need to add a CSS file to our webpage.

Early Kindles didn't support much css, so it was better to format books with vanilla html and let the Kindle apply its own formatting. That limitation doesn't apply to the web, though, so I created a file called styles.css in the same directory as my html file, and added this line within the head section of my html page. It tells the page to pull style information from the file called "styles.css".

<link rel="stylesheet" href="styles.css">

Next we need to add some style rules to fix the remaining mobile usability issues.

Text too small to read

Making the font larger is easy. We could do it only for mobile devices, but I think increasing the font size slightly will help all readers. The default font size is 1rem, so let's bump it up to 1.1rem by adding the following code to our css file.

body {
	font-size: 1.1rem;
}

Clickable elements too close together

The only clickable elements in this document are links, so this warning means that the link text is too small and the lines are too close together. The previous two fixes might also address this issue, but just in case they don't, let's make the lines taller by adding another line to our body css.

body {
	font-size: 1.1rem;
	line-height: 1.5;
}

The default line-height is normal, which works out to roughly 1.2 in most browsers. By moving it to 1.5, we are making the lines noticeably taller, which should make it easier for users to tap the correct link.

Putting it all together

In addition to solving the mobile usability issues from Google, I wanted to ensure a good reading experience on all devices, so here's my final set of rules for the body tag in my css file.

At the bottom I added rules for max-width, margin, and padding. The max-width rule is so that readers with wide monitors don't end up with lines of text going all the way across their screens. The margin rule is an old-fashioned way of horizontally centering an element. The padding rule tells the browser to leave a little space above, below, and beside the body element.

body {
	font-size: 1.1rem;
	line-height: 1.5;

	max-width: 800px;
	margin: 0 auto;
	padding: 10px;
}

When I opened up the result in a simulated mobile device, I could see that the issues were fixed.

The improved mobile experience for my ebook.

How to optimize largest contentful paint (LCP) on client side

In this article I'm going to show you some strategies I've used to optimize websites on the client side for largest contentful paint (LCP) and first contentful paint (FCP).

What is largest contentful paint and why does it matter?

Largest contentful paint (LCP) is a measure of how long from initial request it takes for your site to render the largest thing on screen. Usually it measures the time it takes your largest image to appear on screen.

First contentful paint (FCP) is a measure of how long from initial request it takes for your site to render anything on screen. In most cases, optimizing LCP will also optimize FCP, so this is the last time I'll mention FCP in this article.

Google Core Web Vitals considers 2.5 seconds to be a good LCP when loading the site on a mobile device at 4G speeds. This is an extremely high bar to reach, and most sites won't come close without addressing it directly.

It's also important to note that reaching an LCP of 2.5 seconds once is not sufficient to pass. Google looks at roughly the 75th percentile of real page loads over a rolling 28 day window, so your pages must stay under 2.5 seconds consistently. This means that putting off work on LCP will only be more painful in the future. You need to act now to move your LCP in the right direction.

Why not use server side rendering?

When searching for ways to optimize LCP, I came across various sites that suggested server side rendering. They suggest that rendering the page server side and delivering it fully rendered client side would be the fastest. I know from experience that this is wrong for several reasons.

First, you still need to render on the client side even if you deliver a flat html/js/css page. The client still has to download, parse, and execute that code, and that takes the bulk of rendering time for modern, js-heavy webpages.

Second, rendering server side can only possibly be faster if your site isn't scaling. Yes, when you have a small number of simultaneous users, it's much faster to render on your server than on an old Android phone. Once you hit hundreds or thousands of simultaneous users, that math quickly flips, and it's much faster to render on thousands of client machines, no matter how old they are.

Why not use service workers?

Another suggestion I see is to use service workers to preload content. Please keep in mind that Google only measures the first load of a page. It does not measure subsequent loads. So any technique that improves subsequent loads is irrelevant to Google Core Web Vitals. Yes, this is incredibly frustrating, because frameworks like Next.js give you preloading out of the box.

Optimize images using modern web formats and a CDN

The most important thing you can do to achieve a lower LCP is to optimize the delivery of your images. Unless you use very few images, as I do on this blog, images are the largest part of the payload for your webpages. They are typically around 10 to 100 times larger than all other assets combined. So optimizing your images should be your number one concern.

First, you need to be using modern web image formats. This means using lightweight formats such as webp, rather than heavier formats such as png.

Second, you need to deliver images from a content delivery network (CDN). Delivering images from edge locations near your users is an absolute must.

Third, you need to show images properly scaled for a user's device. This means requesting images at the specific resolution that will be displayed for a user, rather than loading a larger image and scaling it down with css.

Finally, Google seems to prefer progressive images, which show the user something, such as a blurred preview, before the full image has loaded into memory. There are many robust packages on the web for delivering progressive images.

I suggest you consider ImgIX for optimizing your images. ImgIX is both an image processor and a CDN. With open source components that work with various CMSs and in various environments, ImgIX is a one stop shop that will quickly solve your image delivery issues. I've used it at two websites as they scaled, and in both cases it was extremely impactful.
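
As a rough sketch of what that looks like in practice, here is a small helper that builds an ImgIX URL sized to the device. The example.imgix.net domain and the #banner element are assumptions; w, q, and auto are standard ImgIX rendering parameters, but check the ImgIX documentation for the options available on your account.

// Sketch: request an image scaled to the displayed size in a modern format.
function imageUrl(path, displayWidth) {
  // Only request as many pixels as the device will actually display.
  const width = Math.ceil(displayWidth * (window.devicePixelRatio || 1));
  const params = new URLSearchParams({
    w: String(width),        // scale to the displayed width
    auto: 'format,compress', // serve webp/avif to browsers that support them
    q: '60',                 // trade a little quality for a smaller payload
  });
  return `https://example.imgix.net${path}?${params.toString()}`;
}

// Usage: an 800px-wide banner on a 2x screen requests a 1600px-wide image.
document.getElementById('banner').src = imageUrl('/images/banner.jpg', 800);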

Deliver the smallest amount of data to each page

After you optimize your images, the next thing to consider is how much data you are sending to the client. You need to send the smallest amount of data that is necessary to render the page. This is typically an issue on list pages.

If you're using out-of-the-box CRUD APIs built in Ruby on Rails, or many other frameworks, then you typically have one template for rendering one type of thing. You might have a product template that renders all the information needed about a product. Then that same product template is used on product detail pages, and on list pages. The problem with that is that much less information is needed on the list pages. So it's imperative that you split your templates into light and heavy templates, then differentiate which are used in which places.

This is more of a backend change than a frontend change, but optimizing frontend performance requires the cooperation of an entire team.
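
Here is a sketch of what that split might look like in a JavaScript API layer. The field names are hypothetical; the point is that list pages receive only the fields a list card actually renders, while detail pages receive the full record.

// Sketch: separate "light" and "heavy" serializers for the same product record.

// Used on list pages: just enough to render a card in a grid.
export function serializeProductForList(product) {
  return {
    id: product.id,
    name: product.name,
    price: product.price,
    thumbnailUrl: product.thumbnailUrl,
  };
}

// Used on detail pages: the full record, including long text and related data.
export function serializeProductForDetail(product) {
  return {
    ...serializeProductForList(product),
    description: product.description,
    specifications: product.specifications,
    reviews: product.reviews,
  };
}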

Deliver static assets using a CDN

After putting our images through ImgIX, we stopped worrying about CDNs. We thought that because images were so much larger than the static assets, it wouldn't make much difference to serve static assets from our servers rather than a CDN.

This is true if you are just beginning to optimize your frontend performance. Putting static assets on a CDN won't lead to a tremendous drop in LCP.

However, once you are trying to get your page load time down to the absolute minimum, every little bit counts. We saved an average of around two tenths of a second on our pages when we put our static assets on a CDN, and two tenths of a second is not nothing.

Another great thing about putting your static assets on a CDN is that it typically requires no code changes. It's simply a matter of integrating the CDN into your build and deployment pipeline.

Eliminate third party javascript

Unfortunately, third party javascript libraries are frequently a significant source of load time. Some third party javascript is not minified, some pulls more javascript from slow third party servers, and some uses old-fashioned techniques such as document.write.

To continue optimizing our load time we had to audit the third party javascript loaded on each page. We made a list of what was loaded where, then went around to each department and asked how they were using each package.

We initially found 19 different trackers on our site. When we spoke with each department we found that 6 of them weren't even being used any more, and 2 more were only lightly used.

So we trimmed down to 11 third party javascript libraries, then set that as a hard limit. From then on, whenever anyone asked to add a third party library, they had to suggest one they were willing to remove. This was a necessary step to meet the aggressive performance demands required by Google.

Optimize your bundle size

The final thing to do to optimize your client side load time is to optimize your bundle size. When we talk about bundle size, we're talking about the amount of static assets delivered on your pages. This includes javascript, html, css, and more. Typically, parsing and compiling javascript takes the bulk of the time, so that's what you should focus on.

Use code splitting

Code splitting means that your app generates multiple bundles that are potentially different for each page. This is necessary in order to deliver the smallest amount of code required for a given page. Most modern bundlers, like webpack, can do this automatically, as shown in the sketch below.
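
The usual way to give the bundler a split point is a dynamic import(). Here is a minimal sketch, assuming a hypothetical heavy charting module that only some pages need; the bundler emits it as a separate chunk that is downloaded on demand.

// Sketch: load a heavy module only on the pages that actually need it.
// './charting' and the element ids are hypothetical.
async function showAnalyticsDashboard() {
  const { renderCharts } = await import('./charting');
  renderCharts(document.getElementById('dashboard'));
}

// Only users who open the dashboard ever download the charting chunk.
document
  .getElementById('open-dashboard')
  .addEventListener('click', showAnalyticsDashboard);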

Forget import *

Stop using "import *" entirely. You should only ever import the methods you are using. When you "import *" you import every bit of code in that module as well as every bit of code that it relies on. In most circumstances, you only need a fraction of that.

It's true that a feature called tree shaking can eliminate some of the cruft in scenarios where you're importing more than you need, but it's sometimes tough to figure out where tree shaking is working and where it's failing. To find out, you need to run a bundle analysis and comb through it carefully.

It's much easier to simply stop using "import *".
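
For example, assuming a hypothetical dateHelpers module, the difference looks like this:

// Avoid: a namespace import makes it hard for the bundler to drop unused code.
import * as dateHelpers from './dateHelpers';
dateHelpers.formatDate(new Date());

// Prefer: named imports keep only what you actually use.
import { formatDate } from './dateHelpers';
formatDate(new Date());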

Use composition wisely

I almost named this section "Forget the factory pattern", because the factory pattern creates situations very similar to "import *". In the factory pattern, a method is called that returns an object with all the methods needed to fulfill a responsibility or interface. What I see most often is a misapplication of the factory pattern, where programmers dump a whole bunch of methods into a pseudo-module and then use only one or two of them.

// don't do this
const createDateHelpers = () => {
  const formatDate = () => {...};
  const dateToUtc = () => {...};
  return {
    formatDate,
    dateToUtc,
  };
};

You can see that if you want to call "formatDate", then you need to run "createDateHelpers().formatDate()". This is essentially the same as importing every method in the date helpers module, and again, you are importing all their dependencies as well.

This is where composition can be applied: export each method individually, and also export a factory that composes them into the full helper object when it's needed.

// use composition
export const formatDate = () => {...};
export const dateToUtc = () => {...};

export default function createDateHelpers() {
  return {
    formatDate,
    dateToUtc,
  };
}

Render a simplified static page

It's important to note that optimizing your existing pages isn't the only strategy available. Amazon's website crushes Google's Core Web Vitals, even though it isn't very fast. It does this by rendering a simplified static template for the first load, then filling it in with the correct content. So if you visit Amazon, you may see some evergreen content flash up on the page before content specific to you loads in.

That's a fine way to pass Google's Core Web Vitals, but it isn't optimizing the performance of your page. That's tailoring your page to meet a specific test. It's not cheating, but it's not necessarily honoring the intention of Google's UX metrics.
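
If you do want to experiment with that approach, here is a minimal sketch: the evergreen shell ships as static html, and a hypothetical /api/recommendations endpoint fills in the personalized section after the page has painted.

// Sketch: ship a static, evergreen shell, then swap in personalized content.
// The #recommendations container and /api/recommendations endpoint are hypothetical.
async function fillInPersonalizedContent() {
  const container = document.getElementById('recommendations');
  try {
    const response = await fetch('/api/recommendations');
    const items = await response.json();
    container.innerHTML = items.map((item) => `<li>${item.title}</li>`).join('');
  } catch (err) {
    // If personalization fails, the evergreen shell is still a complete page.
    console.error('Could not load recommendations', err);
  }
}

// Wait for the load event so the static shell paints before the swap happens.
window.addEventListener('load', fillInPersonalizedContent);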

Conclusion

There are two basic categories of changes that are necessary to optimize the client side of a website for largest contentful paint: optimizing delivery of images, and optimizing delivery of code. In this article I've listed several strategies I've used in the past to address both of these categories. These strategies include using progressive images, using CDNs, and delivering as little data and code as is necessary to render a page.

How to measure the performance of a webpage

Measuring the performance of a webpage is an extremely complex topic that could fill a book. In this article I'm going to introduce some of the commonly used tools for measuring web performance, and show you why you need to take multiple approaches to get a complete picture of the performance of any webpage.

Why measure webpage performance?

Why measure webpage performance? Performance is so critical to the modern web that this question seems almost comical. Here are just a few reasons why you should be measuring and monitoring webpage performance.

  1. User experience impacts Google search rankings via Core Web Vitals.
  2. Faster page loads correlate with higher conversion rates and lower bounce rates.
  3. Pages must be performant to be accessible to the widest possible audience.

Four approaches to measuring performance

No single tool is going to give you a comprehensive perspective on webpage performance. You must combine multiple approaches to really understand how a page performs. You must look at multiple browsing patterns, multiple devices, multiple times of day, and multiple locations.

In this article, I'm going to talk about four different approaches to measuring the performance of a page and recommend some tools for each of them. You must use at least one from each category to get a complete picture of the performance of a webpage.

The four approaches I'm going to talk about are...

  1. One time assessments
  2. Live monitoring and alerts
  3. Full stack observability
  4. Subjective user tests

Why do we need multiple perspectives to measure performance?

People who are new to monitoring web performance often make mistakes when assessing web performance. They pull up a website on their phone, count off seconds in their head, then angrily email the developers complaining that the website takes 10 seconds to load.

The actual performance experienced by users of your website is an important perspective, but there are dozens of things that can impact the performance of a single page load. Maybe the page wasn't in the page cache. Maybe the site was in the middle of a deploy. Maybe hackers are attacking network infrastructure. Maybe local weather is impacting an ISP's network infrastructure.

You can't generalize from a single page load to the performance of that page. I've seen many people fall into this trap, from executives to marketing team members. It makes everyone look bad. The website looks bad because of the slow page load. The developers look bad because of the poor user experience. The person who called it out looks bad because it looks like they don't know what they are doing. The data analysts look bad because they aren't exposing visualizations of performance that other team members can use.

So read this article and fire up some of the tools before sending out that angry email.

Please note that the tools listed here are just the tools that I have found to be effective in my career. There are many other options that may be omitted simply because I've never used them.

One time assessments

One time assessments are the most commonly used tools. They can be run against a production webpage at any time to get a snapshot of its performance at that moment. One thing that's nice about these tools is that they can be used effectively without paying anything.

PROS:

  1. Easy to use
  2. Easy to quantify
  3. Fast
  4. Reliable
  5. Affordable

CONS:

  1. May lack perspective over time
  2. Lacks perspective on actual browsing patterns, including subsequent page loads
  3. Lacks perspective on other locations
  4. May lack information on the source of an issue

TOOLS:

  1. PageSpeed Insights
  2. Chrome Audit/Lighthouse
  3. WebPageTest
  4. GTmetrix
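
Most of these assessments can also be scripted so that you get a score on every build. Here is a sketch using the lighthouse and chrome-launcher npm packages; it assumes both are installed along with a local Chrome, and the exact API can vary between Lighthouse versions, so treat it as a starting point rather than a drop-in script.

// Sketch: run a Lighthouse performance audit from Node.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
try {
  const result = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  // The score is reported from 0 to 1; multiply by 100 to match the report UI.
  console.log('Performance score:', result.lhr.categories.performance.score * 100);
} finally {
  await chrome.kill();
}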

Live monitoring and alerts

The performance of a webpage can degrade very quickly. It can degrade due to poor network conditions, an influx of bot visitors from a new ad, an influx of legitimate visitors during peak hours, or the deploy of a non-performant feature.

When the performance does degrade, you need to know immediately, so you can either roll back the deploy, or investigate the other factors that may be slowing down the site.

Notice that price isn't listed as a benefit (pro) or drawback (con) of live monitoring tools. You generally need to pay something to use these tools effectively, but usually that price is less than $100 a month, even on large sites.

PROS:

  1. Real-time notifications of performance changes
  2. Easy to quantify
  3. Can be configured to request from other locations

CONS:

  1. Limited information
  2. Lacks perspective over time
  3. May lack information on the source of an issue
  4. Fragile configuration can lead to false positives

TOOLS:

  1. StatusCake
  2. Pingdom

Full stack observability

Some tools offer insights into the full page lifecycle over time. These tools are constantly ingesting data from your website and compiling that data into configurable visualizations. They look at page load data on the client side and server side using highly granular measurements such as database transaction time, and the number of http requests.

If you want a single source of information to measure performance on your site, then these tools are the only option. They can provide one time assessments, monitoring, and backend insights.

One big problem with these tools is that they are quite complex. To use these tools effectively, you need developers to help set them up, and you need data analysts to extract the important information exposed by these tools.

PROS:

  1. Includes detailed breakdowns that can help identify the source of performance issues
  2. Includes data over time
  3. Highly granular data

CONS:

  1. Difficult to use
  2. Expensive
  3. Requires developer setup and configuration

TOOLS:

  1. New Relic
  2. Sentry
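
On the client side, the setup for these tools is usually just an initialization call with a project key. Here is a minimal sketch using the Sentry browser SDK; the DSN is a placeholder from your project settings, and depending on your SDK version you may also need to enable a browser tracing integration to capture performance data, so check the Sentry documentation for your version.

// Minimal sketch of client side instrumentation with the Sentry browser SDK.
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  tracesSampleRate: 0.2, // sample 20% of page loads for performance data
});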

Subjective user tests

The final approach for measuring the performance of a webpage is subjective. In other words, you must actually look at how your site performs when real people are using it. This can be as simple as opening a website on all your devices and trying to browse like a normal user, or you can set up time to interview real users and gather qualitative information about their experience.

I once worked at a company that required developers to attend in-person user tests every two weeks. This allowed every developer to see how users actually browsed and experienced their work. This may be overkill for most companies, but it's a perspective that can't be ignored.

PROS:

  1. No additional tools are necessary
  2. Exposes actual, real world user experience
  3. Exposes issues arising from real browsing patterns, including subsequent page loads

CONS:

  1. It's easy to prematurely generalize from a small number of cases
  2. Can be expensive and difficult to do well
  3. Difficult to quantify
  4. Not timely

TOOLS:

  1. Web browsers on multiple devices
  2. Google Analytics
  3. User testing services

Conclusion

In this article, I introduced four different ways to measure the performance of a webpage. Each of them is necessary to get a full understanding of the performance of a page. I also introduced some of my favorite tools that can be used for each approach.

The four approaches are...

  1. One time assessments
  2. Live monitoring and alerts
  3. Full stack observability
  4. Subjective user tests

I hope that this prepares you to start wading into the complex world of measuring website performance.