
How to optimize largest contentful paint (LCP) on the client side

In this article, I'm going to show you some strategies I've used to optimize websites on the client side for largest contentful paint (LCP) and first contentful paint (FCP).

What is largest contentful paint and why does it matter?

Largest contentful paint (LCP) measures how long it takes, from the initial request, for your site to render the largest thing on screen. Usually that's the time it takes your largest image to appear.

First contentful paint (FCP) measures how long it takes, from the initial request, for your site to render anything on screen. In most cases, optimizing LCP will also optimize FCP, so this is the last time I'll mention FCP in this article.

Google Core Web Vitals considers an LCP of 2.5 seconds or less to be good on mobile, when loading the site at 4g speeds. This is an extremely high bar to reach, and most sites won't come close without addressing it directly.

It's also important to note that reaching an LCP of 2.5 seconds once is not sufficient. Your pages must keep LCP under 2.5 seconds at the 75th percentile of page loads over a 28-day window in order to pass. This means that putting off work on LCP will only be more painful in the future. You need to act now to move your LCP in the right direction.

Why not use server side rendering?

When searching for ways to optimize LCP, I came across various sites that suggested server side rendering. They claim that rendering the page on the server and delivering it fully rendered to the client would be fastest. I know from experience that this is wrong, for several reasons.

First, you still need to render on the client side even if you deliver a flat html/js/css page. The client still needs to parse and compile the javascript, and that takes the bulk of rendering time for modern, js-heavy webpages.

Second, rendering on the server can only be faster if your site isn't scaling. Yes, when you have a small number of simultaneous users, it's much faster to render on your server than on an old Android phone. Once you hit hundreds or thousands of simultaneous users, that math quickly flips, and it's much faster to render on thousands of client machines, no matter how old they are.

Why not use service workers?

Another suggestion I see is to use service workers to preload content. Please keep in mind that Google only measures the first load of a page. It does not measure subsequent loads. So any technique that improves subsequent loads is irrelevant to Google Core Web Vitals. Yes, this is incredibly frustrating, because frameworks like Next.js give you preloading out of the box.

Optimize images using modern web formats and a CDN

The most important thing you can do to achieve a lower LCP is to optimize the delivery of your images. Unless you use very few images, as I do on this blog, images are the largest part of the payload for your webpages. They are typically around 10 to 100 times larger than all other assets combined. So optimizing your images should be your number one concern.

First, you need to be using modern web image formats. This means using lightweight formats such as webp, rather than heavier formats such as png.

Second, you need to deliver images from a content delivery network (CDN). Delivering images from edge locations near your users is an absolute must.

Third, you need to show images properly scaled for a user's device. This means requesting images at the specific resolution that will be displayed for a user, rather than loading a larger image and scaling it down with css.

Finally, Google seems to prefer progressive images, which give the user something to look at, such as a blurred placeholder, before the full image has loaded into memory. There are many robust packages on the web for delivering progressive images.
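
If you want to hand-roll the technique, here's a minimal blur-up sketch. It assumes hypothetical markup where each image carries a tiny placeholder in src and the full image in a data-src attribute:

// minimal blur-up sketch
// assumes hypothetical markup: <img class="progressive" src="tiny.jpg" data-src="full.jpg">
document.querySelectorAll('img.progressive').forEach((img) => {
  const loader = new Image();
  loader.onload = () => {
    img.src = img.dataset.src;   // swap in the full image once it's in memory
    img.classList.add('loaded'); // e.g. use css to fade the blur away
  };
  loader.src = img.dataset.src;
});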

I suggest you consider ImgIX for optimizing your images. ImgIX is both an image processor and a CDN. With open source components that work with various CMSs and in various environments, ImgIX is a one-stop shop that will quickly solve your image delivery issues. I've used it at two scaling websites, and in both cases it was extremely impactful.
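
As an example, here's a sketch of how you might request a properly scaled, modern-format image through ImgIX. The domain is hypothetical, but w, dpr, and auto are standard ImgIX URL parameters:

// sketch: build an ImgIX URL scaled to the rendered size and device pixel ratio
// the imgix domain is hypothetical
const imageUrl = (path, displayWidth) => {
  const params = new URLSearchParams({
    w: displayWidth,                                // rendered width in css pixels
    dpr: Math.min(window.devicePixelRatio || 1, 3), // account for retina screens
    auto: 'format,compress',                        // serve webp to browsers that support it
  });
  return `https://example.imgix.net${path}?${params}`;
};

// e.g. imageUrl('/products/widget.jpg', 400)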

Deliver the smallest amount of data to each page

After you optimize your images, the next thing to consider is how much data you are sending to the client. You need to send the smallest amount of data that is necessary to render the page. This is typically an issue on list pages.

If you're using out-of-the-box CRUD APIs built in Ruby on Rails, or many other frameworks, then you typically have one template for rendering one type of thing. You might have a product template that renders all the information needed about a product. Then that same product template is used on product detail pages and on list pages. The problem is that much less information is needed on the list pages. So it's imperative that you split your templates into light and heavy versions, then differentiate which is used where, as in the sketch below.
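
Here's a sketch of the split in javascript; the product fields are hypothetical, but the shape of the solution is what matters:

// sketch: a light template for list pages, a heavy template for detail pages
// the fields are hypothetical
const productListItem = (product) => ({
  id: product.id,
  name: product.name,
  price: product.price,
  thumbnailUrl: product.thumbnailUrl,
});

const productDetail = (product) => ({
  ...productListItem(product), // reuse the light template
  description: product.description,
  specifications: product.specifications,
  reviews: product.reviews,
});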

This is more of a backend change than a frontend change, but optimizing frontend performance requires the cooperation of an entire team.

Deliver static assets using a CDN

After putting our images through ImgIX, we stopped worrying about CDNs. We thought that because images were so much larger than the static assets, it wouldn't make much difference to serve static assets from our servers rather than a CDN.

This is true if you are just beginning to optimize your frontend performance. Putting static assets on a CDN won't lead to a tremendous drop in LCP.

However, once you are trying to get your page load time down to the absolute minimum, every little bit counts. We saved an average of around two tenths of a second on our pages when we put our static assets on a CDN, and two tenths of a second is not nothing.

Another great thing about putting your static assets on a CDN is that it typically requires no code changes. It's simply a matter of integrating the CDN into your build and deployment pipeline.
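
For example, if you bundle with webpack, pointing your asset URLs at a CDN can be a one-line configuration change. This is a sketch, and the CDN host is hypothetical:

// webpack.config.js (sketch; the cdn host is hypothetical)
module.exports = {
  output: {
    filename: '[name].[contenthash].js',           // hashed names let the cdn cache forever
    publicPath: 'https://cdn.example.com/assets/', // asset urls point at the cdn
  },
};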

Eliminate third party javascript

Unfortunately, third party javascript libraries are frequently the source of a significant amount of load time. Some third party javascript is not minified, some pulls more javascript from slow third party servers, and some uses old-fashioned techniques such as document.write.

To continue optimizing our load time we had to audit the third party javascript loaded on each page. We made a list of what was loaded where, then went around to each department and asked how they were using each package.
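
If you want a quick starting point for that list, you can run a rough audit from the browser console:

// rough audit sketch: list every script loaded from a third party host
[...document.scripts]
  .filter((script) => script.src && new URL(script.src).host !== location.host)
  .forEach((script) => console.log(script.src));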

We initially found 19 different trackers on our site. When we spoke with each department, we found that 6 of them weren't even being used anymore, and 2 more were only lightly used.

So we trimmed down to 11 third party javascript libraries then set that as a hard limit. From then on, whenever anyone asked to add a third party library, they had to suggest one they were willing to remove. This was a necessary step to meet the aggressive performance demands required by Google.

Optimize your bundle size

The final thing to do to optimize your client side load time is to reduce your bundle size. When we talk about bundle size, we're talking about the amount of static assets delivered on your pages. This includes javascript, html, css, and more. Typically, parsing and compiling javascript takes the bulk of the time, so that's what you should focus on.

Use code splitting

Code splitting means that your app generates multiple bundles that are potentially different for each page. This is necessary in order to deliver the smallest amount of code required for a given page. Most modern bundlers, such as webpack, will do this automatically.
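
Dynamic imports are the usual way to mark a split point. Here's a sketch, where './chart.js', renderChart, and the element ids are hypothetical:

// sketch: load a heavy module only when it's actually needed
// './chart.js', renderChart, and the element ids are hypothetical
document.querySelector('#show-chart').addEventListener('click', async () => {
  // the bundler emits this module as a separate chunk, fetched on demand
  const { renderChart } = await import('./chart.js');
  renderChart(document.querySelector('#chart-root'));
});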

Forget import *

Stop using "import *" entirely. You should only ever import the methods you are using. When you "import *" you import every bit of code in that module as well as every bit of code that it relies on. In most circumstances, you only need a fraction of that.

It's true that a feature called tree shaking can eliminate some of the cruft when you import more than you need, but it's sometimes tough to figure out where tree shaking is working and where it's failing. To do so, you need to run a bundle analysis and comb through it carefully.

It's much easier to simply stop using "import *".

Use composition wisely

I almost named this section "Forget the factory pattern", because the factory pattern creates situations very similar to "import *". In the factory pattern, a method is called that returns an object with all the methods needed to fulfill a responsibility or interface. What I see most often is a misapplication of the factory pattern, whereby programmers dump a whole bunch of methods into a pseudo-module, then use only one or two of them.

// don't do this
const createDateHelpers = () => {
  const formatDate = () => {...};
  const dateToUtc = () => {...};
  return {
    formatDate,
    dateToUtc,
  };
};

You can see that if you want to call "formatDate", then you need to run "createDateHelpers().formatDate()". This is essentially the same as importing every method in the date helpers module, and again, you are importing all their dependencies as well.

This is where composition can be applied: build a module that exports each method individually, but still provides the full object when needed.

// use composition
export const formatDate = () => {...};
export const dateToUtc = () => {...};

// the factory is still available as the default export
const createDateHelpers = () => {
  return {
    formatDate,
    dateToUtc,
  };
};
export default createDateHelpers;

Render a simplified static page

It's important to note that optimizing your existing pages isn't the only strategy available. Amazon's website crushes Google's Core Web Vitals, even though it isn't very fast. It does this by rendering a simplified static template for the first load, then filling it in with the correct content. So if you visit Amazon, you may see some evergreen content flash up on the page before content specific to you loads in.

That's a fine way to pass Google's Core Web Vitals, but it isn't optimizing the performance of your page. That's tailoring your page to meet a specific test. It's not cheating, but it's not necessarily honoring the intention of Google's UX metrics.

Conclusion

There are two basic categories of changes that are necessary to optimize the client side of a website for largest contentful paint: optimizing delivery of images, and optimizing delivery of code. In this article I've listed several strategies I've used in the past to address both of these categories. These strategies include using progressive images, using CDNs, and delivering as little data and code as is necessary to render a page.

Evan X. Merz

Evan X. Merz holds degrees in computer science and music from The University of Rochester, Northern Illinois University, and University of California at Santa Cruz. He works as a programmer at a tech company in Silicon Valley. In his free time, he is a programmer, musician, and author. He makes his online home at evanxmerz.com and he only writes about himself in third person in this stupid blurb.