Evan X. Merz

musician / technologist / human being

Tagged "frontend"

Single File Components are Good

I was reading the Vue.js documentation recently, and I came across a very compelling little passage explaining the thinking behind single file components. In a single file component, the template, the javascript logic, and perhaps even the styles are all collocated in the same file.
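
Here's a minimal sketch of what that looks like in Vue (the component name and contents below are made up for illustration):

<template>
  <button @click="count++">Clicked {{ count }} times</button>
</template>

<script>
// the markup above, the logic here, and the styles below all live in one .vue file
export default {
  name: 'ClickCounter',
  data() {
    return { count: 0 };
  },
};
</script>

<style scoped>
button {
  color: green;
}
</style>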

Putting all the parts of a component into a single file may seem odd to some programmers. After all, shouldn't we be separating logic from templates? Shouldn't we be separating styles from logic too?

The answer is that the right thing to do depends on the situation, and that's something a lot of modern programmers overlook. Many programmers seem to think that there are rules, and that the rules must always be followed. They don't realize that a rule is only worth following when it's actually helping you.

So the core question is this: is separating a single view into several files really helping your development process?

Here's what the Vue documentation says.

One important thing to note is that separation of concerns is not equal to separation of file types. In modern UI development, we have found that instead of dividing the codebase into three huge layers that interweave with one another, it makes much more sense to divide them into loosely-coupled components and compose them. Inside a component, its template, logic and styles are inherently coupled, and collocating them actually makes the component more cohesive and maintainable.

So the argument from the Vue developers is that keeping all the code related to one view inside one file is actually more maintainable than separating them.

I tentatively agree with this.

Obviously, it's good to separate certain things out into their own files. Foundational CSS styles and core JS helper methods should have their own place. But for all the code that makes a component unique, I think it's easiest for developers if that is in one file.

Here are the problems I face with breaking a view into multiple files.

1. What is each file called? Where is it located?

When you separate a view into multiple files, it becomes difficult to find each file. It's fine to have a standard for naming and locating files, but in any large software project there are going to be situations where the standard can't be followed for one reason or another. Maybe the view was created early in development, before the standard existed. Maybe the view is shared by several features. Or maybe the view was created by a trainee developer and the mis-naming was missed in code review. Whatever the cause, file names and locations can be stumbling blocks.

At my workplace, we have hundreds of templates named item.js.hamlbars. That's not very helpful, but what's more troublesome is when, for whatever reason, a list item couldn't be named item.js.hamlbars. In that case, I'm left searching for files with whatever code words I can guess.

2. Where is the bug?

When you separate a view into multiple files, you don't know where a bug may be coming from. This is especially troublesome if you are in a situation where conditional logic is being put into the templates. Once you have conditional logic in the templates, then you can't know if the bug is in the javascript or in the template itself. Furthermore, it then takes developer time to figure out where to put the fix.

Single File Components are a Bonus

Putting all the code for a single view into a single file is not only logical, it also doesn't violate the separation-of-concerns ethos. While the pieces are all in the same file, they each have a distinct section. So this isn't the bad old days of web development, where javascript was strewn across a page in any old place. Rather, collocating all the code for a single view in one file is a way to remove some common development headaches.

Redux in one sentence

Redux is a very simple tool that is very poorly explained. In this article, I will explain Redux in a single sentence that is sorely missing from the Redux documentation.

Redux is a pattern for modifying state using events.

That's it. It's a dedicated event bus for managing application state. Read on for a more detailed explanation, and for a specific explanation of the vocabulary used in Redux.

1. Redux is a design pattern.

The first thing you need to know is that Redux is not really software per se. True, you can download a Redux "library", but this is just a library of convenience functions and definitions. You could implement a Redux architecture without using Redux software.

In this sense, Redux is not like jQuery, or React, or most of the libraries you get from npm or yarn. When you learn Redux, you aren't learning one specific piece of software, but a group of concepts. In computer science, we call a group of concepts like this a design pattern. A design pattern is a set of concepts that simplifies an archetypal task (a task that recurs in software development). Redux is a design pattern for managing data in a javascript application.

2. Redux is an extension of the Observer Pattern.

I think this is the only bullet point needed to explain Redux to experienced developers.

The observer pattern is the abstract definition of what programmers refer to as events and event listeners. The observer pattern defines a situation where one object is listening to events triggered by another object.

In javascript, we commonly use the observer pattern when we wire up onClick events. When you set up an onClick event, you are saying that an object should listen to an HTML element, and respond when it is clicked. In this scenario, we are usually managing the visual state of the page using events. For example, a switch is clicked and it must respond by changing color.
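
For instance, here's that familiar pattern in plain javascript (the element id and class name are made up):

// the callback is the observer; the switch element is the subject it listens to
const lightSwitch = document.getElementById('light-switch');
lightSwitch.addEventListener('click', () => {
  lightSwitch.classList.toggle('switch--on');
});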

Redux is a dedicated event bus for managing internal application state using events.

So Redux is a way of triggering pre-defined events to modify state in pre-determined ways. This ensures that state isn't being modified in ad hoc ways, and it allows us to examine the history of application state by looking at the sequence of events that led to that state.

3. Actions are event definitions. Reducers are state modifiers.

The new vocabulary used by Redux makes it seem like it's something new, when it's really just a slight twist on existing technology and patterns.

Action = Event definition. Like events in any language, actions in Redux have a name and a payload. They are not active like classes or methods; they are (usually) passive structures that can be stored or transmitted as JSON.

Reducer = Event listener. A reducer changes application state in response to an action. It's called a Reducer because it literally works like something you would pass to the reduce function.

Dispatch = Trigger event. The dispatch method can be confusing at first, but it is just a way of triggering the event/action. You should think of it like the methods used to trigger exceptions in most languages. After all, exceptions are just another variation on the Observer pattern. So dispatch is like throw in Java or raise in Ruby.
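
To make that vocabulary concrete, here is a rough sketch of the whole pattern in plain javascript, without the Redux library. The action name and state shape are just examples, not anything prescribed by Redux.

// an action is just a passive object with a type and a payload
const increment = { type: 'counter/increment', payload: 1 };

// a reducer takes the previous state and an action, and returns the next state
const counterReducer = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'counter/increment':
      return { ...state, count: state.count + action.payload };
    default:
      return state;
  }
};

// a tiny hand-rolled store: dispatch runs the reducer and notifies listeners
const createStore = (reducer) => {
  let state = reducer(undefined, { type: '@@init' });
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (listener) => listeners.push(listener),
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((listener) => listener(state));
    },
  };
};

const store = createStore(counterReducer);
store.subscribe((state) => console.log(state.count));
store.dispatch(increment); // logs 1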

4. Redux manages local state separately from the backend.

Redux is a pattern for managing only the internal state of the application. It does not dictate anything about how you talk to your API, nor does it dictate the way that your internal state should be connected to the backend state.

It's a common mistake to put the backend code directly into the Redux action or reducer. Although this can seem sensible at times, tightly coupling the local application state with backend interactions is dangerous and often undesirable. For instance, a user may toggle a switch three or four times just to see what it does. Usually you want to update local state immediately when they hit the switch, but you only want to update the backend after they have stopped.
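
As a sketch of that idea (the endpoint, action name, and state shape below are hypothetical), you can dispatch the local change immediately and debounce the backend call:

// persist to the backend only after the user stops toggling; debounce comes from lodash
// or a hand-rolled helper, and /api/switches is a made-up endpoint
const saveSwitchLater = debounce((id) => {
  fetch(`/api/switches/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ on: store.getState().switches[id] }),
  });
}, 500);

// update local state immediately so the UI responds to every click
const toggleSwitch = (id) => {
  store.dispatch({ type: 'switches/toggle', payload: id });
  saveSwitchLater(id);
};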

Conclusion

Redux is not complex. It is a dedicated event bus for managing local application state. Actions are events. Reducers modify state. That's it. I hope that helps.

Here are a few other useful links:

https://alligator.io/redux/redux-intro/
https://www.tutorialspoint.com/redux/redux_core_concepts.htm
https://redux.js.org/introduction/core-concepts/

What are common mobile usability issues and how can you solve them?

In this article, I'm going to show you some common mobile usability issues reported by Google, and show you how to fix them using basic css.

Mobile usability issues in Google Search Console

A few days ago I received this rather alarming email from Google.

Email from Google reporting mobile usability issues.

I was surprised to see this because, although my blog isn't the most beautiful blog on the web, I pride myself on how clearly it reads on any device. So I logged into Google Search Console and clicked the "Mobile Usability" link on the left side to get more information.

Search console gave me the same three issues.

  1. Viewport not set
  2. Text too small to read
  3. Clickable elements too close together

Search console said they were all coming from https://evanxmerz.com/soundsynthjava/Sound_Synth_Java.html. This url points at the file for my ebook Sound Synthesis in Java. This ebook was written using markup compatible with early generations of Kindle devices, so it's no surprise that Google doesn't like it.

When I opened the book on a simulated mobile device, the reported usability issues were readily apparent.

The original mobile experience for my ebook.

Let's go through each of the issues and fix them one by one. This will require some elementary CSS and HTML, and I hope it shows how even basic coding skills can be valuable.

Viewport not set

Viewport is a meta tag that tells mobile devices how to scale the page to fit the screen. Without it, mobile browsers fall back to rendering the page at desktop width, which in practical terms means the fonts look very small to viewers on mobile devices.

To solve this issue, add this line to the html head section.

<meta name="viewport" content="width=device-width, initial-scale=1">

This may also solve the remaining issues, but we'll add some css for them just to be sure.

Add a CSS file

The following two fixes require us to use some css, so we will need to add a CSS file to our webpage.

Early Kindles didn't support much css, so it was better to format books with vanilla html and let the Kindle apply its own formatting. That limitation doesn't apply to the web, though, so I created a file called styles.css in the same directory as my html file and added the following line to the head section of my html page. It tells the page to pull style information from the file called "styles.css".

<link rel="stylesheet" href="styles.css">

Next we need to add some style rules to fix the remaining mobile usability issues.

Text too small to read

Making the font larger is easy. We could do it only for mobile devices, but I think increasing the font size slightly will help all readers. The default font size is 1rem (the root font size, typically 16px), so let's bump it up to 1.1rem by adding the following code to our css file.

body {
	font-size: 1.1rem;
}

Clickable elements too close together

The only clickable elements in this document are links, so this really means that the text is too small and the lines are too close together. The previous two fixes might already address this issue, but just in case they don't, let's make the lines taller by adding another rule to our body css.

body {
	font-size: 1.1rem;
	line-height: 1.5;
}

The default line-height is normal, which works out to roughly 1.2 in most browsers. Raising it to 1.5 makes each line noticeably taller, which should make it easier for users to tap the correct link.

Putting it all together

In addition to solving the mobile usability issues from Google, I wanted to ensure a good reading experience on all devices, so here's my final set of rules for the body tag in my css file.

At the bottom I added rules for max-width, margin, and padding. The max-width rule is so that readers with wide monitors don't end up with lines of text going all the way across their screens. The margin rule is an old-fashioned way of horizontally centering an element. The padding rule tells the browser to leave a little space above, below, and beside the body element.

body {
	font-size: 1.1rem;
	line-height: 1.5;

	max-width: 800px;
	margin: 0 auto;
	padding: 10px;
}

When I opened up the result in a simulated mobile device, I could see that the issues were fixed.

The improved mobile experience for my ebook.

How to make Dropbox ignore the node_modules folder

In this article I'm going to show you how to get Dropbox to ignore the node_modules folder.

Why code in a Dropbox folder?

I recently bought a new computer and experienced the pain of having to sync around 300GB of files from one Windows machine to another. This was primarily due to the in-progress music projects on my computer.

So I decided that I would just work within my Dropbox folder on my new computer. In the past, I've spoken to colleagues who did this. They were mostly artists, and didn't use git.

What I didn't consider at the time was that this would mean multiple syncing mechanisms for my code: I would be sending the files to both Dropbox and git.

Even worse, Dropbox locks files while it is working. So if you are running something like "npm install" or "npm update", those processes can fail because they can't work on a file at the same time as Dropbox.

I began experiencing these problems almost immediately, so I either had to find a way to ignore the node_modules folder in Dropbox or stop working on code within the Dropbox folder.

Dropbox ignore syntax

Fortunately, Dropbox added a way to ignore files or folders a few years ago. For Windows users, the following PowerShell command will ignore your node_modules folder. Please note that this needs to be done for every project, and that the path needs to be replaced with your actual directory structure.

Set-Content -Path 'C:\Users\yourname\Dropbox\yourproject\node_modules' -Stream com.dropbox.ignored -Value 1
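
If I understand the Dropbox documentation correctly, you can read or clear that same stream to verify the setting or undo it later (again, swap in your actual paths):

# check whether the folder is currently ignored (prints 1 if it is)
Get-Content -Path 'C:\Users\yourname\Dropbox\yourproject\node_modules' -Stream com.dropbox.ignored

# tell Dropbox to start syncing the folder again
Clear-Content -Path 'C:\Users\yourname\Dropbox\yourproject\node_modules' -Stream com.dropbox.ignored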

How to optimize largest contentful paint (LCP) on client side

In this article I'm going to show you some strategies I've used to optimize websites on the client side for largest contentful paint (lcp) and first contentful paint (fcp).

What is largest contentful paint and why does it matter?

Largest contentful paint (LCP) is a measure of how long it takes, from the initial request, for your site to render the largest thing on screen. Usually it measures the time it takes your largest image to appear on screen.

First contentful paint (FCP) is a measure of how long it takes, from the initial request, for your site to render anything at all on screen. In most cases, optimizing LCP will also optimize FCP, so this is the last time I'll mention FCP in this article.

Google Core Web Vitals considers 2.5 seconds on mobile at 4G speeds to be a good LCP. This is an extremely high bar, and most sites won't come close without addressing it directly.

It's also important to note that hitting an LCP of 2.5 seconds once is not sufficient to pass. Google evaluates your real-user page loads (at the 75th percentile) over a trailing 28-day window, so putting off work on LCP only makes it more painful in the future. You need to act now to move your LCP in the right direction.

Why not use server side rendering?

When searching for ways to optimize LCP, I came across various sites that suggested server side rendering. They suggest that rendering the page server side and delivering it fully rendered client side would be the fastest. I know from experience that this is wrong for several reasons.

First, you still need to render on the client side even if you deliver a flat html/js/css page. The browser still has to parse and compile the javascript and lay out the page, and that takes the bulk of the rendering time for modern, js-heavy webpages.

Second, rendering server side can only possibly be faster if your site isn't scaling. Yes, when you have a small number of simultaneous users, it's much faster to render on your server than on an old Android phone. Once you hit hundreds or thousands of simultaneous users, that math quickly flips, and it's much faster to render on thousands of client machines, no matter how old they are.

Why not use service workers?

Another suggestion I see is to use service workers to preload content. Please keep in mind that Google only measures the first load of a page. It does not measure subsequent loads. So any technique that improves subsequent loads is irrelevant to Google Core Web Vitals. Yes, this is incredibly frustrating, because frameworks like Next.js give you preloading out of the box.

Optimize images using modern web formats and a CDN

The most important thing you can do to achieve a lower LCP is to optimize the delivery of your images. Unless you use very few images, as I do on this blog, images are the largest part of the payload of your webpages, often around 10 to 100 times larger than all other assets combined. So optimizing your images should be your number one concern.

First, you need to be using modern web image formats. This means using lightweight formats such as webp, rather than heavier formats such as png.

Second, you need to deliver images from a content distribution network (CDN). Delivering images from edge locations near your users is an absolute must.

Third, you need to show images properly scaled for a user's device. This means requesting images at the specific resolution that will be displayed for a user, rather than loading a larger image and scaling it down with css.

Finally, Google seems to prefer progressive images, which give the user an image experience such as a blurred image before the full image has loaded into memory. There are many robust packages on the web for delivering progressive images.

I suggest you consider ImgIX for optimizing your images. ImgIX is both an image processor and a CDN. With open source components that work with various CMSs, and in various environments, ImgIX is a one stop shop that will quickly solve your image delivery issues. I've used it at two scaling websites, and in both cases it has been extremely impactful.
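
As a rough illustration of the second and third points, a responsive image tag served through an image CDN might look something like this (the domain, file name, and widths are placeholders):

<img
  src="https://example.imgix.net/product.jpg?w=400&fm=webp"
  srcset="https://example.imgix.net/product.jpg?w=400&fm=webp 400w,
          https://example.imgix.net/product.jpg?w=800&fm=webp 800w"
  sizes="(max-width: 600px) 400px, 800px"
  alt="Product photo"
/>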

Deliver the smallest amount of data to each page

After you optimize your images, the next thing to consider is how much data you are sending to the client. You need to send the smallest amount of data that is necessary to render the page. This is typically an issue on list pages.

If you're using out-of-the-box CRUD APIs built in Ruby on Rails, or many other frameworks, then you typically have one template for rendering one type of thing. You might have a product template that renders all the information needed about a product. Then that same product template is used on product detail pages, and on list pages. The problem with that is that much less information is needed on the list pages. So it's imperative that you split your templates into light and heavy templates, then differentiate which are used in which places.

This is more of a backend change than a frontend change, but optimizing frontend performance requires the cooperation of an entire team.

Deliver static assets using a CDN

After putting our images through ImgIX, we stopped worrying about CDNs. We thought that because images were so much larger than the static assets, it wouldn't make much difference to serve static assets from our servers rather than a CDN.

This is true if you are just beginning to optimize your frontend performance; putting static assets on a CDN won't lead to a tremendous drop in LCP.

However, once you are trying to get your page load time down to the absolute minimum, every little bit counts. We saved an average of around two tenths of a second on our pages when we put our static assets on a CDN, and two tenths of a second is not nothing.

Another great thing about putting your static assets on a CDN is that it typically requires no code changes. It's simply a matter of integrating the CDN into your build and deployment pipeline.

Eliminate third party javascript

Unfortunately, third party javascript libraries are frequently sources of a significant amount of load time. Some third party javascript is not minimized, some pulls more javascript from slow third party servers, and some uses old fashioned techniques such as document.write.

To continue optimizing our load time we had to audit the third party javascript loaded on each page. We made a list of what was loaded where, then went around to each department and asked how they were using each package.

We initially found 19 different trackers on our site. When we spoke with each department we found that 6 of them weren't even being used any more, and 2 more were only lightly used.

So we trimmed down to 11 third party javascript libraries then set that as a hard limit. From then on, whenever anyone asked to add a third party library, they had to suggest one they were willing to remove. This was a necessary step to meet the aggressive performance demands required by Google.

Optimize your bundle size

The final thing to do to optimize your client side load time is to optimize your bundle size. When we talk about bundle size, we're talking about the amount of static assets delivered on your pages: javascript, html, css, and more. Typically, parsing and compiling javascript takes the bulk of the time, so that's what you should focus on.

Use code splitting

Code splitting means that your app generates multiple bundles that are potentially different for each page. This is necessary in order to deliver the smallest amount of code required for a given page. Most modern bundlers, like Webpack, will do this automatically.
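
In a React app, route-level code splitting is usually just a dynamic import wrapped in React.lazy. Here is a minimal sketch, assuming a hypothetical ProductPage component:

import React, { Suspense, lazy } from 'react';

// each lazy import becomes its own bundle, fetched only when this page actually renders
const ProductPage = lazy(() => import('./pages/ProductPage'));

const App = () => (
  <Suspense fallback={<div>Loading...</div>}>
    <ProductPage />
  </Suspense>
);

export default App;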

Forget import *

Stop using "import *" entirely. You should only ever import the methods you are using. When you "import *" you import every bit of code in that module as well as every bit of code that it relies on. In most circumstances, you only need a fraction of that.

It's true that a feature called tree shaking can eliminate some of the cruft when you're importing more than you need, but it's sometimes tough to figure out where tree shaking is working and where it's failing. To find out, you need to run a bundle analysis and comb through it carefully.

It's much easier to simply stop using "import *".
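
In other words (with a hypothetical dateHelpers module):

// don't do this: it pulls in the whole module, plus everything the module depends on
import * as dateHelpers from './dateHelpers';
dateHelpers.formatDate(new Date());

// do this: import only what you use, so the bundler can drop the rest
import { formatDate } from './dateHelpers';
formatDate(new Date());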

Use composition wisely

I almost named this section "Forget the factory pattern", because the factory pattern creates situations very similar to "import *". In the factory pattern, a method is called that returns an object with all the methods needed to fulfill a responsibility or interface. What I see most often is a misapplication of the factory pattern, whereby programmers dump a whole bunch of methods into a pseudo-module and then use only one or two of them.

// don't do this
const createDateHelpers = () => {
  const formatDate = () => { /* ... */ };
  const dateToUtc = () => { /* ... */ };
  return {
    formatDate,
    dateToUtc,
  };
};

You can see that if you want to call "formatDate", then you need to run "createDateHelpers().formatDate()". This is essentially the same as importing every method in the date helpers module, and again, you are importing all their dependencies as well.

This is where composition can be applied: export each method individually, and also provide a factory that assembles the full object when it's actually needed.

// use composition
export const formatDate = () => { /* ... */ };
export const dateToUtc = () => { /* ... */ };

const createDateHelpers = () => {
  return {
    formatDate,
    dateToUtc,
  };
};

export default createDateHelpers;

Render a simplified static page

It's important to note that optimizing your existing pages isn't the only strategy available. Amazon's website crushes Google's Core Web Vitals, even though it isn't very fast. It does this by rendering a simplified static template for the first load, then filling it in with the correct content. So if you visit Amazon, you may see some evergreen content flash up on the page before content specific to you loads in.

That's a fine way to pass Google's Core Web Vitals, but it isn't optimizing the performance of your page. That's tailoring your page to meet a specific test. It's not cheating, but it's not necessarily honoring the intention of Google's UX metrics.

Conclusion

There are two basic categories of changes that are necessary to optimize the client side of a website for largest contentful paint: optimizing delivery of images, and optimizing delivery of code. In this article I've listed several strategies I've used in the past to address both of these categories. These strategies include using progressive images, using CDNs, and delivering as little data and code as is necessary to render a page.

Evan's React Interview Cheat Sheet

In this article I'm going to list some of the things that are useful in React coding interviews, but are easily forgotten or overlooked. Most of the techniques here are useful for solving problems that often arise in coding interviews.

Table of contents

  1. Imitating componentDidMount using React hooks
  2. Using the previous state in the useState hook
  3. Using useRef to refer to an element
  4. Custom usePrevious hook to simplify a common useRef use case
  5. Vanilla js debounce method
  6. Use useContext to avoid prop-drilling

Imitating componentDidMount using React hooks

The useEffect hook is one of the more popular additions to the React hooks API. The problem with it is that it replaces a handful of critical lifecycle methods that were much easier to understand.

It's much easier to understand the meaning of "componentDidMount" than "useEffect(() => {}, [])", especially when useEffect replaces componentDidMount, componentDidUpdate, and componentWillUnmount.

The most common use of the useEffect hook in interviews is to replace componentDidMount. The componentDidMount method is often used for loading whatever data is needed by the component. In fact, the reason it's called useEffect is because you are using a side effect.

The default behavior of useEffect is to run after every render, but it can also be run conditionally by passing a second argument: an array of dependencies that re-triggers the effect whenever one of them changes.

In this example, we use the useEffect hook to load data from the omdb api.

  // fetch movies. Notice the use of async
  const fetchMovies = (newSearchTerm = searchTerm) => {
    // See http://www.omdbapi.com/
    fetch(`http://www.omdbapi.com/?apikey=${apikey}&s=${newSearchTerm}&page=${pageNumber}`).then(async (response) => {
      const responseJson = await response.json();

      // if this is a new search term, then replace the movies
      if(newSearchTerm != previousSearchTerm) {
        setMovies(responseJson.Search);
      } else {
        // if the search term is the same, then append the new page to the end of movies
        setMovies([...movies, ...responseJson.Search]);
      }
    }).catch((error) => {
      console.error(error);
    });
  };

  // imitate componentDidMount
  useEffect(fetchMovies, []);

Note that this syntax for using asynchronous code in useEffect is the suggested way to do so.
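
And if you want the effect to re-run when a value changes, pass that value in the dependency array:

  // re-run the fetch whenever the search term changes
  useEffect(() => {
    fetchMovies(searchTerm);
  }, [searchTerm]);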

Using the previous state in the useState hook

The useState hook is probably the easiest and most natural hook, but it does obscure one common use case. For instance, do you know how to use the previous state in the useState hook?

It turns out that you can pass a function into the useState set method and that function can take the previous state as an argument.

Here's a simple counter example.

const [count, setCount] = useState(0);
setCount(prevCount => {
  return prevCount + 1;
});

Using useRef to refer to an element

The useRef hook is used to store any mutable value across renders of a component. The most common use is to access an element in the DOM.

Initialize the reference using the useRef hook.

// use useRef hook to keep track of a specific element
const movieContainerRef = useRef();

Then attach it to an element in the render return.

<div className={MovieListStyles.movieContainer} ref={movieContainerRef}>
  {movies && movies.length > 0 && 
    movies.map(MovieItem)
  }
</div>

Then you can use the .current property to access the current DOM element for that div, and attach listeners or do anything else you need to do with a div.

// set up a scroll handler
useEffect(() => {
  // capture the node once so the cleanup function doesn't rely on a ref that may have changed
  const node = movieContainerRef.current;

  const handleScroll = debounce(() => {
    const scrollTop = node.scrollTop;
    const scrollHeight = node.scrollHeight;

    // do something with the scrolling properties here...
  }, 150);

  // add the handler to the movie container
  node.addEventListener("scroll", handleScroll, { passive: true });

  // remove the handler from the movie container when the component unmounts
  return () => node.removeEventListener("scroll", handleScroll);
}, []);

Custom usePrevious hook to simplify a common useRef use case

The useRef hook can be used to store any mutable value. So it's a great choice when you want to look at the previous value in a state variable. Unfortunately, the logic to do so is somewhat tortuous and can get repetitive. I prefer to use a custom usePrevious hook from usehooks.com.

import {useEffect, useRef} from 'react';

// See https://usehooks.com/usePrevious/
function usePrevious(value) {
  // The ref object is a generic container whose current property is mutable ...
  // ... and can hold any value, similar to an instance property on a class
  const ref = useRef();
  // Store current value in ref
  useEffect(() => {
    ref.current = value;
  }, [value]); // Only re-run if value changes
  // Return previous value (happens before update in useEffect above)
  return ref.current;
}

export default usePrevious;

Using it is as simple as one extra line when setting up a functional component.

// use the useState hook to store the search term
const [searchTerm, setSearchTerm] = useState('orange');

// use custom usePrevious hook
const previousSearchTerm = usePrevious(searchTerm);

Vanilla js debounce method

Okay, this next one has nothing to do with React, except for the fact that it's a commonly needed helper method. Yes, I'm talking about "debounce". If you want to reduce the jittery quality of a user interface, but you still want to respond to actions by the user, then it's important to throttle the rate of events your code responds to. Debounce is the name of a method for doing this from the lodash library.

The debounce method waits a preset interval until after the last call to debounce to call a callback method. Effectively, it waits until it stops receiving events to call the callback. This is commonly needed when responding to scroll or mouse events.

The problem is that you don't want to install lodash in a coding interview just to use one method. So here's a vanilla javascript debounce method from Josh Comeau.

// wait until `wait` milliseconds have passed since the last call before invoking the callback
const debounce = (callback, wait) => {
  let timeoutId = null;
  return (...args) => {
    window.clearTimeout(timeoutId);
    timeoutId = window.setTimeout(() => {
      callback.apply(null, args);
    }, wait);
  };
}

export default debounce;

Here's an example of how to use it to update a movie list when a new search term is entered.

// handle text input
const handleSearchChange = debounce((event) => {
  setSearchTerm(event.target.value);
  fetchMovies(event.target.value);
}, 150);

return (
  <div className={MovieListStyles.movieList}>
    <h2>Movie List</h2>
    <div className={MovieListStyles.searchTermContainer}>
      <label>
        Search:
        <input type="text" defaultValue={searchTerm} onChange={handleSearchChange} />
      </label>
    </div>
    <div 
      className={MovieListStyles.movieContainer}
      ref={movieContainerRef}
      >
      {movies && movies.length > 0 && 
        movies.map(MovieItem)
      }
    </div>
  </div>
);

Use useContext to avoid prop-drilling

The last thing an interviewer wants to see during a React coding interview is prop drilling. Prop drilling occurs when you need to pass a piece of data from one parent component, through several intervening components, to a child component. Prop drilling results in a bunch of repeated code where we are piping a variable through many unrelated components.

To avoid prop drilling, you should use the useContext hook.

The useContext hook is a React implementation of the provider pattern. The provider pattern is a way of providing system wide access to some resource.

It takes three code changes to implement the useContext hook. First, you've got to call createContext in the parent component that will maintain the data. Then you've got to wrap the relevant part of your app in the Provider component that createContext gives you.

export const DataContext = React.createContext()

function App() {
  const data = { ... }

  return (
    <div>
      <DataContext.Provider value={data}>
        <SideBar />
        <Content />
      </DataContext.Provider>
    </div>
  )
}

Then you've got to import the context in the child component, and call useContext to get the current value.

import { DataContext } from '../app.js';

...

const data = React.useContext(DataContext);

What has made you stumble in React interviews?

What are some other common ways to make mistakes in React coding interviews? Send me your biggest React coding headaches on Twitter @EvanXMerz.