Why is software development so slow?
Why can't the developers move more quickly? Why does every project take so long? In this article I'm going to talk about what causes software development to slow down. This is based on my experience as a programmer and manager in Silicon Valley for over a decade, as well as conversations with other people in similar roles.
Is my team slow because...
First of all, I want to answer some questions that I've heard from managers.
- Is my team slow because we use PHP? No.
- Is my team slow because we use Java? No.
- Is my team slow because we use INSERT PRODUCT HERE? Probably not.
- Is my team slow because they don't understand web 3? No.
- Is my team slow because of a bad manager? Maybe, but if the team has been through multiple managers and still hasn't moved any faster, then this is unlikely.
- Is my team slow because they have or do not have college degrees? No.
- Is my team slow because we don't use agile? Probably not.
So then why is my development team so slow?
Now that we've gotten that out of the way, we can drill into what actually causes teams to move slowly. None of the causes I cover below are particularly technical. I tend to think that teams move slowly for non-technical reasons.
They're estimating poorly
The most common reason people think developers are moving slowly is that the estimates are poor. For one reason or another, developers are not spending the time or creating the artifacts necessary to generate accurate estimates.
I once joined a small team where nobody ever wrote anything down. They would attend planning meetings where they would hash out the requirements, they would make up an estimate on the spot, then they would immediately break to start writing code. The stakeholders wanted to know why tasks never seemed to come in within the estimated time frame.
In my experience, an accurate software development estimate requires multiple levels of planning. You must begin with use cases, then work down through progressively more detailed plans until you reach units of work that can be estimated.
Here's how I break down a project to make an accurate estimate:
- Write all requirements down in plain language. This can be in the form of use cases, or user stories, or any top-level form that anyone can understand.
- Talk through what an acceptable solution would look like with other developers.
- Break that solution into individual tasks using a tool like JIRA.
- Estimate those tasks.
- Add a little extra room for unknowns, debugging, integration, and testing.
- Add up the final estimate.
There are multiple ways to execute each of these steps, but that's the general flow for all successful software development estimation that I've seen in my career.
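To make that concrete with purely hypothetical numbers: if the individual tasks come out to 3, 5, 5, 8, and 2 days, that's 23 days of estimated work; adding roughly 20% for unknowns, debugging, integration, and testing brings the total estimate to about 28 working days.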
Technical debt
Another big reason a team might be moving slowly is tech debt. Tech debt accumulates in every project when the company tries to move quickly. It happens because the solutions that can be implemented fastest are usually not scalable to future use cases that the developers can't yet anticipate.
I once worked on a web development team that needed to make the list pages faster on an e-commerce website. The list pages were slow because the server was delivering a massive payload containing every product on the list page, and filtering and sorting were done client side. That's easy to fix, right? Just serve the first page of products, then serve the next when the user scrolls down. But to do that, we must add server-side sorting and filtering, because the list will be different for different combinations of sorts and filters. But to do that, we must have products stored in some sort of queryable database, and this team was only using a key-value store at the time. That's how tech debt can be problematic. In that scenario, it would take months to solve the problem in a scalable way.
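To make the shape of that scalable fix concrete, here's a minimal sketch of the kind of server-side paging endpoint we would have needed. This is hypothetical Express-style code, and the query-capable productStore helper is exactly the piece the team didn't have; none of the names come from the actual project.

const express = require('express');
const app = express();

// Hypothetical handler that returns one page of products.
// `productStore` is an assumed data layer that can filter, sort, and page,
// which is the capability the key-value store lacked.
app.get('/api/products', async (req, res) => {
  const page = Number(req.query.page) || 1;
  const pageSize = Number(req.query.pageSize) || 24;
  const { sort = 'price', filter = '' } = req.query;

  // the store does the filtering, sorting, and paging,
  // so the client only ever receives a single page of products
  const products = await productStore.query({
    filter,
    sort,
    offset: (page - 1) * pageSize,
    limit: pageSize,
  });

  res.json(products);
});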
Ultimately, we found a different solution. We delivered the tiniest possible payload for each product, which achieved a similar end goal while side-stepping the tech debt. But that tech debt was still there, and would still be a problem for many related projects.
There is no silver bullet for solving tech debt. In my career, the best way I've found to deal with tech debt is to reserve one developer each sprint to iteratively move the project closer to what the team agrees is a more scalable solution.
Interpersonal conflict
The single biggest thing that slows down development teams is interpersonal and interdepartmental conflict. Maybe a manager is at odds with a team member, or maybe two team members see their roles very differently. If two people working on one project have different views about how to execute that project it can be absolutely crushing.
I think that proactively avoiding and addressing conflict is the single thing that can instantly help teams move faster. The most staggeringly successful group of techniques I've discovered for doing so is called motivational interviewing.
Motivational interviewing is a way of communicating with peers, stakeholders, and direct reports that allows two people to agree to be on the same team. It's a way of building real rapport with people that allows you to build successful working relationships over time. Specifically, it's a way of focusing a conversation on talk that meaningfully moves both parties toward a mutually acceptable solution.
I strongly recommend Motivational Interviewing for Leadership by Wilcox, Kersh, and Jenkins. To say that that book changed my life is an understatement.
Unclear instructions
The final category of issue that slows down development teams is unclear instructions.
I once worked for a person who was very fond of sending short project descriptions over email and expecting fast results. One email might read something like "I want to be selling in Europe in one month", another might say "I want to be able to hold auctions for some of our products."
It IS possible to work with such minimal instructions. It falls upon the development team to outline what they think the stakeholder wants, develop plans based on that outline, then execute based on those plans.
The problem with working from minimal instructions is that no two people see the same problem in exactly the same way. Say you are developing that auction software. Many questions immediately arise. Who can bid on the auctions? Is the auction for many products or just one? Does the product have selectable options, or are they preset? Are bids visible to other customers? How do we charge the user for the item if the final bid exceeds what is typically put on a credit card?
If the stakeholder isn't able to answer those questions early in the process, then the project may go significantly astray. If the development team doesn't understand the instructions, then they may waste days or weeks pursuing irrelevant or untimely work. A stakeholder who gives such vague instructions must address questions in a timely manner or risk wasting everyone's time.
In conclusion
So why is your development team so slow? I've laid out the four common causes that I've seen in my career.
- Poor estimates
- Tech debt
- Interpersonal conflict
- Unclear instructions
If you have something else to add to this list, then let me know on Twitter @EvanXMerz.
Sam Hyde Harris: Seeing the Unusual at Casa Romantica
I recently attended the exhibit Sam Hyde Harris: Seeing the Unusual at Casa Romantica in San Clemente, California. The exhibit was curated by Maurine St. Gaudens and Joseph Marsman and brought together around 60 pieces by Sam Hyde Harris in just two rooms on the lovely Casa Romantica estate.
Sam Hyde Harris is mostly remembered by collectors today as an active member of the California Art Club who produced hundreds of beautiful landscapes of California in the early 20th century and taught numerous students who went on to great careers. He was also a member of a group called the California Impressionists.
The focus of this exhibit was introducing a modern audience to the massive volume of commercial work he produced. This includes numerous posters for companies such as Union Pacific and the Santa Fe Railroad, as well as advertising pieces for local businesses and theater companies.
Here's the Curator's Statement from Maurine St. Gaudens.
Sam Hyde Harris, Seeing the Unusual explores the diverse oeuvre of this noted twentieth century California artist. Although widely known for his fine art compositions, few people realize the extent of Harris' commercial advertising work. Harris' designs shaped the consciousness of early to mid-twentieth century consumers and travelers. I really was quite surprised that so little attention had been paid to Harris the commercial artist. This exhibition explores this complex aspect of the artist's career.
My personal association with Sam Hyde Harris actually began more than thirty years ago when I was contacted by Harris' widow, Marion Dodge Harris, to catalogue the artist's estate. In the process I discovered examples of work the artist had created for a who's who of clients, not only in California, but across the western United States and nationally. It's the commercial work that today represents an historical record of product lines and services that were a part of everyday life from the 1920s - 1950s.
On a national level, Harris had a long and highly creative relationship with the railroad industry, specifically the Santa Fe, Southern Pacific, and Union Pacific rail lines. His iconic Art Deco themed Southern Pacific's New Daylight poster has become one of the most recognizable images of this art form and one that has become highly praised by railroad and design enthusiasts alike.
Unfortunately, over the years, as is common with many commercial artists, their work has gone uncredited, and Harris is no exception. Although, with new research and recent discoveries, this oversight is now being corrected, and Harris' commercial designs are being recognized by a new generation of historians.
Harris' commercial work wasn't the only thing in the exhibition, though. There were plenty of paintings of the subjects for which he was most famous, including boats in harbor, and the Chavez Ravine before it was developed.
I spent an hour poring over every piece in the exhibit. Many of the pieces came from the collection of Charles N. Mauch, including several massive paintings that became his most famous posters. I particularly enjoyed seeing the different stages of the Taxco Mission poster produced for Southern Pacific. Through a sketch, a painting, and a print of the final poster, visitors could see the complete evolution of one of his commercial pieces.
I only recently became a fan and collector of work by Sam Hyde Harris, and I was glad to have the opportunity to see so much of his work in one place. I was also glad that this less well-known California artist was being brought to new audiences in the 21st century.
Sam Hyde Harris: Seeing the Unusual ran from November 19, 2021 through February 27, 2022.
Discovering artist Viola M. Allen
I recently purchased Emerging from the Shadows, Vol. I: A Survey of Women Artists Working in California, 1860-1960 and while reading it I discovered one particularly interesting artist named Viola M. Allen. Her facility with a palette knife seems almost miraculous, so of course I started searching the web for more of her work. Other than a few items available at auction, I haven't been able to find much information. So I wanted to share a couple quotes from the fabulous book by Maurine St. Gaudens.
Viola M. Allen was born on March 14, 1906, in Queens, New York, the daughter of Safarine D. Allen and Minnie (Eschman) Allen. According to her biographical artist's promotional card, she attended, in New York, the Pratt Institute and the National Academy of Design, where she studied under Charles Curran. Her card also indicates that she studied portrait painting with Moskowitz and Borgdonav and sculpture with Haffner and Monahan. A resident of Manhattan, New York, through the 1930s, by the latter part of the decade she had moved to Los Angeles, California; she remained a California resident until her death.
A study of Viola's paintings shows that she was a palette knife painter. Her ability to create realistic compositions by applying oil paint to a canvas, or board, by the use of a flexible painter's palette knife rather than a brush is found in most of her work; it is a difficult technique and one not widely practiced. Palette knives vary in length and width, and each one has a different tip, enabling the artist to achieve a different type of stroke, with the oil paint usually being applied very thickly on the canvas or board. During her career Viola had a commercial art studio in Malibu for many years where she did illustration and advertising art. In California, she exhibited with the California Art Club, 1955-1967.
This was all I could find out about her, and I'm happy to share it on the internet and hopefully bring a little more attention to an artist who clearly had a command of the palette knife that few have ever achieved.
Of course I had to see if I could find one of her works at a reasonable price, and eBay came to my aid once again. I was able to find a beautiful small landscape listed for a song, and now it hangs over my desk next to Sam Hyde Harris and Quincy Tahoma.
I don't think I've ever seen another artist wield a palette knife as fluently as she did, so I'll be on the lookout for more of her work. I hope that the internet can help preserve the legacy of a great artist who clearly deserves a re-evaluation.
Why to stick with Heroku, or make the switch
One question I hear a lot lately is when a company should stick with Heroku or switch to something else. In this article I'm going to lay out the pros and cons for Heroku, and compare it with the typical alternative, AWS.
Three reasons to stick with Heroku
Don't fall victim to thinking that "the grass is always greener on the other side." That hot new technology on AWS or Azure may look cool now, but Heroku offers a lot of great features that should fit the bill for many growing companies.
Heroku supports easy scaling with sticky sessions
Horizontally scaling any web application is hard. In a traditional web app, you must optimize your code so that it runs in parallel. You must deal with race conditions that arise when multiple servers are trying to interact with a shared resource such as a cache or a database. You must find a way to balance load across multiple servers while sharing state across all instances.
Session data is visitor-specific data that is stored on the server. If a visitor's session is stored on one server, but their request is routed to another server, then that server won't know about anything they've done in the current session. It may not know whether they're logged in. It may not have the browsing filters that they've configured.
Session affinity, also known as sticky sessions, is one solution to the problem of sharing session data across servers. With session affinity enabled, all of a visitor's requests will always be routed to the same instance. There are some drawbacks to session affinity, but the benefit of being able to scale horizontally before having to tackle some parallelization problems may outweigh the drawbacks in your use case.
Many services offer session affinity, but none are as easy to set up as Heroku. With literally two clicks you can enable session affinity and start scaling out. Don't let hype distract you from an approach that may reap benefits for your web property.
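For reference, here's a minimal sketch of enabling it from the command line, assuming you have the Heroku CLI installed and that the feature flag is still named http-session-affinity (check the current Heroku docs for the exact name):

heroku features:enable http-session-affinity --app your-app-name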
Heroku provides zero down time deploys and upgrades
One of the best features of Heroku is that their technology and support teams handle deploys and upgrades. They make it so that your dev team doesn't have to worry about keeping the servers up during deploys or running migrations during upgrades.
The preboot feature allows you to keep the old version of your app running during a deploy. This means that users will only ever be routed to a server that has booted up and is running your app, and that means that you aren't turning away customers in that 30 second window where the new version of your app is loading.
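Again as a minimal sketch, assuming the Heroku CLI and that the feature is still called preboot, turning it on looks something like this:

heroku features:enable preboot --app your-app-name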
Heroku also supports seamless upgrades for the most popular add-ons, such as Redis. When I switched my company from Heroku Redis to Redis on AWS, we were surprised that our site went down a few weeks after the switch. AWS may force-upgrade your technology without providing a way to seamlessly switch to the new version, so you have to track each upcoming patch yourself and ensure that your team is ready for the switch.
Heroku is cheaper than the alternative because anyone can use it
Heroku seems very expensive when the bills come due, but in my experience it's cheaper than the alternative. Heroku is so easy to use that anyone can use it. With a few clicks or commands a backend developer can enable sticky sessions. With a few clicks or commands they can enable preboot. With a few clicks or commands they can add Redis or PostgreSQL or any of the many add-ons provided by Heroku.
To use all those different products on a less managed product, such as AWS or Azure, you must retain a dedicated DevOps specialist. These people have very specialized skills and are not cheap. In my experience, using Heroku saves the cost of around one expensive employee. So as long as your Heroku bill is less than the cost of one employee, it's probably the more affordable option.
Three reasons to switch
There are many good reasons to stick with Heroku, but it certainly has limits. Here are the reasons why I've moved services from Heroku to somewhere else in the past.
Heroku is dangerous because anyone can use it
When you're on Heroku, you may not need to hire dedicated staff to manage your web infrastructure. This is a significant cost savings, but it means that the DevOps tasks are going to be offloaded onto your web programmers. So you must ensure that your team has the skills to understand Heroku. Heroku may not be as complex as AWS, but it still requires a foundational understanding of how the web works, Linux, and logging. If your programmers are exploring the features in Heroku without the requisite experience or training, then they may make mistakes that harm your business.
Heroku offers fewer options for international support
Heroku lacks flexible support for internationalized websites. As it says in the regions documentation, each "Private Space exists in a single region, and all applications in the Private Space run in that region." This may sound confusing, but it ultimately means that each project in Heroku can only run in one region. If you want to support another region, then it must be in a separate project and hosted at a separate domain or subdomain. So if you want to internationalize your website using subdirectories, which is advantageous because it inherits the existing search reputation of your domain, then you can't do that with geographically distributed servers on Heroku.
Heroku offers fewer vertical scaling options
Heroku dynos come in six different flavors as of this writing. If you need anything outside of those six options, then you are out of luck. The beefiest dyno is the performance-l machine, which offers 14 GB of RAM. If you need more than that, then you need to switch to another platform. What if you're using very little memory, but you want to use many CPU cores? The only option is to pay for the most expensive dynos. This lack of flexibility means that if your web service isn't a pretty standard website or API, then Heroku probably won't serve your needs very well.
How to decide whether to stick with Heroku or to switch
In this article I've listed some reasons to stick with Heroku and some reasons to switch, but the final decision is largely dependent on your use case. If you are making a pretty standard website or API, then Heroku is probably fine even when horizontally scaling. There are three main scenarios where you should strongly consider switching to a more complex cloud hosting service.
- Your service needs more flexible options for internationalization
- Your service doesn't match the requirements common to most web apps and APIs
- Your service is scaling exponentially
Evan's React Interview Cheat Sheet
In this article I'm going to list some of the things that are useful in React coding interviews, but are easily forgotten or overlooked. Most of the techniques here are useful for solving problems that often arise in coding interviews.
Table of contents
- Imitating componentDidMount using hooks
- Using the previous state in the useState hook
- Using useRef to refer to an element
- Custom usePrevious hook to simplify a common useRef use case
- Vanilla js debounce method
- Use useContext to avoid prop-drilling
Imitating componentDidMount using React hooks
The useEffect hook is one of the more popular additions to the React hooks API. The problem with it is that it replaces a handful of critical lifecycle methods that were much easier to understand.
It's much easier to understand the meaning of "componentDidMount" than "useEffect(() => {}, [])", especially when useEffect replaces componentDidMount, componentDidUpdate, and componentWillUnmount.
The most common use of the useEffect hook in interviews is to replace componentDidMount. The componentDidMount method is often used for loading whatever data is needed by the component. In fact, the reason it's called useEffect is because you are performing a side effect.
The default behavior of useEffect is to run after every render, but you can pass a dependency array as a second argument so that the effect only re-runs when one of those dependencies changes.
In this example, we use the useEffect hook to load data from the omdb api.
// fetch movies. Notice the use of async in the .then callback
const fetchMovies = (newSearchTerm = searchTerm) => {
  // See http://www.omdbapi.com/
  fetch(`http://www.omdbapi.com/?apikey=${apikey}&s=${newSearchTerm}&page=${pageNumber}`).then(async (response) => {
    const responseJson = await response.json();
    // if this is a new search term, then replace the movies
    if (newSearchTerm !== previousSearchTerm) {
      setMovies(responseJson.Search);
    } else {
      // if the search term is the same, then append the new page to the end of movies
      setMovies([...movies, ...responseJson.Search]);
    }
  }).catch((error) => {
    console.error(error);
  });
};
// imitate componentDidMount
useEffect(fetchMovies, []);
Note that the useEffect callback itself can't be an async function, so handling the promise inside the callback, as above, is one suggested way to run asynchronous code in an effect.
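If you'd rather use async/await directly, a common alternative is to declare an async helper inside the effect and call it immediately, because the effect callback itself must not return a promise. Here's a sketch reusing the same variables as the example above:

// imitate componentDidMount with an async helper inside the effect
useEffect(() => {
  const load = async () => {
    const response = await fetch(`http://www.omdbapi.com/?apikey=${apikey}&s=${searchTerm}&page=${pageNumber}`);
    const responseJson = await response.json();
    setMovies(responseJson.Search);
  };
  load();
}, []);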
Using the previous state in the useState hook
The useState hook is probably the easiest and most natural hook, but it does obscure one common use case. For instance, do you know how to use the previous state in the useState hook?
It turns out that you can pass a function into the useState set method and that function can take the previous state as an argument.
Here's a simple counter example.
// initialize the count to a number so we can add to it
const [count, setCount] = useState(0);

// pass an updater function to setCount; it receives the previous state
setCount(prevState => {
  return prevState + 1;
});
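The updater form matters whenever you set state more than once before React re-renders. Here's a small illustrative sketch reusing the count state above; the handler names are made up:

// with the plain form, all three calls read the same stale `count`,
// so the counter only increases by one
const handleTripleClickNaive = () => {
  setCount(count + 1);
  setCount(count + 1);
  setCount(count + 1);
};

// with the updater form, each call receives the latest state,
// so the counter increases by three
const handleTripleClick = () => {
  setCount(prev => prev + 1);
  setCount(prev => prev + 1);
  setCount(prev => prev + 1);
};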
Using useRef to refer to an element
The useRef hook is used to store any mutable value across calls to the component's render. The most common use is to access an element in the DOM.
Initialize the reference using the useRef hook.
// use useRef hook to keep track of a specific element
const movieContainerRef = useRef();
Then attach it to an element in the render return.
<div className={MovieListStyles.movieContainer} ref={movieContainerRef}>
  {movies && movies.length > 0 &&
    movies.map(MovieItem)
  }
</div>
Then you can use the .current property to access the current DOM element for that div, and attach listeners or do anything else you need to do with a div.
// set up a scroll handler
useEffect(() => {
  // capture the node once so the cleanup function doesn't depend on a ref that may have changed
  const node = movieContainerRef.current;
  const handleScroll = debounce(() => {
    const scrollTop = node.scrollTop;
    const scrollHeight = node.scrollHeight;
    // do something with the scrolling properties here...
  }, 150);
  // add the handler to the movie container
  node.addEventListener("scroll", handleScroll, { passive: true });
  // remove the handler from the movie container when the component unmounts
  return () => node.removeEventListener("scroll", handleScroll);
}, []);
Custom usePrevious hook to simplify a common useRef use case
The useRef hook can be used to store any mutable value. So it's a great choice when you want to look at the previous value in a state variable. Unfortunately, the logic to do so is somewhat tortuous and can get repetitive. I prefer to use a custom usePrevious hook from usehooks.com.
import { useEffect, useRef } from 'react';

// See https://usehooks.com/usePrevious/
function usePrevious(value) {
  // The ref object is a generic container whose current property is mutable ...
  // ... and can hold any value, similar to an instance property on a class
  const ref = useRef();

  // Store current value in ref
  useEffect(() => {
    ref.current = value;
  }, [value]); // Only re-run if value changes

  // Return previous value (happens before update in useEffect above)
  return ref.current;
}

export default usePrevious;
Using it is as simple as one extra line when setting up a functional component.
// use the useState hook to store the search term
const [searchTerm, setSearchTerm] = useState('orange');
// use custom usePrevious hook
const previousSearchTerm = usePrevious(searchTerm);
Vanilla js debounce method
Okay, this next one has nothing to do with React, except for the fact that it's a commonly needed helper method. Yes, I'm talking about "debounce". If you want to reduce the jittery quality of a user interface, but you still want to respond to actions by the user, then it's important to throttle the rate of events your code responds to. Debounce is the name of a method for doing this from the lodash library.
The debounce method waits until a preset interval has passed since the last call before invoking the callback. Effectively, it waits until the events stop arriving before calling the callback. This is commonly needed when responding to scroll or mouse events.
The problem is that you don't want to install lodash in a coding interview just to use one method. So here's a vanilla javascript debounce method from Josh Comeau.
// a vanilla debounce, adapted from Josh Comeau
const debounce = (callback, wait) => {
  let timeoutId = null;
  return (...args) => {
    // reset the timer on every call
    window.clearTimeout(timeoutId);
    // invoke the callback only after `wait` ms with no further calls
    timeoutId = window.setTimeout(() => {
      callback.apply(null, args);
    }, wait);
  };
};

export default debounce;
Here's an example of how to use it to update a movie list when a new search term is entered.
// handle text input
const handleSearchChange = debounce((event) => {
  setSearchTerm(event.target.value);
  fetchMovies(event.target.value);
}, 150);

return (
  <div className={MovieListStyles.movieList}>
    <h2>Movie List</h2>
    <div className={MovieListStyles.searchTermContainer}>
      <label>
        Search:
        <input type="text" defaultValue={searchTerm} onChange={handleSearchChange} />
      </label>
    </div>
    <div
      className={MovieListStyles.movieContainer}
      ref={movieContainerRef}
    >
      {movies && movies.length > 0 &&
        movies.map(MovieItem)
      }
    </div>
  </div>
);
Use useContext to avoid prop-drilling
The last thing an interviewer wants to see during a React coding interview is prop drilling. Prop drilling occurs when you need to pass a piece of data from one parent component, through several intervening components, to a child component. Prop drilling results in a bunch of repeated code where we are piping a variable through many unrelated components.
To avoid prop drilling, you should use the useContext hook.
The useContext hook is a React implementation of the provider pattern. The provider pattern is a way of providing system wide access to some resource.
It takes three code changes to implement the useContext hook. First, you've got to call createContext in the parent component that will maintain the data. Then you've got to wrap the components that need the data in the context's Provider component.
export const DataContext = React.createContext();

function App() {
  const data = { ... };

  return (
    <div>
      <DataContext.Provider value={data}>
        <SideBar />
        <Content />
      </DataContext.Provider>
    </div>
  );
}
Finally, you've got to import the context in the child component and call useContext to get the current value. Note that DataContext is a named export above, so it needs a named import, and useContext returns the provided value directly.
import { DataContext } from '../app.js';
...
const data = React.useContext(DataContext);
What has made you stumble in React interviews?
What are some other common ways to make mistakes in React coding interviews? Send me your biggest React coding headaches on Twitter @EvanXMerz.