How to effectively handle downtime with caching

By Elise 

There are simply too many things that can cause unwanted website downtime: network failures, software bugs, server overload, routine maintenance, data center problems, power outages; the list goes on.

If you own a website or app, you know that downtime and backend errors are sometimes unavoidable; it’s bound to happen sooner or later. You also know the consequences of downtime, such as loss of customer loyalty, losing business to your competitors, and the overall negative impact on your business’s reputation.  

Imagine visiting a company’s website for the very first time, and being greeted with a blank screen or confusing error messages. How would that influence the way you view that company? On the web, your website is the face of your company, and first impressions are everything.

On the bright side, there are steps you can take to strengthen your site and mask this problem when it occurs. You can prevent unwelcome errors and blank pages in times of crisis by serving previously fetched, successful responses instead.

Protect your website from downtime

Fly apps can be used to mask downtime caused by backend services. One technique for handling backend errors is to cache known good responses, and serve those in place of transient backend errors.  

This example app is designed to store and deliver cached content from your website. It’s an easy and effective way to defend yourself against errors and short periods of downtime. In short, it uses the fly.cache API to store good HTTP responses and serve them later in times of failure.

Build the app

  1. Install Fly: npm install -g @fly/fly

  2. Create the app: fly new cache-for-errors -t cache-for-errors

  3. Run the app: fly server cache-for-errors

  4. Visit these URLs:

http://localhost:3000/?status=500 (this returns a 500 error page)

http://localhost:3000/ (this returns a good response)

http://localhost:3000/?status=500 (this returns a cached response instead of erroring again)

The first URL returns an error because the cache is empty. The second URL gets a good response, and stores it in the cache. The third tries the request, detects an error code, and returns good cached content.

  5. Deploy the app: fly --app [app_name] deploy

View the app on GitHub

How it works

The fly.cache API accesses regional Fly cache storage, a fast and powerful feature of Fly Edge Apps. This example application imports the responseCache API, an API for efficiently caching Response objects. It stores "good" HTTP responses in the cache without setting an expiration time, so the data sticks around semi-permanently. When the app receives a request, it proxies it to the origin. If the origin sends back an error, the app checks for cached data before serving the error. And each time a successful response passes through, the app stores it for later. Neat, right?
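The flow above can be sketched in plain JavaScript. This is a runnable stand-in, not the app’s actual source: the real app uses Fly’s responseCache on top of fly.cache, while here a plain Map and a stubbed origin function take their place so the logic can run anywhere.

```javascript
// Stand-in cache: path -> { status, body }
const cache = new Map();

// Stub origin that fails on demand, mirroring the ?status=500 demo URLs
function origin(url) {
  return url.includes("status=500")
    ? { status: 500, body: "Internal Server Error" }
    : { status: 200, body: "Hello from origin" };
}

function handleRequest(url) {
  const key = url.split("?")[0]; // cache key ignores the demo query string
  const resp = origin(url);

  if (resp.status < 400) {
    cache.set(key, resp); // good response: save it for future failures
    return resp;
  }
  // error: serve a previously cached good response if we have one
  return cache.get(key) || resp;
}

// The three demo URLs from the steps above:
const first = handleRequest("http://localhost:3000/?status=500");
const second = handleRequest("http://localhost:3000/");
const third = handleRequest("http://localhost:3000/?status=500");
console.log(first.status, second.status, third.status); // 500 200 200
```

The first request errors because the cache is empty, the second succeeds and warms the cache, and the third gets the cached good response instead of the error.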

For example, a user visits your site and receives a 500 internal server error. Instead of serving the error to the user, your app will check the cache to see if it has a previously fetched good response to this request. If it does, the user will receive the cached response, rather than the error. They’ll see your page, interact with your content and all will be good in the world.

Next steps

Clearly, downtime and errors are unexpected and unfortunate. But being prepared will allow you to get your app back on track. This example demonstrates trying requests, then falling back to a different source when the response is bad. You can use similar techniques to implement retries: try one backend, and if it fails, try another.
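That retry variant might look like the sketch below. The backend functions are invented stubs for illustration; in a real Fly app each would be a fetch to a different origin.

```javascript
// Try each backend in order; return the first good response,
// or the last error if every backend fails.
function tryBackends(request, backends) {
  let last;
  for (const backend of backends) {
    last = backend(request);
    if (last.status < 400) return last; // first good response wins
  }
  return last;
}

// Stub backends for illustration (hypothetical names)
const primary = () => ({ status: 503, body: "primary down" });
const secondary = () => ({ status: 200, body: "secondary ok" });

const result = tryBackends({}, [primary, secondary]);
console.log(result.status, result.body); // 200 secondary ok
```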

Try it out for yourself and shoot us a star ⭐ if you love it.