
Client-side caching and offline support

In this series (12 parts)
  1. What frontend system design covers
  2. Rendering strategies: CSR, SSR, SSG, ISR
  3. Performance fundamentals: Core Web Vitals
  4. Loading performance and resource optimization
  5. State management at scale
  6. Component architecture and design systems
  7. Client-side caching and offline support
  8. Real-time on the frontend
  9. Frontend security
  10. Scalability for frontend systems
  11. Accessibility as a system design concern
  12. Monitoring and observability for frontends

Prerequisite: Component architecture.

The browser already caches more than most developers realize. Between HTTP cache headers, the Cache API, IndexedDB, and service workers, you have a full storage stack sitting on the client. The challenge is not finding a place to store data. It is choosing the right layer for each type of data and knowing when that data has gone stale.

This article covers each caching layer from HTTP headers through service workers and offline persistence with IndexedDB. By the end you will understand how they compose into an offline-capable frontend.


HTTP caching headers

Every response from a server can carry caching instructions. These headers tell the browser how long to keep a resource and when to revalidate.

Cache-Control is the primary header. Common directives:

  • max-age=3600: cache for one hour.
  • no-cache: store the response but revalidate before using it.
  • no-store: do not cache at all.
  • immutable: never revalidate. Useful for hashed asset filenames.

ETag provides a fingerprint for a resource. On subsequent requests the browser sends If-None-Match with the ETag. If the resource has not changed, the server responds with 304 Not Modified and zero body bytes.

For static assets served through a CDN, pair immutable with content-hashed filenames. The CDN edge caches the file indefinitely. When you deploy a new build, the filename changes and clients fetch the new version without any cache-busting hacks.
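As a concrete sketch, a content-hashed bundle might ship with headers like these (the filename hash and values are illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000, immutable
ETag: "5d8c72a5"
```

With immutable set and a one-year max-age, the browser never revalidates this URL; a new deploy simply references a new hashed filename.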

For API responses, max-age with stale-while-revalidate gives you the best balance. The browser serves stale content immediately and fetches a fresh copy in the background.


The stale-while-revalidate pattern

This pattern appears in HTTP headers, in SWR libraries like swr and react-query, and inside service workers. The idea is the same everywhere: return what you have immediately, then update in the background.

Cache-Control: max-age=60, stale-while-revalidate=300

This tells the browser: for the first 60 seconds, use the cached version without any network request. Between 60 and 360 seconds, serve the stale version but fire off a background revalidation. After 360 seconds, treat the cache as expired and wait for a fresh response.

The user sees content instantly. The data is at most one request cycle behind. For most read-heavy UIs, this latency trade-off is worth it.
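The timing windows can be made concrete with a small sketch. The function below is purely illustrative (browsers implement this internally, it is not an API you call) and classifies a cached entry's age under the header above:

```javascript
// Classify a cached entry's age under
// Cache-Control: max-age=60, stale-while-revalidate=300.
// Purely illustrative; the browser does this internally.
function cachePolicy(ageSeconds, maxAge = 60, swr = 300) {
  if (ageSeconds <= maxAge) return 'fresh';       // serve cache, no request
  if (ageSeconds <= maxAge + swr) return 'stale'; // serve cache, revalidate in background
  return 'expired';                               // block on a fresh response
}
```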


Service workers

A service worker is a JavaScript file that runs in a separate thread from your page. It sits between the browser and the network, intercepting every fetch request. This makes it the control plane for caching on the client.

Lifecycle

  1. Register: your page calls navigator.serviceWorker.register('/sw.js').
  2. Install: the browser downloads and parses the worker. You typically pre-cache critical assets here.
  3. Activate: the new worker takes control. Old caches can be cleaned up.
  4. Fetch: every network request from controlled pages passes through the worker’s fetch event.

sequenceDiagram
  participant Browser
  participant SW as Service Worker
  participant Cache
  participant Network

  Browser->>SW: fetch event (GET /api/data)
  SW->>Cache: match(request)
  alt Cache hit
      Cache-->>SW: cached response
      SW-->>Browser: cached response
      Note over SW,Network: Background revalidation
      SW->>Network: fetch(request)
      Network-->>SW: fresh response
      SW->>Cache: put(request, fresh response)
  else Cache miss
      SW->>Network: fetch(request)
      Network-->>SW: response
      SW->>Cache: put(request, response)
      SW-->>Browser: network response
  end

Service worker intercepting a fetch request using a stale-while-revalidate strategy.

Caching strategies

Strategy                 | Behavior                            | Use case
Cache first              | Check cache, fall back to network   | Static assets, fonts
Network first            | Try network, fall back to cache     | API data that should be fresh
Stale-while-revalidate   | Return cache, update in background  | Semi-dynamic content
Network only             | Always go to network                | Auth endpoints, analytics
Cache only               | Never hit the network               | Pre-cached app shell

Pick the strategy per route. A single service worker can apply different strategies based on the URL pattern.

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  if (url.pathname.startsWith('/api/')) {
    event.respondWith(networkFirst(event.request));
  } else if (url.pathname.match(/\.(js|css|png|woff2)$/)) {
    event.respondWith(cacheFirst(event.request));
  } else {
    event.respondWith(staleWhileRevalidate(event.request));
  }
});
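The staleWhileRevalidate helper referenced above is not a built-in. A minimal sketch, with the cache and fetch function passed in so the logic stands alone (in a real worker these would be the result of caches.open(...) and the global fetch, and a production version would also clone() responses and check response.ok before caching):

```javascript
// Serve the cached response immediately if present, and refresh the
// cache in the background either way. Minimal sketch: no clone(),
// no response.ok check, no error handling on the background fetch.
async function staleWhileRevalidate(request, cache, fetchFn) {
  const cached = await cache.match(request);
  const refresh = fetchFn(request).then((response) => {
    cache.put(request, response);
    return response;
  });
  return cached ?? refresh; // cache hit wins; a miss waits for the network
}
```

Note that the background fetch fires on every call, so a hit still refreshes the cache for the next request.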

Cache API

The Cache API is the storage backend that service workers use. It stores Request/Response pairs, which makes it a natural fit for HTTP resources.

// Pre-cache during install
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('v1-shell').then((cache) =>
      cache.addAll([
        '/',
        '/app.js',
        '/styles.css',
        '/offline.html',
      ])
    )
  );
});

Key points:

  • Caches are namespaced by string keys. Use versioned names like v2-shell so you can clean up old caches on activation.
  • cache.match() returns undefined on a miss, not an error. Always handle the miss case.
  • The Cache API stores opaque responses from cross-origin requests. You cannot inspect their contents, but you can serve them.
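Cleanup of old versioned caches can be sketched as a pure helper plus the activate wiring (the cache names here are assumptions, not part of any API):

```javascript
// Compute which cache names belong to previous deploys. In the
// worker's 'activate' handler you would run:
//   event.waitUntil(caches.keys().then((names) =>
//     Promise.all(staleCacheNames(names).map((n) => caches.delete(n)))));
const CURRENT_CACHES = ['v2-shell', 'v2-api']; // assumed names for this build

function staleCacheNames(allNames, keep = CURRENT_CACHES) {
  return allNames.filter((name) => !keep.includes(name));
}
```

Keeping the name computation separate from the caches calls makes the versioning logic easy to unit test outside a worker.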

IndexedDB for offline data

The Cache API handles HTTP resources well. For structured application data, IndexedDB is the right choice. It is a transactional, key-value object store that supports indexes, cursors, and large storage quotas.

Common use cases:

  • Offline drafts (email, documents, form submissions).
  • Local copies of paginated API data.
  • User preferences that should survive cache eviction.

import { openDB } from 'idb'; // promise wrapper around the native API

const db = await openDB('app-store', 1, {
  upgrade(db) {
    // Runs only when the requested version is newer than the stored one.
    const store = db.createObjectStore('drafts', { keyPath: 'id' });
    store.createIndex('updatedAt', 'updatedAt');
  },
});

await db.put('drafts', {
  id: crypto.randomUUID(),
  content: 'Unsaved note...',
  updatedAt: Date.now(),
});

Use a library like idb to wrap the callback-heavy native API in promises. The raw IndexedDB API works but the ergonomics are painful.

Approximate storage limits across browser storage mechanisms. Cache API and IndexedDB share a large quota managed by the browser’s storage manager.


Background sync

When a user takes an action while offline, you need to queue that action and replay it when connectivity returns. The Background Sync API does exactly this.

// In your page
navigator.serviceWorker.ready.then((reg) => {
  if ('sync' in reg) {
    // Background Sync is Chromium-only; feature-detect before registering.
    reg.sync.register('sync-drafts');
  }
});

// In your service worker
self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-drafts') {
    event.waitUntil(syncDraftsToServer());
  }
});

The browser fires the sync event when connectivity is restored, even if the user has closed the tab. This is critical for reliability. Without it, you would need to check connectivity on every page load and manually flush queued writes.
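A minimal in-memory sketch of the queue-and-replay idea (a real syncDraftsToServer would read the queue from an IndexedDB object store rather than an array, and sendFn here stands in for the network call):

```javascript
// Outbox: queue writes while offline, replay them in order on sync.
class Outbox {
  constructor(sendFn) {
    this.sendFn = sendFn; // performs the actual network write
    this.queue = [];      // stand-in for an IndexedDB object store
  }
  enqueue(action) {
    this.queue.push(action);
  }
  async flush() {
    while (this.queue.length > 0) {
      await this.sendFn(this.queue[0]);
      this.queue.shift(); // remove only after a successful send
    }
  }
}
```

If a send fails, flush rejects and the failed action stays at the head of the queue, so the next sync event retries from where it left off.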

For periodic updates, the Periodic Background Sync API lets you register tasks that run at intervals. Browser support is limited and the browser throttles frequency based on site engagement, so do not rely on it for time-sensitive operations.


Putting the layers together

A well-designed offline architecture composes these layers:

  1. HTTP caching headers handle the common case. Most repeat visits are served from the browser’s HTTP cache without any custom code.
  2. Service worker with Cache API gives you control over caching strategies per route and enables the app shell pattern.
  3. IndexedDB stores application state that the user needs to access offline: drafts, queued actions, synced datasets.
  4. Background sync replays queued writes when the network returns.

flowchart TD
  A[User Request] --> B{Service Worker Registered?}
  B -->|No| C[HTTP Cache / Network]
  B -->|Yes| D{Route Type?}
  D -->|Static Asset| E[Cache First]
  D -->|API Read| F[Stale While Revalidate]
  D -->|API Write| G{Online?}
  G -->|Yes| H[Send Request]
  G -->|No| I[Queue in IndexedDB]
  I --> J[Register Background Sync]
  J --> K[Sync When Online]
  E --> L[Response to Browser]
  F --> L
  H --> L
  K --> L

Decision tree for routing requests through the client-side caching stack.


Common pitfalls

Serving stale HTML shells. If your service worker caches index.html aggressively, users may get an old app shell that references JS bundles that no longer exist. Version your caches and update the shell on activation.

Ignoring quota pressure. Browsers evict storage under disk pressure, starting with the least recently used origins. Call navigator.storage.persist() to request durable storage for critical apps.

Caching authenticated responses. Never cache responses that contain user-specific data in a shared cache. Use per-user cache keys or skip caching entirely for authenticated endpoints.

Not testing offline. Chrome DevTools has an offline toggle. Use it in development. Automated tests should simulate offline conditions with Playwright’s context.setOffline(true).


What comes next

With client-side caching and offline support in place, the next layer of frontend complexity is real-time data. Real-time on the frontend covers polling, WebSockets, Server-Sent Events, and how to keep live data flowing without burning through bandwidth.
