How to retry failed requests in Node.js
Transient network failures, rate-limit errors, and temporary server unavailability are facts of life when calling external APIs, and retrying with exponential backoff is the standard way to handle them gracefully. As the creator of CoreUI with 25 years of backend development experience, I implement retry logic in every Node.js service that calls external APIs, because a single unretried failure can cascade into user-visible errors. The key technique is exponential backoff with jitter: each retry waits longer than the last, and a random jitter prevents thundering-herd problems when many clients fail and retry simultaneously. Retry only idempotent requests (GET, PUT, DELETE) by default; POST requests need careful consideration to avoid duplicate side effects.
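As a quick illustration of that rule of thumb, a retry wrapper can gate on the HTTP method before attempting a request. This is a minimal sketch (the helper name and method set are illustrative, based on the methods RFC 9110 defines as idempotent):

```javascript
// Sketch: only idempotent methods are safe to retry blindly;
// POST generally needs an idempotency key or deduplication instead
const IDEMPOTENT_METHODS = new Set(['GET', 'HEAD', 'PUT', 'DELETE', 'OPTIONS'])

function isSafeToRetry(method) {
  return IDEMPOTENT_METHODS.has(method.toUpperCase())
}

console.log(isSafeToRetry('get'))  // true
console.log(isSafeToRetry('POST')) // false
```

A wrapper could consult this check before entering its retry loop and fall through to a single attempt for non-idempotent methods.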
Implement a generic retry function with exponential backoff.
```js
// src/utils/retry.js
export async function retry(fn, {
  attempts = 3,
  baseDelay = 1000,
  maxDelay = 10000,
  retryOn = [429, 500, 502, 503, 504],
  onRetry
} = {}) {
  let lastError
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      const status = err.status ?? err.response?.status
      const isRetryable = !status || retryOn.includes(status)
      if (!isRetryable || attempt === attempts) {
        throw err
      }
      // Prefer a server-requested delay (err.retryAfterMs) when present;
      // otherwise use exponential backoff with jitter
      const delay = err.retryAfterMs ?? Math.min(
        baseDelay * Math.pow(2, attempt - 1) + Math.random() * 1000,
        maxDelay
      )
      onRetry?.({ attempt, delay, error: err })
      await sleep(delay)
    }
  }
  throw lastError
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms))
}
```
With the default baseDelay of 1000 ms, Math.pow(2, attempt - 1) produces delays of 1s, 2s, 4s, 8s… and Math.random() * 1000 adds up to one second of jitter. Math.min(..., maxDelay) caps the delay. The retryOn whitelist prevents retrying 400, 401, 403, and 404 errors, which indicate client mistakes rather than server issues.
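The progression is easy to verify in isolation. This sketch recomputes the capped backoff curve with the random jitter term omitted, using the same defaults as above:

```javascript
// Recompute the deterministic part of the backoff schedule
// (baseDelay and maxDelay match the defaults; jitter omitted)
const baseDelay = 1000
const maxDelay = 10000

const schedule = [1, 2, 3, 4, 5].map(attempt =>
  Math.min(baseDelay * Math.pow(2, attempt - 1), maxDelay)
)

console.log(schedule) // [ 1000, 2000, 4000, 8000, 10000 ]
```

Note how the fifth attempt would be 16 seconds uncapped; maxDelay clamps it to 10 seconds.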
Using retry() for Fetch Requests
Wrap any async operation with retry.
```js
// src/services/external.service.js
import { retry } from '../utils/retry.js'

export async function fetchExternalData(endpoint) {
  return retry(
    async () => {
      const res = await fetch(endpoint)
      if (!res.ok) {
        const err = new Error(`HTTP ${res.status}`)
        err.status = res.status
        throw err
      }
      return res.json()
    },
    {
      attempts: 4,
      baseDelay: 500,
      onRetry: ({ attempt, delay }) => {
        console.log(`Retry ${attempt} in ${Math.round(delay)}ms...`)
      }
    }
  )
}
```
Creating an error with a status property lets the retry function check whether the failure is retryable. onRetry provides observability — you can log or emit metrics on each retry.
Respecting Retry-After Headers
Honor the server’s requested retry delay.
```js
export async function fetchWithRetryAfter(url) {
  return retry(
    async () => {
      const res = await fetch(url)
      if (res.status === 429) {
        const retryAfter = res.headers.get('Retry-After')
        const delay = retryAfter ? Number(retryAfter) * 1000 : 60000
        const err = new Error('Rate limited')
        err.status = 429
        // retry() must prefer err.retryAfterMs over its computed
        // backoff for the server-requested delay to take effect
        err.retryAfterMs = delay
        throw err
      }
      if (!res.ok) {
        const err = new Error(`HTTP ${res.status}`)
        err.status = res.status
        throw err
      }
      return res.json()
    },
    {
      baseDelay: 1000,
      onRetry: ({ error, delay }) => {
        console.log(`Retrying in ${Math.round(error.retryAfterMs ?? delay)}ms...`)
      }
    }
  )
}
```
The Retry-After header tells you exactly how long to wait before retrying. Respecting it avoids further rate limit violations and gets you back to normal operation as quickly as possible.
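One caveat the example above glosses over: RFC 9110 allows Retry-After to carry either delta-seconds or an HTTP-date, so `Number(retryAfter)` alone misses the date form. A hedged sketch of a parser that handles both (the function name and fallback value are mine):

```javascript
// Sketch: parse Retry-After as delta-seconds or an HTTP-date,
// falling back to a default when the header is absent or unparseable
function parseRetryAfter(header, fallbackMs = 60000) {
  if (!header) return fallbackMs
  const seconds = Number(header)
  if (!Number.isNaN(seconds)) return seconds * 1000
  const date = Date.parse(header)
  return Number.isNaN(date) ? fallbackMs : Math.max(0, date - Date.now())
}

console.log(parseRetryAfter('5'))  // 5000
console.log(parseRetryAfter(null)) // 60000
```

The date branch converts an absolute timestamp into a relative wait, clamped at zero in case the date is already in the past.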
Best Practice Note
This is the same retry pattern used in CoreUI backend services that integrate with third-party APIs. For axios, the axios-retry library provides similar functionality with less boilerplate; for more complex resilience patterns such as circuit breakers, fallbacks, and timeouts, consider cockatiel or opossum. Never retry indefinitely: always set a maximum attempt count to prevent infinite loops. For webhook delivery specifically, see how to handle webhooks in Node.js, which covers the provider-side retry behavior.