How to use the cluster module in Node.js

Node.js runs your JavaScript on a single thread by default, which means a single CPU-intensive task can block all requests and multi-core systems sit underutilized. With Node.js development experience going back to 2014 and as the creator of CoreUI, I’ve scaled numerous APIs to handle thousands of concurrent requests. The cluster module lets you spawn multiple worker processes that share the same server port, distributing load across all CPU cores. This approach dramatically improves throughput and makes it easy to restart workers automatically when they crash.

Use the cluster module to fork a worker process for each CPU core; all of the workers share the same server port.

const cluster = require('cluster')
const http = require('http')
const os = require('os')

const numCPUs = os.cpus().length

if (cluster.isPrimary) { // cluster.isMaster on Node < 16
  console.log(`Primary process ${process.pid} is running`)

  // Fork workers for each CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork()
  }

  // Restart worker if it crashes
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died. Starting new worker...`)
    cluster.fork()
  })

} else {
  // Workers can share any TCP connection
  const server = http.createServer((req, res) => {
    res.writeHead(200)
    res.end(`Hello from worker ${process.pid}\n`)
  })

  server.listen(3000, () => {
    console.log(`Worker ${process.pid} started`)
  })
}
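
By default the primary process accepts incoming connections and hands them to workers in round-robin order on every platform except Windows, so repeated requests are answered by different PIDs. A minimal sketch, in case you want to pin that behavior explicitly instead of relying on the platform default; it has to run before the first cluster.fork():

const cluster = require('cluster')

// Distribute connections from the primary in round-robin order
cluster.schedulingPolicy = cluster.SCHED_RR

// Or let the operating system decide which worker accepts each connection
// cluster.schedulingPolicy = cluster.SCHED_NONE

The same setting can also be supplied through the NODE_CLUSTER_SCHED_POLICY environment variable (rr or none).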

For Express applications:

const cluster = require('cluster')
const express = require('express')
const os = require('os')

if (cluster.isPrimary) { // cluster.isMaster on Node < 16
  const numWorkers = os.cpus().length
  console.log(`Primary process ${process.pid} setting up ${numWorkers} workers`)

  for (let i = 0; i < numWorkers; i++) {
    cluster.fork()
  }

  cluster.on('online', (worker) => {
    console.log(`Worker ${worker.process.pid} is online`)
  })

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} exited. Restarting...`)
    cluster.fork()
  })
} else {
  const app = express()

  app.get('/', (req, res) => {
    res.send(`Response from process ${process.pid}`)
  })

  app.listen(3000)
}
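
One caveat worth seeing in code before the notes below: each worker keeps its own memory, so any in-process state is per worker. A minimal sketch (the hitCount counter is just an illustration) of how a naive counter diverges between workers:

const cluster = require('cluster')
const express = require('express')
const os = require('os')

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork()
} else {
  const app = express()

  // This counter lives in one worker's memory only, so totals diverge between workers
  let hitCount = 0

  app.get('/', (req, res) => {
    hitCount++
    res.send(`Worker ${process.pid} has served ${hitCount} requests`)
  })

  app.listen(3000)
}

Hit the endpoint repeatedly and the reported total jumps around, because each response comes from whichever worker happened to receive the connection. Move that kind of state into Redis or a database, as the note below explains.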

Best Practice Note

Each worker process has its own memory space and event loop, so workers do not share state; use Redis or a database for shared session storage. The primary and its workers communicate over IPC: call worker.send() from the primary and process.send() from a worker. For production deployments, consider PM2, which provides clustering with zero-downtime reloads and better process management. Monitor worker health and implement graceful shutdown so in-flight requests are not dropped. This is how we scale CoreUI backend services: clustering across all cores with PM2 for maximum throughput and automatic recovery from crashes.
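
A minimal graceful-shutdown sketch for a worker, assuming the plain HTTP example above; the exact signal depends on how your process manager stops workers, but the idea is to stop accepting new connections, let in-flight requests finish, then exit so the primary's exit handler forks a replacement:

const http = require('http')

const server = http.createServer((req, res) => {
  res.end(`Hello from worker ${process.pid}\n`)
})

server.listen(3000)

process.on('SIGTERM', () => {
  // Stop accepting new connections; in-flight requests are allowed to finish
  server.close(() => process.exit(0))

  // Safety net: force the exit if connections have not drained after 10 seconds
  setTimeout(() => process.exit(1), 10000).unref()
})

If you let PM2 handle the clustering instead, a typical ecosystem.config.js (the app name and entry point below are placeholders) looks roughly like this; pm2 start ecosystem.config.js launches one instance per core and pm2 reload api swaps them out with zero downtime:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',            // placeholder app name
      script: './server.js',  // placeholder entry point
      instances: 'max',       // one worker per CPU core
      exec_mode: 'cluster'    // use Node's cluster module under the hood
    }
  ]
}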




About the Author