Logging and Monitoring in Node.js

· 6 min read · Updated March 20, 2026 · intermediate
Tags: node, logging, monitoring, observability, production

Why Logging Matters in Node.js

Node.js applications often run in production for months or years without stopping. When something goes wrong — a request fails, memory leaks, or the server crashes — your logs are the only way to understand what happened. Unlike browser JavaScript where console.log during development is fine, server-side applications need logs that are structured, persistent, and safe to write under load.

Logging is the foundation of observability. Without it, you’re flying blind.

The Built-in Console API

Node.js provides console as a global object with methods that write to the process’s standard streams.

Writing to stdout and stderr

console.log('Server started on port 3000')
// stdout: Server started on port 3000

console.error('Failed to connect to database')
// stderr: Failed to connect to database

console.warn('Deprecation notice: use authenticate() instead')
// stderr: Deprecation notice: use authenticate() instead

In Node.js:

  • console.log and console.info write to stdout
  • console.error and console.warn write to stderr
  • console.debug is an alias for console.log and also writes to stdout

Formatting

The console supports printf-style format specifiers:

console.log('User %s logged in at %d', 'alice', 1700000000)
// User alice logged in at 1700000000

console.log('Object: %j', { ok: true, count: 42 })
// Object: {"ok":true,"count":42}

Common specifiers:

  • %s — string
  • %d — number (integer or floating point)
  • %j — JSON (calls JSON.stringify)
  • %% — literal %

Timing

You can measure how long operations take:

console.time('database-query')

// ... do some async work ...
await queryDatabase()

console.timeLog('database-query')
// database-query: 23.456ms

console.timeEnd('database-query')
// database-query: 23.789ms
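console.time only prints the duration. When you need it as a value — to attach it to a structured log field, say — process.hrtime.bigint() gives nanosecond-resolution timestamps:

```javascript
// Measure elapsed time as a number instead of printing it.
const start = process.hrtime.bigint()

for (let i = 0; i < 1e6; i++) {}  // stand-in for real work

const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6
console.log(`work took ${elapsedMs.toFixed(3)}ms`)
```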

When console is synchronous vs asynchronous

This trips up many developers. Whether console.log blocks depends on the platform and on what stdout is connected to. On POSIX systems, writes to files and terminals (TTYs) are synchronous, while writes to pipes are asynchronous; on Windows, files and pipes are synchronous but TTYs are asynchronous. Synchronous writes block the event loop, so heavy console logging in a high-throughput server can cause unexpected slowdowns.
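You can check at runtime what stdout is connected to: process.stdout.isTTY is true in a terminal and undefined when output is piped or redirected:

```javascript
// isTTY is true when attached to a terminal, undefined otherwise.
if (process.stdout.isTTY) {
  console.log('stdout is a terminal')
} else {
  console.log('stdout is piped or redirected')
}
```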

Structured Logging with Pino

console.log is fine for development, but production Node.js needs structured logging. When something breaks at 3 AM, you want to search, filter, and aggregate your logs programmatically. That’s where Pino comes in.

Pino is among the fastest structured loggers for Node.js. It outputs JSON by default, which makes logs machine-readable and compatible with log aggregation tools.

Installation and Basic Usage

npm install pino

const pino = require('pino')
const logger = pino({ level: 'info' })

logger.info('Application started')
// {"level":30,"time":1700000000000,"pid":1234,"hostname":"server-1","msg":"Application started"}

The output is JSON. In production you pipe this to a log aggregator. In development, you can use pino-pretty to make it readable:

node app.js | npx pino-pretty

Log Levels

Pino uses six log levels, each with a numeric value:

Level   Value   When to use
trace   10      Detailed diagnostic information
debug   20      Debugging information
info    30      Normal operation confirmation
warn    40      Something unexpected happened
error   50      Error that needs attention
fatal   60      Application is crashing

logger.trace('Entering function with param=%d', value)
logger.debug('Cache miss for key: %s', key)
logger.info('Request processed in %dms', duration)
logger.warn('Connection pool running low: %d connections', available)
logger.error({ err }, 'Database query failed')
logger.fatal('Out of memory — shutting down')
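The numeric values are what make filtering cheap: the configured level is just a threshold compared on every call. A minimal sketch of the idea (illustrative only, not pino's internals):

```javascript
// Pino performs an equivalent comparison before doing any work.
const LEVELS = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 }

function shouldLog(configuredLevel, callLevel) {
  return LEVELS[callLevel] >= LEVELS[configuredLevel]
}

shouldLog('info', 'debug')  // false — suppressed
shouldLog('info', 'error')  // true — emitted
```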

Child Loggers

When you have multiple modules or want to add context to every log line, use child loggers:

const logger = pino({ level: 'info' })
const authLogger = logger.child({ module: 'auth' })
const dbLogger = logger.child({ module: 'database' })

authLogger.info('User authenticated')
// {"level":30,"module":"auth","msg":"User authenticated",...}

dbLogger.warn('Slow query detected')
// {"level":40,"module":"database","msg":"Slow query detected",...}

This makes it easy to filter logs by module in your log viewer.
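Conceptually, a child logger is just the parent plus a set of fixed bindings merged into every line. A dependency-free sketch of the idea (not pino's implementation):

```javascript
// Illustrative sketch: child() merges bindings into every log line.
function makeChild(baseBindings, extraBindings) {
  const bindings = { ...baseBindings, ...extraBindings }
  return {
    info: (msg) => JSON.stringify({ level: 30, ...bindings, msg })
  }
}

const authLogger = makeChild({ app: 'api' }, { module: 'auth' })
authLogger.info('User authenticated')
// '{"level":30,"app":"api","module":"auth","msg":"User authenticated"}'
```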

Adding Request Context

In a web server, you typically want every log line within a request to include the request ID:

const logger = require('pino')()

function handleRequest(req, res) {
  const requestLogger = logger.child({ requestId: req.id })
  
  requestLogger.info('Incoming request')
  // All log lines in this request now carry requestId
}

Flexible Logging with Winston

Winston is the most widely used logging library for Node.js. It’s slower than Pino but offers more transport options and a higher-level API.

Winston’s key concept is transports — destinations for your logs. A transport can be a file, an HTTP endpoint, a database, or the console.

Installation and Basic Usage

npm install winston

const winston = require('winston')

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/combined.log' })
  ]
})

logger.info('Server started', { port: 3000 })
// Writes JSON to logs/combined.log
// {"level":"info","message":"Server started","port":3000,"timestamp":"..."}

Console Transport in Development

In development you want human-readable output, not JSON:

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.combine(
      winston.format.colorize(),
      winston.format.simple()
    )
  }))
}

When to Use Winston vs Pino

Choose Winston when you need:

  • Many different transport targets
  • Easy integration with specific services (S3, HTTP endpoints)
  • A higher-level API with more defaults

Choose Pino when you need:

  • Maximum performance (Pino benchmarks several times faster than Winston)
  • Low overhead logging in hot paths
  • Native async transport support

Node.js Process Events

Node.js emits events when something goes wrong at the process level. You must handle these to log errors before the process exits.

Uncaught Exceptions

When an exception is thrown and not caught by any try/catch, Node emits uncaughtException (promise rejections are handled separately, via unhandledRejection):

process.on('uncaughtException', (err, origin) => {
  console.error('UNCAUGHT EXCEPTION:', err)
  // At this point your application is in an undefined state
  // Log and exit is the safest option
  process.exit(1)
})

// This throws and will trigger the handler above
throw new Error('Something broke')

Always exit after an uncaught exception. Your application state may be corrupted.

Unhandled Promise Rejections

When a Promise is rejected and you don’t attach a .catch(), Node emits unhandledRejection:

process.on('unhandledRejection', (reason, promise) => {
  console.error('UNHANDLED REJECTION:', reason)
})

// This creates an unhandled rejection
Promise.reject(new Error('Broken promise'))

In Node.js 15+, unhandled rejections cause the process to exit with a non-zero code by default. Always attach this handler.

The Exit Event

The exit event fires when the process is about to exit:

process.on('exit', (code) => {
  console.error(`Process exiting with code: ${code}`)
  // Only synchronous operations work here
})

Use exit for synchronous cleanup. For async cleanup, use beforeExit.

Log Levels and NODE_ENV

The NODE_ENV environment variable indicates what environment your application is running in. Node.js itself doesn’t interpret it, but by convention most libraries check it to enable or disable features:

// Express
if (process.env.NODE_ENV === 'production') {
  app.use(compression())
}

// Many logging libraries
// In production: JSON, info level
// In development: pretty, debug level

Set NODE_ENV=production in production deployments. This reduces debug log volume and can enable other optimizations.

Filtering by Level

const pino = require('pino')({
  level: process.env.NODE_ENV === 'production' ? 'info' : 'debug'
})

This keeps logs terse in production while giving you full detail during development.

Production Best Practices

Never Log Sensitive Data

This cannot be stressed enough. Never log:

  • Passwords or hashes
  • API keys or tokens
  • Credit card numbers
  • Social Security Numbers or national IDs
  • Personally identifiable information (PII)

If you need to log user data, log IDs, not the data itself:

// Bad
logger.info('User authenticated', { email: user.email, password: user.password })

// Good
logger.info('User authenticated', { userId: user.id })
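You can enforce this mechanically. Pino ships a redact option for stripping known paths; the idea itself is simple enough to sketch without dependencies (the field names here are illustrative):

```javascript
// Minimal redaction sketch: replace known-sensitive keys recursively
// before the object reaches the logger.
const SENSITIVE_KEYS = new Set(['password', 'token', 'apiKey', 'authorization'])

function redact(value) {
  if (value === null || typeof value !== 'object') return value
  const out = Array.isArray(value) ? [] : {}
  for (const [key, val] of Object.entries(value)) {
    out[key] = SENSITIVE_KEYS.has(key) ? '[Redacted]' : redact(val)
  }
  return out
}

redact({ userId: 7, password: 'hunter2', session: { token: 'abc123' } })
// → { userId: 7, password: '[Redacted]', session: { token: '[Redacted]' } }
```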

Use Async Transports

Synchronous logging blocks the event loop. Pino defaults to fast synchronous writes to stdout but supports asynchronous logging (pino.destination({ sync: false })) and worker-thread transports; Winston’s stream-based transports write asynchronously. Prefer asynchronous transports in production hot paths.

Log Rotation

Logs grow indefinitely. Use log rotation to manage disk space:

  • System-level: logrotate on Linux
  • Application-level: pino-roll for Pino, winston-daily-rotate-file for Winston

Consistent Field Names

Use consistent field names across your logs. This makes filtering in tools like Elasticsearch or Datadog straightforward:

logger.info({ event: 'user_login', userId: user.id, method: 'oauth' }, 'User logged in')
logger.info({ event: 'api_request', method: 'GET', path: '/api/users', status: 200 }, 'Request completed')
