# The Web Audio API

· 6 min read · Updated April 22, 2026 · intermediate
javascript web-audio audio audio-worklet media synthesiser

## Overview

The Web Audio API is a high-level JavaScript API for processing and synthesising audio in the browser. Where the `<audio>` element plays back recorded files, the Web Audio API lets you build audio from scratch — synthesisers, drum machines, real-time effects chains, and visualisations.

The architecture is a directed graph of audio nodes. You connect a source node to processing nodes (gain, filters, effects) and finally to a destination (usually your speakers). Every node does one job, and chaining them together is what makes the API powerful.

## The AudioContext

The AudioContext is the foundation. It represents the audio processing graph and owns all the nodes. Create one when you need audio:

```javascript
const audioCtx = new AudioContext();
```

On some browsers, the context starts in a suspended state until the user interacts with the page. Resume it explicitly:

```javascript
const audioCtx = new AudioContext();
if (audioCtx.state === "suspended") {
  audioCtx.resume();
}
```

## Playing a Sound File

Load and play an audio file through a buffer source:

```javascript
async function playSound(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);

  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start(0);
}
```

The `AudioBufferSourceNode` plays back the decoded audio. Once started, you can't restart it — create a new source node for each playback.
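
Replaying a sound therefore means building a fresh source node each time; the decoded `AudioBuffer` itself is reusable. A minimal sketch, assuming `audioCtx` and a decoded `audioBuffer` from the example above:

```javascript
// Source nodes are one-shot, but the decoded buffer can back many of them.
function replay(buffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;               // reuse the same decoded data
  source.connect(audioCtx.destination);
  source.start();                       // each playback gets its own node
}

replay(audioBuffer);
replay(audioBuffer);  // overlapping playback: new node, same buffer
```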

## Creating Sound with Oscillators

The `OscillatorNode` generates a tone at a specific frequency. It's the building block of synthesis:

```javascript
const osc = audioCtx.createOscillator();
osc.type = "sine";      // sine, square, sawtooth, triangle
osc.frequency.value = 440;  // A4 in Hz

osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 1);  // stop after 1 second
```

The oscillator runs until you call `stop()` — without an explicit stop, it plays forever.

You can schedule frequency changes precisely:

```javascript
osc.frequency.setValueAtTime(440, audioCtx.currentTime);         // at now
osc.frequency.setValueAtTime(880, audioCtx.currentTime + 0.5);   // at +0.5s
osc.frequency.linearRampToValueAtTime(1760, audioCtx.currentTime + 1);  // glide
```

## Controlling Volume with GainNode

`GainNode` controls volume. Its `gain` parameter is an `AudioParam` — you can set its `value` directly, but the scheduling methods give you sample-accurate automation:

```javascript
const gain = audioCtx.createGain();
gain.gain.setValueAtTime(0, audioCtx.currentTime);
gain.gain.linearRampToValueAtTime(1, audioCtx.currentTime + 0.1);  // fade in
gain.gain.setValueAtTime(1, audioCtx.currentTime + 2);
gain.gain.linearRampToValueAtTime(0, audioCtx.currentTime + 3);    // fade out
```

Connect the oscillator through the gain node to the destination:

```javascript
osc.connect(gain);
gain.connect(audioCtx.destination);
```

## The Audio Graph

Audio flows through a graph of connected nodes. A practical setup might look like:

```text
[Oscillator] → [Gain] → [Analyser] → [Destination (speakers)]
```

```javascript
const osc = audioCtx.createOscillator();
const gain = audioCtx.createGain();
const analyser = audioCtx.createAnalyser();

osc.connect(gain);
gain.connect(analyser);
analyser.connect(audioCtx.destination);

osc.start();
```

Multiple oscillators can feed into a single gain. A mixer is just several sources connected to one gain node.
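
That mixer pattern can be sketched as two oscillators feeding one gain node (frequencies chosen arbitrarily):

```javascript
// Two detuned oscillators summed through a single gain node.
const oscA = audioCtx.createOscillator();
const oscB = audioCtx.createOscillator();
const mix = audioCtx.createGain();

oscA.frequency.value = 440;
oscB.frequency.value = 443;      // slight detune for a beating effect
mix.gain.value = 0.5;            // halve the summed signal to avoid clipping

oscA.connect(mix);
oscB.connect(mix);               // multiple inputs to a node are summed
mix.connect(audioCtx.destination);

oscA.start();
oscB.start();
```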

## AnalyserNode for Visualisations

The `AnalyserNode` captures frequency and time-domain data for visualisations. It doesn't modify audio — it just exposes it for analysis:

```javascript
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);

function draw() {
  analyser.getByteFrequencyData(dataArray);
  // draw the frequency data array on a canvas
  requestAnimationFrame(draw);
}

osc.connect(analyser);
analyser.connect(audioCtx.destination);  // optional: only needed to also hear the tone
osc.start();
draw();
```

`getByteFrequencyData()` fills the array with values from 0 to 255 for each frequency bin. `getByteTimeDomainData()` gives you waveform data instead.
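
A waveform view is the same loop with the time-domain call swapped in — a sketch reusing the `analyser` from above:

```javascript
// Time-domain data fills up to fftSize samples, so size the array accordingly
// (frequency data only needs frequencyBinCount, i.e. fftSize / 2).
const waveform = new Uint8Array(analyser.fftSize);

function drawWaveform() {
  analyser.getByteTimeDomainData(waveform);
  // samples centre on 128 (silence); distance from 128 is amplitude
  // plot `waveform` as a connected line on a canvas here
  requestAnimationFrame(drawWaveform);
}

drawWaveform();
```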

## AudioWorklet for Custom Processing

The `AudioWorklet` replaces the deprecated `ScriptProcessorNode`. It runs your processing code off the main thread, on the dedicated audio rendering thread — essential for glitch-free real-time audio:

```javascript
// worklet-processor.js
class NoiseGateProcessor extends AudioWorkletProcessor {
  // Declare threshold as an AudioParam so the main thread can control it.
  static get parameterDescriptors() {
    return [{ name: "threshold", defaultValue: 0.01, minValue: 0, maxValue: 1 }];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const threshold = parameters.threshold[0];

    for (let channel = 0; channel < input.length; channel++) {
      const inputChannel = input[channel];
      const outputChannel = output[channel];
      for (let i = 0; i < inputChannel.length; i++) {
        outputChannel[i] = Math.abs(inputChannel[i]) > threshold
          ? inputChannel[i]
          : 0;
      }
    }
    return true;
  }
}

registerProcessor("noise-gate", NoiseGateProcessor);
```

```javascript
await audioCtx.audioWorklet.addModule("worklet-processor.js");

const noiseGate = new AudioWorkletNode(audioCtx, "noise-gate");
// mic: a source node, e.g. a MediaStreamAudioSourceNode from getUserMedia
mic.connect(noiseGate);
noiseGate.connect(audioCtx.destination);
```

The `process()` method is called by the audio thread — keep it fast and avoid allocating memory in it.
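
The `threshold` read in `process()` only exists as a parameter if the processor declares it in a static `parameterDescriptors` getter; assuming it does, the main thread adjusts it through the node's parameter map, and one-off configuration can travel over the message port (the `"reset"` message here is hypothetical):

```javascript
// Assumes NoiseGateProcessor declares a "threshold" entry in parameterDescriptors.
const threshold = noiseGate.parameters.get("threshold");
threshold.setValueAtTime(0.02, audioCtx.currentTime);  // sample-accurate change

// Structured messages reach the processor via its port (handler not shown):
noiseGate.port.postMessage({ type: "reset" });
```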

## Loading Audio Files

For short clips and one-shot samples, fetch the whole file up front and decode it with `decodeAudioData`:

```javascript
async function loadAudio(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  return audioCtx.decodeAudioData(arrayBuffer);
}
```

For very long files (music, podcasts), use a `MediaElementAudioSourceNode` instead of decoding the whole file into memory:

```javascript
const audioElement = document.querySelector("audio");
const source = audioCtx.createMediaElementSource(audioElement);
source.connect(audioCtx.destination);
```

This routes the `<audio>` element through the Web Audio API graph so you can process it with effects.
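
For example, a low-pass filter can sit between the element source and the speakers — a sketch assuming the `audioElement` and `source` from above, routed through the filter rather than straight to the destination:

```javascript
const filter = audioCtx.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 800;  // muffle content above ~800 Hz

source.connect(filter);        // instead of source.connect(audioCtx.destination)
filter.connect(audioCtx.destination);
audioElement.play();
```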

## BiquadFilterNode for EQ and Effects

`BiquadFilterNode` provides common filter types:

```javascript
const filter = audioCtx.createBiquadFilter();
filter.type = "lowpass";    // lowpass, highpass, bandpass, peaking, etc.
filter.frequency.value = 1000;  // cutoff frequency in Hz
filter.Q.value = 1;             // resonance

osc.connect(filter);
filter.connect(audioCtx.destination);
```

When you need to smooth out harsh high frequencies, a simple low-pass filter like this is the node to reach for.

## Stereo Panning

`StereoPannerNode` pans a signal left or right:

```javascript
const panner = audioCtx.createStereoPanner();
panner.pan.value = -1;  // full left
panner.pan.value = 0;   // centre
panner.pan.value = 1;   // full right
```
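
`pan` is an `AudioParam` like any other, so it can be automated — a sketch sweeping a source (any source node from the earlier examples) from left to right over two seconds:

```javascript
const panner = audioCtx.createStereoPanner();
source.connect(panner);
panner.connect(audioCtx.destination);

panner.pan.setValueAtTime(-1, audioCtx.currentTime);              // start hard left
panner.pan.linearRampToValueAtTime(1, audioCtx.currentTime + 2);  // sweep to right
```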

## ConvolverNode for Reverb

`ConvolverNode` applies a convolution reverb — perfect for room simulation or reverb effects:

```javascript
const convolver = audioCtx.createConvolver();
convolver.buffer = impulseResponseBuffer;  // load an IR file
source.connect(convolver);
convolver.connect(audioCtx.destination);
```

The impulse response buffer describes the acoustic space. Shorter buffers give tight room reverb; longer ones give large hall sounds.
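
Impulse responses are ordinary audio files, so loading one reuses the fetch-and-decode pattern from earlier (the IR path below is a placeholder):

```javascript
async function loadImpulseResponse(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  return audioCtx.decodeAudioData(arrayBuffer);
}

const convolver = audioCtx.createConvolver();
// "/audio/hall-ir.wav" is a hypothetical impulse response file
convolver.buffer = await loadImpulseResponse("/audio/hall-ir.wav");
```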

## Gotchas

**Audio context state.** Browsers suspend the audio context automatically when no interaction has happened. If your audio doesn't play, check `audioCtx.state` and call `resume()`.

**Garbage collecting nodes.** When you disconnect a node, it may be garbage collected if nothing else references it. Keep references to active nodes.

**Sample rate.** `AudioContext.sampleRate` reports the context's sample rate (usually the hardware rate). All nodes in a graph run at that rate, and nodes from different contexts can't be connected to each other.

**Mobile autoplay.** On iOS Safari, audio playback requires a user gesture. Call `audioCtx.resume()` inside a click or touch handler before any sound plays.
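
A common pattern is a one-time unlock on the first gesture:

```javascript
// Resume the context on the first click or touch, then remove the listener.
document.addEventListener("pointerdown", () => {
  if (audioCtx.state === "suspended") {
    audioCtx.resume();
  }
}, { once: true });
```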

## See Also

- [/guides/javascript-webgl-basics/](/guides/javascript-webgl-basics/) — WebGL for 3D graphics; combine with Web Audio for game audio
- [/guides/javascript-streams-api/](/guides/javascript-streams-api/) — the Streams API works alongside Web Audio for chunked audio processing
- [/guides/javascript-canvas-api/](/guides/javascript-canvas-api/) — visualise audio data by drawing analyser output on canvas