File Uploads: Client to Server to Storage
Getting a File from the User
The <input type="file"> element gives you a FileList via the input’s .files property. Each item in that list is a File object with .name, .size, .type, and .lastModified.
<input type="file" id="file-input" accept="image/*">
const input = document.getElementById('file-input');
input.addEventListener('change', () => {
const file = input.files[0]; // File object
console.log(file.name); // e.g. "photo.jpg"
console.log(file.size); // bytes
console.log(file.type); // e.g. "image/jpeg"
console.log(file.lastModified); // timestamp
});
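Since file.size is raw bytes, a small helper makes it readable in the UI (a sketch; formatBytes is not a built-in):

```javascript
// Format a byte count for display, e.g. in an upload list.
function formatBytes(bytes) {
  if (bytes === 0) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  const i = Math.min(Math.floor(Math.log2(bytes) / 10), units.length - 1);
  const value = bytes / 2 ** (10 * i);
  return `${value.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}
```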
To restrict which files the browser shows in the picker, use the accept attribute with MIME types or extensions:
<!-- Images only -->
<input type="file" accept="image/*">
<!-- Specific formats -->
<input type="file" accept=".jpg,.jpeg,.png,.webp">
<!-- Multiple files -->
<input type="file" accept="image/*" multiple>
Drag-and-drop uses the same FileList interface from event.dataTransfer.files:
const dropZone = document.getElementById('drop-zone');
['dragenter', 'dragover', 'dragleave', 'drop'].forEach(eventName => {
document.body.addEventListener(eventName, (e) => {
e.preventDefault();
e.stopPropagation();
}, { passive: false });
});
dropZone.addEventListener('drop', (e) => {
const files = e.dataTransfer.files; // FileList — same API as input.files
if (files.length) {
handleFile(files[0]);
}
});
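One caveat: the accept attribute does not constrain drag-and-drop, so dropped files must be filtered manually. A minimal sketch (filterByType is a hypothetical helper; it works on anything with a .type property):

```javascript
// Keep only files whose MIME type matches a prefix like "image/".
function filterByType(files, typePrefix) {
  return Array.from(files).filter(file => file.type.startsWith(typePrefix));
}
```

In the drop handler, call filterByType(e.dataTransfer.files, 'image/') before passing files on.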
To upload an entire directory (e.g., a folder of assets), use the webkitdirectory attribute:
<input type="file" id="folder-input" webkitdirectory multiple>
const input = document.getElementById('folder-input');
input.addEventListener('change', () => {
const files = Array.from(input.files);
// Files retain their full path relative to the selected directory
files.forEach(file => {
console.log(file.webkitRelativePath); // e.g. "assets/images/photo.jpg"
});
});
The File object is a Blob subclass, so you can slice it for chunked uploads or read it with FileReader.
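For example, Blob.prototype.slice can split a file into fixed-size pieces (a sketch; chunkBlob is not a built-in):

```javascript
// Split any Blob (including a File) into chunks of at most chunkSize bytes.
function chunkBlob(blob, chunkSize) {
  const chunks = [];
  for (let start = 0; start < blob.size; start += chunkSize) {
    chunks.push(blob.slice(start, Math.min(start + chunkSize, blob.size)));
  }
  return chunks;
}
```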
Building a FormData Payload
FormData bundles fields and files into multipart/form-data. Construct it, append your file with the original filename as the third argument, then send it with fetch().
const form = new FormData();
form.append('file', file, file.name); // filename third arg matters — server needs it
form.append('username', 'ada');
const response = await fetch('/api/upload', {
method: 'POST',
body: form
// NOTE: Do NOT set 'Content-Type' header here.
// The browser auto-generates the correct multipart boundary.
});
const data = await response.json();
console.log(data);
That last point is critical: never set Content-Type: multipart/form-data manually. When you do, the browser drops the boundary parameter, and the server cannot parse the body. Let the browser handle it.
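You can observe the boundary the platform generates by wrapping a FormData in a Response (works in browsers and Node.js 18+):

```javascript
// The platform serializes FormData and picks a unique boundary automatically.
const form = new FormData();
form.append('username', 'ada');
const contentType = new Response(form).headers.get('content-type');
console.log(contentType); // e.g. "multipart/form-data; boundary=----..."
```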
Sending JSON metadata alongside a file works the same way — just append a stringified JSON blob:
const form = new FormData();
form.append('metadata', JSON.stringify({ title: 'My Photo', category: 'travel' }));
form.append('file', photoFile, photoFile.name);
await fetch('/api/upload', { method: 'POST', body: form });
For forms with multiple distinct file fields (e.g., an avatar and a cover image), append each file to its own field name; on the Express side, multer's upload.fields() parses them:
// Client: append files to different field names
const form = new FormData();
form.append('avatar', avatarFile, avatarFile.name);
form.append('coverImage', coverFile, coverFile.name);
await fetch('/api/upload-profile', { method: 'POST', body: form });
Showing Upload Progress
fetch() has no upload progress events. When you need to show a progress bar, fall back to XMLHttpRequest.
function uploadWithProgress(file, url, onProgress) {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
xhr.upload.addEventListener('progress', (e) => {
if (e.lengthComputable) {
const percent = Math.round((e.loaded / e.total) * 100);
onProgress({ loaded: e.loaded, total: e.total, percent });
}
});
xhr.addEventListener('load', () => {
if (xhr.status >= 200 && xhr.status < 300) {
resolve(JSON.parse(xhr.responseText));
} else {
reject(new Error(`HTTP ${xhr.status}: ${xhr.responseText}`));
}
});
xhr.addEventListener('error', () => reject(new Error('Network failure')));
xhr.addEventListener('abort', () => reject(new Error('Upload aborted')));
xhr.addEventListener('timeout', () => reject(new Error('Upload timed out')));
xhr.open('POST', url);
xhr.timeout = 60_000; // without the 'timeout' listener above, a timeout would leave the promise pending forever
const form = new FormData();
form.append('file', file, file.name);
xhr.send(form);
});
}
// Usage
const progressBar = document.getElementById('progress');
const result = await uploadWithProgress(someFile, '/api/upload', ({ percent }) => {
progressBar.style.width = `${percent}%`;
});
For tracking download progress, read the response body with a ReadableStream (available in browsers and Node.js 18+):
const response = await fetch(url);
const total = Number(response.headers.get('content-length')); // 0 when the server omits the header
const reader = response.body.getReader();
let received = 0;
while (true) {
const { done, value } = await reader.read();
if (done) break;
received += value.length;
if (total) {
console.log(`Progress: ${Math.round((received / total) * 100)}%`);
} else {
console.log(`Received ${received} bytes`);
}
}
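The loop above can be wrapped into a reusable helper (a sketch; readWithProgress is not a standard API):

```javascript
// Read a Response body to completion, reporting bytes received as chunks arrive.
async function readWithProgress(response, onProgress) {
  const total = Number(response.headers.get('content-length')) || null;
  const reader = response.body.getReader();
  const chunks = [];
  let received = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    onProgress({ received, total });
  }
  // Concatenate the chunks into a single Uint8Array
  const body = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.length;
  }
  return body;
}
```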
Server-Side: Handling Multipart Uploads
multer is Express middleware for parsing multipart/form-data. It handles size limits, file filters, and gives you the file as a Buffer or written to disk.
const express = require('express');
const multer = require('multer');
const app = express();
const upload = multer({
storage: multer.memoryStorage(), // file available as req.file.buffer
limits: { fileSize: 5 * 1024 * 1024 }, // 5MB
fileFilter: (req, file, cb) => {
if (file.mimetype.startsWith('image/')) {
cb(null, true);
} else {
cb(new Error('Only image files are allowed'), false);
}
}
});
app.post('/api/upload', upload.single('file'), (req, res) => {
// req.file: { fieldname, originalname, mimetype, size, buffer, ... }
// req.body: other form fields
res.json({ filename: req.file.originalname, size: req.file.size });
});
memoryStorage() keeps the file as a Buffer in req.file.buffer. For files that might exceed available RAM, use diskStorage() instead:
const storage = multer.diskStorage({
destination: (req, file, cb) => cb(null, '/uploads/'),
filename: (req, file, cb) => {
const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1E9);
cb(null, uniqueSuffix + '-' + file.originalname);
}
});
const upload = multer({ storage });
If you need lower-level streaming without buffering the whole file in memory, busboy parses multipart incrementally:
const busboy = require('busboy');
app.post('/api/upload', (req, res) => {
const bb = busboy({
headers: req.headers,
limits: { fileSize: 50 * 1024 * 1024, files: 5 }
});
const files = [];
const fields = {};
bb.on('file', (name, stream, info) => {
const { filename, mimeType } = info;
const chunks = [];
stream.on('data', (chunk) => chunks.push(chunk));
stream.on('end', () => {
files.push({ name, filename, mimeType, buffer: Buffer.concat(chunks) });
});
});
bb.on('field', (name, value) => { fields[name] = value; });
bb.on('close', () => res.json({ fields, fileCount: files.length }));
bb.on('error', (err) => {
res.status(400).json({ error: 'PARSE_ERROR', message: err.message });
});
req.pipe(bb);
});
Note: this example buffers each file to memory before responding. For true streaming (piping to disk or S3 as chunks arrive), you’d pipe each stream directly rather than collecting chunks.
For non-Express servers, formidable is a popular alternative that works with any Node.js HTTP framework.
Storage Options
Local Filesystem
Generate a unique filename on every upload to prevent collisions, and sanitize the user-supplied name to block path traversal:
const fs = require('fs').promises;
const path = require('path');
const crypto = require('crypto');
async function saveToLocal(buffer, uploadDir, originalName) {
const ext = path.extname(originalName);
const safeName = path.basename(originalName, ext).replace(/[^a-zA-Z0-9_-]/g, '_');
const filename = `${Date.now()}-${crypto.randomUUID()}-${safeName}${ext}`;
const filepath = path.join(uploadDir, filename);
await fs.writeFile(filepath, buffer);
return { filepath, filename };
}
AWS S3
For production, upload to S3 instead of the local filesystem. The server-side approach sends the buffer directly using the AWS SDK v3:
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const s3 = new S3Client({
region: process.env.AWS_REGION,
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
async function uploadToS3(buffer, filename, contentType) {
const command = new PutObjectCommand({
Bucket: process.env.S3_BUCKET,
Key: `uploads/${Date.now()}-${filename}`,
Body: buffer,
ContentType: contentType
});
await s3.send(command);
}
Memory (small files only)
For tiny files like avatars, memory storage is simplest. Just be aware it consumes RAM per upload:
const uploadMemory = multer({
storage: multer.memoryStorage(),
limits: { fileSize: 2 * 1024 * 1024 } // 2MB for avatars
});
const crypto = require('crypto');
const uploads = new Map(); // demo only: entries are never evicted, so this grows unbounded
app.post('/api/avatar', uploadMemory.single('avatar'), (req, res) => {
const uploadId = crypto.randomUUID();
uploads.set(uploadId, {
buffer: req.file.buffer,
metadata: { name: req.file.originalname, type: req.file.mimetype }
});
res.json({ uploadId, url: `/avatar/${uploadId}` });
});
S3 Presigned URLs End-to-End
Presigned URLs let the browser upload directly to S3 — the file never touches your server. The server only generates a short-lived URL; the client uploads straight to AWS.
Server: Generate the Presigned URL
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
app.post('/api/presign', async (req, res) => {
const { filename, contentType, size } = req.body;
const s3 = new S3Client({ region: process.env.AWS_REGION });
const safeName = sanitizeFilename(filename).replace(/[^a-zA-Z0-9._-]/g, '_');
const key = `uploads/${Date.now()}-${safeName}`;
const command = new PutObjectCommand({
Bucket: process.env.S3_BUCKET,
Key: key,
ContentType: contentType,
ContentLength: size
});
const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 3600 });
res.json({ presignedUrl, key, bucket: process.env.S3_BUCKET });
});
Client: Upload Directly to S3
async function uploadDirectToS3(file) {
// 1. Get presigned URL from your server
const { presignedUrl, key, bucket } = await fetch('/api/presign', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
filename: file.name,
contentType: file.type,
size: file.size
})
}).then(r => r.json());
// 2. Upload file straight to S3 — your server is out of the loop
const uploadResponse = await fetch(presignedUrl, {
method: 'PUT',
body: file,
headers: { 'Content-Type': file.type }
});
// fetch resolves even on HTTP errors, so check the status explicitly
if (!uploadResponse.ok) throw new Error(`S3 upload failed: HTTP ${uploadResponse.status}`);
return { key, url: `https://${bucket}.s3.amazonaws.com/${key}` };
}
Your S3 bucket needs CORS rules that allow the upload origin:
{
"CORSRules": [
{
"AllowedMethods": ["POST", "PUT"],
"AllowedOrigins": ["https://yourdomain.com"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3600
}
]
}
Handling Expired Presigned URLs
Presigned URLs expire. If the user pauses, loses connection, or the upload takes longer than the expiresIn window, the PUT will fail with a 403. For resumable uploads, generate a fresh presigned URL and retry:
async function uploadWithRetry(file, maxAttempts = 3) {
for (let attempt = 0; attempt < maxAttempts; attempt++) {
try {
const { presignedUrl, key, bucket } = await fetch('/api/presign', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ filename: file.name, contentType: file.type, size: file.size })
}).then(r => r.json());
const uploadResponse = await fetch(presignedUrl, {
method: 'PUT',
body: file,
headers: { 'Content-Type': file.type }
});
// A 403 from an expired URL does not reject the fetch promise; throw so the retry loop sees it
if (!uploadResponse.ok) throw new Error(`S3 upload failed: HTTP ${uploadResponse.status}`);
return { key, url: `https://${bucket}.s3.amazonaws.com/${key}` };
} catch (err) {
if (attempt === maxAttempts - 1) throw err;
// Wait before retrying with a fresh URL
await new Promise(r => setTimeout(r, 1000 * (attempt + 1)));
}
}
}
For production resumable uploads, consider the tus protocol — an open standard for resumable file uploads that handles this at the protocol level rather than with custom retry logic.
Security: Validating Uploaded Files
Client-side validation is for user experience only. Always validate on the server — the client can be bypassed.
File Type: Check Magic Bytes
Never trust file.type from the browser; it’s user-controllable. Instead, verify the file’s magic bytes (file signature):
const MAGIC_BYTES = {
'jpeg': [[0xFF, 0xD8, 0xFF]],
'png': [[0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]],
'gif': [[0x47, 0x49, 0x46, 0x38, 0x37, 0x61], [0x47, 0x49, 0x46, 0x38, 0x39, 0x61]],
'pdf': [[0x25, 0x50, 0x44, 0x46]]
};
function validateMagicBytes(buffer) {
for (const [type, signatures] of Object.entries(MAGIC_BYTES)) {
for (const sig of signatures) {
const header = buffer.slice(0, sig.length);
if (sig.every((byte, i) => header[i] === byte)) return type;
}
}
return null;
}
Filename Sanitization
Raw filenames from the client can contain path traversal sequences like ../../etc/passwd. Strip everything except the basename:
const path = require('path');
function sanitizeFilename(rawFilename) {
const basename = path.basename(rawFilename);
const cleaned = basename.replace(/\0/g, '').replace(/[/\\:*?"<>|]/g, '_');
return cleaned || 'unnamed';
}
const safeName = `${Date.now()}-${crypto.randomUUID()}-${sanitizeFilename(raw)}`;
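One more edge case worth guarding: Windows reserved device names like CON or NUL are valid-looking basenames that misbehave on Windows filesystems, and path.basename() will not catch them. A hedged sketch of an extra check:

```javascript
// Reject Windows reserved device names (CON, PRN, AUX, NUL, COM1-9, LPT1-9),
// with or without an extension.
const RESERVED = /^(con|prn|aux|nul|com[1-9]|lpt[1-9])(\..*)?$/i;

function isReservedName(filename) {
  return RESERVED.test(filename);
}
```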
Zip Bomb Protection
Compressed files can expand to massive sizes on decompression. Check the compression ratio:
function checkCompressionRatio(compressedSize, decompressedSize) {
const MAX_RATIO = 10; // Allow 10x expansion max
if (decompressedSize / compressedSize > MAX_RATIO) {
throw new Error('Suspicious compression ratio — possible zip bomb');
}
}
Combining It All
app.post('/api/upload',
upload.single('file'),
(req, res) => {
if (!req.file) {
return res.status(400).json({ error: 'NO_FILE', message: 'No file uploaded' });
}
const MAX_SIZE = 5 * 1024 * 1024;
if (req.file.size > MAX_SIZE) {
return res.status(413).json({ error: 'FILE_TOO_LARGE', message: 'File exceeds 5MB limit' });
}
const validType = validateMagicBytes(req.file.buffer);
if (!validType) {
return res.status(415).json({ error: 'INVALID_TYPE', message: 'File type not allowed' });
}
const ALLOWED_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.gif']; // .webp omitted: MAGIC_BYTES above has no WebP signature
if (!ALLOWED_EXTENSIONS.includes(path.extname(req.file.originalname).toLowerCase())) {
return res.status(415).json({ error: 'INVALID_EXTENSION', message: 'Extension not allowed' });
}
// Safe to process...
res.json({ ok: true, type: validType });
}
);
Security: CSRF and Serving Uploads
File upload endpoints are CSRF targets. Validate the Origin or Referer header:
app.post('/api/upload', (req, res, next) => {
const origin = req.headers.origin;
const allowedOrigins = ['https://yourdomain.com'];
if (!allowedOrigins.includes(origin)) {
return res.status(403).json({ error: 'FORBIDDEN', message: 'Invalid origin' });
}
next();
}, upload.single('file'), (req, res) => {
res.json({ ok: true });
});
When serving uploaded files back to users, never set Content-Disposition: inline for unknown files — an uploaded HTML file can execute as XSS. Force download instead:
app.get('/files/:filename', (req, res) => {
// basename() strips any traversal sequences that survive URL decoding (e.g. "..%2Fsecret")
const safeName = path.basename(req.params.filename);
const filePath = path.join('/secure/uploads', safeName);
res.setHeader('Content-Disposition', `attachment; filename="${sanitizeFilename(safeName)}"`);
res.setHeader('Content-Type', 'application/octet-stream');
res.sendFile(filePath);
});
Store files outside the web root or in cloud storage — never in public/.
Common Patterns
Multiple File Upload
const input = document.querySelector('input[type="file"]');
input.addEventListener('change', async () => {
const files = Array.from(input.files);
const results = await Promise.allSettled(
files.map(file => {
const form = new FormData();
form.append('file', file, file.name);
return fetch('/api/upload', { method: 'POST', body: form }).then(r => {
if (!r.ok) throw new Error(`HTTP ${r.status}`); // HTTP errors should count as rejections
return r.json();
});
})
);
const succeeded = results.filter(r => r.status === 'fulfilled');
const failed = results.filter(r => r.status === 'rejected');
console.log(`Uploaded ${succeeded.length}/${files.length} files`);
failed.forEach(({ reason }) => console.error('Failed:', reason));
});
Promise.allSettled is important here — it waits for all uploads to finish rather than stopping at the first failure.
Image Preview Before Upload
const fileInput = document.querySelector('input[type="file"]');
const preview = document.getElementById('preview');
fileInput.addEventListener('change', () => {
const file = fileInput.files[0];
if (!file) return;
const reader = new FileReader();
reader.onload = (e) => {
preview.src = e.target.result;
preview.style.display = 'block';
};
reader.readAsDataURL(file);
});
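For large images, URL.createObjectURL avoids the base64 copy a data URL makes; it hands the <img> a reference to the Blob instead (a sketch; previewUrlFor is a hypothetical helper):

```javascript
// Show a local preview without reading the file into a data URL.
// Revoke the previous URL so its Blob can be garbage-collected.
let currentPreviewUrl = null;

function previewUrlFor(file) {
  if (currentPreviewUrl) URL.revokeObjectURL(currentPreviewUrl);
  currentPreviewUrl = URL.createObjectURL(file);
  return currentPreviewUrl; // assign to preview.src in the change handler
}
```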
Chunked Upload for Large Files
For files over 100MB or unreliable connections, split into chunks with retry logic:
const CHUNK_SIZE = 5 * 1024 * 1024; // 5MB
async function uploadInChunks(file, uploadId) {
const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
for (let i = 0; i < totalChunks; i++) {
const start = i * CHUNK_SIZE;
const end = Math.min(start + CHUNK_SIZE, file.size);
const chunk = file.slice(start, end);
let attempt = 0;
const maxAttempts = 3;
while (attempt < maxAttempts) {
try {
const form = new FormData();
form.append('chunk', chunk, `${uploadId}-${i}`);
form.append('uploadId', uploadId);
form.append('chunkIndex', i);
form.append('totalChunks', totalChunks);
const res = await fetch('/api/upload-chunk', {
method: 'POST',
body: form,
signal: AbortSignal.timeout(60_000)
});
// Treat HTTP errors as failures so the retry loop kicks in
if (!res.ok) throw new Error(`HTTP ${res.status}`);
break;
} catch (err) {
attempt++;
if (attempt === maxAttempts) throw new Error(`Chunk ${i} failed after ${maxAttempts} attempts`);
await new Promise(r => setTimeout(r, 1000 * attempt)); // backoff
}
}
const percent = Math.round(((i + 1) / totalChunks) * 100);
console.log(`Progress: ${percent}%`);
}
}
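The slicing arithmetic is easy to get off by one; factoring it out makes it unit-testable (chunkRanges is a hypothetical helper mirroring the loop above):

```javascript
// Compute [start, end) byte ranges covering a file of `size` bytes.
function chunkRanges(size, chunkSize) {
  const ranges = [];
  for (let start = 0; start < size; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, size) });
  }
  return ranges;
}
```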
Server-Side Chunk Assembly
The server needs an endpoint to receive chunks and assemble them:
const uploads = new Map(); // uploadId -> { chunks: [], totalChunks: number }
app.post('/api/upload-chunk', upload.single('chunk'), (req, res) => {
const { uploadId, chunkIndex, totalChunks } = req.body;
if (!uploads.has(uploadId)) {
uploads.set(uploadId, { chunks: [], totalChunks: parseInt(totalChunks) });
}
const entry = uploads.get(uploadId); // named `entry` to avoid shadowing the multer `upload` middleware
entry.chunks[parseInt(chunkIndex)] = req.file.buffer;
const received = entry.chunks.filter(Boolean).length;
if (received === entry.totalChunks) {
// All chunks received — assemble
const complete = Buffer.concat(entry.chunks);
// Save to storage, upload to S3, etc.
uploads.delete(uploadId);
res.json({ ok: true, size: complete.length });
} else {
res.json({ ok: true, received, total: entry.totalChunks });
}
});
Error Handling
Distinguish between error types so the UI can react appropriately:
class UploadError extends Error {
constructor(code, message) {
super(message);
this.code = code;
}
}
async function uploadFile(file) {
const form = new FormData();
form.append('file', file, file.name);
const response = await fetch('/api/upload', {
method: 'POST',
body: form,
signal: AbortSignal.timeout(30_000)
});
if (!response.ok) {
if (response.status === 413) throw new UploadError('FILE_TOO_LARGE', 'File exceeds server limit');
if (response.status === 415) throw new UploadError('INVALID_TYPE', 'File type not supported');
if (response.status === 429) throw new UploadError('RATE_LIMITED', 'Too many uploads');
throw new UploadError('SERVER_ERROR', `HTTP ${response.status}`);
}
return response.json();
}
Retry transient failures with exponential backoff — but never retry validation errors, which will fail again:
async function uploadWithRetry(file, maxAttempts = 3) {
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
return await uploadFile(file);
} catch (err) {
if (err instanceof UploadError && ['FILE_TOO_LARGE', 'INVALID_TYPE'].includes(err.code)) {
throw err; // Will fail again — don't retry
}
if (attempt === maxAttempts) throw err; // out of attempts: surface the error instead of returning undefined
const delay = 1000 * Math.pow(2, attempt - 1) + Math.random() * 500;
await new Promise(r => setTimeout(r, delay));
}
}
}
Timeouts are already handled inside uploadFile by AbortSignal.timeout(30_000). Note that it aborts with a TimeoutError, not an AbortError, so check for that name when catching:
const someFile = document.getElementById('file-input').files[0];
try {
const result = await uploadWithRetry(someFile, 3);
console.log('Uploaded:', result);
} catch (err) {
if (err.name === 'TimeoutError') console.error('Upload timed out');
else console.error('Upload failed:', err);
}
S3 Lifecycle and Cleanup
If an upload fails partway through a direct-to-S3 upload, orphaned objects are left in your bucket. Configure a lifecycle rule in S3 to delete incomplete multipart uploads after a set period:
{
"Rules": [{
"ID": "Abort-incomplete-multipart-uploads",
"Status": "Enabled",
"Filter": { "Prefix": "uploads/" },
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}]
}
This applies to multipart uploads initiated through the S3 API. For presigned PUT uploads (the direct-from-browser approach), incomplete uploads are less common since the browser sends the full content in one request.
See Also
- /reference/async-apis/fetch/ — the browser API that sends uploads
- /reference/node-modules/path/ — path.basename, path.extname, path.join for filename handling
- /reference/node-modules/crypto/ — crypto.randomUUID for generating unique IDs
- /reference/async-apis/promise-all-settled/ — handling multiple concurrent uploads
- FormData on MDN — browser API reference
- tus resumable upload protocol — open standard for resumable uploads
- @aws-sdk/lib-storage — S3 multipart upload client for Node.js with progress events