Resumable Uploads with the tus Protocol
The tus protocol is an open standard for resumable file uploads over HTTP. It handles the hard parts — chunking, resume, parallel upload, progress — so you don't have to. This guide covers client setup, server options, and testing with large sample files.
Why tus over homegrown
- Standardized — clients and servers from different vendors interoperate
- Battle-tested — Vimeo, Cloudflare, Supabase, and many others use it
- Handles edge cases — network drops, browser refresh, chunk retries
- Free and open — no vendor lock-in
How the protocol works
1. Client sends `POST /files` with an `Upload-Length` header
2. Server returns `Location: /files/abc123`
3. Client sends `PATCH /files/abc123` with a chunk and an `Upload-Offset` header
4. Server responds `204 No Content` with the new offset
5. Repeat steps 3-4 until `Upload-Offset === Upload-Length`
6. On resume, the client sends `HEAD /files/abc123` to get the current offset, then continues from there
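The steps above can be sketched in a few lines. `nextChunk` captures the byte-range math behind steps 3-5; the surrounding loop is a hypothetical illustration of step 6 (not tus-js-client code), with `headOffset` and `patchChunk` standing in for real HTTP calls.

```javascript
// Sketch of the PATCH loop from steps 3-5. The invariant: each PATCH body
// starts exactly at the server's current Upload-Offset.
function nextChunk(offset, length, chunkSize) {
  // Returns the [start, end) range of the next PATCH body, or null when done.
  if (offset >= length) return null;
  return [offset, Math.min(offset + chunkSize, length)];
}

// Hypothetical resume loop (step 6): HEAD for the offset, then PATCH chunks.
async function resumeUpload(length, chunkSize, headOffset, patchChunk) {
  let offset = await headOffset();       // HEAD /files/abc123 -> Upload-Offset
  let range;
  while ((range = nextChunk(offset, length, chunkSize)) !== null) {
    offset = await patchChunk(range[0], range[1]); // PATCH; server returns new offset
  }
  return offset;                         // === Upload-Length when complete
}
```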
Client: tus-js-client
```js
import * as tus from 'tus-js-client';

const upload = new tus.Upload(file, {
  endpoint: 'https://api.example.com/files/',
  retryDelays: [0, 3000, 5000, 10000, 20000],
  chunkSize: 5 * 1024 * 1024,
  metadata: {
    filename: file.name,
    filetype: file.type,
  },
  onError(error) {
    console.error('Upload failed:', error);
  },
  onProgress(uploaded, total) {
    console.log(`${(uploaded / total * 100).toFixed(1)}%`);
  },
  onSuccess() {
    console.log('Done:', upload.url);
  },
});

// Resume previous uploads automatically
upload.findPreviousUploads().then((prev) => {
  if (prev.length > 0) upload.resumeFromPreviousUpload(prev[0]);
  upload.start();
});
```
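Pausing is just `abort()` plus `start()`: `abort()` stops the in-flight requests without discarding the upload URL, so the next `start()` resumes from the last acknowledged offset. A minimal sketch of a pause toggle, assuming any object with tus-js-client's `start()`/`abort()` shape:

```javascript
// Toggle helper around a tus-js-client-style Upload (anything exposing
// start() and abort()). abort() keeps the upload URL, so start() resumes.
function makePauseToggle(upload) {
  let paused = false;
  return function toggle() {
    paused = !paused;
    if (paused) upload.abort();  // stop sending; server keeps the offset
    else upload.start();         // HEAD for the offset, then continue PATCHing
    return paused;
  };
}
```

Wire `toggle` to a button's click handler and it alternates between pausing and resuming.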
Server: tusd (official Go server)
```shell
# Single static binary with zero dependencies; also available as a Docker image
docker run -p 1080:1080 -v "$(pwd)/uploads":/data tusproject/tusd \
  -base-path /files/ \
  -upload-dir /data
```
Production-ready out of the box. Supports S3 backend, GCS, Azure Blob, and local disk.
Server: Node.js (@tus/server)
```js
import http from 'http';
import { Server } from '@tus/server';
import { FileStore } from '@tus/file-store';

const server = new Server({
  path: '/files',
  datastore: new FileStore({ directory: './uploads' }),
});

http.createServer(server.handle.bind(server)).listen(1080);
```
S3 integration
```js
import { Server } from '@tus/server';
import { S3Store } from '@tus/s3-store';

const server = new Server({
  path: '/files',
  datastore: new S3Store({
    partSize: 8 * 1024 * 1024,
    s3ClientConfig: {
      bucket: 'uploads',
      region: 'us-east-1',
    },
  }),
});
```
The S3 store uses S3 multipart upload under the hood, so the transfer is resumable even at the storage layer.
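When choosing `partSize`, keep S3's multipart limits in mind: every part except the last must be at least 5 MiB, and an upload can have at most 10,000 parts, so `partSize` caps the maximum object size. A quick sanity-check helper, as a sketch:

```javascript
// S3 multipart constraints: >= 5 MiB per part (except the last),
// <= 10,000 parts per upload.
const MIN_PART = 5 * 1024 * 1024;
const MAX_PARTS = 10000;

function partSizeOk(partSize, uploadLength) {
  if (partSize < MIN_PART) return false;
  return Math.ceil(uploadLength / partSize) <= MAX_PARTS;
}
```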
Testing with sample files
- 100MB — standard real-world test
- 500MB — exercises pause/resume across minutes
- 1GB — stress chunk boundaries and S3 part limits
- 500MB video — the realistic use case
Testing checklist
- Disconnect network mid-upload, reconnect — upload resumes from last chunk
- Close browser mid-upload, reopen — previous upload is detected and resumable
- Two concurrent uploads don't collide
- Server recovers a partial upload after restart
- Progress callbacks fire at least every second
- Retry delay backoff works (check network tab)
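The first two checklist items can also be exercised without a network: the resume guarantee reduces to "the server's offset survives, and the client continues from it". An in-memory simulation of that invariant, as a sketch (not real tus code):

```javascript
// Simulates the client loop against an in-memory "server" holding only an
// offset. Capping maxChunks models a network drop mid-upload; calling
// uploadFrom again models the resume (the HEAD step reads server.offset).
function uploadFrom(server, length, chunkSize, maxChunks = Infinity) {
  let offset = server.offset;               // HEAD: where to continue
  let sent = 0;
  while (offset < length && sent < maxChunks) {
    const end = Math.min(offset + chunkSize, length);
    server.offset = end;                    // PATCH accepted; offset advances
    offset = end;
    sent += 1;
  }
  return server.offset;
}
```

Dropping after two chunks and calling `uploadFrom` again finishes the transfer from the recorded offset, which is exactly what the network-level tests above verify.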
Related
For the chunking fundamentals, see the chunked video upload guide. For general large-file strategies, read the large file upload guide.