
Running Super-Resolution in the Browser with TensorFlow.js

tensorflow-js · super-resolution · webgl · deep-learning

I've been working on getting super-resolution models to run in the browser for a while now. After a lot of trial and error (mostly error), I've got a pipeline that actually works well. Here's the full walkthrough.

Why the Browser?

  • Zero server cost: inference runs on the user's GPU
  • No installation: works on any device with a modern browser
  • Privacy: images never leave the device
  • Low latency: no network round trip

The trade-off is that you're limited by the user's hardware, so you need smaller, faster models than what you'd run on a server. Understanding how WebGL powers GPU-accelerated inference under the hood helps a lot when you're optimizing for this environment.
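
TF.js picks a backend automatically, and on most machines that's WebGL, but it can silently fall back to the CPU backend when WebGL isn't available, which changes the performance picture entirely. A minimal check (assuming the standard @tensorflow/tfjs bundle) looks like this:

import * as tf from '@tensorflow/tfjs';

// Make sure a backend is initialized before loading the model or timing anything
await tf.ready();

// 'webgl' on most desktops and phones; 'cpu' means every inference will be slow
console.log('TF.js backend:', tf.getBackend());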

Step 1: Train or Pick a Model

For browser deployment, you want a lightweight SR model. Full ESRGAN is too heavy (I covered the full SR model landscape from SRCNN to ESRGAN in another post). I've had good results with:

  • ESPCN (Efficient Sub-Pixel CNN): tiny, fast, decent 2x quality
  • A pruned/distilled version of ESRGAN
  • Custom architectures with fewer RRDB blocks (2-4 instead of 23)

Train in PyTorch or TensorFlow as usual. The model needs to be under ~5MB after conversion (roughly 2.5M weights at float16) for reasonable load times.

Step 2: Convert to TensorFlow.js Format

If you trained in TensorFlow/Keras:

tensorflowjs_converter \
  --input_format=tf_saved_model \
  --output_format=tfjs_graph_model \
  --quantize_float16 \
  ./saved_model \
  ./tfjs_model

The --quantize_float16 flag cuts model size in half with negligible quality loss. Always use it. For a deeper look at quantization techniques beyond float16, including INT8 with real benchmarks, see my model quantization guide.

If you trained in PyTorch, export to ONNX first (torch.onnx.export), convert the ONNX graph to a TF SavedModel (the onnx-tf package can do this), then run tensorflowjs_converter on the result. It's annoying, but it works.

Step 3: Load and Run

import * as tf from '@tensorflow/tfjs';
 
// Load model
const model = await tf.loadGraphModel('/models/sr_model/model.json');
 
// Warm up (first inference compiles shaders)
const warmup = tf.zeros([1, 64, 64, 3]);
model.predict(warmup).dispose();
warmup.dispose();
 
// Run super-resolution on an image
async function upscale(imgElement) {
  return tf.tidy(() => {
    // Normalize to [0, 1] to match the training preprocessing
    let input = tf.browser.fromPixels(imgElement);
    input = input.toFloat().div(255.0);
    input = input.expandDims(0); // add the batch dimension
 
    const output = model.predict(input);
    return output.squeeze().clipByValue(0, 1).mul(255).cast('int32');
  });
}

tf.tidy() is your best friend here. It automatically disposes intermediate tensors so you don't leak memory.

Step 4: Display the Result

const outputTensor = await upscale(sourceImage);
await tf.browser.toPixels(outputTensor, outputCanvas);
outputTensor.dispose();

That's it. The output renders directly to a canvas element.

Performance Tips

Tile large images. If the input is bigger than ~512x512, split it into overlapping tiles, process each one, and stitch the results. This avoids hitting WebGL texture limits and reduces peak memory usage.
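
Here's a minimal sketch of that loop, minus the overlap blending (tiles simply butt against each other, so you may see faint seams; in practice you'd add a margin and feather the edges). The upscaleTiled name and tileSize default are mine, and it assumes the model accepts variable spatial sizes:

function upscaleTiled(model, input, tileSize = 256) {
  // input: [h, w, 3] float tensor in [0, 1], like the normalized tensor in Step 3
  const [h, w] = input.shape;
  const rows = [];
  for (let y = 0; y < h; y += tileSize) {
    const cols = [];
    for (let x = 0; x < w; x += tileSize) {
      const th = Math.min(tileSize, h - y);
      const tw = Math.min(tileSize, w - x);
      // Upscale one tile; tf.tidy cleans up the intermediates
      const outTile = tf.tidy(() =>
        model.predict(input.slice([y, x, 0], [th, tw, 3]).expandDims(0)).squeeze()
      );
      cols.push(outTile);
    }
    rows.push(tf.concat(cols, 1)); // stitch tiles along the width
    cols.forEach((t) => t.dispose());
  }
  const result = tf.concat(rows, 0); // stitch rows along the height
  rows.forEach((t) => t.dispose());
  return result;
}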

Use requestAnimationFrame for video. If you're processing video frames, sync with the browser's render loop to avoid jank.
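
Here's a rough version of that loop, reusing the upscale() function from Step 3 (the video and canvas lookups are placeholders). Because the next frame is only scheduled after the current one finishes, slow inference drops frames instead of piling up work:

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');

async function processFrame() {
  if (!video.paused && !video.ended) {
    const output = await upscale(video); // fromPixels accepts a video element directly
    await tf.browser.toPixels(output, canvas);
    output.dispose();
  }
  requestAnimationFrame(processFrame);
}

requestAnimationFrame(processFrame);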

Test on real devices. A MacBook Pro with a discrete GPU will fly. A budget Android phone will struggle. Test on the devices your users actually have.

Real Numbers

On my setup (2020 MacBook Pro, Chrome):

Model               Input Size   Output Size   Inference Time
ESPCN (2x)          256x256      512x512       ~15ms
Small ESRGAN (2x)   256x256      512x512       ~85ms
Small ESRGAN (4x)   128x128      512x512       ~120ms

The ESPCN model is fast enough for real-time video. The ESRGAN variant is better quality but only suitable for single images.

Browser-based super-resolution is real and practical today. The models are small enough to ship, the inference is fast enough to be useful, and the user experience is frictionless. No app install, no account creation, no data uploaded anywhere. If you want to go further with TF.js deployment beyond SR, I wrote a more complete guide on deploying ML models to the browser covering the full pipeline from Python to production.