Posts tagged with javascript

Download files with progress in Electron via window.fetch

Working on Atom lately, I’ve needed to download files to disk. We have ways to achieve this, but none of them show download progress, which leads to confusion and sometimes frustration on larger downloads such as updates or large packages.

There are many npm libraries out there, but they either don’t expose a progress indicator, or they bypass Chrome (thus not using proxy settings, caching and network inspector) by using Node directly.

I’m also not a fan of sprawling dependencies to achieve what can be done simply in a function or two.

Hello window.fetch

window.fetch is a replacement for XMLHttpRequest currently shipping in Chrome (and therefore Electron) as well as a WHATWG living standard. There is some documentation around, but most of it grabs the entire content as JSON, a blob, or text, none of which is advisable when streaming large files: you want to minimize memory impact and also display a progress indicator to your users.
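As a sketch of the problem (the payload here is a hypothetical stand-in for a fetch() result), the typical examples buffer the whole body before you can touch it, leaving nothing to hook a progress indicator onto:

```javascript
// The usual documented pattern: read the entire body in one go.
// With a large download, the whole file sits in memory at once and
// no progress can be reported until it has finished.
const response = new Response('{"ok":true}'); // stand-in for a fetch() result

response.json().then((data) => {
  console.log(data.ok); // the full body was buffered before this ran
});
```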

Thankfully the response body exposed by window.fetch has a getReader() function that gives you a ReadableStreamReader, which reads in chunks (32KB on my machine) but isn’t compatible with Node’s streams, pipes, and data events.
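The pull-based shape of that reader can be sketched like this (drain is a hypothetical helper, not part of the downloader below): you await read() for {done, value} pairs rather than listening for Node-style data events.

```javascript
// A WHATWG reader is pull-based: each await reader.read() yields a
// {done, value} pair, where value is a Uint8Array chunk.
async function drain(reader) {
  let total = 0;
  while (true) {
    const {done, value} = await reader.read();
    if (done) return total;
    total += value.byteLength;
  }
}

// Any Response body gives you a reader to try it with:
drain(new Response(new Uint8Array(64 * 1024)).body.getReader())
  .then((total) => console.log(`read ${total} bytes`));
```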

Download function

With a little effort, we can wire these two things up to get a file downloader that has no extra dependencies outside of Electron, honours the Chrome cache, proxy settings and network inspector and, best of all, is incredibly easy to use:

import fs from 'fs';

export default async function download(sourceUrl, targetFile, progressCallback, length) {
  const request = new Request(sourceUrl, {
    headers: new Headers({'Content-Type': 'application/octet-stream'})
  });

  const response = await fetch(request);
  if (!response.ok) {
    throw Error(`Unable to download, server returned ${response.status} ${response.statusText}`);
  }

  const body = response.body;
  if (body == null) {
    throw Error('No response body');
  }

  const finalLength = length || parseInt(response.headers.get('Content-Length') || '0', 10);
  const reader = body.getReader();
  const writer = fs.createWriteStream(targetFile);

  await streamWithProgress(finalLength, reader, writer, progressCallback);
  writer.end();
}

async function streamWithProgress(length, reader, writer, progressCallback) {
  let bytesDone = 0;

  while (true) {
    const result = await reader.read();
    if (result.done) {
      if (progressCallback != null) {
        progressCallback(length, 100);
      }
      return;
    }

    const chunk = result.value;
    if (chunk == null) {
      throw Error('Empty chunk received during download');
    } else {
      writer.write(Buffer.from(chunk));
      if (progressCallback != null) {
        bytesDone += chunk.byteLength;
        const percent = length === 0 ? null : Math.floor(bytesDone / length * 100);
        progressCallback(bytesDone, percent);
      }
    }
  }
}

A FlowType annotated version is also available.

Using it

Using it is simple. Call it with a URL to download, a local file name to save it as, and an optional callback to receive download progress:

download(sourceUrl, targetFile, (bytes, percent) => console.log(`Downloaded ${bytes} (${percent})`));


Some servers do not send the Content-Length header. You have two options if this applies to you:

  1. Don’t display a percentage - just the KB downloaded count (the percentage is null in the callback)
  2. Bake-in the file size if it’s a static URL - pass it in as the final parameter to the download function
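A callback along these lines copes with both cases (formatProgress is a hypothetical helper, not part of the code above): when no total length is known, percent arrives as null, so fall back to a raw byte count.

```javascript
// Fall back to a byte count when no total length is known (percent is null).
function formatProgress(bytes, percent) {
  return percent == null ? `${bytes} bytes so far` : `${percent}%`;
}

console.log(formatProgress(2048, null)); // "2048 bytes so far"
console.log(formatProgress(2048, 50));   // "50%"

// For a static URL, baking the size in might look like (hypothetical values):
// download(sourceUrl, targetFile, callback, 5 * 1024 * 1024); // known 5MB file
```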



Shrinking JS or CSS is premature optimization

Rick Strahl has a post on a JavaScript minifier utility whose sole job is to shrink the size of your JavaScript, whilst making it almost impossible to read, in order to save a few kilobytes. I thought I’d take a quick look at what the gain would be and fed it the latest version (1.6) of the very popular Prototype library:

                  File (KB)   GZip (KB)
Standard          121.0       26.7
Shrunk/minified   90.5       22.0
Saving            30.7       4.7

The 30.7 KB saving looks great at first glance, but bear in mind that external JavaScript files are cached on the client between page requests, and it loses some appeal. If you also consider that most browsers and clients support GZip compression, where the saving is around 4.7 KB, you might wonder if you are wasting your time. In computer science there is a term for blindly attempting to optimize systems without adequate measurement or justification, and that term is premature optimization. As Sir Tony Hoare wrote (and Donald Knuth paraphrased):

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

And he was working on computers throughout the 60s and 70s that had far fewer resources than those of today. By all means, if your server bandwidth is an issue, delve into the stats, identify the cause and take it from there. Yahoo’s YSlow plug-in for Firefox/Firebug is a great starting point, but remember to analyze the statistics from your own context.

Rick’s tool had shortcomings with non-ASCII characters such as accents and non-US currency symbols, which goes to show how optimization can have other unintended and undesirable effects.