JavaScript articles

Download files with progress in Electron via window.fetch  

Working on Atom lately I've needed to be able to download files to disk. We have a couple of ways to do this today, but they do not show download progress, which leads to confusion and sometimes frustration on larger downloads such as updates or big packages.

There are many npm libraries out there, but they either don't expose a progress indicator or they bypass Chrome (and therefore proxy settings, caching and the network inspector) by using Node directly.

I’m also not a fan of sprawling dependencies to achieve what can be done simply in a function or two.

Hello window.fetch

window.fetch is a replacement for XMLHttpRequest that is currently shipping in Chrome (and therefore Electron) and is also a WHATWG living standard. While there is some documentation around, most of it relies on grabbing the entire content as JSON, a blob or text. That is not advisable for streaming, where the files might be large and you want to not only minimize the memory impact but also display a progress indicator to your users.
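For illustration, the typical non-streaming usage looks something like this (the URL is just a placeholder):

// The whole body is buffered in memory before you can touch it,
// and there is no way to report progress along the way.
const response = await fetch('https://example.com/some-large-file.zip');
const blob = await response.blob(); // the entire file is now held in memory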

Thankfully the response body exposed by window.fetch has a getReader() function that will give you a ReadableStreamReader, although this reads in chunks (32KB on my machine) and isn't compatible with Node's streams, pipes and data events.
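As a rough sketch (again with a placeholder URL), each read() resolves with a Uint8Array chunk that can be bridged over to Node with Buffer.from:

const response = await fetch('https://example.com/some-large-file.zip');
const reader = response.body.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;                       // stream finished
  const nodeBuffer = Buffer.from(value); // value is a Uint8Array chunk
  // ...write nodeBuffer somewhere and update progress here
}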

Download function

With a little work though we can wire these two things up to get a file downloader that has no extra dependencies outside of Electron, honors the Chrome cache, proxy and network inspector, and best of all is incredibly easy to use:

import fs from 'fs';

export default async function download(sourceUrl, targetFile, progressCallback, length) {
  const request = new Request(sourceUrl, {
    headers: new Headers({'Content-Type': 'application/octet-stream'})
  });

  const response = await fetch(request);
  if (!response.ok) throw Error(`Unable to download, server returned ${response.status} ${response.statusText}`);

  const body = response.body;
  if (body == null) throw Error('No response body');

  length = length || parseInt(response.headers.get('Content-Length') || '0', 10);
  const reader = body.getReader();
  const writer = fs.createWriteStream(targetFile);

  await streamWithProgress(length, reader, writer, progressCallback);
  writer.end();
}

async function streamWithProgress(length, reader, writer, progressCallback) {
  let bytesDone = 0;

  while (true) {
    const result = await reader.read();
    if (result.done) break;

    const chunk = result.value;
    writer.write(Buffer.from(chunk));
    if (progressCallback != null) {
      bytesDone += chunk.byteLength;
      const percent = length === 0 ? null : Math.floor(bytesDone / length * 100);
      progressCallback(bytesDone, percent);
    }
  }

  if (progressCallback != null) {
    progressCallback(length, 100);
  }
}

A FlowType annotated version is also available.
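As a rough idea of what those annotations might look like (a sketch only, not the published version), the signature would be along these lines:

export default async function download(
  sourceUrl: string,
  targetFile: string,
  progressCallback: ?(bytesDone: number, percent: ?number) => void,
  length: ?number
): Promise<void> {
  // ...same body as above
}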

Using it

Using it is simple: call it with a URL to download and a local file name to save it as, along with an optional callback that will receive download progress.

Downloader.download('https://download.damieng.com/fonts/original/EnvyCodeR-PR7.zip', 'envy-code-r.zip', 
   (bytes, percent) => console.log(`Downloaded ${bytes} (${percent})`));

Caveats

Some servers do not send the Content-Length header. You have two options if this applies to you:

  1. Don't display a percentage, just the downloaded byte count (percent will be null in the callback)
  2. Bake in the file size if it's a static URL and simply pass it as the final parameter to the download function (see the sketch below)
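For option 2, something along these lines works (the size and the updateUi callback are made up for illustration):

const knownSize = 1234567; // assumed: recorded when the file was published
Downloader.download('https://example.com/static-file.zip', 'static-file.zip',
  (bytes, percent) => updateUi(bytes, percent), // percent is only null when no size is available
  knownSize);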

Enjoy!

[)amien

Shrinking JS or CSS is premature optimization  

Rick Strahl has a post on a JavaScript minifier utility whose sole job is to shrink the size of your JavaScript, whilst making it almost impossible to read, in order to save a few kilobytes. I thought I'd take a quick look at what the gain would be and fed it the latest version (1.6) of the very popular Prototype library:

                 File (KB)   GZip (KB)
Standard             121.0        26.7
Shrunk/minified       90.5        22.0
Saving                30.7         4.7

The 30.7 KB saving looks great at first glance, but bear in mind that external JavaScript files are cached on the client between page requests and it loses some appeal. If you also consider that most browsers and clients support GZip compression, where the saving is around 4.7 KB, you might wonder if you are wasting your time.

In computer science there is a term for blindly attempting to optimize systems without adequate measurement or justification, and that term is premature optimization. As Sir Tony Hoare wrote (and Donald Knuth paraphrased):

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

And he was working on computers throughout the '60s and '70s that had far fewer resources than those of today. By all means, if your server bandwidth is an issue, delve into the stats, identify the cause and take it from there. Yahoo's YSlow plug-in for Firefox/Firebug is a great starting point, but remember to analyse the statistics in your own context.
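If you want to reproduce this kind of comparison yourself, a few lines of Node with the built-in zlib module will do it (the file names here are just placeholders):

const fs = require('fs');
const zlib = require('zlib');

for (const file of ['prototype.js', 'prototype.min.js']) {
  const raw = fs.readFileSync(file);
  const gzipped = zlib.gzipSync(raw);
  console.log(`${file}: ${(raw.length / 1024).toFixed(1)} KB raw, ` +
    `${(gzipped.length / 1024).toFixed(1)} KB gzipped`);
}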

Rick's tool had shortcomings with non-ASCII characters such as accents, symbols and non-US currency symbols, which goes to show how optimization can have other unintended and undesirable effects.

[)amien