Download files with progress in Electron via window.fetch

Working on Atom lately, I've needed to be able to download files to disk. We have ways to achieve this, but they don't show download progress, which leads to confusion and sometimes frustration on larger downloads such as updates or large packages.

There are many npm libraries out there, but they either don’t expose a progress indicator, or they bypass Chrome (thus not using proxy settings, caching and network inspector) by using Node directly.

I’m also not a fan of sprawling dependencies to achieve what can be done simply in a function or two.

Hello window.fetch

window.fetch is a replacement for XMLHttpRequest currently shipping in Chrome (and therefore Electron) as well as a WHATWG living standard. There is some documentation around, but most of it grabs the entire content as JSON, a blob, or text, which is not advisable when the files might be large. You want to not only minimize the memory impact but also display a progress indicator to your users.
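For example, the typical pattern looks something like this minimal sketch (inside an async function, with a placeholder URL), which buffers the entire response in memory before you can do anything with it:

const response = await fetch('https://example.com/big-download.zip'); // hypothetical URL
const blob = await response.blob(); // the whole file sits in memory and there are no progress events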

Thankfully the response body returned by window.fetch has a getReader() function that gives you a ReadableStreamReader. It reads in chunks (32KB on my machine) but isn't compatible with Node's streams, pipes, and data events.

Download function

With a little effort, we can wire these two things up to get a file downloader that has no extra dependencies outside of Electron, honours the Chrome cache, proxy settings and network inspector and, best of all, is incredibly easy to use:

import fs from 'fs';

export default async function download(sourceUrl, targetFile, progressCallback, length) {
  const request = new Request(sourceUrl, {
    headers: new Headers({'Content-Type': 'application/octet-stream'})
  });

  const response = await fetch(request);
  if (!response.ok) {
    throw Error(`Unable to download, server returned ${response.status} ${response.statusText}`);
  }

  const body = response.body;
  if (body == null) {
    throw Error('No response body');
  }

  const finalLength = length || parseInt(response.headers.get('Content-Length') || '0', 10);
  const reader = body.getReader();
  const writer = fs.createWriteStream(targetFile);

  await streamWithProgress(finalLength, reader, writer, progressCallback);
  writer.end();
}

async function streamWithProgress(length, reader, writer, progressCallback) {
  let bytesDone = 0;

  while (true) {
    const result = await reader.read();
    if (result.done) {
      if (progressCallback != null) {
        progressCallback(length, 100);
      }
      return;
    }
    
    const chunk = result.value;
    if (chunk == null) {
      throw Error('Empty chunk received during download');
    } else {
      writer.write(Buffer.from(chunk));
      if (progressCallback != null) {
        bytesDone += chunk.byteLength;
        const percent = length === 0 ? null : Math.floor(bytesDone / length * 100);
        progressCallback(bytesDone, percent);
      }
    }
  }
}

A FlowType annotated version is also available.

Using it

Using it is simplicity itself. Call it with a URL to download, a local file name to save it as, and an optional callback that receives download progress.

Downloader.download('https://download.damieng.com/fonts/original/EnvyCodeR-PR7.zip', 'envy-code-r.zip', (bytes, percent) => console.log(`Downloaded ${bytes} (${percent})`));

Caveats

Some servers do not send the Content-Length header. You have two options if this applies to you:

  1. Don't display a percentage; just show the downloaded byte count (the percentage is null in the callback)
  2. Bake in the file size if it's a static URL by passing it as the final parameter to the download function (see the sketch below)
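Here's a minimal sketch of option 2; the URL and the 1,048,576-byte length are made-up values for illustration:

Downloader.download(
  'https://example.com/static/archive.zip', // hypothetical static URL
  'archive.zip',
  (bytes, percent) => console.log(`Downloaded ${bytes} bytes (${percent}%)`),
  1048576 // known file size in bytes, so percent is always available
);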

Enjoy!

[)amien

Typography in bits: For a few pixels more

It's been a while since I visited the bitmap fonts of old computers (see the bottom of the post for links), but there are still some to evaluate!

There are subtle variations here as machines often used an off-the-shelf video chip and then made a few tweaks or had them slightly customized.

TRS-80 Color Computer & Dragon – custom MC6847 (1982)

TRS-80 system font

The initial model of the TRS-80 Color Computer – affectionately known as CoCo – as well as the UK's Dragon 32 & 64 computers, used the Motorola MC6847 character generator and so shared the same embedded font.

Unusual characteristics

  • No lower-case
  • Serifs on B&D
  • Over-extended ‘7’
  • Asterisk is a diamond!
  • Square ‘O’
  • Cute ‘@’
  • Thin ‘0?’
  • Tight counter on ‘4’
  • Unjoined strokes on ‘#’

Rationale

The font has some rough edges, although the softer, fuzzier look of a CRT TV almost certainly smoothed those out, as it did for many home computer fonts of the time. The awful dark-green-on-light-green colour scheme wasn't helping either.

Influences

It has similar proportions and glyphs to much of the Apple ][ font but feels like they tried to make the characters more distinguishable on low-quality TVs, hence the serifs on B & D and the differentiation between 0 and O.

Technical notes

Motorola offered custom versions of this ROM, so it would have been entirely possible to have an alternative character set.

TRS-80 Color Computer v2+ (1985)

TRS-80 v2+ system font

The follow-up v2 model of the TRS-80 Color Computer – also known as the Tandy Color Computer – used an enhanced Motorola MC6847T1 variant.

Unusual characteristics

  • Serifs on B&D, over-extended 7 as per v1
  • Ugly ‘@’
  • Very soft center bar on ‘3’
  • Tight counter on ‘4’
  • Tight top of ‘f’

Rationale

Generally, this font is much improved over v1. It fixes oddities with the asterisk, O, 0, 3, 4, S, ?, and #, as well as straightening the slashes. It reduces the boldness of the comma, colon, semi-colon, and apostrophe. Unfortunately, the @ and 3 are worse than in the previous version.

Influences

It's based on the previous model; however, the lower-case does have some resemblance to Apple and MSX. This font may be a custom version, as the spec sheet for the T1 variant shows bold versions of the ,;:.' glyphs, shorter descenders on y and g, more curvature on p and q, more pronounced curves on 369, a tighter t and a semi-broken #.

Technical notes

You can identify CoCo 2 models featuring the lower-case font as they print Tandy on the screen rather than TRS-80.

Tatung Einstein (1984)

Tatung Einstein system font

The Tatung Einstein TC-01 was a British Z80-based machine that never really took off with the public. It enjoyed some success in game development as a compile-and-debug machine for other, more popular, Z80 systems. This use was likely due to its CP/M-compatible OS and disk system (it came with the same oddball 3″ disks used on the Sinclair ZX Spectrum +3 and Amstrad CPC/PCW range).

Unusual characteristics

  • Odd missing pixels on ‘9S’
  • Little flourishes on ‘aq’
  • Massively tall ‘*’
  • Chunky joins on ‘Kv’
  • High counters and bowls on ‘gpqy’

Rationale

Given the 40-column mode, the generous spacing in 32-column mode makes sense, and the font isn't too bad. Many of the unusual negative characteristics would be lost on a CRT.

Influences

It feels like the Sinclair Spectrum font with some horizontal width sacrifices.

Commodore 128 (1985)

Commodore 128 80-column font

While the follow-up to the Commodore 64 used the exact same font at boot – it had the same VIC-II video chip – switching it into 80-column mode reveals a new font with double-height pixels powered by the MOS 8563 VDC.

Unusual characteristics

  • ‘£’ aligned left not right, thin strokes
  • ‘Q’ fails to take advantage of descender
  • Cluttered redundant stroke on ‘7’
  • Rounded ‘<>’

Rationale

A nice font that probably looked great on any monitor at the time, although TVs would have struggled to display detail with such fine verticals on some letters.

Influences

Technical notes

Switching to 80-column mode can be achieved using the keyboard or the GRAPHIC 5 command.

Texas Instruments TI-99/4A (TMS9918) (1981)

TI-99/4A system font

The Texas Instruments TI-99/4A was built around TI's own TMS9918 video display processor; its system font is shown above.

Unusual characteristics

  • Lower case is small caps
  • Serifs on ‘BD’
  • Square ‘O’
  • Poor slope on ‘N’
  • Bar very tight on ‘G’

Rationale

The lower-case small-caps feel quite awful and appear to be an attempt to avoid having to deal with descenders. Other fonts dealt with this by bringing the bowl up a line, which looks a little off, although some machines like the Sinclair QL just left space for the descenders.

Influences


Oric Atmos (1983)

Oric Atmos system font

The Oric Atmos was a British 6502-based home computer from Oric Products International and the follow-up to the Oric-1.

Unusual characteristics

  • Bold ‘{}’
  • Vertical line on ‘^’
  • Awkward horizontal stroke on ‘k’
  • Square ‘mw’

Rationale

Not a terrible choice, although I suspect cheaper TVs would struggle with the non-bold strokes and tight spacing. The high-contrast black-and-white colour scheme helps mitigate this.

Influences

It's almost a complete copy of the Apple ][ system font with only a few tweaks: the over-extension of 6 and 9 is removed and [ and ] are un-bolded, although they weirdly forgot { and }. The additions of ^ and £ don't quite fit right.

[)amien

Random tips for PowerShell, Bash & AWS

Now that I'm freelancing again, I find myself solving unusual issues, many of which have no solutions online.

Given these no doubt plague other developers, let’s share!

Pass quoted args from BAT/CMD files to PowerShell

Grabbing args from a batch/command file is easy – use %* – but have you ever tried passing them to PowerShell like this:

powershell "Something" "%*"

Unfortunately, if one of your arguments has quotes around it (a filename containing a space, perhaps), it becomes two separate arguments; for example, "My File.txt" becomes My and File.txt.

PowerShell will only preserve them if you use the -File option (to run a .PS1 file), and that requires a relaxed policy via Set-ExecutionPolicy, so it's a no-go for many people.

Given you can't make PowerShell do the right thing with the args, the trick here is to not pass them as args at all!

SET MYPSARGS=%*
...
powershell -ArgumentList "$env:MYPSARGS"
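On the PowerShell side you can then read the stashed value from the environment instead of relying on argument parsing. Here's a minimal sketch; the script name and what you do with the value are hypothetical, not from the batch file above:

# MyTask.ps1 (hypothetical) - read the raw argument string the batch file stashed
$rawArgs = $env:MYPSARGS

# Quoted values such as "My File.txt" arrive intact because they never went
# through powershell.exe's command-line splitting
Write-Output "Received: $rawArgs"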

Get Bash script path as Windows path

While Cygwin ships with cygpath to convert /c/something to c:\Something etc., MSYS Bash shells do not have it. You can, however, get the same result another way:

#!/bin/sh
pushd "$(dirname "$0")" > /dev/null
if command -v "cygpath" > /dev/null; then
  WINPWD="$(cygpath . -a -w)"
else
  WINPWD="$(pwd -W)"
fi
popd > /dev/null
echo "$WINPWD"

This works by switching the working directory to the one the script is in ("$(dirname "$0")") and then capturing the output of the print-working-directory command, using the -W option to get it in Windows format (or cygpath where available). It then pops the working directory to make sure it goes back to where it was.

Note that the result still uses forward slashes as the directory separator. Many tools and apps are okay with that, but some older ones are not.

JSON encoding in API Gateway mapping templates

If you use Amazon’s AWS Lambda you’ll also find yourself touching API Gateway. While most of it is great, the mapping templates are deficient in that they do not encode output by default despite specifying the MIME types.

All of Amazon’s example templates are exploitable via JSON injection. Just put a double-quote in a field and start writing any JSON payload.
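As a rough illustration with a made-up payload: if a template emits "safeString": "$i.unsafeString" without any escaping and an attacker submits an unsafeString value of ", "admin": true, "x": ", the rendered output becomes:

{ "safeString": "", "admin": true, "x": "" }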

Amazon must fix this – encode by default, as other templating systems such as ASP.NET Razor have done. Until then, some recommend the Amazon-provided $util.escapeJavaScript(); however, while it encodes " as \", it also produces illegal JSON by encoding ' as \'.

The mapping language is Apache Velocity Template Language (VTL) and, while not extensible, the fine print reveals that it internally uses Java strings and does not sandbox them, which lets us use Java's replaceAll functionality:

#set($i = $input.path('$'))
{
   "safeString": "$i.unsafeString.replaceAll("\""", "\\""")
}
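Velocity escapes a double quote inside a double-quoted string literal by doubling it, so the two odd-looking arguments above simply mean replace " with \". With the same made-up payload as before, the quotes are now escaped and the output remains valid JSON:

{ "safeString": "\", \"admin\": true, \"x\": \"" }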

Show active known IPs on the local network

I’m surprised more people don’t know how useful arp -a is, especially if you pipe it into ping…

Bash

arp -a | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' | xargs -L1 ping -c 1 -t 1 | sed -n -e 's/^.*bytes from //p'

PowerShell

(arp -a) -match "dynamic" | Foreach { ping -n 1 -w 1000 ($_ -split "\s+")[1] } | where { $_ -match "Reply from " } | % { $_.replace("Reply from ","") }

Wrapping up

I just want to mention that if you are doing anything on a command line, be it Bash, OS X, PowerShell or Command/Batch, then SS64 is a site worth visiting, as they have great docs on many of these things!

[)amien

Monitoring URLs for free with Google Cloud Monitor

As somebody who runs a few sites, I like to keep an eye on them and make sure they’re up and responding correctly.

My go-to for years has been Pingdom, but this year they gutted their free service so that you can now only monitor every 5 minutes.

The free Pingdom service also has limited alerting options and can only monitor a single endpoint, so I went looking for something better, as $15 a month to monitor a couple of personal low-volume sites is not money well spent.

Google Cloud

I've been playing with the Google Cloud Platform offerings for a while and, like many other platforms, it includes a monitoring component, unsurprisingly called Google Cloud Monitoring.

It's currently free while in beta and is based on StackDriver, which Google acquired in 2014. I can imagine more integrations and services will continue to come through, as StackDriver is a complete product that also monitors AWS.

Uptime checks

Screenshot showing uptime check options

You can create HTTP/HTTPS/TCP/UDP checks and, while they're designed to monitor the services you're running on Google Cloud, they will happily take arbitrary URLs for services running elsewhere.

Checks can be run every 1, 5, 10 or 15 minutes, use custom ports, look for specific strings in the response, and set custom headers and authentication credentials.

Each URL is monitored and reported on from six geographical locations: three in the USA (east, central and west) plus Europe, Asia and Latin America. For example:

damieng.com

  • Virginia responded with 200 (OK) in 357 ms
  • Oregon responded with 200 (OK) in 377 ms
  • Iowa responded with 200 (OK) in 330 ms
  • Belgium responded with 200 (OK) in 673 ms
  • Singapore responded with 200 (OK) in 899 ms
  • Sao Paulo responded with 200 (OK) in 828 ms

Alerting policies

Here's where Google's offering surprised me. It has alerting options for SMS and email, obviously, but also HipChat, Slack, Campfire, and PagerDuty. You can combine them, mixing and matching with different uptime checks, etc.

Screenshot of alerting policy options

Incidents

As with Pingdom, if a monitored endpoint goes down, an incident is opened. You can add details (comments) to the incident and, again like Pingdom, it is closed once the endpoint starts responding again.

Graph & dashboard

The cloud monitoring product has a configurable dashboard geared around monitoring Google Cloud-specific services, but there is an uptime monitoring component that still provides some value.

You can download the JSON for a graph, and there is an API as well as iframe sharing functionality.

Final thoughts

I'm very impressed with this tool given how few limitations there are for a free product. I'm using it for my sites, but bear in mind it has no SLA right now!

Any other recommendations for free URL monitoring?

[)amien