Typography in bits: For a few pixels more  

It’s been a while since I visited the bitmap fonts of old computers (see the bottom of the post for links), but there are still some to look at!

There are a lot of subtle variations here, as machines often used an off-the-shelf video chip and then tweaked it or had it slightly customized.

TRS-80 Color Computer & Dragon – custom MC6847 (1982)


  • 5×7 pixels
  • Lowercase ASCII
  • 256×192 (32×16 text)
  • Download in TrueType

TRS-80 system font

The initial model of the TRS-80 Color Computer – affectionately known as the CoCo – as well as the UK’s Dragon 32 & 64 computers used the Motorola MC6847 character generator and so shared the same embedded font.

Unusual characteristics

  • No lowercase
  • Serifs on B&D
  • Over-extended ‘7’
  • Asterisk is a diamond!
  • Square ‘O’
  • Cute ‘@’
  • Thin ‘0?’
  • Tight counter on ‘4’
  • Unjoined strokes on ‘#’


The font has some rough edges, although the softer, fuzzier look of a CRT TV almost certainly smoothed those out, as with many home computer fonts of the time. The awful dark-green-on-light-green color scheme wasn’t helping, though.


It has similar proportions and characters to much of the Apple ][ font, but it feels like they tried to make the characters more distinguishable on low-quality TVs – hence the serifs on B & D and the differentiation between 0 and O.

Technical notes

Motorola actually offered custom versions of this ROM so it would have been entirely possible to have an alternative character set.

TRS-80 Color Computer v2+ (1985)


  • 5×7 pixels
  • 256×192 (32×16 text)
  • Download in TrueType

TRS-80 v2+ system font

The follow-up v2 model of the TRS-80 Color Computer – by then known as the Tandy Color Computer – used an enhanced Motorola MC6847T1 variant.

Unusual characteristics

  • Serifs on B&D, over-extended 7 as per v1
  • Ugly ‘@’
  • Very soft center bar on ‘3’
  • Tight counter on ‘4’
  • Tight top of ‘f’


In general a much-improved font over the v1, fixing the oddities with the asterisk, O, 0, 3, 4, S, ? and #, as well as making the slashes straighter and reducing the boldness of the comma, colon, semi-colon and apostrophe – although the @ and 3 are worse than in the previous version.


It’s based on the previous model’s font, although the lower-case has some resemblance to Apple and MSX. This may in fact be a custom version, as the spec sheet for the T1 variant shows bold versions of the ,;:.’ glyphs, shorter descenders on y and g, more curvature on p and q, stronger curves on 3, 6 and 9, a tighter t and a semi-broken #.

Technical notes

You can identify CoCo 2 models that have the lower-case font as they say Tandy on the screen rather than TRS-80.

Tatung Einstein (1984)


  • 5×6 pixels
  • 256×192 (32×24, 40×24 text)
  • Download in TrueType

Tatung Einstein system font

The Tatung Einstein TC-01 was a British Z80-based machine that never really took off with the public, but it had some success in the game development world as a machine to compile and debug code for other, more popular Z80 systems, thanks to its CP/M-compatible OS and disk system (it came with the same oddball 3″ disks used on the Sinclair ZX Spectrum +3 and the Amstrad CPC/PCW range).

Unusual characteristics

  • Odd missing pixels on ‘9S’
  • Little flourishes on ‘aq’
  • Massively tall ‘*’
  • Chunky joins on ‘Kv’
  • High counters and bowls on ‘gpqy’


Given the 40-column mode, the generous spacing in 32-column mode makes sense, and the font isn’t too bad. Many of the negative unusual characteristics would be lost on a CRT.


It feels like the Sinclair Spectrum font with some horizontal width sacrifices.

Commodore 128 (1985)


  • 7×7 pixels
  • 640×200 (80×25 text)
  • Download in TrueType

Commodore 128 80-column font

While the follow-up to the Commodore 64 used the exact same font at boot – it had the same VIC-II video chip – switching into 80-column mode reveals a new font with double-height pixels, powered by the MOS 8563 VDC.

Unusual characteristics

  • ‘£’ aligned left not right, thin strokes
  • ‘Q’ fails to take advantage of descender
  • Cluttered redundant stroke on ‘7’
  • Rounded ‘<>’


Quite a nice font with very little weirdness that probably looked good on any monitor at the time, although TVs probably struggled to display such fine verticals on some letters.



Switching to 80-column mode can be achieved using the keyboard or the GRAPHIC 5 command.

Texas Instruments TI-99/4A (TMS9918) (1981)


  • 5×7 pixels
  • 256×192 (32×24 text)
  • Download in TrueType

TI-99/4A system font

The Texas Instruments TI-99/4A was built around TI’s own TMS9918 video display processor, a chip later found in MSX machines and the ColecoVision.

Unusual characteristics

  • Lower case is small caps
  • Serifs on ‘BD’
  • Square ‘O’
  • Poor slope on ‘N’
  • Bar very tight on ‘G’


The lower-case small caps feel quite awful and appear to be an attempt to avoid having to deal with descenders. Other fonts brought the bowl up a line so that descenders look a little off instead, although some machines like the Sinclair QL just left space for them.



Oric Atmos (1983)


  • 5×7 pixels
  • 240×200 (40×28 text)
  • Download in TrueType

Oric Atmos system font

The Oric Atmos was a British 6502-based home computer from Oric Products International, the follow-up to the original Oric-1.

Unusual characteristics

  • Bold ‘{}’
  • Vertical line on ‘^’
  • Awkward horizontal stroke on ‘k’
  • Square ‘mw’


Not a bad choice, although I suspect cheaper TVs would struggle with the non-bold strokes and tight spacing, which is probably why they went with high-contrast black-and-white.


An almost complete copy of the Apple ][ system font with only a few tweaks: removing the over-extension of 6 and 9 and unbolding [ and ] – though weirdly they forgot { and }. The added ^ and £ don’t quite fit right.

Also check out articles on 8-bit, 16-bit system fonts and English micros.


Random tips for PowerShell, Bash & AWS  

Now that I’m freelancing again I find myself solving a variety of unusual issues, many of which have no solutions I could find online.

Given these no doubt plague other developers, let’s share!

Pass quoted args from BAT/CMD files to PowerShell

Grabbing args from a batch/command file is easy – just use %* – but have you ever tried passing them on to PowerShell like this:

powershell "Something" "%*"

Unfortunately, if one of your arguments has quotes around it (a filename with a space, perhaps) then it becomes two separate arguments – e.g. "My File.txt" becomes My and File.txt.

PowerShell will only preserve them if you use the -File option (to run a .PS1 file), but that requires a relaxed policy via Set-ExecutionPolicy and so is a no-go for many people.

Given you can’t make PowerShell do the right thing with the args, the trick is to not pass them as args at all – stash them in an environment variable that the batch file sets and PowerShell reads:

SET MYPSARGS=%*
powershell -Command "Something $env:MYPSARGS"
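
A minimal end-to-end sketch (the variable name MYPSARGS is arbitrary, and Write-Host stands in for your real script):

@ECHO OFF
SETLOCAL
REM Stash all arguments (quotes included) where PowerShell can read them
SET MYPSARGS=%*
powershell -NoProfile -Command "Write-Host $env:MYPSARGS"
ENDLOCAL

Calling this with "My File.txt" other prints "My File.txt" other – quotes intact – and your PowerShell code can then split the string however it needs.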

Get Bash script path as Windows path

While Cygwin ships with cygpath to convert /c/something to C:\Something etc., MSYS Bash shells do not have it. However, you can get the same result another way there:

pushd "$(dirname "$0")" > /dev/null
if command -v "cygpath" > /dev/null; then
  WINPWD=""$(cygpath . -a -w)""
  WINPWD=""$(pwd -W)""
popd > /dev/null
echo $WINPWD

This works by switching the working directory to the one the script is in ("$(dirname "$0")") and then capturing the print-working-directory output using the -W option, which reports it in Windows format. It then pops the working directory to make sure it goes back to where it was.

Note that the result still uses forward slashes as the directory separator – a lot of software is okay with that, but older apps and tools are not.
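
If a downstream tool insists on backslashes you can convert them with a bash parameter expansion – a small sketch reusing the WINPWD variable from the script above:

# Replace every / with \ (bash pattern substitution)
WINPWD="${WINPWD//\//\\}"
echo "$WINPWD"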

JSON encoding in API Gateway mapping templates

Using Amazon’s AWS Lambda you’ll also find yourself touching API Gateway, and while most of it is great, the mapping templates are quite deficient in that they do not encode output by default despite specifying the MIME types.

All of Amazon’s example templates are exploitable via JSON injection. Just put a double-quote in a field and start writing your own JSON payload.
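
As a sketch of the problem (field names hypothetical), a pass-through template line such as:

   "comment": "$input.path('$').comment"

given a request where comment is set to:

   hi", "admin": true, "x": "

renders as:

   "comment": "hi", "admin": true, "x": ""

letting the caller smuggle an admin key into your output.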

Amazon must fix this – encode by default, as other templating systems such as ASP.NET Razor have done. Until then, some recommend the Amazon-provided $util.escapeJavaScript(); however, while it encodes " as \" it also produces illegal JSON by encoding ' as \'.

The mapping language is Apache Velocity Template Language (VTL) and, while not extendable, the fine print reveals that it internally uses Java strings and does not sandbox us. This lets us utilize Java’s replaceAll functionality:

#set($i = $input.path('$'))
{
   "safeString": "$i.unsafeString.replaceAll("\""", "\\""")"
}

Show active known IPs on local network

I’m surprised more people don’t know how useful arp -a is, especially if you pipe it into ping…


Bash:

arp -a | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' | xargs -L1 ping -c 1 -t 1 | sed -n -e 's/^.*bytes from //p'


PowerShell:

(arp -a) -match "dynamic" | Foreach { ping -n 1 -w 1000 ($_ -split "\s+")[1] } | where { $_ -match "Reply from " } | % { $_.replace("Reply from ","") }

Wrapping up

I just want to mention that if you are doing anything on a command line – be it Bash, OS X, PowerShell or Command/Batch – then SS64 is a site worth visiting as they have great docs on many of these things!


Monitoring URLs for free with Google Cloud Monitor  

As somebody who runs a few sites I like to keep an eye on them and make sure they’re up and responding correctly.

My go-to for years has been Pingdom but this year they gutted their free service so that you can now only monitor every 5 minutes.

The free service with Pingdom wasn’t great to start with – limited alerting options and only a single monitored endpoint – so I went searching for something better, as $15 a month to monitor a couple of personal low-volume sites is not money well spent.

Google Cloud

I’ve played with the Google Cloud Platform offerings for a while and, like many other cloud platforms, theirs includes a monitoring component, unsurprisingly called Google Cloud Monitoring.

Right now it’s in beta and free, and it’s based on StackDriver, which was acquired by Google in 2014. I can imagine more integration and services will continue to come through, as they have a complete product that also monitors AWS.

Uptime checks

Screenshot showing uptime check options

You can create HTTP/HTTPS/TCP/UDP checks, and while it was designed to monitor the services you’re running on Google Cloud, it will happily take arbitrary URLs to services running elsewhere.

Checks can be run every 1, 5, 10 or 15 minutes, use custom ports and look for specific strings in the response, as well as set custom headers and specify authentication credentials.

Each URL is monitored, and its performance reported, from six geographical locations: three spread across east, central and west USA, plus one in Europe, one in Asia and one in Latin America. For example:

  • Virginia responded with 200 (OK) in 357 ms
  • Oregon responded with 200 (OK) in 377 ms
  • Iowa responded with 200 (OK) in 330 ms
  • Belgium responded with 200 (OK) in 673 ms
  • Singapore responded with 200 (OK) in 899 ms
  • Sao Paulo responded with 200 (OK) in 828 ms

Alerting policies

Here’s where Google’s offering really surprised me, with alerting options not just for SMS and email but also for HipChat, Slack, Campfire and PagerDuty; you can specify a number of them together and mix and match them with different uptime checks.

Screenshot of alerting policy options


Like Pingdom, if the endpoint being monitored goes down an incident is opened that you can write details (comments) into, and also like Pingdom, the incident is closed once the endpoint starts responding again.

Graph & dashboard

The cloud monitoring product has a configurable dashboard that, like the rest of the product, is really geared around monitoring Google Cloud-specific services, but there is an uptime monitoring component that can still provide some value.

You can download the JSON for a graph, and there is an API as well as iframe-embeddable sharing functionality.

Final thoughts

I’m very impressed with this tool given the lack of limitations for a free product and will be using it for a bunch of my sites for now – bearing in mind, however, that it has no SLA right now!

Any other recommendations for free URL monitoring?


Notes on Edward Tufte’s Presenting Data and Information  

Photograph of Envisioning Information

Here are my notes from today’s event by renowned statistician Edward Tufte – author of The Visual Display of Quantitative Information and Envisioning Information – primarily for my own reference but perhaps of interest to others.

A dramatic start

No announcement, no preamble. The lights went out and a visually striking video showing a representation of music started. Conversations were immediately hushed and devices put away. An effective technique to get attention and signal an absolute start.

Charts and tables

Sorting: Find a sort for your data that makes sense. Treat it as another axis and don’t waste it with the alphabet.

Sparse columns: Remove sparsely populated columns from tables. Special events should be specially annotated.

Linking lines: Always annotate them to describe the interaction. Prefer verbs over nouns, as nouns merely form a taxonomy.

Information does not fit in a tree. The web is successful because Tim Berners-Lee understood this and made links the interconnections between content. “Vague, but exciting”


Content is not clean. Data that shows behavior in a perfect way has likely been manipulated.

Human beings over-detect clusters and conspiracies. They find links between unrelated events especially in sequences (serial correlation). Sports commentators given any series of scores will develop a false narrative to explain it. They’ll find a reason for 7 wins in a row despite random data producing such sequences.

Self-monitoring is a farce because people can’t keep their own score. Once something is measured it becomes a target and will be subsequently gamed and fudged as needed.

You can make many models fit any data you are given. A model may work well for past and current data, but how long it keeps working is highly variable. This effect is referred to as shrinkage – no model lasts forever.

Big data is not a substitute for traditional data collection and analysis. Google famously believed otherwise when they created Google Flu Trends, which tried to spot the spread of flu based on search terms. It has been seriously criticized by Forbes and the New York Times.


Do not jump to conflict or character assassination. Your motives are likely no better (or worse).

How many nice comments does it take to wipe out a bad one? Ten… a hundred?

There is evil in the world but it probably does not exist in your day-to-day life.

A deck of slides

A deck is inefficient. It is easy for the presenter but hard for the audience, who are waiting for something they can use – “a diamond in the swamp”. Slow reveals further reduce the information density, and people will check out when it gets low.

Prefer spatially adjacent data (a document) over temporally stacked (slides). The often-cited limit of 7±2 items was for temporal retention, so limiting a page to this number of items is actually the opposite of what that research was telling us. We can cope with much more data if it is all on the page together.

Meetings and presentations

Do not be afraid of paper.

Prepare a document in advance but do not send it; instead spend the first 30 minutes of the meeting reading it in silence (known as a study hall). People can read faster than you can talk, can go back and forth as needed and can skip what they already know – and latecomers are less disruptive. Amazon famously uses this with its 6-page narrative memo system.

Never go meta in your presentation – stick to the content. Respect your audience and do not presume to know them, or you may find yourself pandering or having low expectations. Instead, present the data to the best of your ability. Many complicated things are explained to millions of people all the time. You can’t teach if you have low expectations. Negativity and positivity are self-fulfilling.

Does your audience understand and trust you? Credibility is eroded not just by lying but by cherry-picking. Evidence of cherry-picking includes data too good to be true and hiding the source of the data behind excuses such as copyright, proprietary information or other secrets. Why would a conclusion be open when the data needs to be secret? It’s likely a misrepresentation of the data for their own ends.

Note a few words when somebody asks you a question to make sure your answer stays on topic. If you don’t know the answer be honest but suggest where you would start looking for the answer. Never heckle or waste time correcting minutiae.

Doctor’s trip

A trip to the doctor’s office is a presentation. Write down your list before you go in and make them listen, because they normally interrupt after 22 seconds and consider each item individually. Left to that pattern you’ll give up before you reach the end of your list and they may not see the connected pattern of the whole.


Every document needs an abstract. It should spell out as simply as possible:

  1. What the problem is
  2. Who cares
  3. What the solution is

If you can’t write this then you don’t have a document and you’re not saying anything.


Real scientists use LaTeX. There are thousands of templates, including official ones for well-known journals. Online tools like Overleaf can reduce the barrier to entry. LaTeX code looks like this:

\title{My presentation matters}
Sample of LaTeX
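
For context, that \title line would sit inside a minimal document along these lines (the document class, author and body are placeholders):

\documentclass{article}
\title{My presentation matters}
\author{A. Speaker} % placeholder author
\begin{document}
\maketitle % renders the title block
Body text goes here.
\end{document}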

R is another alternative, but it’s considered hard even by people who use LaTeX.


We are taught to read to extract facts to pass exams at school. We need to practice reading for enjoyment, reading to spot new information, to extract what we want, to form new opinions and ideas, to loot & hack.

Immediately skip words you don’t understand: there won’t be a test – you’re not at school.


Design does not belong to ‘other people’. Support thinking with analytical design and do whatever it takes to explain the data.

Why do bird books use illustrations? Because the authors want to help you spot the birds and using art they exaggerate the differences as well as produce a generic version of the bird.

Nature magazine has some of the best-designed visualizations around. Openness, pride and space constraints all help (DNA only got 1.5 pages). The New York Times also often produces interesting visualizations of data.

User interface

Use the ideas proven by large successful sites on the web. Do not be swayed by arguments that your users won’t understand. Millions of users already do.

Touch is the next generation of user interface. It allows the chrome (interface junk) to be jettisoned: no scroll bars, no buttons, no cursor, no zoom – pure information experiences. And this came not from academia, finance or medicine but from the consumer space.

“The future of interface design… is information design.” – Edward Tufte, Seattle, August 4 2015

The original UI metaphors at Xerox PARC on the Alto were built around a single document. Instead we have application-owned silos of data. The elegance was lost because companies want to control the content you create with their tools; they isolate your content so they can profit.

Hierarchies are still used for web design because they mimic the organization paying the bill. Organizations see themselves this way and do not focus on how and what their customers need. Famous examples include the Treasury Department burying tax forms seven levels deep despite them being a top user request, and the XKCD strip about university web sites. People on the inside have a skewed perspective of what the outside is.

The density of user interfaces is increasing which allows for richer visualizations especially when combined with animation or video. It is hard to get right.