Revisiting my BBC Micro - display, speech & more

It’s been a while since I blogged about Revitalizing my BBC Micro. In that time I’ve performed a few upgrades you might find interesting…

Display requirements

As useful as the tiny Amstrad CRT was, I wanted something bigger, brighter and sharper. LCDs are terrible for retro systems: blurry scaling struggles to draw images designed to take advantage of CRTs. Emulator authors spend significant effort mimicking CRT effects for a true retro feel, but the best option is just to use a CRT.

Most machines in the ’80s and early ’90s were designed for TV compatibility and so operated at a 15kHz horizontal scan rate. The VGA CRT monitors people struggle to give away aren’t going to work as they start at 31.5kHz. British machines like mine also use the PAL (UK) video system rather than NTSC (USA) - ideally a display would handle both.

If you don’t need any VGA frequencies then a Sony PVM is the way to go. I also own an Amiga 1200 which is capable of some VGA modes, so it would be nice to have one CRT for everything. Multi-sync monitors can do both but were rare back then, are even rarer now, and the shipping cost on CRTs can be prohibitive.

Commodore 1942 CRT

Figuring out resistor levels and sync signals

Luckily for me a Commodore 1942 “bi-sync” CRT turned up on Craigslist just 15 minutes from my house for $50. It was designed for the later Amiga models so it handles both the 15kHz that most of my machines output and some VGA resolutions too. Perfect.

Connecting it to the BBC was a little trickier than I anticipated. That Amiga design means it expects the horizontal sync (HSYNC) and vertical sync (VSYNC) signals on two separate pins to match the Amiga’s video port, rather than the composite sync (CSYNC) that all my other RGB-capable machines offer.

I briefly experimented with connecting CSYNC to HSYNC, to VSYNC, and indeed to both, but failed to get a stable display. Digging into the Motorola 6845 CRT controller chip that powers the Beeb reveals VSYNC and HSYNC on pins 40 and 39 respectively. A quick snip of the RGB port’s unused 5V and sync pins let me repurpose them to carry HSYNC and VSYNC direct from the 6845. A stable but over-saturated picture was a welcome next step that didn’t require me to build a sync-splitting circuit (I did that later to connect my Spectrum +3).

Running Citadel on the Commodore 1942

The over-saturation is because the BBC Micro outputs either 0V or 5V - off or on - for each color. The Amiga monitor is analogue and accepts any level between 0V and 0.7V. I read guides on calculating the voltage drop but the picture still looked saturated, so I kept increasing the resistor values until I found ones that looked right.
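As a rough sketch of that voltage-drop calculation, assuming the monitor presents the standard 75 ohm analogue video termination (an assumption on my part - and as noted, the real hardware wanted larger values than the maths suggests):

```python
# Series resistor needed to drop a 5V TTL RGB signal to ~0.7V across an
# assumed 75 ohm terminated analogue monitor input (values are illustrative).
TERMINATION = 75.0  # ohms - assumed monitor input impedance
V_IN = 5.0          # volts - BBC Micro RGB "on" level
V_TARGET = 0.7      # volts - analogue video full-scale level

def series_resistor(v_in, v_target, termination):
    # Voltage divider: v_target = v_in * termination / (r + termination)
    return termination * (v_in - v_target) / v_target

print(round(series_resistor(V_IN, V_TARGET, TERMINATION)))  # → 461
```

461 ohms isn’t a standard value, so 470 ohm would be the usual starting point before tweaking by eye.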

The final result definitely made me smile. It looked better than the Microvitec CUB monitors our school had back in the day without losing the CRT appeal. Success!

Speech synthesis

Hearing Superior Software’s SPEECH package blurt out any phrase we cared to throw at it was a jaw-dropping moment at school. I’ve always wondered what the official Acorn speech system was like, especially as every time I open the case the empty sockets IC98 and IC99 call out for the Texas Instruments TMS5220 speech processor and its associated TMS6100 voice synthesis memory.

The TMS5220 was a successor to the chip in the Speak & Spell, Bally/Midway pinball machines and some arcade games, so it is quite easy to come by. The TMS6100 was available in many variants and the BBC commissioned some of their own, including one sampled from the voice of BBC news anchor Kenneth Kendall. That chip is rare now, and because the TMS6100 is not a regular ROM you can’t just burn a copy. Thankfully Simon Inns created an emulator which runs on an ATMega32U2 to provide a drop-in replacement!

I obtained a TMS5220 and pre-built TMS6100 emulator board from Mark Haysman at RetroClinic who I can thoroughly recommend! (My SMT soldering skills are not up to this)

After inserting the two chips and powering up, nothing looks different. This command sequence, however, provides a good test mechanism:

TMS5220 chip and TMS6100 emulator board

REPEAT : SOUND -1,GET,0,0 : UNTIL 0

Pressing any key on the keyboard will cause the machine to say the letter aloud, although it has some odd ideas about what the symbols on the keyboard are.

I’ll be experimenting with this more as I dig through the capabilities in the manual, as it isn’t as easy to use as Superior Software’s Speech!, which lets you type things like:

*SPEECH
*SAY Hello there.
*SAY I've got a bad feeling about this.

ROM experiments

My school had a single copy of the Advanced User Guide so I felt privileged when the teacher would let me borrow it although on reflection I doubt anyone else wanted to. Page 395 cryptically teases:

Up to 16 paged ROMS are therefore catered for, 4 of which are on the main circuit board.

So the OS supports 16 ROMs but there are only physical sockets for 4 (IC52, IC88, IC100 and IC101). Typically BASIC and the disc filing system (DFS or ADFS) take two of them leaving just two usable ROM sockets for expansion.

The schematics reveal that IC76, the ROM Select Latch, is a 74LS163 with 4 output pins, giving 16 possible combinations - one for each ROM. So both the OS and the circuitry can support what we need, if only we could physically get the ROMs wired in.

The Beeb supports either 8K (2764) or 16K (27128) ROMs and EPROMs. Later 64K (27512) chips became available which are almost pin-compatible with the 27128 except for the following:

A collection of ROMs and an EPROM

27512   Pin   27128
A15     1     Vpp
A14     27    /PGM
/OE     22    /OE, Vpp

The /PGM and Vpp lines are for writing - an EPROM programmer will care about these but our Beeb won’t.

The A14 and A15 lines are the address lines for accessing the higher memory. With both low the chip acts just like a regular 16K (27128) chip. With A14 high it serves up the second 16K bank, with A15 high the third, and with both A14 and A15 high the final 16K.

So what we can do here is combine four 16K ROM images into a single 64K file and flash it to our 27512, which is just what I did with my Signstek TL866A universal USB programmer.
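Combining the images is just concatenation in bank order. A minimal Python sketch (with dummy data standing in for real ROM files, which you’d read with `open(path, "rb").read()`):

```python
# Concatenate four 16K ROM images into one 64K image for a 27512 EPROM.
# The order determines which A14/A15 combination selects each ROM.
ROM_SIZE = 16 * 1024

def combine_roms(images):
    if len(images) != 4:
        raise ValueError("need exactly four 16K images")
    for i, data in enumerate(images):
        if len(data) != ROM_SIZE:
            raise ValueError(f"image {i} is {len(data)} bytes, expected {ROM_SIZE}")
    return b"".join(images)

# Dummy banks stand in for real 16K ROM images
combined = combine_roms([bytes([n]) * ROM_SIZE for n in range(4)])
print(len(combined))  # → 65536
```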

Now, by connecting A14 and A15 to the IC76 address line C and D outputs, we have effectively given whatever socket we connect this to the ability to appear as four ROMs (this works only because a single ROM can be paged in at a time).

The final icing on the cake is that the Beeb sports a push-out section left of the keyboard (affectionately known as the “ash tray”) where a zero insertion force (ZIF) socket can be mounted, allowing a ROM to be dropped in without cracking open the case. (Our school definitely didn’t want us opening the machines, and yet only one machine had this upgrade installed.)

Now I just need to figure out how to mount this ZIF socket in the ash tray hole - there aren’t really any mounts. I suspect I’m going to need to make a PCB of some sort and put legs on it.

Building your own

Parts list

  • 28-pin ZIF socket
  • 28-pin DIP socket 0.6” wide
  • length of 28-way ribbon cable
  • 2.54mm header pins (you just need two)
  • 2x female-to-female jumper wires

Creating the cable

  1. Wire all pins from ZIF to DIP except for 1 & 27
  2. Solder two header pins to 11 & 12 on IC76
  3. Jumper ZIF pin 1 to 11 on IC76
  4. Jumper ZIF pin 27 to 12 on IC76

Now insert a 27512 ROM flashed with four BBC ROMs of your choice, power up and type *HELP or *ROMS to see the images ready.

Also check out J.G. Harston’s alternatives for wiring up 64K ROMs or 32K SRAM chips.

Second processor via a Pi Zero

The Beeb has a bunch of expansion ports hidden underneath the machine - the most unusual being the Tube expansion bus, which allows for a second processor by way of FIFO buffers that facilitate message-passing IPC for console, errors, data and system calls.

Acorn produced a number of expansions for the Tube including:

  • 6502 second processor allowing well-behaved unmodified programs to run faster
  • Z80 for CP/M
  • 80286 for DOS or GEM

Raspberry Pi Zero with level shifter

These expansions are hard to come by as they don’t just feature the CPU but also the necessary isolation logic, memory and circuitry. David Banks developed PiTubeDirect to allow a Raspberry Pi to act as a second processor plugged into the Tube port by way of a 5V to 3.3V level shifter - I got mine from Kjell Sundby.

The Raspberry Pi 3 can emulate these old processors at crazy speeds - 274MHz for the 6502, 112MHz for the Z80, 63MHz for the 80286 and even a 59MHz ARM2 (Acorn used the Beeb to work on ARM prototypes).

What piqued my interest, though, was using the Raspberry Pi Zero. It’s small enough to fit under the BBC Micro and remain plugged into the Tube port out of sight. Latency was a problem on the Zero given its slower ARM processor, so they ported the CPU emulation core… to the GPU!

The 6502 emulation is reliable and enabled me to run the 6502 Second Processor version of Elite. I definitely need to try to get GEM running on it just for fun, although it’s a little trickier to find suitable disk images for the Z80 and 80286 co-processors.

[)amien

WordPress to Jekyll part 5 - Hosting & building

Part of my series on migrating from WordPress to Jekyll.

  1. My history & reasoning
  2. Comments & commenting
  3. Site search
  4. Categories & tags
  5. Hosting & building

The next stage is considering where to host the site and whether to use a content delivery network (CDN). My preferred approach on other sites has been to:

  1. Host the origin on GitHub pages - it’s fast to build and integrates with my source control
  2. Front it with Amazon’s AWS CloudFront CDN - it’s fast, cheap and comes with a free SSL cert

Adding the CloudFront CDN used to be essential if you wanted SSL plus your own domain name, but in May GitHub Pages added support for SSL certs with custom domains.

Unfortunately my blog is a bit more complex than the other sites I’ve done and two of the plugins I use have not been white-listed for use on GitHub pages. They are:

  1. paginate-v2 which is required to get great tag & category support
  2. Algolia which is needed for search indexing

Part of GitHub’s blazing speed comes from a trusted environment and while I’m sure they’ll be white-listing paginate-v2 in the short term I’m not sure if the Algolia indexer is on the cards.

CircleCI build server

There are always plenty of options in the cloud so I looked for a build server. I’ve used AppVeyor, CodeShip and Travis CI before but decided this time to go with CircleCI, as I wanted to try their new, faster v2 Docker-based infrastructure and take advantage of their free tier.

The v2 mechanism requires a new .circleci/config.yml that splits the process into jobs which are then combined with a workflow. I created two jobs - one for the build and another for the deploy. Here’s the build job:

version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.3
    working_directory: ~/jekyll
    environment:
      - JEKYLL_ENV=production
      - NOKOGIRI_USE_SYSTEM_LIBRARIES=true
      - JOB_RESULTS_PATH=run-results
    steps:
      - checkout
      - restore_cache:
          key: jekyll-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
      - run:
          name: Install dependencies
          command: bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3
      - save_cache:
          key: jekyll-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
          paths:
            - "vendor/bundle"
      - run:
          name: Create results directory
          command: mkdir -p $JOB_RESULTS_PATH
      - run:
          name: Build site
          command: bundle exec jekyll build 2>&1 | tee $JOB_RESULTS_PATH/build-results.txt
      - run:
          name: Remove .html suffixes
          command: find _site -name "*.html" -not -name "index.html" -exec rename -v 's/\.html$//' {} \;
      - run:
          name: Index with Algolia
          command: bundle exec jekyll algolia
      - store_artifacts:
          path: run-results/
          destination: run-results
      - persist_to_workspace:
          root: ~/jekyll
          paths:
            - _site

Origin hosting with S3

Given that I’m going to use CloudFront for my CDN and that GitHub Pages won’t work for this job, I went with S3. I know it well, the command-line tools are great, and it’s cheap, fast and integrates well with CloudFront.

S3 did however bring a few problems of its own - primarily because the links on my blog have no file suffixes. I didn’t want either .php or .html, which WordPress makes a breeze.

Here’s my CircleCI job to deploy to S3. It involves:

  1. Starting with Python to get the AWS command-line tools
  2. Syncing the static site, forcing everything to text/html to deal with the lack of file extensions
  3. Fixing up the few files I have that require a different MIME type (css, feed, robots etc)
  4. Creating a few helpful redirects for backward compatibility with existing links in the wild

(This configuration requires that you’ve set up the AWS access key and secret in CircleCI for the command-line tools to use.)

deploy:
  docker:
    - image: circleci/python:2.7
  working_directory: ~/jekyll
  steps:
    - attach_workspace:
        at: ~/jekyll
    - run:
        name: Install awscli
        command: sudo pip install awscli
    - run:
        name: Deploy to S3
        command: aws s3 sync _site s3://damieng-static/ --delete --content-type=text/html
    - run:
        name: Correct MIME for robots.txt automatically
        command: aws s3 cp s3://damieng-static/robots.txt s3://damieng-static/robots.txt --metadata-directive="REPLACE"
    - run:
        name: Correct MIME for sitemap.xml automatically
        command: aws s3 cp s3://damieng-static/sitemap.xml s3://damieng-static/sitemap.xml --metadata-directive="REPLACE"
    - run:
        name: Correct MIME for Atom feed manually
        command: aws s3 cp s3://damieng-static/feed.xml s3://damieng-static/feed.xml --no-guess-mime-type --content-type="application/atom+xml" --metadata-directive="REPLACE"
    - run:
        name: Redirect /damieng for existing RSS subscribers
        command: aws s3api put-object --bucket damieng-static --key "damieng" --website-redirect-location "https://damieng.com/feed.xml"
    - run:
        name: Correct MIME for CSS files
        command: aws s3 cp s3://damieng-static/css s3://damieng-static/css --metadata-directive="REPLACE" --recursive

Tying together the build

Finally you just need a workflow to tie these two jobs together at the end of your .circleci/config.yml:

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master

A complete version of my circle config is available.

CloudFront CDN

Adding the CloudFront CDN is pretty easy and well covered elsewhere. I’ll just point out that you must paste in the origin domain name from S3 rather than choosing the S3 bucket in the drop-down. The latter ties CloudFront to the storage directly and ignores MIME types, redirects, etc. By pasting the origin name in you take advantage of the S3 website features that make those redirects possible.

Also, while testing, you might want to specify a low TTL of say 120 (2 minutes) until things are fully stable.

[)amien

WordPress to Jekyll part 4 - Categories and tags

Part of my series on migrating from WordPress to Jekyll.

  1. My history & reasoning
  2. Comments & commenting
  3. Site search
  4. Categories & tags
  5. Hosting & building

Jekyll supports categories and tags itself; however, it doesn’t support paginating the category and tag list pages. This is instead solved by the Paginate-v2 gem, which also lets you tweak the URL format.

My site used the URL formats /blog/category/{category-name} and /blog/tag/{tag-name} with four articles per page and a little pager at the bottom offering some indication of what page you are on, plus some navigation arrows, like this:

The pager

In order to render this pager a little Liquid templating is required. Here’s my _includes/pagination.html, which is included within the multiple-posts layout used on the home page, category and tag results.

{% if paginator.total_pages > 1 %}
<div class="pagination pagination-centered">
  <ul class="page-numbers">
  {% if paginator.previous_page %}
    <li><a href="{{ paginator.previous_page_path }}" class="prev">«</a></li>
  {% endif %}

  {% if paginator.page_trail %}
    {% for trail in paginator.page_trail %}
      <li>
        {% if page.url == trail.path %}
          <span class="page-numbers current">{{ trail.num }}</span>
        {% else %}
          <a href="{{ trail.path | prepend: site.baseurl | replace: '//', '/' }}" title="{{ trail.title }}">{{ trail.num }}</a>
        {% endif %}
      </li>
    {% endfor %}
  {% endif %}

  {% if paginator.next_page %}
    <li><a href="{{ paginator.next_page_path }}" class="next">»</a></li>
  {% endif %}
  </ul>
</div>
{% endif %}

Configuring paginate-v2

I configured paginate-v2 as closely as I could to keep the experience consistent with my WordPress install, although the page numbers in the URL are different:

autopages:
  enabled: true
  collections:
    enabled: false
  categories:
    enabled: true
    layouts:
      - home.html
    permalink: '/blog/category/:cat'
    slugify:
      mode: pretty
  tags:
    enabled: true
    layouts:
      - home.html
    permalink: '/blog/tag/:tag'
    slugify:
      mode: pretty

pagination:
  enabled: true
  per_page: 4
  permalink: '/page/:num/'
  title: ':title - page :num'
  limit: 0
  sort_field: 'date'
  sort_reverse: 'true'
  trail:
      before: 2
      after: 2

Auditing categories and tags

Twelve years of blogging across multiple platforms can play havoc with the categories and tags you’ve used over the years. I wrote a quick page that lists all the categories and tags with a count next to each. Anything with only one or two articles is a waste of space, so I’ve been cleaning up.

Here’s that page in case you want to add it to your site to help prune things down.

---
title: Audits
date: 2018-05-30 18:46:00-8:00
---
<h1>Audits</h1>

<h2>Categories</h2>
<ul>
{% for category in site.categories %}
  <li><a href="/blog/category/{{ category | first | replace: ' ', '-' | downcase }}">{{ category | first }}</a> ({{ category[1] | size }})</li>
{% endfor %}
</ul>

<h2>Tags</h2>
<ul>
{% for tag in site.tags %}
  <li><a href="/blog/tag/{{ tag | first | replace: ' ', '-' | downcase }}">{{ tag | first }}</a> ({{ tag[1] | size }})</li>
{% endfor %}
</ul>

See you in part 5 - hosting.

[)amien

WordPress to Jekyll part 3 - Site search

Part of my series on migrating from WordPress to Jekyll.

  1. My history & reasoning
  2. Comments & commenting
  3. Site search
  4. Categories & tags
  5. Hosting & building

Site search is a feature that WordPress got right and, importantly, one that analytics tell me is popular. A static site is once again at a big disadvantage, but we have some options to address that.

Considering options

My first consideration was to use Google Site Search but that was deprecated last year. There are alternative options but few are free. I’m not opposed to people being paid for their services, something has to keep the lights on, but a small personal blog with no income stream can’t justify the cost.

My next thought was to generate reverse-index JSON files during the site build and then write some client-side JavaScript that would search them as the user types in the search box to find the relevant posts. It’s an idea I might come back to, but the migration had already taken longer than I anticipated and I like to ship fast and often :)
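For what it’s worth, the reverse-index idea can be sketched in a few lines (shown here in Python for brevity; the client-side version would mirror it in JavaScript, and the post slugs and whitespace tokenisation are simplified placeholders):

```python
# Minimal reverse (inverted) index: map each word to the set of posts
# containing it, then intersect the sets for each word in the query.
# A real version would add stemming, ranking and excerpt generation.
from collections import defaultdict

def build_index(posts):
    index = defaultdict(set)
    for slug, text in posts.items():
        for word in text.lower().split():
            index[word].add(slug)
    return index

def search(index, query):
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

posts = {
    "bbc-micro": "revisiting my bbc micro display speech",
    "jekyll-search": "site search with algolia for jekyll",
}
index = build_index(posts)
print(search(index, "jekyll search"))  # → ['jekyll-search']
```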

Algolia

I soon came across Algolia which not only provides a simple API and a few helper libraries but also a Jekyll plug-in to generate the necessary search indexes AND has a free tier that requires just a logo placement and link to their site! Awesome.

Setup was a breeze, and Algolia have a specific guide to indexing with Jekyll that was useful. Once you’ve signed up, the main parts are configuring indexing and integrating search with your site.

Index integration

First install the jekyll-algolia gem, making sure it’s specified in your Gemfile.
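The gem goes in the Jekyll plugins group, roughly like this (a minimal fragment; your Gemfile will have more in it):

```ruby
# Gemfile
group :jekyll_plugins do
  gem "jekyll-algolia"
end
```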

Then configure your Jekyll _config.yml so it knows what to index and where as well as what document attributes are important:

algolia:
  application_id: {your-algolia-app-id}
  index_name: {your-algolia-index-name}
  settings:
    searchableAttributes:
      - title
      - excerpt_text
      - headings
      - content
      - categories
      - tags
    attributesForFaceting:
      - type
      - searchable(categories)
      - searchable(tags)
      - searchable(title)

Finally you’ll need to run the indexing. Ensure the environment variable ALGOLIA_API_KEY is set to the private Admin API Key from your Algolia API Keys page, then run the following command after your site is built:

bundle exec jekyll algolia

Site integration

Wiring up the search box can be a little overwhelming as they have so many clients, options and APIs available. I went with a design that presents the results as you type like this:

This uses two of their libraries - the search lite client and the search helper - plus some code to wire them up to my search box and render the results in a drop-down list. I’ll probably tweak the result format further and maybe consider wiring up to the API directly, as two libraries for such a simple use case seems a bit overkill.

<script src="https://cdn.jsdelivr.net/npm/algoliasearch@3/dist/algoliasearchLite.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/algoliasearch-helper@2.26.0/dist/algoliasearch.helper.min.js"></script>
<script>
  let searchForm = document.getElementById('search-form')
  let hits = document.getElementById('hits')
  let algolia = algoliasearch('{your-algolia-app-id}', '{your-algolia-search-token}')
  let helper = algoliasearchHelper(algolia, '{your-algolia-index-name}',
    { hitsPerPage: 10, maxValuesPerFacet: 1, getRankingInfo: false })
  helper.on('result', searchCallback)

  // Called from the search box (e.g. via an 'input' event listener, not shown here)
  function runSearch() {
    let term = document.getElementById('s').value
    if (term.length > 0)
      helper.setQuery(term).search()
    else
      searchForm.classList.remove('open')
  }

  function searchCallback(results) {
    if (results.hits.length === 0) {
      hits.innerHTML = '<li><a>No results!</a></li>'
    } else {
      renderHits(results)
      searchForm.classList.add('open')
    }
    let credits = document.createElement('li');
    credits.innerHTML = "<img src=\"https://www.algolia.com/static_assets/images/press/downloads/search-by-algolia.svg\" onclick=\"window.open('https://www.algolia.com', '_blank')\" />"
    hits.appendChild(credits)
  }

  function renderHits(results) {
    hits.innerHTML = ''
    for (let i = 0; i < results.hits.length; i++) {
      let li = document.createElement('li')
      let title = document.createElement('a')
      title.innerHTML = results.hits[i]._highlightResult.title.value
      title.href = results.hits[i].url
      li.appendChild(title)
      hits.appendChild(li)
    }
  }
</script>

Analytics

I’m a big proponent of analytics when used purely for engineering improvement, and Algolia provides a useful dashboard to let you know how performance is doing, what topics are being searched for and what searches might not be returning useful content.

I’ll dig through that when I have a little more time however. The backlog of ideas for posts is taking priority right now!

[)amien

Note: I did not and do not receive any compensation from Algolia, either directly or via any kind of referral program. I’m just a happy user.