Rendering content with Nuxt3

I've been a big fan of Nuxt2, but Nuxt3 is definitely a learning curve, and I have to admit the documentation is a bit lacking - lots of small fragments and many different ways to do things.

What I wanted

I basically wanted a simple example or page that looks at the slug, finds the markdown content for it and renders it unless it's missing, in which case it returns a 404. Oh, and I want to use TypeScript.

Simple right? Yeah, not so much if you want to use...

The Composition API

The composition API (that's the <script setup> bit) is the new hotness, but it means you have to use different functions and get back different structures from Nuxt3. The old API is still around, which means a lot of the samples you find online aren't going to work - things like useAsyncData instead of the old asyncData, etc. It can be quite overwhelming when all the snippets use a different mechanism that doesn't easily port.

Grabbing the slug

You now need to call useRoute to get the current route and then you can grab parameters off it. If, like me, you want to use TypeScript, you can install nuxt-typed-router, which lets you specify the route name in useRoute and as a result strongly types the parameters so route.params.slug autocompletes and compiles without warnings.

Nice.
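
If you go that route, registering the module is the usual Nuxt3 affair - a minimal sketch, assuming you've already installed the nuxt-typed-router package:

// nuxt.config.ts - a minimal sketch; the module then generates typed route names for you
export default defineNuxtConfig({
  modules: [
    'nuxt-typed-router',
  ],
})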

Querying content

The content system has changed quite a lot. The new Nuxt3 content module has a lot of things that help you out now, such as components that can go and get the page and render it for you in a single step - and the markdown itself can specify the template. I'll dig more into that in the future, but for now I just wanted to get the article, so get used to things like find() and findOne(), which you access through queryContent inside useAsyncData.

Note that the first parameter to useAsyncData is effectively a cache key for rendering the page, so don't let it collide with a component that's also rendering on that page.
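
For example, a hypothetical index page that lists everything under content/blog might look like the sketch below - note the 'blog-list' key is deliberately different from the one used for the single-article page later on (the date field and sort order are just assumptions about your front matter):

<script setup lang="ts">
// A minimal sketch: fetch all posts under content/blog, newest first.
// 'blog-list' is an illustrative cache key - keep it unique to this query.
const { data: posts } = await useAsyncData('blog-list', () =>
  queryContent('blog').sort({ date: -1 }).find()
)
</script>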

404 when not found

None of the snippets I could find showed how to return a 404 with the composition-API useAsyncData pattern (the non-composition one returns an error object, which makes life simpler).

createError is your friend: just throw what it creates with the right statusCode should the content result be missing.

Output Front Matter elements

This turned out to be the easy part - binding is pretty much the same: {{ something }} for inner text and v-bind:attribute for attributes. The only oddity here was having to ?. the properties because TypeScript believes they can be null.

Render the Markdown

This isn't too tricky. There are a whole bunch of components now in the Nuxt Content package, but these two work well if you want a level of control: ContentRenderer provides some of the basic infrastructure/setup while ContentRendererMarkdown actually does the Markdown conversion. I could have put the <h1>, for example, inside <ContentRenderer> but this looked fine.

Set the page title etc.

Finally we need to set the page title, and the composition API's useHead is what you're after here. We also set the page meta description from the post's Front Matter.

Show me the code

Okay, here's the sample for displaying a single blog post using the composition API and all we talked about.

<template>
  <article>
    <h1>{{ post?.title }}</h1>
    <ContentRenderer :value="post">
      <ContentRendererMarkdown :value="post" />
    </ContentRenderer>
  </article>
</template>

<script setup lang="ts">

const route = useRoute('blog-slug')
const { data: post } = await useAsyncData('post/' + route.params.slug, () => queryContent('blog', route.params.slug).findOne())
if (post.value == null) throw createError({ statusCode: 404, message: 'Post not found' })

useHead({
  title: post.value.title,
  meta: [
    {
      hid: 'description',
      name: 'description',
      content: post.value.description,
    },
  ],
})

</script>

You would name this file /pages/blog/[slug].vue

If you're using JavaScript...

If you don't want to use TypeScript, remove lang="ts" as well as the 'blog-slug' argument from useRoute and all should be fine. Obviously don't install nuxt-typed-router either.
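
For reference, the script block from the sample above would then look something like this:

<script setup>
const route = useRoute()
const { data: post } = await useAsyncData('post/' + route.params.slug, () =>
  queryContent('blog', route.params.slug).findOne())
if (post.value == null) throw createError({ statusCode: 404, message: 'Post not found' })
</script>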

Have fun!

A quick primer on floppy disks

I've always been fascinated by floppy disks, from the crazy stories of Steve Wozniak designing the Disk II controller using a handful of logic chips and carefully-timed software, to the amazing tricks to create - and break - copy protection recently popularised by 4am.

I'm going to be writing a few articles about data preservation and copy protection but first we need a short primer.

Media sizes

There were all sorts of attempts at different sizes, but these were the major players:

  • 8" - The grand-daddy of them all but not used on home PC's. They did look cool with the IMSAI 8080 in Wargames tho.
  • 5.25" - 1976 saw the 5.25" disk format appear from Shugart Associates soon to be adopted by the BBC Micro and IBM PC with 360KB being the usual capacity for double-sided disks.
  • 3" - Hitachi developed the 3" drive which saw some 3rd-party solutions before being adopted by the Oric and Tatung Einstein. Matsushita licenced it's simpler cheaper version of the drive to Amstrad where it saw use on the CPC, PCW and Spectrum +3.
  • 3.5" - Sony developed this around 1982 and it was quickly adopted by the PC, Mac, Amiga, Atari ST and many third-party add-ons.

Sides

Almost all disks are double-sided but many drives are single-sided to reduce manufacturing costs. In the case of 5.25" disks some were made with the ability to be flipped, or kits existed to turn them into flippable disks; the 3" disk had this built in. Double-sided drives write to both sides using two heads while single-sided drives just require you to flip the disk.

An interesting artifact of this is that while you can read single-sided disks in a double-sided drive by flipping them as usual, a single-sided drive has challenges with disks written by a double-sided drive.

While they can be physically read, the data is effectively backwards because the head underneath sees the disk rotate in the opposite direction. Flux-level imagers can read these and theoretically invert the image to compensate; computers back in the day had little chance. There is also the complication that most formats interleave the data between the heads for read speed rather than writing one side and then the other. Long story short: read double-sided disks on a double-sided drive.

Tracks

Each side of the disk has its surface broken down into a number of rings known as tracks, starting at track 0 on the outside and working their way in. 40 tracks is typical on earlier, lower-density media and 80 on higher-density, depending on both the drive itself and the designation of the media.

Some disks provided an extra hole so the drive itself could identify whether the media was high density or not, while others, like Amstrad's 3" media, simply had a different coloured label even though the media itself was identical.

Some custom formats and copy protection systems pushed this number up to 41 or 42 tracks, so it's always worth imaging at least one extra track to make sure it's unformatted and you're not losing anything. Additionally, some machines like the C64 used fewer tracks - 35.

Sectors

Finally we have sectors, which are segments of a track. Typically a disk will have 9 or 10 sectors, all the same size, but some machines have more or fewer. Each sector is typically a power of 2 in length - 128 bytes through 1024 bytes (1KB) is typical, although some copy protection pushes this higher. Each sector has an ID number, and while they might be numbered sequentially they are often written out of order to improve speed where the host machine can't process one read before the next sector whizzes by. By writing the sectors out of order we can optimize the layout, at least for the standard DOS/OS that will be processing them, in a technique called interleaving - see the sketch below.
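
As a rough illustration of how an interleaved layout can be computed (the sector count and interleave value here are made-up examples, not any particular format):

// A minimal sketch: place logical sectors around a track with a given interleave,
// skipping over slots that are already taken.
function interleaveOrder(sectorCount: number, interleave: number): number[] {
  const track: number[] = new Array(sectorCount).fill(-1)
  let position = 0
  for (let sector = 0; sector < sectorCount; sector++) {
    while (track[position] !== -1) position = (position + 1) % sectorCount
    track[position] = sector
    position = (position + interleave) % sectorCount
  }
  return track
}

console.log(interleaveOrder(9, 2)) // [0, 5, 1, 6, 2, 7, 3, 8, 4]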

Encoding

Floppy disks themselves can store only magnetic charges that are either on or off. You might imagine the computer would map a binary 1 to a magnetic charge and a 0 to no charge, but this immediately causes problems:

  • Timing - drives rotate the disk at slightly different speeds, and too-infrequent changes in the data mean we lose sync
  • Strong bits - too many on-bits together cause a strong magnetic charge that leaks over to neighbouring areas
  • Weak bits - too many off-bits together leave such a weak magnetic charge that we pick up background noise

In order to avoid these problems, encoding schemes map the computer's binary 1s and 0s into on-disk sequences. Two simpler-to-explain ones include:

  • FM - Stores 0 as 10 and 1 as 11 on the disk, giving 50% efficiency
  • GCR - Stores a nibble (4 bits) as one of the 16 approved 5-bit sequences on-disk giving 80% efficiency

Other schemes use different tables or invert bit sequences (in the case of MFM, which is the most popular) to ensure that flux transitions are spaced further apart, meaning you can actually write the data at twice the density and still be within the tolerances of the disk head's ability to spot transitions.
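
To make the FM scheme concrete, here's a tiny sketch (in TypeScript, purely for illustration - nothing a real controller would run) that expands a byte into its on-disk FM bit pattern:

// A minimal sketch of FM encoding: each data bit is preceded by a clock bit of 1,
// so 0 becomes 10 and 1 becomes 11 - twice as many bits on disk, hence 50% efficiency.
function fmEncodeByte(byte: number): string {
  let bits = ''
  for (let i = 7; i >= 0; i--) {
    bits += ((byte >> i) & 1) ? '11' : '10'
  }
  return bits
}

console.log(fmEncodeByte(0xA5)) // 10100101 -> "1110111010111011"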

Controller

Off-the-shelf controller chips added cost to a disk system, and so some systems - notably the Apple ][ and Amiga - performed the job using their own custom logic and software. This gave rise to some interesting disk formats and incredible copy-protection mechanisms.

Meanwhile companies like Western Digital and NEC produced dedicated floppy controller chips such as the WD1770 (BBC Master), WD1772 (Atari ST), WD1793 (Beta Disk), VL1772 (Disciple/+D) and the NEC 765A (Spectrum, Amstrad), which traded that flexibility for simpler integration.

Finally there were general-purpose processors which were repurposed for controlling the floppy such as the Intel 8271 (BBC Micro) or MOS 6502 (inside the Commodore 64's 1541 drive).

Copy protection

Many people think computers are all digital and so the only way to protect information is via encryption and obfuscation. While both techniques are used in copy protection, the floppy disks themselves exist in our analogue world and are open to all sorts of tricks that make things harder to copy, from exploiting weak bits to creating tracks so long they wrap back onto themselves. Check out PoC||GTFO issue 0x10 for some of the crazy techniques used on the Apple ][.

Adding reading time to Nuxt3 content

I've been using Nuxt2 quite a bit for my sites (including this one) and am now starting to use Nuxt3 for a few new ones. I'm finding the docs lacking in many places and confusing in others, so I hope to post a few more tips in the coming weeks.

Today we have the "reading time" popularised by sites like Medium.

Nuxt3

In Nuxt3, if you are using the content module, create a new file called /server/plugins/reading-time.ts with the following contents:

import { visit } from 'unist-util-visit'

const wpm = 225

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('content:file:afterParse', (file) => {
    if (file._id.endsWith('.md') && !file.wordCount) {
      file.wordCount = 0
      visit(file.body, (n: any) => n.type === 'text', (node) => {
        file.wordCount += node.value.trim().split(/\s+/).length
      })
      file.minutes = Math.ceil(file.wordCount / wpm)
    }
  })
})

This is a little more convoluted than in Nuxt2 because there we were able to look at the plain text and set a property before parsing (beforeParse); in Nuxt3, however, properties set that early do not persist all the way through to the page.

Nuxt2

For completeness, in case you are still using Nuxt2, the equivalent was to modify nuxt.config.js to add this block:

export default {
  // ...
  hooks: {
    "content:file:beforeInsert": document => {
      if (document.extension === ".md" && document.minutes === undefined) {
        document.minutes = readingTime(document.text);
      }
      // you could also set other derived properties here, such as a category
    }
  },
  // ...
}

Then place this function at the end of the file:

export const readingTime = (text) => {
  const wpm = 225;
  const words = text.trim().split(/\s+/).length;
  return Math.ceil(words / wpm);
}

Displaying it

Now, anywhere on your page where you are displaying content from those markdown files you can simply do:

<p v-if="post.minutes">{{ post.minutes }} minutes</p>

I've chosen a reading speed of 225 words-per-minute but obviously reading speed is highly subjective.

One flexible alternative would be to record the number of words an article has and then, in the front end, divide it by a user-configurable value. While it's unlikely to be worth the effort on a small site where somebody only hits a single article, it could be nice if you're putting out a lot of related content a viewer might peruse through.
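
A minimal sketch of that idea, assuming the plugin above has stored wordCount on each document (the variable names and the 225 default are just illustrative):

<script setup lang="ts">
// A sketch: derive reading time on the client from the stored word count,
// divided by a speed the reader can change. Mirrors the earlier article page.
const route = useRoute()
const slug = route.params.slug as string
const { data: post } = await useAsyncData('post/' + slug, () =>
  queryContent('blog', slug).findOne())

const userWpm = ref(225) // could be bound to a settings control
const minutes = computed(() =>
  post.value?.wordCount ? Math.ceil(post.value.wordCount / userWpm.value) : null)
</script>

The template can then show {{ minutes }} minutes just as before.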

Estimating JSON size

I've been working on a system that heavily uses message queuing (RabbitMQ via MassTransit, specifically) and occasionally the system needs to deal with large object graphs that have to be processed differently - either broken into smaller pieces of work, or serialized to an external source with a pointer put into the message instead.

The first idea was to serialize all messages to a MemoryStream but unfortunately this has some limitations, specifically:

  • For smaller messages the entire stream is duplicated due to the way the MassTransit interface works
  • For larger messages we waste a lot of memory

For short-lived HTTP requests this is generally not a problem, but for long-running message queue processing systems we want to be a bit more careful with GC pressure.

Which led me to two possible solutions:

LengthOnlyStream

This method is 100% accurate, taking into consideration JSON attributes etc., and yet only requires a few bytes of memory.

Basically it involves a new Stream class that does not record what is written, merely the length.

Pros & cons

  • + 100% accurate
  • + Can work with non-JSON serialization too
  • - Still goes through the whole serialization process

Source

internal class LengthOnlyStream : Stream {
    long length;

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => length;
    public override long Position { get; set; }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotImplementedException();
    public void Reset() => length = 0;
    public override long Seek(long offset, SeekOrigin origin) => 0;
    public override void SetLength(long value) => length = value;
    public override void Write(byte[] buffer, int offset, int count) => length += count; // count bytes are written regardless of the buffer offset
}

Usage

You can now get the actual serialized size with:

using var countStream = new LengthOnlyStream();
JsonSerializer.Serialize(countStream, damien, typeof(Person), options);
var size = countStream.Length;

JsonEstimator

In our particular case we don't need to be 100% accurate; instead we'd like to minimize the amount of work done as a trade-off.

This is where JsonEstimator comes into play:

Pros & cons

  • - Not 100% accurate (ignores Json attributes)
  • + Fast and efficient

Source

public static class JsonEstimator
{
    public static long Estimate(object obj, bool includeNulls) {
        if (obj is null) return 4;
        if (obj is Byte || obj is SByte) return 1;
        if (obj is Char) return 3;
        if (obj is Boolean b) return b ? 4 : 5;
        if (obj is Guid) return 38;
        if (obj is DateTime || obj is DateTimeOffset) return 35;
        if (obj is Int16 i16) return i16.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Int32 i32) return i32.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Int64 i64) return i64.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt16 u16) return u16.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt32 u32) return u32.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt64 u64) return u64.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is String s) return s.Length + 2;
        if (obj is Decimal dec) {
            // Approximate: count the digits before the decimal point, then add the scale
            // (digits after the point) and the separator when there is a fractional part.
            long left = Math.Truncate(Math.Abs(dec)).ToString(CultureInfo.InvariantCulture).Length;
            var right = BitConverter.GetBytes(decimal.GetBits(dec)[3])[2];
            return right == 0 ? left : left + right + 1;
        }
        if (obj is Double dou) return dou.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Single sin) return sin.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is IDictionary dict) return EstimateDictionary(dict, includeNulls);
        if (obj is IEnumerable enumerable) return EstimateEnumerable(enumerable, includeNulls);

        return EstimateObject(obj, includeNulls);
    }

    static long EstimateEnumerable(IEnumerable enumerable, bool includeNulls) {
        long size = 0;
        foreach (var item in enumerable)
            size += Estimate(item, includeNulls) + 1; // ,
        return size > 0 ? size + 1 : 2;
    }

    static readonly BindingFlags publicInstance = BindingFlags.Instance | BindingFlags.Public;

    static long EstimateDictionary(IDictionary dictionary, bool includeNulls) {
        long size = 2; // { }
        bool wasFirst = true;
        foreach (var key in dictionary.Keys) {
            var value = dictionary[key];
            if (includeNulls || value != null) {
                if (!wasFirst)
                    size++;
                else
                    wasFirst = false;
                size += Estimate(key, includeNulls) + 1 + Estimate(value, includeNulls); // :,
            }
        }
        return size;
    }

    static long EstimateObject(object obj, bool includeNulls) {
        long size = 2;
        bool wasFirst = true;
        var type = obj.GetType();
        var properties = type.GetProperties(publicInstance);
        foreach (var property in properties) {
            if (property.CanRead && property.CanWrite) {
                var value = property.GetValue(obj);
                if (includeNulls || value != null) {
                    if (!wasFirst)
                        size++;
                    else
                        wasFirst = false;
                    size += property.Name.Length + 3 + Estimate(value, includeNulls);
                }
            }
        }

        var fields = type.GetFields(publicInstance);
        foreach (var field in fields) {
            var value = field.GetValue(obj);
            if (includeNulls || value != null) {
                if (!wasFirst)
                    size++;
                else
                    wasFirst = false;
                size += field.Name.Length + 3 + Estimate(value, includeNulls);
            }
        }
        return size;
    }
}

Usage

var size = JsonEstimator.Estimate(damien, true);

Wrap-up

I also had the chance to play with Benchmark.NET and to look at various optimizations of the estimator (using Stack<T> instead of recursion, reducing foreach allocations, and a simpler switch that did not involve pattern matching) but none of them yielded positive results on my test set - I likely need a bigger set of objects to see the real benefits.

Regards,

[)amien