Adding reading time to Nuxt3 content

I've been using Nuxt2 quite a bit for my sites (including this one) and am now starting to use Nuxt3 for a few new ones. I'm finding the docs lacking in many places or confusing in others, so I hope to post a few more tips in the coming weeks.

Today we have the "reading time" popularised by sites like Medium.


In Nuxt3, if you are using the content module, create a new file called /server/plugins/reading-time.ts with the following contents:

import { visit } from 'unist-util-visit'

const wpm = 225

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('content:file:afterParse', (file) => {
    if (file._id.endsWith('.md') && !file.wordCount) {
      file.wordCount = 0
      visit(file.body, (n: any) => n.type === 'text', (node) => {
        file.wordCount += node.value.trim().split(/\s+/).length
      })
      file.minutes = Math.ceil(file.wordCount / wpm)
    }
  })
})

This is a little more convoluted than in Nuxt2 because there we were able to look at the plain text and set a property before parsing (beforeParse). In Nuxt3, however, those properties do not persist all the way through to the page.


For completeness, in case you are still using Nuxt2, the equivalent there was to modify nuxt.config.js to add this block:

export default {
  // ...
  hooks: {
    "content:file:beforeInsert": (document) => {
      if (document.extension === ".md" && document.minutes === undefined) {
        document.minutes = readingTime(document.text);
      }
      document.category = getCategory(document); // another custom helper, unrelated to reading time
    }
  },
  // ...
}

Then place this function at the end of the file:

export const readingTime = (text) => {
  const wpm = 225;
  const words = text.trim().split(/\s+/).length;
  return Math.ceil(words / wpm);
};

Displaying it

Now, anywhere on your page where you are displaying content from those markdown files you can simply do:

<p v-if="post.minutes">{{ post.minutes }} minutes</p>

I've chosen a reading speed of 225 words-per-minute but obviously reading speed is highly subjective.

One flexible alternative would be to record the number of words an article has and then divide it by a user-configurable reading speed in the front-end. It's probably not worth the effort on a small site where somebody lands on a single article, but if you're putting out a lot of related content that a visitor might browse through it could be a nice touch, as sketched below.
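
Here's a minimal sketch of that idea, assuming the Nitro plugin above has stored wordCount on each document; the readingSpeed localStorage key and the 225 default are my own inventions for illustration:

<script setup lang="ts">
import { computed, ref } from 'vue'

const props = defineProps<{ post: { wordCount?: number } }>()

// hypothetical user preference; guard for SSR where localStorage is absent
const wpm = ref(
  typeof localStorage !== 'undefined'
    ? Number(localStorage.getItem('readingSpeed') ?? 225)
    : 225
)

const minutes = computed(() =>
  props.post.wordCount ? Math.ceil(props.post.wordCount / wpm.value) : null
)
</script>

<template>
  <p v-if="minutes">{{ minutes }} minutes</p>
</template>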

Estimating JSON size

I've been working on a system that makes heavy use of message queuing (RabbitMQ via MassTransit, specifically) and occasionally the system needs to deal with large object graphs that have to be processed differently - either broken into smaller pieces of work, or serialized to an external source with a pointer put into the message instead.

The first idea was to serialize all messages to a MemoryStream but unfortunately this has some limitations, specifically:

  • For smaller messages the entire stream is duplicated due to the way the MassTransit interface works
  • For larger messages we waste a lot of memory

For short-lived HTTP requests this is generally not a problem but for long-running message queue processing systems we want to be a bit more careful about GC pressure.

This led me to two possible solutions:


The first method is 100% accurate, taking JSON attributes etc. into consideration, and yet requires only a few bytes of memory.

Basically it involves a new Stream class that does not record what is written, merely the length.

Pros & cons

  • + 100% accurate
  • + Can work with non-JSON serialization too
  • - Still goes through the whole serialization process


using System;
using System.IO;

internal class LengthOnlyStream : Stream {
    long length;

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => length;
    public override long Position { get; set; }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public void Reset() => length = 0;
    public override long Seek(long offset, SeekOrigin origin) => 0;
    public override void SetLength(long value) => length = value;
    // offset is the start position within buffer, not part of the byte count
    public override void Write(byte[] buffer, int offset, int count) => length += count;
}


You can now get the actual serialized size with:

using var countStream = new LengthOnlyStream();
JsonSerializer.Serialize(countStream, damien, typeof(Person), options);
var size = countStream.Length;
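
Because the stream records only a count it can be reused between messages via Reset(); a minimal sketch (the messages collection, options and 256KB threshold here are illustrative):

using var countStream = new LengthOnlyStream();
foreach (var message in messages) {
    countStream.Reset();
    JsonSerializer.Serialize(countStream, message, message.GetType(), options);
    if (countStream.Length > 256 * 1024) {
        // too large: offload the payload and send a pointer message instead
    }
}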


In our particular case we don't need to be 100% accurate and instead would like to minimize the amount of work done as the trade-off.

This is where JsonEstimator comes into play:

Pros & cons

  • - Not 100% accurate (ignores JSON attributes)
  • + Fast and efficient


using System;
using System.Collections;
using System.Globalization;
using System.Reflection;

public static class JsonEstimator {
    static readonly BindingFlags publicInstance = BindingFlags.Instance | BindingFlags.Public;

    public static long Estimate(object obj, bool includeNulls) {
        if (obj is null) return 4; // null
        if (obj is Byte || obj is SByte) return 1;
        if (obj is Char) return 3; // "x"
        if (obj is Boolean b) return b ? 4 : 5; // true or false
        if (obj is Guid) return 38;
        if (obj is DateTime || obj is DateTimeOffset) return 35;
        if (obj is Int16 i16) return i16.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Int32 i32) return i32.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Int64 i64) return i64.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt16 u16) return u16.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt32 u32) return u32.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is UInt64 u64) return u64.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is String s) return s.Length + 2; // surrounding quotes
        if (obj is Decimal dec) {
            // digits to the left of the decimal point (including any sign)
            var left = Math.Truncate(dec).ToString(CultureInfo.InvariantCulture).Length;
            // the scale byte of the decimal gives the digit count to the right of the point
            var right = BitConverter.GetBytes(decimal.GetBits(dec)[3])[2];
            return right == 0 ? left : left + right + 1; // + 1 for the point itself
        }
        if (obj is Double dou) return dou.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is Single sin) return sin.ToString(CultureInfo.InvariantCulture).Length;
        if (obj is IDictionary dict) return EstimateDictionary(dict, includeNulls);
        if (obj is IEnumerable enumerable) return EstimateEnumerable(enumerable, includeNulls);

        return EstimateObject(obj, includeNulls);
    }

    static long EstimateEnumerable(IEnumerable enumerable, bool includeNulls) {
        long size = 0;
        foreach (var item in enumerable)
            size += Estimate(item, includeNulls) + 1; // ,
        return size > 0 ? size + 1 : 2; // [ ]
    }

    static long EstimateDictionary(IDictionary dictionary, bool includeNulls) {
        long size = 2; // { }
        bool wasFirst = true;
        foreach (var key in dictionary.Keys) {
            var value = dictionary[key];
            if (includeNulls || value != null) {
                if (!wasFirst)
                    size++; // ,
                else
                    wasFirst = false;
                size += Estimate(key, includeNulls) + 1 + Estimate(value, includeNulls); // :
            }
        }
        return size;
    }

    static long EstimateObject(object obj, bool includeNulls) {
        long size = 2; // { }
        bool wasFirst = true;
        var type = obj.GetType();
        var properties = type.GetProperties(publicInstance);
        foreach (var property in properties) {
            if (property.CanRead && property.CanWrite) {
                var value = property.GetValue(obj);
                if (includeNulls || value != null) {
                    if (!wasFirst)
                        size++; // ,
                    else
                        wasFirst = false;
                    size += property.Name.Length + 3 + Estimate(value, includeNulls); // quotes and :
                }
            }
        }

        var fields = type.GetFields(publicInstance);
        foreach (var field in fields) {
            var value = field.GetValue(obj);
            if (includeNulls || value != null) {
                if (!wasFirst)
                    size++; // ,
                else
                    wasFirst = false;
                size += field.Name.Length + 3 + Estimate(value, includeNulls); // quotes and :
            }
        }
        return size;
    }
}


var size = JsonEstimator.Estimate(damien, true);


I also had the chance to play with BenchmarkDotNet and to look at various optimizations of the estimator (using a Stack<T> instead of recursion, reducing foreach allocations, and a simpler switch that did not involve pattern matching) but none of them yielded positive results on my test set. I likely need a bigger set of objects to see the real benefits.



Using variable web fonts for speed

Web fonts are now ubiquitous across the web, to the point where most of the big players even have their own typefaces, and the web looks a lot better for it.

Unfortunately the problem still exists that either the browser waits to draw anything while it is fetching the font, or it renders without the font and then re-renders the page once the font is available. Neither is a great solution, and Google's PageSpeed will hit you for either, so what is an enterprising web developer to do?

I was tasked with exactly this problem while redesigning my wife's web site - MKG Marketing Inc. Being brought in right at the start, while design and technologies were still on the table, gave me enough control to mitigate this quite well.

What we need to do is reduce the amount of time it takes to get this font on the screen and then decide whether we want to go with the block or swap approach.

Variable fonts are a boon

The first opportunity here was to go with a good variable font.

Variable fonts - unlike traditional static fonts - have a number of axes which you can think of as sliders with which to push and pull the design.

The most obvious one is how light or bold a font is - rather than just a light (200), a normal (400) and a bold (700), the browser can make a single font file perform all these roles and anything in between. Another axis might be how narrow or wide a font is, eliminating the need for separate condensed or expanded fonts. Another might be how slanted or italic the design is, or even whether a font is a serif or a sans-serif... and plenty of small serifs in between.

This means instead of having to load 3 or more web fonts we need just one which will decrease our page load time.

Getting a variable font

Google Fonts helpfully has an option to "Show only variable fonts", however it is important to note that while Google can show you them, it is not capable of serving the actual variable font to your site! Instead it will use the variable font to make you a set of static fonts to serve up, which defeats a lot of the point of using them.

Anyway, feel free to browse their site for a variable font that looks great on your site. Even use their CSS temporarily to see what it would look like. Once you're happy it's time to head into the font information to see who created the font and where you can get the actual variable font file from.

For example, the rather nice Readex Pro says in its about section to head over to their GitHub repository to contribute.

Unlike most open source code projects, binaries for fonts often get checked in, as users don't typically have the (sometimes commercial, such as Glyphs or FontLab) tool-chain to build them. This is good for us as inside the /fonts/variable/ folder we see two .ttf files. You will note that one filename includes [wght], which means it contains the weight axis. The second in this case is [HEXP,wght], where HEXP is for "hyper-expansion". Choose the one that has just the axes you need.

You can also check the download link from Google Fonts - that often has a full variable font they are using. It will probably contain much more than you want. Alternatively, some fonts such as Recursive even have an online configurator that lets you choose what options you want and provides a ready-to-go fully-subset and compressed woff2 (so you can skip straight to serving the font!)

Preparing the font

Now you could just load that .ttf file up somewhere but there are still some optimizations we can do. We want to shrink this file down as much as possible and there are two parts to that.

Firstly we want to strip out everything we don't need, also known as subsetting. In the case of this font, for example, I could strip out the Arabic characters as I do not write anything in Arabic (browsers will fall back to an installed font should they need one, such as when somebody enters Arabic into your contact form).

Subsetting the font

The open source fontTools comes to the rescue here. I installed it in WSL but Linux and other operating systems should be the same:

pip install fonttools brotli zopfli

The subset command has a number of options to choose what to keep. Unfortunately there is no easy language option, but you can specify unicode ranges with --unicodes=, text files full of characters with --text-file=, or simply provide a list of characters with --text=. If you go the unicode range route then Unicodepedia has good coverage of what you'll need.

For this site we just needed Latin and the Latin-1 supplement, so:

fonttools subset "ReadexPro[wght].ttf" --unicodes=U+0020-007F,U+00A0-00FF --flavor=woff2

Which produces a ReadexPro[wght].subset.woff2 weighing in at just 16KB (down from 188KB).

Serving the font

Now at this point you could just upload the woff2 file to your CDN and reference it from your CSS.

Your CSS would look like this:

@font-face {
  font-family: "ReadexPro";
  src: url(/fonts/readexpro[wght].subset.woff2) format("woff2");
  font-weight: 100 900;
}
The important bit here is font-weight: 100 900 which lets the browser know that this is a variable font capable of handling all the font weights from 100 to 900. You can do this with other properties too:

| Axis   | CSS property | Named example      | Numeric example             |
|--------|--------------|--------------------|-----------------------------|
| [slnt] | font-style   | normal oblique     | normal oblique 30deg 50deg  |
| [wdth] | font-stretch | condensed expanded | 50% 200%                    |
| [wght] | font-weight  | thin heavy         | 100 900                     |
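
If your font has more than one axis you can declare the corresponding ranges together in the single @font-face; a sketch assuming a hypothetical font with both weight and width axes:

@font-face {
  font-family: "MyVariable";
  src: url(/fonts/myvariable[wdth,wght].subset.woff2) format("woff2");
  font-weight: 100 900;   /* [wght] range */
  font-stretch: 50% 200%; /* [wdth] range */
}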

Do not instead create a @font-face per weight all linking to the same url as the browsers are often not smart enough to de-duplicate the requests and you're back to effectively waiting for multiple resources to load.

Now this might be fast enough, you could measure it and test... but this font is only 16KB so there's one more trick we can do.

With the url in the CSS the browser must first request the HTML, then the CSS, and only then the font. If you're smart you'll have them all on the same host so at least there are no extra DNS lookups slowing things down. (Also note that requesting popular files from well-known CDNs no longer reuses local copies, in order to provide better security, so it really is worth hosting your own again, especially if you have a CDN in front of your site such as Cloudflare, CloudFront etc.)

But we can do better!

Base64 the font into the CSS!

By Base64 encoding the font you can drop it directly into the CSS, thereby removing a whole other HTTP request/wait! Simply encode your woff2 file using either an online tool (select your .woff2, click Encode and download the encoded text file) or a command-line tool such as base64. I used WSL again with:

base64 -w0 "readexpro[wght].subset.woff2" | clip.exe

Which stuffs the base64 encoded version into the clipboard ready to be pasted.

Now change your @font-face replacing the url with a data section, like this:

@font-face {
  font-family: "ReadexPro";
  font-display: block;
  font-weight: 100 900;
  src: url(data:application/x-font-woff;charset=utf-8;base64,d09GMgABAAAAAEK0ABQAAAAAgZAAAEJFAAEz
  [truncated for this blog post]AQ==) format("woff2");
}

And hey presto! You can now happily slip that font-display: block in as well as the font is always going to be available at the same time as the CSS :)

Hope that helps!



Migrating from OpenTracing.NET to OpenTelemetry.NET


OpenTracing is an interesting project that allows for various trace sources to be correlated together to provide a timeline of activity that can span services, for reporting in a central system. It competed with OpenCensus but the two projects have now merged to form OpenTelemetry.

I was recently brought in as a consultant to help migrate an existing system that used OpenTracing in .NET that recorded trace data into Jaeger so that they might migrate to the latest OpenTelemetry libraries. I thought it would be useful to document what I learnt as the migration process is not particularly clear.

The first thing to note is that OpenTracing and OpenTelemetry are both multi-platform systems supporting many languages and services. While this is great for standardization, it does mean that a lot of the information you find doesn't necessarily relate to the library you are using.

In this particular case we are using the .NET/C# library, so moving from the OpenTracing API for .NET to OpenTelemetry .NET. While there are plenty of signs that OpenTracing is deprecated and that all efforts are now in OpenTelemetry, it's worth noting that at the time of writing OpenTelemetry .NET isn't quite done - many of the non-core libraries are in beta or release-candidate status.

How OpenTracing is used

The OpenTracing standard basically works through the ITracer interface. You can register it as a singleton via DI and let it flow through to where it is needed, or access it via the GlobalTracer static.

This ITracer has an IScopeManager, with an IScope being basically what is active on the current thread and an ISpan being a unit of work which can move between threads.

Typically usage looks a little something like this:

class Runner {
  private ITracer tracer;

  public Runner(ITracer tracer) {
    this.tracer = tracer;
  }

  public void Run(string command) {
    using (var scope = tracer.BuildSpan("Run").StartActive()) {
      // ...
    }
  }
}

Scopes are capable of having sub-scopes, tags and events within them to provide further detail. The moment StartActive is called the clock starts, and the moment the using goes out of scope the clock stops, providing the level of detail you need.
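
For example, adding a tag and an event to the active span might look like this (the tag name and event text are my own illustrations):

using (var scope = tracer.BuildSpan("Run").StartActive()) {
  // attach searchable metadata to the span
  scope.Span.SetTag("command", command);
  // record a timestamped event within the span
  scope.Span.Log("validation complete");
  // ...
}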

How OpenTelemetry changes things

Now OpenTelemetry changes things a little, breaking Tracing, Metrics and Logging into separate concerns.

While OpenTelemetry uses much of the same terminology as OpenTracing, when it comes to the .NET client they decided to take a different approach: rather than implement Span again they use .NET's built-in ActivitySource as the replacement, so there's less to learn.

So we create an ActivitySource in the class and then use it much as we would have used ITracer.

class Runner {
  static readonly AssemblyName assemblyName = typeof(Runner).Assembly.GetName();
  static readonly ActivitySource activitySource = new ActivitySource(assemblyName.Name, assemblyName.Version.ToString());

  public void Run(string command) {
    using (var activity = activitySource.StartActivity("Run")) {
      // ...
    }
  }
}

This has the advantage of not needing to pass around ITracer, and ActivitySource is optimized to immediately return null if no tracing is enabled (which the using statement handles just fine).
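
Because the returned activity can be null when nothing is listening, use the null-conditional operator when adding detail (the tag here is illustrative):

using (var activity = activitySource.StartActivity("Run")) {
  // SetTag is skipped entirely when sampling is off and activity is null
  activity?.SetTag("command", command);
  // ...
}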

If, however, you are using multiple systems and want to stick to the OpenTelemetry terminology of Tracer and Span instead of ActivitySource and Activity, then check out the OpenTelemetry.API shim (not to be confused with the OpenTracing shim covered below).

There is a full comparison of how OpenTelemetry maps to the .NET Activity API available too.

WARNING using and C# 8+

Some code analysis/refactoring tools may recommend changing the using clause from the traditional using (var x...) to the brace-less using var x... of C# 8.

Do not do this with BuildSpan or StartActivity unless you have only one of them and you're happy for the timing to run until the method exits.

This is because removing the braces removes the defined end of the span or activity, so it continues beyond its originally intended scope and carries on timing until the end of the method.
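
A quick illustration of the difference, with hypothetical span names:

public void Run(string command) {
  // brace-less form: disposed at the END of the method, so this
  // "Everything" activity wrongly includes all the work below it
  using var everything = activitySource.StartActivity("Everything");

  // braced form: the "Process" activity ends exactly at the closing brace
  using (var process = activitySource.StartActivity("Process")) {
    // ...
  }
}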

How to migrate

It is possible to migrate in steps by switching your application over from OpenTracing to OpenTelemetry while still temporarily supporting ITracer until you have fully migrated, although the code to set this up correctly is easy to get wrong (and if you do you will see duplicate spans in your viewer).

In ASP.NET Core you want to register your AddOpenTelemetryTracing with all your necessary instrumentation. For example:

services.AddOpenTelemetryTracing(builder => {
    builder
        .AddSqlClientInstrumentation(o => {
            o.SetDbStatementForText = true;
            o.RecordException = true;
        })
        .AddJaegerExporter(o => {
            o.AgentHost = openTracingConfiguration?.Host ?? "localhost";
            o.AgentPort = openTracingConfiguration?.Port ?? 6831;
        });
});

Backwards compatible support for ITracer & GlobalTracer

If you still need to support ITracer/GlobalTracer in the meantime you can use OpenTelemetry.Shims.OpenTracing to forward old requests on. Simply add this block of code after the AddOpenTelemetryTracing registration covered above:

    services.AddSingleton<ITracer>(serviceProvider => {
        var tracerProvider = serviceProvider.GetRequiredService<TracerProvider>();
        var tracer = new TracerShim(tracerProvider.GetTracer(applicationName), Propagators.DefaultTextMapPropagator);
        return tracer;
    });
Note that other combinations we tried for getting this working resulted in duplicate spans; it is quite possible that duplicate trace providers cause this.


Jaeger

With OpenTracing, Jaeger required the use of its own C# Client for Jaeger; however that library is now deprecated in favor of OpenTelemetry.Exporter.Jaeger.

There are some limitations with this - specifically, right now the Jaeger support is for tracing only, and it uses the UDP collector only, which means no authentication and limited message sizes.

Jaeger looked at directly supporting the OpenTelemetry collector system but that experiment has been discontinued and is now planned for Jaeger 2.

In the meantime, if these limitations are a problem, it's possible the Zipkin exporter might be a better choice given that Jaeger also supports that format.
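
Switching exporters is a small change; a sketch using OpenTelemetry.Exporter.Zipkin, assuming Jaeger's Zipkin-compatible HTTP collector is enabled on its default port:

builder.AddZipkinExporter(o => {
    // Jaeger can ingest Zipkin-format spans over HTTP on port 9411
    o.Endpoint = new Uri("http://localhost:9411/api/v2/spans");
});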

Logs on a Span

OpenTracing helpfully correlated logs from ILogger to the appropriate span so they could appear alongside what they related to in, for example, Jaeger.

OpenTelemetry wants to keep logs separate and instead relies on .NET's ActivityEvent for this purpose, but if you already have lots of ILogger usage then the OpenTelemetry.Preview NuGet package has your back and will re-dispatch ILogger entries as ActivityEvents, ensuring they end up in the right place just as they did with OpenTracing.

services.AddLogging(o => {
  o.AddOpenTelemetry(t => {
    // AttachLogsToActivityEvent is provided by the OpenTelemetry.Preview package
    t.AttachLogsToActivityEvent();
  });
});

Dealing with pre-release NuGet packages

At the time of writing a number of the OpenTelemetry libraries are pre-release. If you are using .NET Analyzers they will prevent a successful build because of this. If you are absolutely sure you want to proceed then add NU5104 to the <NoWarn> section of your .csproj, e.g.
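
<PropertyGroup>
  <NoWarn>$(NoWarn);NU5104</NoWarn>
</PropertyGroup>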


Additional tracing

OpenTracing has a popular Contrib package that provides support for a variety of sources. Much of this is replaced by additional OpenTelemetry .NET packages - specifically the following, which need to be added to your project along with a call to the corresponding extension method:

| Technology            | OpenTelemetry package                         | Extension method                     |
|-----------------------|-----------------------------------------------|--------------------------------------|
| ASP.NET Core          | Instrumentation.AspNetCore                    | AddAspNetCoreInstrumentation         |
| Entity Framework Core | Contrib.Instrumentation.EntityFrameworkCore   | AddEntityFrameworkCoreInstrumentation|
| HttpHandler           | Instrumentation.Http                          | AddHttpClientInstrumentation         |
| Microsoft.SqlClient   | Instrumentation.SqlClient                     | AddSqlClientInstrumentation          |
| System.SqlClient      | Instrumentation.SqlClient                     | AddSqlClientInstrumentation          |

There are contrib packages for many additional services that are not natively supported too, including Elasticsearch, AWS, MassTransit, MySqlData, Wcf, Grpc, StackExchangeRedis etc.

Hope that helps!