Ben Frain: Ruminations and occasional fugacious panpharmacons of Ben Frain

Review: Autonomous ErgoChair 2 office chair (Wed, 15 Jul 2020)

TL;DR Summary

The Autonomous ErgoChair 2 is a great value proposition. It’s described as an ‘ergonomic office chair’ and I can’t speak to that academically, but thanks to almost limitless adjustment permutations, I can say with confidence that you’re virtually guaranteed to find the ErgoChair 2 one of the most comfortable office chairs you have used, if not the most.

It lacks a premium price tag, so you’re not going to find premium materials everywhere. Compared to more expensive offerings, you’re missing out on metal legs, for example, and perhaps more recyclable materials for the packaging.

But in terms of bang for your buck, this will be money well spent. For the last two weeks I’ve been using the ErgoChair 2 instead of a HAG Futu 1200, a chair that retails in excess of £650; easily more than twice the price of the ErgoChair 2. However, I haven’t just been opting for the ErgoChair 2 to try it out; I’ve chosen to sit in it because it’s far and away the more comfortable chair.

I should say at this point, if you’re reading this review and know nothing of me: I am a web developer and author. Consequently, I’m at a computer all day. My hope was that the ErgoChair 2 would provide something better for me than a run-of-the-mill office chair. It absolutely provides that.

So, if you are in a rush, this is ultimately what you need to know:

Yes, the ErgoChair 2 from Autonomous is worth the £269/$349. It’s very comfy, thanks to being almost infinitely adjustable, and is stylish enough that it could either smarten up your home office or look entirely at home in a modern office space.

That’s the short version. However, if you want more information and background, read on. Also, Autonomous have provided me with a 10% discount code that will be valid for a month from this review. Details at the end!

Why the ErgoChair 2?

In late summer 2019 I started writing a book in the evenings. The office I work in has a standard sitting desk, and I didn’t want to be sitting all day in the day job, then coming home and sitting all evening while writing too. So I bought a standing desk and all was good.

Skip forward a few months and the world entered the Covid-19 lockdown. Like the rest of us lucky enough to carry on working, I was spending all my time in my home office. This meant shifting between standing and getting by with a sub £50 chair from Amazon.

The cheap chair was horrid, but fine for the odd 10-20 minute session it had been used for previously.

During lockdown, I tended to do a couple of hours standing, then an hour or so sitting and alternate between the two.

The sitting part was becoming burdensome. I’d retrieved a decent chair from the office, a HAG Futu 1200, but despite its premium credentials, I’d never really found it that comfortable.

I looked at 2nd hand Herman Miller chairs and the like but they were still very expensive. I wondered what less expensive alternatives were available.

That search led to the ErgoChair 2 by Autonomous.

Who is this suitable for?

I’d say, if you have a home office, or small office, and need an extremely comfortable, aesthetically pleasing office chair, the ErgoChair 2 should absolutely be on your shortlist.

Manufacturer support is great: there are heaps of videos on the Autonomous site and FAQs covering everything I needed an answer to; they offer free shipping and lots of nice colourways to choose from. In addition, Autonomous look like they are in this for the long haul. If you are in the US you can even get an entire office! Check out the Zen Work Pod!

Delivery & unpacking

This thing comes in a big box. It weighs in at 30 kg/67 lb, so it’s not the kind of parcel you can tuck under your arm and trot upstairs with. You’ll likely need someone to carry it with you, although I confess I resorted to ‘cart-wheeling’ it end over end up the stairs with no issues. Think of it as a mini #WFH workout!

The ErgoChair 2 delivered in a large cardboard box

Don’t expect fancy packaging; it’s merely functional: double cardboard-boxed and packaged suitably to keep it intact during transit. However, quite a bit of bubble wrap and plastic is used. I’d like to see Autonomous use simpler, more recyclable materials in future, if at all possible.

Everything you need to assemble the chair is included along with simple clear instructions. If you have ever assembled a basic piece of Ikea furniture, this is no harder. Probably easier!

From opening the box to assembling and sitting on the chair took under 30 minutes. If you want further guidance there is an assembly video you can follow on the Autonomous website.

A pile of parts ready to be put together ErgoChair 2 assembled and ready to use

What I really like

The single biggest thing you lack in a ‘cheap’ office chair is adjustment. Stumping up a little more to get the ErgoChair 2 means you get every possible adjustment you can imagine. As such I have found this chair to be the comfiest office chair I have used.

The main thing for me, being relatively short, is the seat tilt. For you it might be the lumbar support, or the adjustable armrests. Like I said, this thing has LOTS of adjustment possible. To get a good idea of the kind of adjustment possible, I’d recommend taking a look at the features guide on the Autonomous site.

It has a smoothness to the mechanisms. All the leaning and spinning is just lovely and smooth. Whether this will be the case in 6 months or 6 years remains to be seen but in day to day use this feels solid.

I really like the mesh back. On a hot day, you can stick a fan behind you and get a lovely breeze on your back. Even on normal days it just makes sitting for any length of time more comfortable.

What I don’t like

The arm rests can move around horizontally a bit as you rest on them. It’s great to have the option to tweak the horizontal position but as there is no way to lock them, it only really makes sense to enjoy the widest setting as that’s the only position you can guarantee they will stay in. It’s not a deal breaker, and the vertical positioning is rock solid. But, something to be aware of.

There are occasional aesthetic blemishes. For example, there is a little chrome divider on each arm, and on this chair both have a little dink. Also, one of the armrest pads has a dent on the side, presumably where it was pushed against another part in transit or storage. These are minor issues, and I know I won’t even think about them in a week or so, but I highlight them here purely to set your expectations. If you are hoping for absolute manufacturing perfection, you’ll need to spend a significant amount more to attain it.

Arm of the ErgoChair 2 slightly dented

I also mentioned the packaging before. I think Autonomous could do a better job here. Less plastic, foam and bubble wrap would be great in future.

Quick Q&A

There is a complete Q&A on the Autonomous website, but here are the first things I was asked about it:

Q. Can the armrests come off?

A. Yes.

Q. Can you lock the armrest position?

A. Vertically, yes, but the tops of the armrests, while very adjustable, don’t lock when set (honestly, not a big deal).

Q. What’s it made from?

A. Mostly plastic. Seat back is nylon so nicely breathable.

Q. Is it ready assembled?

A. No, you need to build it. Takes about 30 minutes, and not difficult. The only tool you will need to assemble it is included.

Q. How long is the warranty?

A. 2 year warranty for manufacturing defects.

Q. What if I’m not happy?

A. Like a lot of modern mattress companies, you can try it for 30 days and they will collect it from you if you are not happy with it.

Q. What weight will it support?

A. 25 stones/350 lbs/158Kg.


The Autonomous ErgoChair 2 is a well priced, fully adjustable office chair. At the time of writing I’ve been using it for roughly two weeks. I’ve found it extremely comfortable and have zero complaints where comfort is concerned. Indeed, it’s the most comfortable office chair I have used.

Angled shot of the ErgoChair 2 showing the mesh back

If you need a modestly priced, attractive, and crucially, comfortable office chair, I would encourage you to check it out.

Off the back of this review, Autonomous have provided a 10% discount code. It’s only valid until August 21st 2020 so if you’re in a position to buy — get on it!

Enter code ‘BENFRAIN710’ for 10% discount on all Autonomous products.

Apple, I fear, is lost (Fri, 10 Jul 2020)

I’ve loved Apple products for 20 years. I’ve written for Mac magazines including Macworld and MacUser (UK folk — MacUser was great, wasn’t it?) over the years. I’ve watched Apple go from plucky underdog to champ; like a technological ‘Rocky’ story. However, just like the Rocky films, it seems Apple is hitting the shaky sequels.

The 2020 WWDC keynote wasn’t bad because it was online. It was bad because it highlighted a lack of direction for their software products, and a lack of polish in how they are implemented and presented. Something as simple as the presenters reading from autocue machines just off the side of the camera was, sadly, laughably bad, and arguably indicative of where things are headed in Cupertino.

As the keynote went on, I felt a pity for Apple I’ve never felt before.

So what is iOS 14 giving us?

Apple used to be in the business of selling you simplicity. You felt, for the most part, like there was a unified design/approach. Typically, this manifested in less stuff, and software solutions that required less cognitive load. Across an Apple OS and ecosystem, there was often a prescribed way of accomplishing things. Less was usually more.

iOS 14’s principal ‘improvement’, UI wise, is basically a load of ‘junk drawers’ for iOS Apps, and a message to ‘go tidy your own room’.

A half-baked solution to a problem, entirely made by Apple.

Thanks to geography and cheap labour, the western world has arrived at the ridiculous situation where we have to fight hard to keep our homes clutter-free.

Thanks to the financial success of the App Store, Apple are now in the business of making you have more digital stuff; the App Store equivalent of cheap plastic junk — junk they have no solution for how to deal with.

Apple’s proposed solution is the iOS 14 junk drawers, AKA ‘App Library’.

Progress? Innovative thinking? This doesn’t seem like we’re heading in a positive direction. Don’t even get me started on Widgets!

Apple have created a monster. It’s called the App Store

The App Store set a bar for how applications could be dealt with. It enabled beautifully simple one-click installs and seamless, idiot-proof updates. A system both powerful and accessible to all. At first, the App Store surfaced delights, and the ‘fart’ apps were aberrations.

Now it is a refuse site, where worthwhile applications are rarities in the abundant and far more common detritus.

‘Free’ applications, particularly those targeted at children, almost always run ads, or make use of digital sleight of hand, to part children and parents from their money. They seem the very antithesis of the secure, high-quality applications we’re told the App Store provides and makes possible.

The only ‘quality control’ that’s abundantly happening at Apple is ensuring apps pay Apple their 30% cut. Just today, John Gruber talked about this better than I could.

Even if someone at Apple was incensed enough with the state of the App Store to do something about it, how will they ever get that past the shareholders?

The balance has shifted. The way the App Store runs currently is better for Apple than it is for Apple’s users.

Safari and the endless web platform shortcomings

My biggest personal gripe is with Safari.

Safari is so far behind the curve in terms of modern web features it’s not even worth getting into in any detail again. If you’re a web developer you know, you’ve experienced it first-hand and you’ve read plenty of pieces and commentary on the fact.

Bottom line: battery/memory performance, security — great. Features, visual performance, implementation of modern web platform features? Dog shit.

Unless something particularly enhances what Apple want to do in the App Store, it seems standard practice to kick it into the long grass.

Here are a few of the choice omissions from my own perspective

That’s not even mentioning things around progressive web applications. Go speak to Maximiliano (@firt on Twitter), he’ll give you another list at least twice as long!

When it comes to implementing things in WebKit, I’ve heard the line that ‘there isn’t enough resource’. But you can’t be the world’s largest company and use that line. I don’t believe it, because it’s not true.

Apple don’t want Safari to be as good as it could be. They can’t make Safari as good as it could be because it would cannibalise the App Store. It’s a simple, irrefutable, inconvenient truth.

Using an iPhone? Look at the Apps on your home screen right now. How many of those are or could easily be a web app? If ‘Add to Homescreen’ said ‘Install as App’ I imagine most of mine could be done that way.

Apple’s own App Store policy (section 4.2) states it doesn’t want repackaged websites.

Your app should include features, content, and UI that elevate it beyond a repackaged website. If your app is not particularly useful, unique, or “app-like,” it doesn’t belong on the App Store

Apple, if you made Safari more capable, many applications wouldn’t need to be made as native iOS apps!

They have obviously sensed some growing disdain in the web developer community, and perhaps things are changing and they want to tell us about it. But rather than do some research and get themselves a few great devs from the developer community to become Developer Advocates, they have just nicked one of Mozilla’s existing Developer Advocates. It reeks of laziness and arrogance. I mean, get someone, anyone, interested in progressing the web at Safari, but get your own! A fresh voice, some new perspectives, someone from within???!!!!


Because I’ve admired Apple for so long, I feel so saddened by their current state.

Historically, they were the mavericks, and rarely took ‘the easy path’. Now they seem so weighed down by their financial success they can’t turn back on something like the App Store, even if they want to.

Ultimately, it seems Apple needs to re-find itself. Perhaps it’s like some popstar the world at large has clamoured after relentlessly for so many years, only to find they’ve had to go to rehab because they’re emotionally and artistically spent.

I hope we get to see something of the Apple I admired in the past in the Apple of the future.

June 2020 Update (Tue, 30 Jun 2020)

This is a copy of a newsletter I try to get out each month. It goes direct to newsletter subscribers. You can enjoy it straight to your mailbox by signing up here.

I’ve started making YANA

I mentioned in the last update I was thinking about making a notes app. In the last couple of weeks I made a start.

The app is called YANA (you can guess what that stands for, right?) and it will be a web-based application. Eventually, hopefully, something subscription-based for others to use too.

I take notes all the time and whilst Notes on macOS/iOS has served me well, it has always been a little clunky, and obviously hopeless if you ever want your notes on another device (Android, Windows, Linux).

I wanted something that took the best parts of Notes (and not the ruddy auto-correct!) and simplified things further still, allowing incredibly rapid searching and creation of notes you can access anywhere.
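To make the ‘rapid searching’ idea concrete, here’s a rough sketch of the kind of instant, as-you-type note filtering I have in mind. None of this is YANA’s actual code; the note shape and the filterNotes function are purely illustrative:

```javascript
// Hypothetical note shape and filter — just the idea, not YANA itself.
// Match a query against title and body, case-insensitively, so results
// can update on every keystroke.
function filterNotes(notes, query) {
    const q = query.trim().toLowerCase();
    if (!q) return notes; // empty query: show everything
    return notes.filter(
        (note) =>
            note.title.toLowerCase().includes(q) ||
            note.body.toLowerCase().includes(q)
    );
}

const notes = [
    { title: "Standing desk", body: "Alternate sitting and standing" },
    { title: "Gulp notes", body: "gulp-nodemon has its own watch" },
];

console.log(filterNotes(notes, "gulp").length); // 1
```

Obviously the real thing would search a MongoDB collection server-side, but the feel I’m after is exactly this: type, see results, no waiting.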

I have lots of features planned but aim to get the MVP up and running first.

Worst case, it’ll be something only I enjoy using. Best case, something many do!

YANA is going to be my new ‘30 mins a night’ project so I’ll be updating you on progress with each newsletter, even if just the odd screen-shot or feature description.

So far I’ve only got as far as getting Node, Express and MongoDB set up, and a basic Gulp build file made for rudimentary developer ergonomics as I hack away.

Svelte or lit-html?

I’m a big fan of lit-html having used it extensively for a year or so now, day-to-day, for building prototypes.

I’m currently debating whether to use that client-side for YANA (see above) or give Svelte a whirl.

Svelte appeals to me in a way that React never has. Besides the fact React comes from Facebook, I just never felt the pull to React that others have. I find people’s willingness, indeed eagerness, to pull in a framework with its associated page weight to do the most basic of jobs baffling.

Svelte on the other hand, with its ability to compile away, seems more aligned with my sensibilities. Anyway, maybe by the next update I’ll have made a decision.

Design IS development

I nearly made the mistake of forgetting what I learned last time I made myself something app-like and skipped designing to begin with.

Thankfully I caught myself and forced myself to get some designs made in the first instance to head off as many dead-ends as possible before getting to code.

Moving from Sketch to Figma

Last time I designed something ‘app like’, I did it in Sketch, but I’m using Linux currently so that’s not possible. Thankfully, I had Figma to turn to. And if you ever wanted a demonstration of just how good a browser based application can be, Figma isn’t a bad place to start!

It took an evening or so to feel competent in Figma, and now, I already think I prefer it to Sketch. Obviously it helps that, for now at least, it’s free for lone-gunmen!

Moved to Fedora 32

Earlier this month I wrote about using Linux Ubuntu as my main system. Well, turns out that was pretty short-lived as I’ve now moved onto Fedora.

Again, comments on my blog have proven to be worth writing the piece for. Someone (thanks Gour) suggested Fedora and also Fish shell, both of which I am now using.

I do miss using Tilix a little (a terminal that shoots in from the top of the screen on a keypress), and Thunderbird is still in low-res mode in Fedora 32. However, these are minor grievances. Thunderbird will be HiDPI soon, and thanks to the fantastic window-snapping shortcuts of Linux, the standard Terminal is serving me fine.

Ubuntu and Fedora are certainly very similar but I just feel at home on Fedora. Just something about the design feels better to me. Plus the 4K display settings just work, without oddly resetting every so often, as they did on Ubuntu.


After the piece on Linux, a couple of people reached out on Twitter and the post comments to tell me about the ‘Compose’ key in Linux. This is something that feels conceptually similar to a leader key in Vim. You set a ‘compose’ key (mine is set to the right ‘Alt’ key) and then type a series of keys to get the symbol you want. For example, an em dash would be ‘compose key’ and then ---. The UK pound symbol is ‘compose key’ followed by L-. One more, an accented letter ‘a’ would be ‘compose key’ followed by a and then a backtick.

One thing that was a pain to set up on Fedora 32, though, was Docker. Wait, what? Did he say Docker? Yes, nerds. I’ve been getting schooled on Docker by Craig Buckler, who let me take an advance look at his ‘Docker for Web Designers’ course.

Docker Book from Craig Buckler

I knew the sum total of nothing about Docker a month ago. Well, I had a vague idea about what it was conceptually, but nothing more. I knew the nerds downstairs at work got excited about it from time to time but that was it. I certainly didn’t appreciate what it could do for the average web developer.

Now I tend to only have a few things on the go personally, but if you freelance and regularly have to work on multiple projects on different stacks, I’d say you’re missing a trick if you haven’t looked at Docker yet.

Anyway, I don’t want to steal the book/course’s thunder. Go take a look for yourself. It’s going to be on offer at $50 for the book and videos to begin with.

My Sony XM3 headphones broke

I loved my Sony XM3s. But then they broke — and not through any fault of my own. I always treated them with great care. However, apparently, this is a well-known and common grievance, as the Sony user forums testify.

I was pretty annoyed as they were very expensive for a set of headphones. I got onto Amazon customer service and despite me having owned them for over a year, they offered to take them back and provide a refund. I really felt I dodged a bullet there but also felt bad for all the other owners who weren’t lucky enough to be offered a full refund.

Long story short: I can’t recommend the Sony XM3s anymore. For all their plus points, something of that nature should last well beyond a year, and Sony should stand by their product and provide free repairs to those affected; it’s clearly a manufacturing/design issue.

As a replacement, for now I’ve picked a pair from Amazon made by Boltune for £50 which get me about 80% of the Sony performance for nearly 15% of the cost!

WFH, new KVM and incoming office chair from Autonomous

After the last one of these newsletters I delivered on the promise of a uses page. I’ve already had to update it quite a bit as I make amendments to my working setup and software.

One thing I kept having problems with was a decent way to switch my monitor between my work MacBook Pro and my own Intel NUC. I thought I had it cracked with a Sabrent USB-C based switch, but I found out it wasn’t outputting audio via HDMI so I couldn’t get any sound out of the NUC.

I ended up returning that and picking up an unbranded ‘AV Access’ HDMI KVM switch that not only works perfectly, it also comes with a little IR remote control, which means I can attach the switch and cables to the underside of the standing desk and switch inputs with the click of a remote. It doesn’t need anything like line-of-sight to work so I’m pretty happy about that — plus it cost less than the Sabrent I had returned!

Hopefully the final piece of the working from home setup that is on the way is an ErgoChair 2 from Autonomous. I’d borrowed a chair from work in the interim but the ErgoChair 2 promises something approaching Herman Miller Aeron levels of comfort and adjustment for a fraction of the price. Full review on that coming once I’ve got it and spent a couple of weeks with it.

Books and learning

I’ve been light on reading since our commutes disappeared (my Audible subscription is on pause). However, one thing I have enjoyed, learning-wise, is Daniel Shiffman, AKA Coding Train, on YouTube. I watched the section on working with data APIs and this guy’s enthusiasm is fantastic. Highly recommended.

Other than that I’ve been trying to wrap my head around all things ‘full-stack’ as I attempt to get to grips with making an API, populating a MongoDB and considering all the possibilities that Node offers.


That’s all for this month. Thanks for reading. Hope to see you again next time.

A Gulp build script for Express, PostCSS and JavaScript with BrowserSync (Wed, 17 Jun 2020)

I’m building a basic Express-based Node app. I wanted a simple build system to rebuild/reload everything with BrowserSync when I made a change to my source files:

  • change source CSS? It runs postcss, processes it and spits it into the ./public folder
  • change front-end JS? It concatenates it and spits it into the ./public folder
  • change a handlebars template in my ./Views? It should restart the Node/Express app and reload the browser

No good examples

Scouring the net for examples of how to do this, I was stumped in two ways: first, I’d never done this before so wasn’t sure what I needed; secondly, there didn’t seem to be any straightforward examples of how to achieve this with Express in the mix.

Perhaps scripts would have been the more straightforward option but I opted for Gulp.

I spent a few hours trying to get a basic build system running last night and in desperation turned to Twitter. Thankfully I had my direction validated by Mr Buckler so I just had to figure out how to get it all working.

The solution I ended up with (using gulp-nodemon)

The “js” and “css” tasks were very straightforward, although I had to define them as a task rather than write them as a function for them to play happily with Nodemon.

The key thing to figure out was the gulp-nodemon plugin. Second key thing was getting browserSync to proxy to the right place.

So, my Express app was configured to use port 3000:

const port = 3000;
app.listen(port, () => {
    console.log(`listening on ${port}`);
});

So I set that as the browserSync proxy and actually viewed my work at http://localhost:7000/. Here’s the full function:

function serve(done) {
    server.init(null, {
        proxy: "localhost:3000",
        open: false,
        files: ["public/**/*.*"],
        port: 7000,
    });
    done();
}

Configuring gulp-nodemon

Then, the main work was in figuring out how to get nodemon configured. Fairly straightforward is setting your script key. This is the entry point for your app. For me, it was the inventively titled ‘app.js’!

Turns out that gulp-nodemon has its own watch functionality built in. Set the extensions you are interested in with the ext key and it will do its thing when a file changes.

Just be sure to exclude your output files/folders or it will do things twice; once when you change the source file and again when it gets rewritten in the destination. You do that by adding stuff into the array for the ignore key.

Full gulpfile.js

Anyway, future self and copy and paste fiends, here is the full Gulpfile I ended up with:

var gulp = require("gulp");
var nodemon = require("gulp-nodemon");
var concat = require("gulp-concat");
var postcss = require("gulp-postcss");
var cssnano = require("cssnano");
var notify = require("gulp-notify");
var postcssMixins = require("postcss-mixins");
var simplevars = require("postcss-simple-vars")();
var autoprefixer = require("autoprefixer");
var browserSync = require("browser-sync");
var postcsseasyimport = require("postcss-import");
var postcssColorFunction = require("postcss-color-function");
var assets = require("postcss-assets");
var nested = require("postcss-nested");

const server = browserSync.create();

// Process the source CSS and spit the result into ./public
gulp.task("css", function () {
    var processors = [
        postcsseasyimport,
        postcssMixins,
        simplevars,
        nested,
        assets({ relative: true }),
        postcssColorFunction,
        autoprefixer,
        cssnano,
    ];
    return gulp
        .src("preCSS/styles.css", { sourcemaps: true }) // adjust to your own source CSS entry point
        .pipe(postcss(processors))
        .pipe(gulp.dest("./public"))
        .pipe(notify("Boom! Up in yo CSS face!!"));
});

// Concatenate the front-end JS and spit it into ./public
gulp.task("js", function () {
    return gulp
        .src("preJS/*.js", { sourcemaps: true })
        .pipe(concat("scripts.js")) // output filename is up to you
        .pipe(gulp.dest("./public"))
        .pipe(notify("Boom! Up in yo JS face!!"));
});

// BrowserSync proxies the Express app on :3000; I view my work at :7000
function serve(done) {
    server.init(null, {
        proxy: "localhost:3000",
        open: false,
        files: ["public/**/*.*"],
        port: 7000,
    });
    done();
}

// gulp-nodemon restarts Express and re-runs the build tasks on change.
// Note the ignore array excludes the output folder, or everything runs twice.
function nodeDev(done) {
    var stream = nodemon({
        script: "app.js",
        ignore: ["gulpfile.js", "node_modules/**", "public/**"],
        tasks: ["css", "js"],
        ext: "js css html",
        env: { NODE_ENV: "development" },
        done: done,
    });
    stream
        .on("restart", function () {
            server.reload();
        })
        .on("crash", function () {
            console.error("Application has crashed!\n");
            stream.emit("restart", 10); // restart the server in 10 seconds
        });
}

const dev = gulp.series(serve, nodeDev);
exports.default = dev;


I hope that saves someone a few hours in future or at least puts them on a decent path.

Probably not the sleekest solution but I can now get on with trying to build something instead of configure my build tools!

If anyone has something more effective, I’m all ears!

Linux Ubuntu as my main computer, one month in (Wed, 03 Jun 2020)

For the last month, I’ve moved my iMac out of the office and used Linux Ubuntu as my main day-to-day system.

I’ve only ever interacted with Linux on servers, running odd CLI commands I’d found off the net to accomplish some esoteric task. I’d never used it as my desktop system.

What follows are my notes having used Ubuntu for the last month. I’m a web developer so there’s a heavy slant in that direction.


Got a Mac? Unless you do heaps on the CLI, Linux isn’t going to offer you a nicer experience. macOS is peerless when it comes to slickness.

However, for about 40% the cost of an equivalent Mac (a Mac Mini would be closest to what I have) you can have a very capable and stable development system. Let’s say 90% of the Mac experience. I’ve had ups and downs but at this point, one month in, it’s a decision I’m happy I made.


The general consensus is that Linux is very light in terms of hardware needs. I opted for an Intel NUC with i3 processor, added 32GB memory and a 240GB SSD. It’s more than enough!


Installing Linux was ludicrously fast and straightforward. I downloaded an Ubuntu 20.04 LTS image onto a USB stick, booted my fresh system from it, and a few clicks and no more than 10 minutes later I was looking at the Ubuntu desktop.

I didn’t need to mess around with anything. No drivers to install. Everything (WiFi, Bluetooth, Ethernet, peripherals) just worked. The only oddity was getting the 4K screen I had attached (a HP Z27) to run at the right resolution at 60 Hz. It was a combination of choosing the correct resolution (3840×2160) and refresh rate (60.00 Hz), enabling ‘Fractional Scaling’ and setting the scale to 150%.

Not long after getting all my essential applications installed, I had to swap out my hardware (turns out I had ordered the wrong NUC). Thankfully, I just moved my RAM and SSD (I’d never had an NVMe drive before — can’t believe how tiny they are!) into the new hardware and it all started up again with zero issues! Can’t imagine that happening with Windows.

Random ‘gotchas’ coming from macOS

  • Muscle memory from macOS means that initially, with a browser open, I keep pressing Super+L to get to the browser address bar, and that locks the screen!

  • To skip to the beginning/end of a line of text you have to use the ‘Home’ and ‘End’ keys; there is no equivalent with the arrow keys as there is on the Mac

  • Typing an em dash! Definitely a departure. On Mac you just do Shift+Alt+-. Not so in Linux land. There you hold down CTRL+Shift+U and then type the unicode symbol (in this case ‘2014’).

  • The general quality of Linux apps seems an order of magnitude lower than those on macOS. I’m not just talking visuals; the functionality and robustness seem poorer. Or perhaps I’m making poor choices.

Installing stuff WTF???

If you go to a download page for Linux you are often greeted by a bazillion different install options: Flatpak, Snap Store, apt-get, sausage-install etc. (yep, I did make one of those up).

Some places will point you to a downloadable file, which you would think you can double-click to install. No such luck. You have to give things all sorts of permissions and ultimately it’s a drag.

Ubuntu also has something called the ‘Snap Store’. As far as I can tell it’s their attempt at something like macOS’s App Store. I found it to be a shower of shit. Half the apps don’t work, and when you want to remove something that doesn’t work, that seldom works either. I quickly gave up on it.

For example, I downloaded the ironically named ‘Remarkable’. It didn’t work, so I opened Snap Store and clicked ‘Remove’, it asks for confirmation and then… nothing. You wait, the app stays in the Snap Store list of installed applications. Has it gone? Is it still there? Who knows? Even after running sudo snap remove remarkable it still shows in the list of installed applications.

My current advice is this. Just use the command line and install everything with apt (apt stands for ‘Advanced Package Tool’).

And speaking of the command line…


Doing anything with the Terminal is incredibly fast. I never thought the CLI on macOS was slow, but compared to this it is glacial.

Running something as simple as sudo npm i on a project is blazingly fast in Linux. Installing applications via the command line seems odd at first too, but once you get used to it, it’s fantastic and incredibly quick.

The default Terminal is actually pretty decent. Getting Oh My ZSH and the like installed is also straightforward.

A colleague I hold in the highest esteem (Hi Pete) has tried to get me on the Tmux train but as my CLI needs are rudimentary, I’ve opted for Tilix, and using CTRL+Shift+T to bring it in Quake style, like a visor from the top of the screen.

So, yes, the Terminal and CLI on Linux are hands-down unbeatable.


Firefox seems to have the nicest looking and fastest rendering on Linux. It’s also the browser that comes installed by default. Chrome is a distant second place. With Chrome you can scroll quickly up and down a page and see the screen ‘tear’ as it re-draws. I’m unsure why Chrome is so poor in this regard. Maybe it doesn’t have GPU rendering enabled??

Firefox on the other hand is great in terms of rendering speed. I have more stuff saved (passwords) in Chrome so I am still spending most time there but if the next few updates don’t improve matters I’ll be swerving hard towards Firefox.

For trouble-shooting Safari bugs, the Epiphany browser is conveniently behind the curve. It’s a Linux browser based on WebKit (the same engine that Safari runs on). This means I was able to trouble-shoot a problem in Safari 11 simply because that was where Epiphany was at in WebKit terms; as I write this, desktop Safari is at version 13.

True macOS/Safari testing can apparently be sorted by using Sosumi but having tried to get it up and running I eventually removed it. Hopefully future versions will be more straightforward or my Linux skills will have improved!

Applications and utilities

uLauncher was my first Alfred replacement. Nope! I’ve since moved to Albert. Pretty much everything works as I expect in Albert: the calculator, plus a clipboard history feature (for that you need to install ‘CopyQ’ and enable the Python extension option in Albert).

Sublime Text oddities

At first, skipping up and down a page in Sublime didn’t seem anywhere near as nice and smooth as on a Mac. However, Sublime Text 4 has a new "hardware_acceleration" setting, and setting that to "opengl" seemed to solve the issue.
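In the user settings file (Preferences → Settings) that entry looks like this (key name as found in the Sublime Text 4 builds):

```json
{
    "hardware_acceleration": "opengl"
}
```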

WordPress dev environment

When doing any work on this site with macOS I would use MAMP to spin up what I needed for WordPress. The equivalent in Linux world is XAMPP. Installing XAMPP was an awful experience. On a 4K display the interface is barely legible! Getting WordPress running in that environment was equally odd.

Getting WordPress running locally

Aside from the resolution problem of the XAMPP interface not dealing well with 4K displays, there are more issues when you opt to install WordPress the ‘easy’ way with a Bitnami module. The visual installer is essentially useless on a 4K display.

Long story short. Run the Bitnami installer from the CLI instead!

  • Spin up XAMPP with sudo /opt/lampp/lampp start
  • Log into phpMyAdmin and set a password for the root DB user. Follow steps 1-6 here
  • Run the Bitnami setup. Assuming you have downloaded the Bitnami installer to your ‘Downloads’ folder, go there in the CLI and run the following, substituting the name of the installer file you downloaded:
chmod 755 <bitnami-installer-file>.run
./<bitnami-installer-file>.run --mode text

This will, mercifully, start a text-only version of the installer. Follow that through and eventually you should be able to hit the local WordPress address in your browser and see your default WordPress install. Phew!

Mail client

Thunderbird! I’d not used Thunderbird since about 2004! Visually, it hasn’t changed! However, that can be greatly improved. I added a ‘DeepDark’ add-on and it looked acceptable. Toggling off some of the default view options also lowered the immediate noise of a standard mail view. It remains a great mail client for those of us that would rather deal with mail locally.

I also tried ‘Geary’ as the press spoke favourably of it but I didn’t feel it offered anything above and beyond Thunderbird.


Everything I use day to day, which is a whole load of Node/NPM-related stuff (Parcel, Gulp, Express etc.), a text editor and various browsers, works beautifully.

After a little friction, Linux starts to feel as normal as anything else. The reality is, for most devs, we spend so little time actually dealing with the operating system that it makes little difference.

With a first class CLI such as the one Linux enjoys, the visual shortcomings of the OS soon fade to irrelevance. Not sure I would feel like that if I wasn’t using it for web dev. I’d miss things like Photos on the Mac or iMovie but my use of those applications is a rarity these days.

There was a point I didn’t think I would be keeping the Linux box but perseverance is paying off. Making things like the mail client palatable and using Noto font instead of the stock Ubuntu for the system font made a disproportionate difference in my happiness.

Hopefully, this Linux set-up will be for life, not just for lockdown.

Converting a basic v3 gulpfile.js to a v4 gulpfile
Tue, 26 May 2020 22:06:00 +0000

You have that old project with a Gulp build script and you need to make some updates. You’ve updated NPM and Node in the meantime so when you come to run Gulp it fails.

All the dependencies are out of date so you update them. You try Gulp. It blows up again.

It’s time to bite the bullet. You need to upgrade your Gulpfile.js so it works with v4.

This was the reality of the situation I found myself in recently. I couldn’t be bothered to do this, and if you are reading this, chances are, nor can you. But here we are. So, let’s just get our heads down and get through it.

TL;DR if you just want a basic Gulp v4 buildscript

Go take a look at the repository for the website. It includes a package.json, Gulpfile.js and .browserslistrc. With those in the root of your folder, it assumes a styles.css source file, a rwd.ts TypeScript file (obviously just change names to suit your needs), a fonts folder containing woff files, and an img folder for any images. When Gulp runs it builds the source files out to a _build folder.

If you want to know what needs to change from a basic v3 gulpfile to a basic v4 gulpfile, read on.


I have a basic ‘Gulpfile.js’ I created to handle developing the WordPress theme for this site. It post-processes the CSS, optimises the JavaScript, watches for changes and updates the browser on save.

About as simple as a Gulp build script gets.

The task before us

I’m not interested in doing anything fancy. I just want my v3 Gulpfile.js to work with v4 as quickly as possible.

Considerations for Gulp v4

The main thing I picked up from this process is that Gulp wants two changes in a gulpfile.js before it will work with v4:

  • it wants you to call out which groups of tasks can be run in parallel, and which groups should be executed in order, AKA serially.

  • it wants you to write normal JavaScript functions as opposed to gulp.task. And these functions all need to use a callback (or return a stream). More on that shortly.

We just want our existing Gulpfile to work in v4, remember? So, even though with v4 you can also do imports a different way, frankly I couldn’t be arsed to mess with that so I left them as is.

Comparison of the same gulpfile in v3 and v4 form

You may find it useful to consider the complete v3 gulpfile and the complete v4 gulpfile side by side.

We’ll now look a little closer at the differences by examining individual functions.

Example Gulp v3 task:

Here is the largest ‘gulp.task’ I had in my v3 file. It’s the task for taking my CSS, doing all manner of post-processing and then spitting it out somewhere.

gulp.task("css", function () {
    var processors = [
        // ...other PostCSS plugins elided here; one took { relative: true }
        autoprefixer({ browsers: ["defaults"] }),
    ];
    return gulp
        .src("./preCSS/styles.css")
        .pipe(sourcemaps.init())
        .pipe(postcss(processors))
        .pipe(sourcemaps.write("."))
        .pipe(gulp.dest("_build/"))
        .pipe(notify("Boom! Up in yo CSS face!!"));
});

Converted to v4 style:

Here is that task slightly amended to work for v4:

function css() {
    var processors = [
        // ...other PostCSS plugins elided here; one took { relative: true }
        autoprefixer(),
    ];
    return gulp
        .src("./preCSS/styles.css", { sourcemaps: true })
        .pipe(postcss(processors))
        .pipe(gulp.dest("_build/", { sourcemaps: true }))
        .pipe(notify("Boom! Up in yo CSS face!!"));
}

I’m using a .browserslistrc file for Autoprefixer in the v4 example but that wasn’t a v4 requirement. Note, though, that sourcemaps are now just part of an options argument on src() and dest(), instead of being a separate pipe() as they were in v3.

Gulp v4 wants a callback. Except when it doesn’t!

The golden rule for v4 Gulp task writing is that there are no synchronous tasks. In practice, for our simple situation where we aren’t making use of promises, event emitters or observables, this means we either need to return a stream from a gulp function or provide an ‘error-first’ callback.

In our css() function above, we returned a stream (e.g. return gulp...), but if a stream isn’t what we will be returning, then we need to pass in an ‘error-first’ callback. That sounds far more involved than it is in practice. Look at this v3 version of the browser-sync task:

gulp.task("browser-sync", function() {
    browserSync.init({ open: false, proxy: "localhost:8888" });
});

To convert it to work in v4 it looks like this (ignore the proxy address change, that was just because I was on a different system):

function serve(done) {
    browserSync.init({ open: false, proxy: "" }); // proxy address to suit your set-up
    done();
}

See the done callback that is passed in to the function and then executed at the end? Yes, that’s the ‘error-first’ callback. For this simplistic situation I’m not doing anything fancy with the callback — I just let Gulp blow up if I do something wrong.

For more ‘mission critical’ task writing you should probably do something more meaningful with the error handling. When an operation blows up, call the callback with the error object as its first argument and Gulp will report the task as failed.

Watch tasks

Here is how I defined the watch items in v3:

gulp.task("watch", function() {
    // Watch .css files"preCSS/**/*.css", ["css", browserSync.reload]);

    // Watch .js files["preJS/**/*.js"], ["js", browserSync.reload]);

    // Watch any files in root html, reload on change"**/*.php", browserSync.reload);
});

And then my default task looked like this (which fired off the watch task):

gulp.task("default", ["css", "browser-sync", "js", "watch"]);

Converted to v4 it looks like this:

const watchCSS = () =>"preCSS/**/*.css", gulp.series(css, reload));
const watchJS = () =>"preJS/**/*.js", gulp.series(js, reload));
const watchPHP = () =>"**/*.php", reload);

And then the default task looks like this:

const dev = gulp.series(
    gulp.parallel(css, js),
    gulp.parallel(watchCSS, watchJS, watchPHP)
);
exports.default = dev;

Notice how, rather than sticking the jobs in an array as in v3 (where they would be run one after the other), with v4 we pass them to gulp.parallel? That means they will be run, you guessed it, in parallel.

Let’s look at another. This was the job that processed my JavaScript in v3:

gulp.task("js", function () {
    return gulp
        .src("preJS/*.js") // ...processing pipes omitted here
        .pipe(notify("Boom! Up in yo JS face!!"));
});

Not much to change for v4:

function js(done) {
    gulp.src("preJS/*.js", { sourcemaps: true }) // ...processing pipes omitted here
        .pipe(notify("Boom! Up in yo JS face!!"));
    done();
}

Again, very little change. It’s just a case of ensuring that when you aren’t returning a stream, the function receives a callback as its first argument. This could have been a stream too, as it was before, but I decided to suffer for my art :).


With a few simple changes, it’s possible to get a v3 Gulpfile ported to v4 in short order. The biggest hurdle in a quick conversion is understanding that in v4, if you aren’t returning a stream, you need to be using an error-first callback.

And also, take the time to consider which of your tasks you can run in parallel and which need to run in serial; they will need encapsulating as such. No more just sticking tasks in an array and letting Gulp sort it out.

While there are plenty of other niceties these are the essentials to get up and running with Gulp v4.

May 2020 Update
Thu, 21 May 2020 07:29:23 +0000

Right, 2nd Newsletter. Thanks for reading!

Responsive Web Design with HTML5 and CSS, Third Edition

I’ve been writing the latest edition of ‘Responsive Web Design with HTML5 and CSS’ since August 2019. Finally, it’s finished and available to buy:

So much in this latest edition! You could use the paperback edition as a weapon! Anyway, there’s a dedicated site with all the lowdown:

Dropping the ‘3’ from ‘CSS3’

In this edition of the book, I have also dropped the ‘3’ from ‘CSS3’ in the title.

It’s a topic being debated in CSS circles currently; should we be promoting a new version of CSS, such as CSS4, or CSS2020? Or does it make more sense to ignore the version numbers altogether?

I’ve come down on the side of doing away with the number suffix. It’s not that I don’t see the value from a marketing perspective — it would certainly help me as a writer of CSS books. However, I don’t feel it’s a truthful representation of how CSS gets ‘made’. There is no current version, not like in the HTML sense.

Anyway, if this debate interests you, go and read the CSS working group Drafts GitHub issue.

I’m curious to know how you feel about it, dear reader. If you have an opinion, I’d love to read it in the comments.

Sublime Text 4

I obsess over my text editor(s) and flit between them with reckless abandon. However, the one I keep coming back to is Sublime Text. I get why everyone loves VS Code; it’s just so convenient. But it is also just perceptibly slow enough to annoy me.

Anyway, Sublime Text 4 is in development, and if you get on the Sublime Text Discord and have a paid-for version of Sublime Text 3 you can get your hands on the dev builds as they come off the production line.

At first glance you may wonder what the difference is, but there are heaps of wonderful enhancements. From using those builds I have discovered functionality in Sublime I didn’t realise we had access to:

Language Server Protocol & NeoVintageous === nirvana

I’ve been using Vim since around September last year in iTerm2. It’s been great.

One of the best things was that I could use CoC to hook up to the same language server that VS Code uses. All you need to know for the sake of this is that the LSP (Language Server Protocol) is what powers all the completion and IDE-like code suggestions you get in VS Code. Well, it turns out you can have that in Sublime too.

It took me a while to sort this out and I needed some help from the #LSP channel of the Sublime Text Discord.

Long story short. If you want TypeScript LSP running in Sublime (3 or 4):

  • If you have already installed a TypeScript package, remove it. Especially this one: TypeScript. Yes, I know it’s the ‘official’ one but trust me, Okay?
  • Install the LSP package.
  • Install the LSP-typescript package.
  • Install the TypeScript Syntax package (the one by ‘braver’).
  • Restart Sublime. Now open a TS file in your folder, and change the syntax to TypeScript Syntax

Enjoy LSP goodness in Sublime!

Mechanical keyboard piece for Smashing Magazine

A long piece I wrote for Smashing Magazine, A complete guide to Mechanical Keyboards is now live.
It’s a bit of a whopper but I wrote it with the intention that someone just looking into Mechanical Keyboards for the first time could wrap their head around all they needed to know.

I’ll copy it to my own blog shortly.

On the Smashing Podcast

Although I rambled terribly, yours truly was a guest on the Smashing Podcast, on which I talked about mechanical keyboards, personal well-being, losing my finger and other physical challenges. I always enjoy podcasts, both as a guest and a listener, and it was lovely to spend an hour in Drew’s company.

Building your own keyboard

I posted a picture of my first keyboard build on Twitter a few weeks back and promised I’d write up the process in case anyone else wanted to follow suit. That will be a blog post soon! I’ve been using my own build as my daily driver since it was finished and I’ve got the parts for a few more to build.

Maybe it’s something you would consider too. Like all things mechanical keyboards, it’s not an economical undertaking but I found it very satisfying. I’d never soldered before so doing that was fun too.

KBD67II Mechanical keyboard with Ferrous keycaps and Zilent switches

Ben Goes Linux!

I hugely enjoyed listening to and reading about Dave Rupert’s adventures in Windows, not because I fancy trying Windows, but because I’m always curious about how other people work. In that vein, I was reading my RSS feed and stumbled across Nolan Lawson talking about a font-rendering issue he was having in Linux.

I don’t know why but that made me start thinking about trying Linux for myself. I reached out to ask him how he found Linux and suggested he write a post about it; which he duly did:

In the meantime, I’d also opened a poll on Twitter to ask what OS everyone used for web development these days and was surprised when Linux beat out Windows.

Anyway, this and needing to move my personal iMac out of the room I am currently using for #WFH presented a golden opportunity to give myself a pass on purchasing an ahem inexpensive Linux box.

So, I ordered an Intel NUC (teeny weeny lickle box computer), stuck in my own memory and NVMe SSD drive and installed Ubuntu.

As you might imagine, it hasn’t been straightforward, but if, like me, you have never used Linux as your desktop driver before, perhaps you’ll be interested in a post on that experience in the coming weeks?

Uses page

Unlike others, literally no-one has ever asked me what I use. But that’s not going to stop me telling the world! 🙂

So, something I plan to do in the coming weeks is add a uses page.

Text editor, hardware, OS, apps, desk, printer, mouse, keyboard, KVM, etc.

Hopefully that will inspire more of you to do the same and we can ‘exchange notes’ as it were.

A cross-platform notes app

I still have this pull towards trying to make my own web-based notes app; something in a similar vein to Notational Velocity. I stumbled upon Tania Rascia’s website (which incidentally is a gold-mine of tutorials) and saw that she is working on Takenote, which has some similar goals and is far further along than anything I have. Despite this, I just can’t shake the itch to do something of my own.

I’ve never built anything remotely full stack so learning to do API and middleware stuff seems like it will be worth learning regardless.

Expect that sometime 2025 😀


Bit thin on recommendations this newsletter. With #WFH being a thing I’m no longer getting my Audible commute time. However, the one I have for you is a belter: ‘Code Name: Lise’. It’s the true story of Odette Sansom, the French wife of an Englishman, who ended up being World War II’s most decorated spy.

It’s a seriously good read. Really makes you appreciate how much easier we have things these days. Even given the Covid-19 situation we all find ourselves in.

Until next time

I hope you all stay well. Be sure to let me know what kind of stuff you would like to read more or less of.

Best, Ben

My fourth book: Responsive Web Design with HTML5 and CSS, Third Edition
Sun, 10 May 2020 09:27:02 +0000

TL;DR A complete overhaul and re-working of Packt Publishing’s best-selling responsive web design title.


  • A huge chapter on CSS Grid
  • CSS Scroll Snap
  • CSS mix-blend modes
  • Variable fonts
  • CSS font loading techniques
  • CSS clip-path and mask-image
  • CSS custom properties
  • prefers-color-scheme media queries (AKA dark mode)

Read all about it here:

Buy it now!

You can get it from all good book stores in e-book and hardcopy:


As I write this, there are only a couple of reviews from folks that kindly agreed to read and provide feedback. I’ll try and update this post with more snippets of reviews as they come in:

“Get up to speed on the modern, professional way to build websites with HTML & CSS.” Dustin Lange

“…it keeps you engaged in the reading process and you will have good laughs, because it is sprinkled with British humour.” Constantin Câmpean


I never got a website together specifically for the 2nd Edition, so I am pleased that the site for the 3rd Edition actually has some content! You can go and read all about it there.

The code is all up on GitHub too for the curious:


I’ve been working on this since August 2019 so it’s fantastic to see it finally ‘out there’. Like most of us, I’m lucky enough to have a ‘day job’ so it’s been a case of chipping away, 1–2 hours a night, to get this done. Guess I’ll have to find something else to do with my evenings now!


If you do read it, and want to provide feedback, I’d love to hear it, good or bad. It will help with any future editions. You can give feedback by:

  • Leaving a comment below
  • Emailing contact at
  • Opening a GitHub issue on the book repository
  • Sending me a Tweet if social media is more your thing.

Thanks to all readers; I hope you enjoy it and find it useful!

WASD CODE V3 Keyboard Review
Wed, 08 Apr 2020 21:44:23 +0000

I’ve been buying WASD boards since 2014. I’ve had the standard 87-Key and the CODE variants. I’ve had them with Cherry clear switches, brown MX switches and blue MX switches. The blues have been the ones I’ve tended to stick with.

What’s new with v3 of the CODE

In looking at a bunch of keyboards for a long piece on mechanical keyboards I wrote for Smashing Magazine (not published yet!) I have been looking at the WASD CODE V3. There are two big things in V3: you can choose Zealio switches rather than just Cherry MX, and the board can also be programmed. Both those points warrant further discussion but before we get to that, let’s get you up to speed.


WASD makes great mechanical keyboards, and crucially, if you are just getting into mechanical keyboards, they offer a ‘one-stop-shop’. You can choose your layout (full size or TKL for example), switches and even get every single keycap in the colour and font of your choice. They even do a white case option now. All in all, WASD is a pretty compelling option. The boards are made in Taiwan and ship from the USA.

The CODE version

The CODE version differs in that, whilst the housing is the same as their stock keyboard, it uses backlighting and keycaps that let that backlight through. Otherwise it is functionally identical to the ‘standard’ WASD board of equivalent layout.


The chassis of WASD keyboards is sturdy plastic (ABS). They have a confidence-inspiring heft about them; they aren’t something that shifts around your desk unless you make a special effort. The feet are substantial and very grippy. In all my years using one as my daily driver, I’ve never had it move around accidentally. The feet also flip out if you want a greater incline.

Big thumbs up for cable routing

Special mention has to go to the cable routing options on the WASD boards. They have channels in the base meaning you can orient your cable so that it leaves the board at either side, to the side of the back or straight out of the back centre. Oh, it’s USB-C on the CODE V3 too (you get a USB-C => USB-A cable along with a keycap puller in the box). It drives me nuts that more keyboard makers don’t offer more than one option for cable routing. If you like a nice clean desk, this routing capability is a welcome addition.


I said before that the case is ABS plastic. I’m a fan of aluminium cases on mechanical keyboards; I enjoy the colour options available, and the fact they are completely solid to the touch. Conversely, even a super tough board like the WASD can flex and creak slightly if you push in the top case at the sides. Obviously, you won’t be doing that day-to-day and the WASD suffers no more than any other ABS board. But despite its toughness, it is still only plastic.

The only thing I would be wary of regarding the case is choosing a white one. ABS has historically been prone to discolouration over years (admittedly many years) of use. I don’t know if that’s going to be the case here, but it might be worth asking WASD first if white is your preferred colour and this is a keyboard you are hoping to live with for years.

Update 30.4.2020 — Aluminium cases

WASD have released aluminium case upgrades! For $150-$160 you can add a black/silver/grey aluminium case to your WASD at order time and have your board built with aluminium chassis. They are available for full size and TKL v2 and v3 boards.

Perhaps even better news for existing owners though is that you can buy it separately and swap out your existing plastic case.

Caveats to note: there are no flip-out feet or those lovely cable channels in the aluminium chassis. I’m not sure how straightforward the swap-over process would be either, but as I’m tempted, perhaps this post will be updated in due course.

Understated aesthetic

Aesthetically, the CODE is a deliberately understated design. I’m a fan of that. What I’m not a fan of is lighting on keyboards, as I’m over 10 years of age, but at least here it’s done tastefully, if that’s your thing. Personally, I’d rather opt for a nice legible PBT keycap set but each to their own.


This is the first version of the CODE to offer programmability. WASD boards have always had DIP switches to change things like making the Caps Lock key a CTRL instead, or switching to a Dvorak or Colemak layout. However, with V3 you can create macros and move keys around to your heart’s desire.

I’ll be honest though, while it’s a robust setup here, I’d much rather have access to QMK for this kind of thing — it’s just a lot easier to wrap your head around. Ultimately, although I’ve played around with creating macros, I never find much use for them practically, long term.

Zealio switches

I’m a fan of tactile switches. For the uninitiated, those are the kind of switches that offer some resistance at the top of the press but don’t make a click sound. For the CODE V3 the biggest news for me was that you can now opt for Zealio 67g tactile switches instead of Cherry MX ones. And I’m happy to report, they are GREAT! Switch preference is a very individual thing but I think these switches are perfect in this board, giving wonderful tactile feedback and a lovely sound signature.


If you want a solid mechanical keyboard, this WASD represents an excellent choice. I wish it used QMK instead of its own method of programming but it certainly isn’t a deal breaker.

If I were choosing, I would also probably opt for the non-CODE variant as backlighting doesn’t appeal to me and I don’t like ABS keycaps — it drives me nuts when they go shiny. They do the standard V3 with a double-shot PBT keycap set which I think is the choice offering the greatest longevity.

There is plenty to recommend the WASD CODE V3 and standard V3 on the practicality front: grippy feet, substantial heft, clever cable routing and the option of Zealio key switches. If you don’t want to get involved in the minutiae of decisions to build your own but want a high-quality mechanical keyboard, put the WASD V3 near the top of your shopping list.

Creating an HTML file from markdown source using Vim and Pandoc
Wed, 08 Apr 2020 21:30:49 +0000

I have a number of documents I keep up to date. I write the text in markdown and I output to either HTML or PDF.

This post documents how you can run a command in Vim such as :Pandoc -o index.html --metadata date="01.04.2020" -s --template yourTemplate.html and get an HTML file generated with a given name, using a template for the output file, with the relevant metadata inserted.

The problem

Until recently, when I wanted to produce an HTML version of my markdown file, I would select the markdown text, paste it into Byword and then run ‘Copy HTML’ from the menu. Then I would take that copied HTML and paste it into the <body> of the HTML file I was using as a template. Then save that off with the appropriate filename.

Hardly efficient.

The solution

We can use Pandoc, a universal document converter, to make our conversion. All that follows is macOS centric.

Our steps will be:

  • install Pandoc
  • install Vim-pandoc
  • create an HTML template for our markdown => HTML conversions
  • understand the commands to run for various outputs

Install Pandoc

I used Homebrew (brew install pandoc) to install it, but there is a ‘clicky clicky’ installer if you prefer.


Install vim-pandoc

vim-pandoc provides some Vim integration for Pandoc. It means you can do conversions from within a buffer rather than having to break open another Terminal window.

I use vim-plug, so it’s a case of adding Plug 'vim-pandoc/vim-pandoc' to my init.vim file and running :PlugInstall.

Creating a basic HTML template

You can go nuts with your template. There are good examples to steal from here too.

Easiest thing to do is get the HTML file you are already happy with and just remove the contents of the body. Inside the HTML <body> tag, just type ${body}. That’s where the converted output from your markdown file is going to be written. You can specify the location of the CSS separately if you wish but I chose to stick my CSS in the head inside style tags.

I also wanted to print the date into the file. At present I’ve added the interpolated variable ${date} to do this. You can see where this data comes from in the command below.

Now save the template with an appropriate filename e.g. ‘template.html’.
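As a rough sketch (the CSS and the date markup here are placeholders; ${body} and ${date} are the template variables described above), such a template might look like:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <style>
        /* your CSS here */
    </style>
</head>
<body>
    <p class="date">${date}</p>
    ${body}
</body>
</html>
```

Pandoc replaces ${body} with the converted markdown, and ${date} with whatever you pass via --metadata.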

Create the HTML from markdown

Here is an example of the command I now run to create my file:

:Pandoc --template template.html -s --metadata date="01.04.2020"

That’s going to run Pandoc on the current buffer and use the template.html we just made. The -s flag tells it we want a standalone file (as opposed to just the content), and we also pass in the date metadata so our template will insert 01.04.2020 where we added the variable.

Suppose you wanted a different name for the output file. It defaults to the name of your buffer: if your buffer is, the resultant file will be sausages.html. Pass a different name with the -o flag (o for ‘output’). For example, to name the output cake.html and save it into the ‘myFolder’ folder:

:Pandoc --template template.html -o myFolder/cake.html -s --metadata date="01.04.2020"

When I first got this up and running, I wanted to be able to pass today’s date into the HTML template automatically but hadn’t figured out how.

It turns out that if you run the command with :exe you can pass the date programmatically, so in the example below, the current day’s date will be added automatically.

:exe 'Pandoc! --template template.html -s -o myFolder/cake.html --metadata date='.strftime('%Y-%m-%d')


The command line arguments can seem off-putting for casual use. However, if you find yourself doing lots of tedious exports to the same format/file, this can be worth the time to set up.

In addition, Pandoc supports output to a multitude of formats, making exports to PDF and the like a breeze too.

March 2020 update
Thu, 05 Mar 2020 21:46:09 +0000

I’d appreciate any comments you have on this content; it will shape future editions.

My original intention was to send this content to subscribers initially and then publish it for all a week or so later. However, I love comments (more on which below) and want subscribers to have the chance to comment straight away. If you would rather email, by all means do. However, if you are happy to comment publicly, it will help me collate feedback and allow us to discuss the contents better.

For now, the best I can offer in thanks for your subscription is that this update will be RSS and subscriber only.

What I’m working on

RWD 3rd edition

I’ve just finished the first draft of the final chapter of the 3rd Edition of ‘Responsive Web Design with HTML5 and CSS3’. It’s easily the most popular book I’ve written.

I wrote the first edition back in 2011/2012, the second in 2014/2015. It’s amazing to see how far things have come in that time. Mobile has obviously exploded but the tools we have to work with now are so much better. For example, there’s a huge chapter on CSS Grid in this edition. Something that was just a pipe-dream in 2012! I found Grid pretty tricky at first. I kind of needed to unlearn much of what I knew. Once it clicked though – Oh My! If you haven’t found the time to learn Grid, move it up to the top of your ‘to learn’ list – regardless of whether you pick up a copy of my new book to learn it or not.

Getting this edition done has been a slog. It’s something I’ve been doing each evening with the 30–60 minutes I can grab.

When I first started thinking about writing another book, I toyed with the idea of switching from an iMac to a Macbook so I could at least sit with my wife in the evening, however distracted I might be. However, a colleague in a similar place in life suggested I adopt a more habit-based approach and just dedicate 30 mins every day to the task. That was what he had adopted to pursue his music career. By the way, if you like electronic music, check out his latest release, ‘Taken By The Heart’:

This habit based approach is the one I adopted to get a little PWA app built – something I wrote about for Smashing Magazine – and it’s worked here for writing this book too.

So, in the absence of a better approach, when it comes to side-projects, I’m embracing this technique – thinking like the tortoise in the ‘Tortoise and the Hare’.

Height adjustable desk

One ‘indulgence’ I did allow myself from the outset was investing in a height adjustable desk. I managed to pick up the Flexispot E5W electric legs in an Amazon lightning deal for £251.99 and added an inexpensive Bamboo table top from Ikea. This thing is a revelation. I’m not lucky enough to have a standing desk in the day job and the last thing I wanted was to be sat down all evening too.

This desk allows up to four memory settings (realistically a sit and stand for two different people) so you just press a button and up/down it goes to your stored height. Brilliant! You can even set it to remind you every 45 minutes to change from sitting to standing and vice-versa.

This is one of the best investments I have made in personal equipment. Easily as worthwhile as the Sony XM3 headphones.

A note taking app?

I’m wondering if anyone would pay $2 a month (say) for a note taking application that works on all devices and (unique selling point) saves every version of a note you ever made? That’s ticking away in the back of my mind as my next side project. Maybe I’m just scratching my own itch with that one?



Audible is the last monthly subscription I would cancel if funds were short. Here are five reads (listens?) from the last 12 months I’ve enjoyed:

The Spy and the Traitor, by Ben MacIntyre – true story of the UK’s most successful spy in Russia/Soviet Union. Absolute belter this. Recommended this to my parents, friends and colleagues — every last one of them enjoyed it.

Can’t Hurt Me, by David Goggins – this is not a perfect book by any stretch. Nor can I agree with all Goggins’s views. However, if you want to recalibrate your idea of what the human body and mind are capable of, this is the book for you! I certainly took a lot from it.

How Not to Die, by Dr Michael Greger, Gene Stone — I’m not entirely on the vegetarian/vegan bandwagon but what this book did do is open my eyes to the fact I was almost certainly eating too much meat and giving it undue attention in my diet. Regardless of your point of view I think you’ll find this an interesting read. It does get a little like a stuck record by the end though!

The Secret Commonwealth, by Philip Pullman — I’ve been a huge fan of the Dark Materials books since their release 20-odd years ago. These prequels, of which ‘The Secret Commonwealth’ is the second, are equally compelling.

The Body, by Bill Bryson — your body is the most incredible thing you will ever own. This book proves it. If you have any interest in human biology whatsoever, this will blow your mind.

Blog posts

In terms of blog posts, it’s a shame I haven’t catalogued everything I have read and enjoyed. Here are some I can remember: — I’m a big fan of Andrey’s work. Not only have things like PostCSS and Autoprefixer revolutionised the way many of us write CSS, the way he approaches his projects is inspiring.

Bad Boss? Crazy relation? This is a fantastic long piece about narcissism, deadlines and leadership:–7032a5fb12ac

I’m surprised it hasn’t received wider circulation.

Great piece from an ex-Apple engineer about why iOS and the Mac are so buggy:–13-and-catalina-are-so-buggy/

I love it when content creators share how they are forging a living for themselves. Craig Mod offers a fantastic blow-by-blow here:

Random thoughts

CMS systems and commenting

So many people are into JAMstack CMS systems currently. I just wish one of them would include a commenting system. There are so few CMS systems that have commenting built in and I think it is absolutely essential for a blogger to control the comments on their own content.

I have no love of PHP but I won’t be moving away from WordPress on the front-end until I find a good CMS with commenting baked in. Maybe Ghost v4?

Design systems

There is a whole industry that has popped up around Design Systems. There are jobs (and even Manager Roles?!?) associated with them. This movement astonishes me. I think their value in creating a product is entirely out of whack with the amount of lip-service and industry attention they receive. I’m clearly in the minority here though. As this Twitter poll I ran illustrates:

MDN data and

A little anecdotal, but since the data was merged with MDN’s, I have found it is often completely incorrect — especially when it comes to Safari.

Work kit and tech choices

I’m still on Vim. Still enjoying it! Just switched to JetBrains Mono as my editor font of choice (was using Dank Mono previously). Like Fira Code and a growing number of other monospaced fonts, it has ligatures. I’ve read complaints about ligatures in programming fonts (e.g. but I still disagree; I find those arguments weak. It doesn’t affect the source code, so just use what you like, I say!

I lost a finger

It took a while to finish this newsletter because, halfway through, I had an accident and subsequently had to have my left ring-finger removed at the first knuckle. If you’re reading that and thinking WTF! then it’s probably easier to read the blow-by-blow account I wrote shortly after: I can’t tell you it’s a fun read, so don’t say you weren’t warned!

Mechanical keyboards

I’ve spent a few months diving even deeper into mechanical keyboards to write a monster article on the subject for Smashing Magazine.

I was already a convert to mechanical keyboards but this has sent my addiction into overdrive.

The article isn’t published yet but shouldn’t be long. It covers (hopefully) everything on the subject and stops just short of talking about building your own keyboard.

I’ve now looked at a bunch of great mechanical boards, trying things like the ErgoDoxEZ, the WASD CODE v3 and more different keyboard switches than you can shake a keyboard at!

If an article on building your own keyboard is a piece you would like to read let me know and be sure to comment to that effect on the Smashing article if you see it.

I’ve recently finished building my own keyboard. That involved sourcing a separate aluminium keyboard case, PCB, key switches, keycaps, key stabilizers and more. Think of it like Lego for keyboards! Hugely satisfying, and I enjoyed learning to solder.

Until next time

Thanks for reading. If you were someone who signed up when I originally said I would write a newsletter I’m sorry! I know it has been a ridiculous delay.

Hopefully, with your feedback, I can make this a more regular thing.

Ring avulsion — how I lost my wedding ring finger Fri, 28 Feb 2020 22:54:53 +0000 Warning: this post contains some graphic descriptions. If you are squeamish, perhaps not the post to read with your lunch.

Also: this isn’t a post to garner sympathy. I had an accident. I wish I hadn’t. I’m now missing most of one finger. I wish I wasn’t. However, in the scheme of things, people lose far, far more. Let’s take it as read you feel bad for me. 😉


I’m on my way back to the office from the gym. There are 5 of us in the car. It’s started snowing, and, this being England, five snowflakes bring the entire UK transport infrastructure to a shuddering halt.

It’s about 2:45pm. Two of us have a meeting at 3pm. The office is a 5 minute walk away and sitting in a traffic jam will take a whole lot longer.

We decide, fatefully, to jump out and walk while one remains to drive the vehicle back.

There’s a shortcut through a car park and then some abandoned wasteland that we’ve taken a handful of times. There are a few ways through the wasteland; I go one way, the other three go another. The snow is falling. A couple of inches thick now.

I walk on another 20–30 metres and see there’s now a new fence; I don’t remember it being there before, but it’s been months since I last came this way. It’s a brand new fence. Maybe 7/8ft high, but I’m approaching a gate section. There are solid beams all the way up and across; plenty of footholds. I’ve been over these kinds of fences tens, if not hundreds, of times in my life. Nothing about it looked sketchy.

I climb up and over and drop down on the other side. Unremarkable. Did I just catch my wedding ring as I dropped down? I glance down.

I can’t believe what I am looking at.

My ring finger is gone.

Mostly. It’s there to the point my wedding ring was and the rest is stripped, leaving a bloodied bone protruding.

I can’t compute what I’m looking at. Am I hallucinating? This can’t be real. This can’t have just happened.

Focus Ben. Focus.

I grab what’s left and apply pressure. And get my arm above my head. The only thing immediately worse than this right now is if I bleed out and faint. Then I remember shouting.

“Help! HELP!”

I’m opposite a cinema. A man walks by on the other side of the road.

“Please call an ambulance, I’ve cut my finger off!”

Nothing but an inane grin in return.

“Please, can you call an ambulance?”

On he walks. Is he for real? Is he worried about call charges? If I could chase him right now I’d kick him up the arse so hard he wouldn’t sit down for a month.

I remember the others. “Matt! Matt!”. A beat. Then from the distance.



Matt comes sprinting up the road. Thank f!!!.

I tell him what’s happened. He sees my face; he knows I’m not kidding.

He’s on it and calling 999.

And then I start thinking:

Where’s my finger?

I concentrate so I don’t panic. This is going to be OK. This stuff happens all the time. Find the finger, keep it in ice, get to the hospital; they’ll do their thing and it will all be OK.

But where is my finger?

The others join us. One turns back to go and fetch help.

The phone is pressed to my ear. The ambulance call staff are running through their basic questions. “Do you have a temperature? Have you been sick in the last 24 hours?”. I try and implore her to understand that the most pressing thing right now is that my finger is no longer attached to my hand. Then she advises, “OK, if you can just get yourself to an Accident and Emergency centre within the hour.”

I look at Matt. He says something to the effect of “No f!!!ing way.” I decide to be a little more persuasive given my predicament. “I’ve lost my whole finger. The traffic is completely gridlocked. I can’t walk there!”

She goes away to talk to someone. Returns. “OK, we’ll get an ambulance to you.”

“OK, thanks.”

And now while we wait I’m asking everyone to find my finger.

We can’t.

There’s two inches of snow, no ‘smoking gun’ of a blood trail, nothing. I can’t believe this. If I end up with no finger because we couldn’t find my finger in this freak snowfall I’ll be pissed off until the end of my days.

The first of many tangential thoughts — my sons are going to be freaked out. I feel bad for them.

I at least have the presence of mind to stay where I am, so we have a point to search from. This is surreal. This is grim.

I start worrying about everyone else’s hands as they sweep through the snow trying to find it. Another random thought — what if they get frostbite or something?

I apologise. They tell me not to be ridiculous.

There is no sign of the finger. I can feel panic and desperation starting to build. The ambulance arrives. The paramedics get out in their usual laconic manner. This is nothing to them. They see this every hour of every day. While the rest hunt, they tell me to climb on so they can take details and get my baseline observations.

Turns out I wouldn’t have bled out anyway. The blood vessels in the finger are quite small. Apply pressure to close them and reduce the blood flow (as I had by holding it above my head) and they close pretty quickly.

They peel my good hand away to take a look. Jesus. What a mess.

And then the worst fear hits me. I have to call my wife.

I’m imagining the bollocking I will get. The condemnation. My shame.

Let’s just do this.

I call, she answers. “Are you alright?”

“No, sorry I’m not. I’ve managed to cut my finger off.”

A few back and forths and she realises I’m not joking.

She breaks down. She’s distraught. I tell her I’m OK but this is worse than a bollocking! I could have dealt with that!

Then I realise the time and realise I shouldn’t have called. She’s about to walk and collect our sons from school and walk back over a few busy roads; and I’ve just put her head in a very bad place.

I tell her to be calm. To just concentrate on what she needs to do. I’ll call her when I’m at the hospital. And now I’m worrying about her and the boys getting home safely.

Then back to the business at hand. I want my finger. I really want my finger.

I go back out; there are more people from the office there now, plus an older man I’ve never seen before. They have a rake too — where the hell did they get that? Must be the older guy; he must live nearby.

Everyone is pawing around in the snow bless them. But still no finger.

“We need to go I’m afraid”, says the paramedic.

With a heavy heart I agree. They’ll keep on looking they tell me as I climb in.

I’m just getting my seat belt on in the back and there is a knock on the side of the ambulance.

They’ve found it!

“We need to keep it cool” I tell them. For once the snow is useful. My finger is placed in a container with some snow. Matt throws my gym bag into the ambulance with me, the doors are closed and away me and my finger, never before apart, go.

Fence showing where ring avulsion occurred
The finger was found on top of that concrete circle, which is about 10 ft from where the ring was, on top of the fence.

Stoke Hospital 3:15pm

It’s a short journey. I don’t remember much about it. I remember walking in through the wide entrance they bring the real accidents in through. I should be thankful I’m walking in. Plenty aren’t that lucky.

My memory is a bit of a jumble at this point. I try and joke with people, try and get people feeling at ease with me, so they know that they can tell me what is actually going on and not some sanitised version.

More checks and then off for x-rays. Once they are done I’m left on the trolley outside for someone to wheel me back whence I came.

A surgeon arrives, introduces himself. In his best bedside manner he gives me a hammer blow.

“I’m afraid we won’t be able to re-implant the finger.”

The blood in my body recedes. Then, in the same breath…

“So, we can either terminate the finger at the knuckle or we could take away the remaining knuckle as well and close the hand up into a three-fingered hand.”

My face must have changed at this point as he then started to ‘sell’ me having a three-fingered hand over a stump for a finger. I can’t compute this. It’s too much.

I stop him and ask: are there any hand specialist hospitals nearby? Anyone we could get a second opinion from? I tell him I’m really struggling to come to terms with my options.

To his credit he agrees. I hear him make the call to Derby which is quite close and has a hand specialist department. They agree to take a look.

While I wait for transport, I’m talking to 5 of the surgeons. They tell me what I have done is a ‘ring avulsion’.

If I had chopped my finger off clean, they would be fairly confident they could re-implant. But in this kind of injury, the finger is essentially stripped from the bone, taking blood vessels and ligaments with it from down in the palm.

They tell me they get roughly one of these a week. Wait, what? Yes, one a week. Why didn’t I get that memo?

My eyes wander to each of the surgeons’ hands in turn. Not one, male or female, is wearing a ring. “Is that why none of you wear rings?”

They confirm. “Absolutely, it’s like having a guillotine on your finger waiting to go.”

I feel I’ve been kept in the dark about rings. Surely this should be more common knowledge.

It’s about 6pm now. They sort me an ambulance, get the lights on and off we go to Derby Royal hospital.

Derby Royal Hospital

Again, we enter through the ambulance staff entrance, right next to a coronavirus containment/quarantine pod. That doesn’t exactly thrill me.

Then through to the main Accident and Emergency waiting room. It’s pretty grim in there. The usual collection of very ill and injured people.

Within about 30 minutes the hand specialist comes down to see me and my detached finger.

He lies my detached finger out on a sterile sheet, pulling the tendons and blood vessels that hang from it straight. He is inspecting it with some kind of mini-binoculars mounted to his glasses.

Jesus. How did today end up with me here, watching someone do that?

He doesn’t take long.

“I’m very sorry Mr Frain. There is nothing we can do for you.”

By now, odd as it may seem, I was sort of expecting it. The human mind has an enviable ability to re-evaluate and re-appraise things.

He gives me the same options as the surgeon at Stoke but leans towards terminating at the knuckle. I tell him that’s my preferred option (preferred option!). He says they can do the operation in the morning, I might be out same day.

I’m ‘nil by mouth’ from midnight, but I can drink water until 3am.

The specialist leaves. A nurse returns to the cubicle. He cleans up the swabs and other medical paraphernalia the specialist had been using.

There’s my finger.

Nurse looks at me.

I look at him.

We both look at the finger.

We both look at each other.

“Is there anything you want us to do with your finger?”

What a question to be posed with. I feel ill-equipped to answer.

“What do you usually do with them?”

A beat. I know he’s wondering how best to phrase it.

“We…just put them in the bin”

I sigh.

“You’d better put it in the bin then”

It’s perhaps redundant to call it out but to see someone pick up a bit of you, that was only hours ago attached to you, and put it in the bin before your eyes is pretty rough.

The only upbeat part here was talking to a nurse with the surname ‘Wisdom’. She had a Jamaican/West Indies accent, and just had one of those personalities you can’t help but be endeared by. I told her I wasn’t getting to keep the finger. She told me how sorry she was. I told her I would need to tell my wife.

Without missing a beat she glances at me, smiles wryly and says, “You’ll be all right, you still have other bits that work!”

It made me laugh. I needed a laugh.

The night

I’m then waiting to get on a ward. There are no beds free. I have next to no charge on my phone. I’ve had to tell my wife I’ll call her when I get some charge.

It was gone 11pm by the time they got me a bed on a ward.

A nurse lends me a charger.

I make a couple of calls. There isn’t much to say. My wife is asking if she should come over. Should my Mum and Dad come over? There’s no real point. What would anyone do? What would anyone say? I was happier on my own.

It’s still not hurting much, all things considered. But it is freaking me out that I can see the shard of my finger bone uncovered when I look down the bandage. I get some gauze from someone and cover it up. Last thing I want is an infection in my bone or some other grisly associated problem.

I don’t sleep much that night. It’s not through pain. I’m next to a man that, if they wanted to create the world’s loudest snoring machine, they would need only copy his mechanics. This is an industrial snorer.

I maybe snatch an hour’s sleep between snoring man and the frequent observation checks the nurses perform all night. Oxygen, blood pressure, antibiotic drip etc.

Tuesday 11th February

That moment you wake and realise it’s still real.

It can’t be! But it is.

I’m oddly motivated. Let’s just get this dealt with so I can get out of here. I want to see my family. I want to get home.

I get a bag taped over my hand and shower with antibiotic soap. Then get some clothes on. My gym bag is still with me and has absolutely nothing of use in. I consider putting it all in the bin just so I don’t have to keep carrying it around with me.

It’s something like 7am when the surgeon comes around and asks how I’m doing. Tells me I should be in soon. He marks my arm with an arrow. I wasn’t even worried they might take a finger off the wrong hand until then!

So then I wait.

Hours pass.

My parents arrive. They came against my wishes bless them. They have never let me down. Ever. The best people I have ever known.

Eventually, around 4pm I’m given notice I’ll be going in soon. My mouth is so dry at this point I wonder if it might spontaneously combust.


I speak to the surgeon once I get to surgery. It’s a different surgeon than the one I spoke to yesterday. But I like her. She inspires my confidence. I feel like I’m in good hands.

The anaesthetist tells me they will be using a ‘blocker’ rather than general anaesthetic. It’s an injection in the neck that will numb my entire arm while they work on it. I don’t like going under but not sure feeling the snap as they clip my finger bone away will be preferable either.

Whatever. Let’s just do this.

I’m wheeled in. I say hello to everyone in the theatre. I go a bit woozy. “Have you just given me something?”. They confirm they have put the first bit of sedative in my line. I go to start another sentence…


What happened there then? I’m in a recovery area. The surgeon is telling me that everything went well and according to plan.

Someone brings me a cup of tea.

Oh. My. God! I swear the best cup of tea I have ever tasted in my life. And biscuits. They are in packs of three: bourbons, custard creams and rich teas. I must have polished off about 6 packs. Certainly the high point of the last 24 hours.

My recollection is hazy of the rest of the evening. I remember talking to my Mum and Dad some more and then they left to let me get some sleep.

This night I did sleep. The remnants of whatever they gave me giving me immunity against the industrial snorer next to me.

Wednesday 12th February

Yes. It still happened. I feel like this must be some other version of reality I have somehow slipped into.

I’m told I’ll be going home soon. They just have to sort paperwork and get me some medications sorted. That’s about 9am.

I don’t actually get out until about 2pm. In the meantime I overhear the plight of the guy opposite me. Same sort of age, fell off step-ladders about 14 months ago. His ankle broke in an open fracture. 10 operations later and a laundry list of pain killers and they still can’t sort his leg. If the next operation can’t sort it they may need to amputate the bottom of his leg.

And I’m upset about a finger.

That put things in perspective.

Eventually I’m out with my parents and they drive me back home. I arrive not long after my sons are back from school. Seeing my family I can’t help but feel, all things considered, I’m a lucky man.

Friday 14th February

I’m back at Derby Royal for a check-up. I have a list of questions for the surgeon:

  • When can I exercise? Not for a few weeks at least; extra blood pressure could rupture the closing skin.
  • When can I drive? Probably about a month. I need to feel in complete control of the vehicle.
  • When will these phantom pains and itches end? No clear answer.

Aside – I can tell you that feeling itches on a part of your body that is no longer attached is pretty uncomfortable and very disconcerting.

  • Did I have general anaesthetic in the end? No, just extra sedative.
  • How long to heal? 1 month for most things, a year to heal fully.
  • Do I need therapy? We will have to wait and see.

I take some photos of my hand before they redress it. I tell my wife not to look on our shared photo library until I have a chance to move them. They are pretty grim.

Until now I’ve had a sizeable dressing on. I leave my appointment with a dressing that makes it completely clear to anyone that cares to glance down that I have no ring finger. It’s hard to take. I feel very self-conscious.

The next two weeks

I’m off work for the remainder of the week and the following one. It doesn’t really hurt exactly. But it does ache from time to time. I’ve got standard pain killers to take if it gets a bit much. Like toothache, it’s always worse at night.

There doesn’t seem to be a real correlation between what I do and how much pain and aching comes on later.

I’m pretty desperate to try typing again. I’m in the middle of writing the 3rd Edition of one of my existing books. For once I have a decent excuse why I won’t meet my chapter deadline.

Turns out, apart from the odd letter combination, it doesn’t take long to adapt my typing to the point where it is barely different from typing with all my fingers. In that respect at least I am fortunate.

Like always in these situations, it’s telling who bothers to reach out to you and check how you are doing, and who doesn’t. Getting the messages and visits was very much appreciated over those two weeks.

Pro Tip: if something like this happens to someone you know, send them a message at least. You’ll be surprised how much it can lift their spirit.


As of today, eighteen days on, I can’t give you a good summary of how I feel. Some days I feel fairly positive and resolved to just get on. Others, I’m incredibly pissed off at what has happened.

I’ve had plenty of injuries over the years through football, fighting and mountain biking. However, the thing they all had in common was no matter how bad the tear, broken the bone, or black the eye — given time, I knew they would be back to normal.

This is different. It’s not going to grow back. I’m stuck with my 1/3 of a wedding ring finger for the rest of my days (Terminator fingers still aren’t available on the NHS — I did ask).

Like I said at the outset. It’s far from the worst thing that could happen to someone. As losing part of your body goes, it’s probably the least impactful thing that could have happened. I feel both very lucky and very unlucky at the same time. Does that make any sense?

Will I wear my wedding ring on another finger? No, I’m selling it. I never want to see it again. I don’t have a tattoo but maybe something like that might end up on the cards.

Wedding ring showing nick where fence caught it

The nick in the metal shows where the fence must have got caught

What about a prosthetic? Not for me. It’s a bit like a hair transplant or wig for male pattern baldness. I get why people go that way but it would make me feel phoney. I need to own this. But it’ll likely take a while.

How have people reacted? To a person, everyone has been great. I can see kids are a little freaked out. However, I have a growing compendium of alternate, and far more exciting, stories I’ll tell when younger children ask what happened to my finger. For example, stranded in a life raft in the pacific with a fishing line and no bait, bitten by a Peruvian Death Adder in the South American rainforest etc. Never let the truth get in the way of a good story as they say.

Everyone tells me it’ll feel better in time. I’m sure they are right, and frankly, in these scenarios you don’t have much choice.

I have been lucky enough in life that until now, I have been able to believe that these things only happen to someone else.

But now I am one of those random awful accident stories.

All I have for you now is this: accidents can happen to anyone. With that in mind, tell that person you love them. Hold them close and appreciate them. Don’t go separate ways without resolving your fight.

Oh, and maybe give up wearing your rings!

This accident, in the scheme of accidents, is minor. Plenty of others are not so lucky. I’m sending all my love their way and hope they can find the mental fortitude to get through their challenges.

Finally, if you’re reading this at some point in the future and the same or similar thing has happened to you; by all means reach out. Hopefully I’ll have something more positive for you and if I can offer you even the slightest of comfort it will be my absolute pleasure.

Oh, remember the older man who came out to help the others find my finger? I was in the ambulance when this happened, but when Matt explained to the guy what had happened he held up his hand and said “Like this!”, and he had exactly the same injury — also from getting his wedding ring caught on a fence. I’m glad I didn’t see and hear that at the time!

A few days after surgery, a bandaged ring avulsion
Forgive the mess. I didn’t really fancy tidying.
Finite State Machines Fri, 28 Feb 2020 11:01:14 +0000 When building user interfaces, the question of ‘state’ will quickly surface. Simple interactions usually require simple state changes. Something is on or it is off. It is open or closed. However, for anything more complex, things quickly become difficult. In my experience, an increased number of states for any given section of UI creates a number of problems that grow exponentially rather than linearly.

This is the point at which I find great utility in Finite State Machines.

Here’s my own “York Notes” explanation of Finite State Machines; it is a way of describing distinct states of UI as well as defining the allowable paths/transitions from one state to another.

In practical terms, a finite state machine provides a pattern to ensure everything that should occur in one state stays in that state and doesn’t leak across to another. For example, it is a coding pattern that stops you coding oddities in your UI where a logged in avatar is shown when a log in process has failed. The states in your UI are finite so you never get mish-mashes you don’t want.
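To make that concrete, here is a tiny hypothetical sketch (the variable and state names are mine, purely for illustration). Independent boolean flags allow contradictory combinations, whereas a single finite state value can never represent an impossible mix:

```javascript
// Hypothetical sketch: independent booleans allow impossible UI states.
let isLoggedIn = true;
let loginFailed = true; // contradictory, yet nothing prevents it

// A single state value makes such mish-mashes unrepresentable:
// the UI is always in exactly one named state.
let uiState = "failed"; // one of: notLoggedIn | processing | failed | loggedIn

console.log(uiState === "loggedIn"); // false: a failed login can never show the avatar
```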

I’ll write down the pattern I use, much for my own posterity but there are many variations on the theme. I certainly didn’t come up with this pattern, I lifted it from somewhere, I just don’t remember where! If anyone knows, let me know and I’ll link it up.

Resources on Finite State Machines

Also, if you are looking for a more thorough dive into Finite State Machines:

A basic JavaScript pattern for Finite State Machines

My needs are typically pedestrian. As such, my pattern is very basic. However, if you have never used Finite State Machines before, I dare say it would be a good place to ‘dip your toe in’.

The first step in creating a Finite State Machine (hereafter referred to as FSM) is to draw the state machine. If you think you need to be able to draw well, don’t be alarmed. If you can draw rough boxes and lines you command all the faculties necessary.

Let’s imagine a simple scenario we might want a FSM for: a Logging In process. We want to make allowance for four possible states:

  1. Not logged in
  2. Processing a log in
  3. Log in fail
  4. Logged in

There will be a number of actions and transitions between these states.

A rough diagram of the requirements as a FSM

Here is my rudimentary sketch of the FSM:

Basic finite state diagram

The states are the squares and the lines are the actions/transitions between the states. Let’s build this in code. The pattern requires three distinct sections. The first is called machineInput here. It’s the part that takes some input and moves the FSM to the next appropriate state. You might prefer to name it something that better suits your own mental model, ‘machineStateSetter’ or similar, for example.

The machine that sets the state

This section of code is the same for me every time. It looks like this:

// machine state setter
const machineInput = function (name) {
    const state = machine.currentState;
    // Only move if this input is a valid transition from the current state
    if (machine.states[state][name]) {
        machine.currentState = machine.states[state][name];
        console.log(`${state} + ${name} -> ${machine.currentState}`);
        renderUi(machine.currentState);
    }
};

To re-iterate, you don’t need to touch the machineInput function. You just send a transition in and it changes the machine state to the appropriate state. Before we do that we need to define our finite states and the transitions between them.

Defining the possible states of our machine and their transitions

So, the second section is where we define the states and transitions of our machine. Again, you may wish to rename the function to something that better suits your mental model.

Here we are not writing any action logic. We are just describing, as an object, each state of the machine, what input that state accepts, and what state the machine should move to when it receives that input.

var machine = {
    currentState: "notLoggedIn",
    states: {
        notLoggedIn: {
            submit: "processing",
        },
        processing: {
            problem: "failed",
            success: "loggedIn",
        },
        failed: {
            submit: "processing",
            cancel: "notLoggedIn",
        },
        loggedIn: {
            logout: "notLoggedIn",
        },
    },
};

currentState describes the opening state for your FSM. Then for example, if you are not logged in and you click submit, you move to the ‘processing’ state.
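To see the two pieces working together, here is a minimal, runnable sketch of the pattern (with the logging and UI-rendering concerns stripped out for brevity). Notice that sending an input that isn’t defined for the current state simply does nothing:

```javascript
// Minimal, self-contained version of the pattern above
const machine = {
    currentState: "notLoggedIn",
    states: {
        notLoggedIn: { submit: "processing" },
        processing: { problem: "failed", success: "loggedIn" },
        failed: { submit: "processing", cancel: "notLoggedIn" },
        loggedIn: { logout: "notLoggedIn" },
    },
};

// Move to the next state only if the input is defined for the current state
const machineInput = function (name) {
    const state = machine.currentState;
    if (machine.states[state][name]) {
        machine.currentState = machine.states[state][name];
    }
};

machineInput("submit");  // notLoggedIn -> processing
machineInput("logout");  // not defined for "processing", so nothing happens
machineInput("success"); // processing -> loggedIn
```

That ‘do nothing’ default is exactly the safety the pattern buys you: an input that makes no sense for the current state can never move the machine somewhere unintended.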

Reacting to the state change

The final piece of the puzzle is reacting to the current state of the FSM. That is where the renderUi function comes in. Each time you run the machineInput function, after updating the machine state, we want to update the UI. You could use different methods of control flow but the example I see implemented most often is a switch statement. For example:

function renderUi(state) {
    // Simulate a log in that randomly fails roughly one time in five
    let willOrWont = [true, true, true, true, false];
    let rand = willOrWont[Math.floor(Math.random() * willOrWont.length)];
    // root is the element the CSS hooks onto, e.g. document.documentElement
    switch (state) {
        case "notLoggedIn":
            root.setAttribute("data-fsm-state", machine.currentState);
            break;
        case "processing":
            root.setAttribute("data-fsm-state", machine.currentState);
            // Fake a server response arriving after two seconds
            setTimeout(() => {
                if (rand) {
                    machineInput("success");
                } else {
                    machineInput("problem");
                }
            }, 2000);
            break;
        case "failed":
            root.setAttribute("data-fsm-state", machine.currentState);
            break;
        case "loggedIn":
            root.setAttribute("data-fsm-state", machine.currentState);
            break;
    }
}

Each case in the switch statement represents one of the boxes in our initial drawing. Everything that needs to happen for a given state should be enclosed in the relevant case section. This absolutely guarantees that only what is intended (or at least defined!) for each state occurs.

All other interactions in the DOM merely send something into machineInput and that, based on what you have defined, either moves things on to the relevant state or, if nothing is defined, does nothing.


It feels a little soon to be writing a summary but the truth is, for a beginner’s introduction, there isn’t much more to say. I really like the FSM pattern. I don’t use it for every little piece of interaction but I’ve gotten better at judging when something is getting sufficiently complex to warrant one. I also find it is a pattern that lends itself to easier trouble-shooting down the line.

In short, I use it often and find myself reaching for it more and more. I’ve often heard people tell me it’s nothing more than a switch statement, which is, in a reductionist sense, largely true. However, I think that denigrates the predictability that it forces, not just in code but in your thinking about the code.

Anyway, give it a go, let me know how you find it.

Material design style click effects with pointer events and CSS Custom Properties Mon, 13 Jan 2020 21:32:43 +0000 These days I’m getting a kick out of remaking things that used to be problematic with more modern approaches.

Here’s a little example: a material design style click/selection. What I mean is where you have a button and where you click it creates an effect that starts from the point you clicked/touched.

Here is the example we will build:

See the Pen
Material design inspired buttons
by Ben Frain (@benfrain)
on CodePen.

It used to be necessary to handle touches and clicks separately, with events like touchstart and click handled individually. Nowadays we can just use pointerdown. So nice!

Anyway, let’s get into this:

The building blocks — button elements and pointer events

I’m going to add a few buttons to the DOM. You can tweak the actual effect of the click to your heart’s content. However, in terms of requisite mechanics needed to make this work, it’s essential to know just one thing: whereabouts did the user click inside the button? Once you know that, you can communicate it back to the DOM so that your CSS can react accordingly.

We could do this kind of thing in the past, it’s just so much easier with CSS Custom Properties.

The complete JavaScript

Anyway, here is the JavaScript I used. Take a look and then we will go through it in more detail:

document.body.addEventListener("pointerdown", e => {
    // Ignore anything that isn't the button or a child of it
    if (e.target.nodeName !== "BUTTON" && e.target.parentNode.nodeName !== "BUTTON") {
        return;
    }
    const theBtn = e.target.closest("button");
    const neededLeft = e.x - theBtn.getBoundingClientRect().left;
    const neededTop = e.y - theBtn.getBoundingClientRect().top;
    const biggestA = Math.max(neededLeft, theBtn.getBoundingClientRect().width - neededLeft);
    const biggestB = Math.max(neededTop, theBtn.getBoundingClientRect().height - neededTop);
    // length of the diagonal through a rectangle: a² + b² = c²
    const neededWidth = Math.sqrt(biggestA * biggestA + biggestB * biggestB) * 2;"--x", `${neededLeft}px`);"--y", `${neededTop}px`);"--width", `${neededWidth}px`);
    theBtn.setAttribute("aria-selected", theBtn.getAttribute("aria-selected") === "true" ? "false" : "true");
});

Breaking down the JavaScript

I’ve set a listener on the whole DOM, and while this saves on listeners being applied to every element, I do need to then discard clicks/touches on elements I’m not interested in. I’m doing that with the ‘early return’ for any element that isn’t a button or the child of a button.

Then I’m making sure I’m dealing with the actual button element and not a child of it by using closest.

I then need to get the coordinates for the X and Y of the click/touch. There are a few ways to do this; I’m getting the x/y from the pointer event and subtracting the getBoundingClientRect() left/top of the button. First time around I hadn’t factored in scroll position and used offsetLeft/offsetTop; getBoundingClientRect() takes the scroll position of the elements into account.

Sizing the circle effect

Now, a slight complication that I think is worth the extra effort. We want the circle to expand from the click/touch to fill the button. However, to get the best effect, we only want the circle to be just big enough to cover the button. If we make the circle massive to ensure it will cover a button of any size (e.g. 1000px), the effect looks crap; because the transition duration is a constant, depending on the button size you might not see the effect at all: the circle will have passed the bounds of the button before you can visually process it.

Getting the largest of two values with Math.max()

So, to this end, we want to get the largest distance in both x/y from where the click was to the edge of the button. We do this with Math.max(), passing it neededLeft (which is the distance of the click from the left edge of the button) and the width of the button minus that same value, which gives us the distance of the click from the right edge of the button. Same approach with the y axis. This gives us a biggestA (x axis) and biggestB (y axis).
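To make that concrete, here is the same calculation with some invented numbers: a 300×80px button clicked 100px from its left edge and 20px from its top (values purely for illustration, not from the demo):

```javascript
const btnWidth = 300;   // hypothetical button width in px
const btnHeight = 80;   // hypothetical button height in px
const neededLeft = 100; // click distance from the left edge
const neededTop = 20;   // click distance from the top edge

// The furthest horizontal distance from the click to either side edge
const biggestA = Math.max(neededLeft, btnWidth - neededLeft); // 200
// The furthest vertical distance from the click to the top or bottom edge
const biggestB = Math.max(neededTop, btnHeight - neededTop);  // 60
```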

Calculating the hypotenuse

Now here’s the fun part as we finally get to use some of the mathematics taught at school.

I mentioned to my Dad (78 years old) I was trying to remember how to get the length of the diagonal of a rectangle and without missing a beat he said “Hypotenuse: a² + b² = c²”. Now that’s an engineer!

Once we have the longest x and y we want the hypotenuse of the rectangle those dimensions create. We use the formula ‘a² + b² = c²’; the square root of a² + b² gives us c, the hypotenuse. That hypotenuse is going to be the radius of the circle we want for our button ‘circle’.
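As a quick sanity check with the classic 3-4-5 right triangle (numbers invented for the example). Note that JavaScript also has Math.hypot() built in, which does the squaring and square-rooting in one call, though the code above spells it out with Math.sqrt():

```javascript
const a = 3;
const b = 4;
// c² = a² + b², so c = √(a² + b²)
const c = Math.sqrt(a * a + b * b); // 5
// Math.hypot(a, b) is equivalent shorthand
const viaHypot = Math.hypot(a, b); // 5
// Doubling the radius gives the full width of the circle
const neededWidth = c * 2; // 10
```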

Custom Properties and styling with CSS

Right, we have all we need now, we just set some CSS Custom Properties for the --x and --y of our click along with the needed width for our background circle: --width. Naturally, we need to set the aria-selected state on each button click too. Good ’ol ternary operator for that!

The rest is CSS. At the top of the Codepen I have included App Reset but that’s a generic reset. Look from line 185 onwards for the styles relevant to this technique. More specifically, line 226 – 247:

.btn-Btn:before {
    content: "";
    position: absolute;
    top: calc(var(--y) - var(--width) / 2);
    left: calc(var(--x) - var(--width) / 2);
    transform-origin: 50% 50%;
    background-color: #d2d2d2;
    height: var(--width);
    width: var(--width);
    border-radius: 50%;
    z-index: -1;
    transform: scale(0);
    opacity: 0;
    transition: transform 0.2s ease, opacity 0.2s ease;
}

.btn-Group-circle .btn-Btn[aria-selected="true"]:before {
    transform: none;
    opacity: 1;
    transition: transform 0.2s ease, opacity 0.1s ease;
}

That is how the circle effect happens. We use a ::before pseudo element, absolutely positioned according to our click. The position is the click minus half the width/height. The circle width and height is the --width we pass in from JavaScript. We then scale that circle down to nothing using transform: scale(0) and remove that transform when the button is selected. Adjust transition of transform and opacity to taste!


The effect here is quite basic and restrained. You might go wild and add additional elements that animate as needed, maybe just add an additional circle with ::after; the possibilities are near endless.

The important takeaway is just how economical it is these days to create these effects with standard CSS and JavaScript.

If browser support is a concern, you can wrap up the more progressive CSS in a @supports block. For example, if calc is your worry, test for it with @supports (width: calc(100% - 1px)) and use a standard background colour change as the default styling.

Hope you had fun following along. It was fun to remake this effect with simpler syntax!

Vim for front-end development in 2019 Thu, 17 Oct 2019 13:47:46 +0000 Just over 5 years ago, I spent a month learning and using Vim. Ultimately, I went back to Sublime Text with an occasional dalliance with VSCode.

I don’t know what sparked it. But I somehow found myself back trying Vim in September 2019. Here I am. About a month later and I’m feeling pretty content in Vim-land.

Not sure why I am happy with Vim this time around. Has Vim changed? Have I?

I certainly believe that this time around I was ready to have a ‘beginner’s mind’ and embrace the Vim way.

The biggest mistake I probably made originally was trying to make Vim be like Sublime/VSCode. This time around I have made a conscious effort to do things in the manner Vim intended. Let me try to substantiate that nebulous claim:

Knowing where I am in a file is different in Vim land. It’s done by looking at a percentage amount in the status bar. Or, in my case, a percentage value displayed in the Airline plugin to the bottom right of the Vim window. This percentage value is shown alongside the current line and column number. For example, 91% 11/12 :237 tells me I am 91% down the file on line 11 of 12 and currently on column number 237.

Airline showing the current cursor position

That’s very different than Sublime’s mini-map where you get a visual sense of the code and your position in it. Not as fancy, but Vim still gives me what I need. In short order my brain has adapted and I’m not missing the mini-map.

Another example is multiple-cursors. There is no direct native analogue to that in Vim. However, there are certainly ways to achieve the same goal.

Suppose we have a list of 10 lines of text that we want to wrap in HTML tags; let’s add a class name at the same time. In Sublime, we might hold down the relevant key and drag down the edge of the text in question with the cursor, type our beginning tag e.g. <div class="thing"> then press the relevant ‘super’ key and end-of-line key combo shortcut to place all cursors at the end of the line, type our closing tag and then press escape to exit multi-cursor mode.

One way to achieve the same result in Vim would be:

<C-V> to enter visual block mode
10j$ to select the 10 lines we want to operate on and select to the end of each line
S to enter Surround
Type the tag <div class="thing">

Same result. Both techniques require some practice. Having done both I think there is little between them.

The one benefit Vim has is that it’s a language which, when you know it even a little, lets you think about how you would express an operation by using the correct nouns and verbs. Expressing yourself gets more and more powerful and there is a smug satisfaction that comes from the moment when you think of a problem, ponder, “I wonder if…”, and the noun/verb combo you enter does exactly what you hoped it would. Achievement unlocked!

Spell checking

Another example of embracing the Vim way is spell checking. It lacks some of the visual niceties of modern editors but it is rapid and functional.
Vim has spell checking built in. You just need to enable it. You can also limit it to certain languages if you like. When enabled, mis-spelled words get an underline. [s takes you to a previous error. ]s takes you to the next. When you land on one, z= brings up the list of suggestions. Press the number of the one you want, press enter and it is corrected. Need to add a word to the dictionary? zg adds the current spelling to the dictionary.

Vim highlighting spelling errors

This lacks a fancy graphical pop-up but it’s very fast; no mouse === more speed!

Vim and the eco-system in 2019

In terms of Vim and its eco-system, that’s improved subtly but greatly in the intervening years.

NeoVim > Vim, init.vim > .vimrc

NeoVim is a modern re-thinking of Vim. I’ve been watching NeoVim for a few years. This time out, opting for it over the standard Vim seemed like a no-brainer. Install was straightforward enough and the only real difference is that user preferences are managed in a init.vim file instead of a .vimrc file.

From my humble point of view, the sign of a mature Vim user is a stable .vimrc/init.vim file. I wouldn’t say mine is entirely stable but I’m certainly no longer updating it on an hourly basis as I was in the first few days. Mine is here:

I’m not going through everything in there but I will call out some of the feature plugins that deserve special mention.

Getting around, opening files etc

The short answer to this problem is 3 letters long: FZF. Install this for both Terminal and Vim then you get a blazing fast fuzzy finder. I have it set so that <CTRL>-p opens the pane, along with a little preview window of the files as you tab through them.

FZF plugin showing file matches and a preview

I also use FZF as the means by which I navigate buffers (think of buffers as tabs in standard editor parlance). I have <Leader>-b mapped to open the FZF drawer with my buffers listed. Type the appropriate number, press enter, and I’m in.

FZF showing buffers to choose from


Airline has been around for years and there are a few lighter alternatives available. Eleline and Lightline come to mind. However, I have Airline doing exactly what I need currently so for once I have ‘left well alone’.

Code completion and linting

COC.vim showing possible completions

Another three letter answer to this problem: COC. Conquer of Completion is essentially the engine that runs code completion in VSCode. It works like a dream in Vim; you get everything you would get in VSCode. Simply install the relevant language packs with :CocInstall and the relevant completion is ready. So, say I wanted HTML completion, I can just enter :CocInstall coc-html from normal mode and the HTML completion is installed.

In the past, the code completion in Vim didn’t quite work and I remember a bunch of faff getting it to work as I would expect (coming from Sublime). No such problem here. It works perfectly.

Linting for free

Because COC.vim is essentially the VSCode completion and language server, you also get linting out of the box. The same error warnings you’d get in VSCode are here in Vim too.

coc.vim showing a CSS error

Writing/markdown editing

Sublime Text has long had good Markdown editing tools. I’ve adopted Goyo and the associated Limelight for this job. Not only do you get a nice central editing experience with distractions such as Airline removed; Limelight also provides a highlight for the paragraph you are writing in whilst others are dimmed.

Goyo showing a highlighted paragraph of markdown

Colour schemes

I use the Nord colour scheme and there is a great version for iTerm2.

There is also an accompanying Nord theme for Vim (grab it from the aforementioned init.vim) which I love.

By the way, if you want a bumper pack of different colour schemes for iTerm, head over to

Finally, on the subject of pimping iTerm in general, Stefan Judis has a good post on the subject:


I’ve been playing with and learning SwiftUI lately and the single biggest frustration I have is that there is no auto-format option in Xcode (you can select-all and press CTRL+i though). Thankfully, in web development land we have Prettier. It’s just a plugin away in Vim land too. Thank. Goodness!

Fonts and ligatures

Ligatures: some people love them, some hate them. Personally, I am a fan. However, as I work via iTerm, and iTerm advises that ligatures prevent GPU acceleration, I have them disabled. But they are there if you want them. In iTerm at least!

Start (Neo)Vim with an application

Here is the scenario. You are looking at a JS file in the Finder of macOS and want to open it in Vim. Ordinarily, you would head to the Terminal, start up Vim and run :edit with the path to the file in question (perhaps you drag in the location from the Finder to speed things up).

There is another way. You can use AppleScript to make an ‘App’ of Vim. It’s a 5-minute copy and paste job. I tried a few tutorials but the one that worked for me with NeoVim was this one:–11–01/open-files-neovim-iterm2-macos-finder/. Now you can open files directly in Vim from the Finder. The only piece of the puzzle I have yet to crack is how to get that file opened as another buffer of an existing Vim instance.

Saving and restoring sessions

One thing I have always loved about Sublime is that you can open it and it restores the project you are working on. Turns out this is simple in Vim too. Here are the relevant functions lifted from my init.vim file. Apologies, but I can’t remember who these functions should be credited to!

function! MakeSession()
    let b:sessiondir = $HOME . "/.config/nvim/sessions" . getcwd()
    if (filewritable(b:sessiondir) != 2)
        exe 'silent !mkdir -p ' b:sessiondir
    endif
    let b:filename = b:sessiondir . '/session.vim'
    exe "mksession! " . b:filename
endfunction

function! LoadSession()
    let b:sessiondir = $HOME . "/.config/nvim/sessions" . getcwd()
    let b:sessionfile = b:sessiondir . "/session.vim"
    if (filereadable(b:sessionfile))
        exe 'source ' b:sessionfile
    else
        echo "No session loaded."
    endif
endfunction

" Controls the open/close Vim functions
augroup vimSessions
    " Adding automatons for when entering or leaving Vim
    au VimEnter * nested :call LoadSession()
   au VimLeave * :call MakeSession()
augroup END

I still can’t do project wide search and replace (update: use Ripgrep for project-wide find and replace)

One thing I still miss is the peerless search and replace functionality of Sublime Text. I’m going to be kinder on myself this time around. For the odd times I need to do that, at least to begin with, I’m going to do it with Sublime Text. I’m hoping in time I’ll move to doing it with Vim, with what currently still seems like largely unintelligible gibberish!

Update 24.11.19: For the last few weeks I have been using ripgrep for all my search and replace jobs. Whenever I want to find some random text string in my project, I can just type :Rg string-here and it is immediately found. Choosing an item from the list takes me not just to the file but the exact piece of text in that file. If you find yourself getting into this, I can recommend this tutorial on YouTube:


How long will my new found harmony with Vim last? Who knows. The only thing I can tell you right now is that I’m enjoying it and feel productive with it.

There are more plugins available to make front-end coding in Vim more pleasurable than ever before.

Oh, and I’m regretting selling that Happy Hacking Pro 2 keyboard back in 2015 🙁

Automate repetitive tasks by writing and running a simple shell script Tue, 24 Sep 2019 14:52:11 +0000 Occasionally I have a scenario where I want to run a few Terminal commands one after the other. For example, clear a few folders of files, run a build script and then copy the contents somewhere.

Do this enough times and it gets more than a little tiresome. Thankfully, tasks of this nature can be bundled up into a little script you can include in your project and run at any time from the Shell (Terminal).

We will run through a little example here. As much for the benefit of my future self as anything else.

In this case I’m making a little shell script which I can run by navigating to the folder it resides in and running it from the command line.

#!/bin/zsh
# Move into the right folder
cd ~/Sites/Demo/
# delete all files in the dist folder
rm -rf dist/**
# delete all files in the server folder
rm -rf /Destination/For/Build/Site/**
# Run a parcel build
parcel build index.html --public-url './' --no-source-maps --no-cache --no-minify
# Copy the build up to the proto folder
cp -R dist/ ~/Sites/Demo/ 
# Tell me it has finished
echo All done.

So what on earth is going on here? First, you need to save your Shell script with a .sh extension.

You also need the first line of the script to ‘tell’ the environment which Shell to use. I’m using ZSH but you could just as easily opt for Bash or something else. For example, if I was using Bash the first line would look like this: #!/bin/bash

After that you are just writing out the commands you want to happen, just as if you were typing them in a Terminal. I’ve added an extra line at the end: echo All done. The echo command just prints the text that follows it to the screen.
I’m using it to provide some definitive feedback in the Terminal when everything has finished.

When you have finished writing your script, before you can run it you need to set the correct permissions from the Terminal:

chmod +x ~/Sites/Demo/

Then it’s just a case of running the script from the Terminal. You can enter the path to the script from where you are, e.g. ~/Sites/Demo/, or, if you are already in the relevant folder, run it with the script name prefixed with ./, e.g. ./


Variables, working directory, tarballing:

  • You can create a variable in Shell with equals and no space around it e.g. DEST="Destination/For/Build". Don’t put spaces around the equals or it won’t work!
  • You can use those variables like this: ${DEST}
  • If you want to make use of the folder you are running your script from you can use ${PWD} (Print Working Directory).
  • If you want a nice progress bar as a file copies, instead of cp you can use rsync -ah --progress. Our command above might be re-written as rsync -ah --progress dist/ ~/Sites/Demo (first is source, second destination)
  • You can tarball files up before moving them with tar so if we wanted to zip everything up before copying it tar -czf ${PWD}/newTarFile.tgz -C sourceLocation . There I am using -C to get just the files from my source location and zipping them up into a tar file in the working directory called ‘newTarFile.tgz’
  • When you want to deflate a file at a destination, use something like tar -xvf ${DEST}/newTarFile.tgz -C ${DEST}, where you are locating the ‘newTarFile.tgz’ at the destination (see that I am using a variable as above) and deflating it into that destination. Obviously amend paths to suit your needs.
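Pulling a few of those bullets together, here is a sketch of a tar-up-and-deflate round trip. The paths are throwaway temp directories created purely for illustration; swap in your real source and destination:

```shell
#!/bin/zsh
set -e
# Throwaway folders standing in for a real project and server
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "<h1>hello</h1>" > "${SRC}/index.html"

# Tarball the contents of SRC (-C moves into the folder first)
tar -czf "${DEST}/newTarFile.tgz" -C "${SRC}" .

# Deflate the tarball at the destination
tar -xzf "${DEST}/newTarFile.tgz" -C "${DEST}"

echo "All done."
```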
Designing And Building A Progressive Web Application Without A Framework (Part 3 of 3) Sun, 08 Sep 2019 22:24:43 +0000 Part Three: making a Progressive Web App (PWA) and lessons learnt

This article is a re-post of the article I originally wrote for Smashing Magazine.


Back in the first part of this series we explained why this project came to be. Namely a desire to learn how a small web application could be made in vanilla JavaScript and to get a non-designing developer working his design chops a little.
In part two we took some basic initial designs and got things up and running with some tooling and technology choices. We covered how and why parts of the design changed and the ramifications of those changes.

In this final part we will cover turning a basic web application into a Progressive Web Application (PWA) and ‘shipping’ the application before looking at the most valuable lessons learned making the simple web application In/Out:

  • the enormous value of JavaScript array methods
  • debugging
  • when you are the only developer, you are the other developer
  • design IS development
  • ongoing maintenance and security issues
  • working on side projects without losing your mind, motivation or both
  • shipping some product beats shipping no product

So, before looking at lessons learnt, let’s look at how you turn a basic web application written in HTML, CSS and JavaScript into a Progressive Web Application (PWA).

In terms of total time spent on making this little web application, I’d guesstimate it was likely around two to three weeks. However, as it was done in snatched 30–60 minute chunks in the evenings, it actually took around a year from first commit to when I uploaded what I consider the ‘1.0’ version in August 2018. As I’d got the app ‘feature complete’, or more simply speaking, to a stage I was happy with, I anticipated a large final push. You see, I had done nothing towards making the application into a Progressive Web Application. Turns out, this was actually the easiest part of the whole process.

Making a Progressive Web Application

The good news is, when it comes to turning a little JavaScript powered app into a ‘Progressive Web App’ there are heaps of tools to make life easy. If you cast your mind back to [part one](link to part one) of this series, you’ll remember that to be a Progressive Web App means meeting a set of criteria.

To get a handle on how your web-application measures up, your first stop should probably be the Lighthouse tools of Google Chrome. You can find the Progressive Web App audit under the ‘Audits’ tab.

This is what Lighthouse told me when I first ran In/Out through it.

Only 55/100 on the first Lighthouse audit
Things can only get better!

At the outset In/Out was only getting a score of 55/100 for a Progressive Web App. However, I took it from there to 100/100 in well under an hour!

The speed of improving that score had little to do with my ability. It was simply because Lighthouse told me exactly what needed to be done!

Some examples of requisite steps: include a manifest.json file (essentially a JSON file providing meta data about the app), add a whole slew of meta tags in the head, switch out images that were inlined in the CSS for standard url referenced images, and add a bunch of home screen images.
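For reference, a bare-bones manifest.json looks something like the following. The values here are invented placeholders rather than the actual In/Out manifest; a generator will produce a fuller version along with all the icon sizes:

```json
{
  "name": "In/Out",
  "short_name": "In/Out",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```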

Making a number of home screen images, creating a manifest file and adding a bunch of meta tags might seem like a lot to do in under an hour, but there are wonderful web applications to help you build web applications. How nice is that! I used one such generator: feed it some data about your application and your logo, hit submit, and it furnishes you with a zip file containing everything you need. From there it’s just copy and paste time.

Things I’d put off for some time due to lack of knowledge, like a Service Worker, were also added fairly easily thanks to numerous blog posts and sites dedicated to service workers. With a service worker in place it meant the app could work offline, a requisite feature of a Progressive Web Application.

In short order, having worked through the Lighthouse audit recommendations, I felt like the teacher’s pet:

No, I’m not pulling a smug face, honest I’m not

The reality is that taking the application and making it a Progressive Web Application was actually incredibly straightforward.

With that final piece of development concluded I uploaded the little application to a sub-domain of my website and that was it.


Months have passed since parking up development of my little web application.

I’ve used the application casually in the months since. The reality is much of the team sport organisation I do still happens via text message. The application is, however, definitely easier for writing down who is in and out than finding a scrap of paper every game night.

So, the truth is that it’s hardly an indispensable service. Nor does it set any bars for development or design. I couldn’t tell you I’m 100% happy with it either. I just got to a point I was happy to abandon it.

But that was never the point of the exercise. I took a lot from the experience. What follows are what I consider the most important takeaways.

Design is development

At the outset, I didn’t value design enough. I started this project believing that time spent sketching with pad and pen, or designing the application in Sketch, was time that could be better spent coding. However, it turns out that when I went straight to code, I was often just being a busy fool. Exploring concepts first at the lowest possible fidelity saved far more time in the long run.

There were numerous occasions in the beginning where hours were spent getting something working in code only to realise that it was fundamentally flawed from a user experience point of view.

My opinion now is that paper and pencil are the finest planning, design and coding tools. Every significant problem faced was principally solved with paper and a pencil; the text editor merely a means of executing the solution. Without something making sense on paper, it stands no chance of working in code.

The next thing I learnt to appreciate, and I don’t know why it took so long to figure out, is that design is iterative. I’d sub-consciously bought into the myth of a Designer with a capital “D”. Someone flouncing around, holding their mechanical pencil up at straight edges, waxing lyrical about typefaces and sipping on a flat white (with soya milk, obviously) before casually birthing fully formed visual perfection into the world.

This, not unlike the notion of the ‘genius’ programmer, is a myth. If you’re new to design but trying your hand, I’d suggest you don’t get hung up on the first idea that piques your excitement. It’s so cheap to try variations so embrace that possibility. None of the things I like about the design of In/Out were there in the first designs.

I believe it was the novelist, Michael Crichton, who coined the maxim, “Books are not written — they’re rewritten”. Accept that every creative process is essentially the same. Be aware that trusting the process lessens the anxiety and practice will refine your aesthetic understanding and judgement.

You are the other dev on your project

I’m not sure if this is particular to projects that only get worked on sporadically but I made the following foolhardy assumption: “I don’t need to document any of this because it’s just me, and obviously I will understand it, because I wrote it.”

Nothing could be further from the truth!

There were evenings where, for the 30 minutes I had to work on the project, I did nothing more than try to understand a function I had written six months ago. The main reason code re-orientation took so long was a lack of quality comments and poorly named variables and function arguments.

I’m very diligent in commenting code in my day job, always conscientious that someone else might need to make sense of what I’m writing. However, in this instance I was the someone else. Do you really think you will remember how that block of code you wrote works in six months’ time? You won’t. Trust me on this: take some time out and comment that thing up!


When you hit bugs and you have written all the code, it’s not unfair to suggest the error is likely originating between the keyboard and chair. However, before assuming that, I would suggest you test even your most basic assumptions. For example, I remember taking in excess of two hours to fix a problem I had assumed was due to my code; in iOS I just couldn’t get my input box to accept text entry. I don’t remember why it hadn’t stopped me before but I do remember my frustration with the issue.

It turned out to be due to a still yet-to-be-fixed bug in Safari. In Safari, if you have this:

* {
  user-select: none;
}

In your style sheet, input boxes won’t take any input. You can work around this with:

* {
  user-select: none;
}

input[type] {
  user-select: text;
}
Which is the approach I take in my “App Reset” CSS reset.

However, the really frustrating part of this was that I had learned this already and subsequently forgotten it. When I finally got around to checking the WebKit bug tracker whilst troubleshooting the issue, I found I had written a workaround in the bug report thread more than a year ago, complete with reduction!

Want to build with data? Learn JavaScript Array methods

Perhaps the single biggest advance my JavaScript skills took by undergoing this app-building exercise was getting familiar with JavaScript Array methods. I now use them daily for all my iteration and data manipulation needs. I cannot emphasise enough how useful methods like map(), filter(), every(), findIndex(), find() and reduce() are. You can solve virtually any data problem with them. If you don’t already have them in your arsenal, bookmark now and dig in as soon as you are able. My own run-down of my favoured array methods is documented here.

ES6 has introduced other time savers for manipulating arrays, such as Set, Rest and Spread. Indulge me while I share one example; there used to be a bunch of faff if you wanted to remove duplicates from even a simple flat array. Not any more.

Consider this simple example of an Array with the duplicate entry, “Mr Pink”:

let myArray = [
  "Mr Orange",
  "Mr Pink",
  "Mr Brown",
  "Mr White",
  "Mr Blue",
  "Mr Pink"
];

To get rid of the duplicates with ES6 JavaScript you can now just do:

let deDuped = [ Set(myArray)]; // deDuped logs ["Mr Orange", "Mr Pink", "Mr Brown", "Mr White", "Mr Blue"]

Something that used to require hand-rolling a solution or reaching for a library is now baked into the language. Admittedly, on such a short Array that may not sound like such a big deal, but imagine how much time that saves when looking at arrays with hundreds of entries and duplicates.

Maintenance & Security

Anything you build that makes any use of NPM, even if just for build tools, carries the possibility of being vulnerable to security issues. GitHub does a good job of keeping you aware of potential problems but there is still some burden of maintenance.

For something that is a mere side-project, this can be a bit of a pain in the months and years that follow active development.

The reality is that every time you update dependencies to fix a security issue, you introduce the possibility of breaking your build.

For months my package.json looked like this:

{
  "name": "In Out",
  "version": "1.0.0",
  "description": "simple utility to see who's in and who's out",
  "main": "index.js",
  "author": "Ben Frain",
  "license": "MIT",
  "dependencies": {
    "gulp": "^3.9.1",
    "postcss": "^6.0.22",
    "postcss-assets": "^5.0.0"
  },
  "devDependencies": {
    "autoprefixer": "^8.5.1",
    "browser-sync": "^2.24.6",
    "cssnano": "^4.0.4",
    "del": "^3.0.0",
    "gulp-htmlmin": "^4.0.0",
    "gulp-postcss": "^7.0.1",
    "gulp-sourcemaps": "^2.6.4",
    "gulp-typescript": "^4.0.2",
    "gulp-uglify": "^3.0.1",
    "postcss-color-function": "^4.0.1",
    "postcss-import": "^11.1.0",
    "postcss-mixins": "^6.2.0",
    "postcss-nested": "^3.0.0",
    "postcss-simple-vars": "^4.1.0",
    "typescript": "^2.8.3"
  }
}

And by June 2019 I was getting these warnings from GitHub.

[Screenshot: GitHub security alert warnings]
Everyone loves to deal with security issues (not)

None were related to plugins I was using directly, they were all sub-dependencies of the build tools I had used. Such is the double-edged sword of JavaScript packages. In terms of the app itself, there was no problem with In/Out; that wasn’t using any of the project dependencies. But as the code was on GitHub, I felt duty bound to try and fix things up.

It’s possible to update packages manually, with a few choice changes to the package.json. However both Yarn and NPM have their own update commands. I opted to run yarn upgrade-interactive which gives you a simple means to update things from the terminal.

[Screenshot: yarn upgrade-interactive in the terminal]
Quite fancy, especially for the command line

Seems easy enough, there’s even a little coloured key to tell you which updates are most important.

You can add the --latest flag to update to the very latest major version of the dependencies, rather than just the latest patched version. In for a penny…

Trouble is, things move fast in the JavaScript package world, so updating a few packages to the latest version and then attempting a build resulted in this:

[Screenshot: the resulting build errors]

As such, I rolled back my package.json file and tried again, this time without the --latest flag. That solved my security issues. Not the most fun I’ve had on a Monday evening, though, I’ll be honest.

That touches on an important part of any side project. Being realistic with your expectations.

Side projects

I don’t know if you are the same but I’ve found that giddy optimism and excitement make me start projects and, if anything does, embarrassment and guilt make me finish them.

It would be a lie to say the experience of making this tiny application in my spare time was fun-filled. There were occasions I wished I’d never opened my mouth about it to anyone. But now it is done, I am 100% convinced it was worth the time invested.

That said, it’s possible to mitigate frustration with such a side project by being realistic about how long it will take to understand and solve the problems you face. Only have 30 mins a night, a few nights a week? You can still complete a side project; just don’t be disgruntled if your pace feels glacial. If things can’t enjoy your full attention, be prepared for a slower and steadier pace than you are perhaps used to. That’s true whether it’s coding, completing a course, learning to juggle or writing a series of articles about why it took so long to write a small web application!

Simple goal setting

You don’t need a fancy process for goal setting. But it might help to break things down into small/short tasks. Things as simple as ‘write CSS for drop-down menu’ are perfectly achievable in a limited space of time. Whereas ‘research and implement design pattern for state management’ is probably not. Break things down. Then, just like Lego, the tiny pieces go together.

Thinking about this process as chipping away at the larger problem, I’m reminded of the famous Bill Gates quote:

Most people overestimate what they can do in one year and underestimate what they can do in ten years.

This from a man that’s helping to eradicate polio. Bill knows his stuff. Listen to Bill, y’all.

Shipping something is better than shipping nothing

Before ‘shipping’ this web application, I reviewed the code and was thoroughly disheartened.

Although I had set out on this journey from a point of complete naivety and inexperience, I had made some decent choices when it came to how I might architect (if you’ll forgive so grand a term) the code. I’d researched and implemented a design pattern and enjoyed everything that pattern had to offer. Sadly, as I got more desperate to conclude the project, I failed to maintain discipline. The code as it stands is a real hodgepodge of approaches and rife with inefficiencies.

In the months since I’ve come to realise that those shortcomings don’t really matter. Not really.

I’m a fan of this quote from Helmuth von Moltke.

no plan of operations extends with any certainty beyond the first contact with the main hostile force

That’s been paraphrased as, “no plan survives first contact with the enemy”. Perhaps we can boil it down further and simply go with “shit happens”?

I can summarise my coming to terms with what shipped via the following analogy.

If a friend announced they were going to try and run their first marathon, getting over the finish line would be all that mattered – I wouldn’t berate them over their finishing time.

I didn’t set out to write the best web application. The remit I set myself was simply to design and make one.

More specifically, from a development perspective, I wanted to learn the fundamentals of how a web application was constructed. From a design point of view, I wanted to try and work through some (albeit simple) design problems for myself. Making this little application met those challenges and then some. The JavaScript for the entire application was just 5KB (gzipped). A small file size I would struggle to get to with any framework. Except maybe Svelte.

If you are setting yourself a challenge of this nature, and expect at some point to ‘ship’ something, write down at the outset why you are doing it. Keep those reasons at the forefront of your mind and be guided by them. Everything is ultimately some sort of compromise. Don’t let lofty ideals paralyse you from finishing what you set out to do.


Overall, as it comes up to a year since I have worked on In/Out, my feelings fall broadly into three areas: things I regretted, things I would like to improve/fix and future possibilities.

Things I regretted

As already alluded to, I was disappointed I hadn’t stuck to what I considered a more elegant method of changing state for the application and rendering it to the DOM. The observer pattern, as discussed in the second part of this series, which solved so many problems in a predictable manner was ultimately cast aside as ‘shipping’ the project became a priority.

I was embarrassed by my code at first but in the following months I have grown more philosophical. If I hadn’t used more pedestrian techniques later on, there is a very real possibility the project would never have concluded. Getting something out into the world that needs improving still feels better than it never being birthed into the world at all.

Improving In/Out

Beyond choosing semantic markup, I’d made no affordances for accessibility. When I built In/Out I was confident with standard web page accessibility but not sufficiently knowledgeable to tackle an application. I’ve done far more work/research in that area now, so I’d enjoy taking the time to do a decent job of making this application more accessible.

The implementation for the revised design of ‘Add Person’ functionality was rushed. It’s not a disaster, just a bit rougher than I would like. It would be nice to make that slicker.

I also made no consideration for larger screens. It would be interesting to consider the design challenges of making it work at larger sizes, beyond simply making it a tube of content.


Using localStorage worked for my simplistic needs but it would be nice to have a ‘proper’ data store so it wasn’t necessary to worry about backing up the data. Adding log-in capability would also open up the possibility of sharing game organisation with another individual. Or maybe every player could just mark whether they were playing themselves? It’s amazing how many avenues to explore you can envisage from such simple and humble beginnings.

SwiftUI for iOS app development is also intriguing. For someone who has only ever worked with web languages, at first glance, SwiftUI looks like something I’m now emboldened to try. I’d likely try rebuilding In/Out with SwiftUI – just to have something specific to build and compare the development experience and results.

And so, it’s time to wrap things up and give you the TL;DR version of all this.

If you want to learn how something works on the Web, I’d suggest skipping the abstractions. Ditch the frameworks, whether that’s CSS or JavaScript, until you really understand what they are doing for you.

Design is iterative, embrace that process.

Solve problems in the lowest fidelity medium at your disposal. Don’t go to code if you can test the idea in Sketch. Don’t draw it in Sketch if you can use pen and paper. Write out logic first. Then write it in code.

Be realistic but never despondent. Developing a habit of chipping away at something for as little as 30 minutes a day can get results. That fact is true whatever form your quest takes.

Designing And Building A Progressive Web Application Without A Framework (Part 2 of 3)
Sun, 08 Sep 2019 22:00:58 +0000

Part Two: Development

Quick Summary

This article is a re-post of the article I originally wrote for Smashing Magazine.

In the first post of this series, your author, a JavaScript novice, had set themselves the goal of designing and coding a basic web application. The ‘app’ was to be called ‘In/Out’ – an application to organise team-based games. In this post we are going to concentrate on how the application ‘In/Out’ actually got made.


The raison d’être of this adventure was to push your humble author a little in the disciplines of visual design and JavaScript coding. The functionality of the application I’d decided to build was not dissimilar to a ‘to do’ application. It is important to stress that this wasn’t an exercise in original thinking. The destination was far less important than the journey.

As the primary reason for starting this journey was learning some fundamentals of how JavaScript applications actually worked, I had decided early on that there would be no leaning on frameworks (React, Vue et al).

It remains my conviction that abstraction layers are most beneficial when you understand what is being abstracted. At the outset, I didn’t even understand what exactly a framework would be handling for me!

I would be leaning on some front-end development tooling however. I was planning to employ TypeScript for the JavaScript side of things and PostCSS to aid in style sheet authoring. These two choices allowed for strong static analysis of my code – something I have only ever had positive outcomes from.

Here is a summary of what we will cover in this post:

  • the project set-up and why I opted for Gulp as a build tool
  • application design patterns and what they mean in practice
  • how to store and visualise application state
  • how CSS was scoped to components
  • what UI/UX niceties were employed to make things more ‘app like’
  • how the remit changed through iteration

Let’s start with build tools.

Build tools

In order to get my basic tooling of TypeScript and PostCSS up and running and create a decent development experience, I would need a build system.

In my day job, for the last five years or so, I have been building interface prototypes in HTML/CSS and to a lesser extent, JavaScript. Until recently, I have used Gulp with any number of plugins almost exclusively to achieve my fairly humble build needs.

Typically I need to process CSS, convert JavaScript or TypeScript to more widely supported JavaScript, and occasionally, carry out related tasks like minifying code output and optimising assets. Using Gulp has always allowed me to solve those issues with aplomb.

For those unfamiliar, Gulp lets you write JavaScript to do ‘something’ to files on your local file system.
To use Gulp, you typically have a single file (called gulpfile.js) in the root of your project. This JavaScript file allows you to define tasks as functions. You can add third-party ‘Plugins’, which are essentially further JavaScript functions, that deal with specific tasks.

An example Gulp task

An example Gulp task might be using a plugin to harness PostCSS to process to CSS when you change an authoring style sheet (gulp-postcss). Or compiling TypeScript files to vanilla JavaScript (gulp-typescript) as you save them. Here is a simple example of how you write a task in Gulp. This task uses the ‘del’ gulp plugin to delete all the files in a folder called ‘build’:

var gulp = require("gulp");
var del = require("del");

gulp.task("clean", function() {
    return del(["build/**/*"]);
});

The require assigns the del plugin to a variable. Then the gulp.task method is called. We name the task with a string as the first argument (“clean”) and then run a function, which in this case uses the ‘del’ method to delete the folder passed to it as an argument. The asterisk symbols there are ‘glob’ patterns which essentially match any file in any folder inside build.

Gulp tasks can get heaps more complicated but, in essence, that is the mechanics of how things are handled. The truth is, with Gulp, you don’t need to be a JavaScript wizard to get by; grade 3 copy-and-paste skills are all you need.

I’d stuck with Gulp as my default build tool/task runner for all these years with a policy of ‘if it ain’t broke; don’t try and fix it’.

However, I was worried I was getting stuck in my ways. It’s an easy trap to fall into. First you start holidaying in the same place every year, then refusing to adopt any new fashion trends, before eventually and steadfastly refusing to try out any new build tools.

I’d heard plenty of chatter on the Internets about ‘Webpack’ and thought it was my duty to try a project using the new-fangled toast of the front-end developer cool-kids.


I distinctly remember skipping over to the site with keen interest. The first explanation of what Webpack is and does started like this:

import bar from './bar';

Say what? In the words of Dr Evil, “Throw me a frickin’ bone here, Scott”.

I know it’s my own hang-up to deal with but I’ve developed a revulsion to any coding explanations that mention ‘foo’, ‘bar’ or ‘baz’. That aside, the complete lack of succinctly describing what Webpack was actually for had me suspecting it perhaps wasn’t for me.

Digging a little further into the Webpack documentation, a slightly less opaque explanation was offered, “At its core, webpack is a static module bundler for modern JavaScript applications”.

Hmmm. Static module bundler. Was that what I wanted? I wasn’t convinced. I read on but the more I read, the less clear I was. Back then, concepts like dependency graphs, hot module reloading and entry points were essentially lost on me.

A couple of evenings of researching Webpack later, I abandoned any notion of using it.

I’m sure in the right situation and more experienced hands, Webpack is immensely powerful and appropriate but it seemed like complete overkill for my humble needs. Module bundling, tree-shaking and hot-module reloading sounded great; I just wasn’t convinced I needed them for my little ‘app’.

So, back to Gulp then.

On the theme of not changing things for change’s sake, another piece of technology I wanted to evaluate was Yarn over NPM for managing project dependencies. Until that point I had always used NPM, and Yarn was getting touted as a better, faster alternative.

I don’t have much to say about Yarn other than if you are currently using NPM and everything is OK, you don’t need to bother trying Yarn.

I started my Gulp file with basic functionality to get up and running.

A ‘default’ task would watch the ‘source’ folders of style sheets and TypeScript files and compile them out to a build folder along with the basic HTML and associated sourcemaps.

I got BrowserSync working with Gulp too. I might not know what to do with a Webpack configuration file but that didn’t mean I was some kind of animal. Having to manually refresh the browser while iterating with HTML/CSS is sooooo 2010 and BrowserSync gives you that short feedback and iteration loop that is so useful for front-end coding.

Here is the basic gulp file as of 11.6.2017

You can see how I tweaked the Gulpfile nearer to the end of shipping, adding minification with uglify.

Project structure

By consequence of my technology choices, some elements of code organisation for the application were defining themselves. I would have a gulpfile.js in the root of my project, a node_modules folder (where NPM stores plugin code), a preCSS folder for my authoring style sheets, a ts folder for my TypeScript files, and a build folder for the compiled code to live.

My idea was to have an index.html that contained the ‘shell’ of the application, including any non-dynamic HTML structure and then links to the styles and the JavaScript file that would make the application work. On disk, it would look something like this:
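Based on the folders just described, the layout was roughly as follows (a sketch reconstructed from that description; the exact structure may have differed):

```
project/
├── gulpfile.js
├── index.html      (the application ‘shell’)
├── node_modules/   (plugin code)
├── preCSS/         (authoring style sheets)
├── ts/             (TypeScript files)
└── build/          (compiled HTML, CSS and JavaScript)
```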


Configuring BrowserSync to look at that build folder meant I could point my browser at localhost:3000 and all was good.

With a basic build system in place, files organisation settled and some basic designs to make a start with, I had run out of procrastination fodder I could legitimately use to prevent me from actually building the thing!

Writing an application

When it came to actually writing the application, the two big conceptual challenges I needed to understand were:

  1. How to represent the data for an application in a manner that could be easily extended and manipulated.
  2. How to make the UI react when data was changed from user input.

One of the simplest ways to represent a data structure in JavaScript is with object notation. That sentence reads a little computer science-y. More simply, an ‘object’ in JavaScript lingo is a handy way of storing data.

Consider this JavaScript object assigned to a variable called ioState (for In/Out State):

var ioState = {
    Count: 0, // Running total of how many players
    RosterCount: 0, // Total number of possible players
    ToolsExposed: false, // Whether the UI for the tools is showing
    Players: [], // A holder for the players
};

If you don’t really know JavaScript that well, you can probably at least grasp what’s going on: each line inside the curly braces is a property (or ‘key’ in JavaScript parlance) and value pair.

The net result is that using that kind of data structure you can get, and set, any of the keys of the object. For example, if I want to set the count to 7:

ioState.Count = 7;

If I wanted to set a piece of text to that value, the notation works like this:

aTextNode.textContent = ioState.Count;

You can see that getting values and setting values on that state object is simple on the JavaScript side of things. However, reflecting those changes in the User Interface is less so. This is the main area where frameworks and libraries seek to abstract away the pain.

In general terms, when it comes to dealing with updating the user interface based upon state, it’s preferable to avoid querying the DOM, as this is generally considered a sub-optimal approach.

Consider the In/Out interface. It’s typically showing a list of potential players for a game. They are vertically listed, one under the other, down the page.

Perhaps each player is represented in the DOM with a label wrapping a checkbox input. This way, clicking a player would toggle the player to ‘In’ by virtue of the label making the input ‘checked’.

To update our interface, we might have a ‘listener’ on each input element in the JavaScript. On a click or change, the function queries the DOM and counts how many of our player inputs are checked. On the basis of that count we would then update something else in the DOM to show the user how many players are checked.

Let’s consider the cost of that basic operation. We are listening on multiple DOM nodes for the click/check of an input, then querying the DOM to see how many of a particular DOM type are checked, then writing something into the DOM to show the user, UI wise, the number of players we just counted.

The alternative would be to hold the application state as a JavaScript object in memory. A button/input click in the DOM could merely update the JavaScript object and then, based on that change in the JavaScript object, do a single-pass update of all the interface changes that are needed. We could skip querying the DOM to count the players as the JavaScript object would already hold that information.

So. Using a JavaScript object structure for the state seemed simple but flexible enough to encapsulate the application state at any given time. The theory of how this could be managed seemed sound enough too – this must be what phrases like ‘one way data flow’ were all about? However, the first real trick would be in creating some code that would automatically update the UI based on any changes to that data.

The good news is that smarter people than I have already figured this stuff out (thank goodness!). People have been perfecting approaches to this kind of challenge since the dawn of applications. This category of problems is the bread and butter of ‘design patterns’. The moniker ‘design pattern’ sounded esoteric to me at first but after digging just a little it all started to sound less computer science and more common sense.

Design Patterns

A design pattern, in computer science, is a pre-defined and proven way of solving a common technical challenge. Think of design patterns as the coding equivalent of a cooking recipe.

Observer pattern

Typically design patterns are split into three groups: Creational, Structural and Behavioural. I was looking for something Behavioural that helped to deal with communicating changes around the different parts of the application.

When reading the opening description of the ‘Observer’ pattern in Learning JavaScript Design Patterns I was pretty sure it was the pattern for me. It is described thus:

The Observer is a design pattern where an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state.

When a subject needs to notify observers about something interesting happening, it broadcasts a notification to the observers (which can include specific data related to the topic of the notification).

The key to my excitement was that this seemed to offer some way of things updating themselves when needed.

Suppose the user clicked a player named “Betty” to select that she was ‘In’ for the game. A few things might need to happen in the UI:

  1. Add 1 to the playing count
  2. Remove Betty from the ‘Out’ pool of players
  3. Add Betty to the ‘In’ pool of players

The app would also need to update the data that represented the UI. What I was very keen to avoid was this:

playerName.addEventListener("click", playerToggle);

function playerToggle() {
    if (inPlayers.includes( {
        // ...manually move the player to the 'Out' list and patch the DOM
    } else {
        // ...manually move the player to the 'In' list and patch the DOM
    }
}

The aim was to have an elegant data flow that updated what was needed in the DOM when and if the central data was changed.

With an Observer pattern, it was possible to send out updates to the state and therefore the user interface quite succinctly. Here is an example, the actual function used to add a new player to the list:

function itemAdd(itemString: string) {
    let currentDataSet = getCurrentDataSet();
    var newPerson = new makePerson(itemString);
    io.items[currentDataSet].EventData.splice(0, 0, newPerson);
    io.notify({
        items: io.items,
    });
}

The part relevant to the Observer pattern there being the io.notify method. As that shows us modifying the items part of the application state, let me show you the observer that listened for changes to ‘items’:

io.addObserver({
    props: ["items"],
    callback: function renderItems() {
        // Code that updates anything to do with items...
    }
});

We have a notify method that makes changes to the data and then Observers to that data that respond when properties they are interested in are updated.

With this approach, the app could have observers watching for changes in any property of the data and run a function whenever a change occurred.
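The io object itself isn’t shown in full here, so by way of illustration, a minimal notify/observer store along these lines could be sketched as follows (a simplified reconstruction of the pattern, not the actual In/Out implementation):

```javascript
// A minimal, illustrative observer store (not the actual In/Out code)
var store = {
  state: { items: [] },
  observers: [],
  // Register an observer: { props: ["items"], callback: function () {...} }
  addObserver: function (observer) {
    this.observers.push(observer);
  },
  // Merge changes into the state, then run any interested observers
  notify: function (changes) {
    var self = this;
    Object.keys(changes).forEach(function (key) {
      self.state[key] = changes[key];
    });
    this.observers.forEach(function (observer) {
      var interested = observer.props.some(function (prop) {
        return changes.hasOwnProperty(prop);
      });
      if (interested) {
        observer.callback(self.state);
      }
    });
  }
};
```

With that in place, store.addObserver({ props: ["items"], callback: renderItems }) followed by store.notify({ items: newItems }) runs renderItems with the fresh state, without the caller knowing anything about the DOM work involved.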

There was now an approach for updating the UI effectively based on state. Peachy. However, this still left me with two glaring issues.

One was how to store the state across page reloads/sessions. The other was the fact that, despite the UI working, visually it just wasn’t very ‘app like’. For example, if a button was pressed the UI instantly changed on screen. It just wasn’t particularly compelling.

Let’s deal with the storage side of things first.

Saving state

My primary interest from a development side entering into this centred on understanding how app interfaces could be built and made interactive with JavaScript. How to store and retrieve data from a server or tackle user-authentication and logins was ‘out of scope’.

Therefore, instead of hooking up to a web service for the data storage needs, I opted to keep all data on the client. There are a number of web platform methods of storing data on a client. I opted for localStorage.

The API for localStorage is incredibly simple. You set and get data like this:

// Set something
localStorage.setItem("yourKey", "yourValue");
// Get something
localStorage.getItem("yourKey");

LocalStorage has a setItem method that you pass two strings to. The first is the name of the key you want to store the data with and the second string is the actual string you want to store. The getItem method takes a string as an argument that returns to you whatever is stored under that key in localStorage. Nice and simple.

However, amongst the reasons to not use localStorage is the fact that everything has to be saved as a ‘string’. This means you can’t directly store something like an array or object. For example, try running these commands in your browser console:

// Set something
localStorage.setItem("myArray", [1, 2, 3, 4]);
// Get something
localStorage.getItem("myArray"); // Logs "1,2,3,4"

Even though we tried to set the value of ‘myArray’ as an array, when we retrieved it, it had been stored as a string (note the quote marks around ‘1,2,3,4’).

You can certainly store objects and arrays with localStorage but you need to be mindful that they need converting back and forth from strings.

So, in order to write state data into localStorage it was written to a string with the JSON.stringify() method like this:

const storage = window.localStorage;
storage.setItem("players", JSON.stringify(io.items));

When the data needed retrieving from localStorage, the string was turned back into usable data with the JSON.parse() method like this:

const players = JSON.parse(storage.getItem("players"));

Using localStorage meant everything was on the client and that meant no 3rd party services or data storage concerns.

Data was now persisting across refreshes and sessions – yay! The bad news was that localStorage does not survive a user emptying their browser data. When someone did that, all their In/Out data would be lost. That’s a serious shortcoming.

Despite the fragility of saving data locally on a user’s device, hooking up to a service or database was resisted. Instead, the issue was side-stepped by offering a ‘load/save’ option. This would allow any user of In/Out to save their data as a JSON file which could be loaded back into the app if needed. This worked well on Android but far less elegantly for iOS. On an iPhone, it resulted in a splurge of text on screen like this:

unformatted text on iPhone screen
The shambles that is saving data via iOS

As you can imagine, I was far from alone in berating Apple via WebKit about this shortcoming. The relevant bug was here:

At the time of writing this bug has a solution and patch but it has yet to make its way into iOS Safari. Allegedly, iOS 13 fixes it, but that’s in beta as I write.

So, for my minimum viable product, that was storage addressed. Now it was time to attempt to make things more ‘app like’!


It turns out, after many discussions with many people, that defining exactly what ‘app like’ means is quite difficult.

Ultimately, I settled on ‘app-like’ being synonymous with a visual slickness usually missing from the web. When I think of the apps that feel good to use they all feature motion. Not gratuitous, but motion that adds to the story of your actions. It might be the page transitions between screens, the manner in which menus pop into existence. It’s hard to describe in words but most of us know it when we see it.

The first piece of visual flair needed was shifting player names up or down from ‘In’ to ‘Out’ and vice-versa when selected. Making a player instantly move from one section to the other was straightforward but certainly not ‘app-like’. An animation as a player name was clicked would hopefully emphasise the result of that interaction – the player moving from one category to another.

Like many of these kinds of visual interactions, their apparent simplicity belies the complexity involved in actually getting it working well.

It took a few iterations to get the movement right but the basic logic was this:

  • Once a ‘player’ is clicked, capture where that player is, geometrically, on the page.
  • Measure how far away the top of the target area is if the player is going up (‘In’), and how far away the bottom is if going down (‘Out’).
  • If going up, a space equal to the height of the player row needs to be left as the player moves up, and the players above should collapse downwards at the same rate it takes the player to travel up into the space vacated by any existing ‘In’ players coming down.
  • If a player is going ‘Out’ and moving down, everything else needs to move up into the space left, and the player needs to end up below any current ‘Out’ players.

Phew. It was trickier than I thought in English, never mind JavaScript!
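The geometry in those steps boils down to a simple calculation once you assume fixed-height slats, as the refactoring note later in this piece suggests. This is a sketch of the idea only; the function name and the 48px slat height are hypothetical, not the shipped code:

```javascript
// Pixels a slat must translate to move from one index to another in the
// list; negative values mean upwards ('Out' → 'In').
const SLAT_HEIGHT = 48; // hypothetical fixed row height

function moveDistance(currentIndex, targetIndex) {
  return (targetIndex - currentIndex) * SLAT_HEIGHT;
}

console.log(moveDistance(5, 1)); // → -192 (up four slats, into 'In')
console.log(moveDistance(1, 5)); // → 192 (down four slats, into 'Out')
```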

There were additional complexities to consider and trial such as transition speeds. At the outset, it wasn’t obvious whether a constant speed of movement (e.g. 20px per 20ms), or a constant duration for the movement (e.g. 0.2s) would look better.

The former was slightly more complicated as the speed needed to be computed ‘on the fly’ based upon how far the player needed to travel (greater distance required a longer transition duration set).
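That ‘on the fly’ computation can be sketched like this (the numbers are illustrative, not the values In/Out shipped with):

```javascript
// Constant-speed approach: the transition duration has to be derived
// from however far the player needs to travel.
const PX_PER_MS = 1; // i.e. 20px per 20ms

function transitionDuration(distancePx) {
  return Math.abs(distancePx) / PX_PER_MS; // duration in milliseconds
}

console.log(transitionDuration(96));   // → 96
console.log(transitionDuration(-480)); // → 480
```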

However, it turned out that a constant transition duration was not just simpler in code, it actually produced a more favourable effect. The difference was subtle but these are the kind of choices you can only determine once you have seen both options.

Looking at the code now, I can appreciate that on something beyond my humble app, this functionality could almost certainly be written more effectively. Given that the app would know the number of players and know the fixed height of the slats, it should be entirely possible to make all distance calculations in the JavaScript alone, without any DOM reading.

It’s not that what was shipped doesn’t work, it’s just that it isn’t the kind of code solution you would showcase on the Internet. Oh wait.

Other ‘app like’ interactions were much easier to pull off. Instead of menus simply snapping in and out with something as simple as toggling a display property, a lot of mileage was gained by simply exposing them with a little more finesse. It was still triggered simply but CSS was doing all the heavy lifting:

.io-EventLoader {
    position: absolute;
    top: 100%;
    margin-top: 5px;
    z-index: 100;
    width: 100%;
    opacity: 0;
    transition: all 0.2s;
    pointer-events: none;
    transform: translateY(-10px);
    [data-evswitcher-showing="true"] & {
        opacity: 1;
        pointer-events: auto;
        transform: none;
    }
}
Then, when the data-evswitcher-showing="true" attribute was toggled on a parent element, the menu would fade in, transform back into its default position and pointer events would be re-enabled so the menu could receive clicks.

ECSS style sheet methodology

You’ll notice in that prior code that from an authoring point of view, CSS overrides are being nested within a parent selector. That’s the way I always favour writing UI style sheets; a single source of truth for each selector and any overrides for that selector encapsulated within a single set of braces. It’s a pattern that requires use of a CSS processor (Sass, PostCSS, LESS, Stylus et al) but I feel is the only positive way to make use of nesting functionality.

I’d cemented this approach in my book, Enduring CSS and despite there being a plethora of more involved methods available to write CSS for interface elements, ECSS has served me and the large development teams I work with well since the approach was first documented way back in 2014!

Partialling the TypeScript

Even without a CSS processor or superset language like Sass, CSS has had the ability to import one or more CSS files into another with the @import directive:

@import "other-file.css";

When beginning with JavaScript I was surprised there was no equivalent. Whenever a code file gets longer than a screen or so high, it always feels like splitting it into smaller pieces would be beneficial.

Another bonus to using TypeScript was that it has a beautifully simple way of splitting code into files and importing them when needed.

This capability pre-dated native JavaScript modules and was a great convenience feature. When TypeScript was compiled it stitched it all back into a single JavaScript file. It meant it was possible to easily break up the application code into manageable partial files for authoring and import them into the main file. The top of the main inout.ts looked like this:

/// <reference path="defaultData.ts" />
/// <reference path="splitTeams.ts" />
/// <reference path="deleteOrPaidClickMask.ts" />
/// <reference path="repositionSlat.ts" />
/// <reference path="createSlats.ts" />
/// <reference path="utils.ts" />
/// <reference path="countIn.ts" />
/// <reference path="loadFile.ts" />
/// <reference path="saveText.ts" />
/// <reference path="observerPattern.ts" />
/// <reference path="onBoard.ts" />

This simple house-keeping and organisation task helped enormously.

Multiple events

At the outset, I felt that from a functionality point of view, a single event, like “Tuesday Night Football”, would suffice. In that scenario, if you loaded In/Out up you just added/removed or moved players in or out and that was that. There was no notion of multiple events.

I quickly decided that, even going for a minimum viable product, this would make for a pretty limited experience. What if somebody organised two games on different days, with a different roster of players? Surely In/Out could/should accommodate that need?

It didn’t take too long to re-shape the data to make this possible and amend the methods needed to load in a different set.

At the outset, the default data set looked something like this:

var defaultData = [
    { name: "Daz", paid: false, marked: false, team: "", in: false },
    { name: "Carl", paid: false, marked: false, team: "", in: false },
    { name: "Big Dave", paid: false, marked: false, team: "", in: false },
    { name: "Nick", paid: false, marked: false, team: "", in: false },
];

An array containing an object for each player.

After factoring in multiple events it was amended to look like this:

var defaultDataV2 = [
    {
        EventName: "Tuesday Night Footy",
        Selected: true,
        EventData: [
            { name: "Jack", marked: false, team: "", in: false },
            { name: "Carl", marked: false, team: "", in: false },
            { name: "Big Dave", marked: false, team: "", in: false },
            { name: "Nick", marked: false, team: "", in: false },
            { name: "Red Boots", marked: false, team: "", in: false },
            { name: "Gaz", marked: false, team: "", in: false },
            { name: "Angry Martin", marked: false, team: "", in: false },
        ],
    },
    {
        EventName: "Friday PM Bank Job",
        Selected: false,
        EventData: [
            { name: "Mr Pink", marked: false, team: "", in: false },
            { name: "Mr Blonde", marked: false, team: "", in: false },
            { name: "Mr White", marked: false, team: "", in: false },
            { name: "Mr Brown", marked: false, team: "", in: false },
        ],
    },
    {
        EventName: "WWII Ladies Baseball",
        Selected: false,
        EventData: [
            { name: "C Dottie Hinson", marked: false, team: "", in: false },
            { name: "P Kit Keller", marked: false, team: "", in: false },
            { name: "Mae Mordabito", marked: false, team: "", in: false },
        ],
    },
];
The new data was an array with an object for each event. Then in each event was an EventData property that was an array with player objects in as before.
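With that shape, switching events is a matter of finding whichever one is flagged as selected. This helper is hypothetical, a guess at the idea rather than the shipped code:

```javascript
// Hypothetical helper: pull the player list for whichever event is selected.
function selectedEventData(events) {
  const current = events.find((event) => event.Selected);
  return current ? current.EventData : [];
}

const events = [
  { EventName: "Tuesday Night Footy", Selected: false, EventData: [{ name: "Jack" }] },
  { EventName: "Friday PM Bank Job", Selected: true, EventData: [{ name: "Mr Pink" }] },
];

console.log(selectedEventData(events)[0].name); // → Mr Pink
```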

It took much longer to re-consider how the interface could best deal with this capability.

From the outset, the design had always been very sterile. Considering this was also supposed to be an exercise in design, I didn’t feel I was being brave enough. So a little more visual flair was added, starting with the header. This is what I mocked up in Sketch:

Application design screen on iPhone
A more adventurous design

It wasn’t going to win awards but it was certainly more arresting than where it started.

Aesthetics aside, it wasn’t until somebody else pointed it out that I appreciated the big plus icon in the header was very confusing. Most people thought it was a way to add another event. In reality, it switched to an ‘Add Player’ mode with a fancy transition that let you type in the name of the player in the same place the event name was currently.

This was another instance where fresh eyes were invaluable. It was also an important lesson in letting go. The honest truth was I had held on to the input mode transition in the header because I felt it was cool and clever. However, the fact was it was not serving the design as a whole.

This was changed in the live version. Instead, the header just deals with events – a more common scenario. Meanwhile, adding players is done from a sub-menu. This gives the app a much more understandable hierarchy.

The other lesson learned here was that whenever possible, it’s hugely beneficial to get candid feedback from peers. If they are good and honest, they won’t let you give yourself a pass!

Summary: My code stinks

Right. So far, so normal tech-adventure retrospective piece. These things are ten a penny on Medium! The formula goes something like this: the dev details how they smashed down all obstacles to release a finely tuned piece of software onto the Internets, then picks up an interview at Google or gets acqui-hired somewhere. However, the truth of the matter is that I was a first-timer at this app-building malarkey, so the code ultimately shipped as the ‘finished’ application stunk to high heaven!

For example, the Observer pattern implementation used worked very well. I was organised and methodical at the outset but that approach ‘went south’ as I became more desperate to finish things off. Like a serial dieter, old familiar habits crept back in and the code quality subsequently dropped.

Looking now at the code shipped, it is a less than ideal hodge-podge of clean observer pattern and bog-standard event listeners calling functions. In the main inout.ts file there are over 20 querySelector method calls; hardly a poster child for modern application development!

I was pretty sore about this at the time, especially as at the outset I was aware this was a trap I didn’t want to fall into. However, in the months that have since passed I’ve become more philosophical about it.

The final post in this series reflects on finding the balance between silvery-towered code idealism and getting things shipped. It also covers the most important lessons learned during this process and my future aspirations for application development.

Designing And Building A Progressive Web Application Without A Framework (Part 1 of 3) Sun, 08 Sep 2019 21:45:07 +0000 Part One: Rationale, design and planning

This article is a re-post of the article I originally wrote for Smashing Magazine.


How does a web application actually work? I don’t mean from the end-user point of view. I mean in the technical sense. How does a web application actually run? What kicks things off? Without any boilerplate code, what’s the right way to structure an application? Particularly a client-side application where all the logic runs on the end-users device. How does data get managed and manipulated? How do you make the interface react to changes in the data?

These are the kind of questions that are simple to side-step or ignore entirely with a framework. Developers reach for something like React, Vue, Ember or Angular, follow the documentation to get up and running and away they go. Those problems are handled by the framework’s box of tricks.

That may be exactly how you want things. Arguably, it’s the smart thing to do if you want to build something to a professional standard. However, with the magic abstracted away, you never get to learn how the tricks are actually performed.

Don’t you want to know how the tricks are done?

I did. So, I decided to try building a basic client-side application, sans-framework, to understand these problems for myself.

But, I’m getting a little ahead of myself; a little background first.

Before starting this journey I considered myself highly proficient at HTML and CSS but not JavaScript. As I felt I’d solved the biggest questions I had of CSS to my satisfaction, the next challenge I set myself was understanding a programming language.

The fact was, I was relatively beginner-level with JavaScript. And, aside from hacking the PHP of WordPress around, I had no exposure or training in any other programming language either.

Let me qualify that ‘beginner-level’ assertion. Sure, I could get interactivity working on a page. Toggle classes, create DOM nodes, append and move them around, etc. But when it came to organising the code for anything beyond that I was pretty clueless. I wasn’t confident building anything approaching an application. I had no idea how to define a set of data in JavaScript, let alone manipulate it with functions.

I had no understanding of JavaScript ‘design patterns’ — established approaches for solving oft-encountered code problems. I certainly didn’t have a feel for how to approach fundamental application-design decisions.

Have you ever played ‘Top Trumps’? Well, in the web developer edition, my card would look something like this (marks out of 100):

CSS: 95
Copy and paste: 90
Hairline: 4
HTML: 90
JavaScript: 13

In addition to wanting to challenge myself on a technical level, I was also lacking in design chops.

Having almost exclusively coded other people’s designs for the past decade, I hadn’t given my visual design skills any real challenge since the late noughties. Reflecting on that fact, alongside my puny JavaScript skills, cultivated a growing sense of professional inadequacy. It was time to address my shortcomings.

A personal challenge took form in my mind: to design and build a client-side JavaScript web application.

On Learning

There have never been more great resources for learning programming languages. Particularly JavaScript. However, it took me a while to find resources that explained things in a way that clicked. For me, Kyle Simpson’s ‘You Don’t Know JS’ and ‘Eloquent JavaScript’ by Marijn Haverbeke were a big help.

If you are beginning learning JavaScript you will surely need to find your own gurus; people whose method of explaining works for you.

The first key thing I learned was that it’s pointless trying to learn from a teacher/resource that doesn’t explain things in a way you understand. Some people look at function examples with `foo` and `bar` in and instantly grok the meaning. I’m not one of those people. If you aren’t either, don’t assume programming languages aren’t for you. Just try a different resource and keep trying to apply the skills you are learning.

It’s also not a given that you will enjoy any kind of eureka moment where everything suddenly ‘clicks’; like the coding equivalent of love at first sight. It’s more likely it will take a lot of perseverance and considerable application of your learnings to feel confident.

As soon as you feel even a little competent, trying to apply your learning will teach you even more.

Here are some resources I found helpful along the way:

Fun Fun Function YouTube Channel
Kyle Simpson Plural Sight courses
Wes Bos’s course
Eloquent JavaScript by Marijn Haverbeke

Right, that’s pretty much all you need to know about why I arrived at this point. The elephant now in the room is, why not use a framework?

Why not React, Ember, Angular, Vue et al.

Whilst the answer was alluded to at the beginning, I think the subject of why a framework wasn’t used needs expanding upon.

There is an abundance of high-quality, well-supported JavaScript frameworks, each specifically designed for building client-side web applications; exactly the sort of thing I was looking to build. You’d be forgiven for wondering the obvious: like, err, why not use one?

Here’s my stance on that. When you learn to use an abstraction, that’s primarily what you are learning – the abstraction. I wanted to learn the thing, not the abstraction of the thing.

I remember learning some jQuery back in the day. Whilst the lovely API let me make DOM manipulations easier than ever before I became powerless without it. I couldn’t even toggle classes on an element without needing jQuery. Task me with some basic interactivity on a page without jQuery to lean on and I stumbled about in my editor like a shorn Samson.

More recently, as I attempted to improve my understanding of JavaScript, I’d tried to wrap my head around Vue and React a little. But ultimately, I was never sure where standard JavaScript ended and React or Vue began. My opinion is that these abstractions are far more worthwhile when you understand what they are doing for you.

Therefore, if I was going to learn something I wanted to understand the core parts of the language. That way, I had some transferable skills. I wanted to retain something when the current flavour of the month framework had been cast aside for the next ‘hot new thing’.

Okay. Now, we’re caught up on why this app was getting made, and also, like it or not, how it would be made.

Let’s move on to what this thing was going to be.

An application idea

I needed an app idea. Nothing too ambitious; I didn’t have any delusions of creating a business start-up or appearing on Dragon’s Den — learning JavaScript and application basics was my primary goal.

The application needed to be something I had a fighting chance of pulling off technically, and of making a half-decent job of designing to boot.

Tangent time.

Away from work, I organise and play indoor football whenever I can. As the organiser it’s a pain to mentally note who has sent me a message to say they are playing and who hasn’t. 10 people are needed for a game typically, 8 at a push. There’s a roster of about 20 people who may or may not be able to play each game.

The app idea I settled on was something that enabled picking players from a roster, giving me a count of how many players had confirmed they could play.

As I thought about it more I felt I could broaden the scope a little more so that it could be used to organise any simple team-based activity.

Admittedly, I’d hardly dreamt up Google Earth. It did however have all the essential challenges: design, data management, interactivity, data storage, code organisation.

Design-wise, I wouldn’t concern myself with anything other than a version that could run and work well on a phone viewport. I’d limit the design challenges to solving the problems on small screens only.

The core idea certainly leant itself to ‘to-do’ style applications, of which there were heaps of existing examples to look at for inspiration whilst also having just enough difference to provide some unique design and coding challenges.

Intended features

An initial bullet-point list of features I intended to design and code looked like this:

  • An input box to add people to the roster
  • The ability to set each person to ‘in’ or ‘out’
  • A tool that splits the people into teams, defaulting to 2 teams
  • The ability to delete a person from the roster
  • Some interface for ‘tools’. Besides splitting, available tools should include the ability to download the entered data as a file, upload previously saved data and delete-all players in one go
  • The app should show a current count of how many people are ‘In’
  • If there are no people selected for a game, it should hide the team splitter
  • Pay mode: a toggle in settings that allows ‘in’ users to have an additional toggle to show whether they have paid or not

At the outset, this is what I considered the features for a minimum viable product.
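One of those features, the ‘In’ count, is simple enough to sketch up front. This is a guess at the idea, assuming the roster is an array of player objects; it is not the shipped code:

```javascript
// Hypothetical sketch: count how many players on the roster are 'in'.
function countIn(players) {
  return players.filter((player) => player.in).length;
}

const roster = [
  { name: "Daz", in: true },
  { name: "Carl", in: false },
  { name: "Big Dave", in: true },
];

console.log(countIn(roster)); // → 2
```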


Designs started on scraps of paper. It was illuminating (read: crushing) to find out just how many ideas which were incredible in my head turned out to be ludicrous when subjected to even the meagre scrutiny afforded by a pencil drawing.

Many ideas were therefore quickly ruled out, but the flip side was that by sketching some ideas out, it invariably led to other ideas I would never have otherwise considered.

Now, designers reading this will likely be like, “Duh, of course” but this was a real revelation to me. Developers are used to seeing later stage designs, rarely seeing all the abandoned steps along the way prior to that point.

Once happy with something as a pencil drawing, I’d try and re-create it in the design package, Sketch. Just as ideas fell away at the paper and pencil stage, an equal number failed to make it through the next fidelity stage of Sketch. The ones that seemed to hold up as artboards in Sketch were then chosen as the candidates to code out.

I’d find in turn that when those candidates were built in code, a percentage also failed to work for varying reasons. Each fidelity step exposed new challenges for the design to either pass or fail. And a failure would lead me literally and figuratively back to the drawing board.

As such, ultimately, the design I ended up with is quite a bit different than the one I originally had in Sketch. Here are the first Sketch mockups:

The initial basic designs of the application

The initial menu for the application

Even then, I was under no delusions; it was a basic design. However, at this point I had something I was relatively confident could work and I was chomping at the bit to try and build it.

Technical requirements

With some initial feature requirements and a basic visual direction, it was time to consider what should be achieved with the code.

Although received wisdom dictates that the way to make applications for iOS or Android devices is with native code, we have already established that my intention was to build the application with JavaScript.

I was also keen to ensure that the application ticked all the boxes necessary to qualify as a Progressive Web Application, or PWA as they are more commonly known.

On the off chance you are unaware what a Progressive Web Application is, here is the ‘elevator pitch’. Conceptually, just imagine a standard web application but one that meets some particular criteria. The adherence to this set of particular requirements means that a supporting device (think mobile phone) grants the web app special privileges, making the web application greater than the sum of its parts.
On Android in particular, it can be near impossible to distinguish a PWA, built with just HTML, CSS and JavaScript, from an application built with native code.

Here is the Google checklist of requirements for an application to be considered a Progressive Web Application:

  • Site is served over HTTPS
  • Pages are responsive on tablets & mobile devices
  • All app URLs load while offline
  • Metadata provided for Add to Home screen
  • First load fast even on 3G
  • Site works cross-browser
  • Page transitions don’t feel like they block on the network
  • Each page has a URL

Now in addition, if you really want to be the teacher’s pet and have your application considered as an ‘Exemplary Progressive Web App’, it should also meet the following requirements:

  • Site’s content is indexed by Google
  • Schema.org metadata is provided where appropriate
  • Social metadata is provided where appropriate
  • Canonical URLs are provided when necessary
  • Pages use the History API
  • Content doesn’t jump as the page loads
  • Pressing back from a detail page retains scroll position on the previous list page
  • When tapped, inputs aren’t obscured by the on screen keyboard
  • Content is easily shareable from standalone or full screen mode
  • Site is responsive across phone, tablet and desktop screen sizes
  • Any app install prompts are not used excessively
  • The Add to Home Screen prompt is intercepted
  • First load very fast even on 3G
  • Site uses cache-first networking
  • Site appropriately informs the user when they’re offline
  • Provide context to the user about how notifications will be used
  • UI encouraging users to turn on Push Notifications must not be overly aggressive
  • Site dims the screen when permission request is showing
  • Push notifications must be timely, precise and relevant
  • Provides controls to enable and disable notifications
  • User is logged in across devices via Credential Management API
  • User can pay easily via native UI from Payment Request API

Crikey! I don’t know about you but that second bunch of stuff seems like a whole lot of work for a basic application! As it happens there are plenty of items there that aren’t relevant to what I had planned anyway. Despite that, I’m not ashamed to say I lowered my sights to only pass the initial tests.

Whilst on the subject of me shirking hard work, another choice made early on was to try and store all data for the application on the user’s own device. That way it wouldn’t be necessary to hook up with data services and servers and deal with log-ins and authentications. For where my skills were at, figuring out authentication and storing user data seemed like it would almost certainly be biting off more than I could chew and overkill for the remit of the application!

Technology choices

With a fairly clear idea on what the goal was, attention turned to the tools that could be employed to build it.

I decided early on to use TypeScript, which is described on its website as “… a typed superset of JavaScript that compiles to plain JavaScript.” What I’d seen and read of the language I liked, especially the fact it leant itself so well to static analysis.

Static analysis simply means a program can look at your code before running it (e.g. when it is static) and highlight problems. It can’t necessarily point out logical issues but it can point to non-conforming code against a set of rules.

Anything that could point out my (sure to be many) errors as I went along had to be a good thing, right?

If you are unfamiliar with TypeScript consider the following code in vanilla JavaScript:

console.log(`${count} players`);
let count = 0;

Run this code and you will get an error something like: ReferenceError: Cannot access uninitialized variable.

For this basic example, anyone with even a little JavaScript prowess doesn’t need a tool to tell them things won’t end well.

However, if you write that same code in TypeScript, this happens in the editor:

Showing TypeScript correcting an error
TypeScript saves me from the worst of me

I’m getting some feedback on my idiocy before I even run the code! That’s the beauty of static analysis. This feedback was often like having a more experienced developer sat with me catching errors as I went.

TypeScript primarily, as the name implies, lets you specify the ‘type’ expected for each thing in the code. This prevents you inadvertently ‘coercing’ one type to another. Or attempting to run a method on a piece of data that isn’t applicable; an array method on an object, for example. This isn’t the sort of thing that necessarily results in an error when the code runs, but it can certainly introduce hard-to-track bugs. Thanks to TypeScript you get feedback in the editor before even attempting to run the code.
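To illustrate the kind of silent coercion meant here, consider this plain JavaScript, which runs without any error at all (the variable names are just for illustration):

```javascript
// JavaScript happily coerces types at runtime; TypeScript would flag
// the mismatch in the editor before the code ever ran.
const count = "2";               // a string, perhaps from an input field
const oops  = count + 1;         // "21": concatenation, not addition
const fixed = Number(count) + 1; // 3

console.log(oops, fixed); // → 21 3
```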

There are other benefits afforded by TypeScript we will come to in the next article in this series but the static analysis capabilities were enough alone for me to want to adopt TypeScript.

There were knock-on considerations of the choices I was making. Opting to build the application as a Progressive Web Application meant I would need to understand Service Workers to some degree. Using TypeScript would mean introducing build tools of some sort. How would I manage those tools? Historically, I’d used NPM as a package manager but what about Yarn? Was it worth using Yarn instead? Being performance focused would mean considering some minification or bundling tools; tools like webpack were becoming more and more popular and would need evaluating.


I’d recognised a need to embark on this quest. My JavaScript powers were weak and nothing girds the loins as much as attempting to put theory into practice. Deciding to build a web application with vanilla JavaScript was to be my baptism of fire.

I’d spent some time researching and considering the options for making the application and decided that making the application a Progressive Web App made the most sense for my skill-set and the relative simplicity of the idea.

Technology choices had been made. I’d need build tools, a package manager, and subsequently, a whole lot of patience.

Ultimately, at this point the fundamental question remained: was this something I could actually manage? Or would I be humbled by my own ineptitude?

I hope you join me in part two, where you can read about build tools, JavaScript design patterns and how to make something more ‘app like’.

How to get the value of phone notches/environment variables `env()` in JavaScript (from CSS) Thu, 29 Aug 2019 10:59:18 +0000 Since Apple introduced the iPhone X, there have been a number of further devices that also make use of a ‘notched’ display; a cut-out that deprives us of a uniform rectangular canvas to work with.

To aid working with this notch, Apple promoted the use of environment variables in CSS. I’ve covered that before.

Brief overview

Much like the syntax of custom properties you can obtain the value of environment variables using env() in CSS like this:

margin-top: env(safe-area-inset-top);

It’s worth knowing that, where supported, you can also make use of max() to enforce a minimum value. For example:

@supports (margin: max(0px)) {
    margin-top: max(env(safe-area-inset-top), 20px);
}
More on that on the WebKit blog.

So, problems in CSS are largely solved and well documented. Not so much in JavaScript.

The problem: how to access environment variables in JavaScript

I was building something where I needed to access the value of the environment variable/notch size in JavaScript. I was using translateY on an element to move it up the page and needed to factor in phone notches. I wanted to use the full device canvas but make allowances for the safe area when positioning an element.

At first I thought I could do something like this to get the value and then factor it into calculations accordingly:


But to no avail. I read the specification and noted there was an issue filed for this very situation.

I also posted on the GitHub issue and tried the suggestion there, but to no avail.

Thankfully, Dean Jackson of WebKit (one of the specification authors) came through with a lovely clean solution that seems so obvious once you hear it!

The solution

You can set the values of an environment variable to a CSS Custom Property and then just read that value in with JavaScript. To exemplify, we can add them in the CSS here:

:root {
    --sat: env(safe-area-inset-top);
    --sar: env(safe-area-inset-right);
    --sab: env(safe-area-inset-bottom);
    --sal: env(safe-area-inset-left);
}

And then in script read them back like this:
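In essence the script side just queries the computed style for the custom property. Here is a minimal sketch: the browser call is shown in comments, and the `insetToNumber` parsing helper (and its name) is my own addition for turning the returned string into a number:

```javascript
// In the browser, read back the custom properties set from env():
//   const styles = getComputedStyle(document.documentElement);
//   const rawTop = styles.getPropertyValue("--sat"); // e.g. "44px" on a notched iPhone
// A small illustrative helper to turn that string into a number for calculations:
function insetToNumber(value) {
    const parsed = parseFloat(value);
    return Number.isNaN(parsed) ? 0 : parsed;
}

insetToNumber("44px"); // → 44
insetToNumber("");     // → 0 (no notch, or property not set)
```

With a number in hand, it can be factored straight into any translateY calculations.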


Brilliant! It’s almost like someone had thought this through originally! 😉

CSS scaling: choose isolation or abstraction – not both Wed, 03 Jul 2019 10:22:45 +0000 I had a question via email a few days back from someone working on a Masters project. The subject being how people write CSS. The person in question had heard me talking as a guest on the ShopTalk Show podcast about not mixing abstraction and isolation when it comes to scaling CSS projects and wondered if I had talked/written about the point more fully.

It’s certainly a theme throughout the entire Enduring CSS book (read it free online if you would rather, or just skip through the slides if you want an overview) but I perhaps haven’t addressed my feelings on the subject specifically. For completeness, this post aims to do that.

Off the back of the ShopTalk Show episode in question, one of the hosts, Chris Coyier covered the subject on css-tricks. That post is as good an overview as I can offer.

However, for the sake of completeness and posterity, I’ll summarise my opinion here.

If you are maintaining a large CSS codebase, whether that be large due to quantity of code, or large due to quantity of developers, or likely both; you need an approach to writing CSS that scales. By ‘scales’, I mean an approach to authoring styles that facilitates developers working on the CSS code with relative ease. Working with the CSS codebase includes fixing a problem or styling new components without adversely affecting anything unintentionally.

Does your approach to writing CSS allow developers to add to, remove from, and amend your product with entirely predictable results? That is the remit of a CSS approach that can scale.

My belief is that when it comes to scaling CSS there are two approaches that work: complete isolation and complete abstraction. Anything else ultimately becomes sub-optimal.

Isolation in CSS terms is the ability for the code you write to be isolated from anything else, preventing anything intended for one visual entity from ‘leaking’ into another. Isolated code is easy to reason about, write and delete because, by virtue of it being isolated, it cannot affect anything else. Approaches in the isolation camp include ECSS, BEM, styled-components and CSS Modules.

Abstraction in CSS terms is the opposite approach to isolation. A generally finite set of very generic CSS classes are assembled like Lego bricks on each element to achieve the desired effect. Parts of an abstraction CSS codebase never get deleted over time (as any existing component may depend on any piece of the abstraction toolkit) but the codebase tends to stay small, because anything can be made from the little parts you have at your disposal from the outset. Popular approaches in the abstraction camp are Atomic CSS and Tailwind CSS.

For this post I’m not interested in debating which approach is ‘best’. I’m not a fan of absolutes; there are simply problems and solutions. What is important is that either approach can work. What works for you might not work for another. Choose your poison.

One analogy I have used in the past that may help: isolation is like a cheque book and abstraction is like cash. A cheque is very specific about who you want to pay and how much, and with a cheque book it’s easy to look back and know where your money went. Notes and coins, on the other hand, are useful because they can be used everywhere and anything can be paid for with them, but it isn’t easy to look back with any certainty and know where your money went. If you are budgeting, you can of course use cash and cheques at once, but it is easier to come undone. If you only use cheques, or you only use cash, you have a better chance of knowing where you are at with your money.

The problems lie in the middle

Imagine isolation and abstraction at either end of a continuum. The further you move away from either end, the more complicated and less effective your approach is likely to be.

Abstraction is fine because you know anything can be made from the existing building blocks and they are never going away. Need a new component? No drama; just write the HTML, bang on the relevant classes and there you go. You aren’t going to change a class as generic as m-10p to be anything other than margin-top: 10px so things stay sane and predictable.

Isolation + abstraction

Isolation is fine because you know that the styles you write will only apply to the elements they are targeted at. Things are kept sane in development land because when that component’s time is up, you just delete everything to do with the component, including the relevant styles, and nothing else is affected. Everything you make is ‘green-field’ so you are free to make anything however you like. With isolation, you don’t make things that are easy to extend. You make things that are easy to delete.

However, things generally go bad when you try and mix approaches. Starting with an isolation approach, imagine creating 5 different isolated components.

<div class="my-First_Component"></div>
<div class="my-Second_Component"></div>
<div class="my-Third_Component"></div>
<div class="my-Fourth_Component"></div>
<div class="my-Fifth_Component"></div>

However, as you make them you notice that they all share some similarity. Suppose they all currently have the same main font-size, font-weight and colour. Seems like a perfect time to abstract that similarity and DRY up your code? You make another class that can be shared across these components and any other elements that share those similarities.

<div class="my-First_Component hlt"></div>
<div class="my-Second_Component hlt"></div>
<div class="my-Third_Component hlt"></div>
<div class="my-Fourth_Component hlt"></div>
<div class="my-Fifth_Component hlt"></div>

We’ve added a utility style here, hlt for ‘HeadLine Text’ and that goes into a ‘global.css’ or ‘utility.css’ stylesheet.

Fast forward 6 months and a relatively new dev to the company is on call and gets an urgent ticket: my-Third_Component needs the main text tweaking, as the text is too long and it’s obscuring some important T&Cs information. He inspects the code, finds the class in question, makes the tweak, commits the code and gets back to bed. He wasn’t aware that he had inadvertently changed 4 other components.

There are umpteen variations on this scenario. None of them end well. Perhaps instead of a call-out scenario it’s a code removal situation. Those 5 components get removed from the code base, but because hlt is generic and not encapsulated with the component, it then lives on somewhere else, with no-one ever confident as to its usage or whether it too can be removed. By mixing approaches we’ve failed to meet our original remit: we are no longer able to add, remove from or amend the product with entirely predictable results.

Abstraction + isolation

Let’s flip it and look at abstraction.

Suppose the components were made with an abstraction approach. Everything is going well in the codebase until someone decides that they can save time by adding a single class that does a bunch of things in one go. Perhaps they feel adding individual classes to each node is simply laborious. They make the class m10-p10-b3-green-fLarge. It adds 10px margin, 10px padding, a 3px border in green and a large font. They add this one class to the components they are currently making to speed up development. However, when they need some future variation of the thing they are making, or to adjust it slightly (maybe they need one with only 5px of padding), they can’t alter that original class. So they either add an ‘undo’ utility class that overwrites the original 10px of padding, or they remove the m10-p10-b3-green-fLarge class from the component and remake it with separate utility classes, and that prior class hangs around, potentially in perpetuity.

Again, there are multiple possibilities for things to come undone when you ‘cross the streams’.

Now, none of this takes into account the great tooling and solutions that sit around either the isolation or abstraction approaches. It does however help illustrate that each approach is fundamentally strongest when it remains true to its raison d’être.


My preference is always simplicity. The more rules and caveats an approach has, the more difficult it is to communicate concisely to people. The harder to communicate an idea easily, the more brittle and open to interpretation it is. This includes approaches to writing CSS.

Whenever I encounter approaches that require developers to think about whether or not a class they are making is a utility class, a base class, a decoration class or (insert your own other nebulous characteristic), I always feel they are needlessly complicated and therefore prone to failure.

Developers that touch the CSS may not be as versed in CSS as others. The approach should be as easy for them to deal with as possible. If not, you have needlessly created a gatekeeping situation.

Either embrace isolation and deeply understand why it works, or embrace abstraction and deeply understand why it works. Whichever you choose, if you embrace WHY it works and enforce that approach, you should enjoy a CSS codebase that can scale to any need.

Converting divs into accessible pseudo-buttons Wed, 19 Jun 2019 14:13:08 +0000 Let’s get this out of the way right now: I don’t think there is a compelling reason to turn an unopinionated HTML element like a div or a span into a button. ’Cause, you know, button already exists.

However, the question was asked, “If you had to do it though, could it be done?”.

This post attempts to take you through the steps of how to, in some ways, convert an unopinionated HTML element into an accessible ‘pseudo-button’, and hopefully convinces you to just use a button.

I’ve ruminated on the use of the button element before. That was four years ago. My opinion today differs. I don’t care about the speed impact; a button is slower because it does more. What’s more, for the number of buttons you are going to have on almost any page/app, it adds up to very little overhead in real terms. I don’t think this is a sensible place to make economies.

The button element gives you things you might not see or appreciate but others might.

button powers

What does a button give you that other elements don’t? Here are a few choice features:

A button can receive focus. For the uninitiated this means someone can move to that element easily with, for example, just a keyboard. It gets a tabIndex whether or not you set it.
It supports an accessKey so you can set a keyboard shortcut to simply activate it.
It supports the disabled attribute, which, when added means the button stops receiving clicks.
It has a type attribute, allowing it to submit the form it lives in, if the type is set to submit. It can also reset a form type="reset" or just do nothing type="button".
It has a value property, so you can conveniently set a different value than the text should you need it for scripting.

There are a bunch more features. If you’re interested, here are the details of the button element in the HTML 5.2 specification.

Meanwhile, an unopinionated element like a div gives you nothing. Hence the reason it’s demonstrably faster to render than a button.

Making a div more accessible

Hopefully that covers why you wouldn’t want to do this?

Well, as promised at the outset, we are going to see how far we can get anyway!

Say hello to our div:

<div class="btn">a div</div>


ARIA (Accessible Rich Internet Applications) attributes provide the means to make a div appear as a button to a screen reader. Here is the intro to the button role page on MDN:

Adding role=“button” will make an element appear as a button control to a screen reader. This role can be used in combination with the aria-pressed attribute to create toggle buttons.

Right, so now our button would look like this:

<div class="btn" role="button">a Div</div>


In our instance the button is going to be a switch; it can be pressed or not. With ARIA we could also make it the control for something like a menu with aria-expanded, but instead we will use aria-pressed. So by default it will need that too:

<div class="btn" role="button" aria-pressed="false">a div</div>
aria-pressed isn’t a strict boolean; it can also be set to ‘mixed’ if considered partially pressed.


We need the buttons to be focusable. That means we need a tabindex. By default we want the button to follow the normal sequence so we can set it to “0”. If you are unfamiliar with tabindex you should know that a negative number means the element is focusable but not in the normal focus sequence/sequential keyboard navigation. You want this if you have a modal that you want focusable but only via script, where you would use focus() when appropriate.
A tabindex of “0” means the element is focusable and happy to follow the normal order. A positive value, e.g. “1”, provides an indication to the browser of how that element should be prioritised in the focus order. If you had elements with tabindex 1, 2 and 3 respectively, your intention would be for the browser to focus them in ascending order: first 1, then 2, then 3. Note, it’s just an indication, not a promise! You should avoid doing that unless you absolutely know you need to; it’s liable to create an unpredictable experience for keyboard users. Just go with The Web’s Grain.

Right, with the tabindex attribute in, we are now looking like this:

<div class="btn" role="button" aria-pressed="false" tabindex="0">a div</div>

keyboard event handlers

If you have a button element that has focus, pressing space, or enter will activate the button. When you convert a different element with role="button" you don’t get that functionality for free. You need to add your own keyDown listeners. You can add the listener to individual elements or, thanks to event bubbling, you can listen on a containing element. For example:

wrapper.addEventListener("keydown", e => {
    if (e.key === " " || e.key === "Enter" || e.key === "Spacebar") {
        e.target.click(); // activate the pseudo-button as a native button would (illustrative)
    }
});
disabled (updated – thanks Darek)

The button element has a disabled state built in. When this is present, keyboard actions on the button are ineffective. For a pseudo-button you would need to add this functionality. Stylistically, this could be achieved easily enough. Something like:

[aria-disabled] {
    pointer-events: none;
    opacity: 0.5;
}

However, you would still need to add the correct aria-* attribute, so that assistive technology knows what to do with the element. Without aria-disabled applied to the element it would only be visually ‘disabled’; not functionally.
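In script, any activation logic would also need to respect that attribute. A minimal guard, with a function name of my own invention, could look like this:

```javascript
// Returns true when the pseudo-button should respond to activation.
// `el` is a DOM element in practice; anything with a getAttribute method works.
function shouldActivate(el) {
    return el.getAttribute("aria-disabled") !== "true";
}
```

A click or keydown handler would then bail out early whenever shouldActivate returns false.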


There is no point dancing around this – the button element is not straightforward to style. Browsers have opinions on how a button should look and you need to consider that.

However, switching out a button for a pseudo-button isn’t going to negate that need. Once an element has role="button" it is focusable and, as a consequence, browsers indicate this focus in the same manner they would a real button. You will need to consider focus states and styles for when a button is toggled. If you are switching off the default outline, for example, make sure you are adding a suitable substitute.

Example pseudo-buttons

The MDN role="button" documentation gives solid examples that I have amended for this example. Here are a number of divs in a wrapping div. Pressing “Convert to pseudo-buttons” decorates the divs with the necessary attributes and event listeners to give us some of button’s goodness. You can tab across them, and pressing space or enter when one is selected toggles the aria-pressed attribute to indicate the state of the button.
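The decoration step amounts to something like the following sketch (the function name is my own; the real Codepen does more, adding the event listeners too):

```javascript
// Decorate a plain div so it is announced and focusable like a button (sketch).
function makePseudoButton(div) {
    div.setAttribute("role", "button");
    div.setAttribute("aria-pressed", "false");
    div.setAttribute("tabindex", "0");
}
```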

View it on Codepen

Or you can view the embed here:

See the Pen
Making a div a button with aria
by Ben Frain (@benfrain)
on CodePen.


Making unopinionated elements into pseudo-buttons is not a trivial task. A lot of work has to be done, and thereafter managed, to give pseudo-buttons just some of the functionality built into the native button element.

There are inherent challenges and arguably shortcomings in dealing with buttons in some situations but appreciate that the friction is likely due to the fact that you are going against the grain of the medium you are working with.

Despite any frustrations, I’d suggest the end result will be far smoother if you choose to work with the grain instead.

Moving from Gulp to Parcel Mon, 18 Mar 2019 17:23:51 +0000 Following this poll on Twitter I thought I’d take a look at spinning up a project using Parceljs.

I’ve used Gulp for years and Grunt before that. Parcel seemed like a logical progression. That, plus, I still don’t have the stomach to try and get Webpack working!

Parcel is touted as ‘zero configuration’ and although that is largely true, if you are moving from Gulp there are definitely some things you need to keep in mind.

My requirements for a task runner/web-app builder ‘thing’ are incredibly modest. I’m typically making a prototype of something so I just want it to compile any meta-languages such as PostCSS or TypeScript to CSS/JS respectively and spin up a local server where any iterations are immediately kept up to date in a browserSync/live reload style.

Before I get to Parcel, I thought it might be useful to see the kind of thing I have in Gulp that I’m trying to replicate in Parcel:


In Gulp, working with PostCSS I require the plugins I want like this in my gulpfile.js:

var cssMixins = require("./global/css/mixins.js");
var cssVariables = require("./global/css/variables.js");
var postcss = require("gulp-postcss");
var nested = require("postcss-nested");
var mixins = require("postcss-mixins")({ mixins: cssMixins });
var simplevars = require("postcss-simple-vars")({ variables: cssVariables });
var autoprefixer = require("autoprefixer");
var postcssimport = require("postcss-import");
var postcssAssets = require("postcss-assets");
var postcssColorFunction = require("postcss-color-function");
var reporter = require("postcss-reporter");
var cssnano = require("cssnano");

And the CSS task using PostCSS looks like this:

gulp.task("css", function() {
    var processors = [
        postcssimport({ glob: true }),
        mixins,
        simplevars,
        nested,
        autoprefixer({ browsers: ["iOS >= 6", "ie_mob >= 8", "android 4"] }),
        cssnano({
            zindex: false,
            reduceIdents: false,
        }),
    ];

    // We only want to compile the root files, as these import the partials
    var cssRootFiles = ["./interface/*.css", "./global/css/*.css"];

    return (
        gulp
            .src(cssRootFiles)
            .pipe(postcss(processors))
            .pipe(gulp.dest("./build/css/")) // output folder is illustrative
    );
});

I tend to group visual components in an ‘interface’ folder and writing the task like this means it goes through each component, processes the CSS partials and sends them out to a folder.

Let’s start by getting an equivalent CSS task up and running with Parcel.

The basics

You install Parcel with NPM (or Yarn):

npm install -g parcel-bundler

Parcel looks to index.html as the default ‘entry point’ and builds stuff auto-magically from there to a dist folder at the same level as the index.html file. So, assuming you have an index file to kick things off you can run parcel index.html from Terminal and it spins up a local server at localhost:1234.

So far so good. However, Parcel still needs to know how to handle PostCSS. In the past my PostCSS configuration was largely encapsulated in the Gulp task. We need that configuration isolated into something Parcel can understand.

PostCSS configuration with Parcel

PostCSS configuration can be done a few ways in Parcel. I opted for creating a postcss.config.js file and sticking it in the root of the project.

In terms of folder structure, I have a ‘global’ folder in the root, which contains everything I need to run my mixins (one lot as a JS file, and another as CSS — postcss-mixins is happy to mix the two) and variables, also defined in JS here.


The contents of postcss.config.js:

var colors = require("./global/css/variables");

module.exports = {
    plugins: [
        require("postcss-mixins")({ mixinsDir: "./global/css/mixins" }),
        require("postcss-simple-vars")({ variables: colors }),
    ],
};
You’ll need to run npm i after creating that file to ensure that any plugins you use that aren’t part of the standard ones in Parcel are downloaded.

This file is obviously a lot shorter than the (nearly) equivalent Gulp example. Parcel does a lot of the asset management I was handling with plugins in Gulp all by itself. Globbing patterns for example are already part of Parcel. Nice!

You might be wondering about that first line var colors = require("./global/css/variables"); in that config file. I usually have a standard list of colours I use and this allows me to pass it from a separate file to the postcss-simple-vars plugin.

PostCSS configuration can silently fail

It’s worth noting that if you have a PostCSS plugin that conflicts with Parcel, your PostCSS will silently fail and you will end up with unprocessed CSS being exported. For example, if you author:

.item-Thing {
    &:checked {
        background-color: #f90;
    }
}

You want to see this exported to CSS:

.item-Thing:checked {
    background-color: #f90;
}

But you still see what you had in your authoring style sheets:

.item-Thing {
    &:checked {
        background-color: #f90;
    }
}

I had to use trial and error when writing that postcss.config.js config file, removing from the original list of plugins I had one at a time until things processed correctly.

Failing silently is out of character for Parcel, as one of its great features is spitting out problems with paths and the like onto the page when your configuration is off. That was a common issue for me at first and it was a big help in getting on the right path.

Typescript and Gulp

I confess I have let myself get behind the curve when it comes to best-practice of making ‘partials’ of TypeScript/JavaScript files.

I have had a few failed attempts at switching to JavaScript ES6 modules in the last year. As such, I kept coming back to the simplicity and dependability of the triple-slash file imports that TypeScript allows:

/// <reference path="path/to/partial" />

If you aren’t familiar with TypeScript triple-slash file imports, they work similarly to @import "sausages.css"; in CSS land which makes them easy for me to reason about. I split my TypeScript into separate partial files and bring them into the top of my main app.ts file with a TypeScript triple-slash import.

Parcel seems to flatly refuse to allow triple-slash imports. At least I couldn’t get them working. It’s possibly due to my current ineptitude with Parcel and setting the correct TypeScript compiler options.

However, I actually ended up using ES6 style module syntax with a tsconfig.json file in my project that looked like this:

{
    "compilerOptions": {
        "module": "ES6",
        "noImplicitAny": true,
        "allowJs": true,
        "lib": ["ES2016"],
        "target": "ES6"
    }
}
With that in place, I was able to import/export with ES6 syntax (finally!).

I’m thrilled I’m finally there with ES6 modules. Indulge me while I exemplify their syntax and use with Parcel. Suppose we have a project structure like this:
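Based on the file paths used in the snippets that follow, the assumed structure is along these lines:

```
project-root/
├── index.html
├── styles.css
├── app.ts
└── interface/
    ├── header/
    │   └── header.ts
    └── menu/
        └── menu.ts
```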


In that root index.html, we actually link to a TS file, not the resultant JS file. For example here is the contents of the index.html:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Parcel Test</title>
        <link rel="stylesheet" type="text/css" href="/styles.css">
        <script src="/app.ts"></script>
    </head>
    <body></body>
</html>
When working in Parcel, for my own sanity I’ve found it best to stick to root relative links. However, there are choices, which are explained fully in the Parcel documentation.

The link to that app.ts file imports the header.ts and menu.ts files like this:

import * as header from "/interface/header/header";
import * as menu from "/interface/menu/menu";

That’s assuming we have exported relevant functions/const etc from those files like this:

function menuEle() {
    // stuff
}
export { menuEle };

It took me a bit of trial and error with paths and wrapping my head around imports/exports but it makes sense now and it’s a syntax I like a lot. I’m thankful TypeScript had the triple-slash imports but I’ll try to continue using ES6 modules to solve that need moving forward.

Hot Module Reloading

One essential part of modern front-end tooling is live-reloading. Changes made in authoring code (style sheets, JavaScript, HTML) instantly pushed into the browser for rapid iteration.

When I had v1.11.0 of Parcel installed I was running into an issue where duplicate items were being loaded into the DOM. A reduction of my app.ts file looked like this:

import * as header from "/interface/header/header";
import * as menu from "/interface/menu/menu";

var app = (function() {
    var root = document.documentElement;
    var body = document.body;
    function init() {
        // headerEle and menuEle return elements appended into the DOM
        body.appendChild(header.headerEle());
        body.appendChild(menu.menuEle());
    }
    init();
})();
Where both header.headerEle and menu.menuEle were functions that returned elements I was appending into the DOM. Any time I made a change in app.ts, menu.ts or header.ts Parcel would run things again, and append both of those elements back into the body; giving me multiples of everything as I worked.

Trying to follow this issue on HMR (the acronym by which Hot Module Replacement is referred to) on the Parcel repository, it seemed like v1.12.0 was the answer, as it inverted the approach to Hot Module Replacement, making things reload rather than do HMR by default.

Sure enough, an upgrade to v1.12.0 fixed the issue and I finally had everything working as desired.

  • PostCSS compiling: check
  • TypeScript compiling: check
  • Instant browser updates while iterating: check

Publishing output/Running a build

It’s worth calling out that at some point you may want to export or ‘build’ whatever you are making to merely the essential files with no source maps etc. It’s probable that if you copy the files from the `dist` folder ‘as is’ to another location that isn’t running the local host server, none of the assets, such as style sheets and scripts, will load.

The correct way to get a build is to use the specific build command. So, here is the command I run as an example (Note: You might need to type this out as sometimes the quotes get screwed up when pasting to the command line!):

parcel build index.html --public-url './' --no-source-maps --no-cache

There are a bunch of additional options/arguments available with the build command; a full list is in the Parcel documentation. The options I chose were to set the public URL, to remove any cache and to not output any source maps.


With a little adapting I was able to move from Gulp to Parcel. As a consequence it pulled some of my practices into 2019! Principally I switched from triple-slash TypeScript imports to ES6 modules and extracted my PostCSS configuration into a separate file.

Parcel is capable of a lot more and I’ll be happy to use it again for the next thing I spin up. I’ve stuck my postcss.config.js and tsconfig.json in the root of the folder that holds all my projects so starting a project with Parcel should be far more straightforward next time.

Looping infinitely around an array in JavaScript Fri, 01 Mar 2019 11:44:44 +0000 Let’s say you have an array of items. They might be colours, pieces of text, DOM nodes, whatever. However, you want to loop around them infinitely. To further exemplify, if you have an array of 8 items, and you are currently on item 2 and you skip along 10, we want to end up back at slot 4 and not a non-existent slot 12. This is standard behaviour for carousels and the like.

This is a scenario I’ve come across a few times in the last year or so and as I keep having to remember how to solve the problem I’m writing it out here for developer posterity.

The modulo/remainder operator

The key to solving this problem is the Modulo operator, also known as the Remainder operator and expressed in JavaScript with the % symbol. This operator is used to get the remainder after a division.

In English we might say that the remains of dividing 8 by 3 is 2 as we can get two lots of 3 out of 8 giving us 6 and there are then 2 remaining.

Let’s look at this in JavaScript:

var remainsOfEightByThree = 8 % 3; 
console.log(remainsOfEightByThree); // 2

Let’s do another example just to be clear what’s going on. What about the remains of dividing 13 by 4?

We can get 3 whole lots of 4 out of 13 which takes us to 12 so we would expect the remains to be 1. Here again in JavaScript:

var remainsOfThirteenByFour = 13 % 4; 
console.log(remainsOfThirteenByFour); // 1

OK, so how do we use this to solve our looping-around problem?

Looping around an array with the modulo/remainder operator

Let’s look at an array of colour values:

var colours = ["EC6060", "4DB52E", "31D1B3", "1D9DCB", "1D3ACB","AD70CC", "E84F82"];

We are going to make a div change colour from one colour value to another by taking the existing slot number and adding any number to it. So, say for example, we take the scenario given in the first paragraph, we are at slot 2 (in JS that would be the third value as JS is zero-indexed), which is the value 31D1B3 and we try and skip forward 10 slots. We don’t want to select a non-existent slot 12, we want to end up back at slot 5, with the value AD70CC.

‘zero-indexed’ simply means that counting starts at 0 and not 1. In JavaScript Arrays the first slot is always item 0. So yourArray[0] would select your first item whereas yourArray[1] would actually select the second one.

The solution

Here’s the final solution. Take a look and then we will step through what’s going on.

See the Pen
Remainder/Modulos example
by Ben Frain (@benfrain)

var item = document.querySelector(".item");
var readout = document.querySelector(".readout");
var colours = ["EC6060", "4DB52E", "31D1B3", "1D9DCB", "1D3ACB","AD70CC", "E84F82", "CB1212"];

var count = 0;
setInterval(e => {
    var randomNumber = Math.floor(Math.random()*11);
    count = (count + randomNumber) % colours.length;
    var newColour = colours[count];
    readout.innerHTML = `We added <b>${randomNumber}</b>, meaning the new slot number to display is <b>${count}</b>, meaning a colour value of <b>${newColour}</b>`;
    item.style.backgroundColor = `#${newColour}`;
}, 3000);

Some of this is surplus to the job at hand as it is just to get things reading out on screen.

In essence, every 3 seconds, we generate a random number between 0–10 and add that to the existing count (which starts at 0). The magic part, that makes the array loop around is this:

count = (count + randomNumber) % colours.length;

Here we re-assign our count to be: the remains of (the current count plus the new random number) divided by the length of the array. So, given our 8 slot colours array, if we were on slot 0 and the random number was 9, the calculation would be:

count = (0 + 9) % 8 // result would be 1

Our modulo operator forces the resultant number to be whatever remains after a clean division by the length of the array we are working on. In practice this means we loop around the array, as the resulting index can never reach the array length.
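The wrap-around can be distilled into a tiny pure helper (the function name is my own):

```javascript
// Advance `current` by `step` slots around an array of `length` slots,
// wrapping back to the start when we run off the end.
function nextIndex(current, step, length) {
    return (current + step) % length;
}

nextIndex(2, 10, 8); // → 4, matching the example from the opening paragraph
nextIndex(0, 9, 8);  // → 1
```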


Looping infinitely around an array, rather than merely through it, is something I’ve had to do an inordinate number of times recently, particularly when animating elements. Understanding the remainder/modulus operator is key to solving the issue elegantly in JavaScript.

Determining the direction of IntersectionObserver events Thu, 14 Feb 2019 15:13:02 +0000 I had a situation recently where I wanted to visually introduce a footer element when a certain element was passed in the viewport. Instead of using scroll events I had opted to use the newer IntersectionObserver (finally we are getting broad browser support thanks to iOS 12.2 and Safari in macOS 10.14.4).

Anyway, there are some good tutorials already on the fundamentals of the IntersectionObserver. WebKit’s own being one good example.

However, what doesn’t seem well covered is dealing with the direction of how the IntersectionObserver has been triggered.

I solved this by comparing the boundingClientRect.y properties of multiple IntersectionObserver events. Perhaps there is a better way?

In short, my problem was that after observing something on the page I wanted to visually introduce a new element. However, I wanted the new element to stay visible, even if the observed element was no longer visible but the user was still ‘past’ the trigger element. If the user scrolled back up past the element I wanted to remove the element from view.

Anyway, that probably sounds more complicated than it needs to be. Take a look at the reduction here:

See the Pen
Determine IntersectionObserver direction
by Ben Frain (@benfrain)
on CodePen.

In terms of code, we create a new empty array in the global scope (or at least outside of where the function is fired). Then on each event the array gets populated with the boundingClientRect.y value of that event.

We then create a new sliced copy of that array with just the last two items of the main array and compare the last and next-to-last entries. If the next-to-last was greater than the last entry, we know we are heading down.
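That comparison can be boiled down to a tiny helper (a sketch; the function name is mine, not from the original code):

```javascript
// positions holds the boundingClientRect.y value recorded on each
// IntersectionObserver event; assumes at least two entries
function isScrollingDown(positions) {
    // The trigger element's y value shrinks as the user scrolls down,
    // so a drop between the last two readings means "heading down"
    const [previous, latest] = positions.slice(-2);
    return previous > latest;
}

isScrollingDown([300, 120]); // true – y dropped, user scrolled down
isScrollingDown([120, 300]); // false – y grew, user scrolled up
```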

With this I was able to write a function to accommodate my needs. I’ve included all the IntersectionObserver stuff here for the sake of completeness:

var triggerElementPositions = [];
// dashDepositors is assumed to be a reference to the footer element being shown/hidden
var triggerElem = document.querySelector(".your-TriggerElement");
var options = {
    root: null,
    rootMargin: "0px",
    threshold: [0, 0.5, 0.75, 1],
};
var observeFooter = new IntersectionObserver(showTutorialFooter, options);
observeFooter.observe(triggerElem);

function showTutorialFooter(e) {
    // Record the y position of this intersection event
    triggerElementPositions.push(e[0].boundingClientRect.y);
    let compareArray = triggerElementPositions.slice(-2);
    // The trigger element's y value decreases as the user scrolls down
    let down = compareArray[0] > compareArray[1];
    if (!down) {
        dashDepositors.setAttribute("data-footer-visible", "false");
    } else {
        if (e[0].intersectionRatio > 0.5) {
            dashDepositors.setAttribute("data-footer-visible", "true");
        }
    }
}

Essentially, if a user is scrolling up the page (or more accurately the value of the next to last position is less than the last), we set the footer to hide by default. If the user is scrolling down and the intersection is greater than 50%, we show the footer.

Sony WH-1000 XM3 Noise Cancelling headphones review Tue, 12 Feb 2019 15:52:05 +0000 UPDATE: June 2020
I can’t recommend the Sony XM3 headphones. A great many users, myself included, have suffered a manufacturing failure on the headband of these headphones. I always treated them very carefully. Only ever used them in the office. More information on the Sony user forums. That and the fact they can’t easily switch between two input sources meant I ultimately got them refunded.

I’ve left the remainder of this review as it was:

TL;DR – I believe the Sony XM3s are worth the significant outlay if you want to optimise your concentration in a distraction-rich environment.

All decent noise cancelling headphones are expensive. As in £250+ expensive. However, I work in a large open plan office. Each desk is about 120cm wide so I’m in close proximity to plenty of other people talking, laughing and basically being human. The upshot is that it’s very noisy. Much of the time I find that noise incredibly distracting. Noise-cancelling headphones seemed like a panacea for my concentration woes.

I had been to a couple of local stores to try the XM3s out and compare them with the Bose QuietComfort 35 IIs. They are both premium, over-the-ear, Bluetooth wireless noise-cancelling headphones.

Physically, they both felt very comfortable to me; nothing seemed to stand out with either.

When it came to judging the sonic quality, I found it pretty difficult to judge at the store. I wasn’t convinced I was getting a good idea of what high-end noise cancelling headphones would do for me outside of the environment I intended to use them in.

Pairing seemed more intuitive on the Sony’s – so they got a nod for that. Although the Bose pairing method, where you need to slide a button forward to put them in pairing mode is also fine, once you know how.

The truth is I think I would have been just as happy with the Bose QC35 IIs as with the Sony XM3s. My only reasons to opt for the Sony models were the faster USB C charging, and the more dubious reason that a few other people sitting near me have Bose QC25 headphones and my contrary nature wouldn’t allow me to go with the same brand (if everyone is zigging, I’m zagging).

I took a leap of faith and ordered a pair of Sony WH–1000XM3 headphones for £290 a couple of weeks back. What follows are my thoughts having lived with them for a few weeks.

I really wanted to get the Audio Technica ATH-ANC900BT which are a little cheaper and support Bluetooth 5.0 but they aren’t due to be released until April 2019. Apple is also allegedly launching some soon although who knows if that will actually happen.

Sony XM3s

In case you are unfamiliar with the WH–1000XM3 headphones, they are over the ear, Bluetooth wireless noise-cancelling headphones. They come with a nice semi-rigid carry case that includes a USB charger (USB A -> USB C) and a standard headphone cable with mini-jack connector.

Until this point I had been using a set of wired Sony MDR-V6, over the ear studio headphones in my day-to-day environment so my comparison is largely based against those.


Out of the box, I downloaded the Sony ‘Headphones Connect’ app from the App Store. I’m on iOS so can’t speak of the Android equivalent. I’ve been using version 4.1.1. As I understand it, the app version number is the same as the firmware version of the headphones.

It’s worth pointing out that by many accounts the 4.1.1 version of the software has degraded noise cancellation performance:
I don’t have a comparison of a different version yet so I don’t know if they would have been better before the update or not.

You connect to the headphones via Bluetooth in iOS and then open the Headphones Connect app and connect to the headphones there too. For some reason (security?) the app can’t do the Bluetooth connection part for you. It’s also pretty temperamental about re-connecting if you close the app and re-open it.

Sadly, there is no desktop equivalent of ‘Headphones Connect’, which I feel is a large oversight. For example, all the Equalizer settings are only changeable via the app, so if you don’t have access to a smartphone these probably aren’t the headphones for you. I posted on the support forums about that here but it doesn’t seem like there is any appetite to change it:

There are a bunch of settings you can play with in the Headphones Connect app. The only ones I altered were the ‘Noise Cancelling Optimizer’, which I assume tweaks things based on the environment you are currently in. I also changed the Equalizer to the ‘Bright’ preset. This is a mere preference thing. I listen to a lot of old-school hip-hop (Ice Cube, NWA, Cypress Hill, Wu-Tang etc – don’t judge) and this suited that style of music well for me. Whichever you opt for, once set, the preset stays set on the headphones. The upshot being you can set them how you like them and then happily connect to a desktop machine and it retains your EQ preference.

Noise Cancelling

Let’s get this out of the way early. The difference between ‘standard’ over the ear headphones and the noise-cancelling headphones is stark. At the outset I wasn’t convinced that removing the extra unwanted ambient noise around me would add up to much real-world difference. I was wrong. I have found it a major help in the concentration stakes. People can no longer merely stand close to get my attention or inadvertently distract me. Typically they need to tap me on the shoulder or I just don’t realise they are there.

I believe they are worth the significant outlay if you want to optimise your concentration in a distraction rich environment.

It is worth pointing out that they can feel a little weird when you first put them on as they somehow adjust pressure to aid in noise cancellation. After optimization the app told me mine were optimized to 1.0atm of pressure. I have no idea if that is good or bad. I’m not even sure that’s a healthy thing to be messing with! A couple of people who tried them said they felt a bit dizzy with them on. I wouldn’t go that far but it is a consideration. More sensitive people may find the pressure change more disconcerting.

With the noise cancellation question out of the way, let me tell you more about the features, pros and cons.

Ear-cup controls

There are touch controls on the right ear cup but I find them a bit of a gimmick. For example, you can cover the right ear-cup and it adjusts the volume and turns the noise-cancelling off, letting the ambient noise in. It works OK but when I want to listen to something beyond the headphones I find it easier to merely slip the headphones off, or momentarily lift an ear-cup. By the same token you can swipe up and down for volume and forwards and backwards for track navigation but in my environment I find it easier and more reliable to use the media control short-cuts on my keyboard. Obviously if you want noise-cancelling headphones for commuting these features will prove more useful.


The XM3s handle phone calls too with a built in microphone. So far I’ve no experience with them in this respect. Closest I got was when I was working with the Web Speech API and used them for voice input into a web form. They were fine in that situation but I’ll update here if and when I’ve made some phone/Skype calls with them.

Sound Quality and Codecs

I like the sound ‘signature’ of the Sony XM3s. I realise that endorsement is not very scientific but despite streaming over Bluetooth the quality far exceeds the studio headphones I had been using. I went back to the MDR-V6 model for a couple of tracks for comparison and they seemed laughably bad.

In terms of codecs, the Sony XM3s can handle SBC, AAC, aptX, aptX HD and LDAC. Of interest in Android land is aptX HD and in Apple land it would be AAC. For the uninitiated, the codec is the algorithm used when transmitting sound over Bluetooth. Bluetooth is limited when it comes to bandwidth so this can be a ‘big deal’.

I share a Spotify ‘Family’ account with a work colleague (again, don’t judge) and have the ‘High Quality streaming’ setting enabled. However, what that actually equates to in terms of the codec I’m getting into the headphones I don’t know. As the App is only Smartphone-based there is no way to tell what codec is being streamed from a desktop computer.

From a cursory back and forth with iTunes and Spotify on the Mac I can tell you with absolute certainty that iTunes sounds far ‘better’. A colleague who sits nearby who is versed in audio engineering tells me this is nothing to do with the codec so I’m unsure what to make of this phenomenon. Probably that I just need to move to iTunes!

Considerations and shortcomings

The battery isn’t going to last forever. I don’t know how much it will cost to get a new battery installed when the time comes. At a guess, it won’t be cheap! That said, it is very liberating to have wireless headphones, even in a desk-bound office environment. I wouldn’t want to be without that capability now.

Left or right?

There is a little ‘R’ and ‘L’ symbol on each strut coming from the ear-cup so you know which way around they are. Why they don’t stick a dirty great ‘R’ and ‘L’ on the mesh on the inner of the cups is beyond me. I’ve seen that on other branded headphones and it seems so obvious and beneficial I’m not sure why it isn’t the de facto manner of labelling the cups. With the XM3s it’s hard enough to see in broad daylight; I imagine it would be a real pain to determine orientation in the variable light of a commute.
Apple is allegedly working on some new noise cancelling headphones that can auto-detect left/right side as you put them on. That sounds brilliant. These might go on eBay if that happens!

Headphone cup rotation

When I take the headphones off, it would make sense for me to twist the ear cups clockwise with my right hand and anti-clockwise with my left hand. That would make the ear-pads face the surface to put them down. Instead, they actually rotate the opposite way, meaning that unless I flip them over to rest them on the desk, they go cup outer down. And that is liable to end up in scratches over the outside of the cups.

I can’t think of a good reason they go the way they do so that’s pretty annoying. At a guess so you can hang them around your neck with the cups down and out of the rain? I dunno.

Occasional audio glitches

There are occasional drops/skips in audio. Not enough to be a deal-breaker and I can’t attribute this to anything in particular. But it happens very occasionally and it is annoying. Oddly enough, I tend to get it less when I choose the ‘concentrate on audio quality’ rather than ‘concentrate on connection quality’ in the app.

Switching Bluetooth source

You can’t easily switch between two Bluetooth sources. Say you want to connect to Spotify on your desktop and iTunes or YouTube on your phone. Switching between the two sources is a pain. You need to disconnect one before you can listen to the other. I don’t think this is necessary with the Bose QC35 IIs so I hope the shortcomings can be addressed in a future update of the Sony XM3s.


On the face of it, I still think spending £290 on a pair of headphones is ridiculous. That said, this is the price of entry for high-quality noise-cancellation currently. Having enjoyed the benefits of the extra focus these Sony XM3s have afforded, I can’t help feeling they are worth it.

It is no exaggeration to say it’s doubled my ability to concentrate on what I am doing at work.

If you find your environment perpetually distracting you may find, like me, that the significant outlay is actually money well spent in the longer term.

Beginner JS tutorial: automatically make anchor ‘jump’ links with JavaScript Wed, 12 Dec 2018 20:36:53 +0000 If you’re writing documents or blog posts, it’s sometimes desirable to make a list of ‘anchor links’ that jump a reader to a different section of the document. Perhaps it’s easier to think of this pattern as a table of contents.

If you’re reading this article on a wide enough screen, you’ll see this kind of thing over on the right. Click a link and you get scooted down to the relevant heading.

No-one wants to make these things manually and, as they aren’t essential to the understanding of the document, I always make them with a little JavaScript snippet.

If you’re a beginner with JavaScript, it may interest you to understand how it works. Despite being relatively simple, we will be using a number of ES6 (latest JavaScript) language features: de-structuring, arrow functions and template literals.

Let’s go!


Before writing a line of code it is important to think through, conceptually, how this might be achieved. Initially I thought about it like this: “find every header, make a link to that header and stick all of those links into a single container”.

Then I considered that I might want to limit where I search for any headers. It might also be beneficial to change where I placed the container in the DOM. Fleshing it out a little more, here is my layman’s terms approach. “Find every header within a given target area, make a link to each header and stick all of those links into a single container, then place that container into a given area”. That sounded good enough to make a start.

Function arguments

If we want this snippet of code to be reusable, it needs to facilitate options. Historically, I’ve written options, and their default values, for functions like this:

function TOC(options) {
    var appendInto = options.appendInto || "body";
    var headerScope = options.headerScope || "body";
    var containerClass = options.containerClass || "toc-Wrapper";
    var linkClass = options.linkClass || "toc-Link";
    var hTagsToLink = options.hTagsToLink || "h1,h2";
    // Rest of function
}

// Call that function with some options
TOC({
    appendInto: "#intro",
    containerClass: "toc-Wrapper",
    linkClass: "toc-Link",
    hTagsToLink: "h1,h2",
});

With that approach we set our function up to accept a single parameter. We want that parameter to be an object. Then inside the function we ‘wire up’ the various values of the object to variables that we can then use inside the function.

When we invoke the function we pass an object to it (everything in the curly braces). The function uses those values unless one isn’t supplied, in which case it uses the alternate value (specified with the bit after the or || for each variable).

Function arguments – ES6 style

ES6 provides object de-structuring. Object de-structuring has many use-cases, but here we will use it to provide default function parameters.

I’m not going to explain the ins and outs of using object de-structuring for this use-case. Instead I am going to refer you to this excellent post on the subject:

Instead, compare that prior example to an ES6 version:

function TOC({ appendInto = "body", headerScope = "body", containerClass = "toc-Wrapper", linkClass = "toc-Link", hTagsToLink = "h1,h2" } = {}) {
    // Rest of function
}

One important thing to note here. Notice how the entire object is made optional with the = {} at the end. Without that, calling the function without passing an object, e.g. TOC() wouldn’t work.
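To see that = {} default in isolation, consider this minimal, hypothetical example:

```javascript
// Without the trailing `= {}`, calling greet() with no argument would
// throw, because you can't de-structure undefined
function greet({ name = "world" } = {}) {
    return `Hello, ${name}`;
}

greet();                 // "Hello, world" – the empty-object default kicks in
greet({ name: "Ben" }); // "Hello, Ben"
```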

Entire function walk-through

Let’s look at the whole function now, and then we can consider how it achieves the original approach:

function TOC({ appendInto = "#intro", headerScope = "body", containerClass = "toc-Wrapper", linkClass = "toc-Link", hTagsToLink = "h1,h2" } = {}) {
    let jsNav = document.createElement("nav");
    jsNav.classList.add(containerClass);
    let appendArea = document.querySelector(appendInto);
    let hTags = document.querySelector(headerScope).querySelectorAll(hTagsToLink);
    hTags.forEach((el, i) => {
        el.id = `h-${el.tagName}_${i}`;
        let link = document.createElement("a");
        link.setAttribute("href", `#h-${el.tagName}_${i}`);
        link.classList.add(linkClass, `${linkClass}_${i}`);
        link.textContent = el.textContent;
        jsNav.appendChild(link);
    });
    appendArea.appendChild(jsNav);
}

We create a nav element and add a class to it. Because of how we set up the options, even if we don’t pass something specific it will get the default class. Next we grab a reference to where we want to add this list of anchor links. Again if not specified we have a default.

Then we want to find all the heading tags on the page:

let hTags = document.querySelector(headerScope).querySelectorAll(hTagsToLink);

That might look a little complicated but it’s just building up a CSS selector from what we provide. For example, if we invoke the function like this:

TOC({
    appendInto: "#intro",
    containerClass: "toc-Wrapper",
    linkClass: "toc-Link",
    hTagsToLink: "h1,h2",
});

We haven’t provided a headerScope so it will become the default: body. So, in this instance, the line is effectively evaluating to:

let hTags = document.querySelector("body").querySelectorAll("h1,h2");

Iterating with an array method and arrow function

So, we now have all the header tags we want in the hTags variable. We will use an array method to loop through them all and create a link element for each of them. In the code block below, we’re using an arrow function but obviously a standard ES5 function would work just as well. Either way, we have el, which is a reference to the element we are iterating over, and the second parameter, i, which is the iteration count (if it’s on the third thing it will evaluate to 2, as it is zero-indexed).
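As a tiny, standalone illustration of the (el, i) pairing (the strings here are placeholder values, not part of the original snippet):

```javascript
const seen = [];
["alpha", "beta", "gamma"].forEach((el, i) => {
    // i is the zero-based iteration count
    seen.push(`${el}:${i}`);
});
console.log(seen); // ["alpha:0", "beta:1", "gamma:2"]
```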

Create a name for ids and links with template literals

In terms of what we are doing on each iteration: first we add an id to the header tag we are iterating over with a template literal.

el.id = `h-${el.tagName}_${i}`;

It looks a bit funky but if you think about the third tag found, if it’s an h3 it would create an id of h-H3_2.
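To make that concrete, here the pattern is evaluated with plain values (a standalone sketch; in the real function the tag name comes from el.tagName, which always reports uppercase):

```javascript
const tagName = "H3"; // what el.tagName gives for an h3 element
const i = 2;          // the third matched heading, zero-indexed
const id = `h-${tagName}_${i}`;
console.log(id); // "h-H3_2"
```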

This same pattern is used on the anchor links to wire them up too. We create an a anchor tag, set the href to the aforementioned pattern, add a couple of classes (one the same as all the others, one specific to this iteration, just in case it may be useful). Then we set the text of the link to be the same as the header tag. Finally we append each link into the nav element made outside the loop/forEach.

hTags.forEach((el, i) => {
    el.id = `h-${el.tagName}_${i}`;
    let link = document.createElement("a");
    link.setAttribute("href", `#h-${el.tagName}_${i}`);
    link.classList.add(linkClass, `${linkClass}_${i}`);
    link.textContent = el.textContent;
    jsNav.appendChild(link);
});

Append it where you want it!

Finally, we append the nav element into the relevant place in the DOM:

appendArea.appendChild(jsNav);

Smooth scrolling with one line of CSS

Nowadays, Firefox and Chrome support smooth scroll behaviour, so make those anchor links behave a little nicer with this one-liner in CSS:

html {
    scroll-behavior: smooth;
}

That’s all there is to it. A few lines of JavaScript and we have a flexible anchor-link-creating snippet. More importantly, if you’ve followed along you will have a handle on object de-structuring, arrow functions and template literals. Each is a great addition to the JavaScript language.

The highest paid executive Thu, 22 Nov 2018 17:37:17 +0000 Yesterday, here in the UK, various media outlets reported, in largely negative terms, about the pay of bet365’s founder, Denise Coates.

She was paid 265 million pounds according to the latest set of accounts filed at Companies House.

A couple of example stories:
The Guardian

Some relevant snippets:

Her pay is more than 9,500 times the average UK salary, 1,700 times that collected by the prime minister and more than double that paid to the entire Stoke City football team…
The Guardian

Vince Cable, leader of the Liberal Democrats, was quoted as saying,

In any circumstance it is hard to justify, but more so given the money comes from people struggling with compulsive gambling
Vince Cable


Why does someone who is already a billionaire need to take such an obscene amount of money out of their company? It is difficult to find a reason beyond pure greed
Luke Hildyard, a director of the High Pay Centre

The Guardian’s parting shot:

bet365 made a £75m donation to the Denise Coates Foundation, which mostly funds medical and education charities. The charity has not made any donations to gambling or addiction charities.

The Labour MP Jonathan Ashworth was quoted in the Daily Mail saying:

From gambling to alcohol to drug misuse we face an addiction crisis. Services are slashed, mental health services neglected. Lives are ruined while the CEO of a betting company is paid 22 times more than the whole industry ‘donates’ to treatment. Disgusting.

For the really curious, thanks to Companies House you can get all the financial information these stories are based upon by reading the latest bet365 accounts.

The other side of the story

So, that covers the negative side of things. I want to offer some alternate facts and thoughts.

Disclosure: I have been working for bet365 since 2012. Although working most days no more than 50 metres from Denise Coates, I’ve never exchanged so much as pleasantries with her. I have no idea what she is like personally. I offer that information merely to highlight that what follows is entirely my own opinion.

End of disclosure.

Reading the various stories in the last 24 hours I was struck by the complete lack of balance. Most noticeably the BBC which, as a public service, at least has a remit to impartiality.

I didn’t see anything heralding the fact that this pay was awarded to a woman, which given the inequality in this respect I found pretty surprising. In addition, nothing mentioning the tax she pays. Little to nothing about her work ethic. Jack on how the level of success was achieved and nada about the benefits that creating the company has brought to the locality.

The UK’s single biggest taxpayer

Some facts: not only is Denise Coates the highest-paid director in the UK, by extension that also makes her the UK’s single highest taxpayer. At the 45% tax rate here in the UK, that means she gifted the Treasury at least £119,000,000 in tax last year.

Let’s attempt to weigh up what that number means. Think about that like this. The most expensive surgery that the NHS has to perform is brain surgery for children. This costs around £40,000 per surgery. Denise Coates has, single-handedly, with the tax she has paid, effectively picked up the tab for 2975 of those surgeries! Or think of it as over 14,000 knee replacements if you would rather.

She single-handedly paid over 7 times the amount of tax that Facebook paid (it paid just £15.8 million in tax last year), or 24 times more than Amazon (just £4.5 million).

Apple, the world’s most profitable company, paid comparatively little more tax than Denise Coates alone: £136,000,000 in tax, and only then after an investigation by HMRC.

The hardest worker

I’ve not worked at bet365 since its inception but I can tell you, based upon multiple independent anecdotal reports, that getting bet365 to where it is now was no mean feat for Denise and her family.

As successful as it now is, you would be forgiven for thinking there would be little need for her to be in the office; just phone in from [insert luxury retreat of your choice] occasionally.

Nothing could be further from the truth.

Happen to be in the office on a Saturday? Guess who else is? Working late to finish something off? Guess who else is?

Day in.

Day out.

Nothing about this in the media. No admiration of the work ethic and dedication required to run a company of this scale.

Bottom line. 99.99999% of people would not put in the kind of effort needed to create and sustain something of this magnitude. Yet 99.999999% of people think they deserve the spoils it generates all the same.

Benefits to the locality

bet365 is Stoke On Trent’s largest employer. About 4000 employees and counting.

I have lived in and around Stoke-On-Trent my whole life. It doesn’t have a great reputation. It’s largely fair criticism.

However, thanks to the pull of bet365, for the technically minded, I have the pleasure of a number of acquaintances that just wouldn’t live in and around the area were it not for bet365. There are employees from every one of the many countries bet365 operates in. They enrich the area. They base their lives here. They send their children to school here. They pay their taxes and spend their income in and around here.

It didn’t have to be this way. The directors could have moved. Could have sold up. Could have avoided paying tax. They did none of those things. They stayed, they pay their taxes, and the area is better for it. Stoke-on-Trent now has top talent in the area. Talent that would otherwise have lived and worked in London or Birmingham and the like.


It’s tough to try and sell the idea that somebody deserves 265 million pounds for a year’s work. However, I certainly don’t begrudge it anyone either. Especially when that person is generating 75 million pounds for charity in the same year. What have I done in that regard? What about you? If that money is earned fairly within the realms of the law, who am I or anyone else to cast aspersions?

Every man is guilty of all the good he didn’t do.

Update 27.1.19

The BBC reports that the UK newspaper ‘The Sunday Times’ has revealed a first ever ‘highest taxpayers’ list. Guess who is in the No. 2 position? That’s right. Denise Coates, along with her brother John and father Peter, paid at least 156 million pounds in tax last year. £99 million of that came from Denise’s salary alone.

Social media is a failed experiment Thu, 18 Oct 2018 11:24:10 +0000 I have never met anyone whose life is truly enriched thanks to social media. I’ve met plenty of people whose life seems worse due to it.

People, what are we still doing here? It’s time we accept social media is a failed experiment.

I’ve had a version of this post in my drafts for over 12 months. Reading another post in a similar vein prompted me to finish this and click ‘post’. If you only have time to read that or this, go read that instead. It’s better.

My social media interactions

I’ve had only a fleeting relationship with social media over the years. The fundamental premise of social media has never sat well with me. The idea that you would publicly share trivial, yet intimate details of your life with complete strangers, goes against every natural instinct I have. This feeling is exemplified by my limited social media usage:

I managed one day on Facebook before deleting my account – that was back at the beginning of Facebook. I managed a little longer on Instagram; a month or so.

I have had a Twitter account for much longer though.

I recognise I am not typical in this respect. It’s perhaps therefore far easier for me to renounce social media. However, I think my conviction in this regard is in large part because I am old enough to remember social interaction before social media. I’m convinced it was better before. I’m confident this isn’t just the rose-tinted glasses of nostalgia.

I opened a Twitter account from the outset primarily to promote my writing. This was my tech books for the most part but, at the time I joined (2011?), Twitter had also become the de facto place to announce a blog post to the world. You have to be on social media, people told me. People are ditching RSS, they said. And so I did my best to play ball.

My followers on Twitter, while limited in numbers (approx 4000 at time of writing) include many web professionals I greatly admire.

At the beginning of using Twitter, seeing peers I had learned so much from over the years ‘follow’ me offered some kind of vindication for my work.

In the subsequent years of using Twitter, there were points when having a Tweet or link to a post I’d written tweeted by a ‘big-hitter’ luminary would send my ‘like’ notifications into relative meltdown giving me ‘something’; perhaps social media’s closest thing to euphoria?

At the other end of the scale, while I have suffered nothing like the horror-show trolling I’m aware others have suffered, I’ve always felt discourse on social media has been the lowest-value discourse I have engaged in.


The biggest failings of discourse by social media became apparent when trying to discuss topics with peers. As an illustration, compare and contrast: Tyler Sticka wrote a post on the Cloud Four blog about icon fonts. I had some counter-points and wrote a blog post in response. Tyler commented on this. He showed me respect and I had the utmost respect for him. Adults discussing something like, er… adults. All good.

The same subject was ‘debated’ on Twitter with other high-profile web professionals. Detail was lost, nuance was lacking. It was, as far as I am concerned, utterly pointless. It’s hard enough to navigate differences of opinion by email, where tone and intention can be misconstrued. With a limited character count I found it futile. There’s also a weird thing on Twitter where follower count seems to play into the validity of someone’s argument and opinion. And I find that stinks of bullshit.

I think on-line forums are the best medium to discuss things of this nature. I remember vBulletin boards and the like were all the rage a few years ago and I think social media largely killed their use off. I think that was an error. I’d love to see a resurgence of their use.

You’re paying, but are you getting your money’s worth?

It seems so redundant to point out, but we are all creating the content that keeps social media platforms going. If I think objectively about it, the time I spend looking at social media, and sometimes posting, doesn’t seem like a good investment. Sure, there are occasional funnies, but there is nothing I see that couldn’t be better consumed in a different way.

The drip, drip of social media is just completely inefficient. The more we interact with each other and consume content this way, the more we seem to lose the order and civility we have taken centuries to carve out. We, as humans, are losing the ability to batch process tasks. We’re training ourselves to feed on drips of content. Notifications on our handsets are turning us all into so many Pavlovian dogs.

We check in on social media while making a drink, nipping to the loo, waiting for the train, waiting for our coffee to be poured, in the moment it takes our children to search for their reading book or brush their teeth (seriously Ben, you’re an idiot) etc.

Does that stop us checking again in 5 minutes? Nope, we just check again at the next 10 spare seconds that come along. Instead of ever being completely in the moment, this clamour for ‘something’ is mentally tugging at us; some weird inescapable urge to see if anything happened (note: it didn’t).

Add that all up. The small and transient highs of ‘good’ social media interaction in no way compensate for the hours invested. At least not for me.

And that’s primarily when we are by ourselves. What about how we use it when amongst others?

Public interactions

Consider this scenario. You’re sat with one other person having a drink and talking. Your phones are, inevitably, on the table. Like two modern-day gun-slingers ready to ‘draw’ as soon as a buzz is heard.

A notification comes through and both parties glance down. And here’s the thing. More often than not, we are disappointed when it isn’t our phone buzzing! Not because we are expecting something important, just because!

Sometimes, whatever is buzzed through is interesting enough that we just pick it up and engage with that instead for a moment. Sometimes it is not. We should be annoyed at the intrusion but subconsciously at least, I think we often welcome it.

What are we actually saying to the other person here? I think, whether we mean to or not, we are basically saying, “I’ll converse with you, but regardless of what you say, if anything more interesting pops up, I’m ditching you for a few seconds”.

Is that how much we value the other person sat with us?

Setting examples to the next generation

When it comes to children I have two issues around social media: privacy, and being in the moment, the problem we just considered.


I have children. Unless I have done something wrong, you won’t easily know what their names are, how old they are or what they look like.

This is because they are children. They have a right to anonymity and I consider it my responsibility to provide it until they can do so themselves. At some point they may make the decision to change that but I won’t be making that decision for them.

Parents need to ask themselves some serious questions here. Weigh up the delight you get from comments/likes from your followers/friends on social media when you post a picture of your kids against their basic human right to privacy. How would you have liked getting to 16 years of age and having countless episodes from your ‘private’ life exposed and searchable by anyone?

Think about that.

We are bringing up the first generation of humans, outside of an oppressive regime, who don’t have the basic human right of a private personal life as a given.

It’s insane.

The moment

The other consideration is that I’m sometimes not in the moment. There are occasions one can’t be in the moment. But reading/posting something on social media is not one of them.

Confession. I’ve found myself sneaking a look at my phone whilst my child reads aloud to me. There is so much wrong with that tiny action I don’t know where to start. You know that, right?

If our children see that every time Dad waits for something he needs his phone, what are we teaching them? Kids pay far more attention to what they see us do compared to what we tell them to do.

What are we teaching the next generation? That all moments should be filled consuming something? That someone is only worthy of your time until something better comes along?

The endless pursuit of vapid information and interaction is destroying the fundamental utility of sharing time and conversation with others.

Moments of nothing

Social media is stealing the respite our brains need. When I wait 10 seconds for a drink, why can I no longer just wait? Simply alone with mere thoughts.

When we wait longer than just a few seconds, we are no longer just pondering. Every possible spare brain cycle is being used for consumption. Nothing left for just thinking – thinking without input.



Intuitively, I feel that is a misstep. I think there is great utility in ‘letting your brain breathe’. I can’t categorically tell you why. I just feel it.

I think if you are honest with yourself, you feel it too?


I don’t know where all this leads me other than the title of this post. I feel, now more than ever, that Social Media is a failed experiment. Rather than bringing people together I feel that, for the most part, it pushes them apart.

Instagram has become the de facto app for generating ‘fear of missing out’. At best it seems to incubate resentment for the (admittedly false) image of others’ ‘perfect’ lives. I’m not sure what I would substitute it for, as it doesn’t fulfil any need I have.

I don’t have any direct experience with Facebook other than the observation that people I know that spend any degree of time on Facebook seem to be continually falling out with other people on Facebook! It seems to be the go-to location for tittle-tattle and small-minded mud-slinging.

Twitter seems little more than an announcement platform. There are moments it comes into its own, such as live incidents or response to current events. Sadly this is usually some manner of tragedy. However, day-to-day, it does nothing for me that an RSS feed doesn’t do better. Or, for lots of small snippets of information I find far more utility in newsletters.

Where do I go from here? Firstly, I need to extricate myself from Twitter. I also harbour desires to ditch the iPhone and move to a Classic/Feature phone. But I want Audible for my commute. I need email. Above all, I really don’t want to do without the convenience of a decent camera/video. Perhaps, though, that would also help me experience a moment rather than trying to capture it? I’m not quite there yet.

But I’m striving to.

Notes on prototyping Wed, 03 Oct 2018 13:42:55 +0000 The majority of my working days for the past 5+ years have been filled with building front-end prototypes.

In this post, I’d like to extol the many virtues of prototypes and some observations on the process.

I’m not a fan of prescriptive advice on the web, but I’m going to stick my neck out here with the assertion that if you are working on a project of any real size, and not prototyping, you’re doing it wrong.

Prototype or proof of concept

There is an important distinction to make up-front. That is the distinction between a ‘prototype’ and a ‘proof of concept’.

The definition of ‘prototype’ in my system dictionary:

a first or preliminary version of a device or vehicle from which other forms are developed: the firm is testing a prototype of the weapon.

Compare this to the definition of ‘proof of concept’:

evidence, typically deriving from an experiment or pilot project, which demonstrates that a design concept, business proposal, etc. is feasible: the company was awarded the contract on the strength of evaluation, proof of concept, and budget | [as modifier] : proof of concept trials | [count noun] : as a proof of concept, he set up the system to monitor Twitter for specific hashtags.

From the front-end web development perspective, I would make the distinction thus.

A prototype is a visually and interactively correct implementation of a design idea made with the technologies it will ultimately be delivered in.


A proof of concept is evidence, in whatever technical form, to suggest the intended design goal may be feasible. However far-reaching a proof of concept, if not made in the technologies the design should ultimately be delivered in, it does not qualify as a prototype.

Imagine you are designing a web-based weather application. In my terms, creating something to show going from a summary view to a detail view, created with a tool like Framer or Principle, is a proof of concept.

Creating that same interactivity with HTML, CSS, SVG and JavaScript would be a prototype, regardless of whether or not the application is actually wired up to any real weather data.

Why prototype

I believe creating prototypes of intended functionality offers the greatest information to cost ratio when compared to any other deliverable, prior to sending the actual ‘thing’ into the wild.

If you skip a prototype, flat designs of intended functionality, however well-considered and aesthetically pleasing, can ultimately be exposed as flawed when seen living and breathing in hand.

That’s not to say that by prototyping you can always skip design or proof of concept; sometimes they are essential pre-cursors to a prototype. However, nothing else can confirm categorically whether or not a new feature/product/amendment actually works.

Being able to get to the end of any potential product blind alleys quicker is a time, and therefore cost, saving for all. A prototype allows testing the intended feature to complete satisfaction and fidelity in advance.

If you end up prototyping 3 or 4 versions of a feature only to discover none of them are quite right, congratulate yourself and your co-workers. You have just saved countless hours of other developer work and a likely risible experience for your users. The comparative cost of one or two developers fully prototyping a feature is a pittance compared to mobilising whole departments and organisations into action creating a fundamentally flawed new feature.

A prototype is the design and development equivalent to a war game as opposed to risking lives in a real war.

Speed vs accuracy

The line to walk when prototyping features for your product is weighing up speed against accuracy.

When I talk to designers, they are often of the opinion that the fidelity of the prototype is very important in determining whether something is actually working or not. I typically agree.

Some problems reveal themselves very early, when fidelity is lower, others take longer and subsequently higher fidelity to expose.

When there are competing solutions to explore it is often essential to ‘get to the end’ of each as a fully realised and full-fidelity prototype before weighing each up against the other. I have lost count of the number of times elements of an abortive but fully realised prototype have been key ingredients in a new solution, often the hybrid of two or more separate previous approaches. Seeing those failed attempts in their full glory allowed the cherry-picking of their positive attributes.

Producing something of high fidelity, perfectly matching any flat designs, runs counter to the notion of creating something quickly. The challenge for the prototyper is therefore how to cut corners that don’t impact the fidelity of the prototype.

My go-to list:

Ignore the functionality of any unrelated UI

Working on a feature related to your main navigation? Perhaps it is a floating action button that makes a radial menu appear. This is the area you are testing. You probably don’t need to build in the functionality of what the other buttons of the nav do. Ignore them. This category of choices is also covered in the section on remit below.

Mock-up data

Whilst occasionally beneficial, hooking a prototype up to live APIs can be fraught and introduces another vector for failure. Unless you absolutely have to use real data, mock-up server responses and data using JSON or JavaScript objects.
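Returning to the earlier weather application example, a mocked response can be as simple as a plain object, with a fake ‘fetch’ wrapped around it so the prototype still exercises the same asynchronous code paths a real request would. This is a minimal sketch; the data shape and the fakeFetchForecast name are invented purely for illustration:

```javascript
// Hypothetical mocked API response for a weather prototype: a plain
// object standing in for whatever a live endpoint might return.
const mockForecast = {
    location: "Manchester",
    days: [
        { day: "Mon", high: 18, low: 11, summary: "Cloudy" },
        { day: "Tue", high: 21, low: 13, summary: "Sunny" }
    ]
};

// A fake fetch that resolves with the mock after a short delay, so the
// prototype's consuming code stays asynchronous, just like the real thing.
function fakeFetchForecast() {
    return new Promise(resolve => setTimeout(() => resolve(mockForecast), 50));
}

fakeFetchForecast().then(forecast => {
    console.log(`${forecast.location}: ${forecast.days.length} day forecast`);
});
```

When the real API is ready, swapping fakeFetchForecast for a genuine fetch call is a one-line change, and nothing else in the prototype needs touching.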

Ignore contexts until you can’t ignore them

If the main audience for your intended feature is mobile, don’t concern yourself with how this version will work/look on large screens at the outset. Conversely, if you’re making a dashboard for a primarily desktop audience, confine your prototype to figuring out that problem before concerning yourself with small screens.
At some point, when a prototype has answered all other questions you can turn your attentions to the areas you ignored.

The remit of a prototype

The remit of a prototype can, and almost always does, change in relation to how successful it is. It is, however, very useful for the prototyper to get a solid steer from relevant parties on what the remit of the prototype is at each stage. This ensures no-one is surprised or disappointed.

This doesn’t need to be a big hullabaloo; just a simple agreement and list of things to hold on to. This isn’t just to safeguard the prototyper from scope-creep, it is just as useful to pull the prototyper out of the weeds when they find themselves 2 days into writing a touch controls library that they probably don’t need to be writing given the remit.

If you are the person to be building prototypes, know this. Things will change. This is the whole point of prototyping. The rubber will hit the road and then hit the road again in a different way — ad infinitum. This is the very nature of prototyping. Your job is to facilitate this as quickly and easily as possible.

Facilitating change

If you are not already intimately familiar with Git, it’s time to get familiar. You will often find yourself hitting philosophical forks in the road. Two or more possibilities that require exploration in a prototype. Be disciplined in stopping at that point and branching.

Truthfully, occasionally I still find myself heading down a fork before realising I should have forked. But the sooner you, dear reader, can prevent yourself doing so, the easier your prototyping life will be.

Tangentially related, another discipline to master is to ensure work on that branch only facilitates investigation of the subject the branch was created to explore. Start fixing aesthetic things common to more than one branch and things will end badly when you come to discard the current branch. Easier said than done, I know.

Ultimately, whatever work and procedures you can put in place that will facilitate you being able to simply and rapidly adapt to change will serve you well.

What a prototype should do and what it shouldn’t

Earlier I stated that I believed a prototype should be, “… a visually and interactively correct implementation of a design idea, made with the technologies it will ultimately be delivered in.”

With this in mind, it is for the prototyper to decide how to facilitate that. If meeting that objective is best served with a library such as Ember, Vue or React then fine. However, I would typically caution against introducing such complexity unless absolutely necessary. Can you achieve the same goal quicker, and just as maintainably, with vanilla HTML, CSS and JavaScript?

An eye on production; strong opinions, loosely held

How involved the prototyper will be in the eventual development of the final solution will likely inform how they author the prototype code. If any of the prototype code can be written in such a way as to be transferable to production environments, so much the better. However, I would discourage being too prescriptive about intended implementations.

For example, I always write my CSS using ECSS as it is highly transferable and no greater effort to author than any other CSS methodology. However I make no attempt to make interactivity in JavaScript compatible with production as there are likely a billion considerations in that world that would stunt prototyping flexibility and speed.

My current stance is that if the feature, once complete and delivered to the world, looks and behaves exactly like the final prototype — I couldn’t give a monkeys how it was coded! As a prototyper, i.e. someone not building the feature in production, it is probable you lack the context to fully understand all the choices and considerations of that arena. Therefore, try not to be prescriptive about ‘how the sausage is made’.

Temper that with the fact that if the final result is lacking compared to the prototype, when it could have equalled it had they followed your approach, the developer responsible needs their head wobbling.

More on prototyping

Contractually, I’m not able to disclose details and specifics of the nature of the prototypes I work on day-to-day.

However, if you want any further evidence and background of prototyping and its importance in product development, I’d refer you to ‘Creative Selection’ by Ken Kocienda, which is an inside account of Apple’s creative process. It includes plenty of great anecdotes and evidence as to the efficacy of prototyping (‘demos’, in their parlance).


In web front-end, prototypes are the gold standard for proving the validity of a new feature.

  • Prototypes should be made from the same technologies the eventual solution will be delivered in. Otherwise, they are merely a proof of concept.
  • Time spent exploring feature ideas as prototypes pays for itself 10x.
  • Prototypes avoid the expensive development time of fundamentally flawed features and implementations.
  • Working on prototypes requires developers happy to be constantly making and throwing away code. Happy to change course as required and happy to create a development environment that facilitates that need.
  • Introduce complexity to prototype environments only if needed.
  • A prototype shouldn’t be prescriptive about the production implementation, unless the production implementation stands to be inferior, from the user’s perspective, to the prototype.
  • Prototypes often need following through to complete fidelity before they can be categorically dismissed.
  • Keep fidelity high but scope low. Mock data, ignore unrelated UI and contexts.
The frustrations of using CSS Shapes and CSS Exclusions Mon, 23 Jul 2018 15:42:09 +0000 I don’t like to write ‘moany’ and negative articles. That’s not what you need.

But indulge me as I explain my current feelings of woe with CSS Shapes and CSS Exclusions.

The promise of CSS Shapes

Around 2014, articles surfaced extolling the virtues of the forthcoming CSS Shapes implementations.

Namely, the ability to have non-rectangular shapes for content via the power of CSS. Think about the gazillion magazine articles you have seen where text flows around images; basically, that for the web.

Fantastic. We have wanted this in CSS since forever.

Back then I got enthusiastic about the possibilities but as support was scant, I mentally parked CSS Shapes up until they were something that could be used in anger and I had a suitable use-case.

Current state of browser support

Fast-forward to 2018 and support is much better. Take a looky-see at

They have been in Safari and Chrome for a while and we finally (from v62) have them in Firefox.

IE and Edge have no support. However, as they are edge browsers (pun intended) with very low market share I could work around that.

By the way, if you are an Edge user, you can cast a vote to get support added at the Windows Developer Feedback site.

It’s been in the backlog with a priority of ‘medium’ since 2015 so don’t hold your breath. Not quite the 18+ years we have been waiting for custom scrollbars in Firefox but you know, hardly a break-neck pace either.

Anyway, don’t start hating on IE and Edge until you have read the rest. No-one really comes out of this smelling of roses.

How to use CSS Shapes

The first paragraph of the CSS Shapes Module Level 1 specification provides this description:

CSS Shapes describe geometric shapes for use in CSS. For Level 1, CSS Shapes can be applied to floats. A circle shape on a float will cause inline content to wrap around the circle shape instead of the float’s bounding box.

Suppose you have a bunch of flowing text and you want to let it flow around a 200px circle. The markup might look like this:

<p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Repudiandae vel possimus voluptatibus culpa eius ea totam animi fugiat, laboriosam repellat, nisi obcaecati ab natus! Nisi quasi sapiente nulla libero non?</p>
<div class="shape"></div>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Laborum, voluptatem nobis. Ex alias, perspiciatis accusantium ab ad magni quaerat minus vel accusamus soluta adipisci expedita numquam reprehenderit sequi veniam eveniet!</p>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad at tempora aut earum. Quasi quaerat vel perspiciatis totam deleniti ullam eius dolore magni aliquam harum necessitatibus sunt accusamus, delectus animi.</p>

And your CSS could be:

body {
    max-width: 500px;
    margin: 0 auto;
    font-size: 1.2rem;
    line-height: 1.5;
}

.shape {
    height: 200px;
    width: 200px;
    background-color: #e4e4e4;
    shape-outside: circle(50%);
    float: left;
}

The main ‘lorem’ text will now flow around a 200px circle shape. However, if you have a background-color on your shape element, you would be forgiven for thinking “WTF” when the shape itself is still square:

See the Pen CSS Shapes 1 by Ben Frain (@benfrain) on CodePen.

Turns out you will need to add a clip-path to the shape div which is the same as your shape-outside. So, here goes:

.shape {
    height: 200px;
    width: 200px;
    background-color: #e4e4e4;
    shape-outside: circle(50%);
    clip-path: circle(50%);
    float: left;
}

That gives us this result:

See the Pen CSS Shapes 2 by Ben Frain (@benfrain) on CodePen.

Then you can play with shape-margin to add a little space around the shape itself if you need it.

With CSS Shapes, you can have basic shapes like circle() or ellipse() or pass in values to create a polygon. When it comes to polygons, the best advice I have is to use Bennett Feely’s ‘Clippy’ tool, choose the custom shape and go wild.

You don’t have to use coordinates with CSS Shapes. You can also flow text around an image based on the Alpha channel. Pretty cool, right?

I thought so, so jumped to the first practical example I thought of.

What about a pull quote?

With CSS Shapes we can flow text in irregular shapes. Great. I’m thinking I’ll make a great pull-quote for the blockquotes on a blog site.

I’ll take a polygon shape, add some content for the pull-quote that will sit inside and I’ll liberate my content from the rectangular world it has grown accustomed to. Easy right? Not so fast Bat-man.

See the Pen CSS Shapes 3 by Ben Frain (@benfrain) on CodePen.

There is no shape-inside

See the text inside the shape getting clipped? I must have set something wrong surely? No. I’m afraid not.

Turns out, what is needed here is shape-inside. That would allow us to define a shape inside the element which internal text can flow within.

Turns out shape-inside isn’t implemented anywhere yet; it’s a CSS Shapes Level 2 property.


I’m pretty sore about that.

I suppose we can kind of work around that by setting some internal padding on the shape that will safeguard the content from the ‘shape’ of the shape. Not ideal, but arguably workable depending upon the use-case.

Moving on, the next thing I wanted to do was pull the quote out of the left-hand visual flow a little.

Let’s make the shape relatively positioned and apply some negative positioning to achieve that.

left: -100px;
position: relative;

Here is the effect:

See the Pen CSS Shapes 4 by Ben Frain (@benfrain) on CodePen.

The pull-quote does indeed come left by -100px but the shape of the text just kind of stays there. Hmmm. Not exactly what we were after.

At first I thought this was a browser quirk but Firefox does the same. So, this is intentional. I’m sure there is a sensible reason for this.

A read of the specification tells me that this will happen but doesn’t offer an explanation as to why it makes sense to do so. Bummer.

Regardless, I don’t like to give in easily. Let’s use calc() to amend the coordinates in that prior polygon:

shape-outside: polygon(calc(75% - 100px) 0%, calc(100% - 100px) 50%, calc(75% - 100px) 100%, 0% 100%, 0 50%, 0% 0%);

That way, we are taking off the 100px from the horizontal percentages. OK, here we go:
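If you find yourself adjusting polygons like this often, the calc()-laden string can be generated rather than hand-edited. The following is a hypothetical little helper (not part of any CSS Shapes API) that shifts each non-zero x-coordinate of a polygon left by a pixel offset:

```javascript
// Hypothetical helper: wrap each non-zero x-coordinate of a polygon in a
// calc() expression that subtracts `offset` pixels, leaving zeros as-is.
function shiftPolygonX(points, offset) {
    return "polygon(" + points.map(([x, y]) => {
        const shifted = (x === "0%" || x === "0") ? x : `calc(${x} - ${offset}px)`;
        return `${shifted} ${y}`;
    }).join(", ") + ")";
}

// The vertices of the pull-quote polygon as [x, y] pairs.
const vertices = [
    ["75%", "0%"], ["100%", "50%"], ["75%", "100%"],
    ["0%", "100%"], ["0", "50%"], ["0%", "0%"]
];

console.log(shiftPolygonX(vertices, 100));
// polygon(calc(75% - 100px) 0%, calc(100% - 100px) 50%, calc(75% - 100px) 100%, 0% 100%, 0 50%, 0% 0%)
```

The generated string can then be assigned to the element’s shape-outside via its style property, or simply pasted into your stylesheet.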

See the Pen CSS Shapes 5 by Ben Frain (@benfrain) on CodePen.

Right, the essence of the technique seems to work. I should probably do that left-pull thing with CSS Grid but that’s another matter.

I’m not sure if it’s my own ineptitude and lack of practice with CSS Shapes but I don’t find working with CSS Shapes particularly predictable or fruitful. I’ve worked around one issue to get the result I needed but I’m left with the less than ideal situation of the text inside a shape not adhering to the shape it sits within.

Another shortcoming is that with CSS Shapes, as they make use of floats, you are limited to the shape having text flowing to the left or right but not both.

CSS Exclusions

While IE/Edge have failed to implement CSS Shapes, they are the only platform to ship CSS Exclusions. The specification for CSS Exclusions is here:

Here’s a line from the abstract:

CSS Exclusions define arbitrary areas around which inline content ([CSS21]) can flow.

Wow, doesn’t sound that different to CSS Shapes really. One thing flows around another.

From a cursory read through the specification, it also allows text to flow around non-floated elements. You have some granularity about how things flow with the wrap-flow property too. Plus using z-index to set the exclusions order. Very cool. But…

Oh, wait. These exclusions are just for rectangular boxes. No shapes. Just boxes.

And just Microsoft Edge support.

So. About as much use as a chocolate fire-guard. Why read any further? Sad trombone time.


I don’t know what’s going on here. As an outsider, coming to these features fresh in 2018 this situation seems like a bit of a car crash.

Safari, Chrome and Firefox have CSS Shapes and no Exclusions; Microsoft has CSS Exclusions and no Shapes. What a great situation for a developer!

Right now I’d take CSS Shapes with shape-inside. I could make things happen with that. I’d also find Exclusions useful but they look about as likely to drop across the board as custom scrollbars making their way into Firefox.

I guess we all just need to hold on until CSS Shapes Level 2?

OK, enough grumbling. I promise the next post will be more positive.

An introduction to the JavaScript MutationObserver API Wed, 18 Jul 2018 19:35:58 +0000 I had a play with the JavaScript MutationObserver API recently and came away very impressed. I’m already considering all the places I could probably tidy up code by making use of them. In case you haven’t heard about them before, here’s a little primer.

MDN describes the MutationObserver interface as:

The MutationObserver interface provides the ability to watch for changes being made to the DOM tree.

Think of it like an Event Listener for changes to DOM elements and I don’t think you’re far off.

Support is also good: back to IE11, plus all the ever-green browsers on desktop. On mobile, it’s Android 4.4+ and iOS 6+.

A basic example

At this point let me demonstrate a quick example. Suppose we have a contenteditable piece of text and we want to do something when a user edits that text. For this example, we’d like to know what the text was before the user pressed a key.

See the Pen MutationObserver by Ben Frain (@benfrain) on CodePen.

So, given this markup:

<div class="container">
    <div class="value" contenteditable="true">type something here</div>
</div>

<div class="previous"></div>

We can use this JavaScript code:

const container = document.querySelector(".container");
const previous = document.querySelector(".previous");

const mutationConfig = { attributes: true, childList: true, subtree: true, characterData: true,
    characterDataOldValue: true };

var onMutate = function(mutationsList) {
    mutationsList.forEach(mutation => {
        previous.textContent = mutation.oldValue;
    });
};

var observer = new MutationObserver(onMutate);
observer.observe(container, mutationConfig);

On each keypress we can see what the prior string of text was. We don’t have to store off the existing value to a variable or worry about listening to keyup events; it’s just there in the MutationRecord that gets provided with each mutation. If you log out the mutation inside the forEach above you can see the MutationRecord in the console. I’ve listened to characterData but if you were inserting/removing DOM nodes you could see that detailed too.

Anatomy of writing a MutationObserver

Right, now we get what they can do for us, what’s the code above actually doing?

First off, we’re just grabbing the container element. Notice that we are grabbing the parent of the element where the changes happen? That’s because you can set the scope of a MutationObserver to be as tight or as wide as you need. We are also grabbing the `previous` element, where we write in what the text was previously.

const container = document.querySelector(".container");
const previous = document.querySelector(".previous");

Next is the configuration that I’ll want to pass to the MutationObserver presently:

const mutationConfig = { attributes: true, childList: true, subtree: true, characterData: true,
    characterDataOldValue: true };

I don’t need to separate that into its own `const`; I could just as easily pass it in when I call the method, like this instead:

observer.observe(container, {
    attributes: true,
    childList: true,
    subtree: true,
    characterData: true,
    characterDataOldValue: true
});

Next up is the function I want to run when any mutations are observed. I have inventively called this onMutate:

var onMutate = function(mutationsList) {
    mutationsList.forEach(mutation => {
        previous.textContent = mutation.oldValue;
    });
};

This is passed the mutation list and for each of those (the mutation) I am writing the oldValue from the mutation into the DOM. At the risk of stating the obvious, you can do whatever you want here, given the enormity of what’s available in the MutationRecord.

You actually create a MutationObserver with the new keyword, naming the callback you want to run when a mutation is observed:

var observer = new MutationObserver(onMutate);

Now we have our observer, we can set it observing like this:

observer.observe(container, mutationConfig);

We pass the element we want to observe and the configuration by which we should process any mutations.

Options for MutationObserver

It is worth knowing that the MDN page currently omits details of the options available in the MutationObserver config. This is detailed in the specification at

For completeness, they are:

  • childList
  • attributes
  • characterData
  • subtree
  • attributeOldValue
  • characterDataOldValue
  • attributeFilter

Of interest for our little demo are the characterData and characterDataOldValue options. Without these we wouldn’t see anything. The options are a great way to tune out some of the noise based upon your requirements.
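That same tuning can also be done after the fact, in the callback. As a toy illustration, using hand-made objects shaped like MutationRecords (not real ones from a browser), a hypothetical filter over the records list might look like this:

```javascript
// Hypothetical filter over MutationRecord-like objects: keep only the
// records whose type is in the list we care about.
function filterMutations(records, types) {
    return records.filter(record => types.includes(record.type));
}

// Hand-made stand-ins mirroring the shape of real MutationRecords.
const records = [
    { type: "characterData", oldValue: "type something here" },
    { type: "attributes", attributeName: "class" },
    { type: "childList" }
];

const textChanges = filterMutations(records, ["characterData"]);
console.log(textChanges.length); // 1
console.log(textChanges[0].oldValue); // "type something here"
```

In practice you would prefer the config options where possible, as they stop the noise ever reaching your callback; a filter like this is only for finer distinctions the options can’t express.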

The MutationObserver also has a takeRecords() method, which returns whatever is in the record queue, and a disconnect() method, which stops the observer.


The MutationObserver API seems to provide a very clean way to deal with DOM changes outside of the more usual input/form element handlers. Support is excellent and the API is mercifully simple and, for me at least, very logical.

If, like me, you hadn’t tried them out previously, I’d encourage you to give them a whirl.


Robert Smith pointed out (via Twitter) that Kent C. Dodds has a DOM Testing Library that makes good use of MutationObserver. This via Kent on Twitter:

dom-testing-library’s waitForElement uses MutationObserver to know as soon as possible when to call your callback and check whether the element you’re waiting for is available! Very interesting API!

Examples are definitely out there in the wild although it seems there isn’t as much take-up as there perhaps should be.

Creating a Sketch plugin with JavaScript Thu, 28 Jun 2018 11:30:00 +0000 Not too long ago, Bohemian Coding created a JavaScript API for their design application, Sketch.

The API is now friendlier than it originally was. If you looked at it initially and were put off, I’d encourage you to take another look. The documentation isn’t perfect. However, I was able to muddle through and produce a plugin in short order – and I am no JavaScript wizard!

This post provides a bird’s-eye walk-through. It covers how to make and update a simple plugin for Sketch with JavaScript. I’d suggest you don’t concern yourself too much with what this plugin does; it was purely to scratch my own itch. Instead, consider the possibilities of what you might do with the API.

In a prior post I described making a language switcher with JavaScript. This was to test whether the longer and shorter equivalent strings of different languages work in the same situation. Longer strings can dictate different design choices; it’s useful to know these things up front. A colleague suggested this functionality would be useful in Sketch.

This seemed like a golden opportunity to further cut my teeth with JavaScript. This became the challenge to surmount with JavaScript and the Sketch API.

The Sketch developer documentation gives a great starting point for making a Sketch plugin. Particularly the ‘Your First Plugin’ page. Follow that tutorial through so your dev environment is set up. Then you are ready to make something more meaningful.

Initially, I had an issue installing the Sketch Package Manager. I had a prior version of Node caching something or other. Long story short, if you have a similar issue, try running the following two commands in Terminal. Then re-install Node from

rm /usr/local/bin/node
rm -rf /usr/local/lib/node_modules/npm

I’m continuing here as if you have followed that prior Sketch tutorial through. The current developer experience is that there is a local folder for your plugin that contains the following folders:

  • assets – this folder contains any images and can also contain your appcast.xml. More on the appcast.xml file shortly.
  • node_modules – the Sketch plugin dev environment runs on Node so no surprises here.
  • src – this folder contains your plugin JavaScript file(s) alongside a manifest.json.

Also in the root are a package.json, a package-lock.json and a .gitignore. It is only the package.json that needs to be edited. Of the folders, it is only the src and assets folders you should concern yourself with while writing a plugin.

If you follow the ‘Your First Plugin’ tutorial through, you will know you can run npm run watch so that changes to your files are monitored and the plugin will re-build. Setting the flag defaults write ~/Library/Preferences/com.bohemiancoding.sketch3.plist AlwaysReloadScript -bool YES in Terminal also means you don’t have to constantly restart Sketch on each edit.

I did get in a weird situation where the plugin kept caching a local version. I was guided to a workaround on the Sketch API GitHub.

The basics of writing a plugin for Sketch in JavaScript

First of all, rename your plugin and files to something more meaningful. Open the src/manifest.json file. You can see here where I have amended mine to “translateText.js” or “Translate Text”.

{
  "compatibleVersion": 3,
  "bundleVersion": 1,
  "icon": "icon.png",
  "appcast": "appcast.xml",
  "commands": [
    {
      "name": "Translate Text",
      "identifier": "my-command-identifier",
      "script": "./translateStrings.js"
    }
  ],
  "menu": {
    "title": "Translate",
    "items": ["my-command-identifier"]
  }
}
The script key is where you reference the actual ‘meat and potatoes’ of your plugin. In this case, my main JS file was ‘translateStrings.js’ and you can name yours appropriately. The name key is the text that will appear in the menu of the Sketch interface for users.

So, I am now going to concentrate on writing the actual plugin in that translateText.js file. By the way, the completed translateText.js plugin file is included at the end of this post.

If you have ever written a Gulpfile or done anything in Node you will feel immediately at home. You import things you want to use like this:

var mlStrings = require("./mlStrings");

And your plugin should be the default export from the file. For example:

export default function(context) {
  // your plugin code
}
That is the mechanics of things. Let’s look at what you can do with Sketch. We will explore this a little by way of wiring up some parts of this language string switcher.

The remit of this plugin

I have a ‘dictionary’ of phrases as JavaScript objects inside an Array with a different language key for each value. For example:

var MLstrings = [
    {
        English: "Sausages",
        Polish: "Kiełbaski",
        German: "Würste",
        Russian: "Колбасные изделия",
        Traditional: "香腸"
    },
    {
        English: "Carrot",
        Polish: "Marchewka",
        German: "Karotte",
        Russian: "Морковь",
        Traditional: "胡蘿蔔"
    }
];
In Sketch I wanted the user to be able to select a layer(s) or symbol(s), then choose a language from a selection menu and have any matching strings swapped out with that language’s equivalent string.

It was suggested an option should exist to ‘stress test’. This would swap a selection with the longest possible alternative, regardless of language. Therefore, in the above example, if my text was “Sausages”, it would replace the text with “Колбасные изделия” as it is the longest matching string.
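The core of that ‘stress test’ behaviour can be sketched as a small pure function (the name returnLongestValue is my own; it is not from the plugin code):

```javascript
// Given one object of translations, return the longest value regardless of
// language, by sorting the values descending by length and taking the first.
function returnLongestValue(translations) {
    return Object.values(translations).sort(function(a, b) {
        return b.length - a.length;
    })[0];
}

returnLongestValue({ English: "Sausages", Russian: "Колбасные изделия" });
// → "Колбасные изделия" (17 characters vs 8)
```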

Grabbing documents, layers and symbols

So imagine that the user is in Sketch, they browse to our plugin in the menu and activate it. This is when our script gets executed. So, first of all, you might want to grab some ‘stuff’. This is covered in the ‘Document’ section of the JS API docs:

I needed the current document, the selected layers and how many layers were selected. I could grab those like this:

const document = Document.getSelectedDocument();
const selection = document.selectedLayers;
const selectedLayers = selection.layers;
const selectedCount = selectedLayers.length;

The first bit of control flow I wanted to introduce was to return early. This should pop up a message if the user activated the plugin but hadn’t selected anything. That was achieved by getting the number of selected layers and warning if it was zero:

// User hasn't selected a layer:
if (selectedCount === 0) {
    context.document.showMessage(
        "Throw me a frikkin' bone here Scott; you need to select a layer for this to work"
    );
    return;
}
The context.document.showMessage() method is what lets you show a message on the Sketch interface.

I then wanted to show a drop-down menu to the user asking them to choose a language. Plus a default menu option to ‘Stress Test’.

You can show a menu in Sketch like this:

var choice = UI.getSelectionFromUser("Which Language?", listOfLanguages);

Where listOfLanguages is an array of strings – which become the choices in the menu.


You can tell when a user has clicked ‘OK’, and which choice they made, like this:

var ok = choice[2];
var value = listOfLanguages[choice[1]];

So, the flow can go like this if the user clicks OK on the menu:

if (ok) {
    // Do this code if the user has clicked OK.
}
When the user clicked OK, I wanted to loop through each layer and any symbol overrides, switching out the text. I went for this inside the OK block:

// Once OK is clicked on the menu
if (ok) {
    selectedLayers.forEach((layer, idx) => {
        let existingString = layer.text;
        layer.text = resolveMLString(existingString);
        layer.overrides.forEach(override => {
            let existingOverrideValue = override.value;
            override.value = resolveMLString(existingOverrideValue);
        });
    });
}

You can see that the layer text is accessed with layer.text and overrides are accessed with layer.overrides.

Sketch Developer Tools and debugging

By the way, logging things out is easier if you install the Sketch Developer Tools plugin. The Sketch documentation says it polyfills console.log so that it logs to the Sketch Developer Tools console but I didn’t find that to be the case. Instead I used the log() command. It is also worth knowing you can debug with Safari web inspector. There are more details on that here

Publishing a plugin locally

You can publish a plugin publicly using skpm publish but that isn’t something I have tried. I only wanted this plugin to be available locally to team members. I did, however, want to give users the ability to update the plugin easily when new versions (read: bug fixes) were available. Thankfully there is a way to do this documented on the Sketch site.

The basics of this mechanism are having an appcast.xml file. This XML file follows the Sparkle update framework for OS X. When you want to issue an update, you add an extra item element to the appcast.xml file. For example:

<item>
  <title>Version 0.1.3</title>
  <description>
    <ul>
      <li>"Stress Test" now matches line breaks as well as spaces for substrings</li>
    </ul>
  </description>
  <enclosure url="https://yourUrl/Sketch-Plugins/Translate-Text/builds/" sparkle:version="1.1" />
</item>

Use any text you like for the title. But you need to ensure that your plugin build is accessible at the URL provided in the enclosure; that is where the update download is resolved from.

Assets folder

I had the appcast.xml in my assets folder. This meant the Sketch build tool would automatically put it into the correct folder to make the plugin. Sketch creates a folder for the plugin which is the name of your plugin plus the file extension .sketchplugin. So in my case, I had a folder called TranslateText.sketchplugin.

When you are happy with a plugin, zip the yourPlugin.sketchplugin file up, name it and place it appropriately – matching the name and url in your appcast.xml.

Updating the package.json in the project root

When you have a new version ready to upload, you need to update the version value in the package.json in the root of your project. For example, if I have just updated to version 0.1.4 I would change the value to:

"version": "0.1.4",

That value change updates the manifest.json file inside the plugin automatically, which in turn provides a reference for what version a user has installed.

In summary, if updating locally, you need a new entry in the appcast.xml and a version bump to the number in the package.json.


I wouldn’t describe it as a perfect developer experience but developing a plugin for an application like Sketch is pretty fun. With JavaScript APIs popping up for things like FitBit devices and of course already present for code editors like VS Code and Atom, day to day tools have never been more ‘tweakable’ and approachable for casual JavaScript programmers.

Appendix: complete plugin code

In case it is of interest, here is the complete contents of the translateText.js file that made up all the logic for my little plugin:

var mlStrings = require("./mlStrings");
var UI = require("sketch/ui");
var Document = require("sketch/dom").Document;
var SymbolMaster = require("sketch/dom").SymbolMaster;
var SymbolInstance = require("sketch/dom").SymbolInstance;

export default function(context) {
    const document = Document.getSelectedDocument();
    const selection = document.selectedLayers;
    const selectedLayers = selection.layers;
    const selectedCount = selectedLayers.length;
    const stressTextMenuString = "Stress Test";

    // User hasn't selected a layer:
    if (selectedCount === 0) {
        context.document.showMessage(
            "Throw me a frikkin' bone here Scott; you need to select a layer for this to work"
        );
        return;
    }

    // Create a list of languages for the dropdown
    var listOfLanguages = Object.keys(mlStrings[0]);

    // Get choice of drop-down from user
    var choice = UI.getSelectionFromUser("Which Language?", listOfLanguages);
    var ok = choice[2];
    var value = listOfLanguages[choice[1]];

    // Once OK is clicked on the menu
    if (ok) {
        selectedLayers.forEach((layer, idx) => {
            let existingString = layer.text;
            layer.text = resolveMLString(existingString);
            layer.overrides.forEach(override => {
                let existingOverrideValue = override.value;
                override.value = resolveMLString(existingOverrideValue);
            });
        });
    }

    // Utility to get the longest string from the values of an object
    function returnLongestValueFromObject(inputObject) {
        let objectValuesAsArray = Object.values(inputObject);
        return objectValuesAsArray.sort(function(a, b) {
            return b.length - a.length;
        })[0];
    }

    function resolveMLString(stringToBeResolved) {
        // Check if stringToBeResolved is actually something and we aren't being sent non-object
        if (stringToBeResolved) {
            var objectFoundInArrayThatIncludesString = mlStrings.find(function(
                stringObj
            ) {
                // Create an array of the objects values:
                let stringValues = Object.values(stringObj);
                // Now return if we can find that string anywhere in there
                return stringValues.includes(stringToBeResolved);
            });
            // We had a complete match here so send back the new string
            if (objectFoundInArrayThatIncludesString) {
                if (value === stressTextMenuString) {
                    return returnLongestValueFromObject(
                        objectFoundInArrayThatIncludesString
                    );
                } else {
                    return objectFoundInArrayThatIncludesString[value];
                }
            } else {
                // If we don't have a match in our language strings, first try a partial match otherwise return the original
                let arrayOfStringPartsSplitBySpace = stringToBeResolved.split(" ");

                if (arrayOfStringPartsSplitBySpace) {
                    arrayOfStringPartsSplitBySpace.forEach((substring, idx) => {
                        let objectFoundInArrayThatIncludesSubString = mlStrings.find(
                            function(stringObj) {
                                // Create an array of the objects values:
                                let stringValues = Object.values(stringObj);

                                // Match inside a string here
                                return stringValues.includes(substring);
                            }
                        );
                        if (objectFoundInArrayThatIncludesSubString) {
                            if (value === stressTextMenuString) {
                                arrayOfStringPartsSplitBySpace[
                                    idx
                                ] = returnLongestValueFromObject(
                                    objectFoundInArrayThatIncludesSubString
                                );
                            } else {
                                arrayOfStringPartsSplitBySpace[idx] =
                                    objectFoundInArrayThatIncludesSubString[value];
                            }
                        }
                    });
                    return arrayOfStringPartsSplitBySpace.join(" ");
                } else {
                    return stringToBeResolved;
                }
            }
        }
        return stringToBeResolved;
    }
}
Text editing techniques every Front-End developer should know Fri, 25 May 2018 20:03:28 +0000 Any Front-end developer is going to spend a lot of time typing and manipulating code. It pays to know how to ‘drive’ your editor to get the best performance.

Following are what I consider some of the most useful or perhaps underused techniques. They are techniques I think it pays to know about and that, hopefully, you are able to perform fluidly in your editor or IDE of choice.

I’m using Sublime Text and Code for the examples here but nearly all of these techniques are possible in VS Code, Atom, Brackets, Vim, Emacs etc.

Important: Don’t get hung up on the shortcut keys being used here. They differ from editor to editor and I’ve changed some of the ones I use from the editor defaults. I want to show you some techniques you may not have used or had forgotten about, not necessarily how to perform them.

Let’s begin.


Transpose

You have two lumps of code and you want them to switch positions. Select the two sections, hit your short-cut and hey presto! They are transposed.


Most of us use Git. You should have a straightforward way to add, commit and push changes. For day to day tasks, doing it in the editor saves context switching into a terminal. In Sublime I use the ‘Git’ plug-in for Sublime. Great Git tools are baked into VS Code.

Start/stop task runner

Much like the Git point above, if you use a task runner or build tool such as Gulp it’s beneficial if you don’t need to break out of your current environment to interact with it. VS Code has a great task running tool built in. In Sublime I use sublime-gulp.

Open/Close files list

This is pretty basic but know the short-cut for opening and closing your sidebar! I don’t want to see you reaching for that mouse!

Line bubbling

I’m still amazed that developers (sorry Søren) are amazed when they see this. You have a line or X lines of code and you need to shift it/them up or down. Select the text, use your short-cut and move them up or down. Bonus points to VS Code here as it auto-indents as it does this.

Split selection to lines

Say you want to turn a bunch of lines into individual items. Grab the text, hit the relevant short-cut and you have each line selected. You can then jump to the beginning/end of each line with ease!

Ragged line select

This is an alternative to splitting selection to lines. Here hold down the relevant key and use your mouse to move down the ragged edges of the text and provide a cursor point at each.

Select all occurrences

You find a selection and then want to do something with every instance of the selection. Useful for renaming a method or any other file wide change.

VS Code has an even nicer method re-name feature that lets you press F2 on a method name and it automatically updates it everywhere for you.

Select next occurrence and skip an occurrence

Suppose you have found a word and want to find the next instance. You should know the short-cut that gets you to the next, but you should also know that if you go too far you can back up, and that you can skip an occurrence and move on to the next. I don’t use this as often and regularly forget the shortcut, but it’s good to know you can should you need to.

Find symbol in project

Almost every editor apart from Sublime fails at this. The VS Code team don’t seem so bothered about implementing it for CSS, which is a shame because it is a KILLER feature for any devs that spend a decent amount of time working with CSS. Suppose you have a class and you want to edit the properties of that class declaration in your codebase. A global find is for wimps. What you need is ‘find symbol in project’. Hit the shortcut, type/paste in your class name (or other language symbol), and jump straight to the file and position in that file you want.

Jump to line

Dev tools tells you there is an error on line 234. Sure you can scroll but you should be able to jump straight there with a few key presses.

Select line

Pretty obvious this but if you are still selecting lines with your mouse you need to have a quiet word with yourself.

Expand selection

You’re in a word but you want the whole paragraph. Or you want to expand a selection out to the next set of braces. You should know how to do that with a short-cut.

Indent block

Ordinarily this seems too obvious to point out but I’ve seen people indenting their CSS property/values or JS function bodies one line at a time and it is a little painful to watch. You should know you can select a block (preferably using Expand Selection from the prior tip) and then indent it in one go.

BUT!! With the advent of Prettier I’d argue that even doing this a block at a time is a little redundant.


AceJump

This one is rarely baked into editors and I’ve known the plug-in by names other than AceJump in other editors (“QuickJump” for example) but the principle is the same. You type a shortcut and it tokenizes and labels your text. You then type the relevant keys to jump straight to that label point on the page.

Clipboard History

Going back and re-copying something because you have copied something else to your clipboard is incredibly inefficient. Sublime/Vim/Emacs have a clipboard history built in. Others like Code can have the functionality added via a plugin. Better still, use something like Alfred, which is program agnostic, so you have your clipboard history available to paste in any application.

Quick Switch Project

It should be very quick to switch from one project to another. It shouldn’t require a visit to the menus with a mouse.


Emmet

Need to create 100 divs with the same or incrementing class? Don’t even consider doing that one at a time. Learn to do it with Emmet.

Time spent learning Emmet is worth the investment, as it is a transferable skill. There are plug-ins for all editors and external services like Codepen support it too.
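As a flavour of what Emmet does (a generic example of its abbreviation syntax, not one from the post): expanding the abbreviation ul>li.item*3 produces:

```html
<ul>
    <li class="item"></li>
    <li class="item"></li>
    <li class="item"></li>
</ul>
```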



That’s it for now, if you have any great features you think deserve a mention, pop it in the comments.

It’s a great time for Text Editors with so many good choices. It’s important to know what these tools can do for you so you can invest a little time honing your skills. If most of your day is spent in a text editor, it’s surely worth driving it (or trying to) like you mean it.

Creating a language switcher in JavaScript Tue, 17 Apr 2018 10:44:20 +0000 About 90% of my working days in the last 5 years have been spent making feature prototypes for a web application.

With any designs you create that have to work across different languages it’s not uncommon for different string lengths to create new design problems. Text you imagined would fit in space X no longer does in Greek or Italian etc.

There are a number of fully featured JS-powered translation libraries out there. This won’t be one of them!

I wanted to make something very simple. This is the kind of solution that is bread and butter for any seasoned JavaScript capable developer but I wanted to document a possible solution for the less experienced.

The requirements are minor; the ability to swap out particular strings of text in my prototype based on the language chosen from a drop-down.

Here is how we can do that.

If any seasoned JS devs can point out any obvious ways to improve this, I am all ears. A comment or tweet will be appreciated.

The HTML side of things

First of all, we will make our language selector with a standard select element. The internal option elements are going to be populated by JavaScript. You can provide whatever id you like for the select.

<select name="language" class="mb-POCControls_Selector" id="mbPOCControlsLangDrop">

Then at any place you would like to swap some text in a node, add the attribute data-mlr-text. For example:

<span data-mlr-text>Sausages</span>

OK, that’s all you need to do in the HTML, everything else will be JavaScript.


We are going to have a main function called mlr. We are going to invoke this function with a number of options like this:

mlr({
    dropID: "mbPOCControlsLangDrop",
    stringAttribute: "data-mlr-text",
    chosenLang: "English",
    mLstrings: MLstrings,
    countryCodes: true,
    countryCodeData: mlCodes,
});

Here is an explanation of the options we are going to need:

  • dropID – the id of the select that will be used to choose languages (whatever you set in the HTML)
  • stringAttribute – the attribute name you will add to the nodes you want to switch the text on. Leave this as is unless you have a compelling reason to change it. In our example we used data-mlr-text
  • chosenLang is the initial language you want each string rendered in. The string you pass in should match the string in the language files (case sensitive)
  • mLstrings is the name of the variable/array you have all your translations stored in
  • countryCodes is an optional boolean value for if we want the lang attribute on the HTML element updating too
  • countryCodeData is an optional variable for an array containing country codes for your chosen language

Language files

To store our translations for each string, we can use an array of objects. Each object contains a relevant string in each of the languages to switch between. For example:

var MLstrings = [
    {
        English: "Sausages",
        Polish: "Kiełbaski",
        German: "Würste",
        Russian: "Колбасные изделия",
        Traditional: "香腸",
    },
    {
        English: "Carrot",
        Polish: "Marchewka",
        German: "Karotte",
        Russian: "Морковь",
        Traditional: "胡蘿蔔",
    },
];
You would add another object for each of the strings you need to switch out. I have added only two here to keep things simple.

I tend to keep these strings in a separate file for convenience and ensure that file is loaded before the ‘meat and potatoes’ of the code we are about to write. I tend to use TypeScript these days and a triple-slash import works well here. For example: /// <reference path="ts/mlStrings.ts" />

Country Codes

The two character language codes are defined in ISO 639–1. If you also want to update your HTML tag with the correct code for the chosen language, you need to include a file containing the relevant codes for the languages you are translating between. It should be formatted like this:

var mlCodes = [
  {
    code: "bg",
    name: "Bulgarian",
  },
  // More country codes
];
Note that the casing of the country names should match the casing in your translation data.
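A sketch of how such a file can be used to look up the 2-letter code for the active language (the function name codeForLanguage is my own, for illustration; it assumes entries shaped like { code: "bg", name: "Bulgarian" } and that the name casing matches your translation keys, as noted above):

```javascript
var mlCodes = [
    { code: "bg", name: "Bulgarian" },
    { code: "pl", name: "Polish" },
];

// Find the entry whose name matches the chosen language and return its code.
function codeForLanguage(langName) {
    var match = mlCodes.find(function(entry) {
        return entry.name === langName;
    });
    return match ? match.code : undefined;
}

codeForLanguage("Polish"); // → "pl"
```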

Our string replacement function

Now let’s look at the main functions that will drive our functionality. We will step through it all afterwards.

// Global var :(
var mlrLangInUse;

var mlr = function({
    dropID = "mbPOCControlsLangDrop",
    stringAttribute = "data-mlr-text",
    chosenLang = "English",
    mLstrings = MLstrings,
    countryCodes = false,
    countryCodeData = [],
} = {}) {
    const root = document.documentElement;

    var listOfLanguages = Object.keys(mLstrings[0]);
    mlrLangInUse = chosenLang;

    (function createMLDrop() {
        var mbPOCControlsLangDrop = document.getElementById(dropID);
        // Reset the menu
        mbPOCControlsLangDrop.innerHTML = "";
        // Now build the options
        listOfLanguages.forEach((lang, langidx) => {
            let HTMLoption = document.createElement("option");
            HTMLoption.value = lang;
            HTMLoption.textContent = lang;
            if (lang === chosenLang) {
                mbPOCControlsLangDrop.value = lang;
        mbPOCControlsLangDrop.addEventListener("change", function(e) {
            mlrLangInUse = mbPOCControlsLangDrop[mbPOCControlsLangDrop.selectedIndex].value;
            // Here we update the 2-digit lang attribute if required
            if (countryCodes === true) {
                if (!Array.isArray(countryCodeData) || !countryCodeData.length) {
                    console.warn("Cannot access strings for language codes");
                root.setAttribute("lang", updateCountryCodeOnHTML().code);

    function updateCountryCodeOnHTML() {
        return countryCodeData.find(this2Digit => === mlrLangInUse);

    function resolveAllMLStrings() {
        let stringsToBeResolved = document.querySelectorAll(`[${stringAttribute}]`);
        stringsToBeResolved.forEach(stringToBeResolved => {
            let originaltextContent = stringToBeResolved.textContent;
            let resolvedText = resolveMLString(originaltextContent, mLstrings);
            stringToBeResolved.textContent = resolvedText;

function resolveMLString(stringToBeResolved, mLstrings) {
    var matchingStringIndex = mLstrings.find(function(stringObj) {
        // Create an array of the objects values:
        let stringValues = Object.values(stringObj);
        // Now return if we can find that string anywhere in there
        return stringValues.includes(stringToBeResolved);
    if (matchingStringIndex) {
        return matchingStringIndex[mlrLangInUse];
    } else {
        // If we don't have a match in our language strings, return the original
        return stringToBeResolved;

I’ve got a globally accessible variable at the outset, mlrLangInUse. I’ve namespaced it with mlr to avoid conflicts but I’m still not massively happy about it. However, I couldn’t find an elegant away around it. I need the value that variable stores to be accessible to the resolveMLString function which can be called independently. More on that function shortly.

The mlr function starts by ‘de-structuring’ the options. This is the ES6 style of setting defaults for an options object, with the value to the right of the = sign being the default setting for each option. Note that the options object itself gets a default too at the end, with the = {} part. Best explanation I came across of de-structuring was at
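The de-structuring pattern can be seen in miniature with a generic illustration (not the post's code), including the = {} default for the options object itself so the function can be called with no arguments at all:

```javascript
// Each option gets a default to the right of the = sign; the trailing
// `= {}` gives the whole options object a default of an empty object.
function greet({ name = "world", punctuation = "!" } = {}) {
    return "hello " + name + punctuation;
}

greet();                // → "hello world!"
greet({ name: "Ben" }); // → "hello Ben!"
```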

Next we grab the html element in case we want to update the lang attribute. Then we create an array of the possible languages. We do this by looking at the first object in mLstrings and using the Object.keys() method to create an array from the keys in the object.
We also set the language in use in the globally accessible mlrLangInUse variable. This is initially set to the string passed in for chosenLang.

const root = document.documentElement;

var listOfLanguages = Object.keys(mLstrings[0]);
mlrLangInUse = chosenLang;

Next we create the drop-down options inside an immediately invoked function (note the double brackets at the end).
Each option is appended to the select element we specified in the options (the id was mbPOCControlsLangDrop in our example).

With the options created, we then add an event listener to the select element and use this listener to update things on the change event.

Let’s cover what we do each time there is a change on the select element.

mlrLangInUse = mbPOCControlsLangDrop[mbPOCControlsLangDrop.selectedIndex].value;
resolveAllMLStrings();
// Here we update the 2-digit lang attribute if required
if (countryCodes === true) {
    if (!Array.isArray(countryCodeData) || !countryCodeData.length) {
        console.warn("Cannot access strings for language codes");
        return;
    }
    root.setAttribute("lang", updateCountryCodeOnHTML().code);
}

The first line in the above code re-assigns a value to mlrLangInUse. If you remember, this was set at the outset by the options. We want to update this to the value of the language assigned to the option in the drop-down. selectedIndex is part of the Web API and lets you get at the chosen option from a drop-down. With that we can then read the value of that option.

Next we fire the resolveAllMLStrings() function. But before we get into that, let’s go to the next lines where we also set the lang attribute of the HTML element. First we check if countryCodes is true. If not, we are done. If it is true, we next check if we have any data in the array of countryCodeData or if it even exists. If not, we send a warning to the console and return. Otherwise we go ahead and set the language attribute by running a one line function (updateCountryCodeOnHTML) that returns the object from the countryCodeData array that matches the currently selected language. The .code part just uses the relevant country code part of the retrieved object. For example, if ‘Czech’ was selected I would be retrieving:

{
  code: "cs",
  name: "Czech",
}

So the code part would be ‘cs’.

OK, back to the resolveAllMLStrings function. First of all it goes and finds all the strings in the HTML that have the stringAttribute we defined in the options. In our example this would be all “data-mlr-text” attributes. Then it runs a function on each of them, taking the current textContent and assigning it to originaltextContent and using that to set what the new text will be. We do this by passing the original text to the resolveMLString function along with the strings we want to search within (mLstrings). We then set the textContent of the element to be the new string.

Right, the final thing we need to look at is the resolveMLString function. Unless someone can offer an alternative, I think this function has to be external to our main mlr function so that it can be called outside of mlr. For example, if elsewhere we want to set the text of an element with JavaScript, we need to call this function without re-instantiating the mlr function (which would in turn require passing in all the necessary options). I’d welcome any feedback here if there is a better way to handle this. However, at present it is possible to call resolveMLString("Sausages", MLstrings) elsewhere and be returned the correct string based on the language currently set in the global mlrLangInUse variable.

In the absence of a better alternative, let’s consider what resolveMLString actually does.

We want to return a translated string if we have one, otherwise return the original string that was sent in. But how do we actually search for that string? We use the find() method of Array on our mLstrings; inside the callback we assign the values of each object of strings to an array, stringValues. For example, stringValues might look like ["Sausages","Kiełbaski","Würste","Колбасные изделия","香腸"]. We can then use the includes() array method on this new array to return true if the array contains the string we are interested in. Now we have found the correct object from our main mLstrings array, we can return the correct string because we know the language we are interested in. Phew!

To exemplify: had we chosen Polish as our language and wanted “Carrot” in that language, we could pick it from this data easily:

    {
        English: "Carrot",
        Polish: "Marchewka",
        German: "Karotte",
        Russian: "Морковь",
        Traditional: "胡蘿蔔",
    },

That is where we use matchingStringIndex[mlrLangInUse]; otherwise we would just return the original string.
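Pulling those pieces together, here is a minimal sketch of how resolveMLString could look. The names resolveMLString, mLstrings and mlrLangInUse come from the post; the exact body is my assumption of the approach described:

```javascript
// The global holding the currently chosen language (as described in the post)
var mlrLangInUse = "Polish";

// A single-record version of the multilingual strings data
var mLstrings = [
  {
    English: "Carrot",
    Polish: "Marchewka",
    German: "Karotte",
    Russian: "Морковь",
    Traditional: "胡蘿蔔",
  },
];

// Return the translation for the current language, or the original string
// if we don't hold a translation for it
function resolveMLString(string, mLstrings) {
  var matchingStringIndex = mLstrings.find(function (strings) {
    // Gather every translation in this record into a flat array of values…
    var stringValues = Object.keys(strings).map(function (key) {
      return strings[key];
    });
    // …and check whether the string we were passed is among them
    return stringValues.includes(string);
  });
  return matchingStringIndex ? matchingStringIndex[mlrLangInUse] : string;
}
```

With mlrLangInUse set to "Polish", resolveMLString("Carrot", mLstrings) hands back "Marchewka", while an unknown string comes back unchanged.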


OK, it might not look like much but here is an example of our efforts!

See the Pen Simple Language Picker by Ben Frain (@benfrain) on CodePen.

I’ll be honest, I found this blog post quite difficult. Not because the solution is anything fancy but because sometimes it is harder to explain code than it is to read it.
The rub however is that if you can’t read the code/language in the first place it’s hard to understand it! That’s the position I find myself in constantly when trying to learn JavaScript.
I hope any less experienced JavaScript developers found some of this useful.
For the more experienced devs, I’m always happy to hear how any of this could be improved.

HTML templating with vanilla JavaScript ES2015 Template Literals Thu, 25 Jan 2018 14:42:58 +0000 I needed to prototype something recently by creating output HTML from a modest set of data. I wanted to avoid extra dependencies such as Handlebars and managed to get the job done using ES2015 Template Literals.

They are more powerful than I first thought so, I wanted to take the time to document what I discovered.

Let’s look at the power of Template Literals and how we can perform nested loops inside them.

Existing documentation

I found a few posts and Stack Overflow threads on the subject of Template Literals. Of note were:

I’m not going to cover everything in the prior posts. Instead I want to limit our time here to showing you how you can create HTML using Template Literals based on the following kind of data. Note, this is entirely contrived data so don’t get too hung up on the content!

var data = [
  {
    Title: "The OA",
    Ended: false,
    Description:
      "Started with promise but then you watch the final episode and realise it is tosh",
    Episodes: [
      "New Colossus",
      "Forking Paths",
      "Empire of Light",
      "Invisible Self"
    ],
    Reviews: [
      { "Rotten Potatos": 69 },
      { "Winge Central": 48 },
      { "Fan Base Fanatics": 62 },
    ],
    UserRatings: [7,3,7,8,9,2,4,5,7,6],
  },
  {
    Title: "Lost",
    Ended: true,
    Description:
      "The instigator of the whole, 'keep them guessing and we'll keep making stuff up as we go' style drama",
    Episodes: [
      "Pilot (Part 1)",
      "Pilot (Part 2)",
      "Tabula Rasa",
      "White Rabbit",
      "House of the Rising Sun",
      "The Moth",
      "Confidence Man",
    ],
    Reviews: [
      { "Rotten Potatos": 62 },
      { "Winge Central": 46 },
      { "Fan Base Fanatics": 72 },
    ],
    UserRatings: [2,8,8,8,7,6,7,8,10,6],
  },
];

In the above JavaScript object I have 2 ‘records’ of a data set. For each record I want to output some HTML to produce something simple like this:

See the Pen Playing with ES2015 Template Literals by Ben Frain (@benfrain) on CodePen.


We are going to achieve this by not just stamping out the data as is, but also leveraging the ability of Template Literals to do if/else switching and also calling out to other functions to do ‘stuff’ such as averaging a bunch of numbers.

Let’s begin.

Template Literals basics

Until ES2015, if you wanted to create a string in JavaScript from distinct pieces, for example, an existing string plus a number, it was a little messy. For example:

var result = 10;
var prefix = "the first double digit number I learnt was ";
var assembled = prefix + result.toString();
console.log(assembled); // logs => 'the first double digit number I learnt was 10'

ES2015 introduced ‘Template Literals’. They allow us to do this instead:

var result = 10;
var assembled = `the first double digit number I learnt was ${result}`;
console.log(assembled); // logs => 'the first double digit number I learnt was 10'

The back ticks define the contents as a template literal and then we can interpolate values from elsewhere in JavaScript using the ${} pattern. Anything inside the curly braces, which in turn are within the backticks, is evaluated and then inserted as a string.

In order to generate our HTML for each data record, as our data is an Array, we will start with a forEach.

var data = [
  // Data here
];

// loop through the data
data.forEach((datarecord, idx) => {
  // for each record we call out to a function to create the template
  let markup = createSeries(datarecord, idx);
  // We make a div to contain the resultant string
  let container = document.createElement("div");
  // We make the contents of the container be the result of the function
  container.innerHTML = markup;
  // Append the created markup to the DOM
  document.body.appendChild(container);
});

function createSeries(datarecord, idx) {
  return `
    <div class="a-Series_Title">${datarecord.Title}</div>
  `;
}

I opted to call out to another function to generate the markup. The createSeries function simply receives data and returns a string (the completed template) with all the blanks filled in from the data.

Here is a fuller initial version of the template:

function createSeries(datarecord, idx) {
  return `
    <h2 class="a-Series_Title">${datarecord.Title}</h2>
    <p class="a-Series_Description">
      <span class="a-Series_DescriptionHeader">Description: </span>${datarecord.Description}
    </p>
    <div class="a-EpisodeBlock">
      <h4 class="a-EpisodeBlock_Title">First episodes</h4>
    </div>
  `;
}

With that in place, we get each record title wrapped inside our div. That’s the majority of what you need to know to use template literals for templates.

Looping within a Template Literal

However, I am now at the point where I need to loop within my Template Literal. I have an array inside my Episodes key that I want to iterate through so I can print out the names of each episode of a season. Thankfully, we can use the map method to do this. I’ve talked about map before but, to recap, it creates a new array which is the result of mapping a function onto each element in the existing array. In terms of our template, let’s pick up where we left off above and use map to spit out our episodes. The syntax of the map, inside an existing template literal, will look like this:

${, idx) => `Your looping code here with data inserted like ${this}`)}

So, in our template, I’m adding it like this:

function createSeries(datarecord, idx) {
  return `
    <h2 class="a-Series_Title">${datarecord.Title}</h2>
    <p class="a-Series_Description">
      <span class="a-Series_DescriptionHeader">Description: </span>${datarecord.Description}
    </p>
    <div class="a-EpisodeBlock">
      <h4 class="a-EpisodeBlock_Title">First episodes</h4>

      ${, index) =>
        `<div class="a-EpisodeBlock_Episode">
            <b class="">${index + 1}</b>
            <span class="">${episode}</span>
        </div>`
      )}
    </div>
  `;
}

That’s great, all printing out spot on! Oh, wait! What’s with the comma after each episode title?

1 Homecoming
2 New Colossus
3 Champion
4 Away
5 Paradise
6 Forking Paths
7 Empire of Light
8 Invisible Self

It turns out that when the array returned by map is interpolated into the string, its contents are joined together with a comma by default (the standard Array-to-string conversion). Thank goodness for Stack Overflow! We can join the looped items with nothing instead by appending .join("") at the end of the map method, for example:

${, index) =>
  `<div class="a-EpisodeBlock_Episode">
      <b class="">${index + 1}</b>
      <span class="">${episode}</span>
  </div>`
).join("")}

That sorts that issue!

Now, the data for our reviews is in a different ‘shape’ as they are keys within objects but we can still use them in the Template Literal with a map like this:

${ =>
  `<div class="a-ReviewsBlock_Episode">
      <b class="a-ReviewsBlock_Reviewer">${Object.keys(review)[0]}</b>
      <span class="a-ReviewsBlock_Score">${review[Object.keys(review)[0]]}%</span>
  </div>`
).join("")}

Our next challenge is how to perform if/else logic inside the template.

How to perform if/else switches with ternary operators

Inside our template literal I want to add a ‘More to come!’ message if the series is still in production. We will use the Ended boolean inside the data as our ‘test’ to show this or not and we will employ a ternary operator to perform the test. If you don’t know what a ternary operator is, it is a concise form of if/else statement. It looks like this:

thingToTest === true ? doThisIfTrue : otherWiseDoThis;

So, the first part is the ‘test’, the if. Then after that test is the ? which is where we then specify the code to run if our test is true. Then there is the : which is the break for the else part of the logic. We run the code after that if the test result is false. You know, it’s probably easier to just show you how this would work in our template:

${datarecord.Ended === true ? `` : `<div class="a-Series_More">More to come!</div>`}

So, in this case, if that key is true, we are returning an empty string, otherwise we return the message. Again this is inside an existing template literal.
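Pulled out of the template for illustration, with two trimmed-down records mirroring the Ended flag in the data above (the moreToCome wrapper is mine, purely so both branches can be exercised):

```javascript
// Minimal records mirroring the Ended key of the data set
const ongoing = { Title: "The OA", Ended: false };
const finished = { Title: "Lost", Ended: true };

// The same ternary as in the template, wrapped in a function
const moreToCome = (rec) =>
  `${rec.Ended === true ? `` : `<div class="a-Series_More">More to come!</div>`}`;
```

moreToCome(finished) gives an empty string; moreToCome(ongoing) gives the ‘More to come!’ div.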

Using the returned value of another function

With a template literal it is possible to render part of the resultant string by way of another function. So, in our example, suppose we want to render out an average of the scores received by users. That data is held in the UserRatings array which looks like this:

UserRatings: [2,8,8,8,7,6,7,8,10,6],

We will create a couple of functions to sum the array and then divide by the length of the array to get an average:

const add = (a, b) => a + b;
function getRatingsAverage(ratings) {
  return ratings.reduce(add) / ratings.length;
}

Now to use this inside our template we can do this:

`<div class="a-UserRating">Average user rating: <b class="a-UserRating_Score">${getRatingsAverage(datarecord.UserRatings)}</b></div>`
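As a quick sanity check outside the template, we can run the helper against the Lost UserRatings from the data above (they sum to 70 across 10 votes):

```javascript
const add = (a, b) => a + b;
function getRatingsAverage(ratings) {
  return ratings.reduce(add) / ratings.length;
}

// The Lost ratings: 70 in total over 10 user votes
const average = getRatingsAverage([2, 8, 8, 8, 7, 6, 7, 8, 10, 6]);
// average is 7
```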


As stated at the top, I’ve been using this technique for local prototyping. However, since publishing, a few people, such as Tommy Carlier below, have pointed out some inherent security concerns if you are considering using this kind of approach in production or anywhere you can’t guarantee the safety of the input data.

For example, suppose we were going to accept user input from an input element to interpolate into our template. In that instance there is nothing to stop somebody adding some malicious script. Imagine here the username has been input from a form (thanks to Craig Francis for the example):

var username = 'craig<script>alert("XSS")</' + 'script>';
document.write(`<p>Hi ${username}</p>`);

The solution here is to ‘escape’ the HTML, or use the textContent property, to ensure the inserted strings are treated as plain text rather than markup. If that’s a scenario you find yourself in, there is a good section in the Google documentation on Template Literals here:
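As a sketch of that escaping approach: escapeHTML here is a hypothetical helper of my own, not something from the post or a library, replacing the characters HTML treats as special:

```javascript
// Hypothetical helper: turn characters with meaning in HTML into entities
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

var username = 'craig<script>alert("XSS")</' + 'script>';
// The interpolated value is now inert text rather than runnable markup
var markup = `<p>Hi ${escapeHTML(username)}</p>`;
```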

Justin Fagnani, who works on Polymer at Google, also pointed me to their ‘lit-html’ library. That takes care of these issues too and currently weighs in at 2.2Kb so I’d definitely look at that in future if heading down that path.


Template Literals afford a lot of power with no library overhead. I will definitely continue to use them when the complexity of Handlebars or similar is overkill.

The only downsides I found related to developer ergonomics. Prettier currently mashes up the contents of Template Literals pretty badly. I’m confident that will get addressed but right now it’s not a great experience; it can be quite gnarly trying to spot which tags start and end where.

Also, syntax highlighting of markup inside the template strings isn’t great (in Sublime or VS Code at least). Like the Prettier grumble, this doesn’t affect output but makes the experience less pleasurable.

Ultimately I’ve been pleasantly surprised at the versatility of Template Literals and hope this post provides enough curiosity for you to give them a whirl yourself.

Can CSS Custom Properties update animation durations on the fly? Mon, 01 Jan 2018 22:51:37 +0000 I’ve been quite slow to do any real experimentation with CSS Custom Properties. Unless I’m researching a book or solving a specific problem, these days I tend to let new language features pan out a little, allowing support to get a little deeper before diving in.

However, recently, I had a couple of problems to solve that seemed a perfect fit for CSS Custom Properties. The first requirement was a textbook use case for Custom Properties; re-theming with CSS, and it passed with flying colours. The other situation, I’m left wondering if I’m simply missing something. There are further info/thoughts after the summary so I’d love to have your opinion in the comments.

Before we get to the issue, I’m going to very quickly cover CSS Custom Properties, in case you are even later than me to using them.

The TL;DR for Custom Properties is that they are a way to define a value (a string, essentially) and then either use it ‘as is’ throughout your stylesheets, redefine the property further down the cascade, or even set the value via interaction with JavaScript. These values get re-evaluated on the fly so instantly update on the page.

To exemplify those cases, here’s how you set a Custom Property on the root of the stylesheet:

:root {
    --sausages: #544021;
}

You use a double dash prior to the custom property name and then assign a value to the property.

This is how you would use that custom property somewhere in your stylesheet:

.thing {
    color: var(--sausages); /* #544021 */
}

The var() being the key thing here. This is how you communicate you want to use a custom property. If you wanted to redefine that same variable down the cascade, you might do this:

.oven {
    --sausages: #3a3021;
}

And then if we had something else that lived inside that amended definition it would inherit that value. For example:

.thing-in-oven {
    color: var(--sausages); /* #3a3021 */
}

Finally, you can also set these vars directly with a simple JavaScript API.

var thing = document.querySelector(".thing");"--sausages", "#2b2722");

So, var() when you want to use one, and set them with a double hyphen before the custom property name.

Now, there are great posts out there on doing all this fun stuff and far more already. I’d recommend Serg’s over at Smashing.

My problem

As Custom Properties are re-evaluated as they are changed (including all instances of them inside calc expressions) I thought they would be perfect for speed-ramping an animation mid-animation. As far as I can tell, they are not!

In case the term ‘speed ramping’ is alien to you, it’s where you take something in motion and warp the timing mid-execution. Think ‘Bullet Time’ from The Matrix.

Imagine an animation looking down on a car as it travels left to right across the page. Once it has accelerated to a constant speed we want the ability to speed it up or slow it down by reducing or increasing the animation duration. If the animation duration gets longer the car needs to slow down. If the animation duration decreases then the animation should speed up to compensate for the now shorter duration it has to complete in.

We want this to happen as the input arrives using JavaScript to set the new value to the Custom Property.

Here is my basic reduction of that principle. Granted, it’s a pretty basic car. Imagine it’s a Lada from the 80s and it won’t seem so bad (yes, it is just an orange square).

See the Pen LeRMvm by Ben Frain (@benfrain) on CodePen.

Here’s the code in that Pen in case I have done something obviously stupid:

<div class="thing"></div>
<button>Add 1s to animation duration</button>

:root {
    --duration: 6s;
}

.thing {
    width: 40px;
    height: 40px;
    background-color: #f90;
    animation-name: move;
    animation-direction: alternate;
    animation-iteration-count: infinite;
    animation-duration: var(--duration);
    animation-timing-function: linear;
    animation-fill-mode: both;
}

@keyframes move {
    0% {
        transform: none;
    }
    100% {
        transform: translateX(100vw);
    }
}

var button = document.querySelector("button");
var root = document.documentElement;
var time = 6;
button.addEventListener("click", function(e) {"--duration", `${++time}s`);
});
Here is what I see in different browsers:

Chrome (Version 63.0.3239.84)

It does slow down but there is a ‘jump’ as the square repositions.

Safari (Version 11.0 12604.

The custom property gets updated in the DOM as the button is clicked but the change doesn’t get applied until you switch tabs or switch focus away from or back to Safari.

Firefox (Version 57.0.1)

Same as Chrome, it does slow down on each click but there is the same ‘jump’.


These are the kind of issues that usually have me reaching for an animation library like Greensock or Velocity to smooth out. I thought CSS Custom Properties could solve this kind of issue.

Either they can and I’m doing it wrong. Or they can’t and that’s pretty disappointing.

Anyone know for sure?

Update 2.1.17

Bramus offered this explanation in the comments below:

The “jump” you mention seems logical and expected behavior to me. If you move something (linearly) over a distance of 1000px using a duration of 1 second, then at 0.5s (or 50% of the time when compared against the 1s set) it’ll be at 50% of the distance (500px in this case). If you – at that very moment – extend the duration so that it becomes 2 seconds, the element – still at 0.5s in its animation loop – will be positioned at 25% (or 250px) since 0.5/2 = 0.25.

That certainly explains what I am seeing in Chrome and Firefox but now presents a bigger question.

Is this the most likely/beneficial implementation in this scenario?

If a user is changing the duration value, wouldn’t it be reasonable to expect the speed the item is travelling to change and not the position?

To this end I have asked the question via a couple of bug reports:

CSS Environment variables; how to deal with the software bezel of iPhone X Wed, 15 Nov 2017 15:12:45 +0000 Like many a web developer, I’ve found myself tweaking things of late for the iPhone X.

Personally, I don’t see the attraction with rounded screen corners. However, popular handsets like the Samsung S8, Google Pixel 2XL and LG G6 have them and now, the iPhone X too. So, tough luck web-developers, here’s a new constraint we need to consider.

There are posts out there already about this. Here’s one I read that set me on the right path: I’m going to go over some of the same content from Stephen’s post, purely for completeness but you should certainly credit him with any findings common in both posts.

So, let me define the problem. Suppose we have a floating action button fixed at the bottom of our page. By default, once we scroll a little on an iPhone X it looks like this:

Home indicator obscuring button

See the home indicator bar going right through the orange button? Due to the iPhone X’s home indicator, that’s no longer ideal as a hit area so we want to create some extra space so our button is still ‘hittable’.

At this point I’d forgive you for thinking that the iPhone X has merely removed the hardware bezel and forced us to adopt a software bezel!

Regardless, in the case of the iPhone X there is a home indicator bar to accommodate. Future phones – who knows! Point is, we don’t want to design in extra space around that button all the time, only when the device requires it for some omniscient software UI that sits above the browser level.

Initially, I believed that I would be forced into some awful combination of screen measurements and UA sniffing in JavaScript to create a forking point for the iPhone X. However, it turns out that there is, for the most part, a more elegant solution. Let’s take a look.

First of all, it is worth being clear that if you do nothing for the iPhone X, things will still function. There will be the usual dance with the Safari menu bar chrome (in portrait mode) but users can still get by. So, you don’t NEED to do anything. Any existing site will be perfectly functional. But you can do something should you choose to.

To accommodate tweaking, Apple has introduced viewport-fit=cover (from the CSS round display specification) and a few other goodies. However, unlike the afore-linked specification, you can’t use this setting with the @viewport at-rule like this:

@viewport { viewport-fit: cover;}

That does naff-all. In Safari at least. I wonder what that does on a Samsung S8; anyone?
Instead you need to add this new setting as an addition to the viewport meta tag so you should have something like this as your viewport meta tag:

<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no, viewport-fit=cover">

With that in place, Safari on iPhone X can extend the content of a page into the sides where the ‘Batman Cowl’ sits when viewing your page in landscape. I’ve switched to landscape here as it is a better demonstration of the capabilities:

Landscape with the view

Which means that you could now have your standard website background edge to edge. By default, you would only get the background-color of the body tag extending into the side areas but with viewport-fit=cover you can have background images/patterns too. However, this creates another problem. How do we stop the main content entering the weird Batman Cowl areas on an iPhone X? Thank goodness, Apple have considered this.

Apple have borrowed an idea from broadcast TV. In TV land there has long been the notion of a ‘Title Safe’ area. This is an area inset within the main part of the screen inside which titles are set. Keeping them in this area meant that titles wouldn’t be clipped when TV/film had its aspect ratio changed or broadcasters, for whatever reason, needed to change the dimensions of the original source. Apple has taken this basic principle and renamed it ‘Safe Area’ for their purposes.

How to apply the browser’s ‘Safe Area’ as a value

Safari has exposed env as an environment-specific variable in CSS, env being short for ‘Environment variable’. You can find more about its origin in this discussion on the W3C GitHub issues: Environment variables could be used for many things but the values we will look at next are the first to be usable thanks to Apple needing a solution to the iPhone X display.

Initially, Environment variables have been created so that any user agent can have its own value for safe-area-inset-top, safe-area-inset-right, safe-area-inset-bottom and safe-area-inset-left and so, by using these keywords in our CSS, the distances should always be correct for the device/browser using them.

To exemplify, suppose the iPhone XI (because Apple’s numbering system makes little sense, I’m extending the Roman numerals nonsense) comes along and has a safe area of 60px at the main axis start, 40px at the cross axis start and end, and 100px at the main axis end. If you wrote:

body { padding: env(safe-area-inset-top) env(safe-area-inset-right) env(safe-area-inset-bottom) env(safe-area-inset-left);}

That would evaluate on the fictional iPhone XI handset to padding: 60px 40px 100px, whilst it would probably evaluate to something different on a different phone with its own, or no, environment variable mappings.

Because CSS is so forgiving, browsers that don’t support env would simply skip the declaration containing it and move to the next.

So, to fix our prior landscape example for iOS 11.2 onwards, we could set padding on the body like the prior example and we would see this:

Env in action

Fine, although my original problem was to do with fixing my floating action button. Let’s do that now.

The problem at hand

I can use that same environment variable to add padding/margin to the bottom of my floating action button.

To actually set these values in live code you need to know that there was a change of name for env that occurred before iOS 11.2. It used to be known as constant so if I want to ensure iOS 11.X prior to 11.2 is also covered, an extra line of code is necessary:

/*Default padding*/
padding-bottom: 0;
/*iOS < 11.2*/
padding-bottom: constant(safe-area-inset-bottom);
/*iOS 11.2 onwards*/
padding-bottom: env(safe-area-inset-bottom);

With that in place, I get this when I scroll down the page a little:

Oh No iOS11

Great, right? Hmmmm.

You can use env() just like you would any var() so you can also wrap it in calc and the like should you need e.g. transform: translate3d(0, calc(50px + env(safe-area-inset-bottom)), 0).

A problem remains with iOS 11 and fixed position elements in landscape

We still have an issue. Switching to landscape, and scrolling up the page on iOS11, the Safari menu bar shifts away and we are left with this:

This doesn’t happen in iOS10 or iOS 9

At first I had assumed this was something to do with the env(safe-area-*) stuff but alas it seems there is actually a bug in Safari on iOS11 relating to fixed position elements in landscape orientation.

I’ve opened a bug for this on the Apple bug tracker as I’m fairly confident this can’t be the intended behaviour. The problem wasn’t there in iOS 10 or iOS 9.2.1.


Some manner for platforms to communicate device-specific things to the browser is a good thing. The fact that Environment variables are now a standard, albeit a very sparsely implemented one, is a good thing too (they should be in Level 2 of CSS Custom Properties).

The ‘merits’ of the iPhone X’s idiosyncrasies are up for debate. Regardless, the task of dealing with unusual screen shapes is upon us.

Safari still doesn’t communicate anything, JavaScript-wise, when its browser chrome pops in and out. This is a major pain point for web developers. Such a provision would at least allow developers to re-organise visuals in a non-hacky manner when Safari constantly ‘moves the cheese’ of the viewport chrome as a user switches orientation and/or scrolls up and down the page. I still despise the way Safari does this! It’s hard not to conclude that Apple expects users to use Safari for web pages only (not apps) and that it believes app-like functionality should remain the domain of Applications in the App Store.

However, I don’t see the rest of the world (especially outside of US/UK) following that pattern. More and more affordable handsets are loaded with Android, and the user base is growing steadily compared to iOS.

And Android embraces the web as an emerging application platform.

A manifesto for working in teams Mon, 21 Aug 2017 15:34:45 +0000 I’ve worked in a few teams over the years and generally enjoyed those different working relationships.

However, working in a team isn’t always plain-sailing. When working relationships in teams turn sour and there is a pervading sense of mistrust, dissent and ill-feeling, there are usually patterns and clues to the problems which I now recognise.

Sometimes these patterns reveal a team breakdown due to situations beyond any individual’s ability to change. For example, a financially insecure company, a boss with dire people skills or a generally oppressive working culture.

However, I’ve found that it’s equally likely to be down to individuals.

For example, I’ve found it common that the warped point of view of one can subsequently be expounded to become the point of view of many. Previously happy team members can become unhappy team members when nothing has materially changed. It’s the workplace equivalent of poison being poured in others’ ears.

With some reflection and discussion, team relationships in this situation can occasionally be fixed and become fruitful once more. Sometimes, sadly, individual personalities and egos can be so toxic and ‘out of whack’ with others that wasting time trying to ‘fix’ an individual can be futile. Subsequently not dealing with the rot of an individual in a team can destroy the fabric of the whole.

What follows are some meandering thoughts about how you and I can behave and what I feel is healthy in a team. In many ways, this is also a set of notes to my former self, as I’ve certainly been guilty of some of these shortcomings over the years. I’m as much a ‘work in progress’ as the next person; as my team mates will testify!

You’re not the smartest person

This might shock you but you are not the smartest person in the team. Nor are you indispensable.

As such, be mindful of others’ skill sets, even if they are not immediately obvious to you.

Do not be disparaging of anyone else’s input; it likely comes from a place of unique insight.

Note: if you really are the smartest person in the team, in every way, consider moving to a more challenging team/company or leading the team you are in.

Learn from those around you

As you are likely not the smartest person in the team, try to talk less and listen more. I’m reminded at this point of the following:

A wise old Owl sat in an Oak,
The more he saw, the less he spoke,
The less he spoke, the more he heard,
Why can’t we all be like that wise old bird?

Put another way; consider that you have two ears but only one mouth.

If in every meeting you are the only one doing the talking, perhaps the problem isn’t that everyone else has nothing to say; more likely you aren’t letting others express themselves.

Be candid

I’ve never found a good way of articulating how to be honest with co-workers without being hurtful. You can do it with people when they trust you but everyone trusts different people at different rates. However, Ed Catmull, president of Pixar Animation Studios, nails it in the book, ‘Creativity Inc’. It’s about being candid. Be straightforward and honest but in a generous way.

Never try and score points or put others down. That is immature behaviour.

Give your honest, candid opinion with the sole aim of improving things/situations.

People should in turn learn to reciprocate by taking candid feedback in the manner it is intended.

To this end, don’t get moody because someone didn’t like your work. Listen to what they felt the problem with it was. If they are giving good candid advice, shut up, get over yourself and listen to what they are telling you.

Lose your ego

If you want to be right more often, consider changing your mind.

Be ready to change opinion when new ideas and possibilities surface. If you are working with others on a problem, have a selfless attitude and recognise when a better possibility has presented itself.

So you felt your super-sliding menu was the best thing ever? Big deal, your colleague just came up with a better idea. You know it and so does everyone else, so just let go and work towards the best solution your team can come up with, not you individually. The outcome from the team should trump the satisfaction of any individual’s ego.

Respect the time of others

In an office environment, respect your colleagues’ time. You may be bored and in need of light relief, but they may be stressed in the midst of the mother of all problems.


The unwritten rule at most creative workplaces is that if someone has their headphones on, think twice about disturbing them about anything other than immediately important work related matters.

To hit this point on the nose: that hilarious video/gif/meme can wait and the spat you just had with your other half isn’t that important to anyone other than you.


In relation, if you are in a large open office, keep ‘banter’ to a minimum. If possible, take it to the coffee area. We are all louder than we think and whatever it is that is hilarious to you right now, is an unwelcome disturbance for someone else.

Solve your own problems when you can

Don’t get answers from others if you can figure out a problem by yourself. If you get the same person to fix the same kind of issues every single time, you likely aren’t learning all you should.
That’s not to say you shouldn’t seek help once you have spent a reasonable amount of time on an issue; just ensure you are doing due diligence to address the issue yourself first.

Don’t leave problems for others

If you are in the business of solving problems, whether that be in code, design or some other engineering context, don’t leave problems for others to rediscover.
If you have to jump ship on a project, be sure to communicate in the clearest possible way that you knew of the problem, what you have considered in relation to it and the probable scope of the issue. People will appreciate that far more than you pretending that the problem didn’t exist or that you were unaware of it.

Note: if a known problem exists in code, don’t push it to a ‘working’ code area, regardless of any related implications.

Consider context

“Those guys are idiots!”
“Which dick thought it would be a good idea to use a drop-down here?!”
“I can’t believe they have used inline styles to do this!”

I’ve heard proclamations like this more times than I care to remember. If you find yourself guilty of outbursts like this, consider the fact you don’t fully understand the context in which those choices were made. Instead of assuming the worst of people, try assuming the best.

Maybe that was the best solution available and you just don’t understand the problem fully?

The ‘perpetrators’ may have been constrained by factors you are not privy to. My overwhelming experience in any functioning work environment is that the choices that were made are seldom as bad as they seem when you fully understand the problem that gave birth to them.

Bottom line: don’t rubbish people, generally or specifically, unless you absolutely know they were wilfully negligent or naive.

Communicate with those that need to know, not those you’d like to know

Use group communication tools for group discussions. Use group emails only when absolutely necessary. Email is seldom the right medium for office ‘banter’. Therefore, respect the inbox and time of others and consider whether other mediums are more appropriate.

If there is no appropriate medium for general online chat (e.g. Slack et al.) available for your team environment, consider implementing one.

Give what you know away

If you are the ‘go to’ person on your team/department for an approach, technology or subject, give your knowledge of that subject to others freely and without caveats.

Guarding your knowledge will get you nowhere. Giving it away always brings back more to you than you gave.

In addition, don’t seek to ‘own’ everything. Just because you came up with this or that idea, design or document, don’t seek to own it.

Language like “The design I came up with for…” or “Use the function I created that solves summing arrays” is passive aggressive. Heading documents “Ben Frain’s rules for how to…” reeks of ego and a desperation for ownership.

Instead, try using more neutral and inclusive language:

“Our design for…”

“Use the summing arrays function…”

“Rules for how to…”

Avoid toxicity

It’s my belief that immature people moan a lot. Everyone moans from time to time, it’s human nature. But beware serial moaning.

A serial moaner is someone who always complains, usually vocally to anyone else who will listen.

They have to change something? They moan.
They need to swap desks? They moan.
Have to work with different people? They moan.

Serial moaners are toxic. Deal with them fast and deal with them firmly. They poison a team’s morale.

If you can confront them about it you should. Sometimes, that way is all they have ever known and once they understand what they are doing it can set them on a more positive path.

Related: check out Number 3, in Milton Glaser’s ‘Ten Things’ essay.

…there is a test to determine whether someone is toxic or nourishing in your relationship with them. Here is the test: You have spent some time with this person, either you have a drink or go for dinner or you go to a ball game. It doesn’t matter very much but at the end of that time you observe whether you are more energized or less energized. Whether you are tired or whether you are exhilarated. If you are more tired then you have been poisoned. If you have more energy you have been nourished. The test is almost infallible and I suggest that you use it for the rest of your life

Don’t covet

Do you think you always get passed up? That everyone else always gets the good jobs and you get the rubbish? Why did they give her that assignment, when you are way better? If you think anything like that, stop!
Try to be mature in your approach. If there is a particular thing you want to work on, ask. If there is a particular role you would like to fill, ask.

Take problems away from people, don’t give them new ones. A boss will be happier to give you something you want than have a disgruntled employee. But they have to know. Give them that chance.

Your colleagues are not your friends

Due to the intimacy of working with the same people day in and day out, in the work place, people can confuse a professional relationship with a social friendship. Whilst good working relationships are important and to be encouraged, I think it’s very important to be clear on the differences.

Friendship that has arisen through a single event or shared experience is cheap, transient and largely meaningless. That’s the kind of ‘friendship’ that has arisen in the workplace. Few friendships of this nature last more than a decade. Their success, or not, is inextricably tied to the fate of the workplace. When a company is doing well and its employees are well looked after, ‘friendship’ with work colleagues is easier to come by. When things aren’t going so well, and lay-offs and pay freezes loom, things tend to feel less rosy.

People tend to get more upset about dysfunctional workplace relationships when they use words like ‘friends’ to discuss how relationships with colleagues used to be. Colleagues are just that, colleagues. A colleague is something different than a friend.

To be clear:

Colleague: a person with whom one works in a profession or business.

Whilst a friend:

a person with whom one has a bond of mutual affection, typically one exclusive of sexual or family relations.

Can a colleague be a friend? Probably. But I think that’s the exception to the rule.

Think of your professional relationships with colleagues in terms of respect instead of friendship.

A working relationship based on respect is more enduring and doesn’t carry the same burden of responsibility that a relationship based around friendship does.

To this end, I don’t feel team members should feel in any way obliged to engage in out-of-work social gatherings with work colleagues. Work is work and your own time is yours to do with as you please.
Not attending out-of-work hours activities with work colleagues is no reflection on your respect for your work colleagues. On the contrary, it is embracing the difference between your personal and social life and your professional working life. That’s not to say you shouldn’t do that if you wish; just tread carefully and consider moving some of your eggs to another basket.

To summarise: I’ve hugely respected many of the people I have worked with. But they are not my friends. When I leave that place of work my respect for them will remain, but it’s unrealistic to think I’ll be catching up with them each week for a coffee.


Think kind thoughts, work on your own shortcomings, be candid and generous with your colleagues. Confront toxic personalities but be realistic. Some people you just can’t save.

The effectiveness of productive effort
Tue, 30 May 2017 22:38:32 +0000

TL;DR: learning how to focus and applying that focus is the key to not only productivity, but greater happiness in general.

Here are some, occasionally related, ruminations that have led me to that conclusion.

Social media is a waste of time

My life has three tenets: family, work and recreation. Social media doesn’t serve any of them. At home, social media robs me of time and stops me being present with my family. Social media disrupts my concentration at work for little gain and causes more misunderstanding than clarity when it comes to social interactions. It’s my conviction that social media makes us less happy, not more. If you’re about to counter that social media is how you keep up to date, I’ve anticipated that.

Embrace RSS

Use RSS feeds. Compared to social media, blog posts are more reasoned, more informative and enjoy greater longevity than the stream of drivel spewed up on social media. If you subscribe to good RSS feeds, the articles are more worthy of your time and more likely to feed back in to improved productivity in the future.

Just move forward

A quote of Michael Crichton’s, from the writing profession, that has stuck with me:

Books aren’t written – they’re rewritten

It’s a notion that I believe holds true for almost any creative endeavour.

Stop procrastinating and just start making it. Over time I’ve accepted that being productive isn’t just the end product. It’s the process. Start something so you have something to change. Expect initial attempts to be hopeless, rough and flawed. But regardless, start something and keep moving forwards.

Ditch your ego

Learn to let go of your ego. Accept criticism when well intentioned. Actively look for quality feedback on your work. Conversely, give your knowledge and candid feedback freely to others when you can. Learning to take, understand and garner feedback can be incredibly productive.

Plan but plan loosely

“No plan survives first contact with the enemy”

That’s a quote attributed in various forms to Helmuth von Moltke. I love the practicality of it. It’s not that you shouldn’t have a plan, it’s that you should expect to adapt it. That way, when reality reveals itself, it shouldn’t throw you quite as much as it might were you expecting things to go to plan.

Write down what you know

I’ve never been very good at mathematics. In my youth, when facing a mathematical problem, my Dad drilled into me: “WRITE DOWN WHAT YOU KNOW”. Writing down what I know about a problem has always helped me order my thoughts. More often than not, it has also moved me towards the answer. While it might seem obvious and at times pointless, when stuck on any problem, take the time to write down everything you think you know about it. Don’t over-think it. Just scribble it down on paper or in a blank text file. Then thank my Dad when your brain subsequently figures it out.

Learn when your brain needs to breathe

If you are the sort of person that finds it hard to walk away from a problem, learn to know when your brain just needs to breathe. When it feels like you are just bashing your head against the problem, and you’ve already written down everything you know: stop. Go and make a cup of tea/go for a walk/eat lunch/shower. More often than not the solution, or a route forward, reveals itself when you let your mind breathe.

Learn to know what your problems are

Before you learn a new framework, adopt a new build tool, start writing in this year’s hottest meta-language, or choose a new piece of technology or methodology, really think about whether it is solving a problem you actually have. Analyse whether it does what you need. Don’t just choose it because it is popular. Other people’s problems are not necessarily your problems. Learn to analyse your own problems. Be more productive by only solving problems you actually have to solve.


Don’t ignore thousands of years of human evolution. Your body needs to move. If you have a sedentary job make getting exercise a priority. There will always be work requiring your attention but prioritise your daily exercise beyond everything else. Regular exercise will keep your mind and body more able to work in the long term.

To that end, be economical in the digital world, not the physical. Don’t combine any journeys you need to leave your desk for. Take the longest path to fetch water or use the toilet. Park a little further away from your workplace if possible; better still, walk or bike to the office if viable.

Make best use of ‘dead’ time

Long commutes can grind you down. If that’s your reality, invest in audio books and/or listen to the many quality podcasts available. In what used to be ‘dead’ time, you can be learning something new, broadening your mind with unrelated fiction or simply keeping up with all that has happened in your field recently. Now your commute is dedicated time for you that no one can take from you.

Learn your editor/core tools

What tools do you use? Don’t bother answering. I don’t care. I care whether you can use them well.

To exemplify this point for people in my own field, who write code for a living: at some point, spend a little time with each of the more popular text editors. Get a feel for their respective strengths. Then pick one, stick with it and learn its core features to the best of your ability.

The most productive and effective programmers I know don’t switch editors and have less than a handful of plugins installed. They spend their time solving problems and making new things, not trying to micro-optimise their working practice.

Write a log

On my OS desktop I have a diary.txt file. Almost every day I write a couple of paragraphs describing anything significant from the day. Problems encountered or solved, decisions made or pending, and any important conversations I might want to recall in the future.
Plus, at the end of a day, if I’m in the middle of a problem, this file provides a mechanism to document my current predicament and walk away (letting my brain breathe). I’ve found this practice invaluable and it is more than worth the time it takes to write.

Say no without guilt

Do fewer things. You can be more productive with the things you do when you do fewer of them. When someone asks you to get involved in their new side-project, be happy to say no. This shouldn’t be at the expense of decency. I’m a firm believer in repaying favours, but be protective of your free time. The more things you have going on, the less time you will have to be productive in. Learn this skill as soon as you can. With age and responsibility, your free time diminishes rapidly and saying no allows you to concentrate on whatever you have deemed truly important.


Most of these points relate, in some form, to focus. It is my conviction that the focus of effort has the greatest contribution to productivity. Learning to silence the ceaseless cacophony of modern life and navigate the many demands on your time will provide you with more opportunity to focus and be productive in your professional and personal endeavours.

This post was published first over at SuperYesMore.
Use linters for errors, formatters to fix style
Wed, 10 May 2017 14:09:34 +0000

TL;DR: limit the use of linters to highlighting errors. Use formatters to fix stylistic preferences.

In the past, at http://ecss.io, on this site and in talks, I have waxed lyrical about using linting tools like Stylelint to point out problems in authored code. As a static analysis tool, a linter is able to point out issues that are both stylistic, such as a missing closing semi-colon on a declaration, and also problematic, such as incorrect values for known properties. I employed a linter as a means to solve both types of problems and encouraged their use to keep developers on the straight and narrow in terms of their authored code. Uniformity was the ultimate aim!

However, in recent months I have found myself moving increasingly away from using a linter to point out the stylistic issues. Instead, I’m favouring a formatter for this job. A formatter does as you might expect. On file save (or in your build tool prior to code commit) the file is automatically formatted to match a pre-defined convention. This way, authors can write things however they want, as lax as they please when it comes to things like indentation and white space, and the file will be automagically ‘fixed’ on save.

This change has come about principally after adopting prettier for formatting JavaScript. It’s opinionated for sure, meaning there are likely some choices you will initially take issue with, but the time saving is immediately apparent.
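For anyone wanting to try it, prettier is driven by a small config file at the project root. Here is a minimal sketch of a `.prettierrc.js`; the option names are real prettier options, but the values are purely illustrative choices, not recommendations:

```javascript
// .prettierrc.js — example configuration only
module.exports = {
    printWidth: 100,   // wrap lines longer than this
    tabWidth: 4,       // spaces per indentation level
    singleQuote: false, // use double quotes
    semi: true         // always add closing semi-colons
};
```

With this in place, your editor plugin (or a pre-commit hook) formats files to match on every save.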

Adopting it has spelt the end of all debate around how arguments and functions should look, what should have white space and what shouldn’t, and on and on ad infinitum. In short, any downsides have been greatly outnumbered by the positives and when I jump into TypeScript, I really miss having prettier to tidy things up as I go.

So, for CSS, I’m now using a formatter for stylistic preference and a linter to point out potential errors.

The upshot being problems that are easy for a tool to fix automatically get fixed and problems that actually require more author consideration are pointed out for the author to deal with.

So far, I’m just playing with formatters in CSS and I’ve been using stylefmt so far. However, I’d really like to see a fix whereby only rules in the .stylelintrc are used. If that is an issue for you too you can track the issue here:

Next I’ll be looking at perfectionist by the irrepressible Ben Briggs.

It’s early days for out and out CSS formatters but there are plenty of smart folks (and me) talking about the next one here:

Building search results and highlighting matches with regex
Fri, 17 Mar 2017 16:19:06 +0000

This post is a practical step-by-step. We will be writing some JavaScript that allows us to highlight user entered strings in text. Think of something like a ‘find’ function in a text editor.

Here’s the demo; just enter a string like ‘Manchester’ or ‘Manc JcT’:

The focus of this post is the JavaScript. The HTML and CSS are very basic. As with prior posts, I’m using ECSS for naming conventions.

Let’s make a start. Here is our HTML (note in the linked demo I have in-lined the CSS and JS):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Highlighting text demo by Ben Frain</title>
    <style type="text/css">
        /* Styles */
    </style>
</head>
<body>
<div class="sch-Search">
    <div class="sch-Header">
        <input id="schInput" type="text" class="sch-Input"/>
    </div>
    <div id="schResults" class="sch-Results"></div>
</div>
<script>
    // JS Here
</script>
</body>
</html>
The CSS is largely irrelevant, so I’m not listing it out here. The only key thing worth mentioning is that I have added overflow-x: hidden; overflow-y: scroll; to the body tag. This is because the page will always be quite long and I dislike the scroll bar appearing and disappearing when the input is emptied.

The goal

A user enters a string of text and we use that string to parse the target text and wrap any matches in a span tag. The class on this span tag (sch-Result_Highlight) then allows us to add our highlighting styles.

We could obviously do this on a bunch of ‘lorem ipsum’ text but we are also going to use the string of user input text to decide what text to display in the first place. This is so we can show relevant results but then also highlight what they matched.

So, in plain terms, if I search for ‘Manche’ I want to display any records of data that include ‘Manche’ and then also highlight the ‘Manche’ string in the resultant data.

Get some data

I wanted a large dataset to play with. Ideally, we could just use the provider’s API and grab the dataset on page load. For example:

var dataSource = "//";
var request = new XMLHttpRequest();
request.open("GET", dataSource, true);

request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
        data = JSON.parse(request.responseText);
    } else {
        console.warn("// We reached our target server, but it returned an error");
    }
};

request.onerror = function() {
    console.error("// There was a connection error of some sort");
};

request.send();


Note: I didn’t use Fetch as support is poor as I write this.

However, I found the above API a little flakey and I was worried about how long it would hang around so instead I just loaded a subset of that data at the bottom of the JS file in a ‘here’s some data I saved earlier’ style.

data = [
    {
        traffic_management: "Lane Closure",
        status: "Firm",
        start_date: "26/10/2011 21:00:00",
        road: "M62",
        reference_number: "1819373",
        published_date: "25/8/2011 11:25:23",
        location: "Jct 19 Westbound",
        local_authority: "Rochdale",
        expected_delay: "No Delay",
        end_date: "27/10/2011 05:00:00",
        description: "Nightime hardshoulder closure westbound for electrical testing",
        closure_type: "Planned Works",
        centre_northing: "408723",
        centre_easting: "386220",
    },
    // More
];

As an aside, my second thought was to do an import statement, ES2015 style, to keep the main JS file cleaner like this:

import * as data from './data.js';

However, native support for import is poor and I didn’t want to get into transpiling with TypeScript or Babel et al.

Get the data relating to input (debounced)

We have some data ready to go, so let’s start actually doing something. First off, I want to respond to input in the input field:

schInput.addEventListener("input", function(e) {
    debouncedBuildResults(e);
}, false);

Our listener waits for input in our schInput element and then invokes debouncedBuildResults. Right, let’s look at what debouncedBuildResults looks like:

var debouncedBuildResults = debounce(function(e) {
    schResults.innerHTML = "";
    if (e.target.value.length < 3) {
        return;
    }
    for (var i = 0; i < SETTINGS.resultsLimit; i++) {
        buildResults(e.target.value, data[i]);
    }
}, 250);

The inner part of this function is what we actually want to do, wrapped inside a debounce function. I just grabbed the debounce function off the shelf (lodash, I think) so I won’t detail it here. The only point worth mentioning is that without debounce we would be firing on each input, which might choke things up a little. The debounce allows a little breathing room (250 milliseconds in this example).

If you aren’t sure what a debounce is or whether you want a debounce or a throttle, Chris Coyier has you covered
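Although I won’t detail the off-the-shelf version, here is a minimal sketch of the classic debounce pattern, just enough to make the snippets above self-contained. It is not lodash’s implementation (which offers leading/trailing options and more):

```javascript
// Classic trailing-edge debounce: fn only runs once `wait` ms have
// elapsed with no further calls; every new call restarts the countdown.
function debounce(fn, wait) {
    var timeout;
    return function () {
        var context = this;
        var args = arguments;
        clearTimeout(timeout);
        timeout = setTimeout(function () {
            fn.apply(context, args);
        }, wait);
    };
}
```

The upshot is that a burst of rapid-fire input events collapses into a single call after the user pauses.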

In terms of what happens on input: first of all we empty the DOM node that contains any existing results. Then, if there are fewer than 3 characters, we return out of the function; otherwise, we loop through building out the results (up to the limit set in SETTINGS.resultsLimit):

for (var i = 0; i < SETTINGS.resultsLimit; i++) {
    buildResults(e.target.value, data[i]);
}

Building a list of results

For each iteration of the loop, we run the buildResults function and pass as parameters the value that has been input and the data result from this iteration of the loop (the data is an array of objects so the square bracket notation: data[i] lets us pick the next one each time). So let’s look now at what buildResults does. Here is the complete function:

function buildResults(query, itemdata) {
    // Make an array from the input string
    query = query.split(" ");

    query.forEach(function(item) {
        // Bail early if we just have a space or a space and then nothing
        if (item === " " || item === "") {
            return;
        }
        var reg = "(" + item + ")(?![^<]*>|[^<>]*</)"; // explanation:
        var regex = new RegExp(reg, "i");

        // If the search string(s) aren't found in either key we are interested in, bail
        if (!itemdata.local_authority.match(regex) && !itemdata.description.match(regex)) {
            return;
        }
        var aResult = document.createElement("div");
        aResult.className = "sch-Result";
        var authority = document.createElement("h1");
        authority.className = "sch-Result_Title";
        authority.innerHTML = highlightMatchesInString(itemdata.local_authority, query);
        var detail = document.createElement("p");
        detail.className = "sch-Result_Detail";
        detail.innerHTML = highlightMatchesInString(itemdata.description, query);
        aResult.appendChild(authority);
        aResult.appendChild(detail);
        schResults.appendChild(aResult);
    });
}

The first thing we do is make an array out of whatever the user has entered. This is so we can check if any of the individual strings match. So, say I enter, “Manc closure”, the function will create an array in memory that looks like this: ["Manc", "closure"]. Now we want to iterate over each item in the array using forEach. I believe in returning early when you can so if the item itself is just whitespace, we return. Otherwise, we first create a string using the search term.

var reg = "(" + item + ")(?![^<]*>|[^<>]*</)";

We are going to use this string as a regular expression. I’m familiar with the adage:

Some people, when confronted with a problem, think
“I know, I’ll use regular expressions.” Now they have two problems.
Jamie Zawinski

But the more often I use Regular Expressions, the more impressed I am by their versatility.

Now, I can’t take any credit for this regex string. I’d spent hours trying to figure it out until I found the answer here: What this regex ultimately does is prevent any matches that are within HTML tags. This is important otherwise one string could be found within another that has already been highlighted. For example, suppose a search was made for ‘Manchester Chester’. We’d end up with this kind of HTML:

<h1 class="sch-Result_Title"><mark class="sch-Result_Highlight">Man<mark class="sch-Result_Highlight">chester</mark></mark></h1>
Thanks to Alex in the comments below, I’ve now switched to using a mark element here instead of a span. More on the mark element here:

So, now we have built a string we can pass this to a new Regex:

var regex = new RegExp(reg, "i");

Notice, the second parameter passed is "i", so that we can match insensitively (e.g. we aren’t bothered about capitalisation).
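If you want to convince yourself the lookahead is doing its job, here is a quick standalone check (the strings are just illustrative). The search term matches in plain text, but is suppressed once the text sits inside already-inserted markup:

```javascript
// The same pattern as above, built for the term "chester"
var reg = "(chester)(?![^<]*>|[^<>]*</)";
var regex = new RegExp(reg, "i");

var plainMatch = "Manchester".match(regex);
var wrappedMatch = '<mark class="sch-Result_Highlight">Manchester</mark>'.match(regex);

console.log(plainMatch && plainMatch[0]); // "chester" — matched in plain text
console.log(wrappedMatch);                // null — inside the mark, so skipped
```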

So, we next use our regex to check for matches in either of the keys we are interested in (itemdata.local_authority or itemdata.description). If we don’t get a match in either key we return out of the function. Otherwise, we proceed to build the result HTML. Here’s an example result:

<div class="sch-Result">
    <h1 class="sch-Result_Title">
        <mark class="sch-Result_Highlight">Roch</mark>dale
    </h1>
    <p class="sch-Result_Detail">Nightime hardshoulder closure westbound for electrical testing</p>
</div>

You’ll notice that for the innerHTML of the title and detail we are calling the highlightMatchesInString function. This is the final piece of the puzzle. Let’s look at that next.

Highlighting matches

We set the innerHTML of the sch-Result_Title and sch-Result_Detail elements by passing our query alongside the key to the highlightMatchesInString function like this:

authority.innerHTML = highlightMatchesInString(itemdata.local_authority, query);

So this will set the HTML to be whatever is returned out of that function. Here is the highlightMatchesInString function itself:

function highlightMatchesInString(string, query) {
    // The completed string will be itself if already set, otherwise the string that was passed in
    var completedString = completedString || string;
    query.forEach(function(item) {
        var reg = "(" + item + ")(?![^<]*>|[^<>]*</)"; // explanation:
        var regex = new RegExp(reg, "i");
        // If the regex doesn't match the string just exit
        if (!string.match(regex)) {
            return;
        }
        // Otherwise, get to highlighting
        var matchStartPosition = string.match(regex).index;
        var matchEndPosition = matchStartPosition + string.match(regex)[0].length;
        var originalTextFoundByRegex = string.substring(matchStartPosition, matchEndPosition);
        completedString = completedString.replace(regex, `<mark class="sch-Result_Highlight">${originalTextFoundByRegex}</mark>`);
    });
    return completedString;
}

First of all we create a variable for the string we want to ultimately return. We assign this to itself unless it is null, in which case we set it to be the original string that was passed in.

var completedString = completedString || string;

Then, for each item in the array of strings to match against (remember, the user may have searched for two or more space separated things) we make a regex, just as we did in the prior function. If we don’t have a match, we return from the function; otherwise, we highlight the text. Here’s how we can do that:

var matchStartPosition = string.match(regex).index;
var matchEndPosition = matchStartPosition + string.match(regex)[0].toString().length;
var originalTextFoundByRegex = string.substring(matchStartPosition, matchEndPosition);
completedString = completedString.replace(regex, `<mark class="sch-Result_Highlight">${originalTextFoundByRegex}</mark>`);

We find the start position with the index. Then we find the end position of the match within the text by taking the start position and adding the length of the matched string. Now, before we wrap this range, we need to store the original text in a variable. We do this by passing our matchStartPosition and matchEndPosition values to the substring method.

Finally, we can set our completedString to be itself but also replace anything that matches the regex with our wrapped text. I’m using a ES2015 template literal for ease.

completedString = completedString.replace(regex, `<mark class="sch-Result_Highlight">${originalTextFoundByRegex}</mark>`);

Finally, we have our new string so we return that back out of our function:

return completedString;

With any needed highlights applied to the strings, back at the end of the buildResults function, we then append that result into the DOM.
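As a sanity check, here is highlightMatchesInString again in a fully self-contained form, run against the ‘Rochdale’ record from the sample data:

```javascript
// Repeated in full so it can run standalone
function highlightMatchesInString(string, query) {
    var completedString = completedString || string; // hoisted var, so this falls back to `string`
    query.forEach(function(item) {
        var reg = "(" + item + ")(?![^<]*>|[^<>]*</)";
        var regex = new RegExp(reg, "i");
        if (!string.match(regex)) {
            return;
        }
        var matchStartPosition = string.match(regex).index;
        var matchEndPosition = matchStartPosition + string.match(regex)[0].length;
        var originalTextFoundByRegex = string.substring(matchStartPosition, matchEndPosition);
        completedString = completedString.replace(regex, '<mark class="sch-Result_Highlight">' + originalTextFoundByRegex + '</mark>');
    });
    return completedString;
}

var highlighted = highlightMatchesInString("Rochdale", ["roch"]);
console.log(highlighted); // <mark class="sch-Result_Highlight">Roch</mark>dale
```

Note how the original capitalisation (‘Roch’) survives even though the search term was lowercase.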



If you’ve made it here, it’s worth looking back on what we’ve done. A little DOM building, a little looping and some array work. We’ve even gotten our hands dirty with some regular expressions, even if the hard work was done for us.

A horizontal scrolling navigation pattern for touch and mouse with moving current indicator
Fri, 10 Mar 2017 17:42:36 +0000

This is a practical post. A step-by-step of building up a navigation solution. I tried to leave in all the mistakes I made along the way to save you from my own folly; as such it’s pretty long. Sorry about that!

These days, thanks to the ubiquity of touch devices, users are generally familiar with horizontal scrolling panels. They are an effective way of minimising vertical space while still allowing plentiful content.

However, for mouse input, the pattern doesn’t work as well. By default, there is no direct way to click and drag the content so that users can reach elements outside the visible area. You could of course leave the scrollbar visible but, despite being able to style it to a degree, in most situations I still find it quite ugly.

If you choose to hide the scrollbars where you can, it’s still possible to scroll horizontal panels with mouse input by holding down a modifier key (shift on a Mac for example) while using a mouse wheel. However, this is niche/power user functionality — certainly not something we can rely on.

So, the task in this post is to create a simple scrollable panel for touch, augmented with click and drag functionality for mouse input, along with direction overflow indicators.

Here’s what we will end up with: And we will be building to that through the collection of Codepen steps here:

See the Pen zZZLaP by Ben Frain (@benfrain) on CodePen.

A few readers noted that the Codepen embeds don’t work well. In that case please take a look at the demo page on this site:

So, how do we get there? Let’s start with some basic HTML:

Our navigation HTML structure could look like this:

<nav class="pn-ProductNav">
    <div class="pn-ProductNav_Contents">
        <a href="#" class="pn-ProductNav_Link" aria-selected="true">Chairs</a>
        <a href="#" class="pn-ProductNav_Link">Tables</a>
        <a href="#" class="pn-ProductNav_Link">Cookware</a>
        <a href="#" class="pn-ProductNav_Link">Beds</a>
        <!-- more links -->
    </div>
</nav>

I’m using ECSS naming conventions here and following the nesting authoring pattern to provide a single source of truth for each selector (sorry Thierry, I know you’d rather see vanilla CSS):

.pn-ProductNav {
    /* Make this scrollable when needed */
    overflow-x: auto;
    /* We don't want vertical scrolling */
    overflow-y: hidden;
    /* Make an auto-hiding scroller for the 3 people using IE */
    -ms-overflow-style: -ms-autohiding-scrollbar;
    /* For WebKit implementations, provide inertia scrolling */
    -webkit-overflow-scrolling: touch;
    /* We don't want internal inline elements to wrap */
    white-space: nowrap;
    /* Remove the default scrollbar for WebKit implementations */
    &::-webkit-scrollbar {
        display: none;
    }
}

The default scrollbars have been hidden where possible, although STILL, 16 years on, there is no way to do this in Firefox without significant hackery.

That gets us this:

See the Pen WpRgZd by Ben Frain (@benfrain) on CodePen.

I’ve added some more basic styling to make it a little more visually appealing, and set the colour of the selected link (using ARIA attributes), but that doesn’t affect the principles of how it works.

If you look at this on a handheld device, it should happily do the horizontal scrolling thing. OK, great, that was the easy part.

Indicating overflow

Now, unless I missed the memo, it’s not possible to know ahead of time what input a user has, so it’s best to implement as many features universally as we can. If we are ditching the visible scrollbars, which provided the user with an indication of an overflowing area, we had better have something else to solve that issue in a more visually appealing manner.

So, I got ahead of myself. Let’s do the right thing here first. By default, our HTML element has a no-js class.

<!DOCTYPE html>
<html class="no-js">

Let’s use JavaScript to amend this class to simply js when JS is present. Then we can choose to only hide the scrollbars if JS is present:

document.documentElement.className = document.documentElement.className.replace("no-js", "js");
Our revised CSS:

.pn-ProductNav {
    /* Make this scrollable when needed */
    overflow-x: auto;
    /* We don't want vertical scrolling */
    overflow-y: hidden;
    /* For WebKit implementations, provide inertia scrolling */
    -webkit-overflow-scrolling: touch;
    /* We don't want internal inline elements to wrap */
    white-space: nowrap;
    /* If JS present, let's hide the default scrollbar */
    .js & {
        /* Make an auto-hiding scroller for the 3 people using IE */
        -ms-overflow-style: -ms-autohiding-scrollbar;
        /* Remove the default scrollbar for WebKit implementations */
        &::-webkit-scrollbar {
            display: none;
        }
    }
}

OK, we can feel a tiny bit better inside now. Let’s also add a function that can determine if our content is overflowing its container and add a data attribute to communicate that state in the DOM.

function determineOverflow(content, container) {
    var containerMetrics = container.getBoundingClientRect();
    var containerMetricsRight = Math.floor(containerMetrics.right);
    var containerMetricsLeft = Math.floor(containerMetrics.left);
    var contentMetrics = content.getBoundingClientRect();
    var contentMetricsRight = Math.floor(contentMetrics.right);
    var contentMetricsLeft = Math.floor(contentMetrics.left);
    if (containerMetricsLeft > contentMetricsLeft && containerMetricsRight < contentMetricsRight) {
        return "both";
    } else if (contentMetricsLeft < containerMetricsLeft) {
        return "left";
    } else if (contentMetricsRight > containerMetricsRight) {
        return "right";
    } else {
        return "none";
    }
}

This function measures the right and left positions of the first parameter, content (it should be a DOM element), and the second parameter, container (another DOM element), and returns whether the content is overflowing to the right, left, both sides or not at all.
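To make those comparisons concrete, here is the same decision logic applied to plain objects with illustrative numbers (a standalone sketch, not part of the article’s code):

```javascript
// Same branching as determineOverflow, but taking plain
// {left, right} objects so the logic can be tried in isolation.
function overflowFor(content, container) {
    if (container.left > content.left && container.right < content.right) {
        return "both";
    } else if (content.left < container.left) {
        return "left";
    } else if (content.right > container.right) {
        return "right";
    }
    return "none";
}

// Container spans 0–500; content spans 0–900, so it only
// sticks out past the right edge:
console.log(overflowFor({ left: 0, right: 900 }, { left: 0, right: 500 })); // "right"
```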

Let’s feed the function our existing container and content. I’m adding in IDs to the relevant DOM elements here for simplicity but you could obviously grab them however you like. So the HTML now has IDs in the relevant places:

<nav id="pnProductNav" class="pn-ProductNav">
    <div id="pnProductNavContents" class="pn-ProductNav_Contents">
        <a href="#" class="pn-ProductNav_Link" aria-selected="true">Chairs</a>
        <!-- more -->
    </div>
</nav>

And I’m grabbing them in JS like this:

var pnProductNav = document.getElementById("pnProductNav");
var pnProductNavContents = document.getElementById("pnProductNavContents");

And we feed them to our determineOverflow function like this:

pnProductNav.setAttribute("data-overflowing", determineOverflow(pnProductNavContents, pnProductNav));

And then in the DOM, by default (unless you have an enormous screen), we should see data-overflowing="right" on our product nav.

Except we don’t.

It’s currently returning data-overflowing="none". What gives?

Well, a trip into the dev tools reveals that even though the content inside pn-ProductNav_Contents is leading off the page, the computed width of pn-ProductNav_Contents is actually the same width as pn-ProductNav, its wrapper. So, we need some way to make the container the same width as its content. We can do this with intrinsic sizing in CSS by applying width: min-content. However, IE doesn’t support intrinsic and extrinsic sizing so we need to go ‘Old Skool’ and break out the float. We will change the content to be a block that is floated left, to create a new block formatting context for its contents.

.pn-ProductNav_Contents {
    float: left;
}
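For reference, the intrinsic sizing route mentioned above would have looked like this (not used here because of the IE caveat; older browsers may also need vendor prefixes):

```css
.pn-ProductNav_Contents {
    /* Size the container to its content; no IE support */
    width: min-content;
}
```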

With that done, our content is far wider than the container. If you look at the attribute of the wrapper in this example, you can see we have data-overflowing="right" on the pn-ProductNav element.

See the Pen NpdOzm by Ben Frain (@benfrain) on CodePen.

Overflow indicators

Now the DOM can tell us what’s overflowing, we need it to update as the content is scrolled. Let’s listen to the scroll event to do this but, as scroll can fire A LOT, we will adapt the example from HTML5 Rocks to perform the action behind requestAnimationFrame:

// Handle the scroll of the horizontal container
var last_known_scroll_position = 0;
var ticking = false;

function doSomething(scroll_pos) {
    pnProductNav.setAttribute("data-overflowing", determineOverflow(pnProductNavContents, pnProductNav));
}

pnProductNav.addEventListener("scroll", function() {
    last_known_scroll_position = window.scrollY;
    if (!ticking) {
        window.requestAnimationFrame(function() {
            doSomething(last_known_scroll_position);
            ticking = false;
        });
    }
    ticking = true;
});

If you scroll the content now, you can see the attribute gets updated based upon whether or not the content is overflowing on the left, right, both or none. Groovy. Now how about showing that stuff visually?

Let’s add a couple of elements to serve as our indicators. You could have the indicators as background-images, icon-fonts, whatever. I’ve gone for inline SVGs:

<nav id="pnProductNav" class="pn-ProductNav">
    <div id="pnProductNavContents" class="pn-ProductNav_Contents">
        <!-- Links here -->
    </div>
    <button class="pn-Advancer pn-Advancer_Left" type="button">
        <svg class="pn-Advancer_Icon" xmlns="" viewBox="0 0 551 1024"><path d="M445.44 38.183L-2.53 512l447.97 473.817 85.857-81.173-409.6-433.23v81.172l409.6-433.23L445.44 38.18z"/></svg>
    </button>
    <button class="pn-Advancer pn-Advancer_Right" type="button">
        <svg class="pn-Advancer_Icon" xmlns="" viewBox="0 0 551 1024"><path d="M105.56 985.817L553.53 512 105.56 38.183l-85.857 81.173 409.6 433.23v-81.172l-409.6 433.23 85.856 81.174z"/></svg>
    </button>
</nav>

I’m using button elements so I have some serious ‘undoing’ to do stylistically. Here’s the CSS for the buttons and the SVGs within:

.pn-Advancer {
    /* Reset the button */
    appearance: none;
    background: transparent;
    padding: 0;
    border: 0;
    &:focus {
        outline: 0;
    }
    /* Now style it as needed */
    position: absolute;
    top: 0;
    bottom: 0;
}

.pn-Advancer_Left {
    left: 0;
}

.pn-Advancer_Right {
    right: 0;
}

.pn-Advancer_Icon {
    width: 20px;
    height: 44px;
    fill: #bbb;
}

Note, because I’m making the buttons absolutely positioned, I also need to add position: relative to pn-ProductNav so they are positioned relative to it (and not the nearest non-statically positioned ancestor).

OK, that should do it. Let’s scroll it and see how it looks:

See the Pen yMgZme by Ben Frain (@benfrain) on CodePen.

Oh crap! Can you see how the arrow travels as we scroll? That’s not what I wanted. But it makes (some) sense. The absolutely positioned elements are positioned inside their container, but when we scroll, the whole element scrolls. Don’t worry, we can fix this with a little extra structure. We can wrap our nav in a containing element which will provide the positioning context for our buttons.

Our revised HTML:

<div class="pn-ProductNav_Wrapper">
    <nav id="pnProductNav" class="pn-ProductNav">
        <div id="pnProductNavContents" class="pn-ProductNav_Contents">
            <!-- Links -->
        </div>
    </nav>
    <button class="pn-Advancer pn-Advancer_Left" type="button"><!--button SVG --></button>
    <button class="pn-Advancer pn-Advancer_Right" type="button"><!--button SVG --></button>
</div>

We then move our positioning to our new element:

.pn-ProductNav_Wrapper {
    position: relative;
}

Now everything is where it needs to be, let’s show and hide those indicators depending upon where the content is overflowing. We can set them to have no opacity by default and then transition their appearance as needed:

.pn-Advancer_Left {
    left: 0;
    opacity: 0;
    transition: opacity .3s;
    [data-overflowing="both"] ~ &,
    [data-overflowing="left"] ~ & {
        opacity: 1;
    }
}

.pn-Advancer_Right {
    right: 0;
    opacity: 0;
    transition: opacity .3s;
    [data-overflowing="both"] ~ &,
    [data-overflowing="right"] ~ & {
        opacity: 1;
    }
}

Now, regardless of input type, as you scroll the panel, you get some indication of where the panel overflows. You can see the current state of our demo here:

See the Pen oZBOBv by Ben Frain (@benfrain) on CodePen.

Making the panel advance on click

So our indicators are now doing the job a visible scrollbar does; we are indicating to the user there is content off one side or the other. Let’s add some functionality to advance the panel in either direction if the user clicks on them.

First we will need a few settings; their use will make more sense shortly:

var SETTINGS = {
    navBarTravelling: false,
    navBarTravelDirection: "",
    navBarTravelDistance: 150
};

Let’s grab our two buttons (again, I’ve added IDs for easy access):

// Our advancer buttons
var pnAdvancerLeft = document.getElementById("pnAdvancerLeft");
var pnAdvancerRight = document.getElementById("pnAdvancerRight");

Now, the idea is that when a user clicks a button, we are going to move the scroll panel inside the container using translateX. Then, once the move has ended, we remove the transform but apply the same distance to the scrollLeft property on the panel (scrollLeft being the distance, in px, that the panel has been scrolled). That should provide a smooth way of advancing the panel in either direction.

The settings we set as an object provide a means to prevent additional clicks whilst we are moving our panel and also allow us to set different default move amounts.

We also want to ‘gobble up’ any little bits at the end, so if a user is fairly close to the end, we don’t want the standard travel amount that would leave, say 10px, of space to scroll. In that scenario, we just transition the whole distance to the end. That will make more sense when you try it!
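Before the full handlers, the transform-to-scrollLeft hand-off can be sketched with plain numbers (an illustrative simulation; the real code manipulates the DOM element’s style and scrollLeft):

```javascript
// Simulate advancing the panel 150px to the right.
// `state` stands in for the DOM: scrollLeft on the scroller,
// transformX on the contents element.
var state = { scrollLeft: 0, transformX: 0 };

// Step 1: animate the contents leftwards with a transform.
state.transformX = -150;

// Step 2: on transitionend, drop the transform and apply the
// same distance to scrollLeft. The two cancel out visually,
// so the panel doesn't jump.
state.scrollLeft += Math.abs(state.transformX);
state.transformX = 0;

console.log(state.scrollLeft); // 150
```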

There’s a big blob of code coming up here but it is commented so a few reads through and hopefully it’ll make some sense. Note, I don’t profess to be a JS ninja so there are likely better ways to achieve this (and I will happily accept the schooling).

pnAdvancerLeft.addEventListener("click", function() {
    // If in the middle of a move, return
    if (SETTINGS.navBarTravelling === true) {
        return;
    }
    // If we have content overflowing both sides or on the left
    if (determineOverflow(pnProductNavContents, pnProductNav) === "left" || determineOverflow(pnProductNavContents, pnProductNav) === "both") {
        // Find how far this panel has been scrolled
        var availableScrollLeft = pnProductNav.scrollLeft;
        // If the space available is less than two lots of our desired distance, just move the whole amount
        // otherwise, move by the amount in the settings
        if (availableScrollLeft < SETTINGS.navBarTravelDistance * 2) {
   = "translateX(" + availableScrollLeft + "px)";
        } else {
   = "translateX(" + SETTINGS.navBarTravelDistance + "px)";
        }
        // We do want a transition (this is set in CSS) when moving, so remove the class that would prevent that
        pnProductNavContents.classList.remove("pn-ProductNav_Contents-no-transition");
        // Update our settings
        SETTINGS.navBarTravelDirection = "left";
        SETTINGS.navBarTravelling = true;
    }
    // Now update the attribute in the DOM
    pnProductNav.setAttribute("data-overflowing", determineOverflow(pnProductNavContents, pnProductNav));
});

pnAdvancerRight.addEventListener("click", function() {
    // If in the middle of a move, return
    if (SETTINGS.navBarTravelling === true) {
        return;
    }
    // If we have content overflowing both sides or on the right
    if (determineOverflow(pnProductNavContents, pnProductNav) === "right" || determineOverflow(pnProductNavContents, pnProductNav) === "both") {
        // Get the right edge of the container and content
        var navBarRightEdge = pnProductNavContents.getBoundingClientRect().right;
        var navBarScrollerRightEdge = pnProductNav.getBoundingClientRect().right;
        // Now we know how much space we have available to scroll
        var availableScrollRight = Math.floor(navBarRightEdge - navBarScrollerRightEdge);
        // If the space available is less than two lots of our desired distance, just move the whole amount
        // otherwise, move by the amount in the settings
        if (availableScrollRight < SETTINGS.navBarTravelDistance * 2) {
   = "translateX(-" + availableScrollRight + "px)";
        } else {
   = "translateX(-" + SETTINGS.navBarTravelDistance + "px)";
        }
        // We do want a transition (this is set in CSS) when moving, so remove the class that would prevent that
        pnProductNavContents.classList.remove("pn-ProductNav_Contents-no-transition");
        // Update our settings
        SETTINGS.navBarTravelDirection = "right";
        SETTINGS.navBarTravelling = true;
    }
    // Now update the attribute in the DOM
    pnProductNav.setAttribute("data-overflowing", determineOverflow(pnProductNavContents, pnProductNav));
});

pnProductNavContents.addEventListener("transitionend", function() {
    // Get the value of the transform, apply that to the current scroll position (so get the scroll pos first) and then remove the transform
    var styleOfTransform = window.getComputedStyle(pnProductNavContents, null);
    var tr = styleOfTransform.getPropertyValue("-webkit-transform") || styleOfTransform.getPropertyValue("transform");
    // If there is no transform we want to default to 0 and not null
    var amount = Math.abs(parseInt(tr.split(",")[4]) || 0);
    // Prevent the contents transitioning back when the transform is removed
    pnProductNavContents.classList.add("pn-ProductNav_Contents-no-transition"); = "none";
    // Now let's set the scroll position
    if (SETTINGS.navBarTravelDirection === "left") {
        pnProductNav.scrollLeft = pnProductNav.scrollLeft - amount;
    } else {
        pnProductNav.scrollLeft = pnProductNav.scrollLeft + amount;
    }
    SETTINGS.navBarTravelling = false;
});

You can view this stage here:

See the Pen wJgZYP by Ben Frain (@benfrain) on CodePen.

Note in the CSS:

.pn-ProductNav_Contents {
    float: left;
    transition: transform .2s ease-in-out;
}

.pn-ProductNav_Contents-no-transition {
    transition: none;
}

In the JS the pn-ProductNav_Contents-no-transition class is being added so that when the transform: none is applied, we don’t see the panel scrolling back. This could also be handled with JS if preferred; this was just a personal preference.

Current indicator

It would be nice to have an indication of the currently active navigation item. Let’s handle that next.

We will add a listener to the scroller and add a ‘current’ class to the link that was clicked:

pnProductNavContents.addEventListener("click", function(e) {
    // Make an array from each of the links in the nav
    var links = []".pn-ProductNav_Link"));
    // Turn all of them off
    links.forEach(function(item) {
        item.setAttribute("aria-selected", "false");
    });
    // Set the clicked one on"aria-selected", "true");
});

With that in place, you will now see basic styling (text going darker) when each item is clicked. However, I’d like something a little fancier. My colleague, Tom Millard, created something nice for the mobile site at my place of work: the main navigation line moves and resizes based upon the navigation item that is clicked. Let’s try and ape that functionality here.

A moving current indicator

Let’s use a ‘faceless’ span as our indicator. We will then move this around based upon which element is aria-selected="true" as an extra visual cue. Here’s where it lives at the end of all the links:

<div class="pn-ProductNav_Wrapper">
    <nav id="pnProductNav" class="pn-ProductNav">
        <div id="pnProductNavContents" class="pn-ProductNav_Contents">
            <!-- More Links -->
            <a href="#" class="pn-ProductNav_Link">Worktops</a>
            <span id="pnIndicator" class="pn-ProductNav_Indicator"></span>
        </div>
    </nav>
    <!-- Buttons -->
</div>

By default it will be styled like this:

.pn-ProductNav_Indicator {
    position: absolute;
    bottom: 0;
    left: 0;
    height: 4px;
    width: 100px;
    background-color: #f90;
    transform-origin: 0 0;
}

And so we have an over-long indicator in the DOM like this:

See the Pen BWWJPR by Ben Frain (@benfrain) on CodePen.

Now, we need to style and move it as nav items are clicked. Here’s the function that will do this for us:

function moveIndicator(item, color) {
    var textPosition = item.getBoundingClientRect();
    var container = pnProductNavContents.getBoundingClientRect().left;
    var distance = textPosition.left - container;
    var scrollPosition = pnIndicator.parentNode.scrollLeft; = "translateX(" + (distance + scrollPosition + pnProductNavContents.scrollLeft) + "px) scaleX(" + textPosition.width * 0.01 + ")";
    if (color) { = color;
    }
}

The function takes two parameters: item (the nav link being clicked) and color (we will come to that in time). It takes the item, finds its left edge and subtracts the left edge of the container. That value is the distance we need the line to travel, and we apply it via translateX to move the line. Now for the really clever bit from Tom: we can set the width of the indicator by scaling its default width of 100px according to the width of the text element. Because the default size is 100, the sum is simply textPosition.width * 0.01. I owe him a brownie for that one.
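To see why the numbers work out, here is the scale calculation with some illustrative values (hypothetical widths, not from the demo):

```javascript
// The indicator's natural width is 100px, so scaling it by
// (text width * 0.01) resizes it to exactly the text's width.
var indicatorBaseWidth = 100;
var textWidth = 86; // e.g. the link's getBoundingClientRect().width
var scale = textWidth * 0.01;
var renderedWidth = indicatorBaseWidth * scale;

// renderedWidth is 86 (give or take floating-point noise)
console.log(Math.round(renderedWidth)); // 86
```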

Note that the transform-origin we set in the CSS becomes important now; otherwise, whilst the line would re-scale correctly, it would scale from a center point instead of the top left.

Plus, without the transition in CSS, the line would just snap to the new position; the simple transition makes it zip about. A far more pleasing effect.

One other thing happened along the way. Because I was setting the border between the items with padding and margin, the indicator line was the wrong width; I want it the full width of the selection. Switching to padding on either side means an accurate width; kind of. Take a look at where we are now. Getting there, but look at the pesky bits of white-space at the beginning of the line:

See the Pen peepXO by Ben Frain (@benfrain) on CodePen.

Fixing the white-space issue

A few minutes poking around and I remembered the cause. It’s good ol’ white-space that always appears between inline items. There are a number of ways around this; I’m opting to set the font-size to zero on the wrapper and reset it on the link items.

I’ve also amended the border slightly so things are sized more consistently. By having the border on all sides (despite it only being visible on one side), the measurements are more consistent. Here are the changed properties/values for the links:

.pn-ProductNav_Link {
    /* Reset the font size */
    font-size: 1.2rem;
    border: 1px solid transparent;
    padding: 0 11px;
    & + & {
        border-left-color: #eee;
    }
}
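For completeness, the wrapper half of that fix is tiny (a sketch consistent with the approach described above; the exact value used in the demo may differ):

```css
.pn-ProductNav_Contents {
    /* Collapse the white-space between the inline links */
    font-size: 0;
}
```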

One final thing I want to do is set the indicator to the right width initially:

// Size/position the indicator for the initially selected item
moveIndicator(document.querySelector('.pn-ProductNav_Link[aria-selected="true"]'));
Right, we are starting to look in pretty good shape now:

See the Pen dvvdLZ by Ben Frain (@benfrain) on CodePen.

However, there are a couple more features I would like to add. Firstly, a different indicator colour for each product. Secondly, the ability for a mouse user to drag the panel instead of clicking the advancer buttons. Let’s deal with the colour first.

Make the indicator change colour

Let’s make an object in JS with a different colour for each key. We can then use the position of the clicked item in the node list to pick a colour, apply it to the wrapping element, and let the indicator inherit it. Here’s what the object with colours looks like:

var colours = {
    0: "#867100",
    1: "#7F4200",
    2: "#99813D"
    // More colours
};

And then we can pass the colour to the moveIndicator function like this:

moveIndicator(, colours[links.indexOf(]);

For the second argument, we look at the colours object and choose the key equal to the clicked item’s position in the node list.
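Here is that lookup in isolation with stand-in data (illustrative only; the real links array holds DOM nodes):

```javascript
var colours = { 0: "#867100", 1: "#7F4200", 2: "#99813D" };
// Stand-ins for the link nodes in the nav
var links = ["Chairs", "Tables", "Cookware"];
var clicked = "Tables"; // stand-in for

// indexOf gives the clicked item's position, which keys the colour
console.log(colours[links.indexOf(clicked)]); // "#7F4200"
```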

Now that our moveIndicator function is receiving a color, it will change as we click. Let’s just add a transition to the colour change to smooth it further. This is what the CSS for the indicator now looks like:

.pn-ProductNav_Indicator {
    position: absolute;
    bottom: 0;
    left: 0;
    height: 4px;
    width: 100px;
    background-color: transparent;
    transform-origin: 0 0;
    transition: transform .2s ease-in-out, background-color .2s ease-in-out;
}

And here is the result:

See the Pen MppBYa by Ben Frain (@benfrain) on CodePen.

Drag to scroll

The last feature I’d like to see is drag to scroll functionality. If a mouse user clicks on the nav and drags it, I want it to behave in the same way it would with touch and drag. I’m going to cheat at this point in a ‘here’s one I made earlier’ style and grab a great little JS script I found called ‘dragscroll’, which you can find on GitHub. Long story short: you include the JS, add a dragscroll class to the scroll panel HTML and you’re done!
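In other words, the wiring is minimal, something like this (a sketch; the script path is illustrative and the markup is abridged from the examples above):

```html
<script src="dragscroll.js"></script>

<nav id="pnProductNav" class="pn-ProductNav dragscroll">
    <!-- contents as before -->
</nav>
```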

See the Pen zZZLaP by Ben Frain (@benfrain) on CodePen.


So now we have our scrolling panel with overflow indicators and clickable ‘advancer’ buttons. In addition we have employed a little JS script to allow drag to scroll functionality for mouse users.

I hope you learnt something reading this; probably not as much as I learnt writing it.

Learning Fri, 03 Mar 2017 10:42:33 +0000 The game

You stand at the beginning of a chalked-out lane. This lane runs out an impossibly long distance, as far as you can see. It’s smooth like a billiard table; the world’s largest athletics track.

Before you is a large heavy sphere. Almost as tall as you; a hulk of mass.

As you stand, others pass you on either side, heaving and rolling the spheres before them. The participants are of every conceivable size, age and inherent ability, and each employs slightly differing techniques to move the ball along. Each is pushing the ball as best they can at their own rate; some in bursts, pausing frequently to regain composure, some steady and methodical. Some roll the ball on with seeming ease whilst others labour and exert themselves to move the ball just a few inches.

Looking behind, you notice that all their lanes started some distance back, some imperceptibly distant.

What is apparent is that the aim of the game is to get that huge ball moving, and move it as far as possible off into the distance.

This is learning.

The lanes are your subject, the ball your knowledge, the aim is as far off as you want or need it to be.

Anyone can learn anything

Some start sooner. Some begin with greater advantages. But we can all creep. The more we creep, the greater our momentum. The more we exert ourselves with dedicated effort, the more our strength and technique improves and slowly but surely we head towards our goal, however grand or humble that may be.

Enduring CSS now available in hard copy Fri, 13 Jan 2017 18:06:12 +0000 The book, ‘Enduring CSS’ is now available in hard copy form from Packt. It covers scaling CSS on large-scale projects.

Previously the title was only available in eBook form from Leanpub and it’s nice to be able to offer it as a hard copy from now on.

There aren’t many books available that purely cover CSS architecture so if it’s a subject that interests you I’d love to know how you find it.

Although the Packt version has a different cover, it’s the same content as the version sold at Leanpub. There are typographical and stylistic differences but otherwise it is the same. However, be aware that any future updates to the title are more likely to hit the Leanpub version before the Packt version (I have direct access to the Leanpub versions whilst any future updates to the Packt version are at the mercy of the publisher).

Position Sticky is back! But it has issues Fri, 13 Jan 2017 14:52:21 +0000 In WebKit devices (older Android and iOS) position: sticky has been around for some time. It was subsequently removed from Android/Chrome (at v35). Now, it’s back in the standards and back in Chrome from version 56 onwards.

Sadly there is significant disparity in how Chrome/Firefox and Safari now implement the feature.

Here is how the feature is described in the specification:

A stickily positioned box is positioned similarly to a relatively positioned box, but the offset is computed with reference to the nearest ancestor with a scrolling box, or the viewport if no ancestor has a scrolling box.

As a simple example, consider a blog post where all headers are sticky and stick to the top of the page as the user scrolls.

See the Pen Simple position: sticky Example by Ben Frain (@benfrain) on CodePen.

The requisite code is remarkably small. Just tell each header to be sticky and tell it where you want it to stick:

.to-Stick {
    position: sticky;
    top: 0;
}

OK, great. However, implementations in Chrome/Firefox and Safari differ when transforms are applied to parent elements (note, the latest Chrome implementation is only in dev/Canary as I write this).

You’ll need to view the following example in both iOS/Safari and Chrome/Firefox to see the difference (for now you can use Chrome Dev or Canary, enter responsive mode in the tools and select something like the Nexus 5X as the device).

Here is the amended example:

See the Pen Simple position: sticky Example by Ben Frain (@benfrain) on CodePen.

To explain the example: the body element has a translateY(-44px) transform applied. Safari shows the sticky box sticking to the top of the viewport while Chrome has the sticky element ‘sticking’ 44px sooner.
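The relevant parts of the amended example boil down to this (a sketch of the setup just described; class name as in the earlier example):

```css
body {
    /* The transform that causes the engines to disagree */
    transform: translateY(-44px);
}

.to-Stick {
    position: sticky;
    top: 0;
}
```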

Whether this disparity is due to the specification making no allowance for what should happen when transforms are applied to the scrolling box, I cannot say. At first glance, WebKit seems to be doing it right. However, in terms of adherence to the specification, things are not so clear.

Chrome/Firefox is factoring the transform into the calculation of where the sticky element should show (although they don’t factor in a top: -44px on the body which I thought was weird). Specification wise, Chrome is probably closer — the sticky element is positioned with reference to its nearest ancestor (the body in this example, which has been transformed to a different visual position). However, practically, I think the WebKit implementation makes total sense too. If you add top: 0 to a sticky element, no matter what’s happened with the container, the element in question is going to stick to the top of the viewport.

There is at least one bug report in relation to the Chrome implementation but I can’t decide if it only seems like a bug because it differs from Safari.

Ultimately, as developers we could deal with either approach. What is not cool is a different implementation across browsers.

So, enjoy the fact you can start using position: sticky again, but be aware that if you have a transform on the scrolling box it belongs to, things could start to get messy.

A modern CSS reset (with caveats) Thu, 03 Nov 2016 12:02:33 +0000 I’m always on the lookout for a simple way to undo default user agent styling on some ‘pickier’ elements (it’s what led to app-reset).

Right up front, I want to say that unless you are very careful, you probably don’t want to remove the default styling of ALL elements. Form controls, buttons, video tags etc. all have well-considered default options. Nuking them all is unlikely to be the most sensible option.

L. David Baron of Mozilla brought me to my senses with this comment:

There’s also good reason for a decent portion of our 3200 lines of UA style sheets, and you might not want to blow it all away.

You can see what he’s talking about in the Firefox UA style sheet; there’s plenty of stuff in there I hadn’t considered!

So, to be clear, there is a massive potential cost to this; it may cause more problems than it solves. I haven’t used it on a project of any size yet so I’ll update in due course.

The all property (see the MDN docs) provides an easy mechanism to set every property at once to revert (Level 4 only), initial, inherit or unset. It was the unset value, pointed out to me by @SelenIT2, that had previously escaped me.

So, in modern browsers you can do this:

* {
    all: unset;
}

/* Thanks to L. David Baron for this: */
base, basefont, datalist, head, meta, script, style, title,
noembed, param, template {
    display: none;
}

You can see an example of this, with some elements such as button and fieldset showing no default UA styling.

The first part with the universal selector unsets everything. With just this you will see things like the head displayed on screen. To prevent that you need to add back display: none for those elements. These are the elements in the Mozilla style sheet referenced above.

Browser support

At present, support for this seems to be: Chrome 37, Firefox 27, Safari 9.1 but no Internet Explorer 🙁.


I have not tested this to any degree so I’m really not sure whether it solves more problems than it creates. However, I intend to update this post in the coming weeks as I try this out in more of the projects I work with.

Again, in case you missed it at the top: this probably isn’t the best thing to do unless you are going to be vigilant in adding decent styling back in for any of the elements you will be using in your site/app.

Holier Than Thou Thu, 20 Oct 2016 15:35:08 +0000 Every few months somebody commits a cardinal sin of web development. They openly discuss or document a technology choice they have made that is contrary to the received wisdom of the ‘web community’.

You know what happens next. Other web developers, brave behind their avatars (even those of great standing), use social media to pour scorn on said developer and denounce them for their practice/proclamation.

In case you are in any doubt, let me exemplify the kind of proclamations I’m talking about:

  • “It’s OK to rely on JavaScript for your webpage”
  • “Icon fonts are fine to use sometimes”
  • “Sometimes using a div instead of a button element makes sense”
  • “Maybe SRP/OOCSS isn’t the best way to architect your CSS”
  • “Sometimes showing a graphic telling people to rotate their phone is OK”
  • “There are times when having a separate mobile and desktop website is the best thing to do”

People generally go ape at such statements, losing all objectivity and often reacting with such vocal disdain you’d think the offender had drowned puppies.

We, the ‘web community’ (yes, I’m miming wanky air quotes there) need to grow up.

Sure, we should point out the shortcomings of the choices we and others make. But the swell of vitriol that flows in growing numbers whenever someone dares go against the grain just makes us seem like lemming-like zealots.

I’m of the belief that it’s a pattern of behaviour that holds back progress. Instead, I’m going to argue we should always begin with the mindset that Nicholas Zakas describes:

Instead of assuming that people are dumb, ignorant, and making mistakes, assume they are smart, doing their best, and that you lack context.

This is starting from a point of wisdom and balance — not anger and disgust. It’s sure to result in more productive discussion sooner, instead of the usual cooling off period following such an event before the various factions start to actually listen and discuss things like adults instead of petulant children.

One of the nice things about maturing as a person is that whenever you think you have the ‘one true way’ to do something, you remember it’s probably because you haven’t yet tried your solution with enough variations of the problem.

Therefore, someone else’s technology choices, while initially nonsensical to you, may be the exact same choice you would make given the same variables. Or, once you step outside your current mindset, they may simply represent the choices you should be making in the future.

None of this should preclude you from doing due diligence as you make your choices. But be pragmatic, do your homework, make your call, and maybe even burn your idols.

Prototyping reactive interfaces with an adapted JavaScript Observer pattern Wed, 19 Oct 2016 10:23:11 +0000 Did you think this piece was going to have something to do with React, the JavaScript framework? It doesn’t. Therefore, move along you lot; there’s another bazillion posts out there for you.

By the same token, if you are a Computer Science graduate and already know what an Observer pattern is and why you would use it, you can probably save yourself any further reading.

For the two of you that are still here, this post is documenting how it’s possible to prototype a reactive interface with some vanilla, and relatively straightforward, JavaScript known as an ‘Observer’ pattern.

Thanks to Tom’s input, we will end up with an amended Observer pattern but it has some extra benefits that may suit your needs too.


I spend much of my working day prototyping new product features. I enjoy prototyping immensely as you can move fast, fail quickly and iterate until satisfaction. This is most straightforward with ‘flat’ ideas and designs. In that instance it’s mostly just HTML and CSS with a dab of JavaScript for basic interaction.

However, sometimes you want to prototype something a little more ‘alive’. Perhaps more accurately, one may wish to prototype an interface that responds in multiple places to changes in common data or input.

On a larger scale, this reactive functionality is what frameworks like Ember, Angular or the framework I won’t mention do for breakfast but I don’t want the mental or technical burden of a framework for this simple task.

For the purpose of this post, consider prototyping a simple stock buying widget. The stock price will change randomly and, based upon the current stock price and the minimum buy price the user inputs, a ‘Buy Stock’ button will light up or not as the case may be. This is as simple a demonstration as I could imagine, but you’ll hopefully appreciate the principle. What this widget does is less important than the way it does it.

What can we easily write with JavaScript that can solve this need? Ordinarily, when prototyping something, I write very ‘procedural’ JavaScript; user presses a button, I write an attribute into the body, insert some text somewhere etc. Each function follows another and the ‘state’ of the thing is communicated and checked by interrogating the DOM. For example:

if (document.body.getAttribute("data-widget-enabled") === "true") {
    // the widget is enabled
}
This is fine to a point but gets messy the more complex the functionality you are prototyping. I found myself constantly following trails in the debugger tools, trying to ascertain which function had set which thing to which value.

In short, things were starting to smell in my code and I felt I needed to find a better tool for the job at hand.

The Observer pattern

My search for the right approach for the job at hand led me to the ‘Observer’ pattern. My first stop on the Google links was a section on the Observer pattern from Addy Osmani’s ‘Learning JavaScript Design Patterns’. This certainly sounded like everything I wanted:

The Observer is a design pattern where an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state.

However, in practice, I stumbled when trying to wrap my head around ‘concrete’ Observers and basically get anything working. I left there humbled and confused so my search continued.

The first post I came across that explained something that made sense to me was one by Jarrett, and I managed to get up and running pretty quickly.

That explanation is the basis of how the first example of our stock widget works:

See the Pen ZpqvRW by Ben Frain (@benfrain) on CodePen.

Let’s look under the bonnet. The principle explained in Jarrett’s post is making a function to house your data states:

function Widget() {
    this.validInput = false;
    this.maxValue = 10;
    this.marketOpen = false;
    this.livePrice = 10;
    this.observers = [];
    this.buttonText = "Stock price too high";
}

We need some observers to watch the data states for change. The following function facilitates adding observers into the list of observers that will be notified of changes:

Widget.prototype.addObserver = function (observer) {
    this.observers.push(observer);
};

We will then need a ‘notify’ function that loops through each of the observers that we add and notifies them of data changes:

Widget.prototype.notify = function (data) {
    this.observers.forEach(function (observer) {, data);
    });
};

You can create an instance of our simple stock widget like this:

var sw = new Widget();

Remember, you can see this implementation in the Pen above.

To make this work we have a standard function that loops and fires another function to adjust the stock price:

(function loop() {
    var rand = Math.round(Math.random() * (1000 - 500)) + 3000;
    setTimeout(function () {
        randomPrice();
        loop();
    }, rand);
})();

The random price function adjusts the price up or down by 5 and then fires the notify function of our widget (sw.notify):

function randomPrice() {
    var possibleResponses = [1, 2];
    var rand = possibleResponses[Math.floor(Math.random() * possibleResponses.length)];
    if (rand === 1) {
        sw.livePrice = sw.livePrice + 5;
    } else {
        sw.livePrice = Math.max(sw.livePrice - 5, 0);
    }
    sw.notify({
        livePrice: sw.livePrice
    });
    swPrice.textContent = sw.livePrice;
}

The functions that amend data don’t need to be added as a prototype; any function can amend the data so long as it invokes the notify function of the changes. So, consider this snippet of code. Here, when the user enters a different value we want to update the data and notify any observers:

swInput.addEventListener("input", function () {
    sw.maxValue = parseFloat(swInput.value);
    sw.notify({
        maxValue: sw.maxValue
    });
}, false);

An observer looks like this. In this instance, it changes the state of the button depending upon whether the live price is above or below the ‘Max Buy Price’:

sw.addObserver(function observeButton() {
    if (sw.livePrice > sw.maxValue) {
        swBuy.setAttribute("aria-disabled", "true");
        swBuy.textContent = "Stock Price too high";
    } else {
        swBuy.setAttribute("aria-disabled", "false");
        swBuy.textContent = "Buy Now";
    }
});

This pattern works quite well but with some limiting caveats:

  • Every time something makes a change to the data and you invoke the notify function, every observer updates, regardless of whether the data change is of interest to it or not. For example, say we have the following possibilities:
function Widget() {
    this.validInput = false;
    this.maxValue = 10;
    this.marketOpen = false;
    this.livePrice = 10;
    this.observers = [];
    this.buttonText = "Stock price too high";
}

And if we have an observer looking after a certain area of the interface that only cares about this.livePrice updating, it will still run on every change to the data, regardless. If your observer is doing any DOM work (for example, setting classes or attributes, or updating textContent or innerHTML) it will repeat that work every time, whether or not anything it cares about has actually changed.

The second caveat to this pattern is that you must be careful that you don’t create an observer that amends data based upon a change to the same data. For example, this would create an infinite loop:

sw.addObserver(function () {
    if (sw.livePrice > sw.maxValue) {
        sw.livePrice = sw.livePrice + 5;
        sw.notify({ livePrice: sw.livePrice });
    }
});

An improved Observer pattern for prototyping

Tom hears all my JavaScript woes (poor man). I was relating my unease at every observer running its logic regardless of its interest in the piece of data that had changed. He subsequently came up with the following changes. First, the ‘data’ structure stays the same. The observers however declare the properties they are interested in like this:

sw.addObserver({
    props: ["livePrice", "maxValue"],
    callback: function observeAndSetButton() {
        if (sw.livePrice > sw.maxValue) {
            swBuy.setAttribute("aria-disabled", "true");
            swBuy.textContent = "Stock Price too high";
        } else {
            swBuy.setAttribute("aria-disabled", "false");
            swBuy.textContent = "Buy Now";
        }
    }
});

Let’s talk through how this refined notify function works.

We pass an object instead of a function. The object contains an array of the properties we are interested in, and a callback function that we want to execute should any of those properties change.

The biggest area of change is the notify function. This now checks for invalid property assignment (so I can’t write sw.notify({livePwice: sw.livePrice}) and end up with a livePwice property alongside a livePrice one) and then only updates the data if it is different from the value it already has.

Next we filter the observer array into a new array that contains only observers that have interests that align with the properties in the data that have changed.

Then we run the callback function of only those observers that have changes.

Additionally, observers can use the * symbol as the property they are interested in. That way, they run on every change to the data. For example:

sw.addObserver({
    props: ["*"],
    callback: function observeEverything() {
        // stuff
    }
});

The entire notify function now looks like this:

Widget.prototype.notify = function (changes) {
    // Loop through every property in changes and set the data to that new value
    var prop;
    for (prop in changes) {
        // First catch any incorrect assignments of the data
        if (typeof this[prop] === "undefined") {
            console.log("there is no property of name " + prop);
            continue;
        }
        // We want to skip the change if the value is the same as what we already have,
        // otherwise we update the main object with the new one
        if (this[prop] === changes[prop]) {
            continue;
        } else {
            this[prop] = changes[prop];
        }
    }

    // Loop through every observer and check if it matches any of the props in changes
    // Do this by filtering the existing array
    var matchedObservers = this.observers.filter(hasSomeOfTheChangedProps);

    // filter invokes this and returns observers that match into the matchedObservers array
    function hasSomeOfTheChangedProps(item) {
        // If the props contains a wildcard, the observer is interested in everything
        if (item.props.includes("*")) {
            return true;
        }
        // Otherwise check if any changed prop is included in the props list
        for (var prop2 in changes) {
            // To ensure we don't quit the entire loop early, we return true as soon as
            // prop2 is in item.props. Otherwise we keep looping to check each prop and
            // only return false once the entire loop has run
            if (item.props.includes(prop2)) {
                return true;
            }
        }
        return false;
    }

    // Now for any observers that care about data that has just been changed we inform them of the changes
    matchedObservers.forEach(function (matchingObserver) {;
    });
};

Here’s a Pen with that new code:

See the Pen rrQNBJ by Ben Frain (@benfrain) on CodePen.

The curious can check the console and notice that although I’ve added another looping function to toggle a different property on a different time delay, the observer only runs if the properties it is interested in (livePrice and maxValue) update.

This code could perhaps be improved by preventing an Observer updating a value it is interested in (to remove the possibility of an infinite loop) but that code doesn’t exist in the example.


Granted, we are running a very limited prototype here, so this may seem like unneeded complexity, but this amended Observer pattern has proven a worthwhile addition for the prototypes I build that have the potential to grow in functionality and scope.

As ever, I welcome any improvements to this technique in the comments below or I’m on Twitter @benfrain.

The iOS Safari menu bar is hostile to web apps: discuss Thu, 22 Sep 2016 22:06:20 +0000 I’m a big fan of Safari in general. My loathing of Safari on iOS is largely restricted to the menu bar. For clarity, I’m talking about the UI at the bottom with the forward/backward icons in:


That bar isn’t hostile in principle; rather it’s the actions that invoke/dismiss its presence and its hijacking of the bottom 44px of the viewport that make it a constant frustration and problem for web pages.

iOS Safari whack-a-link

Users can’t fully enjoy ‘app like’ layout patterns on iOS. The kind of layouts popularised by iOS itself:


Fancy a game of whack-a-mole? Try making a fixed menu bar that sits within the bottom 44px of your web page/app. Scroll down the page just a little so the menu bar recedes and then try to tap the links/buttons in your fixed menu bar. How d’ya like them Apples (pun intended)? Every time Safari registers a touch in this area it pops up the Safari menu bar and you have to tap what you tapped again. Every. Single. Time.

It’s a bit weird to me that these kind of layout patterns, that Apple champions natively, aren’t a problem with Windows Mobile & Android on the mobile web, just Safari on iOS.

It doesn’t need to be an ‘app like’ fixed bar. Safari on iOS considers the bottom 44px of the viewport sacrosanct in general. Plain old links that are unfortunate enough to be scrolled into that area have to be tapped twice too. That area of Safari is like some sort of ‘touch deity’; strictly off limits to all but those who understand what’s going on under the hood.

There’s no way around this frustration. Back in iOS 7 we had the minimal-ui viewport property. If you remember, this gave us visual/practical equivalence with ‘Add to Home Screen’ (providing of course you have the requisite <meta name="apple-mobile-web-app-capable" content="yes"> in there).

That is to say, on iOS7 with minimal-ui present, the page loaded with a minimal amount of browser chrome by default. Users got the menu bar back by clicking the header chrome. Apple subsequently removed minimal-ui as a thing. I can appreciate that move; having no obvious back button on some pages and not others could be bad for users.

However, there are more problems with the current iOS Safari menu bar beside usability. From a technical perspective, here are a few:

  • the flexible viewport height means that window.innerHeight provides different results depending upon whether the page has just loaded or how far you have scrolled. Makes sense because the inner height is changing but, err, dunno — weird
  • there is no API/meta tag supported by Safari iOS that allows the page to be loaded with the menu bar hidden by default
  • there is no way to dismiss the bar from the UI other than by scrolling
  • there is no way to let clicks in that bottom 44px area actually click on first touch
  • the computed length of 100vh doesn’t get updated when the viewport size changes. That’s plain weird

I made a little test page to demonstrate some of this. Load it up in Safari and scroll to see the difference, and the difference again when you save to home screen and then run it from there.

In addition, take a look at another demo my colleague, Tom Millard, made. This allows you to disable touch move on the body (as per the previous post) but then move your finger in that bottom 44px area again and you’ll see that it all falls apart.

Wish list for Safari on iOS

  • Stop hijacking that bottom 44px area. Let the menu bar appear when a user scrolls up or taps the top chrome
  • Make 100vh always actually equal to 100vh, the result being it would be a dynamic value as the Safari chrome expands/contracts (the only way around this currently is to make something absolute with top, bottom, left, and right set to zero but you still get a horrid redraw as the chrome shifts in and out). There are perhaps some side-effects of this I’ve not considered that make this undesirable so I’m all ears on that? But if innerHeight is being updated I think it makes sense that vh works too
  • bonus points — how about allowing us to size something across the whole viewport without always scrolling the content below it (more about that in the last post)?


Am I wrong about this? Has Safari actually got it right?

If it’s a problem for you/your organisation too I need your help. I’ve opened a dialogue about this with Apple employees but they need metrics, further use cases, evidence of support tickets you have received off the back of this etc. before they will consider changing the current behaviour.

I implore you, if you have any strong feeling, data, anecdotal evidence on this pattern being a problem, please comment below.
