Feb 16, 2017

Web development technologies to adopt in 2017

I started 2016 feeling quite overwhelmed by the sheer number of new technologies that were being introduced. This year I feel like many of those technologies have matured, so I have collated a list of the ones that I think deserve your attention. My focus for the last couple of years has been on performance, so I’ve made an effort to ensure that all of the technologies mentioned are either “performance-friendly” or are directly related to performance.

Preact — preactjs.com

A fast 3kB alternative to React with the same ES6 API. For projects that use React’s other APIs, preact-compat exists as a compatibility layer allowing Preact to be a complete drop-in replacement for React.

As well as being smaller than React, Preact is also much faster. I measured the impact of migrating to Preact in three projects and saw a 3-4x reduction in JavaScript execution times in all of them.
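If your build already uses Webpack (covered next), the usual route to a drop-in swap is aliasing the React packages to preact-compat. A minimal sketch:

webpack.config.js

module.exports = {
  // ...the rest of your config
  resolve: {
    alias: {
      // Existing `import React from 'react'` statements resolve to Preact
      'react': 'preact-compat',
      'react-dom': 'preact-compat'
    }
  }
};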

Webpack — webpack.js.org

Bundle your scripts, images, styles, assets… I was initially put off by Webpack because I thought it did too much. With the help of Pete Hunt’s webpack-howto I realised that Webpack can be as simple or as complex as you want it to be.

These days I recommend that people use Webpack even for trivial projects because it’s easy to set up, the defaults are good, and it opens up a huge number of possibilities for optimising your application.
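As an illustration of how small a starting point can be (the entry and output paths here are just assumptions for the example):

webpack.config.js

var path = require('path');

module.exports = {
  // Bundle everything reachable from this file...
  entry: './src/index.js',
  // ...into a single script in ./dist
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  }
};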

offline-plugin — github.com/NekR/offline-plugin

A highly-configurable Webpack plugin that provides an offline experience for your application using ServiceWorker, with AppCache as a fallback.

If there’s one thing on this list that you consider adopting, it should be this. The number of people around the world for whom a mobile phone is the sole means of accessing the Internet is increasing. We can improve their experience on the web immeasurably by utilising technologies like ServiceWorker & AppCache, and this plugin is a ridiculously easy way to do that.
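As a rough sketch of how little is involved, assuming you are already on Webpack (the defaults are sensible, so no options are passed here):

webpack.config.js

var OfflinePlugin = require('offline-plugin');

module.exports = {
  // ...entry, output, loaders, etc.
  plugins: [
    // Emits a ServiceWorker (with an AppCache fallback) that caches the
    // assets produced by this build
    new OfflinePlugin()
  ]
};

The plugin also needs its runtime installed from your entry point (require('offline-plugin/runtime').install()) so that the worker actually gets registered.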

Tachyons — tachyons.io

Functional CSS for humans. Tachyons is a mobile-first responsive CSS framework with a focus on accessibility. Its single-purpose class structure is scalable and practically removes the need to write any custom CSS.

Tachyons has a relatively small footprint, and is extremely modular so you can easily include only what you need. It’s a good, powerful alternative to heavier frameworks like Bootstrap or Foundation.

Lighthouse — github.com/GoogleChrome/lighthouse

Built by some of the industry’s foremost web performance evangelists, Lighthouse is an automated tool for analysing your web application’s performance and providing insights on developer best practices.

As well as using it as a once-in-a-while test, I also recommend running Lighthouse on a regular basis — perhaps as part of a daily build. Its insights are broad enough to cover a wide range of issues from accessibility to performance, but specific enough that any issues are easily actionable.
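The CLI works well both for one-off checks and as a scheduled build step (the URL and output path below are placeholders):

$ npm install -g lighthouse
$ lighthouse https://www.example.com --output json --output-path ./lighthouse-report.json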

(Read more)

Feb 13, 2017

How we assemble web pages at BBC News

This post is about the Web Application Framework in use by some teams at the BBC. It is not strictly a framework, in that it specifies the contracts between components rather than providing concrete implementations of them. For this reason, I prefer to think of it as the Web Application Specification.

At the beginning of 2015, a group of developers and technical architects from around the BBC got together with the goal of designing a system for sharing web page components between teams. This came from an acceptance that most of the BBC’s public-facing web products have a similar look & feel, and a desire to improve efficiency through sharing rather than building similar things over and over again.

In some organisations, technologies are standardised, which makes sharing components between teams a trivial task. At the BBC, teams are free to use whichever technologies they like. This means that any component-sharing solution we come up with can’t be tied to a template language, data structure, build system, or even a CSS preprocessor. This meant that building a component library like Lonely Planet’s Rizzo or CloudFlare’s CFUI was out of the picture. We also felt that something like the FT’s Origami would require too much effort from the users of the components.

We ended up going back to basics and designed a solution that is based on two premises:

  1. All of our web pages are built out of HTML, CSS, and JavaScript.
  2. Most pages have three main parts: the head, which contains metadata and styling; the body, which contains content; and the part at the end of the body, which contains JavaScript.
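To make that concrete, here is a purely hypothetical illustration (not the actual WAF contract) of a component built on those premises: it only needs to hand the page three fragments, regardless of how they were produced.

// Hypothetical component. The page doesn't care which template language or
// build system produced these strings; it only consumes the three fragments.
function weatherComponent(data) {
  return {
    head: '<link rel="stylesheet" href="/components/weather.css">',
    body: '<div class="weather">' + data.temperature + ' in ' + data.city + '</div>',
    scripts: '<script src="/components/weather.js"></script>'
  };
}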

What we came up with is called the WAF, or Web Application Framework. It’s a surprisingly simple framework, and is built on top of three core principles:

(Read more)

Dec 26, 2016

What it's like to work as a developer at BBC News

The BBC is a pretty large organisation. Today it employs around 20,000 people (actually around 35,000 when you include part-time and fixed-term contract employees) across a huge number of divisions. The BBC Careers website typically has over 100 vacancies posted on any given day. Before I joined the BBC, I found the sheer scale of it a bit intimidating. Usually I can get an idea of what it’s like to work for a company by reading their job advertisements and their engineering blogs, but with the BBC I was almost completely clueless. In this post I hope to shed some light on what it’s like to work as a developer or tester for BBC News.

Just a small disclaimer first: from an engineering perspective, the BBC is not like most other companies — it’s more like dozens of smaller companies, each with their own engineering department, working towards a common goal. News, Sport, Programmes, iPlayer, Radio… As digital products, these are all built mostly independently of each other. I work for BBC News, so a lot of what I’ve written may not apply outside of BBC News.

(Read more)

Jul 22, 2016

Redefining the BBC News core experience

TL;DR: Over the last 4 years, the BBC News core experience has been transformed from a speedy 21KB page into a slow & bloated 685KB monster. This was in part due to a lack of performance monitoring and 4 years of feature creep, but also due to a lack of performance-oriented culture throughout the business.

I created a lightweight prototype of the BBC News core experience which demonstrates that focusing on the content first and foremost can result in an extremely fast page. I want the BBC and other websites to rethink what the core experience means, and experiment with giving users the power to define their own experience.

At the beginning of 2012, the BBC Responsive News team wrote about how they provide a “core experience” for users by default, and then progressively enhance the page if the browser cuts the mustard. At the time, this was cutting edge. They were able to build pages that worked on practically any browser without compromising the experience for users on modern browsers. To quote directly from the Responsive News blog:

The first tier of support we call the core experience. This works on everything. I’ve seen it work on a Nokia E65, a Blackberry OS4, Kindle 1, a HTC Touch 2 running Win Mobile 6.5, a Samsung U900 Soul, a Commodore Vic20, my nan’s slipper and a toaster just sellotaped to a TV. Likewise, GoogleBot, text-browsers like Lynx, folks that disable JavaScript and so on are all assured a good level of service.

This technique is still in use today, and is an integral part of the front-end strategy for all modern BBC News pages. In 2012 it allowed the team to provide a fast and lightweight experience for users on low-end devices. 7 HTTP requests totalling 21KB was all it took to load the core experience of the BBC News front page. All users benefited from this fast initial page load, with modern browsers progressively enhancing the rest of the page after the content was loaded.
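For context, the feature test the Responsive News team published for “cutting the mustard” was along these lines (the script path below is a placeholder):

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
    // This browser cuts the mustard: load the enhanced experience.
    // Anything failing the test silently keeps the core experience.
    var script = document.createElement('script');
    script.src = '/js/enhanced.js';
    document.body.appendChild(script);
}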

It has been over 4 years since the BBC News core experience was first built, and a lot has changed since then. Today, the core experience consists of 91 HTTP requests totalling 685KB – over 32x heavier than the original core experience. With JavaScript disabled this can be reduced to 137KB – still over 6x heavier.

The BBC News team is aware of their website’s shortcomings. Back in May 2015 I conducted a huge performance review which I’ve spoken about extensively both internally and externally (video). A lot of work has been done over the last year, and many of the issues mentioned in those slides have already been addressed. Despite this, the elephant in the room is still the core experience.

That’s why when BBC News ran an internal hack day (where people can form teams to work on whatever they like) I took the opportunity to totally redefine the BBC News core experience.

(Read more)

Mar 28, 2016

How can we fix open source culture?

The recent kerfuffle around the NPM #unpublishgate and the Greenkeeper bot impersonation has got me thinking about the open source community and its culture.

Sometimes the open source community feels like a wonderful, cooperative, welcoming place. There have been times when maintaining an open source project has given me an enormous sense of satisfaction and well-being. On the best days, complete strangers offer valuable feedback and even actively contribute to my projects.

On the worst days I feel drained, unappreciated, and even abused. Stephan describes this more concisely than I could right at the bottom of Your “just” considered harmful:

The reactions to the npm #unpublishgate showed me once more just how far spread entitled and toxic behaviour is in our community. This has to change and being silent or accepting won’t help.

This is the part of the open source culture that we need to fix. Entitled and toxic are not words that I associate with welcoming and inclusive communities. Yet they are completely apt descriptions of behaviour which is common within the open source community.

I don’t have any solutions to offer. I’m merely venting some frustrations which have been building up for quite some time. But we need to fix this. I don’t want to see this toxic behaviour cause another friend, colleague, or community member to suffer.

How can we fix open source culture?

(Read more)

Aug 23, 2015

Accidental Keyboard Enthusiasm

Over the last 5 years I’ve managed to collect quite a few mechanical keyboards, to the point where I think I qualify as an (accidental) enthusiast.

Das Keyboard (3) Model S Ultimate

This was my first mechanical keyboard. The soft Cherry MX Brown switches make it my favourite for long periods of typing. Even so, I rarely use it any more. At the time of writing, this model is still available on the Das Keyboard website.

CODE Keyboard

I was really excited when Jeff Atwood announced the CODE keyboard. I already knew that I wanted my next keyboard to be compact and have backlit keys, so the CODE seemed to come at just the right time. Not long after buying the CODE, I purchased some Keycool rainbow keycaps to brighten things up.

The CODE has Cherry MX Clear switches, which makes for a much firmer keyboard than the Das. I find the clears preferable for short bursts of typing, but over long periods they tire my hands out.

ErgoDox

As I’ve mentioned in a previous post, I decided that I wanted a split-hand keyboard. After much searching, I settled on buying an ErgoDox kit from Massdrop. This was a really fun project and involved spending plenty of time at the London Hackspace soldering station. The ErgoDox is by far the most comfortable keyboard I’ve used – I like it so much I’ve even placed an order for the next generation ErgoDox.

My ErgoDox has Cherry MX Clear switches. Somehow the clears on the ErgoDox feel much softer than on the CODE, so I’m able to enjoy the feeling of firm keys without the fatigue I experience with the CODE.

(Read more)

Jul 27, 2015

Functional Programming Resources

This post contains a collection of resources for learning about functional programming. These resources cover a range of levels from beginner-friendly introductions right through to more advanced concepts. (Read more)

Jul 8, 2015

Some Git Things

Some notes on terminology

In case you’re not familiar with some of the terminology used below, here is a small glossary.

Object

An object in Git is either a blob (file), tree (directory), commit, or tag. All objects in Git have a hash (like 99b69df491c0bcf5262a967313fad8be0098352e) and are connected in a way that allows them to be modelled as a directed acyclic graph.

Reference

A reference in Git is a bit like a pointer, or a symlink. References are not objects themselves, and they always point to either an object or another reference. Branches, tags, and HEAD are examples of references.
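You can poke at both kinds of thing from any repository (the output here is illustrative; your hashes and branch names will differ):

$ git rev-parse HEAD              # resolve the HEAD reference to a commit hash
99b69df491c0bcf5262a967313fad8be0098352e
$ git cat-file -t 99b69df         # ask Git what type of object that hash names
commit
$ git symbolic-ref HEAD           # HEAD is itself a reference to another reference
refs/heads/master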

You can learn about all of this and much more in my Hacker’s Guide to Git.

(Read more)

Apr 26, 2015

Cabal: Installing readline on OSX

I’ve had trouble installing the readline package on a few separate OSX installations, so I figured it was worth writing the solution down.

When running cabal install for a package which depends on readline (or simply when running cabal install readline), Cabal exits with errors along the lines of

Configuring readline-1.0.3.0...
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for GNUreadline.framework... checking for readline... no
checking for tputs in -lncurses... yes
checking for readline in -lreadline... yes
checking for rl_readline_version... yes
checking for rl_begin_undo_group... no
configure: error: readline not found, so this package cannot be built

The problem is that Cabal is not aware of the location of the readline lib. My workaround is to specify the location of the lib whenever running these commands:

$ cabal install readline --extra-include-dirs=/usr/local/Cellar/readline/6.3.8/include/ \
                         --extra-lib-dirs=/usr/local/Cellar/readline/6.3.8/lib/ \
                         --configure-option=--with-readline-includes=/usr/local/Cellar/readline/6.3.8/include/readline \
                         --configure-option=--with-readline-libraries=/usr/local/Cellar/readline/6.3.8/lib/

Your paths may differ slightly if you have a different version of readline installed. You can check this with

$ ls /usr/local/Cellar/readline
6.3.8
(Read more)

Oct 31, 2014

Transitioning to a new keyboard layout

I’ve long been considering switching to a different keyboard layout. I tend to type mostly with my forefingers and middle fingers, only occasionally using my ring and pinky fingers to stretch out to the modifier keys. Despite this, I still manage to type at around 120WPM on a staggered QWERTY keyboard.

Thinking back, I probably started teaching myself to type at a reasonable speed around age 10. I’m now in my mid-twenties. My typing technique (or lack thereof) never really bothered me, but 15 years of typing with poor technique has started to take its toll. Recently I’ve started experiencing hand fatigue, and I’m beginning to see early signs of RSI. So I figure now is the perfect time to make some changes to the way I type.

(Read more)

Aug 2, 2014

JavaScript Performance: Variable Initialization

Initializing variables with a value of the appropriate type in JavaScript can have significant performance benefits. This can be shown with a simple synthetic benchmark.

notype.js

var x = null;

for (var i = 0; i < 1e8; i++) {
    x = 1 + x;
}

withtype.js

var x = 0;

for (var i = 0; i < 1e8; i++) {
    x = 1 + x;
}
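The usual explanation is that x in notype.js starts out as null and changes type on the first iteration, so the engine cannot keep the addition monomorphic. To compare the two yourself (assuming Node.js is installed):

$ time node notype.js
$ time node withtype.js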
(Read more)

May 25, 2014

A Hacker's Guide to Git

A Hacker’s Guide to Git is now available as an e-book. You can purchase it on Leanpub.

Introduction

Git is currently the most widely used version control system in the world, mostly thanks to GitHub. By that measure, I’d argue that it’s also the most misunderstood version control system in the world.

This statement probably doesn’t ring true straight away because on the surface, Git is pretty simple. It’s really easy to pick up if you’ve come from another VCS like Subversion or Mercurial. It’s even relatively easy to pick up if you’ve never used a VCS before. Everybody understands adding, committing, pushing and pulling; but this is about as far as Git’s simplicity goes. Past this point, Git is shrouded by fear, uncertainty and doubt.

(Read more)

May 9, 2014

Understanding JavaScript: Inheritance and the prototype chain

This is the first post in a series on JavaScript. In this post I’m going to explain how JavaScript’s prototype chain works, and how you can use it to achieve inheritance.

First, it’s important to understand that while JavaScript is an object-oriented language, it is prototype-based and does not implement a traditional class system. Keep in mind that when I mention a class in this post, I am simply referring to JavaScript objects and the prototype chain – more on this in a bit.

Almost everything in JavaScript is an object, which you can think of as sort of like associative arrays - objects contain named properties which can be accessed with obj.propName or obj['propName']. Each object has an internal property called prototype, which links to another object. The prototype object has a prototype object of its own, and so on – this is referred to as the prototype chain. If you follow an object’s prototype chain, you will eventually reach the core Object prototype whose prototype is null, signalling the end of the chain.

So what is the prototype chain used for? When you request a property which the object does not contain, JavaScript will look down the prototype chain until it either finds the requested property, or until it reaches the end of the chain. This behaviour is what allows us to create “classes”, and implement inheritance.
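Here’s a small sketch of that lookup behaviour (the animal and dog objects are just an example):

var animal = {
  describe: function () {
    return this.name + ' says ' + this.sound;
  }
};

// dog has no describe property of its own, so lookup falls through to its prototype
var dog = Object.create(animal);
dog.name = 'Rex';
dog.sound = 'woof';

dog.describe();                                     // "Rex says woof"
Object.getPrototypeOf(dog) === animal;              // true
Object.getPrototypeOf(animal) === Object.prototype; // true
Object.getPrototypeOf(Object.prototype);            // null, the end of the chain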

(Read more)

Mar 10, 2014

Defining readable code

Code readability is something that I often bring up during code reviews, but I frequently have trouble explaining why I find a piece of code easy or difficult to read.

When you ask programmers how to make code easier to read, many of them will mention things like coding standards, descriptive naming, and decomposition. These things actually aid in making code easier to comprehend rather than easier to read. For me, readability is at a lower level, somewhere between legibility and comprehension.


At the lowest level is legibility. This is how easily individual characters can be distinguished from each other, and can usually be boiled down to the choice of font, as well as the foreground & background colours.

At the highest level is comprehension, which is the ease with which a block of code can be fully understood. Decomposition, naming conventions and comments are just a few of the many ways to improve comprehension.

Readability sits between these two. This level is a little harder to define, but I believe it comes down to two main factors: structure and line density.

(Read more)

Sep 30, 2013

HTTP status as a service

Using Node.js you can run a simple “HTTP status as a service” server. This can be useful for quickly checking whether your application handles various status codes.

var http = require('http');

http.createServer(function (request, response) {
  var status = request.url.substr(1);

  if ( ! http.STATUS_CODES[status]) {
    status = '404';
  }

  response.writeHead(status, { 'Content-Type': 'text/plain' });
  response.end(http.STATUS_CODES[status]);
}).listen(process.env.PORT || 5000);

This will create a server on port 5000, or any port that you specify in the PORT environment variable. It will respond to /{CODE} and return the HTTP status that corresponds to {CODE}. Here are a couple of examples:

$ curl -i http://127.0.0.1:5000/500
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Date: Mon, 30 Sep 2013 14:10:10 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Internal Server Error%
$ curl -i http://127.0.0.1:5000/404
HTTP/1.1 404 Not Found
Content-Type: text/plain
Date: Mon, 30 Sep 2013 14:10:32 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Not Found%

This is a really simple example, and could easily be extended to let you specify a Location header value for 30X responses.
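Here is one possible sketch of that extension (the location query parameter is my own naming, not something the original supports):

var http = require('http');
var url = require('url');

http.createServer(function (request, response) {
  var parsed = url.parse(request.url, true);
  var status = parsed.pathname.substr(1);

  if ( ! http.STATUS_CODES[status]) {
    status = '404';
  }

  var headers = { 'Content-Type': 'text/plain' };

  // e.g. /302?location=http://example.com/ adds a Location header
  if (/^30\d$/.test(status) && parsed.query.location) {
    headers['Location'] = parsed.query.location;
  }

  response.writeHead(status, headers);
  response.end(http.STATUS_CODES[status]);
}).listen(process.env.PORT || 5000);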

(Read more)

← Older Posts