There were some hard choices to make immediately. The first thing we discarded was webfonts, as these were bytes we simply didn’t have to spend.
font-family: -apple-system, ".SFNSText-Regular", "San Francisco", "Roboto",
"Segoe UI", "Helvetica Neue", sans-serif;
Discarding webfonts and instead using the system font on the device had three benefits for us.
First, it meant we didn’t have to worry about a flash-of-unstyled-text (FOUT). This happens when the browser renders the text before the font is loaded, and then renders it again after loading, resulting in a brief flash of text in the wrong style. Worse, the browser may block rendering any text at all until the font loads. These effects can be exaggerated by slow connections, and so being able to eliminate them completely was a major win.
Second, leveraging the system font meant that we were working with a large glyph set, a wide range of weights, and a typeface designed to look great on that device. Sure, customers on Android (which uses Roboto as the system font) would see a slightly different layout to customers on iPhone (San Francisco, or Helvetica Neue on older devices), or even customers on Windows Phone (Segoe UI). But, how often do customers switch between devices like that? For the most part, they will have a consistent experience and won’t realise that people on other devices see something slightly different.
Best of all, we got all of this at the cost of zero bytes from our page budget. System fonts were an absolute no-brainer, and I still use them today.
Jake Archibald once described the difference between a library and a framework like this: a library is something your code calls into, a framework is something which calls into your code.
Frontend web development has been dominated by frameworks at least since React, if not before. SproutCore, Cappuccino, Ember, and Angular all used a pattern where the framework controls the execution flow, and it hooks into your code as and when it needs to. Most of these would have broken our 128KB page budget before we had written a single line of application code.
We looked at libraries like Backbone, Knockout, and jQuery, but we knew we had to make every byte count. In the days before libraries were built for tree-shaking, almost any library we bundled would have included wasted bytes, so instead we created our own minimal library, named Whizz.
Whizz implemented just the API surface we needed: DOM querying, event handling, and AJAX requests. Much of it simply smoothed out browser differences, particularly important when supporting everything from IE8 to Safari 9 to Android Browser to Opera Mini. There was no virtual DOM, no complex state management, no heavy abstractions.
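To give a sense of scale, the core of a library like this can be remarkably small. The sketch below is written from memory to illustrate the shape of the thing, not taken from the actual Whizz source; the on helper in particular is an assumption:

var WHIZZ = (function () {
    "use strict";

    return {
        // Thin wrapper over the native selector API (supported back to IE8).
        querySelector: function (selector) {
            return document.querySelector(selector);
        },

        // Normalise event binding between old IE and standards browsers.
        on: function (element, type, handler) {
            if (element.addEventListener) {
                element.addEventListener(type, handler, false);
            } else {
                element.attachEvent("on" + type, handler); // IE8 and earlier
            }
        }
    };
}());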
The design of Whizz was predicated on a simple observation: the header and footer of every page were the same, so re-fetching them when loading a new page was a waste of bytes. All we really needed to do was fetch the bits of the new page we didn’t already have.
We then handled updates with a very straightforward technique. An event listener would intercept the click, fetch the partial content via AJAX, and inject it into the page. (These were the days before the Fetch API, when we had to do everything with XMLHttpRequest; Whizz provided a thin wrapper around this.) The server responded with a compact JSON payload:
{
  "title": "Document title for the new page",
  "content": "Partial HTML for just the new page"
}
The AJAX request included a custom header, X-Whizz, which the server recognised as a Whizz request and returned just our JSON payload instead of the full page. Once injected into the page, we ran a quick hook to bind event listeners on any matching nodes in the new DOM.
function onClick(event) {
    var mainContent;

    // Stop the browser performing a full-page navigation.
    event.preventDefault();

    mainContent = WHIZZ.querySelector("main");

    // Fetch the partial payload for the new page and swap it in.
    WHIZZ.load(event.target.href, function (page) {
        document.title = page.title;
        WHIZZ.replaceContent(mainContent, page.content);
        WHIZZ.rebindEventListeners(mainContent);
    });
}
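For completeness, the load wrapper itself would have looked roughly like this. This is a reconstruction: the X-Whizz header and the JSON payload are as described above, while the header value and the absence of error handling are my assumptions:

// Hypothetical reconstruction of Whizz's thin XMLHttpRequest wrapper.
WHIZZ.load = function (url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);

    // Tells the server to return the JSON partial, not the full page.
    xhr.setRequestHeader("X-Whizz", "1");

    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(JSON.parse(xhr.responseText));
        }
    };
    xhr.send(null);
};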
This really cut down on the amount of data we were transferring, without needing heavy DOM manipulation, or fancy template engines running in the browser. Knitted together with a simple loading bar (just to give the user the feeling that stuff is moving along) it really made navigation, well, whizz!
Probably the most significant problem we faced in squeezing pages into such a tiny payload was images. Even a small raster image, such as a PNG or JPEG, consumes an enormous number of bytes compared to text. Text content (HTML, CSS, JavaScript) also gzips well, typically at least halving the size on the wire. Images, however, are already compressed formats and rarely benefit from gzip. We had already committed to using them sparingly, but reducing the absolute number of images wasn’t enough on its own.
While we started off using tools like OptiPNG to reduce our PNG images as part of the build process, during development we discovered TinyPNG (now Tinify). TinyPNG did a fantastic job of squeezing additional compression out of our PNG images, beyond what we could get with any other tool. Once we saw the results we were getting from TinyPNG, we quickly integrated it into our build process, and later made use of their API to recompress images uploaded by users.
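These days, that API integration is only a few lines with the official tinify client for Node. This is a modern sketch with a placeholder key and paths, not our original integration:

var tinify = require("tinify");
tinify.key = "YOUR_API_KEY"; // placeholder

// Recompress a user-uploaded PNG, writing the optimised copy alongside it.
tinify.fromFile("uploads/avatar.png")
    .toFile("uploads/avatar.min.png")
    .then(function () {
        console.log("avatar.png recompressed");
    });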
JPEG proved more of a challenge. These days Tinify supports JPEG images, but at the time they were PNG-only so we needed another approach. MozJPEG, a JPEG encoder tool from Mozilla, was pretty good and was a big improvement over the Adobe JPEG encoder we had been using. But we needed to push things even further.
What we came up with involved exporting JPEGs at double the scale (so if we wanted a 100×100 image, we would export it 200×200) but taking the JPEG quality all the way down to zero. This typically produced a smaller file, albeit heavily artefacted. However, when rendered at the expected 100×100, the artefacts were not as noticeable.
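In markup terms the trick was nothing exotic: the image was simply displayed at half its intrinsic dimensions, something like this (file name and alt text are placeholders):

<!-- photo.jpg is exported at 200×200 and quality zero, but rendered at
     100×100, which visually masks most of the compression artefacts. -->
<img src="photo.jpg" width="100" height="100" alt="Product photo">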
The end result used more memory in the browser, but spared us precious bytes on the wire. We recognised this as a trade-off, and I’m still not 100% sure it was the best approach. But it was sparingly used, and effective for what we needed.
The real wins came from embracing SVG. SVG has the advantage of being XML-based, so it compresses well and scales to any resolution. We could reuse the same SVG as the small and large versions of an icon, for example. Thankfully, it was also supported by Opera Mini.
That isn’t to say SVG was all plain sailing. For one thing, not all of our target browsers supported it. Notably, Android Browser on Gingerbread did not have great SVG support, so our approach here was to provide a PNG fallback alongside each SVG. Browsers which supported SVG got the crisp, scalable version, while older browsers quietly fell back to the raster image.
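One common way to wire up that kind of fallback at the time was to feature-test SVG support and swap image sources accordingly. This sketch is illustrative rather than our exact mechanism; the icon class name and file-naming convention are assumptions:

// The classic capability check from that era.
function supportsSvg() {
    return document.implementation.hasFeature(
        "http://www.w3.org/TR/SVG11/feature#BasicStructure", "1.1");
}

if (!supportsSvg()) {
    // Swap .svg sources for pre-rendered .png equivalents.
    var icons = document.querySelectorAll("img.icon");
    for (var i = 0; i < icons.length; i += 1) {
        icons[i].src = icons[i].src.replace(/\.svg$/, ".png");
    }
}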
The larger problem we had with SVG was more unexpected: it turned out that vector design tools like Adobe Illustrator and Inkscape produce really noisy, bloated SVGs. Adobe Illustrator especially embeds huge amounts of metadata into its SVG exports, along with unnecessarily precise path coordinates. This was compounded by artefacts of the way graphic designers typically work in vector tools: hidden layers, deeply nested groups, redundant transforms, and sometimes even embedded raster images. Literally, PNG or JPEG data buried inside the SVG, which you would never see unless you opened the file in a code editor.
The result was images which should have been 500 bytes coming in at 5–10KB, or larger. If we were going to pull this off, we needed to very quickly become experts at SVG optimisation.
SVGO, the SVG optimisation tool, was relatively nascent at the time, but did a grand job of stripping away much of the Adobe cruft. Unfortunately, it wasn’t good enough on its own.
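For reference, SVGO slots straight into a Gulp build via the gulp-svgmin plugin. This is a modern sketch rather than necessarily how we invoked it at the time:

var gulp = require("gulp");
var svgmin = require("gulp-svgmin"); // thin Gulp wrapper around SVGO

gulp.task("svg", function () {
    return gulp.src("src/img/**/*.svg")
        .pipe(svgmin())
        .pipe(gulp.dest("dist/img"));
});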
Many hours of experimentation took place, just fooling with the SVG code in an editor and seeing what that did to the image. We realised that we could strip out most of the metadata, editor-specific attributes, and empty groups by hand without visibly changing the image at all.
When fooling with the code wasn’t enough, we started working with the designers to merge similar paths into a single element, which often produced smaller files. We worked toward a goal of only ever having a single path for any given fill colour. This wasn’t always possible, but was often a great start at reducing the size of the SVG.
Path coordinates we would typically round to one or two decimal places, depending on what worked best visually. We found that the simpler we made an SVG, the better the chance it would render consistently across devices, and the smaller the file would get.
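As a made-up but representative example, here is the kind of path a design tool might export, and the hand-tidied equivalent (colours and coordinates are invented for illustration):

<!-- As exported by the design tool: -->
<path fill="#E91E63" d="M12.000000,2.000000 L21.999998,12.000000 L12.000000,22.000000 Z"/>

<!-- After rounding and stripping redundant precision: -->
<path fill="#E91E63" d="M12 2L22 12 12 22Z"/>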
Unfortunately, it was an especially labour-intensive process which didn’t lend itself very well to automation. These days, I would be more relaxed about just letting SVGO do its stuff, and SVGO is a more capable tool now than it was then. But I still wince when I see unoptimised SVGs dropping out of Figma and landing in a project I’m working on, and will often take the ten minutes or so needed to clean them up.
Minifying CSS and JavaScript has been standard practice for over a decade, but some developers question the utility of minification. They argue that once gzip/deflate is introduced, the wins from minification are trivial: why go to all the trouble of mangling your code into an unreadable mess, when gzip offers gains an order of magnitude larger?
We didn’t find these arguments especially persuasive at the time. For one thing, on the budget we had, even saving 3–4KB was considered a win and worth our time. But more than that, gzip/deflate support was pretty spotty on mobile browsers of the time. Opera Mobile (distinct from Opera Mini) had pretty poor gzip support, and Android Browser was reported to be inconsistent about sending the required Accept-Encoding content negotiation header. (In hindsight, perhaps that reported inconsistency was overstated, or even FUD; but even if so, we didn’t know it then.)
Introducing minification prior to compression meant that even if the client did not support gzip or deflate encoding, it still enjoyed a reduced payload thanks to the minification. We were using Gulp as our build tool, which at the time was the shiny new hotness and presented a code-driven alternative to Grunt.
Gulp’s rich library of plugins included gulp-minify-css, which reduced CSS using the clean-css library under the hood. We also had gulp-uglify to minify the JavaScript. That was effective in reducing the size of our assets, but with only 128KB to play with, we were always hammering home this mantra that every byte counts. So we took things one step further and added minification to our HTML.
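Before moving on to the HTML, here is roughly what the CSS and JavaScript side of that pipeline looked like with the Gulp 3 API of the day (paths are illustrative, and the real gulpfile had many more tasks):

var gulp = require("gulp");
var minifyCss = require("gulp-minify-css"); // clean-css under the hood
var uglify = require("gulp-uglify");

// Minify stylesheets.
gulp.task("styles", function () {
    return gulp.src("src/css/**/*.css")
        .pipe(minifyCss())
        .pipe(gulp.dest("dist/css"));
});

// Minify scripts.
gulp.task("scripts", function () {
    return gulp.src("src/js/**/*.js")
        .pipe(uglify())
        .pipe(gulp.dest("dist/js"));
});

gulp.task("default", ["styles", "scripts"]);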
I don’t know that anyone routinely does this these days, precisely for the reasons outlined above: gzip/deflate gets you bigger gains, and HTML (unlike JavaScript) doesn’t lend itself to renaming variables and the like. But there were a few techniques we were able to use to shave a few hundred bytes off the payload.
There were early wins from replacing any Windows-style newlines (\r\n) with UNIX-style ones (\n). We were also able to strip out any HTML comments, excepting IE conditional comments, which had semantic meaning to that browser. And we could safely remove whitespace from around block-level elements like <div> and <p>, where it has no effect on rendering.
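Pulled together, the whole HTML pass amounted to a handful of string transforms, roughly along these lines (a simplified sketch, not our production code):

// Simplified sketch of the kind of HTML pass described above; the real
// build was more careful, particularly around inline elements.
function minifyHtml(html) {
    return html
        // Windows-style newlines to UNIX-style.
        .replace(/\r\n/g, "\n")
        // Strip comments, but keep IE conditional comments ("<!--[if ...").
        .replace(/<!--(?!\[if)[\s\S]*?-->/g, "")
        // Collapse whitespace between tags. (Aggressive: safe around
        // block-level elements, risky around inline ones.)
        .replace(/>\s+</g, "><");
}

Individually, each of these transforms saved only a handful of bytes, but on a 128KB budget those few hundred bytes were worth having.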