I recently reworked the frontend of my boring personal page at johnamata.com
And look, I know what you're thinking. "Oh great, another kid got hold of some shiny new animation technique and decided to crap all over my browser's performance."
Now you ain't wrong. But stick with me here, because this isn't your average, run-of-the-mill, "let's make the user's device catch fire" kind of web animation. I actually spent some tiny effort to give a damn about your CPU and memory
So let me quickly write a messy blog post to explain it, although chiefly it's because writing one might direct more traffic to my site: I need to boost the site so that johnamata.com is #1 when I search for my name on Google! Even some kid's Facebook profile ranks over me
Now, 'cause I don't really have much time to write about all the other changes and git commits (which I squashed into big commits - those included tiny optimizations like scheduling animations), for the scope of this blog post I'll focus on one small commit to background.js, which improved the average perf by >50% (going by the Chrome DevTools Performance Monitor memory graph, averaged over 30-60 seconds)
JK, that tab info isn't a good estimate - use dev tools!
Caching
const tileCache = {};

function generatePenroseTile(size, color) {
  const key = `${size}-${color}`;
  if (tileCache[key]) return tileCache[key];
  // ... tile generation code ...
  tileCache[key] = dataUrl;
  return dataUrl;
}
This right here? This is the difference between "my animation runs smoothly" and "my animation makes CPUs beg for mercy." Let's break it down:
- Memory Efficiency: we're not storing every single tile. We're storing unique combinations of size and color. It's like the Marie Kondo method of tile storage - we only keep the ones that spark joy, i.e. one copy per unique size-color pair, never a duplicate
- Computation Savings: generate once, use many times. It's the "reduce, reuse, recycle" of the programming world. Your CPU will thank you for not making it redraw the same damn triangle fifty times
- String Keys: using ${size}-${color} as the key makes it simple. It's like the perfect hash function, but without all the complexity that makes you want to headbutt your keyboard
This caching mechanism isn't just about saving CPU cycles. It's about understanding the nature of the beast we're dealing with. These tiles (I was planning on making them Penrose tiles - maybe later), by their very nature, have a limited set of shapes and orientations. By caching, we're not just optimizing - we're leveraging the inherent properties of the system we're simulating
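For a sense of what the elided "tile generation code" could look like, here's a minimal, hypothetical sketch: draw the shape once on an offscreen canvas and snapshot it as a data URL, so repeat calls just hit the cache. The triangle geometry and the canvas approach are my assumptions for illustration, not the literal code from the commit:

const tileCache = {};

function generatePenroseTile(size, color) {
  const key = `${size}-${color}`;
  if (tileCache[key]) return tileCache[key];

  // Hypothetical generation step: draw a simple triangle on an
  // offscreen canvas and export it as an image data URL
  const canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = color;
  ctx.beginPath();
  ctx.moveTo(size / 2, 0);
  ctx.lineTo(size, size);
  ctx.lineTo(0, size);
  ctx.closePath();
  ctx.fill();

  const dataUrl = canvas.toDataURL('image/png');
  tileCache[key] = dataUrl; // one entry per unique size-color combination
  return dataUrl;
}

The payoff is that the expensive pixel work happens once per combination; every later call is just an object lookup.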
Creating Layers Without Angering the Render Gods
Now, let's talk about creating these layers without making the DOM have an existential crisis:
function createLayer(layerIndex) {
  const layer = document.createElement('div');
  // ... layer styling ...
  const fragment = document.createDocumentFragment();
  for (let i = 0; i < numberOfTiles; i++) {
    // ... tile creation ...
    fragment.appendChild(tile);
  }
  layer.appendChild(fragment);
  return layer;
}
Let's break it down:
- Document Fragments: it's like the staging area for your DOM elements. You're telling the browser, "Hey, I'm not done yet, don't go repainting everything"
- Batch DOM updates: by appending everything to the fragment first, then appending the fragment to the layer, you're essentially saying, "Here are all the changes at once." It's like ripping off a band-aid - painful, but quick
- Minimizing Reflows: every time you touch the DOM, the browser has to recalculate styles, layout, and repaint. This code says, "Let's do that once, not 30 times per layer." And just like that, our browser's layout engine breathed a sigh of relief
This approach isn't just about performance - it's about understanding how browsers work at a fundamental level. It's mechanical sympathy in action (more on what that means later). You're not fighting the browser's natural behavior; you're working with it
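To show how that pays off one level up, here's a hedged sketch of how the layers might get assembled and attached: everything is built off-DOM first, then the whole background lands in the document with a single append. The container id, layer count, and createLayer wiring are illustrative assumptions, not the literal background.js:

// Hypothetical wiring: build every layer off-DOM, attach the lot once
function buildBackground(numberOfLayers = 3) {
  const container = document.createElement('div');
  container.id = 'tile-background'; // assumed id, purely for illustration

  for (let i = 0; i < numberOfLayers; i++) {
    container.appendChild(createLayer(i)); // createLayer from the snippet above
  }

  // One append for the entire background: one style/layout pass
  // instead of one per layer (or worse, one per tile)
  document.body.appendChild(container);
}

buildBackground();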
CSS
const styleSheet = document.createElement('style');
styleSheet.textContent = `
  @keyframes rotateTile {
    from { transform: rotate(0deg); }
    to { transform: rotate(360deg); }
  }
`;
document.head.appendChild(styleSheet);
- Single Keyframe definition: instead of defining animations for each tile, we define it once
- Transform magic: using transform: rotate() instead of mucking about with top/left properties is like telling the browser, "Hey, you know that GPU you've got? Let's use it." It's kinda like hardware acceleration without explicitly asking for it
- Simplified Animation logic: by defining the animation once and reusing it, we're reducing the amount of unique data the browser needs to keep track of
This approach is more about understanding how modern browsers handle animations than about any clever trick. You're not just writing code; you're speaking the browser's language. You're saying, "I understand how you work, and I'm going to work with you, not against you." It's not some algorithmic optimization
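To make the "define once, reuse everywhere" point concrete, here's roughly how a tile could pick up that single keyframe rule. The duration, negative delay, and will-change hint are my assumptions about how the variation might be wired, not the site's exact values:

// Hypothetical: every tile reuses the one rotateTile keyframe rule,
// with a per-tile duration/delay so the motion doesn't look uniform
function animateTile(tile, layerIndex) {
  const duration = 20 + layerIndex * 10;   // seconds per rotation (assumed)
  const delay = -Math.random() * duration; // negative delay desyncs the tiles
  tile.style.animation = `rotateTile ${duration}s linear ${delay}s infinite`;
  tile.style.willChange = 'transform';     // nudge the browser to composite it
}

Because only transform changes, the browser can usually rotate the tile on the compositor without redoing layout or paint.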
It's Mechanical Sympathy
Look, I get it. You spent four years (or more, no judgment here - college is fun) slaving away over textbooks, learning about Big O notation, red-black trees, and the intricacies of quicksort vs. mergesort. You emerged, blinking, into the sunlight, armed with the knowledge to conquer any algorithmic challenge thrown your way. And then you started working on web apps, and suddenly, very little of that seemed to matter anymore
Welcome to the world of frontend development, where your fancy algorithms take a backseat to an entirely different kind of optimization
First, let's get one thing straight: the optimizations we've been discussing in our tile animation? They're not algorithmic optimizations in the classical computer science sense. They're what we call mechanical sympathy optimizations
"But wait," I hear you cry, "what's the difference?" Glad you asked, my hypothetical reader! (or maybe I really do have a reader for this post)
- Algorithmic optimizations: these are about improving the time or space complexity of your algorithm. It's about going from O(n^2) to O(n log n), or reducing your space usage from O(n) to O(1). It's the stuff of coding interviews and whiteboard exercises
- Mechanical Sympathy optimizations: these are about understanding how your code interacts with the underlying system - in our case, the browser and the JavaScript engine. It's about working with the system, not against it
Our optimizations? They're firmly in the mechanical sympathy camp. We're not changing the fundamental algorithms (which are pretty simple to begin with). We're changing how we interact with the browser's rendering engine, the DOM, and the JavaScript runtime
The Constant Time Conundrum: Why O(1) is King in Frontend Land
Now, here's where it gets interesting. In frontend development, we're often operating in what we might call "effective constant time" - or at least, that's what we're aiming for. What do I mean by that? Well, let's break it down:
- Render Performance: when you're animating something on screen, you have about 16.67ms to do all your work if you want to hit that silky smooth 60fps. Whether you have 10 elements or 1000, you still have the same time budget
- User Interaction: when a user clicks a button, they expect an immediate response. Whether your app is managing 100 items or 10,000, that click better register real quick
- Initial Load: users start getting antsy after about 3 seconds of load time. Whether you're loading a simple blog or a complex web app, you're racing against the same clock
See the pattern? In all these cases, we're not trying to optimize for better-than-linear time complexity. We're trying to optimize for consistent, predictable performance regardless of scale
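If you want to see that frame budget with your own eyes, a tiny requestAnimationFrame loop like this (hypothetical, not from the site) will flag any frame that blows well past the ~16.7ms mark:

// At 60fps every frame gets roughly 16.7ms, regardless of element count
const FRAME_BUDGET_MS = 1000 / 60;
let lastFrameTime = performance.now();

function onFrame(now) {
  const elapsed = now - lastFrameTime;
  lastFrameTime = now;
  if (elapsed > FRAME_BUDGET_MS * 1.5) {
    console.warn(`Janky frame: ${elapsed.toFixed(1)}ms (budget ~${FRAME_BUDGET_MS.toFixed(1)}ms)`);
  }
  // ... per-frame work goes here ...
  requestAnimationFrame(onFrame);
}

requestAnimationFrame(onFrame);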
So why doesn't traditional algorithmic optimization help much here? A few reasons:
- Scale Isn't the Problem: most web apps aren't dealing with millions of elements in the DOM or running complex computations in the browser. The difference between O(n) and O(log n) often doesn't matter when n is small
- The Bottleneck is elsewhere: the performance bottleneck in web apps is often in things like DOM manipulation, network requests, and rendering - not in your JavaScript computations
- Perceived Performance matters more: users don't care if your algorithm is O(n) or O(1) - they care if the app feels fast and responsive
So what really matters in frontend optimization? IMO, here are the true heroes:
- Batch DOM updates: like we did with our document fragments in the animation. This is about reducing the number of expensive DOM operations
- Efficient Rendering: using things like transform for animations, as we did, to leverage GPU acceleration
- Resource management: techniques like lazy loading, code splitting, and efficient asset loading
- Memory management: avoiding memory leaks, using object pools, and being smart about garbage collection
- Event Delegation and Throttling: managing user interactions efficiently
These are all about working with the browser, understanding its quirks and optimizing for its strengths. They're about constant factors, not algorithmic complexity
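As a quick illustration of that last bullet (and with the caveat that the container id, tile class, and 100ms window are made-up examples, not from the site), event delegation plus throttling can be as small as this:

// Delegation: one listener on the container instead of one per tile
const background = document.getElementById('tile-background'); // assumed id
if (background) {
  background.addEventListener('click', (event) => {
    const tile = event.target.closest('.tile'); // assumed tile class
    if (tile) tile.classList.toggle('highlighted');
  });
}

// Throttling: run a handler at most once per waitMs
function throttle(fn, waitMs) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn(...args);
    }
  };
}

window.addEventListener('scroll', throttle(() => {
  // e.g. update a parallax offset here, at most ~10 times a second
}, 100));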
Now, don't get me wrong. Algorithmic knowledge is still valuable. There will always be cases where you need to optimize a complex computation or manage large datasets efficiently. But in the day-to-day trenches of frontend development, mechanical sympathy often trumps algorithmic wizardry
Understanding how the browser works, how JavaScript engines optimize code, how the event loop functions - these are the things that will often give you more bang for your buck in web performance
It's not about writing clever algorithms. It's about writing code that plays nice with the platform it's running on. It's about respecting the constraints of the web platform and working within them, not fighting against them
Takeaway
So, what's the lesson here? Mechanical Sympathy. I'd been meaning to write a post about it, but I kept procrastinating until I worked on this tiny rework of my website.
My main takeaway is that even in frontend work, it's important to think about the platform. Learn how browsers work. Understand the DOM, the render pipeline, the JavaScript runtime. These are your real algorithms, your real data structures
Your CS degree isn't useless - far from it. The problem-solving skills, the ability to think abstractly about code, these are invaluable. But don't be surprised or disappointed when you find yourself spending more time thinking about browser quirks than balanced trees
In the end, the best optimization is the one that makes your app feel faster to the user. And in the world of web apps, that often means focusing on those constant time factors, on smoothing out the rough edges of browser performance, rather than shaving a few microseconds off your sorting algorithm
So the next time you're optimizing a web app, remember: your Big O might not be as important as your Big OMG-this-feels-faster-to-the-user. And that's okay. That's part of web development. Peace out