Is JavaScript more fragile?
‘JS is more fragile’ is a stance common among Progressive Enhancement advocates (and I’m certainly guilty of this myself).
Kyle Simpson (getify), Sara Soueidan, and others have just had a spirited debate on Twitter about how divisive this characterisation of JavaScript can be, prompted by an excellent blog post by Aaron Gustafson. (Both the post and the discussion are good reading, so go read them.)
Getify is right to push back on the statement, so I’ve been thinking about how I would rephrase it.
(This is also a follow-up of sorts to the blog post I wrote a couple of days ago.)
The best I could come up with is, instead of ‘JS is more fragile’, to say ‘complex apps are more fragile and JS is a powerful enabler of complexity’. Less catchy, sure, but sometimes less catchy is what you need.
The ultimate cause of JS’s perceived fragility isn’t some inherent flaw in the tool but that it’s often used to create fragile products. It’s a cognitive lever that lets you build products whose complexity (and maintenance overhead) is out of proportion with the effort involved in making them. That lever comes with a downside: substantial additional difficulties in error handling, state management, and dealing with changes in the execution environment.
There’s a distinction to be made here between complex processes and complex implementations. JavaScript’s power lies in its ability to create complex implementations with relatively simple code. A lot of that is due to the ease with which you can pull in powerful dependencies. It’s also a basic function of it being a fully fledged programming language with powerful APIs, where HTML and CSS are not. The cost is that, out of the box, this code also tends to be more fragile. Simple code is usually simple because it has poor error handling and buggy state management, and doesn’t deal very well with changes in context. But often the process remains simpler overall, even once you factor in all of those additional problems. The core reason, for example, why many developers move style handling out of CSS and into JavaScript is that tradeoff: simpler process, more complex product, and the process is the thing that costs money.
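To make that concrete, here’s a minimal sketch of the kind of ‘simple’ fetch-and-render code this tradeoff produces (the endpoint and the #items element are invented for illustration):

```js
// A deliberately 'simple' implementation: a few lines, no visible
// complexity. (Top-level await, so this runs as a module.)
const list = document.querySelector('#items');
const res = await fetch('/api/items'); // assumes the network never fails
const items = await res.json();        // assumes a 200 response with valid JSON
list.innerHTML = items
  .map((item) => `<li>${item.name}</li>`)
  .join('');
// Everything that would make this robust is missing: no loading state,
// no error or offline handling, no empty-list case, no escaping of
// item.name. The simplicity is borrowed against error handling and
// state management.
```

The five-line version looks cheap to write, and it is; the cost shows up later, in production, as the failure modes nobody wrote code for.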
Progressive enhancement, conversely, tends to be a more complex development process (juggling HTML, CSS, and JS can sometimes border on being a nightmare) that results in a simpler implementation. It’s a different tradeoff. The economic rationale for progressive enhancement is generally that simpler—more robust—products have greater reach and lower long-term maintenance costs, which leads to a larger customer base, which in turn results in a higher return on investment. So: more complex process, simpler product, and the product is the thing that makes money.
Another, usually more obliquely stated, benefit of progressive enhancement is that HTML, CSS, and JavaScript often have different failure scenarios. Failures tend to be quite granular—only network errors are all-or-nothing. Most of the common failure scenarios occur when the client doesn’t support a specific feature you’re using. Sometimes it has limited JavaScript support (often through over-enthusiastic privacy blockers, but also due to missing APIs or memory constraints). Sometimes it has limited CSS support (older browsers, as long as they aren’t IE, are often surprisingly good at JS but surprisingly bad at CSS). Sometimes it doesn’t support the HTML features you’re using (cough HTML5 forms cough).
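Because those failures are granular, you can test for them one at a time. A hedged sketch of feature detection in the JS layer (the .fade-in class and the enhancement itself are made up for this example):

```js
// Feature-test, then enhance: a missing API degrades one enhancement,
// not the whole page.
if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) entry.target.classList.add('is-visible');
    }
  });
  document.querySelectorAll('.fade-in').forEach((el) => observer.observe(el));
}
// If the API (or JS itself) is missing, the elements simply render
// without the fade, provided the CSS only hides .fade-in elements when
// a JS-added class is present on the root element. CSS can hedge the
// same way with @supports blocks.
```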
A lot of the perceived fragility of JavaScript comes from the fact that when you’ve implemented all core functionality in JS and you enter one of JS’s failure scenarios, the other parts of the stack aren’t there to pick up the slack. The strength of the web platform isn’t that any one layer of the stack is more robust than the others but that they are differently fragile and fail at different times.
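The classic illustration is an enhanced link: the HTML layer carries the core behaviour, and JS intercepts it only when it can actually do better. A sketch, with a hypothetical URL, class name, and overlay helper:

```js
// The markup carries the core behaviour:
//   <a href="/search" class="js-overlay">Search</a>
// JS takes over the click only when it succeeds; on any failure it
// hands control back to plain navigation.
document.querySelectorAll('a.js-overlay').forEach((link) => {
  link.addEventListener('click', async (event) => {
    event.preventDefault();
    try {
      const res = await fetch(link.href, { headers: { Accept: 'text/html' } });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      showOverlay(await res.text()); // showOverlay is a hypothetical helper
    } catch {
      window.location.href = link.href; // the HTML layer picks up the slack
    }
  });
});
```

If the script never loads, the listener never attaches and the link just navigates. The failure scenarios of the two layers don’t overlap, which is the whole point.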
One of the downsides of progressive enhancement is an inevitable consequence of its core tradeoff: simpler, more reliable implementations have an upper limit on their capabilities. Sometimes you really do have to build a complex product.
But many, if not most, websites only require a small fraction of the complexity that JavaScript is capable of. It seems sensible to always try for simpler implementations first, before we move on to the more complex ones.
Sometimes that simpler solution isn’t no-JS but simpler JS. For example, if you can figure out a way to avoid having to do client-side state management, you should do so—state management being a frequent source of bugs in modern web applications.
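One hedged example of ‘simpler JS’: keep state in the URL instead of in a client-side store, so the server, the back button, and bookmarks all see the same state for free (the filter parameter and render function here are invented):

```js
// State lives in the URL, not in a JS store that has to be kept in sync.
const params = new URLSearchParams(window.location.search);
const filter = params.get('filter') ?? 'all';
render(filter); // render is a hypothetical function

// Changing state means updating the URL; history, reloads, and shared
// links all stay correct without any extra code.
function setFilter(next) {
  const url = new URL(window.location.href);
  url.searchParams.set('filter', next);
  history.pushState({}, '', url);
  render(next);
}
```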
There’s no reason why a set of simple forms on a government website should be implemented in Angular when they could be implemented more reliably and with greater reach using built-in features of HTML and CSS (with some JS for Ajax and for filling in the validation gaps).
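A sketch of what that looks like in practice (the action URL and field names are invented): the browser handles the validation, and a few lines of JS layer Ajax submission on top.

```html
<!-- The form works with no JS at all: the browser validates the
     fields and a normal POST submits them. -->
<form id="apply" action="/apply" method="post">
  <label>Email <input type="email" name="email" required></label>
  <label>Reference number <input name="ref" required pattern="[0-9]{6}"></label>
  <button>Submit</button>
</form>

<script>
  // Enhancement only: submit via fetch where possible, and fall back
  // to the plain POST when anything goes wrong. Native validation runs
  // before the submit event fires, so invalid input never reaches this.
  document.querySelector('#apply').addEventListener('submit', async (event) => {
    event.preventDefault();
    const form = event.target;
    try {
      const res = await fetch(form.action, { method: 'POST', body: new FormData(form) });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      form.insertAdjacentHTML('afterend', '<p>Thanks, your application was received.</p>');
    } catch {
      form.submit(); // fall back to the built-in submission
    }
  });
</script>
```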
Progressive enhancement’s core value proposition, for me, is that HTML and CSS have features that are powerful in their own right. Using HTML, CSS, and JavaScript together makes for more reliable products than using JavaScript alone in a single-page app.
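A small example of that native power: a disclosure widget that would once have required a JS accordion library is now a built-in element (the content here is invented).

```html
<!-- A disclosure widget with zero JavaScript: open/close behaviour
     and keyboard support come from the browser. -->
<details>
  <summary>What documents do I need?</summary>
  <p>A passport or driving licence, plus proof of address.</p>
</details>
```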
This philosophy doesn’t apply to every website out there, but it sure as hell applies to a lot of them.