Web dev at the end of the world, from Hveragerði, Iceland

New Web Development. Or, why Copilots and chatbots are particularly bad for modern web dev

There’s blood in the water. Angry developers, users, and regulatory bodies are circling React and Single-Page-App web development, snapping big chunks out of their sides. The smell of blood just brings more and more critics.

It’s finally safe to point out the flaws in the status quo without risking your job, both because more and more people are voicing their concerns, but also because so many have already been fired. What’re they going to do, fire you again?

With nothing to lose, we might as well point out what’s broken.

At the same time, the exact people who cause the shit storm of broken websites and plodding web apps – management – are finally receptive because they now have to get things done with fewer people and at a lower price.

Shit actually needs to work now, because tech can’t coast on “AI” funding forever. They know this. We know this.

Web development is on the verge of a paradigm shift.

Paradigm is a term that’s easy to dismiss and wave off with a mocking hand. It’s been overused – turned into a word-salad cliché by executives and consultants – but the fundamental idea has a core truth that helps us understand how fields and professional practices change.

A paradigm is a mental model or worldview that’s defined through exemplary examples. That’s how the term was originally defined: Thomas S. Kuhn, in his book The Structure of Scientific Revolutions, had to choose between two words for his concept – exemplar and paradigm, both roughly meaning the same thing. He chose paradigm.

Gmail was the exemplar that defined the Ajax paradigm of thinking about web development.

Facebook was the exemplar that defined the React and Component paradigm of web app development.

A paradigm is a way of thinking about practical problems in the field using an exemplar as a reference. As the paradigm matures, it gains more exemplars (Twitter, web-based MS Word, etc.) and those exemplars converge on a set of “best practices”.

The best practices are derived from example, not the other way around. Facebook didn’t converge on their best practices until after they’d delivered a mature app using React.

The important consequence of switching to a new paradigm – mental model – is that many formerly intractable problems suddenly become solvable. A complex social media app made by a literal army of developers would have been impossible without the component model. Facebook would not have managed to deliver a workable app with the earlier Ajax paradigm and a traditional Model-View-Controller architecture.

A paradigm offers a new way of looking at problems, which enables new solutions that would have been impossible to predict or conceive of in the older worldview.

A paradigm, in Kuhn’s sense, is a mental model or a worldview that helps explain why and how a particular thing works.

Newtonian physics was a paradigm. It explained the mechanics of motion, gravity, and mass in ways that were predictive and practical. You could use the model to guide further work in both physics and engineering. But as science progressed, we discovered more and more exceptions and outliers that the model couldn’t explain, which made it useless for work in those areas. As a mental model, it went from being an aid to a hindrance.

Einstein’s theory of relativity explained those outliers, while still accounting for all of the prior successes of Newtonian mechanics, and enabled future work that would have been impossible using the Newtonian way of thinking.

When the older paradigm ceases to be useful and begins to be replaced by a new paradigm that is more useful, that’s a paradigm shift.

As I wrote above, paradigms aren’t defined by theory but through example. If Newtonian physics had been confined to just Newton’s writings, it wouldn’t have become a paradigm. What defines a paradigm is the work done in the field: that it was used as a foundation for future theories, for academic practice, and to guide engineering and other work is what made it a paradigm.

This is also why paradigm shifts are messy. They aren’t all-or-nothing because the older paradigm continues to work for many of its intended use cases. The transition takes time and may never complete fully. We still apply Newtonian mechanics to this day. We’ll still be using React web apps years from now.

Exactly how much time a shift takes depends on the value of the new kind of work that the incoming paradigm enables and the rate of churn in the industry or field. An older paradigm in a mature field may never be fully replaced until all of its adherents age out or die.

The paradigm shift that web development is entering hinges on the fact that while React was a key enabler of the Single-Page-App and Component era of the web, in practice it tends to result in extremely poor products. Built-in browser APIs are now much more capable than they were when React was first invented.

  • Websites and web apps made in React tend to be slower than those made in other frameworks and much slower than those made using just the built-in DOM APIs necessary to fulfil the product’s requirements. React was built for an era when those capabilities didn’t exist natively in the browser, so its inherent performance issues didn’t matter. Slow was better than not at all, but that isn’t a trade-off we have to make anymore.
  • Many popular React approaches tend to be inaccessible by default. Accessibility is increasingly both a hard legal requirement and a positive business investment. (Accessibility features usually benefit a plurality of a service’s customers, just in varying degrees. The benefit isn’t exclusive to a single demographic. Accessibility work also tends to be a source of innovation as it drives research into new modes of interactivity.)
  • React apps tend to be disproportionately complex. Or, to be more specific, React apps have a baseline complexity that exists due to the unavoidably thick abstraction layer over the base browser platform. This ratchets up the implementation and design complexity of many small- to mid-sized apps by a notch or several.
  • The various popular approaches to state management for React usually only let you choose which flavour of complicated spaghetti code you end up with, not avoid it altogether. (For a sense of how far the bare platform gets you here, see the sketch after this list.)
  • They tend to have poor support for low- to mid-range devices and connections.
  • Framework churn – the constant release of partially-incompatible versions – makes maintenance of React-based projects much more expensive than those built on standardised browser APIs, which in practice never change once they get widely supported. This cost can be enormous in the long term.
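To make the state-management point concrete, here’s a minimal sketch of framework-free state handling built on nothing but standard DOM APIs. The createStore function and the element IDs are illustrative inventions of mine, not part of any library:

```html
<button id="increment">+1</button>
<output id="count">0</output>

<script type="module">
  // A tiny observable store built on the browser’s own EventTarget.
  function createStore(initialState) {
    const events = new EventTarget();
    let state = initialState;
    return {
      get: () => state,
      set(patch) {
        state = { ...state, ...patch };
        events.dispatchEvent(new CustomEvent("change", { detail: state }));
      },
      subscribe(listener) {
        events.addEventListener("change", (event) => listener(event.detail));
      },
    };
  }

  const store = createStore({ count: 0 });

  // Re-render only the affected fragment whenever the state changes.
  store.subscribe((state) => {
    document.querySelector("#count").textContent = String(state.count);
  });

  document.querySelector("#increment").addEventListener("click", () => {
    store.set({ count: store.get().count + 1 });
  });
</script>
```

Whether this particular sketch scales to a large app is beside the point. What it shows is how much of the surface area we reflexively reach for a framework to cover is now handled by the platform itself.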

React has been great for commodifying developers. By forcing all web development into the same shape and size, you make recruitment easier and can more easily force the structure of your development teams to conform to your organisation’s whims. This was great during the period when investors considered developer headcounts and escalating team sizes to be a positive signal for their investment, but it’s no longer useful now that the same crowd arbitrarily considers those same measures to be a negative signal.

Changing attitudes among investors, toxic influences as they are, took the “tailwind”, so to speak, out of React’s sails, making it more acutely noticeable that React is generally a poor fit for modern web projects.

People are noticing. We’re starting to see more and more reports of teams and organisations who are switching away from React and seeing both performance and productivity gains as a result.

For example:

“We were completely surprised by the speed gain,” Moulis said. “Our application engine is designed to produce complex ERP-type applications, which involve heavy data consumption to present in real-time. On a page we consider complex, with over 800 DOM elements, some of which use different subscription systems via our event system at initialization to update when necessary, the overall load time dropped from 4-5 seconds to 400ms.”

As well as speed gains, the user interactions markedly improved, said Moulis.

Pivoting From React to Native DOM APIs: A Real World Example

But there’s a problem. After years of industry disinterest in training for fundamentals, such as CSS, HTML, or built-in platform APIs, those who are switching are finding that neither their organisation nor the job market seems to know how to train or find people with these skill sets.

From the above report:

He added that finding developers who know vanilla JavaScript and not just the frameworks was an “unexpected difficulty.”

This may be unexpected to those used to the meat grinder of the developer pipeline delivering consistently: just crank the handle and a fresh batch of identikit React coders arrives. Those developers, in turn, have never been given the opportunity to broaden their skill sets.

To those of us outside the React community it’s been a slow-motion train wreck unfolding over several years. We’ve been watching developers reach for bloated React components that have long since been made obsolete by single lines of CSS code or DOM API calls, not because they had a concrete practical reason to do so, but because they simply weren’t aware of just how powerful the web platform has become. It’s been obvious for a long while that most web developer training isn’t for making you a web developer with a broad foundation. It’s for turning you into a homogeneous React unit that management can freely move around on their org charts.
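To pick two small, hedged illustrations (the content is made up, but the features are standard and widely supported):

```html
<!-- A disclosure widget: no Accordion component, no JavaScript. -->
<details>
  <summary>Shipping details</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- A header that stays pinned while the page scrolls: one line of
     CSS instead of a scroll-listener component. -->
<style>
  header {
    position: sticky;
    top: 0;
  }
</style>
```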

Now those people risk being left behind – effectively betrayed by the training and recruitment industry that had “standardised” on React.

As the web developer (and extremely astute observer of many things) Marco Rogers said on Mastodon the other day:

This has been brewing in my head for a long time. The frontend ecosystem is kind of broken right now. And it’s frustrating to me for a few different reasons. New developers are having an extremely hard time learning enough skills to be gainfully employed. They are drowning in this complex garbage and feeling really disheartened. As a result, companies are finding it more difficult to do basic hiring. The bar is so high just to get a regular dev job. And everybody loses.

What’s even worse is that I believe a lot of this energy is wasted. People that are learning the current tech ecosystem are absolutely not learning web fundamentals. They are too abstracted away. And when the stack changes again, these folks are going to be at a serious disadvantage when they have to adapt away from what they learned. It’s a deep disservice to people’s professional careers, and it’s going to cause a lot of heartache later.

The organisations themselves are under increasing pressure to lower costs. Investors want less money spent on software development, primarily through lower pay and fewer developers, but the overall desire to lower costs underlies both demands. That’s one of the drivers of the paradigm shift: projects built on DOM APIs are cheaper to maintain because once an API is a standard, browsers are extremely hesitant to change them. Frameworks, conversely, change every season in ways that break your code again and again.

Again from Marco Rogers’ thread I quoted above:

I always have to start with the cynical take. It’s just how I am. But I do want to talk about what I think should be happening instead.

Companies that want to reduce the cost of their frontend tech becoming obsoleted so often should be looking to get back to fundamentals. Your teams should be working closer to the web platform with a lot less complex abstractions. We need to relearn what the web is capable of and go back to that.

Organisations are caught in a bind. They are being pressured to lower costs, but taking advantage of one of the more concrete and empirically validated cost-cutting measures (getting back to fundamentals and using simpler abstractions) requires either retraining developers or recruiting developers who are disproportionately senior: people who originally came into the field before React completely took over.

And senior people are both in shorter supply and more expensive when you can find them.

(There’s also the issue of age prejudice in the tech industry, but let’s leave that topic for a later day.)

Developers have rightly focused on acquiring skills that pay the bills: React.

Organisations have, rightly or wrongly, focused on recruiting developers whose skillsets made them more easily slotted in wherever and whenever the org chart du jour required it.

Those who train or teach web development have followed suit. Courses, ebooks, and videos that focus on core built-in APIs, CSS, and HTML are the exception, not the norm.

Effectively, everybody wants to build the training infrastructure the new paradigm needs…

But nobody wants to pay for it.

Enter chatbots and copilots.

I, and other people who have been writing about web development, keep hearing two kinds of reports on how people are using LLMs.

  1. Developers use chatbots to “teach themselves” vanilla JS, CSS, or HTML.
  2. And they use them to generate vanilla JS code, either by describing to a chatbot a problem they’d normally solve using React, or by prompting a copilot to autocomplete a solution.

Organisations don’t mind this, some even outright require it, because they don’t want to invest in retraining their developers. And if an experienced React developer can use a language model to auto-generate enough “vanilla” JS and CSS code to replace a few junior developers, that represents substantial cost savings to the organisation.

Even if you believe in the usefulness of language models for coding – for argument’s sake – this particular use of an LLM is an extremely bad idea as it leans on these models where they’re weakest: genuine novelty.

Why LLMs are an extremely bad fit for new standards-oriented web development

I’m not a fan of using Large Language Models for software development, but even if you consider those concerns to be either overblown or already solved – even if you think that copilots and chatbots as implemented today are great tools for software development – they’re still extremely poor tools for specifically shifting your organisation’s coding practices to a new reality of core platform-oriented web development and for learning how to use the many new features of the web platform.

It’s all down to how a Large Language Model works.

I’m not going to get into the weeds of how these models work; the issue can be explained at a slightly higher level.

The core observation that underlies these models is that size matters. The more training data you have for a specific use case, the better the outcomes tend to be. Tools built on Large Language Models are statistical models of a (mostly) textual landscape. The more data they have to derive their statistical modelling from, the better, more plausible, and more realistic the answer tends to be.

There are limits, of course, and depending on who you ask, we may or may not have already reached them, as the improvements over the early GPT-4 models have all been incremental at best.

But, generally, the more training data you have, the better the model is at what it does.

There are downsides. There are some indications that larger models actually hallucinate more and get less predictable, but it’s hard to tell whether that’s integral to how these methods work or whether it’s down to, well, the kinds of people who run AI companies. Dishonest people with poor judgment tend to make poor decisions and then try to hide them.

This size-over-everything characteristic also partially explains why these tools tend to be bias magnifiers.

The majority of the body of text in the training data is biased. Most of it was written before we had a broader cultural awakening about biased language and, even today, biased language still represents the majority of what you can find on the web – even in newer writing. It’s merely gone from a super-majority to a regular majority.

The models are expected to statistically boil this broad data set down into single answers, and the single answer that most accurately represents the broad consensus in the training data is going to be a biased one. Always. Prejudice and biased language get amplified and, because less biased language is a smaller proportion of the training data set, the unbiased answers are at risk of being of lower quality and having a higher overfitting rate.

The odds of overfitting – of verbatim copies of text from the training data – increase the smaller the portion of the training data the answer draws from. A smaller body of text means less variation in the training data, which occasionally gets amplified into no variation in the answer.

This represents a problem when faced with a paradigm shift.

Training data sets implicitly represent the status quo – the old paradigm. By definition, they are useless for modelling a new paradigm.

It’s the old world and you can’t begin to understand a new unexplored world from maps of the familiar and the already known. It’s unexplored. There are no maps.

A key characteristic of new web development is that modern platform features are underused. This means they are underrepresented, if not entirely non-existent, in all existing data sets. The exemplars haven’t been made yet, but we’re getting closer with every new project that switches to working with DOM APIs directly.

Large Language Models can’t model what doesn’t yet exist

That’s the core issue here: new web development is in the process of being invented and defined through practice. What it looks like today does not represent its final form. It won’t look like the web development I learned twenty or ten years ago because the platform has evolved.

It has a number of new and frankly amazing capabilities that are either severely underused by the field at large or almost entirely unused: container queries, the :has() selector, CSS custom properties, the dialog and details elements, web components, JavaScript modules and import maps, and more.

And that’s not even close to being a complete list of the powerful and underused features of the modern browser platform.

Some of these are underused because they still have limited availability. Some of them because they’re so new and we’re still trying to figure them out (in between projects, because employers still generally don’t pay for research even if it presents an opportunity for substantial cost savings).

Some of them, like JavaScript modules and import maps or custom properties, we’re still figuring out because they represent a new way of thinking or have complex implications.
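As a sketch of what those two features look like in use (the /js/store.js path and the --accent property are placeholders of my own, not a convention):

```html
<!-- An import map lets modules use bare specifiers without a bundler. -->
<script type="importmap">
  { "imports": { "store": "/js/store.js" } }
</script>

<style>
  /* A custom property: define a design token once… */
  :root { --accent: #0b7285; }
  /* …and reuse it anywhere in the stylesheet. */
  a { color: var(--accent); }
</style>

<script type="module">
  // The bare specifier "store" resolves through the import map above.
  import { createStore } from "store";
  const store = createStore({ theme: "light" });

  // Custom properties are also readable and writable from JavaScript.
  document.documentElement.style.setProperty("--accent", "#c92a2a");
</script>
```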

Modern web applications and websites that build directly on these features are going to be very different from those that translate the React Single-Page-App mode of thinking directly into “vanilla” JS or those that represent what was the status quo over a decade ago.
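As a rough, hypothetical sketch of that difference: instead of porting a React component one-to-one, you can lean on the component model the platform itself now ships. The tag name and attribute below are mine, not a standard:

```html
<show-count start="5"></show-count>

<script type="module">
  // A self-contained component using custom elements:
  // no framework runtime, no build step.
  class ShowCount extends HTMLElement {
    connectedCallback() {
      let count = Number(this.getAttribute("start") ?? 0);
      const render = () => {
        this.textContent = `Count: ${count}`;
      };
      this.addEventListener("click", () => {
        count += 1;
        render();
      });
      render();
    }
  }
  customElements.define("show-count", ShowCount);
</script>
```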

New Web Development isn’t old web development. It isn’t going to be a nostalgia movement.

It took five years for the simple XMLHttpRequest API to trigger the evolution of Ajax, the precursor paradigm to the React SPA paradigm, and that was in a tech industry that was arguably marginally more functional and coherent than today’s oligopolies.

We’re still in the process of inventing this future, and a technology that’s as fundamentally backwards-facing as Large Language Models is incapable of predicting what it will look like.

And all the rest of us know is that it’s going to be new.

You can also find me on Mastodon and Bluesky