
The sentiment disconnect on ‘AI’ between tech and the public

I took a bit of a break from work last week and tried my hardest to get away from the “tech context”. I went to photography exhibits, took walks, and gave clearing my head a good shot.

But that clear space just immediately filled with muddy, anxious questions.

I don’t think I’ve ever before experienced this big a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I consume.

Most of the time I hear “AI” mentioned in Icelandic mainstream media or from people I know outside of tech, it’s being used to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.

You see this in film reviews on The Guardian’s site. You’re also starting to see it on social media – obviously exaggerated into harmfulness, as all things on social media are – as people switch contexts from their pro-tech bubbles on Twitter or LinkedIn and test out Bluesky or Mastodon, which are less predominantly supportive of the tech mainstream.

There, having been enthusiastic about “AI” is more likely to work against you than for you, much like crypto-coin enthusiasm does.

There’s a contingent of people who are critical of “AI” in tech, of course. I’m one of them, so most people I interact with in my corner of tech social media aren’t fans. But they’re also very clearly not the norm. The norm is heavy and extensive use of language models for coding, at the expense of training, documentation (what little funding there was seems to be evaporating), accessibility, and general variety.

The norm, and the sheer difficulty in avoiding it, is what prompts developers to write posts like this one:

AI can certainly be used for good, but do we already live in a world where if you’re a software developer, particularly on the web, you have to be fully on board the AI train to be of actual value?

“Is AI part and parcel of web dev?”

A copilot coupled with a chatbot seems to have become an almost required part of the web developer toolkit, along with TypeScript – another tool that, controversially, I think has very little code quality benefit and, like the copilot, exists almost entirely to enable autocomplete features in a Microsoft-owned text editor.

This places web developers in a very different context from how “AI” adoption (or lack thereof) is unfolding in other industries or among the general public.

Media companies are using it to replace certain classes of entry-level jobs, like copywriting and audio transcription, or to destroy sectors that are firmly cost centres, like translation. The kind of machine learning that gets baked directly into most creative-industry tools the way LLMs are being baked into coding environments is usually a bit less overtly generative – think “select sky”, object discovery, or automatic dust removal. Adobe is trying to push its generative model features but doesn’t seem to be making as much headway as Microsoft has with GitHub Copilot and Visual Studio Code.

Coding is shaping up to be the one industry that sincerely adopts “AI” for everything everywhere, for better or for worse (primarily worse), and this is already colouring how many people in tech talk about it. Their experience is positive, so they give every instance of hype, fraud, and deception from the notoriously unreliable AI industry a credence it doesn’t deserve, and they pick up many of its common talking points.

Like the bullshit claim that being against “AI” is ableist.

This comes to a head when the tech and the mainstream contexts collapse, such as when somebody who is used to the tech bubbles on Twitter or their corner of LinkedIn switches over to a less algorithmic and less techy social media like Bluesky or Mastodon.

At that point, if you write something potentially controversial but reasonably argued – such as citing the hard work done by moderators at a tech company as support for the idea that tech companies do a lot of work they generally don’t get credit for*, to use recent events on Bluesky as an example – and it turns out you’ve left a large collection of pro-AI posts behind you on those other social media sites, it’s quite likely that the crowd will use those posts to dismiss everything you have to say, _ever_, and flip the bozo bit on you, hard.

(* I’d argue that it’s the moderators who are not getting the credit they deserve. The companies themselves are just trying to avoid getting regulated out of existence.)

The idea being that if you said something as obviously bullshit and partisan as “being against AI is ableist” then you’re obviously a bullshit partisan. It might have seemed like the norm in the context you came from, but that’s because that community – your peers – are feeding you bullshit and calling it cake.

The mainstream non-tech rhetoric is in the “generative models are crap, destroy jobs, and add bullshit to the software we’re trying to use at work” camp.

To many, “AI” seems to have become a tech asshole signifier: the “tech asshole” is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry.

For example, my sister helps manage a book store as a day job. They hire a lot of teenagers as summer employees, and those teens, at least, use “he’s a big fan of AI” as a red flag.

(Obviously a book store is a biased sample. The ones who seek out a book store summer job are generally going to be good kids.)

I don’t think I’ve experienced a sentiment disconnect this massive in tech before, even during the dot-com bubble.

During most bubbles, the public and tech tend to be on the same page: “this is great!” Then the bubble pops and they’re still, roughly, on the same page: “this sucks!”

Crypto-coins ended up appealing only to a minority of both mainstream tech and the mainstream public, so the sentiment on both sides was a bit more muddied.

“AI” sentiment, however? Clear as distilled water. One camp is a fan. The other is shaping up to be less enthusiastic.

This makes the job harder.

Working in web and software development – which includes writing about it – often feels like a career of trying to prevent or fix disasters.

  • New projects usually have a “bad idea from the manager” original sin wrapped around at least one of their branches. You need to be careful to try and prevent that bad idea from harming the project, the customers, and the company without pissing off said manager. You’re always going to fail on at least one of those fronts.
  • Old projects are inaccessible – to the point of being outright legal liabilities – and are performance disasters you need to fix.
  • Many of those are code quality nightmares too, with test suites so tightly integrated into the codebase that changing a single line of code – any line – will break a dozen tests.
  • Ongoing projects are a dependency management and security alert death march.

The job is like seeing a boulder about to tip over an edge and roll down a hill and into a village. You’re not going to push it back up. Even stopping the boulder is impossible. All you can do is try and delay the tipping point, prop the damn thing up, then slow the descent and try to direct its path to where it won’t harm anybody. And occasionally in between crises you manage to ship something good.

This is tragic because the web as a development platform – what’s supported by the browser itself, no frameworks or dependencies – has never been more interesting, but you won’t see any of those features in your language model tools.

I don’t know if there’s that much genuine demand for web dev that isn’t either “let’s set up a code disaster by churning out tons and tons of code” or “let’s fix a code disaster that was created by churning out tons and tons of code.”

So, when I see posts like Remy Sharp’s asking the very reasonable question “Is AI part and parcel of web dev?” my first reaction is: if it is, then I’m out.

If “AI” becomes a mandatory part of dev (web or otherwise), then I’m out.

If web dev ends up being the lone LLM-in-daily-practice holdout post-bubble, then I’m just plain done with dev as a career.

I’ve been on the fringes of web dev all these years because I care about accessibility, performance, semantics, and old-fashioned ideas like the separation of concerns. If LLMs become the standard approach to web development, my corner of the field is likely to get even more starved of funding and resources.

The disconnect between a generally “AI”-positive work context – which will be harder to avoid if tech does standardise on LLMs – and the generally “AI is bullshit” social context outside of work is also jarring enough as it is.

Unless I stumble onto a sustainable niche within web dev, “AI” dominating the field will probably mean it’s time for me to move on.

Finding a niche is not impossible, but the odds don’t look favourable at the moment.

So I’m still genuinely hoping that the answer to Remy’s question ends up being “no”.
