The Intelligence Illusion: stepping into a pile of 'AI'
This is a part of a series where I review the work I’ve done over the past couple of years.
- Two-year review: to plan a strategy you must first have a theory of how the hell things work
- Out of the Software Crisis: two year project review
- Sunk Cost Fallacy: chasing a half-baked idea for much too long
- The Intelligence Illusion: stepping into a pile of ‘AI’ (this page)
- A print project retrospective: the biggest problem with selling print books is the software
- Thinking about print
- Disillusioned with Deno
- An Uncluttered retrospective: Teachable is a mess and I need to pick a lane
Stepping into a pile of “AI” #
After I published Out of the Software Crisis, I spread my time too thin.
- I spent too much energy and time chasing funding for an ivory tower research project.
- I dove head first into researching generative models and related studies.
- The remainder of my time went into working whichever freelance projects came my way through existing contacts and older leads.
This is the period where I really dropped the ball on freelancing and failed to build on the success of Out of the Software Crisis.
It’s also the time when I, again, deliberately chose not to think entrepreneurially and instead wrote the book I thought (and still think) needed to exist in the world. This would still qualify as an Affordable Loss, as I’m still here and still paying the bills (knock on wood), but because I was too focused on the outcome (a book) I didn’t put enough thought into whether it was a book that suited my network or the audience I’m actually capable of reaching. I also didn’t really think through whether there was an audience for it at all.
I’m proud of The Intelligence Illusion. The research I put into it rivals what I did for my PhD. I analysed, step-by-step, many of the functional and business problems with generative models, and the book ends with the conclusion that they are, essentially, the opposite of what most of us – individuals, society, and industry – need from our software. Producing the book built on my experience in research, my writing, my understanding of software development, UX design and machine learning, so making the book was a good match for my Means.
I put a lot of effort into maintaining a “just the facts” level of impartiality in the book. The editor I worked with, who volunteered their work because they also thought this book needed to exist, played a constant game of “too opinionated” Whac-A-Mole (“you can’t make a claim like that unless you can back it up with a reference”) throughout the writing process. I made sure that every criticism I levelled against the technology or business was backed by research and that every study I referenced used sound methodology, unless the study being unsound was the point.
Whenever I came across use cases where generative models genuinely work and are (relatively) safe, I made sure to highlight those benefits. I avoided mentioning the literally evil shit that passes as ideology among many in the field of AI because their ideas are so aberrant that mentioning them would make me look like a conspiracy theorist. I got researchers, entrepreneurs, software developers, and business consultants in my network to review and re-review it prior to publication. (All examples of Co-Creation.)
It wasn’t as much of a success as Out of the Software Crisis, which sold quite a bit more in its first few months after publication, but it hasn’t done badly all things considered. It keeps ticking on with a few sales every month. I’ve even been reviewing the book over the past few weeks to see if it needs updating and, honestly, it’s still completely on point.
But it doesn’t have the traction that the first book still has, and I think that’s because functionality, design, and genuine business concerns are completely irrelevant to the AI Bubble itself. The bubble is only going to get more and more ridiculous, and the discourse surrounding it doesn’t really have anything to do with grounded business concerns, on either side.
The criticisms of “AI” that resonate with the audience I can reach on the platforms available to me are social, cultural, and political – that the technology might not be functional is almost good news. The goals of the AI industry are so horrendously bad for the rest of us that dysfunction might just buy us more space to breathe.
> Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn’t provide insights as it’s just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

Dan McQuillan, “We come to bury ChatGPT, not to praise it”
The AI Bubble has become a political – even religious – movement whose primary focus seems to be to disempower labour and voters.
Fans of the technology, conversely, are only interested in research and writing that provides them with cover to ignore the critics.
The book I wrote falls outside this dynamic – the larger community discourse around “AI”. It appeals to those who like to understand the genuine technological limitations and compromised functionality that generative models all suffer from. It also appeals to those who are curious about the few use cases where the tech might work. But those two groups are, by definition, going to be an audience that’s less invested in the pros and cons of “AI” than the rest, and “uninvested” is generally the opposite of what you want in a target market.
My existing audience is also likely to lean more left and environmentalist just by virtue of me being me. I don’t think I reach many who lean conservative or libertarian in their political views. The labour issues with generative models are the primary concern of my network. Whether the technology is flawed or not is almost immaterial.
The Intelligence Illusion is a book I had to write because I needed to understand the technology and its implications. I began my research with an open mind and even a few ideas for projects but ended up convinced that generative models are, all else being equal, one of the worst ideas to come out of software in recent years.
And, yeah, that includes cryptocoins.
There’s a limit to how much I can build on that. My research completely cured me of any interest in working with the technology and there doesn’t seem to be a market for “AI” consultancies where the only message is “don’t use it, just wait and reassess the situation in three years’ time”.
The book’s sales pattern supports this. Where sales of Out of the Software Crisis came in bursts as it reached new segments of my audience, sales of The Intelligence Illusion have been lower but steadier, a few every month, which to me indicates that it’s spreading primarily through word of mouth, outside my network.
It seems to be genuinely helpful to those who like to have their thoughts on generative models grounded in an understanding of the limitations of the technology.
Some seem to then use that (“I now have a handle on the risks”) as an excuse to go all-in on ChatGPT, which is frustrating, but also beyond my control.
The book, The Intelligence Illusion: a practical guide to the business risks of Generative AI, ended up on at least three “best of 2023” lists, which was enormously validating.
> This book had a big impact on me. This book deepened my skepticism about the current wave of GenAI hype, although I do admit (like the author) that it still has some reasonable use cases.

> Refreshingly level-headed and practical. If you work somewhere that’s considering using generative tools built on large language models, read this before doing anything.

> Bjarnason poured a well-researched glass of lemon juice into the greasy pool of incredulous media coverage, which hasn’t improved much all year. It’s accessible for anyone who’s spent more than 15 minutes with a clueless executive or myopic developer — or, frankly, engaged with any of the technological “disruptions” of the past two decades — as Bjarnason rigorously unpacks the many risks involved with the most popular use cases being promoted by unscrupulous executives and ignorant journalists.
The book might not have been the most entrepreneurially sensible thing to write in 2023, but I do believe it was the most important work I did in 2023, at least for myself, if not for my audience.
Deciding not to be an “AI” pundit #
Soon after publishing The Intelligence Illusion I had the misfortune of being quoted in the New York Times.
My parents would be very annoyed to see me characterise the experience as a “misfortune”.
- “It’s the New York Times!”
- “It sells copies of your book!”
But it also misrepresented just how critical I am of the AI Bubble and how much the cult-like behaviour that seems to be driving it worries me. It misrepresented, too, the existing discourse surrounding the potential harms caused by generative models, largely because it seemed to avoid quoting the many women and People of Colour who are leading that discourse and have the greatest expertise.
It also made me realise that I had a choice to make:
- Stick to my existing “game”. I am a writer and web developer focused on writing and software development.
- Change the “game”. Become a pundit who mints quotes and opinion columns on demand and appears wherever and whenever asked, all in order to sell a book. Maybe write a quick follow-up book and switch to being all “AI”, all the time.
Changing the game would require choosing an intellectual stance.
- Orthodoxy: “this tech is the next big thing, but we need to use it responsibly.”
- Heterodoxy: “this tech is inherently flawed, doesn’t really work, and is made by people we should not trust.”
(If you’re outside the AI Bubble, those two stances would be swapped. Among illustrators “AI is garbage made by garbage people” is the orthodox opinion.)
My stance on “AI” is obviously of the heterodox variety to people and institutions inside the bubble (which the NYT definitely is; their lawsuit is about getting a cut of what they see as the future). I’m convinced the functionality of the technology is vastly overstated and that much of the perceived gains are psychological or statistical illusions, if not outright fraud. And I simply do not understand how people who have been critical of big tech companies in the past have somehow managed to reverse that stance with the advent of generative models.
But wading into a bubble – becoming one of the foot soldiers of the discourse machine – will chew you up and spit you out in pieces. I got a taste of what that felt like in the smaller ebook bubble that took over publishing in the early days of the Kindle and the iPad. It is a distinctly unpleasant experience. Getting drawn into that kind of fight only becomes less pleasant as a bubble grows.
Going all-in on punditry would make me some money in the short term, but it would also make me deeply unhappy. Doing the coldly entrepreneurial thing would have made me miserable.
It would also be a short-lived “success”. These bubbles pop and, unless you have a research, journalistic, or academic career to fall back on, being a critic of a bubble’s orthodoxy leaves you with nothing once it pops. There is also the constant risk of fallout:
- Social media drama escalating out of control.
- Reputation harm from saying the wrong thing at the wrong time in front of an adversarial audience.
- Blatant lies being spread about you. (I had this happen to me when I was criticising how the publishing industry was handling ebooks.)
- People also remember, for a long, long time, that you were an adversary, even if they have since come to agree completely with everything you said. Most of the people who criticised the dot-com bubble are still disliked by many in tech, even though, in hindsight, those same people agree 100% with every point the critics made.
This is not something you build a career on.
I decided I would not be changing my game and would instead continue to focus on software and web development topics. That can occasionally include generative models, because we’re all forced to deal with them now, but in a more focused way.
For the next few weeks after making that decision, I replied to every media query for a quote, interview, or column with an apology that I wasn’t in a position to help, offering instead a list of highly capable people who held similar opinions but backed them with much deeper expertise than mine.
People like:
- Abeba Birhane, PhD
- Dr. Damien P. Williams
- Professor Melanie Mitchell
- Margaret Mitchell, PhD
- Meredith Whittaker, President of The Signal Foundation
- Professor Emily M. Bender
- Timnit Gebru, PhD
The queries stopped in short order, and most of the outlets never emailed me a second time.
I’m sure the fact that most of the people I suggested they talk to instead were women, People of Colour, or both had nothing to do with it.
What to do about it? #
I still believe in the book, both as writing and as research. Since most of its sales seem to be driven by word of mouth, this is one of the few cases where selling a technical book on Amazon’s Kindle might make sense, so getting it on there is on my list of things to do over the next few weeks…
(Conversely, sales of Out of the Software Crisis are over 95% direct, and most of the remaining Amazon sales came about because I linked to the book’s Kindle page myself.)
Another sensible action might be to use Lulu’s or Bookvault’s retail distribution network to make it more broadly available, but the economics there are less clear.
And, as it happens, that’s the next project I need to review:
In 2023 I got the print edition of Out of the Software Crisis into the world. It was a messy process, but in this case not really because of a mistake I made.