'The Intelligence Illusion (Second Edition): Why generative models are bad for business' – Black Friday launch sale
I’m publishing a second edition of my book The Intelligence Illusion (Second Edition): Why generative models are bad for business with a Black Friday launch sale!
A new and updated edition – 40% longer, every chapter updated in some way – at an almost 30% launch discount from today (28 November 2024) until the end of Monday (2 December 2024).
Read on for why I decided to do this and why now.
Why I’ve made a second edition #
Generative models are bad for business. The harm they do to your work and business isn’t limited to their unreliability or regular failures.
These products are riddled with flaws and errors that expose any company relying on them, or letting its employees use them, to risks that under normal circumstances would be considered outright untenable.
But, because we are in the middle of a bubble and because tech companies are now increasingly intertwined with the governments that should be holding them accountable, managers everywhere are rushing to adopt these tools in as many ways as possible in the belief that they represent an imminent revolution in productivity.
People who are working in these companies are forced to try to be the lone voices of reason: maybe we should hold off until some of the more egregious flaws have been fixed?
Maybe we shouldn’t be using a risky, exploitative technology with a non-deterministic but fairly constant failure rate?
You can’t appeal to your average manager’s sense of fair play or justice. After all, what are the odds that the guy who didn’t hesitate to fire a pregnant woman about to go on maternity leave is going to care about worker exploitation or environmental impact?
I know, I know. You’re on a job hunt so you can get out from under this manager’s thumb and that takes time. The job market is what it is at the moment. In the meantime, you need to prevent him from totally destroying the team and the product, because when it turns to shit, he’s not going to be the one they blame.
Just telling a manager about the risks doesn’t work either. I know. That’s exactly the mistake I made with the first edition of The Intelligence Illusion. I outlined all of the risks in detail, often with suggestions for how they might be mitigated in the future, and the only thing that managers took away from it was “these risks will get mitigated in the future.”
Full speed ahead!
But that suggestion of potential future mitigation, an assumption of mine born out of optimism, has turned out to be entirely wrong.
As I explain in the new introduction to the second edition:
That assumption was the foundation of the optimism that – unfortunately – was a continuing thread throughout the first edition of this book. Even though it was highly critical of all kinds of generative models, the book was suffused throughout with a hope: these problems might get addressed, in time, and we’d discover ways to use the models safely and productively.
That optimism was unwarranted and was the first major mistake of the first edition.
The second big mistake was my assumption that anybody who read through a detailed overview of all of the dysfunctions, risks, and flaws of generative models would realise that, obviously, a technology this broken was unsuitable for most businesses. I didn’t account for the inherent irrationality of people already predisposed to be fans of the technology. Where I thought I had delivered a guide that outlined all the reasons why generative models should be avoided, consultants and managers saw a guide that helped them sell “AI” by giving them a list of risks to downplay.
Meanwhile, the tech industry has sunk to new lows. Layoffs continue unabated and are spreading throughout our economies as tech companies promote the lie that workers can be replaced either with generative models or with a much smaller workforce “empowered” by “AI”. Safety and security teams for “AI” products are understaffed or have even been outright disbanded. “AI” product research and design has all but stopped – every product is stuffed with chatbots, image generation, and language model autocompletion, whether it makes sense or not. If the customers don’t buy it, then they’ll be forced to buy it, and there will be no attempt to come up with more useful design abstractions for the technology. More and more, you no longer even have the option to opt out of paying for “AI” integrations or to buy a cheaper version of the service that doesn’t come with a chatbot attached.
This is the strategy of a monopolist unafraid of regulators, the consumer, or the government. Why would Microsoft try to disrupt software development? They already own the entire field by virtue of owning GitHub, npm, TypeScript, and Visual Studio Code. There’s no need for any of these companies to even pretend that this is a disruptive innovation on a path to become more and more useful. Just bundle it with your core offerings and raise the prices. What’s the consumer going to do about it anyway?
So, even though the business risks of generative models have not changed in any meaningful way in the intervening year, a second edition became necessary to specifically counter the irrational exuberance of the AI bubble, in the face of the continuing disregard these companies have shown for safety, security, and overall business risk.
Instead of a book that matched risks with hypothetical mitigations and dreams of a more responsible tech industry, the second edition is a book that tells you, straight up, how bad generative models are for business – in enough detail that you can, hopefully, find a few risks that directly threaten your business and are enough to convince your manager to hold off, at least for another year.
Because risk to their own career is one thing most managers will actually pay attention to.
What was changed? #
- Every chapter has been changed or edited in some way.
- The second edition is twelve thousand words longer than the first, bringing it to over forty-two thousand words in total.
- Some of the risks were upgraded or downgraded, based on the behaviours of the companies involved.
- I’ve integrated elements from the research I did after the first edition was published. Specifically, I included versions of my essays The LLMentalist Effect and Modern software quality in the book.
- I wrote a new introduction and afterword. The afterword specifically talks about the massive productivity cost that can come from using generative models.
- References were updated or replaced where appropriate.
- Every mention of a hypothetical or predicted mitigation of these risks has been removed. These are not going to happen with the companies currently in charge.
- The recommendations have been rewritten and edited to reflect the increasing irresponsibility and monopolistic behaviour of the tech giants that control these products.
Anybody who bought the first edition through my Lemon Squeezy sales page will get the second edition for free #
If you bought the first edition directly from me, using the checkout page managed by Lemon Squeezy, you should be able to download the second edition, for free, through the Lemon Squeezy “My Orders” page. Sign in with the email you used for your original purchase.
It’s important to me that this book gets used, that it helps those who are trying to mitigate the harms done by generative models in a business context. So, I’m making the second edition available at no additional cost to everybody who bought the book from me directly through the Lemon Squeezy platform.
All that I ask from you in return is that, if you know anybody who might find the book interesting or useful, you recommend the book to them. Spread the word on your social media if you can.
Praise for the first edition of The Intelligence Illusion #
Amid the current AI tsunami, I knew it was time to do my share of learning this stuff by filling gaps among pieces of my fragmented knowledge about it and organize my thoughts around it. This book served that purpose very well.
Generative AI, ChatGPT and the like, have been released prematurely, with too many downsides for the possible benefits. That’s especially so when it comes to commercial use. This book walks you through the risks your business might encounter if you casually incorporate it.
Many of his arguments opened my eyes. I’m glad I found his book at this time. It’s hype, at least for now and the foreseeable future. Use cases will likely be very limited. And to protect ourselves from bad actors, we need solid regulations, just like in the case of crypto.
The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.
Should we build xGPT into our product?
Before you answer, make sure to take advantage of all the homework Baldur Bjarnason has done for you.
Back when I worked in publishing, I employed Baldur Bjarnason as a consultant on complex digital publishing projects, as I appreciated his ability to grasp technical detail and translate it into terms that a general manager could understand – and act upon. He has applied that same skill to a superb new ebook on the business risks of generative AI, which I was lucky enough to read in advance of publication. It combines deep research, logical analysis and clear business recommendations.
I bought it, and I read most of it (skimming the middle part), and it is brilliant. Thank you!
I just bought the book this morning and it’s exactly what I needed. I have not seen a clearer description of how generative AI works, what it might be good for, and what the risks are. The references alone are worth the price of the book.
When it comes to the current hype surrounding AGI and LLMs, whether you’re a true skeptic (like me) or a true believer, The Intelligence Illusion is a splash of lemon juice in the greasy pool of incredulous media coverage. Accessible for anyone who’s spent more than 15 minutes with a clueless executive or myopic developer (or, frankly, engaged with any of the technological “disruptions” of the past two decades), Bjarnason rigorously unpacks the many risks involved with the most popular use cases being promoted by unscrupulous executives. He brings plenty of receipts to support his observations, too, while also spotlighting areas where this technology might have legitimate potential for good. Highly recommended!
PS: The images throughout do an amazing job of subtly reinforcing the book’s title and premise and would be worthy of a print edition.