My writing on AI: the story so far
I’ve cleaned up the notes and essays I’ve been posting on AI over the past few months while doing research for The Intelligence Illusion, and I figured it might be useful to have them all in one thread for reference.
In this one from February, I pointed out that, since the tech industry has blundered every major ‘disruption’ it promised us over the past decade, AI is its last chance at keeping the party going.
We have these two on AI-generated works and copyright.
Then I wrote this one in March.
If you are claiming double-digit percentage improvements in productivity, it won’t take long for the rest of us to see that you’re bullshitting
When you promise an AI revolution, eventually you will have to deliver
Another from March. On advocacy research. You should ignore most AI studies you see because they tend to be fluffy marketing bullshit.
Why you should ignore most AI research you hear about on social media
Here I point out that if tech companies are basing all of their new, revolutionary AI features on AI summarisation, those summaries had better work properly.
And they don’t.
This one goes over some of the issues with using AI in healthcare. Relevant since Epic is trying AI again, this time in collaboration with Microsoft.
This one is a thought exercise where I think out loud about regulating AI. Unlike others, I don’t think AGI is an issue; my worry is primarily about what big corporations do with AI.
Here I point out that all of these AIs are fundamentally American and that is a pretty substantial problem in our multi-cultural world. Because most of us aren’t American.
Here I show you, in detail, how I read through a study on the productivity benefits of AI.
Critically reading academic papers is a skill and takes practice.
If you can’t be arsed, you can just leave it to me and buy my book instead 😎
In this one I riff on a discussion that points out that algorithmically generated art (such as AI art) is anti-poetry, which I still find to be a compelling point.
Here I go over how AI chatbots trigger anthropomorphism and why that matters.
This one, from earlier this week, is an extract from the book.
Artificial General Intelligence and the bird brains of Silicon Valley
Another extract from the book. This time a warning about the AI industry and AI research. It has a long history of pseudoscience and unfulfilled promises.
Finally, in the last extract and the latest essay, I point out the fundamental friction between programming culture, software development, and language models.
AI code copilots are backwards-facing tools in a novelty-seeking industry
All of these posts and essays are either a small part of the research that went into my book, The Intelligence Illusion: a practical guide to the business risks of Generative AI, or essays extracted from it.
8 May 2023
This one is about the worrying polarisation taking place in AI research, where it looks like the more pragmatic and sensible voices are being ostracised by the rest.
I also wrote about how ChatGPT and OpenAI are overpriced because we have underestimated their defects and security flaws.
Turns out that language models can also be poisoned during fine-tuning.
For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.