Studying the pitfalls and potential of generative code (plus links)
Over the past few months, I’ve been diving deeper and deeper into the topic of language models and, to a lesser extent, diffusion models.
During this time, my newsletter has been dominated by links and research, most of it on AI and AI companies.
I’d like to thank all of you who have stuck around during this time.
The book I’ve been working on, The Intelligence Illusion: A practical guide to the business risks of Generative AI, is finally out.
I wrote a blog post about "why I wrote The Intelligence Illusion" and tweaked the design of the front page of my website while I was at it.
One of the mantras I had while writing it was that it’s a tool. It’s research, made accessible, there to help you decide, for yourself, what the risks of these systems are.
But it’s also a tool for me. I really do think that language models are one of the more promising innovations in software development in recent years, but the technology is being let down by poor design, shoddy thinking, superstition, and some of the least trustworthy people and companies I’ve had the displeasure of reading about.
As I wrote in the book itself:
I’ve never before experienced such a disparity between how much potential I see in a technology and how little I trust those who are making it.
The important word there is potential. It’s clear to me that language models fit programming better than they do most other tasks. That deserves exploration. But exploration only has value if it is structured and researched. Unstructured personal exploration just leads to superstition and hearsay.
The research I did for the book was what I needed to form my thoughts and ideas on generative AI in software development.
Chief among them: none of the current approaches are likely to unlock that “potential”.
Over the next few weeks, I’ll begin by posting extracts from the book that will serve as the foundation of my argument around the use of language models in programming.
After that, I’ll be publishing a series of essays on some of the pitfalls and potential of generative coding. Most of it will be pitfalls because this is the software industry, and our first instinct as an industry is always to fuck things up.
But, unlike in writing and art, the potential in programming is there. It deserves investigation.
That potential may never be realised. Many prior programming revolutions petered out without revolutionising anything. Some even turned out to be counterproductive. But it needs to be taken seriously.
If you want to support this newsletter and help me publish more essays of substance and fewer automated link posts, the best way is to buy one of my books:
- Out of the Software Crisis: Systems-Thinking for Software Projects
- The Intelligence Illusion: A practical guide to the business risks of Generative AI
The income from the ebooks is what pays for the essay writing.
Get The Intelligence Illusion in PDF and EPUB
Software development links
- “Avoiding the Rewrite Trap” by Camille Fournier (Medium, April 2023)
- “Deno KV - Simon Willison’s TILs”. This looks really interesting. Looks like you could use the open source version with Litestream or LiteFS (see the sketch after this list).
- “Stanford Professor: Mass Tech Layoffs Caused by ‘Social Contagion’”
- “ongoing by Tim Bray - Amazon Q1 2023 Financials”. “And why is it legal for Amazon to be the prime competitor of the economy’s whole retail sector while not having to make a profit?”
- “Using crypto for crime is not a bug — it’s an industry feature - Financial Times”
- “Does OAuth2 have a usability problem? (yes!)”
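On the Deno KV link above: here’s a minimal sketch of what the API looks like, going by Deno’s own documentation at the time (KV was still behind the --unstable flag). The keys and values here are made up for illustration.

```typescript
// Open the default KV store (backed by a local SQLite file in the
// open source runtime). Run with: deno run --unstable main.ts
const kv = await Deno.openKv();

// Keys are arrays of parts, which gives you hierarchical namespaces for free.
await kv.set(["links", "deno-kv"], { url: "https://deno.com/kv", read: false });

// Reads return a versioned entry, not just the raw value.
const entry = await kv.get(["links", "deno-kv"]);
console.log(entry.value, entry.versionstamp);

// Atomic transactions can check that an entry hasn't changed since you read it.
const result = await kv.atomic()
  .check(entry) // fails if another writer got there first
  .set(["links", "deno-kv"], { url: "https://deno.com/kv", read: true })
  .commit();
console.log(result.ok ? "updated" : "conflict; retry");
```

If the open source runtime does back this with a plain SQLite file, then replicating that file with Litestream or LiteFS, as speculated above, seems plausible. But that’s an assumption about the storage layer, not something I’ve verified.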
AI links
- “Why Silicon Valley is bringing eugenics back”
- “AI in News Reporting: A Test Is Coming for Journalism Ethics - Bloomberg”. The signal-to-noise ratio of the web, especially on any kind of commercial website, is about to drop like a stone. This isn’t likely to end well for the publishers.
- “MEPs seal the deal on Artificial Intelligence Act”
- “EU proposes new copyright rules for generative AI - Reuters”. “Companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems.”
- ‘False Alarm: How Wisconsin Uses Race and Income to Label Students “High Risk”’
- “A research team airs the messy truth about AI in medicine”. “In some instances, an AI could lead to faster tests, or speed the delivery of certain medicines, but still not save any more lives.”
- “A Photographer Tried to Get His Photos Removed from an AI Dataset. He Got an Invoice Instead.”
- “The Cliffs Notes paradox”. I didn’t realise people genuinely thought summaries led to actual understanding. I always thought the people obsessed with summarisation were all about passing as having read something without actually reading it.
- “AI translation jeopardizes Afghan asylum claims - Rest of World”
- “Too Big to Challenge?” by danah boyd (Medium, April 2023)
What are the major business risks to avoid with generative AI? How do you avoid having it blow up in your face? Is that even possible?
The Intelligence Illusion is an exhaustively researched guide to the business risks of language and diffusion models.