Links (2 September 2024)
This week’s must-read #
Is AI a Silver Bullet? — Ian Cooper - Staccato Signals
This incredibly detailed and well-thought-out essay by Ian Cooper is a must-read. It took me a while to get around to reading it, as it was originally published in June. Don’t be like me. Read it sooner rather than later. You won’t regret it.
But the flaw in prompts as a description of the code to be generated tends to be that natural language is a remarkably poor medium for expressing the model that we want to use in our software. Every time we move away from a 3GL to create a more natural interface, we rapidly get diminishing ability to author software.
[…]
Where a space is already commoditised and no advantage through an innovative internal product is possible, it is likely that a SaaS offering is a better way of reducing the cost of software than authoring an LLM-based solution. We have already moved away from end-user-authored software in many such cases.
[…]
If we already have developers who work in the abstraction of a 3GL to model the problem, it is unlikely that switching them to Prompt Engineering will do anything other than slow them down. We create new accidental complexity that they are less experienced with.
As we develop more code by Prompt Engineering, the question becomes: what feeds the LLM with new or innovative solutions to the problem, new ways to model the answer? Prompt Engineering itself is unlikely to provide that. But the LLM can only parrot existing solutions to the problem.
[…]
This is the trade-off we have seen before: eliminating the accidental complexity of authoring code has less value than we think because code spends most of its time in maintenance, so the maintainability of the code is more important than the speed to author it.
[…]
The risk is, though, that Prompt Engineering is just a replacement of one form of accidental complexity – which we are good at solving – with another form of accidental complexity, how to elicit the computable model we want via Prompt Engineering – which we are less good at solving (for now). The current state of Prompt Engineering, with its “spells” that must be incanted to work, just looks like another form of accidental complexity.
[…]
Finally, code generation often suffers from the problem that it does not build understanding. Because the team did not build the code up, or use publicly available, well-documented frameworks, the onboarding costs are high.
[…]
The problem is a classic trade-off: what many innovations in the space took from the accidental complexity of authoring software, they added into the accidental complexity of owning that software.
This is a problem we repeatedly forget, and relearn. Any successful software will be changed as the needs of its users change. Most of the lifetime of any successful piece of software will be in the ownership phase. During ownership our costs will be the cost of change. So to reduce overall costs, we need to reduce the cost to change our software.
I could quote from it all day. My only quibble (and it’s a quibble) is that I’m much more sceptical about copilots than the author. I think software developers as a group have a history of being incapable of accurately judging the effectiveness of whatever the current trend in dev happens to be, and that we, as a demographic, are uniquely vulnerable to the kind of cognitive biases that LLM tools trigger. We are generally confident that our intelligence somehow makes us immune to bias, whereas the superior pattern recognition that comes with the one-dimensional capability our community labels as intelligence is more likely to do the opposite. We see patterns that aren’t necessarily there.
Links #
- “Does AI benefit the world? – Chelsea Troy”. “Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest.” In a sensible world, the debate would end there. We don’t live in a sensible world so we need articles like this one.
- “Watch what they do, not what they say | LinkedIn”. “If you are serious about your climate targets and net zero pledges, then you must do something different to what you did last year, and the year before.”
- “Semi-Annual Reminder to Learn and Hire for Web Standards — Adrian Roselli”
- “CSS { In Real Life } | The Problem With Surveys (and Why You Should Take This One)”. “One thing I noticed this time around is the number of men I’m aware of doing this stuff has increased, while the proportion of women, non-binary, or non-gender conforming people has declined.”
- “Crows May Be Smarter Than We Thought”. ‘Creating and using mental templates might be a skill that evolved in the ancestor of all corvids, the “Corvida” branch of songbirds, or perhaps it is even shared more broadly across the animal kingdom, she says.’
- “Challenging The Myths of Generative AI | TechPolicy.Press”. ‘Learning from natural selection is distinct from education: nobody ever says moths or AI systems were educated. Yet, AI advocates want to apply educational freedom (the “right to learn”) to the companies that build Diffusion and Large Language Models.’
- “The Humble Link - Jim Nielsen’s Blog”
- “Fighting Climate Disinformation Is an Urgent Priority”. “To say that people are immune to such efforts because they distrust corporations is to ignore the way these corporations seek, often very effectively, to deliver their disinformation not as corporate missives but as messages of culturally hegemonic common sense.”
- “Signal Is More Than Encrypted Messaging. Under Meredith Whittaker, It’s Out to Prove Surveillance Capitalism Wrong | WIRED”. “The short answer here is that AI is a product of the mass surveillance business model in its current form. It is not a separate technological phenomenon.”
- “Apple is cutting jobs across its Books and News apps - The Verge”. “The iPhone maker is reportedly laying off around 100 employees.” So, they fired 99.5 people working on News and the part-time intern who occasionally pokes at anything related to Books?
- “When monoculture leads to monofailure”. “Often, we optimise ourselves into single points of failure. Protecting computer systems from threats around the world is an extremely complex task, and delegating the task to a single company seems like an efficient solution to the problem.”
- “Just use fucking paper, man - Andy Bell”. This. Index cards too. (I alternate between pencil and fountain pen depending on my mood, but the principle is the same in both cases.)
- “Slop is Good • furbo.org”. This is an analogous line of reasoning to Crockford’s “IE’s stagnation was good for web dev as it let everybody catch up and the industry mature” take. Not sure I agree with it, but it’s an interesting thought.
- “Authors shocked to find AI ripoffs of their books being sold on Amazon | Artificial intelligence (AI) | The Guardian”
- “Create public-facing unique keys alongside your primary keys”. “I’ve come to realize that you should almost always create external IDs for your database tables, and that they should be prefixed with a type identifier, but not be used as the primary key for your tables.” A minimal sketch of the pattern follows after this list.
- “Gardening and code | Go Make Things”. “One of the biggest pitfalls I see my students fall into is trying to plan for use cases that don’t exist yet—and might not ever happen.”
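If the external-IDs pattern mentioned above is new to you, here’s a minimal sketch of what it might look like in practice: a prefixed, random, URL-safe ID stored alongside (not instead of) the integer primary key. This is my own illustration of the approach the quote describes; none of the names or the exact ID format are taken from the article.

```python
import secrets
import string

# Internal integer primary key for joins and foreign keys; a prefixed,
# random external ID is the only identifier that URLs and APIs ever see.
# All names here are illustrative, not from the linked article.

_ALPHABET = string.ascii_lowercase + string.digits

def new_external_id(type_prefix: str, length: int = 12) -> str:
    """Return a type-prefixed external ID, e.g. 'user_k3df9a2m1x0q'."""
    suffix = "".join(secrets.choice(_ALPHABET) for _ in range(length))
    return f"{type_prefix}_{suffix}"

# A matching table might look something like this:
#
#   CREATE TABLE users (
#       id          BIGSERIAL PRIMARY KEY,  -- internal, never exposed
#       external_id TEXT UNIQUE NOT NULL,   -- e.g. 'user_k3df9a2m1x0q'
#       name        TEXT NOT NULL
#   );

if __name__ == "__main__":
    print(new_external_id("user"))   # e.g. user_7gq02mzr81xk
    print(new_external_id("order"))  # e.g. order_cj4t0p9s2klm
```

The appeal of the prefix is that IDs become self-describing in logs and URLs, and keeping the integer key internal means joins stay cheap while the public ID reveals nothing about row counts.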
Photos #
Only a couple of photos this week.
This one was from a walk by the Varmá river here in Hveragerði. Taken on the iPhone using Halide and its Process Zero.
And this one is a new photo of my sister’s cat Kolka.