Language models and software quality and other links
Double dose of links this week, since last Monday was a bank holiday here in Iceland.
Modern software quality, or why I think using language models for programming is a bad idea #
This massive 7000-word essay is based on a talk I gave at Hakkavélin, a hackerspace in Reykjavík.
I had an amazing time there, and the questions and feedback I got were instrumental in helping me write the final essay.
Many thanks to @rysiek for organising everything and to everybody who showed up to listen and discuss.
The essay goes over the nature of modern software development, gives a high-level overview of how language models work, and then explains in detail why they are, in their current form, a bad fit for software development.
From my conclusion:
Even if you purposefully tried to come up with a technology that played directly into and magnified the software industry’s dysfunctions, you wouldn’t be able to come up with anything as perfectly imperfect as these language models.
Read more over on the newsletter.
Highlight of the week #
"Origin Stories: Plantations, Computers, and Industrial Control"
The architectures of Babbage’s engines are bound with his theories of labor control, and his engines served as one of the multiple mechanisms by which he sought to discipline workers. And at the root of his larger project of industrial labor discipline lie plantation logics and technologies.
AI links #
- "AI statement". From Clarkesworld: “We believe that governments should be seeking advice on this legislation from a considerably wider range of people than just those who profit from this technology.”
- "How Congress Fell for OpenAI and Sam Altman’s AI Magic Tricks"
- "Workers Are Terrified About AI, So What Can They Do About It?"
- "Superintelligence: The Idea That Eats Smart People". “What it really is, is a form of religion. People have called a belief in a technological Singularity the “nerd Apocalypse”, and it’s true.” From 2016. Still accurate.
- "Opinion: AI tools like ChatGPT are built on mass copyright infringement - The Globe and Mail"
- "Absentee Capitalism - Ed Zitron’s Where’s Your Ed At". “Executive excitement around generative AI is borne from these disconnected economics, because none of these people actually create anything.”
- "Optimum tic-tac-toe". “Something to keep in mind the next time someone tries to sell you a large language model for expert advice.”
- "AI and book bans are pretty existential in terms of the threat they represent to authors". “Demand publishers not use AI covers. Demand they do not use AI editors. No AI in the writing, editing, production, or marketing of our books.”
- "Thought experiment in the National Library of Thailand | by Emily M. Bender". “The only knowledge it has is knowledge of distribution of linguistic form.”
- "What Will Transformers Transform?". “If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.”
- "What Will Transformers Transform?". “GPT-n cannot reason, and it has no model of the world. It just looks at correlations between how words appear in vast quantities of text from the web, without know how they connect to the world. It doesn’t even know there is a world.”
- "Just Calm Down About GPT-4 Already And stop confusing performance with competence". “No, because it doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language.”
- "On Understanding Power and Technology". "The current “existential threat” framing is effective because it fits on a rolling news ticker, diverts attention from the harms being created right now.
- "Lessons from Soviet Russia on deploying small nuclear generators | daverupert.com"
- "What is the real point of all these letters warning about AI?". Quotes some smart people.
- "Biden’s former tech adviser on what Washington is missing about AI - The Washington Post". This is pretty sensible advice overall and the US would be better off if it was followed.
- "Against Predictive Optimization"
- "‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight". Using language models chatbots in healthcare and therapy is absolutely going to kill people.
- "Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat".
- "Dear Stack Overflow, Inc.". “Specifically, moderators are no longer allowed to remove AI-generated answers on the basis of being AI-generated, outside of exceedingly narrow circumstances.”
- "Tech Elite’s AI Ideologies Have Racist Foundations, Say AI Ethicists".
- "Speed and Efficiency are not Human Values - by John Warner".
- "Crypto collapse? Get in loser, we’re pivoting to AI – Attack of the 50 Foot Blockchain".
- "Maps To The “Gold”". “The current state of the art’s been creeping up on us for quite some time now, and in the cold light of day a lot of it turns out to be Fool’s Gold.”
- "Ayyyyyy Eyeeeee. The lie that raced around the world… | by Cory Doctorow | Jun, 2023 | Medium".
- "How the media is covering ChatGPT - Columbia Journalism Review"
Software development #
- "Brown M&Ms | blarg". Canary questions.
- "Link Preload as Image - Jim Nielsen’s Blog"
- "The Next Larger Context. "Always design a thing by considering… | by Camille Fournier | May, 2023 | Medium"
- "Where does my computer get the time from?"
- "Why we’re bad at CSS". I like this. Personally, tho, I think it’s less that the industry is bad at CSS, more that it’s bad at software dev (seriously bad). It’s just more obvious in CSS because of its structure
- "Markdown images are an anti-pattern". HTML image markup is also easier to remember, IMO.
- "Watch Transitions in Slow Motion in Chrome’s DevTools - Jim Nielsen’s Blog"
The rest #
- "Chile’s Atacama Desert has become a fast fashion dumping ground"
- "Notes apps are where ideas go to die. And that’s good". This is what I use most dedicated notes apps for. But working notes are also a huge part of my process and those are in separate apps.
- "Tomorrow and tomorrow and tomorrow".