My position on AI (for future reference)
I’m writing this mostly for myself.
It’s light on references. If you want more referenced and more solidly argued overviews of some of the issues with “AI” I invite you to go explore the “AI” category on this site and the “AI” tag on my newsletter archive.
(This week’s links are at the end of this post.)
The short version is that I think generative models and ML-implemented automated decision-making systems are extremely harmful.
Their systemic flaws are a Y2K-style economy-wide disaster in the making.
Many individuals are able to use generative models safely. The problem is that a small percentage will suffer through catastrophic errors, bias, or misinformation and many of the biggest systemic downsides will only appear at a later date, long after the fact.
This creates a dynamic that’s similar to what we faced with COVID or Y2K. The extreme harm is there, but is at such a low percentage and at such a delay that it doesn’t meaningfully affect broad societal behaviour until it’s too late.
If the technology gets adopted widely then the “low” percentage becomes a humongous number in real terms.
One percent of all society is a lot of people.
This is why my current position is that we should do anything we legally can to slow or limit the adoption of these models in our systems, organisations, and institutions.
- They should not be given the benefit of the doubt when it comes to complying with existing laws and regulations
- Additional laws creating strict regulations specific to the “AI” industry should be passed.
- Anybody with standing and the resources to spare should sue AI vendors.
- Developers should refuse to implement or integrate them.
- Even slowing down integration and implementation through inaction benefits society by reducing the scale of use.
The harms #
“AI” systems are not functional #
These systems are not nearly as functional as vendors make them out to be.
The core issue is that these systems are non-deterministic and prone to unpredictable errors. They are prone to making major factual errors, severe “reasoning” failures, suffer from systemic biases of pretty much every kind, and their output – at best – is of a pretty low quality, forcing organisations to hire low paid workers to do the unsavoury job of making unacceptable machine-generated pap acceptable.
In pragmatic business terms, using and integrating these systems presents enough risks to fill an entire book.
These systems do not present a long-term competitive advantage. The harms come from their use.
Statements such as “if we don’t make them then the Chinese will” are not a counter-argument because the biggest problems aren’t caused by the existence of these systems but from their use in critical or vulnerable contexts.
It’s systemic legitimate use at scale that is likely to cause the most harm.
- Medical systems that have fatal errors.
- Financial decision systems that randomly throw people into poverty or debt.
- Code generators that occasionally inject undetectable but exploitable bugs.
- Research systems whose output is just reliable enough to get used but still suffer from regular errors and misconceptions.
- Assessment systems in education that fail students through no fault of the student.
- Generative models or decision-making systems that systematically discriminate against women and minorities.
- A supercharged surveillance society where the surveillance systems mistakenly track or identify somebody regularly, leading to innocent people getting into trouble with the law.
- Misinformation at scale by mainstream media companies and tech industry information systems.
(This list is not even remotely exhaustive.)
Many of these harms do not directly impact the organisation implementing the “AI” system, which means that they cannot be trusted to correctly assess the potential harm.
Most of the harms happen at such a rate that their effect won’t become noticeable in small-scale or casual use.
This means that efforts to replace human labour with generative model automation is likely to be disastrous for both our economies and our society.
These systems are unpredictable and untestable #
Their very design means that we cannot test them properly to discover and prevent their failure conditions. Generative models cannot be integrated safely at scale. They will always have unpredictable errors.
This would be manageable if it weren’t for the fact that…
Many of their failures are invisible to the user #
Because these systems are used to automate, very few of their users will have the expertise to be able to correctly identify issues with the output.
Most flaws will go unnoticed.
Even when the system’s user has the skills to identify problems, they are likely to suffer from automation bias: tools for cognitive automation are there to help you think less, and because you’re thinking less, you’re less likely to be critical of the output than you would normally.
Automation bias is a major issue for aviation and the “AI” industry has it in spades.
That, for many use cases, the error rates might be relatively low (single digit percentages) makes these systems more harmful not less. It means that their severe dysfunction is less likely to be spotted until they have been integrated or used at scale.
We can’t trust the AI industry #
“AI” has a long, long history of snake oil, false promises, and even outright fraud. Most of the studies and papers I’ve reviewed over the past few months (and I’ve reviewed hundreds) are flawed, misleading, or even outright deceptive.
The industry also makes independent replication of research essentially impossible.
The poor quality of much existing research and the roadblocks preventing replication means that the capabilities of these systems are vastly overstated, but we won’t be able to conclusively demonstrate their limitations to the media or policy-makers in the near or medium term.
“AI” systems are vulnerable #
Generative models are prone to training data poisoning and their prompt inputs can’t be secured.
Widespread use is, effectively, baking systemic security flaws into our society and infrastructure.
Language models are prone to memorisation and copying #
Many of the major benchmarks used by OpenAI and other vendors to guide language model development are human exams such as medical exams or bar exams. These exams primarily test rote memorisation.
Companies like Microsoft and Google are hell-bent on using language models as information systems to help people research and discover facts. This is only possible through even more memorisation and copying of strings of text from the training data sets.
The use of language models for programming requires the memorisation of platform APIs, control syntax, globals, and various other textual elements that are necessary for functioning code.
Together, these requirements mean that current large language models are optimised to memorise and copy strings of text.
As they are purely language models, they exclusively operate in the world of text, not rules, facts, or judgement. To be able to correctly answer the question of who was the first person to walk on the moon, they have to memorise much of that answer as textual fragments.
This bears out in research as performance of these systems degrades when they memorise less.
This is an issue because there is no way to control exactly how these memorised texts get used or even what gets memorised.
Every language model is a stochastic plagiariser, which can cost you your job if you’re a writer, but can result in serious legal issues if you’re a programmer. Current approaches for preventing this kind of copying are not sufficient as they focus on preventing verbatim copies and LLMs excel at lightly rephrasing output to match context.
The problem is that “lightly rephrased” copied text is still plagiarism. For code, that’s still licence contamination. For writing, that’s usually still a fireable offence.
“Inevitable” harms are not inevitable #
Because it seems to be impossible to automatically detect the output of a generative model every aggregator is likely to be flooded with garbage before long.
That means, among others:
- Search engines
- Self-service online stores such as the Kindle store
- Social media
- Sites that collect creative work or art
It’s extremely likely that our information ecosystem will be severely degraded by generative models.
Many people present this as inevitable. But this isn’t exactly true. Once you start to look at the spam that is being generated and the fraud that’s being perpetrated, it starts to look like much of it is created using commercially-available generative models made by vendors such as OpenAI. Self-hosted models do not seem to be nearly as effective for spam or criminality.
If this is correct, that means this problem can be mitigated by targeting the providers of generative models and related services such as OpenAI, Google, Anthropic, and Microsoft. We can prevent them from improving their systems’ effectiveness at fraud, and we can punish them directly for failures to create safe products.
This applies equally to both fraud and spam. The best generative models for criminal use seem to be the commercial ones. Creating and maintaining a complex generative model is extremely expensive, so if stricter regulation manages to mitigate the damage caused by commercial or commercially-adjacent academic systems, then criminals and spammers are, in my opinion, unlikely to be able to come up with effective replacements using their own resources.
On the copyright issue #
Copyright protection for the output #
Beyond the plagiarism problem, copyright is not my primary concern with generative models. Simple prompt-to-output use will result in works that don’t have copyright protection. But edited output almost certainly will be protected. Creative works that are integrated into a larger, copyrighted work are likely to be protected as well. And modified or edited output should result in a copyright-protected final product.
I don’t expect this to be a major issue. It does mean that, legally speaking, you should treat any generative model output the same way as you would a public domain text or image. Using it is fine, but make it your own in some way.
Training data #
Whether it’s legal for companies to train these models on copyrighted works has not yet been answered conclusively.
We don’t know if this is fair use under US copyright law or whether it falls under one of the existing exceptions to copyright law elsewhere. The EU is likely to clarify the issue eventually, but the USA’s ongoing governance dysfunction means we are only likely to get a clear answer to the question through lawsuits.
I expect this to get resolved eventually. Unfortunately there is no answer that would benefit all involved equally. Either tech companies will benefit by being allowed to use existing works without payment while directly competing with – to the point of extermination – the producers of those works. Or the creative industries will benefit by having hard limitations put in place on what the tech industry can do with any and all copyrighted works. Either answer could literally wipe out entire industries.
Non-issues #
They are not on the path to Artificial General Intelligence (AGI) #
The argument for why AGI is science-fiction is too long to hash out in this already too long post, but the short version is that extraordinary claims require extraordinary evidence and those making the AGI claim have provided none. Everything they’ve presented in support of their claim is shoddy or misleading.
This means we aren’t missing a thing if we slow down or abandon this technology.
This week’s links #
Dev #
- "Granularity walkers for textual descriptions"
- "Short session expiration does not help security"
- "aria-haspopup and screen readers - Manuel Matuzović"
- "You Deserve a Tech Union is here! — Ethan Marcotte"
- "Don’t Throw “Consulting Services” Onto Your Website - DaedTech"
- "Accessibility is a part of inclusive design and disability rights) - Eric Eggert"
Media #
- "Dialogue Is Still Really Hard to Hear – Pixel Envy". In my experience, there is generally a clear difference in dialogue audibility between older movies and shows and newer.
- "The New Gatekeepers: How Disney, Amazon, and Netflix Will Take Over Media". A report by the Writer’s Guild of America on the anti-competitive environment that is holding the US studio system back.
- "Netflix and other streamers wield too much power over labor. Use antitrust law to break them up". The old vertical studio system was broken up by the Justice Department. It may be time to do the same with these 21st century behemoths.
- "Netflix and other streamers wield too much power over labor. Use antitrust law to break them up". It’s hard to see how the strikes can be settled equitably and a relatively fair system restored without again invoking antitrust laws to force giant entertainment companies to separate production from streaming distribution.
- "Death Spiral of Hollywood Monopolies - by Alena Smith". “Without regulation of these monopolies, Hollywood will succumb to a death spiral, planting a stake in the heart of the entertainment industry.” Pro-competition regulation, known as “anti-trust” in the US, generate a massive amount of value for the economy.
- "Make Hollywood Great Again - BIG by Matt Stoller". “The best way to understand what’s happening to the US studio system is that it thrived while regulations limited integration and consolidation and immediately began to decline once those regulations were disbanded.”
- "No, streaming is not more expensive than cable TV". This comparison was always bogus, not just because the cable prices were wack, but also because they’re implicitly comparing premium ad-free streaming to ad-supported basic cable.
- "EXCLUSIVE: Naomi Wu and the Silence That Speaks Volumes". “Now that they know that I could be dead in a ditch tomorrow and no one would give a shit or say a word I’m 1000x less safe here.”
- ’The “Hero’s Journey” Is Nonsense - Tales of Times Forgotten’. The “Hero’s Journey” is the Meyers-Briggs of storytelling.
- "Why the Hollywood strike matters to all of us | Cognoscenti"
AI #
- "Artificial Intelligence Lawsuit: AI-Generated Art Not Copyrightable – The Hollywood Reporter". This was exceedingly unlikely to go any other way as US law is pretty settled on this.
- "85. Timnit Gebru Looks at Corporate AI and Sees a Lot of Bad Science - Initiative for Digital Public Infrastructure"
- "Can’t lose what you never had: Claims about digital ownership and creation in the age of generative AI | Federal Trade Commission". Companies deceptively selling such content to consumers are violating the FTC Act. This conduct obviously injures artists and writers, too.
- "Microsoft pulls AI-written article telling tourists to visit the Ottawa Food Bank - The Verge"
- "New York Times considers legal action against OpenAI as copyright tensions swirl". The problem with the dysfunction of US governance is it means the only way many of the questions regarding copyright, models, and training data can be settled is with a long-running lawsuit.
- "The Imminent Enshittification of the Internet". Generative models are an existential threat to aggregators. Y’know, the same aggregators that are pouring billions into making said models.
Photographs #
Took this one using a Carl Zeiss Jena Werra about twenty years ago in Ashton Court, Bristol, IIRC.
Interesting looking clouds over Hellisheiði.