
A note on the EU AI Act

There’s an announcement from the EU doing the rounds saying that the EU AI Act “entered into force” on 1 August 2024.

This is true, but not in the way most people think.

  • When they say that the EU AI Act “enters into force”, what they mean is that it has officially become law and the work to build its regulatory infrastructure is beginning.
  • The act won’t start being enforced for another six, twelve, and 24 (or 36) months – for prohibited practices, general purpose models, and high risk systems, respectively.

High risk

If you’re making “AI” for biometrics, core infrastructure, admissions or assessment in education, job recruitment, job evaluation, life or health insurance, border control, or a few other cases that involve using ML to profile or assess people, then your product is high risk.

Effectively, if you’re positioning your product to be in a place where it can royally fuck people’s lives up through automated decision-making, profiling, or assessment, then it’s high risk.

Vendors of these products have 24 months (or 36 for a subset of “high risk”) to get their asses in gear, improve their documentation and record-keeping, support human oversight, and generally have the sort of quality control you’d expect from a high risk product.

General Purpose

Vendors of General Purpose “AI” – basically LLMs at the moment – have twelve months to prepare decent documentation, a published policy that describes how they’re following the EU Copyright Directive, and a “sufficiently” detailed summary of the media used for training. (What “sufficient” means is 🤷🏻‍♂️ but it’s a loophole big enough to float the entire island of Ireland through, with space for the Shetlands to spare.)

You might think, based on the panicked whinging coming from US tech cos, that these requirements are onerous, but they’re getting twelve months to write documentation and pinky-swear that they’ll respect copyright law. Hardly a business-destroying set of requirements.

General Purpose systems that pose “systemic” risks have additional requirements.

WTF are “systemic” risks? Well, it sounds like it’s mostly “Self-Aware AI Doomsday” bullshit, but the long and short of it is that most of the bigger General Purpose models will need to do more nonsense “will this glorified Excel spreadsheet somehow escape from the data centre and walk around like that hot chick from Ex Machina” testing and put in place the basic computer security measures every big vendor should have in the first place.

The EU AI Act is the bare minimum

If none of this sounds that onerous – after all, demonstrating that a system that’s fundamentally incapable of pre-emptive action isn’t capable of pre-emptive action is stupendously easy, and documentation plus basic security is what they should be doing anyway – remember that US tech is fundamentally messed up and dysfunctional.

The industry is 50% true believers who are trying to prove that their personal best buddy Clippy is on the verge of personhood and 50% grifters who are promising legal slavery of thinking individuals in their core marketing.

They don’t want to demonstrate conclusively that their sexy doomsday robot fantasies are nonsense. They want to be able to pretend they’re true. Being forced to demonstrate that they’re untrue goes against their core beliefs and their main marketing strategy, respectively.

Many of the people involved are also whiny, self-centred narcissists prone to flying into a rage whenever anybody tries to set even the softest and most manageable of boundaries. If you’ve ever had an alcoholic or addict relative or a toxic narcissist partner: that’s pretty much every exec in tech and most execs in US business today.

The EU in this scenario is a criminally abused spouse who is trying to set the softest of boundaries. Like “don’t beat me up on Saturdays before I go out with my friends.”

The requirements look entirely reasonable considering these products are being positioned to become central to modern software – they’re effectively positioned to become the entirety of modern computing.

That this is a fundamentally bad idea isn’t addressed by the act in any meaningful way, but the US tech industry will throw a tantrum anyway.

Edited: mixed up the high risk and prohibited timeline.
