How to regulate AI
The problem with regulating AI isn’t in coming up with effective regulations. That’s the easy part.
The problem is that the tech industry and the authorities don’t want effective regulation, because “effective” in this context always means “less profitable for the industry”.
That’s why they tend to either come up with ineffective half-measures or measures that strengthen the incumbents’ positions.
But, what if you didn’t care about protecting Microsoft’s ability to profit from AI? What would you do then?
If you’re like me and assume that the biggest source of AI-related problems will be the companies that develop and integrate AI into their products, not individual criminals and fraudsters, then you have to directly attack the tech industry’s ability to profit from the technology.
First, you’d clarify that, for the purposes of Section 230 protection (or the equivalent in your jurisdiction), whoever hosts an AI as a service is responsible for its output as a publisher. If Bing Chat says something offensive, Microsoft would be as liable as if an employee had said it.
Second, you’d pass a law requiring tools that integrate generative AI to attach disclosures to the content they generate.
- Gmail/Outlook should pop up a notice when you receive an email that was generated by their AI.
- Word/Docs should add metadata fields and show a notice when you open a file that was written using built-in AI capabilities.
- AI chatbots have to disclose that they are bots.
- GitHub Copilot should add a machine-parsable code comment to generated code (see the sketch below).
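To make that concrete, here’s a minimal sketch of what machine-parsable disclosure could look like. The `X-AI-Generated` header and the `@ai-generated` comment tag are invented for illustration, not existing standards; a real rule would have to specify the format.

```python
# A rough sketch of machine-parsable AI disclosure.
# The header name "X-AI-Generated" and the comment tag "@ai-generated"
# are hypothetical examples, not existing standards.
from email.message import EmailMessage


def mark_email_as_ai_generated(msg: EmailMessage, model: str) -> EmailMessage:
    """Attach a disclosure header to an outgoing email drafted by an AI tool."""
    msg["X-AI-Generated"] = f"true; model={model}"
    return msg


def ai_disclosure_comment(model: str) -> str:
    """Return a machine-parsable comment a code assistant could prepend
    to the code it generates."""
    return f"# @ai-generated model={model}"


if __name__ == "__main__":
    msg = EmailMessage()
    msg["Subject"] = "Quarterly report"
    msg.set_content("Draft produced by the assistant, please review.")
    mark_email_as_ai_generated(msg, "example-model-1")
    print(msg["X-AI-Generated"])                    # true; model=example-model-1
    print(ai_disclosure_comment("example-model-1"))  # "# @ai-generated model=example-model-1"
```

The exact format matters less than it being consistent and trivial for mail clients, editors, and linters to detect.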
You could always remove the metadata, but doing so would establish an intent to deceive.
The point isn’t to create metadata that’s impossible to remove. The point is to discourage the creation of products that automate deception. This differs from cookie banners in that, for better or for worse, modern web users arrive at a site expecting to be tracked, but they still expect emails from co-workers to have been written by those co-workers. So, the point is to make the practice socially embarrassing and prevent it from becoming the norm.
All the announced products from Google, Microsoft, and AI startups are primarily for automating deception: chatbots that don’t say they’re AI, AI-generated email and docs, pictures that are presented as photographs.
That’s their go-to-market strategy: automate and normalise deception.
They should be made to understand that this is not okay.
Finally, you’d mandate that all training data sets be opt-in (or that all of their contents be released under a permissive license) and made public. A sketch of how that could work follows the list below.
- Heavy fines for non-disclosure.
- Heavy fines for violating opt-in.
- Even heavier fines for lying about your training data set.
- Make every AI model a “vegan” model.
- Remove every ethical and social concern about the provenance of, and the rights to, the training data.
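As a sketch of how opt-in-by-default and a public training-data manifest could fit together mechanically, here’s a toy ingestion filter. The `consent` and `license` fields, and the set of acceptable licenses, are hypothetical stand-ins for whatever standardised signal a regulation would actually define.

```python
# A minimal sketch of opt-in, default-deny training-data ingestion.
# The "consent" and "license" fields and PERMISSIVE_LICENSES are hypothetical;
# a real scheme would need a standardised consent signal.
import json
from typing import Iterable

PERMISSIVE_LICENSES = {"CC0-1.0", "MIT", "Apache-2.0"}


def eligible(doc: dict) -> bool:
    """A document enters the training set only if it explicitly opted in
    or carries a permissive license. Anything else is excluded by default."""
    return doc.get("consent") == "opt-in" or doc.get("license") in PERMISSIVE_LICENSES


def build_training_set(docs: Iterable[dict], manifest_path: str) -> list[dict]:
    kept = [d for d in docs if eligible(d)]
    # Publish a manifest of everything that went in, so the disclosure
    # requirement (and the fines for lying about it) can actually be audited.
    with open(manifest_path, "w") as f:
        json.dump(
            [{"id": d["id"], "license": d.get("license"), "consent": d.get("consent")}
             for d in kept],
            f,
            indent=2,
        )
    return kept


if __name__ == "__main__":
    docs = [
        {"id": "a", "consent": "opt-in"},
        {"id": "b", "license": "MIT"},
        {"id": "c"},  # no signal: excluded by default
    ]
    print([d["id"] for d in build_training_set(docs, "manifest.json")])  # ['a', 'b']
```

The key property is the default: anything without an explicit signal stays out, and everything that goes in is recorded somewhere auditors and rights holders can check.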
—“This would nuke the entire AI industry from orbit”.
I don’t think so?
It’d pop the current bubble, absolutely, but it would also set the industry on a long-term course that’s more likely to result in genuine and sustainable breakthroughs in machine learning.
It would force companies to design AI integrations properly, with more forethought, making those integrations much more likely to deliver genuine productivity gains.
Automating deception is more likely to be destructive than productive.
If you think fraud and deception are the point of AI, as they were with the crypto industry, then yes, this will kill that industry.
But, if you think there’s more to the technology, that it could lead to major improvements in the UX of modern computing, then you should embrace any measure that sets the industry on a more sustainable path than the “let’s speed-run the crypto bubble and see if we can make it even bigger” course it’s currently on.
For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.