AI and Asbestos: the offset and trade-off models for large-scale risks are inherently harmful
“AI ethicists are speaking out, but are we listening?”
Well, yes and no. People are listening, but many in tech aren’t. All too many think they can cherry-pick “just the good stuff” without contributing to the massive harms, but they can’t and they are.
Our societies keep taking incredibly destructive actions and the people who enable them generally rationalise them using one of two strategies.
- They buy into the “offset” model of large-scale risks. Your destructive activity in one place is “offset” by a beneficial activity elsewhere. The destruction still happens, and you usually don’t have any concrete evidence that the benefit actually happens, so more often than not it’s purely a rationalisation for destroying your immediate society.
- They buy into the “trade-off” model where something destructive or toxic is deemed “worth it” if it has some other benefit.
Neither of these rationalisations is ultimately “rational”. They’re just excuses for shitting in your own back yard and feeling good about it.
Or, an excuse for shitting in your neighbour’s back yard and getting away with it.
Asbestos in the US was for the longest time the poster child of the trade-off rationalisation. The US only banned asbestos this year, despite the massive harms every stage of its acquisition, production, and use presents to human health.
That’s because the US culturally believes in the trade-off model. And, unfortunately for those of us outside of the US, its culture has always had a tendency to spread to other countries.
Tech is fundamentally an evangelical US culture and people in tech have a strong tendency to rationalise the use of technology using the offset or trade-off models.
It’s justified because we’re doing this good thing as well.
It’s okay because, even though this destroys your job, your political system, the arts, and your local community, it makes me more productive. Worth it.
They default to a “rational” optimistic scepticism where they acknowledge the harms of the tech but focus on the potential benefits, because a core unspoken tenet of their ideology is that bad things are okay if they also benefit somebody – especially if that somebody is themselves.
Optimistic sceptics are more dangerous than “AI” true believers. They sound more plausible, but the harms from what they’re proposing are exactly the same. They’re just arguing that you should let them harm you because it personally benefits them.
It’s not only irrational, it’s patently offensive.
I’m going to repeat that for emphasis: the harms from what optimistic sceptics propose are exactly the same as the harms from what true believers propose.
The only difference is that the optimistic sceptics sound more plausible because they acknowledge the harms, unlike the true believers. They’re still ultimately proposing to carry on regardless, just like the true believers.
That makes them even more destructive because they’re more likely to convince the people who have the power to decide.
I won’t be linking to the post itself because I don’t want to direct an angry crowd at anybody, but this quote is a good example of the kind of thinking that is common in tech:
> I’ve been noodling with various AI tools off and on for a year now. Most of my experiences have not been what I’d consider extraordinary. But there are three use cases that I think have real potential.
I’m citing it because it’s so typical of the genre and sounds so reasonable. But, to get a sense of how misguided it is to think like this, replace “AI” in the text with “asbestos”.
> I’ve been noodling with various asbestos tools off and on for a year now. Most of my experiences have not been what I’d consider extraordinary. But there are three use cases that I think have real potential.
Nothing in the text prevents the harms. It’s effectively advocating that we find good uses for the tech regardless of how harmful it is in other contexts.
This is the playbook for harmful tech everywhere:
- First the tech is presented as borderline magical.
- Then we discover that it isn’t nearly as capable as presented.
- And we find out that there are substantial harms.
- People in tech focus on finding positive uses of the tech, as if finding constructive uses made the harms magically disappear.
It’s this last hand-wave that causes the most damage. We know well enough these days to take the proclamations of the true believers with a grain of salt, but the “reasonable” contingent still holds considerable sway.
But we need to remember that they’re also promising magic, just like the true believers.
Where the believers promise magic functionality, the “reasonable” promise magic harm prevention.
The “reasonable” perspective is just as unrealistic and wishful as that of the true believer who believes in miracles. They are both assuming miracles. One’s just being a bit more subtle about it.
Every industry campaign against an asbestos ban used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and in doing so implicitly assumed that deaths and destroyed lives were a low price to pay.
This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.
This post was originally published as a member preview on Steady.