Three factors of AI anthropomorphism
A major issue with this latest wave of AI systems is anthropomorphism. This, combined with automation bias, short-circuits our ability to properly assess the work these tools are doing for us. The research on automation bias goes back decades in human factors, and its findings are unsurprising, at least to many with a design background.
But anthropomorphism looks trickier.
The most convincing theory of anthropomorphism that I found was the “three-factor theory”, which seemed to have some grounding in experimental studies. The idea is that our tendency to imbue objects and animals with humanlike characteristics is triggered by three different factors. You don’t need them all, but the effect seems strongest when all three are combined.
- Understanding. Our understanding of behaviour is grounded in how we understand our own. So, when we seek to understand why something does what it does, we reach first for an anthropomorphic explanation. We understand the world as people because that’s what we are. This becomes stronger the more similar the thing seems to us.
- Motivation. We are motivated both to seek out human interaction and to interact effectively with our environment. These motivations reinforce the first factor. When we lack a cognitive model of how something works, but are strongly motivated to interact with it effectively, the two reinforce each other. The more uncertain you are of how that thing works, the stronger the anthropomorphism. The less control you have over it, the stronger the anthropomorphism.
- Sociality. We have a need for human contact, and our tendency to see human behaviour in the environment around us seems to increase proportionally with our isolation.
AI chatbots on a work computer would seem to be a perfect storm of all three factors:

1. They are complex language models, and we simply have no cognitive model of a thing that has language but no mind.
2. They are tools, software systems, that we need to use effectively but which behave randomly and unpredictably. Our motivation to control and use them goes unfulfilled.
3. Most people’s work environment is one of social isolation, with only small pockets of social interaction throughout the day.
Combined, this creates the Eliza Effect, which is so strong and pronounced that Joseph Weizenbaum, who made Eliza, described it as “delusional”.
All of which is to say that I have very strong doubts about our ability to effectively and safely use AI chatbots. They both habitually generate absolute garbage and seem to short-circuit our ability to properly assess the garbage.
This would seem to be a particularly dangerous combination.
None of the above factors are affected by the fact that you “know” that something isn’t human, or by knowing that the internal mechanisms of a thing make humanlike behaviour genuinely unlikely.
If you know that something isn’t a mind, but don’t have a cognitive model for how it works, that still triggers the first factor, and your motivation to understand will still reinforce it. Knowledge doesn’t protect you; only an internalised, effective cognitive model does.
The people around Weizenbaum knew exactly what Eliza was and how it was made but they fell into the “delusional” conviction that it was humanlike with an ease that worried him.
Which is relatable, because it is worrying.
For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.
“From Bitter Ground” #
My friend, Tom Abba, has released a narrative experience that weaves together a website and a book of handmade collages. All for £25 (UK).
That AI is providing some companies with rhetorical cover for layoffs they were planning anyway does not mean those jobs will be replaced by AI, nor does it even mean that replacement is genuinely the plan.
(AI is bloody expensive)
A note on Deno versus Node #
“Deno vs. Node: No One is Ready for the Move”
I actually prefer Deno these days; it’s a much nicer experience. But I also think that Node’s massive community is its biggest liability.
Node has too many constituencies.
It serves front-end developers, despite being an incredibly poor match: it’s about as different from the browser environment as a JS engine can get. It’s also used to build front-end tools, which is a different need that places different demands on a runtime.
The Node environment is its own idiosyncratic thing with odd offshoots. Node can be a very different thing for two developers depending on when they got started.
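To make that era split concrete, here’s a minimal sketch of my own (not from any particular project; the file names and the `package.json` read are just illustrative): a developer who started early reaches for CommonJS and callback APIs, a newer one reaches for ES modules and promises, and both are “normal Node”.

```ts
// --- old-style.cjs: CommonJS module, callback-first Node API ---
const fs = require("node:fs");

fs.readFile("package.json", "utf8", (err, data) => {
  if (err) throw err;
  console.log(JSON.parse(data).name);
});

// --- new-style.mjs: ES module, promise-based Node API ---
import { readFile } from "node:fs/promises";

const data = await readFile("package.json", "utf8");
console.log(JSON.parse(data).name);
```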
Then you have Electron developers, who are joined at the hip with Node.
Finally, there’s the isomorphic crowd, who are building hacky hodge-podge systems that awkwardly run both on Node and in the browser.
Node needs to serve them all, and it does so pretty badly.
Deno, conversely, is a browser environment. Even as a server platform it uses browser idioms for server features. It feels *much* more cohesive as a result.
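As a rough sketch of what “browser idioms for server features” means in practice (my own illustration, not from any linked piece): the same trivial HTTP server in both runtimes. Deno’s handler deals in the web-standard `Request`, `Response`, and `URL` types you’d use in browser code; Node’s deals in its own `req`/`res` objects.

```ts
// --- server.node.mjs: Node's own http API and req/res objects ---
// Run with: node server.node.mjs
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain" });
  res.end(`You asked for ${req.url}`);
}).listen(8000);

// --- server.deno.ts: the same server using web-standard types ---
// Run with: deno run --allow-net server.deno.ts
Deno.serve({ port: 8000 }, (req: Request) => {
  const url = new URL(req.url);
  return new Response(`You asked for ${url.pathname}`, {
    headers: { "content-type": "text/plain" },
  });
});
```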
Node can’t be “updated” to get this kind of cohesion.
All of which is to say that Deno will probably get stomped into the ground by Node, because we can’t have nice things.
Software development links #
- “The Dark Side of the Mac App Store: How Scam Apps and Shady Developers Are Preying on Users”. App stores are more of a liability than a benefit.
- “Parrots taught to video call each other become less lonely, finds research - Animal behaviour - The Guardian”. These sorts of experiments are genuinely important research for understanding animal cognition. Also a little bit sad, as it shows that parrots are intelligent social beings.
- “Adactio: Journal—Read-only web apps”
- “Is Critical Thinking the Most Important Skill for Software Engineers? - The Pragmatic Engineer”
- “Offline Is Just Online With Extreme Latency - Jim Nielsen’s Blog”
- “The Calm Web: A Solution to Our Scary and Divisive Online World - Calibre”
AI links #
- “A quote from Dan Sheehan”
- ‘Discord’s New “AI” Chatbot Is a Useless, Miserable Nightmare’
- “Evaluating Verifiability in Generative Search Engines”. “On average, a mere 51.5% of generated sentences are fully supported by citations and only 74.5% of citations support their associated sentence.” The difference between the AI crowd and the rest of us is that they’re going to think this is an excellent result.
- “OpenAI’s hunger for data is coming back to bite it”. “These methods, and the sheer size of the data set, mean tech companies tend to have a very limited understanding of what has gone into training their models.”
- “Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say”. ‘One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.’
- “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems - The New York Times”. If you had any doubt that these language models were biased as hell, it turns out Reddit is a big part of their training data.
- “Google calls for relaxing of Australia’s copyright laws so AI can mine websites for information”. Very few countries need new AI regulation. They just need to be less lax in enforcing the laws they already have.
- “AI Users Are Neither AI Nor Users - by Debbie Levitt - Apr, 2023 - R Before D”. “These are not users. Period, end of story.”
- “Sorry AI, but User Research is More Than Just Predictive Text”. Feedback would be generic at best, wrong at worst.
- “Competition authorities need to move fast and break up AI”. “Without the robust enforcement of competition laws, generative AI could irreversibly cement Big Tech’s advantage, giving a handful of companies power over technology that mediates much of our lives.”
- “Google CEO peddles #AIhype on CBS 60 minutes - by Emily M. Bender”. “If you create ignorance about the training data, of course system performance will be surprising.”