The AI is an American
“AI and the American Smile: How AI misrepresents culture through a facial expression”
Years ago, I had a go at explaining to somebody that AI colourisation inevitably erases variation and minorities from history.
AI-generated images are that x1000. Everything becomes American.
This goes well beyond just AI-generated images (though those have so so so many issues).
The writing from these systems is extremely American as well, in style, structure, and tone, but most importantly in how it encodes context and information.
The notion that text is a projection of reality, that it encodes the context it represents, is an American one. The US and Canada are low-context cultures: there is an ingrained assumption that if it isn’t in the text, it isn’t there. Text is expected to contain its own context, which gives the claim that it projects reality a surface-level plausibility.
Most other cultures, even other English-speaking ones, are higher context than the US. They assume that there are things you don’t need to say, because the reader has enough context to read between the lines, so to speak.
In a low-context culture, an employee noting that a business trip falls on their daughter’s birthday would merely prompt an apology, possibly with a remark about what a good sport the employee is for being willing to work while missing out on such an important event.
Because the employee didn’t specifically ask for the date of the trip to be changed, the American boss will assume the note is merely informative, possibly a gesture of interpersonal bonding. After all, who doesn’t make sacrifices for work?
In a high-context culture, the same note would be understood as a request to change the date of the trip, even though that request was never stated outright.
Same text, completely different interpretations based on a cultural understanding of context.
This obviously causes conflict in real-life global workplaces.
But in terms of AI, this means that the training data with the most context embedded directly in the text, and therefore the greatest effect on the AI’s eventual behaviour, is likely to be American writing. Text in a low-context culture has to carry more information within it than text in a high-context culture.
All the higher-context English-speaking cultures are going to get lost in the mix.
What is likely to happen when you start to use an American AI chatbot to generate emails and text for a British or Indian workplace is that everybody starts to sound American. Not in the writing style, but in the amount of context carried in the text.
The best-case scenario is that everybody finds this rude and insufferable. The worst-case scenario is that it leads to even greater Americanisation of global culture.
What you all should take note of is how little AI companies seem to care.
They are fundamentally making language tools, but they don’t seem to care at all about the dynamics of the languages themselves.
How these languages are used; how information transmission varies from culture to culture; how much a language culture relies on prior socialisation to create a shared high context; all of these issues lie at the heart of how these tools could affect our societies…
But vendors don’t care.
For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.