Skip to main content
Baldur Bjarnason

Poisonings, Corporations, and other links

Baldur Bjarnason

For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.

The poisoning of ChatGPT #

Over on the newsletter I wrote about how ChatGPT and OpenAI is overpriced because we have underestimated its defects and security flaws.

The poisoning of ChatGPT

Turns out that language models can also be poisoned during fine-tuning.

Elsewhere #

Friday, I published an extract from my AI book, The Intelligence Illusion, through my newsletter:

AI code copilots are backwards-facing tools in a novelty-seeking industry

It’s the part from the book that goes into some of the issues with using AI copilots and chatbots for software development.

Over the weekend I pulled together a blog post listing all of the writing on AI I’ve done over the months. I plan on updating this one regularly.

My writing on AI; the story so far

I also wrote about the worrying polarisation that’s taking place in AI research, where it looks like the more pragmatic and sensible voices are getting ostracised by the rest.

The polarisation of AI discourse serves nobody except power

The threat isn’t AI; it’s corporations #

So far, AI discourse has consistently divided itself into three camps:

  1. Those who think that these models are too powerful; they only disagree on the polarity. They may disagree about whether the technology is good or bad, but they agree that it’s magical, exponential, and godlike.
  2. Those who think the biggest threat are people—individuals or small groups—and what they will do with the models. They are also the most active at trying to talk others out of any attempt to regulate AI because “it’s too late now; open source models means we can’t stop the crimes.”
  3. Those who consider corporations to be the biggest threat, period. AI modes just happen to be their latest tools at disempowering the workforce and their political opponents. The functionality of the models doesn’t really matter, only their effectiveness at disintermediating labour and bypassing regulation.

It shouldn’t surprise anybody who has been following my writing that I consider the first camp’s beliefs to fall somewhere between science-fiction and superstition, the second camp to be wilfully ignorant about power, and that I fall firmly into the third category.

Thankfully—and I think we have earlier tech shenanigans with privacy, gig economy, and crypto to thank for this—the third camp seems to have quite a bit of traction with a number of interesting articles over the past few days:

Extracts from the book #

I’ve been publishing extracts from The Intelligence Illusion as essays.

The first and most important, was “Artificial General Intelligence and the bird brains of Silicon Valley”. It’s about the AGI myth and how it short-circuits your ability to think about AI ​

The second is about how incredibly common pseudo-science and snake oil is in AI research.

“Beware of AI pseudoscience and snake oil”

You need to be very careful about trusting the claims of the AI industry.

The last one is “AI code copilots are backwards-facing tools in a novelty-seeking industry”

There is a fundamental tension between programmer culture, software development, and how language models work.

Web history #

People have been reevaluating the web’s history, where it is today, how it faces the future, how it relates to blogging and the independent web.

Somewhat adjacent, to the side, of this is Mandy Brown’s essay on egiosm and altruism. It doesn’t quite fit the web history or future theme, but it is about a reevaluation of values.

“The other side of egoism”.

She is easily one of my favourite writers on the web today.

AI model security #

“Poisoning Language Models During Instruction Tuning”

So, large AI models are a security shit-show because they can be poisoned through their training data. Turns out they can also be poisoned through instruction tuning.

“Prompt injection explained, with video, slides, and a transcript”

Between training data/instruction poisoning and prompt injections, language models are a complete security nightmare.

Finally a fast python? #

Mostly vapour at the moment, but fairly convincing vapour. Who wouldn’t like a superfast, easily deployable python variant?

All your check-ins are us #

Here’s a ‘fun’ statistic. Microsoft says that among Copilot users:

40% of the code they’re checking in is now AI-generated and unmodified

www.microsoft.com/en-us/Inv…

As people pointed out on Mastodon, that they have this statistic means that they’re tracking your code from copilot output all the way into the GitHub commit.

“GitHub Copilot AI pair programmer: Asset or Liability?”

Copilot can become an asset for experts, but a liability for novice developers.

Makes the “40% commit copilot suggestions unchanged” stat more worrying.

The open source, open research AI reckoning #

“We Have No Moat And neither does OpenAI”

This is an interesting document, ostensibly a leaked Google doc. There’s an opportunity here for the OSS community to do better than OpenAI or Google and I have to hope we don’t botch it

“Google shared AI knowledge with the world — until ChatGPT caught up”

When he uses Google Translate and YouTube, “I already see the volatility and instability that could only be explained by the use of,” these models and data sets.

The large size of these models might well be counter-productive, both in terms of instability and volatility, but also in terms of performance. Much of the vaunted performance is an illusion down to contamination and the “emergent” abilities seem to be a mirage as well.

“Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says”

​Much of the hand-wringing by elder white dudes in the field is also being exposed as theatrics.

“‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts”

The Intelligence Illusion by Baldur Bjarnason

What are the major business risks to avoid with generative AI? How do you avoid having it blow up in your face? Is that even possible?

The Intelligence Illusion is an exhaustively researched guide to the risks of language and diffusion models.

The Business Risks of Generative AI