
Now I'm disappointed.

Baldur Bjarnason

So, Robin Sloan has replied to my post, which was itself a response to a post of his that I’d taken to be a serious piece of “AI” commentary.

Taking the original post seriously may have been a mistake.

> So: I think Baldur misreads me a bit

If you make vague statements that replicate the arguments made by “AI” company CEOs who are angling for hundreds of billions of dollars in investment money, then I will read those statements as either supporting or furthering those CEOs’ agenda.

If you didn’t want it read that way, you should have made an effort to distance what you were saying from the standard lines of rhetoric used by those boosting the “AI” scene. You don’t have to go full sceptic; just downplay the anthropomorphism (“They are the writing”) and the hyperbole (“Everything”, “ingenious”, “super science”), and be a bit less dismissive of existing criticism.

Outright stating that critics who have already come to a conclusion are probably wrong and have not been thinking “with sufficient sensitivity and imagination” is not a “reasonable”, “centrist”, or “both sides” position to take. Many of those critics – such as the academics I mentioned in my initial reply – have been working and writing in “AI” academia for years and came to their current positions the hard way, sometimes at the expense of their careers. A statement like that is advocacy, and that means the rest of the argument needs to be read as advocacy.

> First: yes, it is precisely science fiction. Three years ago, the vision of a fluent, formidable software correspondent was science fiction, too.

This may look true from outside the field, but if you go through the academic work on deep learning and language models over the years, you’ll find a clear arc of improvement in fluency and natural language processing. The big surprise about ChatGPT to people in the field wasn’t the existence or performance of the chatbot itself, but its popularity once it was launched.

That we might get a fluent chatbot one day was not science fiction. It was a reasonable possibility based on the field’s progress as early as ten years ago, and practically a given after the release of GPT-2 six years ago.

Saying “We will probably get fluent chatbots” in 2019 was at the very least a plausible statement based on the scientific and academic research at the time.

The same can be said about pretty much everything coming out of Machine Learning today. The features of these models were not science fiction to those paying attention. These technologies have been a long time coming, and the capabilities they have today are built on a foundation of years of research. The “science fiction” aspect almost always comes from hyperbole and overstatement of these capabilities, which are then revealed to be completely unrealistic. Every time these capabilities have been put to thorough peer-reviewed testing, the advancements have been shown to be in line with the general expectations of the field.

This happens so consistently that you’re almost always better off waiting a few weeks (or months, if the original vendor is secretive, like OpenAI, and makes impartial confirmation hard or impossible) for researchers to figure out the model’s true capabilities.

That’s why, when those who have been paying attention to the field say that some of the promises AI vendors are making about Large Language Models are unfounded science fiction, it isn’t something to be taken lightly.

Saying “we will probably get super-scientist AI” in 2025 is, unlike a 2019 statement on fluent chatbots, not a reasonable speculation to make based on current scientific and academic research. It is not based on anything except hyperbole from people with vested interests in inflating an investment bubble. There is no research, nothing peer-reviewed, no science, no rational foundation for assuming that anybody can turn the pattern-matching pseudo-reasoning of today’s LLMs into super-science.

It’s bullshit, of the Harry Frankfurt variety. It’s an extraordinary claim that should not be accepted without extraordinary evidence, but is instead being offered with no evidence and a complete disregard for anything resembling the scientific method or academic practices.

More importantly, it’s also propaganda.

It’s specifically propaganda that is furthering and enabling some of the worst actions of the US government and tech companies.

I could go on listing examples, but I won’t, because covering tech company shenanigans on their own would require thousands of words.

Giving a fact-free statement like “LLMs will give us super-science, so we should give them a few years of free rein” the benefit of the doubt – when these companies are quite obviously going to spend those years looting and irreversibly harming our economies – is deeply irresponsible.

> Second: I don’t think it’s possible to say, with airless certainty, that “there is no path” from language models as we know them to super scientists or super scientist-enablers.

It’s absolutely possible to say that with certainty.

Let’s put it this way: “Tesla will make a hyper-efficient cold fusion reactor before 2030” is a more plausible statement, because at least we know that fusion exists and there’s scientific research indicating that a workable cold fusion reactor might at least be possible.

AI super scientists don’t exist anywhere. Super scientist-enablers don’t exist anywhere. There is no research, science, or data to support the idea that they’re even possible. It’s just a fact-free claim made by people angling for investment money or influence. It’s a statement that’s less plausible than “Tesla will make a hyper-efficient cold fusion reactor before 2030”.

That should make you pause. At least for a moment.

> I laid out my counter-scenario: if, in a few years, there aren’t any signs of meaningful contribution to science from Claude’s successors, I’ll reconsider my position.

Given the harm that tech companies are doing under the auspices of “AI progress”, handing them “a few years” to finish off their attacks on labour, academia, media, and science – when the future they’re promising is utterly implausible and has no basis in science or research – is a deeply unserious position to take.

It’s also very disappointing, which means that – unfortunately – I have to end this exchange on the same note on which it started.