
Keeping up with and assessing AI research (links & notes)

Baldur Bjarnason

I’m not going to comment on That Banking Thing other than to say that this entire saga just showed us, again, that Miles Bron in The Glass Onion was an accurate representation of a certain type of tech/startup/VC guy.

This is exactly the sort of mess, and resolution of said mess, that is caused by that kind of ignorance, coupled with the power we keep giving to those ignoramuses.

Anyway, I digress…


Why you should ignore most AI research you hear about on social media #

If you find it hard to keep up with AI research, here’s a foolproof rule-of-thumb for deciding which studies to ignore:

If a majority of the authors work for the AI vendor or a major partner, you know it:

Read more…


A 4-Monthiversary sale for Out of the Software Crisis #

It’s almost 4 months since I published my book “Out of the Software Crisis”.

I’ve been really happy with the response and wanted to do something cool to mark the occasion.

But I couldn’t come up with any cool ideas, so I decided to just do a sale instead: 20% off! Sale expires 16:00 GMT, 15 March 2023. (Sale is now over.)

Until then, you’re benefiting from my inability to come up with something clever.

Out of the Software Crisis: Systems-Thinking for Software Projects (20% off) (Sale is now over.)

How do you know if you’re going to like it?

Well, the best way is to read bits of it! Which you can do because I’ve posted extracts as essays on my website.

If you enjoy reading these, you’re going to enjoy the book:

These few aren’t extracts from the book but are very much in the same vein and style.

Reading through these should give you a very clear idea of whether you’ll like the book or not.

Don’t like the essays? Avoid the book.

Enjoy the essays? Then the book has more.

Anyway, the book is on sale for the next few days, until 16:00 GMT on the 15th of March 2023. Until then, 20% off. (Sale is now over.)


It Took Me Nearly 40 Years To Stop Resenting Ke Huy Quan - Decider #

This is so touching.


Craft vs Industry: Separating Concerns - hello, yes. I’m Thomas Michael Semmler: CSS Developer, Designer & Developer from Vienna, Austria #

Seems to be asking exactly the sort of questions we should be asking ourselves at this time.


Lovely to see AI critics split into adversarial factions over Lilliputian which-end-of-the-egg details while the hype crowd stands united around bullshit and false promises.


Vanderbilt Apologizes for ChatGPT-Generated Email #

Like I’ve said before, the technical term for somebody who uses AI to write their emails is “asshole”.


Startup Winter is Coming - Stacking the Bricks #

A few years old but very accurate.


The long shadow of GPT - by Gary Marcus #

I’ve tried hard in my research to be even-handed, to get a realistic sense of the pros and cons of this tech.

But, man, does it seem tailor-made for fraud and misinformation.

These models absolutely do have practical uses. They are amazing at many of the things they do, which is more than you could say for recent bubbles like cryptocoins.

But I have never seen a technology so perfectly suited to fraud, disinformation, and outright abuse. Yeah, even more than crypto. These models are much more accessible, much easier to use, and have a much wider range of abuse and fraud vectors.

I’m finding it hard at this point to see how this can be a net win.

Like, a 13-year-old dickweed isn’t likely to be able to engage in a rug pull and is more likely to be the victim, not the perp, in a crypto scam.

But using his gaming computer to download a bunch of photos off a classmate’s social media account and then using Stable Diffusion + DreamBooth to create deepfakes of her is so easy that I’m convinced it’s already happening.

And that’s just one potential abuse vector for one of these models. They all have dozens, if not hundreds, of abuse vectors.

At the moment, I think the only thing that’s truly holding back large-scale abuses and fraud is the fact that these tools are slow and expensive.

If prices continue to drop (which IMO is necessary for financial viability) then we’re in for a world of hurt.

Abuse and deception tactics with AI are becoming quite sophisticated. Already last autumn, researchers started to notice organised efforts to use AI image generation for astroturfing.

Now, with improved image generation and lower OpenAI API prices, it seems very likely that these astroturfing systems will very quickly become more sophisticated and more realistic. Autogenerated social media profiles. Realistic family photos. LinkedIn profiles more convincing than your own.


Kodsnack discussion with Tim Urban and Torill Kornfeldt #

A really interesting discussion between an AI enthusiast and a biologist. Poor audio, but worthwhile.


The Best of the Rest #