Deno, Shakespeare's Emoticon, Return to Office, and other links and notes
Newsletter #
“Notetaking, Tagged Templates, and How Deno is a Clear Improvement Over Node” #
I sent this out yesterday. It has a tiny update on Colophon Cards but is mostly about how useful I’m finding Deno.
Development and productivity #
"Vibe Driven Development" #
It comes down to this annoying, upsetting, stupid fact: the only way to build a great product is to use it every day, to stare at it, to hold it in your hands to feel its lumps.
Yeah. Works for pretty much any creative process out there.
Why return to the office causes a drop in productivity #
- It’s well established that open offices are really bad for productivity.
- In spite of this, switching to open offices became extremely popular among companies.
So, it shouldn’t have come as a surprise to see productivity deteriorate when people were forced to return to the office.
A corollary is that the decline in productivity during COVID was obviously down to COVID itself, not to working from home.
Experimental conditional tagged templates for generating HTML strings #
An experiment. By me. This is all about seeing if I can improve the ergonomics of using tagged templates to render HTML strings by avoiding the ternary operator.
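Roughly, the problem and the idea look something like this. (A minimal sketch, not the module’s actual API; the `html` tag and the `when()` helper below are just illustrative stand-ins.)

```ts
// Bare-bones tag: interleave the literal chunks with the interpolated values.
function html(strings: TemplateStringsArray, ...values: unknown[]): string {
  return strings.reduce(
    (out, chunk, i) => out + chunk + String(values[i] ?? ""),
    "",
  );
}

// Hypothetical helper: render a sub-template only when the condition holds.
function when(condition: boolean, template: () => string): string {
  return condition ? template() : "";
}

const user = { name: "Ada", isAdmin: true };

// The ternary inside the template literal is what hurts the ergonomics:
const withTernary = html`<p>${user.name}${
  user.isAdmin ? html`<em> (admin)</em>` : ""
}</p>`;

// The same output, with the conditional pulled out into a helper:
const withHelper = html`<p>${user.name}${
  when(user.isAdmin, () => html`<em> (admin)</em>`)
}</p>`;

console.log(withHelper); // "<p>Ada<em> (admin)</em></p>"
console.log(withTernary === withHelper); // true
```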
“SpiderMonkey Newsletter (Firefox 110-111)” #
This includes support for modules in Workers and for Import Maps.
The top two items on my Firefox support wishlist are coming! 🙌🏻
“Safari 16.4 Beta Release Notes - Apple Developer Documentation” #
Guessing most people will talk about the web app stuff, but there’s also import maps and scroll-to-text links. Also the Compression Streams API.
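For anyone who hasn’t seen the Compression Streams API, it exposes gzip and deflate as native transform streams. A rough sketch of the round trip (nothing here is Safari-specific):

```ts
// Gzip a string with the built-in CompressionStream, no library needed.
// (Uses top-level await: run as a module, e.g. with Deno.)
const source = new Blob(["Hello, Compression Streams!"]);

const gzipped = await new Response(
  source.stream().pipeThrough(new CompressionStream("gzip")),
).arrayBuffer();

// And back again with DecompressionStream.
const roundTripped = await new Response(
  new Blob([gzipped]).stream().pipeThrough(new DecompressionStream("gzip")),
).text();

console.log(roundTripped); // "Hello, Compression Streams!"
```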
“Announcing Squire 2.0—Fastmail releases next generation of open-source rich text editor” #
Looks interesting. I wonder what its accessibility is like?
After looking into it a bit, I get the impression that accessibility is up to whoever integrates the editor, since it seems mostly to be down to how the toolbar gets implemented?
The AI mess continues #
“Can We Trust Search Engines with Generative AI? A Closer Look at Bing’s Accuracy for News Queries” #
Sometimes the references simply do not support the claim being made.
“Introducing the AI Mirror Test, which very smart people keep failing” #
Exacerbated by how text works. There is no intelligence in any text, but we are trained to reconstruct reasoning and thoughts from abstract symbols.
“How Not to Test GPT-3” #
GPT-3 has almost certainly read about this experiment again and again and again.
The answer for every amazing “AI is sentient!” result is that the test was already in the training data.
“Defending against AI Lobbyists - Schneier on Security” #
I’m starting to think that there might be some unsafe uses of this AI thing.
“Collections: On ChatGPT” #
This is a really solid overview of the current state of AI and an analysis of its potential impact on academia.
It’s highly quotable:
But calling this a hallucination is already ascribing mind-like qualities to something that is not a mind or even particularly mind-like in its function.
ChatGPT is such a mess of academic dishonesty that it isn’t even necessary to prove its products were machine-written because the machine also does the sort of things which can get you kicked out of college.
ChatGPT does not understand the logical correlations of these words or the actual things that the words (as symbols) signify (their ‘referents’)
Boosters of this technology frequently assume applications in fields they do not understand.
“Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT” #
Well, this is an interesting wrinkle.
’Bing: “I will not harm you unless you harm me first”’ #
A search engine that summarizes results is a really useful thing. But a search engine that adds some imaginary numbers for a company’s financial results is not.
The intelligence illusion #
“There is no meaning in text, it’s all constructed, but how?” has been a core debate in the humanities post-WW2.
What role does social context have? Intertextuality? Your knowledge of the author? Etc.
There is no intelligence in text. There is no reasoning, no thoughts, no theory, nothing but abstract symbols we’ve trained ourselves to interpret as concepts.
And the eventual reconstruction is never complete, always skewed, always different from the author’s intent in substantive ways.
I really can’t stress this enough, because it’s a tough concept that many have trouble accepting: there is nothing in text. It doesn’t have thoughts, logic, reasoning, emotion, ideas, facts, or concepts. It is abstract symbols intended to represent many of those things, and we’ve trained our brains from childhood to make those associations.
And we get it wrong almost every time. Our reading of any given text is never a perfect match for what the author intended.
So, now autocompletes come around, masquerading as AI, constantly generating text based on patterns in existing bodies of text, and we read it the same way we read any other text.
By assuming that the abstract symbols represent intelligence.
So, our brains go to work and construct order, emotions, and reasoning out of the randomly generated text, and it sort of works because it’s based on patterns that were originally constructed by someone capable of reasoning and emotion.
But there’s nothing there, nothing behind it.
This is why it’s incorrect to say that the AI is hallucinating. It’s we who are hallucinating. We are hallucinating an intelligence where there is none because reconstructing the thoughts that drove the creation of a text is what we’ve been practicing all our lives.
Any intelligence we see in it is our own.
The last couple of times I had this strong a sense of “wow, people are really buying into the hype with cult-like fervor” were 2007, during the Icelandic financial bubble, and 1999, during the dot-com bubble. I don’t think crypto came close, even at its peak.
Just plain interesting to read #
“Shakespeare’s Missing Smile – Terence Eden’s Blog” #
When did we lose Shakespeare’s emoticon?
“Tech Was Supposed to Make Cars Safer. It Didn’t Deliver.” #
Cars need less software, not more.