
Artificial General Intelligence and the bird brains of Silicon Valley

Baldur Bjarnason
This page was originally published elsewhere, but is republished here for archival purposes.

This essay was edited out of chapters two and three of my book, The Intelligence Illusion: a practical guide to the business risks of Generative AI, with minor alterations to make the two parts more cohesive together.

The problem is, if one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language (independent of the model). Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.

Emily M. Bender, Timnit Gebru, et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”[1]

Bird brains have a bad reputation. The diminutive size of your average bird and its brain has led people to assume that birds are, well, dumb.

But bird brains are amazing. Birds commonly outperform mammals with larger brains at a variety of general reasoning and problem-solving tasks.[2] Some by a large margin.[3] Their small brains manage this by packing numerous neurons into a small space using structures that are unlike those you find in mammals.[4]

Even though birds have extremely capable minds, those minds are built in ways that are different from our own or those of other mammals. Similar capabilities; different structure.

The ambition of the Silicon Valley AI industry is to create something analogous to a bird brain: a new kind of mind that is functionally similar to the human mind, possibly outperforming it, while being built using very different mechanisms. Similar capabilities; different structure.

This effort goes back decades, to the dawn of computing, and has had limited success.

Until recently, it seems.

If you’re reading this, you’ve almost certainly interacted with a Generative AI, however indirectly. Maybe you’ve tried Bing Chat. Maybe you’ve subscribed to the paid tier for ChatGPT. Or, maybe you’ve used Midjourney to generate images. At the very least you’ve been forced to see the images or text posted by the overenthusiastic on social media.

These AI models are created by pushing an enormous amount of training data through various algorithms.

What comes out the other end is a mathematical model of the media domain in question: text or images.

You know what Generative AI is in terms of how it presents to you as software: clever chatbots that do or say things in response to what you say: your prompt. Some of those responses are useful, and they give you an impression of sophisticated comprehension. The models that generate text are fluent and often quite engaging.

This fluency is misleading. What Bender and Gebru meant when they coined the term stochastic parrot wasn’t to imply that these are, indeed, the new bird brains of Silicon Valley, but that they are unthinking text synthesis engines that just repeat phrases. They are the proverbial parrot who echoes without thinking, not the actual parrot who is capable of complex reasoning and problem-solving.

A zombie parrot, if you will, that screams for brains because it has none.

The fluency of the zombie parrot—the unerring confidence and a style of writing that some find endearing—creates a strong illusion of intelligence.

Every other time we read text, we are engaging with the product of another mind. We are so used to the idea of text as a representation of another person’s thoughts that we have come to mistake the writing for the thoughts themselves. But writing isn’t thought. Text and media are tools that authors and artists create to let people change their own state of mind—hopefully in specific ways to form the image or effect the author was after.

Reading is an indirect collaboration with the author, mediated through the writing. Text has no inherent reasoning or intelligence. Agatha Christie’s ghost does not inhabit the words of Murder on the Orient Express. Stephen King isn’t hovering over you when you read Carrie. The ghost you feel while reading is an illusion you’ve made out of your own experience, knowledge, and imagination. Every word you read causes your mind to reconstruct its meaning using your memories and creativity. The idea that there is intelligence somehow inherent in writing is an illusion. The intelligence is all yours, all the time: thoughts you make yourself in order to make sense of another person’s words. This can prompt us to greatness, broaden our minds, inspire new thoughts, and introduce us to new concepts. A book can contain worlds, but we’re the ones that bring them into being as we read. What we see is uniquely our own. The thoughts are not transported from the author’s mind and injected into ours.

The words themselves are just line forms on a background with no inherent meaning or intelligence. The word “horse” doesn’t come with the Platonic ideal of a horse attached to it. The word “anger” isn’t full of seething emotion or the restrained urge towards violence. Even words that are arguably onomatopoeic, like the word “brabra” we use in Icelandic for the sound a duck makes, are still incredibly specific to the cultures and contexts they come from. We are the ones doing the heavy lifting in terms of reconstructing a picture of an intelligence behind the text. When there is no actual intelligence, such as with ChatGPT, we are the ones who end up filling in the gaps with our memories, experience, and imagination.

When ChatGPT demonstrates intelligence, that comes from us.[5] Some of it we construct ourselves.[6] Some of it comes from our inherent biases.[7]

There is no ‘there’ there. We are alone in the room, reconstructing an abstract representation of a mind. The reasoning you see is only in your head. You are hallucinating intelligence where there is none. You are doing the textual equivalent of seeing a face in a power outlet.

This drive—anthropomorphism—seems to be innate. Our first instinct when faced with anything unfamiliar—whose drives, motivations, and mechanisms we don’t understand—is to assume that they think much like a human would.[8] When that unfamiliar agent uses language like a human would, the urge to see them as near or fully human is impossible to resist—a recurring issue in the history of AI research that dates all the way back to 1966.[9]

These tools solve problems and return fluent, if untruthful, answers, which is what creates such a convincing illusion of intelligence.

Text synthesis engines like ChatGPT and GPT-4 do not have any self-awareness. They are mathematical models of the various patterns to be found in the collected body of human text. How granular the model is depends on its design and the languages in question. Some of the tokens—the smallest units of language the model works with—will be characters or punctuation marks; some will be words, syllables, or even phrases. Many language models use a mixture of these.[10]
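
To make the idea of a “token” concrete, here is a deliberately simplified sketch in Python. It is not how any particular model’s tokeniser works (the vocabulary below is hand-made for illustration; real systems learn theirs from data), but it shows how a stream of text gets chopped into units that can be single characters, syllable-like fragments, whole words, or even phrases.

```python
# A toy illustration, not a production tokeniser: greedy longest-match
# tokenisation over a tiny, hand-made vocabulary. Real models learn their
# vocabularies from data, but the idea is the same: the "units" can be
# characters, syllable-like fragments, whole words, or common phrases.

TOY_VOCAB = ["the cat", "the", "cat", "sat", "un", "predict", "able", " ", "."]

def toy_tokenise(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for piece in sorted(TOY_VOCAB, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(toy_tokenise("the cat sat. unpredictable."))
# ['the cat', ' ', 'sat', '.', ' ', 'un', 'predict', 'able', '.']
```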

With enough detail—a big enough collection of text—these tools will model enough of the probabilistic distribution of various words or characters to be able to perform what looks like magic.

With enough of these correlative shortcuts, the model can perform something that looks like common sense reasoning: its output is text that replicates prior representations of reasoning.[11] This works for as long as you don’t accidentally use the wrong phrasing in your prompt and break the correlation.[12]

The mechanism behind these systems is entirely correlative from the ground up.[13] What looks like reasoning is incredibly fragile and breaks as soon as you rephrase or reword your prompt.[14] It exists only as a probabilistic model of text. A Generative AI chatbot is a language engine incapable of genuine thought.[15]

These language models are interactive but static snapshots of the probability distributions of a written language.

It’s obviously interactive; that’s the whole point of a chatbot. It’s static in that it does not change when it’s used or activated. In fact, changing it requires an enormous amount of computing power over a long period of time. What the system models are the distributions and correlations of the tokens it records for the texts in its training data set—how the various words, syllables, and punctuation relate to each other over as much of the written history of a language as the company can find.
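
To illustrate what a “static snapshot of probability distributions” means in practice, here is a toy sketch in Python. It is vastly simpler than an actual large language model (real systems use learned weights over long contexts, not raw bigram counts), but the basic move is the one described above: record how tokens follow one another in a body of text, then generate new text by sampling from that fixed record, with no reference to meaning.

```python
# A deliberately tiny sketch of the core idea, not of GPT-4 itself: count which
# word follows which in a body of text (a "snapshot" of its statistics), then
# generate new text by sampling from those frequencies. Nothing here updates
# once the counts are taken, and nothing here refers to meaning.

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Build the static snapshot: for each word, how often does each word follow it?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = following[word]
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=options.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Once the counts are taken, nothing in the model changes when it is used, and a starting word the corpus never contained (try `generate("aardvark")`) yields no continuation at all, because the snapshot has no statistics for it.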

That’s what distinguishes biological minds from these algorithmic hindsight factories: a biological mind does not reason using the probability distributions of all the prior cultural records of its ancestors. Biological minds learn primarily through trial and error. Try, fail, try again. They build their neural network, which is functionally very different from what you see in a software model,[16] through constant feedback, experimentation, and repeated failure—driven by a chemical network that often manifests as instinct, emotion, motivation, and drive. The neural network—bounded, defined, and driven by the chemical network—is constantly changing and responding to outside stimuli. Every time an animal’s nervous system is “used”, it changes. It is always changing, until it dies.

Biological minds experience. Synthesis engines parse imperfect records of experiences. The former are forward-looking and operate primarily in the present, sometimes to their own detriment. The latter exist exclusively as probabilistic manifestations of imperfect representations of thoughts past. They are snapshots. Generative AI models are themselves cultural records.

These models aren’t new bird brains—new alien minds that are peers to our own. They aren’t even insect brains. Insects have autonomy. They are capable of general problem-solving—some of them dealing with tasks of surprising complexity[17]—and their abilities tolerate the kind of minor alterations in the problem environment that would break the correlative pseudo-reasoning of a language model.[18] Large Language Models are something lesser. They are water running down pathways etched into the ground over centuries by the rivers of human culture. Their originality comes entirely from random combinations of historical thought. They do not know the ‘meaning’ of anything—they only know the records humans find meaningful enough to store.[19] Their unreliability comes from their unpredictable behaviour in novel circumstances. When there is no riverbed to follow, they drown the surrounding landscape.

The entirety of their documented features, capabilities, and recorded behaviour—emergent or not[20]—is explained by this conceptual model of generative AI. There are no unexplained corner cases that don’t fit or actively disprove this theory.

Yet people keep assuming that what ChatGPT does can only be explained as the first glimmer of genuine Artificial General Intelligence. The bird brain of Silicon Valley is born at last!

Because text and language are the primary ways we experience other people’s reasoning, it’ll be next to impossible to dislodge the notion that these are genuine intelligences. No amount of examples, scientific research, or analysis will convince those who want to maintain a pseudo-religious belief in alien peer intelligences. After all, if you want to believe in aliens, an artificial one made out of supercomputers and wishful thinking feels much more plausible than little grey men from outer space. But that’s what it is: a belief in aliens.

It doesn’t help that so many working in AI seem to want this to be true. They seem to be true believers who are convinced that the spark of Artificial General Intelligence has been struck.[21]

They are inspired by the science fictional notion that if you make something complex enough, it will spontaneously become intelligent. This isn’t an uncommon belief. You see it in movies and novels—the notion that any network of sufficient complexity will spontaneously become sentient has embedded itself in our popular psyche. James Cameron’s skull-crushing metal skeletons have a lot to answer for.

That notion doesn’t seem to have any basis in science. The idea that general intelligence is an emergent property of neural networks that appears once the network reaches sufficient complexity is based on archaic notions of animal intelligence—that animals are soulless automata incapable of feeling or reasoning.[22] That view was formed during a period when we didn’t realise just how common self-awareness (as measured by the mirror test) and general reasoning are in the animal kingdom.[23] Animals are smarter than we assumed, and the difference between our reasoning and theirs seems to be a matter of degree, not of presence or absence.[24]

General reasoning seems to be an inherent, not emergent, property of pretty much any biological lifeform with a notable nervous system.

The bumblebee, despite having only a tiny fraction of the neurons of a human brain, is capable not only of solving puzzles but also of teaching other bees to solve those puzzles. Bumblebees reason and have a culture.[25] They have more genuine and robust general reasoning skills—skills that don’t collapse into incoherence at minor adjustments to the problem space—than GPT-4 or any large language model on the market. And that’s with only around half a million neurons to work with.[26]

Conversely, GPT-3 is made up of 175 billion parameters—what passes for a “neuron” in a digital neural network.[27] GPT-4 is even larger, with some estimates coming in at a trillion parameters. Then you have fine-tuned systems such as ChatGPT, which are built from multiple interacting models layered on top of GPT-3.5 or GPT-4, making for an even more complex interactive system.

ChatGPT, running on GPT-4, is easily a million times more complex than the “neural network” of a bumblebee, and yet, out of the two, it’s the striped invertebrate that demonstrates robust and adaptive general-purpose reasoning skills. Very simple minds, those belonging to small organisms that barely have a brain, are capable of reasoning about themselves, the world around them, and the behaviour of other animals.[28]
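
As a rough back-of-the-envelope check of that “million times” figure, taking the trillion-parameter estimate quoted above and the bumblebee’s roughly half a million neurons (and setting aside that a parameter and a neuron are not really comparable units):

```latex
\[
\frac{\text{GPT-4 parameters (estimated)}}{\text{bumblebee neurons}}
  \approx \frac{10^{12}}{5 \times 10^{5}}
  = 2 \times 10^{6}
\]
```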

Unlike the evidence for ‘sparks’ of AGI in language models, the evidence for animal reasoning—even consciousness—is broad, compelling, and encompasses decades of work by numerous scientists.[29]

AI models are flawed attempts at digitally synthesising neurologies. They are built on the assumption that all the rest—metabolisms, hormones, chemicals, and senses—aren’t necessary for developing intelligence.

Reasoning in biological minds does not seem to be a property that emerges from complexity. The capacity to reason looks more likely to be a built-in property of most animal minds.[30] A reasoning mind appears to be a direct consequence of how animals are structured as a whole—chemicals, hormones, and physical body included. The animal capacity for problem-solving, social reasoning, and self-awareness seems to increase, unevenly and fitfully, with the number of neurons until it reaches the level we see in humans.[31] Reasoning does not ‘emerge’ or appear. Some creatures are better at it than others, but it’s there in some form even in very small, very simple beings like the bumblebee. It doesn’t happen magically when you hook up a bunch of disparate objects together in a complex enough network. A reasoning mind is the starting point of biological thinking, not the endpoint that only “emerges” with sufficient complexity.

The internet—a random interconnected collection of marketing offal, holiday snaps, insufferable meetings, and porn—isn’t going to become self-aware and suddenly acquire the capacity for general reasoning once it reaches a certain size, and neither will Large Language Models. The notion that we are making autonomous beings capable of Artificial General Intelligence just by loading a neural network up with an ever-bigger collection of garbage from the internet has no basis in anything we understand about biology or animal reasoning.

But, AI companies insist that they are on the verge of AGI.[32] Their rhetoric around it verges on the religious as the idea of an AGI is idealised and almost worshipped.[33] They claim to be close to making a new form of thinking life, but they refuse to release the data required to prove it.[34] They’ve built software that performs well on the arbitrary benchmarks they’ve chosen and claim are evidence of general intelligence, but those tests prove no such thing and have no such validity.[35] The benchmarks are theatrics that have no applicability towards demonstrating genuine general intelligence.[36]

AI researchers love to resurrect outdated pseudoscience such as phrenology—shipping AI software that promises to be able to tell you if somebody is likely to be a criminal based on the shape of their skull.[37] It’s a field where researchers and vendors routinely claim that their AIs can detect whether you’re a potential criminal, gay, a good employee, liberal or conservative, or even a psychopath, based on “your face, body, gait, and tone of voice.”[38]

It’s pseudoscience.

This is the field and the industry that claims to have accomplished the first ‘spark’ of Artificial General Intelligence?[39]

Last time we saw a claim this grand, with this little scientific evidence, the men in the white coats were promising us room-temperature fusion, giving us free energy for life, and ending the world’s dependence on fossil fuels.[40]

Why give the tech industry the benefit of the doubt when they are all but claiming godhood—that they’ve created a new form of life never seen before?

As Carl Sagan said: “extraordinary claims require extraordinary evidence.”

He didn’t say “extraordinary claims require only vague insinuations and pinky-swear promises.”

To claim you’ve created a completely new kind of mind that’s on par with any animal mind—or even superior—and that provides general intelligence using mechanisms that don’t resemble anything anybody has ever seen in nature is, by definition, the most extraordinary of claims.

The AI industry is backing their claims of Artificial General Intelligence with hot air, hand-waving, and cryptic references to data and software nobody outside their organisations is allowed to review or analyse.[41]

They are pouring an ever-increasing amount of energy and work into ever-larger models, all in the hope of triggering the ‘singularity’ and creating a digital superbeing. Like a cult of monks boiling the oceans in order to hear whispers of the name of God.[42]

It’s a farce. All theatre; no evidence. Whether they realise it or not, they are taking us for a ride. The sooner we see that they aren’t backing their claims with science, the sooner we can focus on finding safe and productive uses—limiting its harm, at least—for the technology as it exists today.

After everything the tech industry has done over the past decade, the financial bubbles, the gig economy, legless virtual reality avatars, crypto, the endless software failures—just think about it—do you think we should believe them when they make grand, unsubstantiated claims about miraculous discoveries? Have they earned our trust? Have they shown that their word is worth more than that of independent scientists?

Do you think that they, with this little evidence, have really done what they claim, and discovered a literal new form of life? But are conveniently unable to prove it because of ‘safety’?

Me neither.

The notion that large language models are on the path towards Artificial General Intelligence is a dangerous one. It’s a myth that directly undermines any effort to think clearly or strategise about generative AI because it strongly reinforces anthropomorphism.

That’s when you reason about an object or animal as if it were a person. It prevents you from forming an accurate mental model of the non-human thing’s behaviour. AI is especially prone to creating this reaction. Software such as chatbots trigger all three major factors that promote anthropomorphism in people:[43]

  1. Understanding. If we lack an understanding of how an object works, our minds will resort to thinking of it in terms of something that’s familiar to us: people. We understand the world as people because that’s what we are. This becomes stronger the more similar we perceive the object to be to ourselves.
  2. Motivation. We are motivated both to seek out human interaction and to interact effectively with our environment. This reinforces the first factor. The more uncertain we are of how that thing works, the stronger the anthropomorphism. The less control we have over it, the stronger the anthropomorphism.
  3. Sociality. We have a need for human contact, and our tendency towards anthropomorphising objects in our environment increases with our isolation.

Because we lack cohesive cognitive models for what makes these language models so fluent, because we feel a strong motivation to understand and use them as they are integrated into our work, and because our socialisation in the office increasingly takes on the very same text-conversation form as a chatbot, we inevitably feel a strong drive to see these software systems as people. The myth of AGI reinforces this—supercharges the anthropomorphism—because it implies that “people” is indeed an appropriate cognitive model for how these systems behave.

It isn’t. AI systems are not people. Treating them as such is a major strategic error, as it will prevent you from thinking clearly about their capabilities and limitations.

Believing the myth of Artificial General Intelligence makes you incapable of understanding what language models today are and how they work.


This was an excerpt from The Intelligence Illusion

What are the major business risks to avoid with generative AI? How do you avoid having it blow up in your face? Is that even possible?

The Intelligence Illusion is an exhaustively researched guide to the risks of language and diffusion models.



  1. Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21 (New York, NY, USA: Association for Computing Machinery, 2021), 610–23, https://doi.org/10.1145/3442188.3445922. ↩︎

  2. John Timmer, “Bird Brains Are Dense—with Neurons,” Ars Technica, June 2016, https://arstechnica.com/science/2016/06/bird-brains-are-densewith-neurons/. ↩︎

  3. Smithsonian Magazine and Dirk Schulze-Makuch, “Crows Are Even Smarter Than We Thought,” Smithsonian Magazine, accessed April 4, 2023, https://www.smithsonianmag.com/air-space-magazine/crows-are-even-smarter-we-thought-180976970/. ↩︎

  4. Seweryn Olkowicz et al., “Birds Have Primate-Like Numbers of Neurons in the Forebrain,” Proceedings of the National Academy of Sciences 113, no. 26 (June 2016): 7255–60, https://doi.org/10.1073/pnas.1517131113. ↩︎

  5. James Vincent, “Introducing the AI Mirror Test, Which Very Smart People Keep Failing,” The Verge, February 2023, https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test. ↩︎

  6. Arvind Narayanan and Sayash Kapoor, “People Keep Anthropomorphizing AI. Here’s Why,” Substack newsletter, AI Snake Oil, February 2023, https://aisnakeoil.substack.com/p/people-keep-anthropomorphizing-ai. ↩︎

  7. “How Much of AI’s Recent Success Is Due to the Forer Effect? – Terence Eden’s Blog,” February 2023, https://shkspr.mobi/blog/2023/02/how-much-of-ais-recent-success-is-due-to-the-forer-effect/. ↩︎

  8. Nicholas Epley, Adam Waytz, and John T. Cacioppo, “On Seeing Human: A Three-Factor Theory of Anthropomorphism,” Psychological Review 114, no. 4 (October 2007): 864–86, https://doi.org/10.1037/0033-295X.114.4.864. ↩︎

  9. Arleen Salles, Kathinka Evers, and Michele Farisco, “Anthropomorphism in AI,” AJOB Neuroscience 11, no. 2 (April 2020): 88–95, https://doi.org/10.1080/21507740.2020.1740350; Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: Freeman, 1976). ↩︎

  10. Murray Shanahan, “Talking About Large Language Models” (arXiv, February 2023), https://doi.org/10.48550/arXiv.2212.03551. ↩︎

  11. Ruben Branco et al., “Shortcutted Commonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (Online; Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021), 1504–21, https://doi.org/10.18653/v1/2021.emnlp-main.113. ↩︎

  12. Tom McCoy, Ellie Pavlick, and Tal Linzen, “Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Florence, Italy: Association for Computational Linguistics, 2019), 3428–48, https://doi.org/10.18653/v1/P19-1334. ↩︎

  13. Timothy Niven and Hung-Yu Kao, “Probing Neural Network Comprehension of Natural Language Arguments,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Florence, Italy: Association for Computational Linguistics, 2019), 4658–64, https://doi.org/10.18653/v1/P19-1459. ↩︎

  14. Myeongjun Jang and Thomas Lukasiewicz, “Consistency Analysis of ChatGPT” (arXiv, March 2023), https://doi.org/10.48550/arXiv.2303.06273. ↩︎

  15. Kyle Mahowald et al., “Dissociating Language and Thought in Large Language Models: A Cognitive Perspective” (arXiv, January 2023), https://doi.org/10.48550/arXiv.2301.06627. ↩︎

  16. Gary Marcus, “Deep Learning: A Critical Appraisal” (arXiv, January 2018), https://doi.org/10.48550/arXiv.1801.00631; David Watson, “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, no. 3 (September 2019): 417–40, https://doi.org/10.1007/s11023-019-09506-6. ↩︎

  17. “Bumble Bees Can Teach Each Other to Solve Problems - Study,” The Jerusalem Post JPost.com, accessed April 4, 2023, https://www.jpost.com/science/article-734044. ↩︎

  18. Jang and Lukasiewicz, “Consistency Analysis of ChatGPT.” ↩︎

  19. Emily M. Bender and Alexander Koller, “Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Online: Association for Computational Linguistics, 2020), 5185–98, https://doi.org/10.18653/v1/2020.acl-main.463. ↩︎

  20. Every complex system has emergent behaviours; that is not evidence of sentience or self-awareness. See John Gall, The Systems Bible: The Beginner’s Guide to Systems Large and Small: Being the Third Edition of Systemantics (Walker, Minn: General Systemantics Press, 2002), for example: “The point is that real novelty in the world increases as complexity increases. As the necessary richness of underlying structure is attained, the new property emerges, seemingly out of nowhere.” There is also evidence that the much-vaunted emergent abilities of Large Language Models might basically be benchmarking artefacts. See Rylan Schaeffer et al., “Are Emergent Abilities of Large Language Models a Mirage?” (arXiv, April 2023), https://arxiv.org/abs/2304.15004. ↩︎

  21. Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” (arXiv, March 2023), https://doi.org/10.48550/arXiv.2303.12712. ↩︎

  22. Daniel Everett, “Beyond Words: The Selves of Other Animals,” New Scientist, July 2015, https://www.newscientist.com/article/dn27858-beyond-words-the-selves-of-other-animals/. ↩︎

  23. Marc Bekoff, “Scientists Conclude Nonhuman Animals Are Conscious Beings: Didn’t We Already Know This? Yes, We Did.” August 2012, https://www.psychologytoday.com/us/blog/animal-emotions/201208/scientists-conclude-nonhuman-animals-are-conscious-beings; Philip Lowe et al., “The Cambridge Declaration on Consciousness” (Cambridge, UK, 2012), http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf, this declaration was a particular turning point that showed that general scientific consensus had shifted on the subject: “Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.” ↩︎

  24. Karla Moeller, “How Are Humans Different from Other Animals?” Text, May 2017, https://askabiologist.asu.edu/questions/human-animal-differences. ↩︎

  25. Neuroscience News, “Puzzle-Solving Behavior Spreads Through Bumblebee Colonies,” Neuroscience News, March 2023, https://neurosciencenews.com/problem-solving-bee-behavior-22728/. ↩︎

  26. “List of Animals by Number of Neurons,” Wikipedia, March 2023, https://en.wikipedia.org/w/index.php?title=List_of_animals_by_number_of_neurons&oldid=1145876354. ↩︎

  27. “List of Animals by Number of Neurons.” ↩︎

  28. “This Tiny Fish Can Recognize Itself in a Mirror. Is It Self-Aware?” Animals, February 2019, https://www.nationalgeographic.com/animals/article/fish-cleaner-wrasse-self-aware-mirror-test-intelligence-news. ↩︎

  29. Paul Patton, “One World, Many Minds: Intelligence in the Animal Kingdom,” Scientific American, accessed April 7, 2023, https://doi.org/10.1038/scientificamericanmind1208-72. ↩︎

  30. Jonathan Birch, Alexandra K. Schnell, and Nicola S. Clayton, “Dimensions of Animal Consciousness,” Trends in Cognitive Sciences 24, no. 10 (October 2020): 789–801, https://doi.org/10.1016/j.tics.2020.07.007; Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, “Helping a Cagemate in Need: Empathy and Pro-Social Behavior in Rats,” Science (New York, N.Y.) 334, no. 6061 (December 2011): 1427–30, https://doi.org/10.1126/science.1210789; Smithsonian Magazine and Meilan Solly, “Gorillas Appear to Grieve for Their Dead,” Smithsonian Magazine, accessed April 7, 2023, https://www.smithsonianmag.com/smart-news/gorillas-appear-grieve-their-dead-180971896/. ↩︎

  31. Cyriel M. A. Pennartz, Michele Farisco, and Kathinka Evers, “Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach,” Frontiers in Systems Neuroscience 13 (2019), https://www.frontiersin.org/articles/10.3389/fnsys.2019.00025, and this seems to even apply to the capacity for consciousness: “Although major differences between rodent and human brains in terms of size, complexity and the presence of specialized areas should be acknowledged, we argue that the ‘size’ argument will rather affect the complexity and/or intensity of the information the organism will be conscious of, and not so much the presence or absence of consciousness.” ↩︎

  32. Sam Altman, “Planning for AGI and Beyond,” February 2023, https://openai.com/blog/planning-for-agi-and-beyond. ↩︎

  33. Robin Hanson, “AGI Is Sacred,” August 2022, https://www.overcomingbias.com/p/agi-is-sacredhtml. ↩︎

  34. Anna Rogers, “Closed AI Models Make Bad Baselines,” Hacking Semantics, April 2023, https://hackingsemantics.xyz/2023/closed-baselines/. ↩︎

  35. Inioluwa Deborah Raji et al., “AI and the Everything in the Whole Wide World Benchmark” (arXiv, November 2021), https://doi.org/10.48550/arXiv.2111.15366. ↩︎

  36. Thomas Liao et al., “Are We Learning Yet? A Meta Review of Evaluation Failures Across Machine Learning,” 2022, https://openreview.net/forum?id=mPducS1MsEK. ↩︎

  37. Xiaolin Wu and Xi Zhang, “Automated Inference on Criminality Using Face Images” (arXiv, November 2016), https://doi.org/10.48550/arXiv.1611.04135. ↩︎

  38. Luke Stark and Jevan Hutson, “Physiognomic Artificial Intelligence,” SSRN Scholarly Paper (Rochester, NY, September 2021), https://doi.org/10.2139/ssrn.3927300. ↩︎

  39. Bubeck et al., “Sparks of Artificial General Intelligence.” ↩︎

  40. Ventana al Conocimiento, “Cold Fusion: Anatomy of a Scientific ‘Fraud’,” OpenMind, March 2019, https://www.bbvaopenmind.com/en/science/physics/cold-fusion-anatomy-of-a-scientific-fraud/. ↩︎

  41. Kyle Barr, “GPT-4 Is a Giant Black Box and Its Training Data Remains a Mystery,” Gizmodo, March 2023, https://gizmodo.com/chatbot-gpt4-open-ai-ai-bing-microsoft-1850229989. ↩︎

  42. Tante, “Artificial Saviors,” Boundary 2, August 2018, https://www.boundary2.org/2018/08/tante/, similarly notes how AI research has taken a religious dimension. For example: “Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained.” ↩︎

  43. Epley, Waytz, and Cacioppo, “On Seeing Human.” ↩︎