Google Bard's vulnerabilities and other links
Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing
I wrote this over on the newsletter site:
Google is rushing ahead to “catch up” on AI without paying any attention to the security or integrity of its own products, something that its own employees, past and present, have been warning it about.
They are ignoring the acute vulnerability that large language models have to keyword manipulation exploits, making them the modern equivalent of the search engines of the 90s. The only thing that’s different today is that there is now much more money in manipulating search engines than ever before, which makes the vulnerability of large language models a lethal issue for search, research, or information management at scale.
Read the rest over on the other site.
Generative AI: What You Need To Know
I launched a set of information cards on generative AI that summarise the findings in my book, The Intelligence Illusion, packaged up with references sorted by topic. You can read more about it on the project site.
What’s more important is that they come free with every purchase of the book or the bundle. They are designed to complement the book: a way for you to get up to date quickly while reading through the book itself at a more leisurely pace. I don’t expect many people to buy the deck on its own, but they are available if all you want is a tight summary of each risk and issue with generative AI.
Media links
- “The Algorithm is a Lie”. “But when it comes to picking and choosing what TV shows and films to greenlight, no, they don’t have a data-based algorithm.”
- “The Slow Death of Hollywood”. From 2019, but useful background to the current situation.
- “Can a Writers Strike Save Hollywood from Monopoly?”
- TIL that the screenplay for Honey, I Shrunk the Kids was written by horror legends Brian Yuzna and Stuart Gordon 🤯
- “Five Things: May 11, 2023 — As in guillotine…”
Google Bard’s limited rollout
There’s suspicion that the EU’s General Data Protection Regulation (GDPR) is at the center of the omission.
“Google Bard hits over 180 countries and territories—none are in the EU - Ars Technica”
While my initial take on this was the same as Ars Technica’s, the GDPR doesn’t seem to be the common factor, as other people have pointed out. The GDPR is the law in the UK as well, and the UK isn’t excluded. Canada is excluded, and its privacy regulations aren’t nearly as robust as the EU’s. The only common factor I can see is that both Canada and the EU are investigating ChatGPT. This might be more of a case of Google not wanting to distract regulators in those territories from investigating a competitor.
Cory Doctorow on Google’s AI Hype Circle
The entire case for “AI” as a disruptive tool worth trillions of dollars is grounded in the idea that chatbots and image-generators will let bosses fire hundreds of thousands or even millions of workers.
That’s it.
But the case for replacing workers with shell-scripts is thin indeed. Say that the wild boasts of image-generator companies come true and every single commercial illustrator can be fired. That would be ghastly and unjust, but commercial illustrators are already a tiny, precarious, grotesquely underpaid workforce who are exploited with impunity thanks to a mix of “vocational awe” and poor opportunities in other fields.
Other AI links
- “AI and Data Scraping on the Archive - Archive of Our Own”. “We’d like to share what we’ve been doing to combat data scraping and what our current policies on the subject of AI are.” Unsurprisingly sensible.
- “GitHub and OpenAI fail to wriggle out of Copilot lawsuit - The Register”. This one is likely to have consequences.
- ‘Google Neural Net “AI” Is About To Destroy Half The Independent Web – Ian Welsh’. “But in the larger sense “AI” is a giant parasite devouring other people’s expertise and denying them a living.”
- “The Computers Are Coming For The Wrong Jobs”
- “Humans and algorithms work together — so study them together”
- “The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases”
- “Google wants to take over the web”
- “Data Statements: From Technical Concept to Community Practice - ACM Journal on Responsible Computing”
- “Fake Pictures of People of Color Won’t Fix AI Bias - WIRED”. “Despite being designed to empower and protect marginalized groups, this strategy fails to include any actual people in the process of representation.”
- “Understanding ChatGPT: A Triumph of Rhetoric”. “Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence”
Web dev
- “On browser compatibility and support baselines - molily”. “My fear is that Google’s Baseline initiative oversimplifies the discourse on browser support.”
- “Amazon Is Still Running an Injury Mill for Workers”
- “ChatGPT is powered by these contractors making $15 an hour. Two OpenAI contractors spoke to NBC News about their work training the system behind ChatGPT.”