“Five Things: May 11, 2023 — As in guillotine…”

“Data Statements: From Technical Concept to Community Practice - ACM Journal on Responsible Computing”

“Fake Pictures of People of Color Won’t Fix AI Bias - WIRED”

“Despite being designed to empower and protect marginalized groups, this strategy fails to include any actual people in the process of representation”

“Understanding ChatGPT: A Triumph of Rhetoric”

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence

“On the Impossible Safety of Large AI Models”

Apparently, the writers of this paper, who were working for Google at the time, were forced to tone down v1. They've now updated it to v2 after leaving the company.

“Data Harm Record – Data Justice Lab”

“EU lawmakers back transparency and safety rules for generative AI - TechCrunch”

This is pretty important. Includes limitations on the use of facial recognition for surveillance as well as transparency rules for foundational models.

“A quote from PaLM 2 Technical Report (PDF)”

That PaLM 2 models are smaller than PaLM 1 tracks with the conclusion of my book, which was that very large models are worse for most practical tasks.

“‘We Shouldn’t Regulate AI Until We See Meaningful Harm’: Microsoft Economist to WEF”

AI discourse coming out of the tech industry is getting more unhinged by the day.

“Baseline”

Looks like a genuinely useful addition to web dev documentation.

“I unintentionally created a biased AI algorithm 25 years ago – tech companies are still making the same mistake”

“Pluralistic: Two principles to protect internet users from decaying platforms”

“On Generative AI and Satisficing - by Dave Karpf”

There is not $100 billion+ of revenues to be found in Clippy-but-awesome.

I share Dave’s scepticism about AI’s productivity benefits.

“Artificial Intelligence: Pearson takes legal action over use of its content to train AI - Evening Standard”

This was inevitable. Tech broke the web’s social contract.

“MEPs to vote on proposed ban on ‘Big Brother’ AI facial recognition on streets”

“Writers On Set - Not a Blog”

All of this research and writing has gone into a book on the risks of using generative AI at work: “The Intelligence Illusion: a practical guide to the business risks of Generative AI”

The latest link post that gathers up the links I posted last week is here: “Poisonings, Corporations, and other links”

I put together an overview of all of my writing on AI here: “My writing on AI; the story so far”

“The Computer Scientist Peering Inside AI’s Black Boxes”

“Word Count 47: Why we should all care about the WGA writer’s strike — Chocolate and Vodka”

I wonder how long it’ll take for fans of AI art to discover both that it has a specific aesthetic and that aesthetics eventually fall out of popular fashion.

“That looks soooo 2023”.

“AI machines aren’t ‘hallucinating’. But their makers are - Naomi Klein”

Because what we are witnessing is the wealthiest companies in history unilaterally seizing the sum total of human knowledge

Always fun to see AI art enthusiasts siding with fascists in denouncing abstract artists like Barnett Newman. It’s like they’re speed-running all the worst takes from the last couple of centuries of art discourse.

“‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts”