“Can a Writers Strike Save Hollywood from Monopoly?”

“Google’s AI Hype Circle”

The entire case for “AI” as a disruptive tool worth trillions of dollars is grounded in the idea that chatbots and image-generators will let bosses fire hundreds of thousands or even millions of workers.

“How Iceland sold the same Green Electricity twice - Industry Decarbonization Newsletter”

“Google Bard hits over 180 countries and territories—none are in the EU - Ars Technica”

there’s suspicion that the EU’s General Data Protection Regulation (GDPR) is at the center of the omission.

Not following privacy regulations limits your market reach.

“On browser compatibility and support baselines · molily”

My fear is that Google’s Baseline initiative oversimplifies the discourse on browser support.

“AI and Data Scraping on the Archive - Archive of Our Own”

We’d like to share what we’ve been doing to combat data scraping and what our current policies on the subject of AI are.

Unsurprisingly sensible.

“GitHub and OpenAI fail to wriggle out of Copilot lawsuit • The Register”

This one is likely to have consequences.

“Amazon Is Still Running an Injury Mill for Workers”

TIL that the story for Honey, I Shrunk the Kids was written by horror legends Brian Yuzna and Stuart Gordon 🤯

I just published “‘What next?’ he asks with trepidation” where I have a low-key anxiety attack about the future, before I break for the weekend 😅

“The Computers Are Coming For The Wrong Jobs”

‘Google Neural Net “AI” Is About To Destroy Half The Independent Web – Ian Welsh’

But in the larger sense “AI” is a giant parasite devouring other people’s expertise and denying them a living

“Humans and algorithms work together — so study them together”

“ChatGPT is powered by these contractors making $15 an hour. Two OpenAI contractors spoke to NBC News about their work training the system behind ChatGPT.”

“The downside of AI: Former Google scientist Timnit Gebru warns of the technology’s built-in biases”

“Google wants to take over the web”

The threat here isn’t sci-fi fantasies of intelligent computers that could exist in the distant future; it’s what companies are doing today

“Cats Migrated to Europe 7,000 Years Earlier Than Once Thought”

“Five Things: May 11, 2023 — As in guillotine…”

“Data Statements: From Technical Concept to Community Practice - ACM Journal on Responsible Computing”

“Fake Pictures of People of Color Won’t Fix AI Bias - WIRED”

“Despite being designed to empower and protect marginalized groups, this strategy fails to include any actual people in the process of representation”

“Understanding ChatGPT: A Triumph of Rhetoric”

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence

“On the Impossible Safety of Large AI Models”

Apparently, the writers of this paper—working for Google at the time—were forced to tone down v1. They’ve now updated it to v2 after they left the company

“Data Harm Record – Data Justice Lab”

“EU lawmakers back transparency and safety rules for generative AI - TechCrunch”

This is pretty important. Includes limitations on the use of facial recognition for surveillance as well as transparency rules for foundation models.

“A quote from PaLM 2 Technical Report (PDF)”

That PaLM 2 models are smaller than PaLM 1 tracks with the conclusion of my book, which was that very large models are worse for most practical tasks