“Patterns, Prophets and Priests
For a good chunk of the past 10 or 12 years, it’s felt to me like we were stuck in an age that shoved genuine attention to human-ness and care to the margins.
... works as a web developer in Hveragerði, Iceland, and writes about the web, digital publishing, and web/product development
These are his notes
“Patterns, Prophets and Priests
For a good chunk of the past 10 or 12 years, it’s felt to me like we were stuck in an age that shoved genuine attention to human-ness and care to the margins.
“The World Wide Web became available to the broader public 30 years ago : NPR”
“Meredith Whittaker: Consciousness isn’t AI risk—it’s the corporations”
“AI is already writing books, websites and online recipes - The Washington Post”
The last one is “AI code copilots are backwards-facing tools in a novelty-seeking industry”
There is a fundamental tension between programmer culture, software development, and how language models work
The second is about how incredibly common pseudo-science and snake oil is in AI research.
“Beware of AI pseudoscience and snake oil”
You need to be very careful about trusting the claims of the AI industry.
I’ve been publishing extracts from The Intelligence Illusion as essays.
The first and most important, was “Artificial General Intelligence and the bird brains of Silicon Valley”. It’s about the AGI myth and how it short-circuits your ability to think about AI
I just published “AI code copilots are backwards-facing tools in a novelty-seeking industry”
Where I argue that there’s a fundamental tension between programming culture and how language models work.
“Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says”
“Google shared AI knowledge with the world — until ChatGPT caught up”
When he uses Google Translate and YouTube, “I already see the volatility and instability that could only be explained by the use of,” these models and data sets
“GitHub Copilot AI pair programmer: Asset or Liability?”
Copilot can become an asset for experts, but a liability for novice developers.
Makes the “40% commit copilot suggestions unchanged” stat more worrying
“We Have No Moat And neither does OpenAI”
This is an interesting document, ostensibly a leaked Google doc. There’s an opportunity here for the OSS community to do better than OpenAI or Google and I have to hope we don’t botch it
Here’s a ‘fun’ statistic. Microsoft says that among Copilot users:
40% of the code they’re checking in is now AI-generated and unmodified
“fast.ai - Mojo may be the biggest programming language advance in decades”
Mostly vapour at the moment, but fairly convincing vapour. Who wouldn’t like a super-fast, easily deployable python variant?
I was writing for me, all along.
Mandy Brown is easily one of my favourite writers on the web today.
“Prompt injection explained, with video, slides, and a transcript”
Between training data/instruction poisoning and prompt injections, language models are a complete security shitshow.
“Poisoning Language Models During Instruction Tuning”
So, large AI models are a security shitshow because they can be poisoned through their training data. Turns out they can also be poisoned through instruction tuning.
This essay I wrote back in February remains relevant: “Generative AI is the tech industry’s Hail Mary pass”
What is important for you, and anybody who works in tech, to know, is that this move is desperate, even if the tech ends up doing what it promises.
Finally got around to watching this video where Adam Conover interviews Emily Bender and Timnit Gebru. It’s really good. Incredibly thorough and fun to watch. Highly recommended
It’s grift. With a touch of Qanon-style religious mania.