“Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says”
... works as a web developer in Hveragerði, Iceland, and writes about the web, digital publishing, and web/product development
These are his notes
“Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says”
“Google shared AI knowledge with the world — until ChatGPT caught up”
When he uses Google Translate and YouTube, “I already see the volatility and instability that could only be explained by the use of,” these models and data sets
“GitHub Copilot AI pair programmer: Asset or Liability?”
Copilot can become an asset for experts, but a liability for novice developers.
Makes the “40% commit copilot suggestions unchanged” stat more worrying
“We Have No Moat And neither does OpenAI”
This is an interesting document, ostensibly a leaked Google doc. There’s an opportunity here for the OSS community to do better than OpenAI or Google and I have to hope we don’t botch it
Here’s a ‘fun’ statistic. Microsoft says that among Copilot users:
40% of the code they’re checking in is now AI-generated and unmodified
“fast.ai - Mojo may be the biggest programming language advance in decades”
Mostly vapour at the moment, but fairly convincing vapour. Who wouldn’t like a super-fast, easily deployable python variant?
I was writing for me, all along.
Mandy Brown is easily one of my favourite writers on the web today.
“Prompt injection explained, with video, slides, and a transcript”
Between training data/instruction poisoning and prompt injections, language models are a complete security shitshow.
“Poisoning Language Models During Instruction Tuning”
So, large AI models are a security shitshow because they can be poisoned through their training data. Turns out they can also be poisoned through instruction tuning.
This essay I wrote back in February remains relevant: “Generative AI is the tech industry’s Hail Mary pass”
What is important for you, and anybody who works in tech, to know, is that this move is desperate, even if the tech ends up doing what it promises.
Finally got around to watching this video where Adam Conover interviews Emily Bender and Timnit Gebru. It’s really good. Incredibly thorough and fun to watch. Highly recommended
It’s grift. With a touch of Qanon-style religious mania.
I just published “Beware of AI pseudoscience and snake oil”
It’s important to be sceptical about the claims made by AI vendors
I was shocked to discover how poor much AI research is and how fond the industry seems to be of pseudoscience
In case you missed it, I have a book out that’s a critical analysis of language and diffusion models
“The Intelligence Illusion: a practical guide to the business risks of Generative AI” illusion.baldurbjarnason.com
Too many AI pundits, pro or con, are either misogynistic dudes who really want AGI to happen, or wide-eyed evangelistic types giving off strong QAnon/“rapture cult” vibes
Trying to broadly keep up with AI discourse on social media gets very off-putting, very quickly 😬
“Artificial General Intelligence and the bird brains of Silicon Valley”
4700 words on the biggest risk of generative AI: believing in the myth of AGI.
“150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting”
“‘It’s the opposite of art’: why illustrators are furious about AI”