I just published “Beware of AI pseudoscience and snake oil”
It’s important to be sceptical about the claims made by AI vendors
I was shocked to discover how poor much of the AI research is, and how fond the industry seems to be of pseudoscience
In case you missed it, I have a book out that’s a critical analysis of language and diffusion models
“The Intelligence Illusion: a practical guide to the business risks of Generative AI” illusion.baldurbjarnason.com
Too many AI pundits, pro or con, are either misogynistic dudes who really want AGI to happen, or wide-eyed evangelistic types giving off strong QAnon/“rapture cult” vibes
Trying to broadly keep up with AI discourse on social media gets very off-putting, very quickly 😬
“Artificial General Intelligence and the bird brains of Silicon Valley”
4700 words on the biggest risk of generative AI: believing in the myth of AGI.
“150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting”
“‘It’s the opposite of art’: why illustrators are furious about AI”
“A Short History of Eugenics: From Plato to Nick Bostrom - Truthdig”
“The Luring Test: AI and the engineering of consumer trust”
Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering
I just published “Artificial General Intelligence and the bird brains of Silicon Valley”
It’s a 4700 word extract from my book. Probably the most important part as it tackles the AGI myth and how falling for it clouds your judgement
I haven’t read the “AI does mind-reading! Hyperventilate!” survey. But it should be noted that MRI/fMRI research studies are notoriously unreliable to begin with, and I can’t imagine adding AI into the mix is an improvement.
“Users demand X! Any business that doesn’t do X will fail!”
“Do you have data to support that? Is this something you’ve found with your own users?”
“No, but everybody knows it’s true.”
Every time
“Are Emergent Abilities of Large Language Models a Mirage?”
This is an interesting preprint arguing that the so-called “emergent” abilities of LLMs are, basically, benchmarking artifacts. So much of what’s wrong in AI is down to crap benchmarks
“AI Chatbots Have Been Used to Create Dozens of News Content Farms”
“Rise of the Newsbots: AI-Generated News Websites Proliferating Online”
This is an improvement over other models. Still the wrong metaphor. Tech only deals in systems, which don’t have momentum or direction, just order vs entropy
Entropy always wins in the long run
If you want to make a grown Icelandic socialist cry, this’d do it
Maístjarnan, “The May Star”. Lyrics by Halldór Laxness. Music by Jón Ásgeirsson. Sung by the attendees at the Icelandic Confederation of Labour centennial in 2016
Obligatory International Workers’ Day post
“A research team airs the messy truth about AI in medicine”
In some instances, an AI could lead to faster tests, or speed the delivery of certain medicines, but still not save any more lives.
Common types of “AI beats humans at X” studies
“We compared ChatGPT to a bunch of reddit trolls!”
“We measured the productivity of people doing $5 gig-work tasks that nobody ever actually does, then generalised that to everything”
“We benchmarked the AI on tests in its training data”