“Where does my computer get the time from?”

It strikes me that ChatGPT-style fine-tuning is in effect specifically optimising for The Barnum or Forer effect. Answers that feel more accurate get rated as more accurate and promoted. The entire AI bubble might just be based on the accidental automation of a carnival psychic con.

“Why we’re bad at CSS”

I like this. Personally, tho, I think it’s less that the industry is bad at CSS, more that it’s bad at software dev (seriously bad). It’s just more obvious in CSS because of its structure

“Markdown images are an anti-pattern”

HTML image markup is also easier to remember, IMO.

“Origin Stories: Plantations, Computers, and Industrial Control”

“Five Things: May 25, 2023”

“Thought experiment in the National Library of Thailand | by Emily M. Bender”

The only knowledge it has is knowledge of distribution of linguistic form.

“Tina Turner remembered by Mad Max director George Miller: ‘She was the opposite of a diva’ | Tina Turner | The Guardian”

Wrote a short blog post explaining why I felt the need to make needtoknow.fyi

“‘Generative AI: What You Need To Know’ is a free guide to help you spot AI bullshit”

www.baldurbjarnason.com/2023/ever…

“New EPIC Report Sheds Light on Generative A.I. Harms”

Had a quick read through this report and it’s pretty good. Most of the proposed interventions are a bit “🤨 that’s unlikely to happen” but that doesn’t mean you shouldn’t try.

“Generative AI: What You Need To Know” gives you an overview of the main topics in generative AI in 15 quick-to-read cards, written specifically for a non-technical audience, based on the research I did for my book

Now on a fancy domain 😁

needtoknow.fyi

I’ve rejigged “Generative AI: What You Need To Know” to work as a free resource.

I’m hoping it can help people ground the discussion . Still not decided on the domain and might do a few more edits, so still open to feedback 🙂

“Webinar: Alex North in conversation. Join us at 19:00 BST on Thursday 8 June for a conversation with the bestselling crime writer.”

Won’t make it myself, but this looks interesting.

I wonder how many people realise that the theme of Phillip K. Dick’s Do Androids Dream Electric Sheep?, is precisely the opposite of its adaptation, Blade Runner?

Keep seeing a bunch of literary types make this mistake so I’m guessing regular folks don’t have a chance.

“What Will Transformers Transform?”

If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.

“What Will Transformers Transform?”

GPT-n cannot reason, and it has no model of the world. It just looks at correlations between how words appear in vast quantities of text from the web, without know how they connect to the world. It doesn’t even know there is a world.

“Just Calm Down About GPT-4 Already And stop confusing performance with competence”

No, because it doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language.

“Review by glecharles - The Intelligence Illusion | The StoryGraph”

Whether you’re a true skeptic (like me) or a true believer, The Intelligence Illusion is a splash of lemon juice in the greasy pool of incredulous media coverage.

😁

“TV writer David Simon weighs in on the Writers Guild of America strike : NPR”

If that’s where this industry is going, it’s going to infantilize itself.

“A Well Known URL For Your Personal Avatar - Jim Nielsen’s Blog”

Any reason why this shouldn’t work? I think this could work.

If you have to complain about EU behaviour, complain about the drive to break End-to-End encryption (something that’s honestly likely to be a violation of the European human rights charter, but if passed the legal wrangling will take years).

I decided I didn’t like the original title for this week’s newsletter entry, so I changed it 🙂

“Prompts are unsafe, and that means language models are not fit for purpose”

In addition to being potentially vulnerable to black-hat keyword manipulation and plundering the commons, these systems are a big security hazard as designed

“Prompts—and with them language models—are not fit for purpose”

Watch for bait-and-switch. Intentionally or not, some leader-level positions with purportedly large teams turn out to be individual contributor positions

“Thoughts for In-The-Job-Market Product Leaders”

😅 I’ve fallen for this a couple of times 😬

“The Hawthorne Effect or Observer Bias in User Research”

Rather than mitigating it, IME most rely on this effect in order to get the predetermined results they want.