“Thought experiment in the National Library of Thailand | by Emily M. Bender”

The only knowledge it has is knowledge of distribution of linguistic form.

“Tina Turner remembered by Mad Max director George Miller: ‘She was the opposite of a diva’ | Tina Turner | The Guardian”

Wrote a short blog post explaining why I felt the need to make needtoknow.fyi

“‘Generative AI: What You Need To Know’ is a free guide to help you spot AI bullshit”

www.baldurbjarnason.com/2023/ever…

“New EPIC Report Sheds Light on Generative A.I. Harms”

Had a quick read through this report and it’s pretty good. Most of the proposed interventions are a bit “🤨 that’s unlikely to happen” but that doesn’t mean you shouldn’t try.

“Generative AI: What You Need To Know” gives you an overview of the main topics in generative AI in 15 quick-to-read cards, written specifically for a non-technical audience, based on the research I did for my book

Now on a fancy domain 😁

needtoknow.fyi

I’ve rejigged “Generative AI: What You Need To Know” to work as a free resource.

I’m hoping it can help people ground the discussion . Still not decided on the domain and might do a few more edits, so still open to feedback 🙂

“Webinar: Alex North in conversation. Join us at 19:00 BST on Thursday 8 June for a conversation with the bestselling crime writer.”

Won’t make it myself, but this looks interesting.

I wonder how many people realise that the theme of Phillip K. Dick’s Do Androids Dream Electric Sheep?, is precisely the opposite of its adaptation, Blade Runner?

Keep seeing a bunch of literary types make this mistake so I’m guessing regular folks don’t have a chance.

“What Will Transformers Transform?”

If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.

“What Will Transformers Transform?”

GPT-n cannot reason, and it has no model of the world. It just looks at correlations between how words appear in vast quantities of text from the web, without know how they connect to the world. It doesn’t even know there is a world.

“Just Calm Down About GPT-4 Already And stop confusing performance with competence”

No, because it doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language.

“Review by glecharles - The Intelligence Illusion | The StoryGraph”

Whether you’re a true skeptic (like me) or a true believer, The Intelligence Illusion is a splash of lemon juice in the greasy pool of incredulous media coverage.

😁

“TV writer David Simon weighs in on the Writers Guild of America strike : NPR”

If that’s where this industry is going, it’s going to infantilize itself.

“A Well Known URL For Your Personal Avatar - Jim Nielsen’s Blog”

Any reason why this shouldn’t work? I think this could work.

If you have to complain about EU behaviour, complain about the drive to break End-to-End encryption (something that’s honestly likely to be a violation of the European human rights charter, but if passed the legal wrangling will take years).

Predictably the tech industry is reacting with outrage at the EU deciding that the US is “supposedly” a surveillance state and that most data transfers to the US is essentially just a mechanism to bypass EU consumer protections.

Remember back when tech bubbles were lightweight fluff like “NoSQL means you don’t need relational databases anymore. Put everything in a document database and scaling becomes easy!”

Good times.

Apparently “I let ChatGPT control my life for X days/hours” has become a genre of Youtube videos. 🤨

I decided I didn’t like the original title for this week’s newsletter entry, so I changed it 🙂

“Prompts are unsafe, and that means language models are not fit for purpose”

In addition to being potentially vulnerable to black-hat keyword manipulation and plundering the commons, these systems are a big security hazard as designed

“Prompts—and with them language models—are not fit for purpose”

Watch for bait-and-switch. Intentionally or not, some leader-level positions with purportedly large teams turn out to be individual contributor positions

“Thoughts for In-The-Job-Market Product Leaders”

😅 I’ve fallen for this a couple of times 😬

“The Hawthorne Effect or Observer Bias in User Research”

Rather than mitigating it, IME most rely on this effect in order to get the predetermined results they want.

“The Interpretive Dance”

They should not call themselves “I” and they should not refer to themselves and humans as “we.”

“Ban LLMs Using First-Person Pronouns — Crooked Timber”

This is a start. Doesn’t go far enough, but it’s a start.

“Resisting AI: A Review”