I’m not against using generative AI tools, but AFAICT there are very few trustworthy actors in the field: OpenAI, Facebook, Google, Stability AI, and Midjourney all have track records that are dodgy, to say the least.

The way they’re rushing into AI doesn’t help build trust, either.

“‘ChatGPT said I did not exist’: how artists and writers are fighting back against AI - Artificial intelligence (AI) - The Guardian”

“Google won’t honor medical leave during its layoffs, outraging employees”

She was let go by Google from her hospital bed shortly after giving birth. She had worked at the company for nine years.

Tech cos are run by scumbags

“Why open data is critical during review: An example - Steve Haroz’s blog”

But OpenAI something something “somebody might accidentally utter the name of God and usher in the end times”

“What do tools like ChatGPT mean for Math and CS Education?”

Nothing could be further from how a calculator works.

“Epic’s overhaul of a flawed algorithm shows why AI oversight is a life-or-death issue”

The ratio of false alarms to true positives was about 30 to 1, according to CT Lin, the health system’s chief medical information officer.

“Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy”

“Tooled. — Ethan Marcotte”

But I’ll just note that labor economics has an old, old term for [gestures around] all this: de-skilling.

Read this.

“OpenAI’s GPT-4 Is Closed Source and Shrouded in Secrecy”

“Social sponges: Gendered brain development comes from society, not biology”

“Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence”

I don’t know if I’ve just been reading too many academic papers, but this looks like fairly straightforward guidance.

“The stupidity of AI”

The belief in this kind of AI as actually knowledgeable or meaningful is actively dangerous. It risks poisoning the well of collective thought, and of our ability to think at all.

JFC, it’s like the past few years’ discussions on bias in training data and issues with shortcut learning in AI models just didn’t happen at all?

Like, our industry didn’t take any of it in, did it?

“Modern Font Stacks”

“How to tell if AI threatens YOUR job”

I’ve come to a pretty grim, if obvious, realization: the more excited someone is by the prospect of AI making their job easier, the more they should be worried.

Interesting take.

Remember when the ostensibly reasonable side of the tech influencer sphere was saying that web3 was too big to fail?

Good times. Good times.

Not a deranged industry at all.

“Microsoft just laid off one of its responsible AI teams”

Building an ethics and responsibility team for AI is a productive way of getting all the troublemakers into one room to get rid of them all at once.

“The climate cost of the AI revolution • Wim Vanderbauwhede”

From this it is clear that large-scale adoption of LLMs would lead to unsustainable increases in ICT CO₂ emissions.

“All data is health data. – Hi, I’m Heather Burns”

It is merely contextual. So you need to think of all of your data inputs, collections, and sharing, in that contextual way.

“Don’t ask an AI for plant advice • Tradescantia Hub”

Using AI for specialist problems (which is most of them) is a trap. The AI will lie confidently, and you won’t have the expertise to spot the lie.

“TBM 205: “Process” vs. Systems & Habits - by John Cutler”

“It Took Me Nearly 40 Years To Stop Resenting Ke Huy Quan - Decider”

This is so touching.

“Craft vs Industry: Separating Concerns - hello, yes. I’m Thomas Michael Semmler: CSS Developer, Designer & Developer from Vienna, Austria”

Lovely to see AI critics split into adversarial factions over Lilliputian which-end-of-the-egg details while the hype crowd stands united around bullshit and false promises.

“Vanderbilt Apologizes for ChatGPT-Generated Email”

Like I’ve said before, the technical term for somebody who uses AI to write their emails is “asshole”.