“Linda Gottfredson - Southern Poverty Law Center”

This is the author of the definition of intelligence that Microsoft’s “Sparks of Artificial General Intelligence” paper is built on.

“‘I didn’t give permission’: Do AI’s backers care about data law breaches? - Artificial intelligence (AI) - The Guardian”

“‘It’s called stealing’: new allegations of plagiarism against Roy Lichtenstein”

Lichtenstein and Erró are just the absolute worst. Horrible artists; horrible people.

“The Company Behind Stable Diffusion Appears to Be At Risk of Going Under”

Stability AI raised $100 million last year and has already spent a significant portion of those funds

The AI grift ain’t cheap.

“Pixels of the Week – April 10, 2023 by Stéphanie Walter - UX Researcher & Designer.”

“Tech Companies Are Ruining Their Apps, Websites, Internet”

These companies don’t make good software, and treating their processes as “best practices” means you won’t either.

I ran into this blog post the other day and enjoyed it enormously.

“Worldwide Story Structures”

“I tried out SyntheticUsers, so you don’t have to”

Niloufar Salehi, who is way more generous with their time than I am, went and tested the SyntheticUsers service.

We’re going to see so much more of this kind of bullshit AI crap.

OpenAI: we were literally training the model with customer data techcrunch.com/2023/03/0…

AI fans: What if they weren’t actually training on user data?

“I’m an ER doctor: Here’s what I found when I asked ChatGPT to diagnose my patients”

If my patient in this case had done that, ChatGPT’s response could have killed her.

At the start of every new innovation, the tech sphere seems to utterly forget that laws exist. Then the realisation sets in, through painful experience: if you weren’t allowed to do something before, a new technology doesn’t give you a free pass to do it now.

“A quote from Jim Fan”

This theorises that Midjourney is collecting user data to fine-tune its model. Kinda hope this isn’t the case, because otherwise they’re likely to run into issues with the GDPR down the line.

“Blinded by Analogies - by Ethan Mollick - One Useful Thing”

The use of wishful comparisons, where AI is explained by analogy with something that’s entirely different from what it’s actually doing, is a long-standing issue in AI research. Drew McDermott called it “wishful mnemonics” in 1976.

“April 4 - by Rob Horning - Internal exile”

“Socialtrait App”

Good lord. This nonsense is a thing. FFS.

“Synthetic Users: user research without the headaches”

Y’all do realise this is a really bad idea, right? Please tell me people realise this is a bad idea.

“Midjourney CEO Says ‘Political Satire In China Is Pretty Not Okay,’ But Apparently Silencing Satire About Xi Jinping Is Pretty Okay - Techdirt”

🤨

“Copyright lawsuits pose a serious threat to generative AI”

Between the lawsuits, EU regulators, and the various unsolved technical issues, the total dominance of Generative AI is not the certainty it’s made out to be.

“Closed AI Models Make Bad Baselines - Hacking semantics”

“Italy’s ChatGPT ban attracts EU privacy regulators - Reuters”

I told you so.

“Merchant: How AI doomsday hype helps sell ChatGPT - Los Angeles Times”

Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.

“More Everything With AI - Jim Nielsen’s Blog”

“The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders”

This is pretty bad.

“AI as centralizing and distancing technology”

This touches on one of my concerns about how Microsoft and Google are proposing to use AI. It’s about putting AI between people, separating them in the name of ‘productivity’.