OpenAI: we were literally training the model with customer data techcrunch.com/2023/03/0…

AI fans: What if they weren’t actually training on user data?

“I’m an ER doctor: Here’s what I found when I asked ChatGPT to diagnose my patients”

If my patient in this case had done that, ChatGPT’s response could have killed her.

The tech sphere, at the start of every new innovation, seems to utterly forget that laws exist. Then the realisation sets in through painful experience. Turns out that if you weren’t allowed to do something before, a new technology doesn’t give you a free pass to do it now.

“A quote from Jim Fan”

This theorises that Midjourney is collecting user data to fine-tune its model. Kinda hope this isn’t the case, because otherwise they’re likely to run into issues with the GDPR down the line.

“Blinded by Analogies - by Ethan Mollick - One Useful Thing”

The use of wishful comparisons, where AI is explained by analogy with something that’s entirely different from what it’s actually doing, is a long-standing issue in AI research. Drew McDermott called it “wishful mnemonics” in 1976.

“April 4 - by Rob Horning - Internal exile”

“Socialtrait App”

Good lord. This nonsense is a thing. FFS.

“Synthetic Users: user research without the headaches”

Y’all do realise this is a really bad idea, right? Please tell me people realise this is a bad idea.

“Midjourney CEO Says ‘Political Satire In China Is Pretty Not Okay,’ But Apparently Silencing Satire About Xi Jinping Is Pretty Okay - Techdirt”

🤨

“Copyright lawsuits pose a serious threat to generative AI”

Between the lawsuits, EU regulators, and the various unsolved technical issues, the total dominance of Generative AI is not the certainty it’s made out to be.

“Closed AI Models Make Bad Baselines - Hacking semantics”

“Italy’s ChatGPT ban attracts EU privacy regulators - Reuters”

I told you so.

“Merchant: How AI doomsday hype helps sell ChatGPT - Los Angeles Times”

Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.

“More Everything With AI - Jim Nielsen’s Blog”

“The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders”

This is pretty bad.

“AI as centralizing and distancing technology”

This touches on one of my concerns with how Microsoft and Google are proposing to use AI. It’s about putting AI between people. Separating them in the name of ‘productivity’.

“※ ChatGPT Survey: Performance on NLP datasets”

Related to what I noted recently about these large foundation models being technically flawed. Turns out ChatGPT isn’t actually that good at natural language tasks compared to simpler models.

“SoK: On the Impossible Security of Very Large Foundation Models”

I’ve only had a quick read of this preprint, but it manages both to pull together many of the issues with large language models I’ve seen raised in other papers and to give them a solid, reasoned foundation.

‘Statement from the listed authors of Stochastic Parrots on the “AI pause” letter’

“The web we broke. — Ethan Marcotte”

“AI is going to make teaching worse, but not in the way everyone thinks - Charles Kenneth Roberts”

“Buzzfeed Has Begun Publishing Articles Generated by A.I. — Pixel Envy”

Predictably, the articles are even worse than Buzzfeed’s usual.

“The Impact of AI on Developer Productivity: Evidence from GitHub Copilot”

I can spot at least four serious flaws in this at a glance.

But all you need to know is that most of the authors work for Microsoft or GitHub.

“Manton Reece - Introducing Micro.blog podcast transcripts”

One of the good things to come out of the current AI bubble is improved automatic transcripts.