“Privacy Violations Shutdown OpenAI ChatGPT and Beg Investigation”
Almost as if centralising an entire industry on the services provided by a couple of companies was a bad idea.
(A bad idea that tech loves: see AWS)
“AI Takes Over Because of Human Hype, Not Machine Intelligence - Jim Nielsen’s Blog”
“Here are more details on Ubisoft’s Ghostwriter AI tool from GDC 2023”
“OpenAI’s policies hinder reproducible research on language models”
People are justifiably frustrated at the fact that the world’s most powerful people are simultaneously incompetent and invincible, ever protected from the consequences of running a risky, rotten economy.
“ChatGPT-4 produces more misinformation than predecessor - NewsGuard”
This shouldn’t come as a surprise. In the research I’ve read, hallucinations are an emergent property that increases with the size of the model
Not disclosing that something is AI-generated is so obviously unethical that I expect the tech industry to fight any and every attempt to mandate disclosure tooth and nail.
“Adobe made an AI image generator”
That this is trained only on images they explicitly have the rights to train on is moving in the right direction. As is bringing outpainting to Photoshop. But this is Adobe, which is genuinely one of the most disliked companies in existence
The site will be available in read-only mode for a limited period afterwards
I had missed this part yesterday. Amazon is planning on nuking a good chunk of photography history.
“Don’t trust AI to talk accurately about itself: Bard wasn’t trained on Gmail”
“Web fingerprinting is worse than I thought - Bitestring’s Blog”
“Chatbots, deepfakes, and voice clones: AI deception for sale”
From the US FTC. Also looks like pre-existing regulations might apply quite well to generative AI
“You’re Doing It Wrong: Notes on Criticism and Technology Hype”
On criti-hype and how critics often paradoxically echo the hyperbolic promises of the tech industry
“Great, Dating Apps Are Getting More Hellish Thanks to AI Chatbots”
Tech is digging up its “all regulations lead to monopolies” narrative. Meanwhile something as simple as requiring that all AI-driven communications be disclosed would prevent a wide range of abuses.
A worrying aspect of the wholesale adoption of LLMs in all of our productivity and coding tools is that it seems feasible to poison their training data, both broadly and targeted, at a fairly low cost
“On Generative AI, phantom citations, and social calluses”
The Generative AI age is going to be exhausting and unpleasant, isn’t it?
The short answer is that they used ChatGPT as if it was an oracle: a trustworthy source of knowledge that did not require any sort of verification.
This is a direct consequence of how tech portrays these tools
“The Nightmare of AI-Powered Gmail Has Arrived”
Are you excited for your co-workers to become way more verbose, turning every tapped-out “Sounds good” into a three-paragraph letter?
And:
Are you looking forward to wondering if that lovely condolence letter from a long-lost friend was entirely generated by software or if he just smashed the “More Heartfelt” button before sending it?
We’ve had over thirty years of experience in assessing the credibility of online source. Dismissing the fact that you can’t trust LLMs with “how is that different from a search engine” is just arrant nonsense. LLMs don’t have any of the markers we rely on for assessing sources.
“No Doctor Required: Autonomy, Anomalies, and Magic Puddings – Lauren Oakden-Rayner”
By calling the device a normal detector, it makes us think that the model is only responsible for low risk findings.