“Web fingerprinting is worse than I thought - Bitestring’s Blog”
“Chatbots, deepfakes, and voice clones: AI deception for sale”
From the US FTC. Also looks like pre-existing regulations might apply quite well to generative AI
“You’re Doing It Wrong: Notes on Criticism and Technology Hype”
On criti-hype and how critics often paradoxically echo the hyperbolic promises of the tech industry
“Great, Dating Apps Are Getting More Hellish Thanks to AI Chatbots”
Tech is digging up its “all regulations lead to monopolies” narrative. Meanwhile something as simple as requiring that all AI-driven communications be disclosed would prevent a wide range of abuses.
A worrying aspect of the wholesale adoption of LLMs in all of our productivity and coding tools is that it seems feasible to poison their training data, both broadly and in targeted ways, at a fairly low cost
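As a toy illustration (mine, not from any of the linked pieces), here's a minimal sketch of targeted poisoning against a naive bigram autocomplete trained on scraped text. The documents and domains are made up; the point is just that a handful of planted pages can outvote the legitimate ones for a chosen prompt:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies -- a crude stand-in for 'learning' from scraped text."""
    model = defaultdict(Counter)
    for doc in corpus:
        words = doc.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def complete(model, word):
    """Suggest the most frequent continuation seen during training."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

# Legitimate scraped pages (hypothetical).
clean = ["download the installer from example.org"] * 5

# The attacker only needs to plant slightly more pages than the scraper
# found legitimate ones -- cheap to do on the open web.
poisoned = ["download the installer from attacker.example"] * 6

model = train_bigram(clean + poisoned)
print(complete(model, "from"))  # -> "attacker.example"
```

A bigram counter is obviously not an LLM, but the failure mode scales up: whatever is frequent in the training data becomes the model's "truth", and nothing in the pipeline verifies it.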
“On Generative AI, phantom citations, and social calluses”
The Generative AI age is going to be exhausting and unpleasant, isn’t it?
The short answer is that they used ChatGPT as if it was an oracle: a trustworthy source of knowledge that did not require any sort of verification.
This is a direct consequence of how tech portrays these tools
“The Nightmare of AI-Powered Gmail Has Arrived”
Are you excited for your co-workers to become way more verbose, turning every tapped-out “Sounds good” into a three-paragraph letter?
And:
Are you looking forward to wondering if that lovely condolence letter from a long-lost friend was entirely generated by software or if he just smashed the “More Heartfelt” button before sending it?
We’ve had over thirty years of experience in assessing the credibility of online sources. Dismissing the fact that you can’t trust LLMs with “how is that different from a search engine?” is just arrant nonsense. LLMs don’t have any of the markers we rely on for assessing sources.
“No Doctor Required: Autonomy, Anomalies, and Magic Puddings – Lauren Oakden-Rayner”
Calling the device a normal detector makes us think that the model is only responsible for low-risk findings.
I’m not against using generative AI tools, but AFAICT there are very few trustworthy actors in the field: OpenAI, Facebook, Google, Stability AI, and Midjourney all have track records that are dodgy, to say the least
The way they’re rushing into AI doesn’t inspire trust either
“Google won’t honor medical leave during its layoffs, outraging employees”
She was let go by Google from her hospital bed shortly after giving birth. She worked at the company for nine years
Tech cos are run by scumbags
“Why open data is critical during review: An example - Steve Haroz’s blog”
But OpenAI something something “somebody might accidentally utter the name of God and usher in the end times”
“What do tools like ChatGPT mean for Math and CS Education?”
Nothing could be farther from how a calculator works.
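To make the contrast concrete (my sketch, not the article's): a calculator deterministically evaluates an expression and is either right or broken, while a language model samples from a probability distribution over plausible-looking continuations. The distribution below is invented for illustration:

```python
import random

def calculator(expr: str) -> int:
    # Deterministic: the same expression always yields the same, correct value.
    return eval(expr, {"__builtins__": {}})  # toy evaluator; never use eval on untrusted input

def llm_style_answer(prompt: str) -> str:
    # Stochastic: weighted sampling over plausible answers, none of them verified.
    # (The prompt is decorative -- nothing here actually evaluates it.)
    plausible = {"112": 0.6, "121": 0.25, "1120": 0.15}  # invented distribution
    return random.choices(list(plausible), weights=plausible.values(), k=1)[0]

print(calculator("56 * 2"))                               # 112, every time
print([llm_style_answer("56 * 2 =") for _ in range(5)])   # varies from run to run
```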
“Epic’s overhaul of a flawed algorithm shows why AI oversight is a life-or-death issue”
The ratio of false alarms to true positives was about 30 to 1, according to CT Lin, the health system’s chief medical information officer.
But I’ll just note that labor economics has an old, old term for [gestures around] all this: de-skilling.
Read this.
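For scale (my arithmetic, from the 30-to-1 figure quoted above): that ratio means each alert had roughly 3% precision, so clinicians saw about 31 alarms for every real case.

```python
false_alarms, true_positives = 30, 1
precision = true_positives / (true_positives + false_alarms)
print(f"{precision:.1%}")  # 3.2% -- the alert fatigue behind the overhaul
```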
“Social sponges: Gendered brain development comes from society, not biology”
“Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence”
I don’t know if I’ve just been reading too many academic papers, but this looks like fairly straightforward guidance
The belief in this kind of AI as actually knowledgeable or meaningful is actively dangerous. It risks poisoning the well of collective thought, and of our ability to think at all.