“Ayyyyyy Eyeeeee. The lie that raced around the world… | by Cory Doctorow | Jun, 2023 | Medium”

“Notes apps are where ideas go to die. And that’s good”

This is what I use most dedicated notes apps for. But working notes are also a huge part of my process, and those live in separate apps.

“How the media is covering ChatGPT - Columbia Journalism Review”

Incredibly disappointing to see a bunch of smart people do a "no, no, no, I don't actually agree with the literal statement I said I agreed with. I actually agree with a completely different statement I imagined in my head."

“Watch Transitions in Slow Motion in Chrome’s DevTools - Jim Nielsen’s Blog”

“Tomorrow and tomorrow and tomorrow”

That article about an "AI" drone killing its operator in a simulation is a complete fabrication. "Simulation" here means a constructed scenario. No actual "AI" model was created.


“I can just suck up all of the data in the world to build my model and not pay anybody anything”


“WTF? Why is everybody closing up, charging for APIs and access, and suing AI companies? Not fair”

Tech broke the web’s social contract and now everything is closing up

“On Understanding Power and Technology”

The current "existential threat" framing is effective because it fits on a rolling news ticker, and it diverts attention from the harms being created right now.

“Lessons from Soviet Russia on deploying small nuclear generators | daverupert.com”

We're going to need to come up with AI-bubble coping strategies. The epic "AI voice" is taking over media and online discourse.

This is what happened in Iceland during the 2008 bubble, which was the first post-web, pan-societal bubble I experienced. AI is following the same path, IMO.

“What is the real point of all these letters warning about AI?”

Quotes some smart people.

“Biden’s former tech adviser on what Washington is missing about AI - The Washington Post”

This is pretty sensible advice overall, and the US would be better off if it were followed.

“Against Predictive Optimization”

“‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight”

Using language-model chatbots in healthcare and therapy is absolutely going to kill people.

“Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat”

They welcome regulation, as long as it doesn’t get in the way of anything they’re currently doing.

It’s heartening that I’m starting to get anti-AI SEO hustle emails 😄

One thing’s for sure. As long as the primary focus of AI discourse is either AGI nonsense or similar sci-fi, nobody is talking about whether the tech actually works as claimed or not.

Oh, ffs! I come back to work after a bank holiday weekend and see that the AI industry is ramping up its nonsense to new heights.


this is our second blast of abusive traffic from an AWS customer today, apparently from an AI company harvesting Internet Archive texts at an extreme rate

This sort of nonsense is only going to escalate.

“AI statement”

From Clarkesworld

We believe that governments should be seeking advice on this legislation from a considerably wider range of people than just those who profit from this technology

"Excluding GPLed code from training data sets and only training on permissive licenses is disrespectful of the GPL" is a take I hadn't seen before. I don't think I'm better off for having been subjected to it.

I know Harlan Ellison is a bit of a controversial figure (i.e. a dick), but on this he wasn't wrong.

And then they don’t even send you a copy of the DVD!


I wish I were more optimistic about large language models, but everything I'm seeing at the moment leads me to think that the best-case scenario is a massive acceleration of Silicon Valley's worst instincts and an ongoing degradation of our software ecosystem.

“Chile’s Atacama Desert has become a fast fashion dumping ground”