“On Understanding Power and Technology”

The current “existential threat” framing is effective because it fits on a rolling news ticker and diverts attention from the harms being created right now.

“Lessons from Soviet Russia on deploying small nuclear generators | daverupert.com”

We’re going to need to come up with AI bubble coping strategies. The epic “AI voice” is taking over media and online discourse

This is what happened in Iceland in the 2008 bubble, which was the first post-web pansocietal bubble I’ve experienced. AI is following the same path IMO

“What is the real point of all these letters warning about AI?”

Quotes some smart people.

“Biden’s former tech adviser on what Washington is missing about AI - The Washington Post”

This is pretty sensible advice overall, and the US would be better off if it were followed.

“Against Predictive Optimization”

“‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordered eating to lose weight”

Using language-model chatbots in healthcare and therapy is absolutely going to kill people.

“Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat”

They welcome regulation, as long as it doesn’t get in the way of anything they’re currently doing.

It’s heartening that I’m starting to get anti-AI SEO hustle emails 😄

One thing’s for sure. As long as the primary focus of AI discourse is either AGI nonsense or similar sci-fi, nobody is talking about whether the tech actually works as claimed or not.

Oh, ffs! I come back to work after a bank holiday weekend and see that the AI industry is ramping up its nonsense to new heights.

::sighs::

This is our second blast of abusive traffic from an AWS customer today, apparently from an AI company harvesting Internet Archive texts at an extreme rate.

This sort of nonsense is just going to escalate

“AI statement”

From Clarkesworld

We believe that governments should be seeking advice on this legislation from a considerably wider range of people than just those who profit from this technology

“Excluding GPLed code from training data sets and only training on permissive licenses is disrespectful of the GPL” is a take I hadn’t seen before. Don’t think I’m better off for having been subjected to it

I know Harlan Ellison is a bit of a controversial figure (i.e. a dick), but on this he wasn’t wrong.

And then they don’t even send you a copy of the DVD!

www.youtube.com/watch

I wish I was more optimistic about large language models, but everything I’m seeing at the moment leads me to think that the best case scenario is a massive acceleration of Silicon Valley’s worst instincts and an ongoing degradation of our software ecosystem.

“Chile’s Atacama Desert has become a fast fashion dumping ground”

“How Congress Fell for OpenAI and Sam Altman’s AI Magic Tricks”

“Workers Are Terrified About AI, So What Can They Do About It?”

“Brown M&Ms | blarg”

Canary questions.

“Superintelligence: The Idea That Eats Smart People”

What it really is, is a form of religion. People have called belief in a technological Singularity the “nerd Apocalypse”, and it’s true.

From 2016. Still accurate

“Opinion: AI tools like ChatGPT are built on mass copyright infringement - The Globe and Mail”

“Absentee Capitalism - Ed Zitron’s Where’s Your Ed At”

Executive excitement around generative AI is borne from these disconnected economics, because none of these people actually create anything.

“The Next Larger Context. “Always design a thing by considering… | by Camille Fournier | May, 2023 | Medium”

“Optimum tic-tac-toe”

> Something to keep in mind the next time someone tries to sell you a large language model for expert advice.