“AI chatbots lose money every time you use them. That’s a problem.”

AI chatbots have a problem: They lose money on every chat.

This is such a fucking bubble.

“Microsoft 365, including Outlook and Word, goes offline for thousands”

“Why do we employ all these people? They clearly aren’t doing anything important! Just fire them.”

Also, something, something cloud software.

“Modern software quality, or why I think using language models for programming is a bad idea”

This massive 7000-word essay is based on a talk I gave at Hakkavélin, a hackerspace in Reykjavík.

“Dear Stack Overflow, Inc.”

Specifically, moderators are no longer allowed to remove AI-generated answers on the basis of being AI-generated, outside of exceedingly narrow circumstances

“Tech Elite’s AI Ideologies Have Racist Foundations, Say AI Ethicists”

“Speed and Efficiency are not Human Values - by John Warner”

There’s a good chance that in a year or two we’ll see a spike in hard-to-replicate bugs, where software behaves erratically with frequent but unpredictable declines in quality. The cause will turn out to be a hastily integrated language model somewhere in the system.
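To make that failure mode concrete, here’s a minimal sketch (everything in it is hypothetical; the “model” is just a random stub standing in for a hosted LLM call): once you splice a language model into an otherwise deterministic pipeline, identical inputs can fail intermittently, and never the same way twice, which is exactly the kind of bug that’s hard to replicate.

```python
import json
import random


def fake_llm_extract(text: str) -> str:
    """Hypothetical stub standing in for a hosted language model call.
    Real models are similarly nondeterministic unless tightly constrained."""
    if random.random() < 0.9:
        return '{"amount": 42, "currency": "USD"}'
    # Sometimes the "model" wraps its answer in prose instead of strict JSON.
    return 'Sure! Here is the JSON: {"amount": 42}'


def parse_invoice(text: str) -> dict:
    # Hastily integrated: assumes the model always returns strict JSON.
    return json.loads(fake_llm_extract(text))


if __name__ == "__main__":
    failures = 0
    for _ in range(1000):
        try:
            parse_invoice("Invoice total: $42")
        except json.JSONDecodeError:
            failures += 1
    # Fails a different number of times on every run.
    print(f"{failures} of 1000 identical requests failed")
```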

“Crypto collapse? Get in loser, we’re pivoting to AI – Attack of the 50 Foot Blockchain”

“Maps To The ‘Gold’”

The current state of the art’s been creeping up on us for quite some time now, and in the cold light of day a lot of it turns out to be Fool’s Gold

“Ayyyyyy Eyeeeee. The lie that raced around the world… | by Cory Doctorow | Jun, 2023 | Medium”

“Notes apps are where ideas go to die. And that’s good”

This is what I use most dedicated notes apps for. But working notes are also a huge part of my process, and those are in separate apps.

“How the media is covering ChatGPT - Columbia Journalism Review”

Incredibly disappointing to see a bunch of smart people do a “nonono I don’t actually agree with the literal statement that I said I agreed with. I actually agree with a completely different statement I imagined in my head”.

“Watch Transitions in Slow Motion in Chrome’s DevTools - Jim Nielsen’s Blog”

“Tomorrow and tomorrow and tomorrow”

That article about an “AI” drone killing its operator in a simulation is a complete fabrication. “Simulation” here means a constructed scenario. No actual “AI” model was created.

twitter.com/harris_ed…

“I can just suck up all of the data in the world to build my model and not pay anybody anything”

Later…

“WTF? Why is everybody closing up, charging for APIs and access, and suing AI companies? Not fair”

Tech broke the web’s social contract, and now everything is closing up.

“On Understanding Power and Technology”

The current “existential threat” framing is effective because it fits on a rolling news ticker and diverts attention from the harms being created right now.

“Lessons from Soviet Russia on deploying small nuclear generators | daverupert.com”

We’re going to need to come up with AI bubble coping strategies. The epic “AI voice” is taking over media and online discourse.

This is what happened in Iceland in the 2008 bubble, which was the first post-web pan-societal bubble I’ve experienced. AI is following the same path, IMO.

“What is the real point of all these letters warning about AI?”

Quotes some smart people.

“Biden’s former tech adviser on what Washington is missing about AI - The Washington Post”

This is pretty sensible advice overall, and the US would be better off if it were followed.

“Against Predictive Optimization”

“‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordered eating to lose weight”

Using language model chatbots in healthcare and therapy is absolutely going to kill people.

“Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat”

They welcome regulation, as long as it doesn’t get in the way of anything they’re currently doing.