“Meta in Myanmar, Part III: The Inside View”

Again, all the content warnings. But also this:

I think that if you make a machine and hand it out for free to everyone in the world, you’re at least partially responsible for the harm that the machine does.

“Meta in Myanmar, Part II: The Crisis - Erin Kissane’s small internet website”

ALL the content warnings.

Fuck.

“Robin Rendle — The Cascade”

“Ro Salarian: rosalarian: I spent ten years building up a…”

I don’t want to make “content,” I want to make comics, I want to make art, and I want to do it in a space that is mine. I’m not sure there’s a place for that anymore.

“There aren’t really big gaps in OKLCH where it just doesn’t render any color at all. - Chris Coyier”

We have, what, maybe a decade left of our current hyperconnected global tech industry before it becomes unsustainable because of the climate crisis, and we’re wasting it on generative “AI”?

That’s where we’ve decided to spend the time that’s left?

Sheesh.

“Who Is OpenAI’s Sam Altman? Meet the Oppenheimer of Our Age”

“Target says it’s closing 9 stores due to theft. The crime data tells a different story.”

“Imposter Syndrome Driven Design and a Bedfordshire Clanger - Stephanie Stimac’s Blog”

“Why side projects are essential for creatives—and employers should embrace them | Jonas Downey”

“My Books Have Been Banned or Challenged in 16 States”

Even if the book banners do read my books, and my writing is effective and they empathize with the queer characters, their agenda is stronger than their empathy.

“Fair Warning: For as long as there has been AI research, there have been credible critiques about the risks of AI boosterism”

From 2020 but still absolutely relevant.

“Navigation API · Issue #34 · WebKit/standards-positions · GitHub”

If we get both the Navigation API and transitions, then we’ll have fixed most issues with both SPAs and non-SPA sites, making the choice largely one of implementation.
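For context, the Navigation API lets a page intercept same-origin navigations and render the new view in place while the URL updates normally, which is the piece SPAs previously had to fake with the History API. A minimal sketch, assuming a browser that exposes the `navigation` global (the `setupSpaNavigation` and `render` names are illustrative, not from any library):

```javascript
// Sketch: intercepting same-origin navigations with the Navigation API.
// `render` is a hypothetical function that draws the view for a given URL.
function setupSpaNavigation(render) {
  const nav = globalThis.navigation;
  if (!nav) return false; // Navigation API unavailable (older browsers, non-browser runtimes)

  nav.addEventListener("navigate", (event) => {
    // Skip navigations we can't or shouldn't handle in-page:
    // cross-origin ones, hash-only changes, and downloads.
    if (!event.canIntercept || event.hashChange || event.downloadRequest) return;

    event.intercept({
      async handler() {
        // Render the destination in place; the browser handles the
        // URL change, history entry, and back/forward for us.
        await render(new URL(event.destination.url));
      },
    });
  });
  return true;
}
```

The appeal, as the quote suggests, is that the same routing code works whether you treat the site as an SPA or not: when the API is missing, navigations simply proceed as full page loads.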

I expected the AI bubble to get bad, but the scale of the bullshit still hits hard.

Only those who weren’t likely to get involved in it anyway seem to have heeded any warnings.

Everything I see in the LLM space is close to the diametric opposite of what I would have recommended.

“Seven lies in four sentences”

Rumors that LLMs have solved the hallucination problem are greatly exaggerated.

9 months later—an eternity in current AI—it’s still common for LLMs to spout complete, utter bullshit.

LLMs are not fit for any sort of knowledge work

“Don’t Build Microservices, Pursue Loose Coupling - DevOps.com”

“Adactio: Journal—Websites in the dock”

I wonder if there’s much point using wrappers like Electron any more?

IMO there are only two concrete reasons left: 1. You need to make a document-oriented app. 2. The design really needs native menus to work.

And I wrote about how making or using generative models is, all else being equal, a dick move.

www.baldurbjarnason.com/2023/ai-i…

I also published my weeknotes yesterday, with the links for the week.

www.baldurbjarnason.com/2023/week…

Quick reminder that I’ve set up a pre-order page for the print edition of Out of the Software Crisis

softwarecrisis.dev/letters/p…

“Adactio: Journal—Crawlers”

“Nine things automated accessibility tests can’t test | daverupert.com”

“Return to Office Is Bullshit And Everyone Knows It - Dhole Moments”

Executives are genuinely bad at their jobs, part the infinite.

“Boys will be boys | Revert to Saved: A blog about design, gaming and technology”

I’m not sure there’s another phrase that infuriates me quite so much.

“NASA satellites reveal restoration power of beavers”