“Sorry AI, but User Research is More Than Just Predictive Text”
Feedback would be generic at best, wrong at worst.
... works as a web developer in Hveragerði, Iceland, and writes about the web, digital publishing, and web/product development. These are his notes.
“Competition authorities need to move fast and break up AI”
Without the robust enforcement of competition laws, generative AI could irreversibly cement Big Tech’s advantage, giving a handful of companies power over technology that mediates much of our lives.
“Google CEO peddles #AIhype on CBS 60 minutes - by Emily M. Bender”
If you create ignorance about the training data, of course system performance will be surprising.
I hesitate to link to debunks, because all too often they don’t do much more than help spread the original bunk around. But in this case it’s illustrative of just how blatant it’s getting.
“A Computer Generated Swatting Service Is Causing Havoc Across America”
This sounds like a not good kinda thing.
“AIs can write for us but will we actually want them to? - Bryan Braun - Frontend Developer”
Algogen text and art is just not as useful as the punditry thinks it is.
“AI-Generated Images from AI-Generated Prompts — Adrian Roselli”
“Anyone suggesting ChatGPT, Bard, or other self-described AI tools can generate their alternative text for them is simply being lazy.”
“Building LLM applications for production”
Given how fast things are moving, isn’t anybody integrating an LLM into their product today extremely likely to be stuck with a massively obsolete system in the long term?
“New prompt injection attack on ChatGPT web version. Markdown images can steal your chat data”
“Prompt injection attack on ChatGPT steals chat data - System Weakness”
🤨
This is what I’ve been working on for the past few months.
My new ebook, The Intelligence Illusion: A Practical Guide to the Business Risks of Generative AI, will be out later this month.
“The Great Flowering: Why OpenAI is the new AWS and the New Kingmakers still matter”
This is why it honestly doesn’t matter whether better or more ethical alternatives to OpenAI appear. They’re going to be the default so they need to be held to account.
“I Think I Found a Privacy Exploit in ChatGPT - Development tutorials for modern web development”
Case in point. OpenAI’s privacy issues aren’t limited to AI. They’re just bad actors overall.
“Italy’s new rules for ChatGPT could become a template for the rest of the EU”
For the tech dudes huffing in the crowd that Italy is trying to kill AI: what they’ve outlined is just basic GDPR compliance.
This. General-purpose AI is riskier than specialised AI, so any regulation that promotes it over the other is counterproductive.
“Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI”
This 1997 paper by Philip E. Agre is a fascinating look into the history of AI research. It shows that the field’s issues have been there for a long while.
“Federal privacy watchdog probing OpenAI, ChatGPT following complaint - CBC News”
I’d missed this one when it came out.
Speed was never the issue with horses. People bought cars so they wouldn’t have to shovel horseshit. Then cars became more affordable and people bought them who could never have afforded a horse.