That AI is providing some companies with rhetorical cover for layoffs they were planning anyway does not mean those jobs will be replaced by AI, nor does it mean that replacing them is genuinely the plan.
(AI is bloody expensive)
“Schumacher family planning legal action over AI ‘interview’ with F1 great”
What the actual fuck are people thinking? Just further proof that “AI” creates brainworms.
‘Discord’s New “AI” Chatbot Is a Useless, Miserable Nightmare’
“Evaluating Verifiability in Generative Search Engines”
On average, a mere 51.5% of generated sentences are fully supported by citations and only 74.5% of citations support their associated sentence.
The difference between the AI crowd and the rest of us is that they’re going to think this is an excellent result.
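For context, those two numbers are essentially a recall and a precision over human citation-support judgements: the share of generated sentences fully backed by their citations, and the share of citations that actually support their sentence. A minimal sketch of the arithmetic, assuming annotated data (the types and names here are hypothetical, not the paper’s code):

```typescript
// Hypothetical annotation record: one per generated sentence, with a
// human verdict on whether the sentence is fully supported and one
// verdict per citation attached to it.
interface AnnotatedSentence {
  fullySupported: boolean;     // every claim in the sentence is backed
  citationVerdicts: boolean[]; // does each citation support the sentence?
}

// Citation recall: fraction of sentences fully supported by their citations.
function citationRecall(sentences: AnnotatedSentence[]): number {
  if (sentences.length === 0) return 0;
  const supported = sentences.filter((s) => s.fullySupported).length;
  return supported / sentences.length;
}

// Citation precision: fraction of all citations that support their sentence.
function citationPrecision(sentences: AnnotatedSentence[]): number {
  const verdicts = sentences.flatMap((s) => s.citationVerdicts);
  if (verdicts.length === 0) return 0;
  return verdicts.filter(Boolean).length / verdicts.length;
}

// With the paper’s reported averages these come out around 0.515 and 0.745.
```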
My friend, Tom Abba, has released a narrative experience that weaves together a website and a book of handmade collages. All for £25 (UK).
“OpenAI’s hunger for data is coming back to bite it”
These methods, and the sheer size of the data set, mean tech companies tend to have a very limited understanding of what has gone into training their models.
“See the websites that make AI bots like ChatGPT sound so smart - Washington Post”
This kind of documentation on AI training data should come from the companies themselves, not journalists.
“Is Critical Thinking the Most Important Skill for Software Engineers? - The Pragmatic Engineer”
“Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say”
One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review. Managers protested that it was getting in the way of their “real work,” the person said.
I’ve put together a web page for my soon-to-be-released book on the business risks of language and diffusion models. First pass, but I think the page gets the points across.
“Offline Is Just Online With Extreme Latency - Jim Nielsen’s Blog”
“The Calm Web: A Solution to Our Scary and Divisive Online World - Calibre”
“Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems - The New York Times”
If you had any doubt that these language models were biased as hell, turns out Reddit is a big part of their training data.
“Google calls for relaxing of Australia’s copyright laws so AI can mine websites for information”
Very few countries need new AI regulation. They just need to be less lax in enforcing the laws they already have.
“AI Users Are Neither AI Nor Users - by Debbie Levitt - Apr, 2023 - R Before D”
These are not users. Period, end of story.
“Sorry AI, but User Research is More Than Just Predictive Text”
Feedback would be generic at best, wrong at worst.
“Competition authorities need to move fast and break up AI”
Without the robust enforcement of competition laws, generative AI could irreversibly cement Big Tech’s advantage, giving a handful of companies power over technology that mediates much of our lives.
“Google CEO peddles #AIhype on CBS 60 minutes - by Emily M. Bender”
if you create ignorance about the training data, of course system performance will be surprising.
I hesitate to link to debunks, because all too often they do little more than help spread the original bunk around. But in this case it’s illustrative of just how blatant it’s getting.
“A Computer Generated Swatting Service Is Causing Havoc Across America”
This sounds like a not good kinda thing.
“AIs can write for us but will we actually want them to? - Bryan Braun - Frontend Developer”
Algogen text and art are just not as useful as the punditry thinks they are.