
We Broke Writing and Nobody Wants to Talk About It

📖 4 min read · 671 words · Updated Mar 31, 2026

“I was forced to use AI until the day I was laid off.” That’s not a hypothetical—it’s what actual copywriters are saying right now, according to a recent investigation by Blood in the Machine. They were handed AI tools, told to “enhance productivity,” and then shown the door when management realized the AI could just… keep going without them.

I review AI toolkits for a living. I test the latest models, compare features, write up what works and what falls flat. But lately, I’ve been feeling something I didn’t expect: nostalgia for the mess we left behind.

The Efficiency Trap

Every toolkit I review promises the same thing: faster content, better output, more scale. And they deliver. You can generate a thousand words in seconds. You can A/B test fifty headlines before breakfast. You can populate an entire content calendar while your coffee cools.

But speed isn’t the same as value. And somewhere in our rush to optimize everything, we forgot that writing was never supposed to be frictionless.

The New York Times recently published an opinion piece from a creative writing professor describing what AI is doing to students. They’re not learning to write anymore—they’re learning to edit AI output. They’re not developing voice—they’re developing prompts. The struggle, the revision, the painful process of figuring out what you actually want to say? That’s being automated away.

What We Actually Lost

Pre-AI writing was slow. It was inefficient. You’d stare at a blank page. You’d write garbage, delete it, write more garbage. You’d read your work out loud and cringe. You’d revise seventeen times and still feel uncertain.

That uncertainty was the point.

Writing forced you to think. Not just about what to say, but why you were saying it. Every sentence was a decision. Every word choice mattered because you put it there, not because an algorithm suggested it ranked well for engagement.

Now? I watch people treat writing like a manufacturing problem. Input requirements, output content, optimize for metrics. The toolkits I review are getting better at mimicking human writing, but they’re also training humans to write like machines.

The Reviewer’s Dilemma

Here’s my problem: I can’t tell you these tools don’t work. They do. I’ve tested dozens of them. They’re fast, they’re capable, and they’re getting cheaper. For certain use cases—documentation, basic summaries, templated content—they’re genuinely useful.

But I’m watching an entire generation of writers get laid off because companies decided “good enough” content generated at scale beats great content created by humans. I’m reading student essays that sound like they were written by committee. I’m seeing the craft of writing reduced to prompt engineering.

And I’m complicit. Every positive review I write, every toolkit I recommend, every efficiency gain I highlight—I’m helping build the system that’s replacing the thing I actually care about.

What Comes Next

I don’t have a solution. I’m not going to tell you to abandon AI tools or pretend they don’t exist. I’m not going to romanticize the “good old days” when writing was harder and slower and paid worse than it does now (which is saying something).

But I am going to be more honest about what we’re trading away. When I review a toolkit that promises to “10x your content output,” I’m going to ask: output of what? When I test a tool that “writes like a human,” I’m going to question whether that’s actually what we want.

The pre-AI writing era wasn’t perfect. It was gatekept, it was inefficient, and plenty of bad writing got published. But at least when you read something, you knew a person struggled to create it. You knew someone made choices, took risks, and put their name on work they weren’t entirely sure about.

That vulnerability—that human uncertainty—is what made writing worth reading. And I’m not sure any toolkit can replicate that, no matter how many parameters it has.

I’ll keep reviewing the tools. But I’m done pretending we didn’t lose something important along the way.

Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.
