Companies are shipping code faster than ever. Velocity is through the roof. The code has never been worse.
AI-generated code has changed not only how software is created, but how teams are managed. What used to require training employees, recruiting talent, hiring expert consultants, and making careful buy-vs-build decisions now starts with “Please build me a new Salesforce”. The barrier to creating code has dropped almost to zero, but the barrier to creating good code hasn’t moved.
The slop is coming from every direction. Junior developers, non-engineers, and even founders are vibe coding entire features, sometimes entire products, and shipping to production without a code audit. It works until it doesn’t. Junior devs are producing code they don’t fully understand, taking a “the tests pass, ship it!” mentality. I’m even seeing senior engineers skip scrutiny because the sheer volume of AI-written output means less time reviewing and more time merging.
Features ship fast, but the maintenance burden is exploding, security vulnerabilities are slipping through, and the hidden cost of AI velocity keeps compounding.
Congrats, your CTO just became the Chief Slop Officer.
This is fundamentally a technical leadership problem. Someone has to own code quality in the age of AI. The job needs to shift from “build the thing” to “ensure the thing is built well”.
Here’s what’s working for us:
- Skills, skills, skills. We use Claude Code skill files that bake best practices, linting rules, testing, and security checks directly into the development workflow, updated constantly as new obstacles emerge.
- Automate everything. If a human has to remember to check it, it won’t get checked. We lean on GitHub Actions, automated code review tools, and CI pipelines that enforce quality before anything touches production.
- Human code review. Automation catches patterns, but it takes a human to catch bad decisions. A senior engineer reading the diff before it ships is still the single most effective quality gate we have. AI can write the code, but someone with context and judgment needs to sign off on it.
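To make the skills point concrete: here is a rough sketch of what such a skill file can look like. The skill name and every rule below are invented examples, and the exact file layout depends on your Claude Code setup; treat it as an illustration, not a template.

```markdown
---
name: code-review-standards
description: Apply our linting, testing, and security conventions when writing or reviewing code
---

# Code review standards

- Run the project linter before proposing a diff; fix warnings, not just errors.
- Every new function gets a test; every bug fix gets a regression test.
- Never hardcode secrets; flag any credential, token, or API key in a diff.
- Prefer the codebase's existing patterns over introducing new dependencies.
```

The point is that the model reads these rules on every relevant task, so quality expectations travel with the code instead of living in someone’s head.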
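And for the automation point, a minimal CI quality gate might look like the GitHub Actions workflow below. This is a sketch under assumptions: the job names are invented, and the specific tools (ruff, pytest, pip-audit) are stand-ins for whatever linter, test runner, and vulnerability scanner fit your stack.

```yaml
# Hypothetical pull-request quality gate; tool choices are illustrative.
name: quality-gate
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff pytest pip-audit
      - run: ruff check .        # lint: style and correctness rules
      - run: pytest              # tests must pass before merge
      - run: pip-audit           # fail on known-vulnerable dependencies
```

Paired with branch protection that requires this workflow plus a human approval, nothing reaches production on vibes alone.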
Speed without guardrails is just debt with better marketing. AI velocity is real, but it has to be pointed in the right direction. Every codebase needs a guardian.
What’s the biggest code quality issue you’re seeing from AI-assisted development?