bitarch.dev

The Technical Debt of AI Speed

We’re shipping features faster than ever, but it often comes at the cost of quality. AI speed is a new kind of debt.

Dhruba Baishya
Software Engineer
Feb 24, 2026
6 min read

AI helps us write code, but it also makes it easier to overlook fundamental principles. We’re fixing immediate problems, but we might be making it harder to maintain our systems long-term.

Review Fatigue Is Real

AI can generate hundreds of lines of code in seconds. This speed is great for developers, but it’s often a challenge for reviewers. Pull request sizes are increasing, and keeping up with the volume while maintaining quality is difficult.

When you see a large PR that seems to work, it’s tempting to approve it quickly. But if we don’t fully understand the generated code, bad patterns can start to accumulate. We need a way to manage this volume without compromising our standards.

It Works, But Is It Sustainable?

AI often prioritizes the most direct solution. It might give you a working function, but it doesn’t always consider the broader architecture. This can lead to logic being duplicated or components being too tightly coupled.

This can make it harder for others to understand the codebase. If the logic isn't clear or modular, contributing becomes difficult and risky. We need to ensure that the speed AI provides doesn't come at the expense of structural integrity.

The Shift to Agents and Skills

Lately, I’ve changed how I use AI. I’ve moved away from simple chat interfaces and started using tools like Claude Code and Gemini CLI to build custom agents and skills.

Instead of just asking for a feature, I define "skills"—sets of instructions and rules that an agent must follow. These skills capture our team's specific standards for different languages or frameworks. For example, I have a skill for FastAPI that enforces a specific folder structure—separating utils, models, schemas, services, clients, and routers—and requires ruff and mypy checks as part of the standard workflow to keep code quality consistent.
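To make that concrete, here is a sketch of what such a skill file might look like. The exact format depends on the tool (Claude Code, for instance, reads markdown-based skill files); the name, frontmatter, and rules below are illustrative, not a copy of our actual skill:

```markdown
---
name: fastapi-service
description: Standards for generating FastAPI services
---

# FastAPI Skill (illustrative)

## Folder structure
Place all code under `app/`, split into:
`utils/`, `models/`, `schemas/`, `services/`, `clients/`, `routers/`.
Routers must not contain business logic; put it in `services/`.

## Quality gates
After generating or editing code, always run:
- `ruff check app`
- `mypy app`
and fix any reported issues before presenting the result.
```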

The best part? These agents automate this entire setup. They don't just write the code; they initialize the structure and run the linting and type checks before I even see the output. These agents and skills can be shared across the entire team, standardizing how code is generated from the start and ensuring everyone follows the same architectural principles.
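As a rough sketch of what that automation amounts to, the snippet below scaffolds the folder layout from the FastAPI skill and then runs the quality gates. It is a minimal illustration of the steps, not the agent's actual implementation:

```python
from pathlib import Path
import shutil
import subprocess

# Folder names come from the FastAPI skill described above;
# everything else here is illustrative.
LAYERS = ["utils", "models", "schemas", "services", "clients", "routers"]

def scaffold(root: str = "app") -> None:
    """Create the package layout the skill enforces."""
    for layer in LAYERS:
        pkg = Path(root) / layer
        pkg.mkdir(parents=True, exist_ok=True)
        (pkg / "__init__.py").touch()  # make each folder an importable package

def run_quality_gates(root: str = "app") -> None:
    """Run the checks the skill requires, if the tools are installed."""
    for tool, args in [("ruff", ["check", root]), ("mypy", [root])]:
        if shutil.which(tool):
            subprocess.run([tool, *args], check=False)
        else:
            print(f"{tool} not installed; skipping")

scaffold()
run_quality_gates()
```

An agent driven by the skill effectively performs these steps before handing code back, so the layout and checks are in place from the first commit.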

AI-Driven Code Reviews

We’re also using these agents to help with code reviews, but it's not just about "reviewing PRs" in the traditional sense. We’re building reviewer agents that take the direct output of our coding agents.

These reviewer agents provide feedback based on specific patterns, rules, and principles—like SOLID principles—defined in a "reviewing skill" tailored to a particular technology or language. This setup automates the quality check before the code even reaches a human. It flags violations of our standards, catches common AI-generated anti-patterns, and suggests structural improvements.
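To give a flavor of the kind of mechanical rule such a reviewer can apply, here is a toy check in plain Python. A real reviewer agent applies richer, skill-defined rules; the thresholds and messages below are made up for illustration:

```python
import ast
import textwrap

# Toy version of one rule a reviewer agent might enforce: flag functions
# that are too long or undocumented. The threshold is illustrative.
MAX_FUNCTION_LINES = 30

def review(source: str) -> list[str]:
    """Return human-readable findings for a chunk of Python source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append(f"{node.name}: {length} lines, consider splitting")
            if ast.get_docstring(node) is None:
                findings.append(f"{node.name}: missing docstring")
    return findings

sample = textwrap.dedent("""
    def handler(x):
        return x * 2
""")
print(review(sample))  # → ['handler: missing docstring']
```

In practice the checks live in the reviewing skill as natural-language rules the agent interprets, but the effect is the same: consistent, automatic findings before a human ever looks at the diff.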

This reduces the burden on human reviewers. They can focus on high-level architecture and business logic while the agent handles the consistency checks. It helps us maintain a consistent standard, no matter how much code we're shipping.

Building with Discipline

Managing AI speed requires discipline. Here’s how we’re keeping our codebases clean:

  • Codify your standards into skills. Don't repeat your rules in every prompt. Build a skill that the agent can reuse.
  • Automate the first pass of reviews. Use agents to enforce the "boring" stuff so humans can focus on what matters.
  • Think in patterns, not just features. Guide the AI to follow your specific architectural choices.
  • Keep iterations small. Even with agents, smaller chunks are easier to validate and integrate.

AI speed is only an asset if you can control it. By moving toward custom agents and shared skills, we can keep our codebases healthy while shipping faster than ever.
