Friction Was a Feature
AI gave your team more speed. What it revealed is that you didn't have an engineering bottleneck — you had an idea problem.
Your team adopted AI coding tools. Shipping got faster. And now, a few months in, something feels off — there's more output than ever, but the outcomes aren't moving the way you expected. The question nobody planned to answer is sitting in the middle of every sprint review: if the bottleneck wasn't engineering capacity, what was it?
An observation circulating this week in engineering communities cuts to the core of it. The CEO of Anoma put it plainly: "your org rarely has good ideas. ideas being expensive to implement was actually helping." That line has resonated across the industry because it names something most leaders feel but haven't said out loud.
Implementation cost was doing hidden work. It was filtering ideas.
When Constraints Were Doing Your Strategy For You
Every team thinks it has an execution problem. The roadmap is too long, the sprints are too slow, the backlog is too full. So when AI tools cut implementation time significantly, the expectation is that output doubles and outcomes follow.
What actually happens is more revealing. Output increases. Outcomes don't keep pace. And leadership finds itself facing a question it wasn't prepared for: if we can build anything, what should we actually build?
The friction that made implementation expensive wasn't waste — it was a forcing function. When a feature took six engineer-weeks to build, you had to be fairly confident it was worth building. Bad ideas got killed not because someone identified them as bad, but because they weren't worth the cost. The business case had to exist before the code did.
Now that constraint is gone. And the strategy conversation that constraint was forcing — the one where you have to justify the work before you start it — is suddenly optional. Optional, in most organizations, means skipped.
This is what "we're bottlenecked by engineering" was often masking: a leadership layer that hadn't done the hard work of knowing what to build and why. Speed revealed the gap. That work never shows up on a roadmap, but it's where most of the value is created or lost.
The Asymmetry Problem
There's a second dynamic making this harder to see. AI amplifies whatever it's pointed at, and that includes the gap between your most and least motivated engineers.
The motivated engineer uses AI to think bigger, move faster on hard problems, and ship things that would have taken twice as long. They get genuinely more capable. They compound.
The disengaged engineer uses AI to close tickets with less energy. The code looks done. It passes review. The outcome is hollow. What you've actually bought is a faster accumulation of things that don't matter — or worse, things that break in production because nobody understood them well enough to catch the edge cases.
In our experience, most engineering teams have two or three people who are genuinely driving outcomes, and a larger cohort doing their 9-to-5 competently. That ratio was manageable when implementation cost was high, because the motivated engineers' judgment was embedded in everything that shipped — they were the ones who pushed through the friction. Remove the friction, and the ratio matters a lot more. The motivated engineers are now buried under code review they didn't ask for. Some of them will leave.
You need to know your team's actual motivation profile before you assume the productivity gains are real. The tool doesn't change the person. It changes what that person can produce.
Cognitive Debt Is Harder to Measure Than Technical Debt
There's a third dynamic, quieter and more dangerous. We've started calling it cognitive debt.
For decades, shipping software implied understanding it. The act of building something forced you to internalize how it worked. When an engineer wrote the code, they understood the code. When they reviewed a PR, they engaged with it enough to catch real problems. That shared understanding was invisible infrastructure — it didn't show up in the sprint metrics, but it was what made debugging possible, onboarding tractable, and the system maintainable when the original authors moved on.
AI severs that relationship. Code can be generated, reviewed at a surface level, merged, and deployed by people who don't fully understand what they've approved. The system runs. Until it doesn't. And when it fails, the knowledge to diagnose it doesn't exist on the team.
Technical debt shows up in slow velocity and rising maintenance cost. Cognitive debt shows up as catastrophic confusion when something breaks. The first is annoying. The second is career-defining, in the wrong direction.
This isn't an argument against using AI. It's an argument for being clear-eyed about what AI changes about the work. The code review process, the knowledge-sharing norms, the expectation that engineers understand what they merge — those weren't bureaucratic overhead. Some of them were doing essential work.
What Actually Constrains Outcomes
Across our engagements, the real constraints on engineering outcomes are almost never what leadership thinks they are. Teams that believe they have a shipping speed problem often have a prioritization problem. Teams that think they have a prioritization problem often have a clarity problem — nobody has made the hard calls about what the product actually is and who it's actually for.
AI didn't create this. It exposed it by removing the constraint that was papering over it.
The companies getting real value from AI adoption share a specific characteristic: strong opinions about what they're building before they build it. The leadership has done the hard work of defining the outcome, not just the feature. The engineering team understands not just the spec but the reasoning behind it. When AI-generated code goes wrong, they know why — because they understood the goal well enough to catch the divergence.
Speed is a multiplier. It amplifies whatever you point it at. Point it at a well-defined problem with a motivated team and clear success criteria, and the gains are real. Point it at organizational confusion, and you get a faster accumulation of things that don't matter and won't be maintained.
The Constraint Worth Solving
If your AI adoption isn't producing the outcomes you expected, resist the instinct to look at the tools. Look at what the tools are working on.
Does your team understand why each piece of work matters? Can they articulate what success looks like before they start? Do you have the leadership layer that translates business intent into technical direction — not just feature requests, but the reasoning that lets engineers make good decisions when the spec runs out?
That last question is where most organizations are actually constrained. Not engineering capacity. Leadership clarity.
A fractional CTO doesn't just add technical horsepower. The more useful function is creating the conditions where engineering effort is pointed at real problems — where the idea pipeline is strong enough that faster execution produces more value, not just more output. That's the constraint worth solving, and it's not a tooling problem.
If you're finding that AI tools gave you speed without corresponding outcomes, you're not alone. And the solution probably starts closer to the top of the org than the bottom of the stack. Let's talk about what we're seeing — and whether fractional CTO engagement is the right tool for your situation.