"Why won’t my team use agents?"
Meditations on AI agent adoption
“Why won’t my team use coding agents?” I’ve heard that question so many times now. It always comes from an engineering leader who is frazzled, trying to keep up with the hype, has fully shifted to agentic coding, but cannot seem to get buy-in from anyone on the team. Seriously, I’ve had this conversation dozens of times.
It’s a good question though. Coding agents are capable of miracles, so why has adoption been so slow, and why are so many senior engineers resistant to it? This question has consumed most of my waking hours for the last three months. Below, some meditations.

I.
A common argument is that the coding agents themselves aren’t actually good. They are not actually capable of miracles. The people who have “figured it out” are lying or are posers who aren’t doing real engineering. A slightly less aggressive but more common posture is to admit that coding agents are good sometimes, maybe for greenfield or prototype projects, but they are still a form of cutting corners that will eventually come back to bite.
Both of these takes are wrong. Everyone on my team is regularly running 5+ coding agents at a time. We are moving at a ridiculous rate. And yet our codebase has more test coverage and more integration tests than any other codebase I’ve worked with, including when I was at Google (what, you think my GNN training scripts had tests?). I feel very comfortable where we sit on the velocity/safety curve, and I’d even make the case that we are safer per line of code or feature written than we were previously.
Take as given that the agents can do what it is claimed they can do. What then?
Another reasonable argument is that the agents are too hard to set up. The out of the box experience of using a coding agent is bad. The people who have coding agents figured out have invested a lot of time in tooling and learning. They figured out how to use git worktrees and figured out how to use web sessions and figured out how to write an AGENTS.md and figured out how to have an IV that is mainlining Twitter directly into their veins.
There is truth to this. Early coding agents were unwieldy and hard to control. Even though the models have gotten better, I still would hesitate to use these tools without any configuration. My team identified pretty early on that configuration was a bottleneck. It’s a pain. You have to learn an entirely new language, a new way of describing things, just to get your agent to follow instructions. Every time your agent doesn’t follow the right steps, you add more exclamation points to your config in the hopes that this time it will make a difference. We eventually figured it out, but it took us months. We built noriskillsets.dev and the nori CLI as ways to reduce friction, so other people don’t have to spend months. They work; try them out.
But still, difficult tooling in an immature ecosystem is not the whole truth. Nori tools are often necessary but not sufficient.
After working with dozens of teams, ranging in size from 1 to 100, I have two hypotheses about what is slowing adoption.
First: coding agents feel bad to use.
Second: when you use them effectively, the rest of the organization breaks.
II.
Why do coding agents feel bad to use?
My usual workflow involves running multiple agents at once. As many as I can fit into my computer’s RAM and my attention. Generally that’s about 5. I’ve been doing this for about 5 months now, so I’m very used to it.
Recently, I had to go back to using a single agent for a day, to experiment with how first time adopters experience the technology. And you know what? It fucking sucked!
When you have only a single coding agent, your natural inclination is to babysit it. That means you are sitting there, only partially engaged, watching text stream by. Maybe eventually you disconnect and start scrolling Twitter or something, and then you look up and half the day has gone by, you haven’t done anything, and you forgot what you were working on in the first place. The experience of using a single agent is one of being yanked around, like a kid on a rickety carnival ride. Moments of intense focus that punctuate long periods of boredom and disengagement.
I wrote about this before, in my post on Cathedral Builders:
In some sense, the point of a coding agent is to distance the engineer from the code they are writing. In some cases this is quite extreme — when I use a nori-powered agent, the coding agent is often chugging away autonomously for ten, twenty, thirty minutes before I check in on it. That’s great for me, because I’m not working on one extremely nasty, extremely difficult problem at a time. I’m working on like 5 mostly straightforward problems. But if you’re the kind of person whose primary workflow involves thinking super deeply about one thing, then even the simplest coding agent is designed to pull you out of that flow. A simple ‘hello world’ request from Claude Code takes ~5 seconds. Asking it to write ‘hello world’ to a python file takes ~20 seconds. In that time, you’re just sitting there twiddling your thumbs!
The coding agent is built to disconnect you from whatever you’re thinking about, which makes it uniquely bad for cathedral builders who need to think deeply about problems for long periods of time before they can make progress.
I strongly believe it is impossible to enter a flow state when managing a single coding agent. So the net result is that you are significantly less productive.
“Don’t babysit the agent!” says the true believer. “If watching every single action an agent takes is detrimental to productivity, let the agent go and do its own thing!” But now you’re right back to having to spend hours and hours figuring out how to set up a CLAUDE.md and SKILLS and a dozen other things. If you don’t do that — if you try to parallelize out of the box — you end up with unmaintainable slop.
I’ve been an eng manager for a few years now. In many ways, all of this is similar to how managers treat their team already. Junior engineers need guidance and handholding. Senior engineers get rubber stamp reviews on their PRs. I don’t mind when my TL wants to do a huge refactor (from a code perspective; I care a lot from the business angle). I’ve learned to let go of the code, make it someone else’s responsibility. That frees me up to do other things with my time. But that in turn can only happen because I trust my TLs.
I can only parallelize when I let go. I can only let go when I trust my agent not to screw everything up. And I can only trust my agent not to screw everything up when I’ve spent a ton of time configuring it. No wonder adoption is slow.
III.
So you’ve spent a while configuring things to exactly your specification. You trust the agent. You’ve learned to let go. You’re running 2, 5, 10, 20 agents at once. Things feel good. Then more of your team starts adopting this stuff, and instead of moving really fast suddenly things grind to a halt. What gives?
All of our best practices are designed for a world where code is expensive. First, the machines themselves were expensive. A server would take up an entire room and you’d have to carefully apportion compute time because running the machine cost the budget of a small nation. Then the machine operators were expensive. Software became the critical industry and demand for software engineers shot up.
What happens in a world where code is expensive?
We try to avoid working on the same files as teammates because we hate merge conflicts because figuring them out is hard because code is expensive.
We spend lots of time in alignment meetings so people don’t accidentally work on the same things because we don’t want to have to close PRs because code is expensive.
We do code review because we need to make sure more than one person on a team has a mental model of the code because the mental model is necessary to make any changes because code is expensive.
We build extremely lean MVPs, stripping out anything that can be stripped out, because we don’t want to build in the wrong direction because code is expensive.
People are rewarded for doing these things and doing them well. People build their careers around these intuitions and are paid handsomely for bringing them to their teams. People identify with these things. “I’m good at code review” is absolutely something people are proud of!
Code is no longer expensive. Code is cheap. Our intuitions are operating in the wrong evolutionary environment. So if you bring engineers who are aggressively using coding agents into a process built around mitigating the cost of code, you’re going to hit a wall. The engineers will literally just idle as they get bottlenecked by process. Things that are ostensibly meant to speed things up end up getting in the way.
IV.
So the dev process needs to change. In some places, radically. We’re still figuring out what exactly this means at Nori, but some principles that have helped:
Don’t wait for others. If you need to get something done that depends on someone else, just duplicate the work and finish the feature. Worst case scenario, you have to close one of two PRs. Since code is cheap, whatever; stepping on toes doesn’t matter.
Code review shouldn’t be required. Accept that the mental model for everyone is going to be a bit more hazy. It’s fine, because it’s much easier to just ask an AI what any given piece of code is doing.
Focus on product over implementation. Confusion about what is being built is the most common source of slowdown. Spend time making sure everyone understands the user stories that define your product. All of your engineering meetings should be laser focused on product.
Let people build things. An MVP no longer has to be minimal. If you do the user stories thing right, you’ll get way more mileage out of letting your team go wild.
In general, we try to aggressively remove bottlenecks to parallelization. Increasingly, those bottlenecks are not on the dev side.
As a concrete example, I said that understanding user stories is critical for moving fast. User stories come from the product folks, not the eng folks. So much of product direction is defined by slow A/B tests and consumer validation studies that can only be justified when engineering epics take weeks or months. When your engineering team can ship daily, your product cycles simply need to be faster. Way faster. So far, we’ve found success by using AI to dramatically increase and automate the scale of outreach and feedback collection in our product motion. But that just bumps the bottleneck to somewhere else. When the product team can cycle quickly, the sales team ends up being the bottleneck. And so on.
This process is exactly like debugging a slow database request or api call. Find the slowest part of the waterfall diagram, aggressively optimize it, and then repeat. I think every part of every organization is going to radically change in the next year; how can it not? The pressure to move faster will come naturally as individual teams start adopting AI tooling and then bump against the teams that haven’t yet. That in turn means that best practices across the board need to be rethought or thrown out. Exciting, because it’s a lot of opportunity to discover what the new best practices are. Scary, because upheaval is always scary.
V.
I opened by positing that teams are struggling to adopt AI. I think that’s true, but it’s also kinda a lie. People are slow to adopt AI compared to what? If you zoom out a bit, adoption of AI workflows is happening at an insane clip. Claude Code only hit general release in May of last year — only 8 months ago. Maybe ChatGPT had a more meteoric rise? I can’t think of other products that have this steep an adoption curve, certainly not dev tooling.
But everyone feels behind anyway.
There is a massive amount of FOMO in the market right now. Everyone keeps hearing about these crazy productivity gains. They can’t keep up with Twitter. And even if they could, they wouldn’t be able to identify what works and what’s garbage (the vast majority of it is garbage). I think eventually all that chaos will stabilize as best practices harden. But that doesn’t mean companies are safe to wait it out — just the opposite. Small, nimble teams that remodel their business around AI are gunning for big established players. In this rare instance, I think the FOMO is well placed. So if you’re one of those executives or leaders asking “why won’t my team use agents?” this post will hopefully be a wakeup call. Being unable to adapt to a world with AI is now your company’s biggest hair-on-fire existential problem.
PS: I recently went back and reread my old ‘meditations on AI’ post. I stand by basically everything I wrote back then, and think understanding that post is also highly relevant to what’s happening now. But instead of quoting the whole thing, I’ll just link it here.

