How AI Tools Are Changing Technical Decision-Making
AI coding assistants write code fast but ignore your team's architectural context — here's how RFCs can bridge that gap.
Last month, a developer on a team we work with asked Claude to scaffold a new microservice. Claude produced clean, well-structured Go code with a repository pattern, dependency injection, and comprehensive error handling. One problem: the team had decided six months ago, via an RFC, to use a modular monolith for all new services until they hit a specific scaling threshold. The AI didn't know. It couldn't know.
This isn't a failure of AI. It's a context gap — and it's one of the most interesting problems in engineering tooling right now.
The Context Problem
AI coding assistants like Cursor, GitHub Copilot, and Claude Code are remarkably good at writing code. They understand syntax, design patterns, testing conventions, and framework idioms. What they don't understand is your architecture.
Every engineering team accumulates a body of decisions: why you chose PostgreSQL over DynamoDB, why your services communicate via gRPC instead of REST, why you opted for event sourcing in the billing domain but CRUD everywhere else. These decisions live in RFCs, Slack threads, meeting notes, and — most often — in the heads of senior engineers who were there when the choice was made.
When an AI assistant generates code, it draws from its training data and the immediate file context. It doesn't know that your team evaluated three message brokers and chose RabbitMQ for specific latency reasons. It doesn't know that your last RFC explicitly prohibited adding new REST endpoints to the legacy API gateway. So it suggests exactly the patterns you've decided against, and junior engineers — who also don't have full context — accept the suggestions.
This creates a subtle but real problem: AI accelerates code production but can silently erode architectural coherence.
Making Decisions Machine-Readable
The solution isn't to stop using AI tools. They're too productive to abandon. The solution is to make your architectural decisions accessible to AI in a structured way.
Several approaches are emerging:
CLAUDE.md and similar context files. Anthropic's Claude Code reads a CLAUDE.md file from your repository root to understand project conventions. Teams are using this to encode high-level architectural rules: "All new services use the modular monolith pattern," "Database access goes through the repository layer," "Error handling uses the oops library, never bare errors.New()." This is the simplest approach — a flat file committed to your repo that both humans and AI can read.
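A minimal CLAUDE.md along these lines might read as follows. The specific rules are illustrative, echoing the examples above, not a prescribed format:

```markdown
# Project conventions

## Architecture
- All new services use the modular monolith pattern; do not scaffold standalone microservices.
- Do not add new REST endpoints to the legacy API gateway.

## Code
- Database access goes through the repository layer.
- Error handling uses the oops library, never bare errors.New().
```

Because it is just a committed file, it versions alongside the code and shows up in review like any other change.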
Cursor rules and .cursorrules files. Cursor supports project-level rule files that guide its suggestions. You can specify patterns to prefer, libraries to use, and conventions to follow. These work well for coding conventions but are less suited for capturing the reasoning behind architectural decisions.
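A .cursorrules file covering similar ground is equally terse. The content here is illustrative, assuming a Go codebase like the one in the opening anecdote:

```text
You are working in a Go modular monolith.
Route all data access through the repository layer.
Use the oops library for error handling; never bare errors.New().
Do not suggest new REST endpoints on the legacy API gateway.
```

Note what's missing: the file says what to do, not why. The reasoning still lives somewhere else.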
MCP (Model Context Protocol) servers. This is where things get interesting. MCP is a protocol that lets AI tools query external data sources as part of their reasoning process. Instead of stuffing all your context into a single flat file, an MCP server can expose your entire RFC history as a queryable knowledge base.
How RFC-Aware AI Actually Works
Here's a concrete scenario. An engineer is working in Cursor or Claude Code and asks: "Add a caching layer for the product catalog endpoint."
Without RFC context, the AI might suggest Redis with a standard cache-aside pattern. Reasonable, generic, and possibly wrong for your team.
With an MCP server connected to your RFC repository, the AI can query: "What has this team decided about caching?" It finds RFC-007, written four months ago, which evaluated Redis, Memcached, and application-level caching. The RFC concluded that the team would use application-level caching with an LRU eviction policy for read-heavy endpoints, reserving Redis for session management only. The reasoning: the team wanted to minimize infrastructure dependencies during their current growth phase.
Now the AI generates code that uses your team's chosen caching approach, references the RFC in a code comment, and follows the specific interface patterns your team established. The suggestion is aligned with a decision that was made months ago by people who aren't in the room.
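The retrieval step in this scenario can be sketched as a keyword search over an RFC index, which is the kind of lookup an MCP server would expose as a tool. Everything below (the `RFC` record shape, the index, `search_rfcs`) is a simplified illustration, not DesignDoc's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class RFC:
    number: str
    title: str
    decision: str
    keywords: set[str] = field(default_factory=set)


# Hypothetical in-memory index; a real MCP server would back this
# with the team's actual RFC store and richer retrieval.
RFC_INDEX = [
    RFC(
        number="RFC-007",
        title="Caching strategy for read-heavy endpoints",
        decision=(
            "Use application-level caching with LRU eviction for "
            "read-heavy endpoints; reserve Redis for session management."
        ),
        keywords={"cache", "caching", "redis", "lru"},
    ),
]


def search_rfcs(query: str) -> list[RFC]:
    """Return RFCs whose keywords overlap the query's terms."""
    terms = {word.strip("?.,").lower() for word in query.split()}
    return [rfc for rfc in RFC_INDEX if rfc.keywords & terms]


matches = search_rfcs("Add a caching layer for the product catalog endpoint")
print(matches[0].number)  # → RFC-007
```

The point is not the search algorithm; it is that the decision, once stored in a structured form, becomes something a tool can hand back to the AI at exactly the moment it matters.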
This is what DesignDoc's MCP integration does. When your RFCs are stored in DesignDoc, the MCP server lets any compatible AI tool search and retrieve relevant decisions in real time. The AI doesn't just know your code conventions — it knows why your team made specific architectural choices.
What This Looks Like in Practice
The teams we've seen adopt this pattern report a few consistent outcomes:
Fewer "why did we build it this way?" conversations. When the AI cites the relevant RFC in its suggestions, the reasoning chain is visible. A junior engineer doesn't just see the code pattern — they see the decision that led to it.
Faster onboarding. New engineers working with RFC-aware AI tools effectively have an architectural advisor built into their editor. Instead of spending two weeks reading old documents to understand the system, they get contextual guidance as they write code.

Less relitigating of settled decisions. When someone proposes a change that contradicts a prior RFC, the AI can surface the conflict. This isn't about preventing change — prior decisions can and should be revisited. But the revisiting should be intentional, not accidental.
The Limits of AI in Decision-Making
It would be irresponsible to discuss this topic without being direct about the limits.
AI tools are not a substitute for human judgment in architectural decisions. They can retrieve context, suggest patterns, and flag inconsistencies, but they cannot weigh the organizational, political, and strategic factors that shape technical choices. An AI doesn't know that your CTO is pushing for a platform rewrite, or that your team is about to double in size, or that a key vendor just changed their pricing model.
The decisions themselves still need to be made by humans who understand the full picture. What AI can do — and do well — is ensure that those decisions, once made and documented, are consistently applied across the codebase. It's the difference between AI as decision-maker (risky, premature) and AI as decision-enforcer (practical, valuable today).
There's also a risk of over-reliance. If engineers start treating AI suggestions as authoritative because "it checked the RFCs," they may stop developing their own architectural judgment. The goal is augmentation, not replacement. An engineer should understand why a decision was made, not just follow the AI's citation blindly.
Where This Is Heading
The trajectory is clear. Within the next couple of years, the standard engineering workflow will include AI tools that are deeply aware of organizational context — not just code style, but architectural decisions, incident postmortems, and operational constraints.
The teams that will benefit most are the ones building that context base now. Every RFC you write today is training data for your future AI-assisted development workflow. Not in the machine learning sense — in the practical sense that a well-structured RFC with clear reasoning and explicit decisions is exactly what an MCP server needs to provide useful context.
The immediate steps are straightforward:
- Write RFCs with machine readability in mind. Clear titles, explicit decision statements, structured sections. Not because AI requires it, but because it makes the documents better for humans too.
- Use context files (CLAUDE.md, .cursorrules) for top-level conventions. These are the quick wins. A 50-line context file prevents hundreds of off-pattern suggestions.
- Explore MCP integrations for deeper context. As your RFC library grows, flat files aren't enough. A queryable interface lets AI tools find the right decision for the right moment.
- Keep humans in the loop for decisions. AI should inform and enforce, not decide. The RFC process — with human authors, human reviewers, and human approval — remains the right mechanism for architectural choices.
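The first step, structuring RFCs for machines, pays off quickly in tooling. As a sketch, assuming a simple markdown layout with a `## Decision` section (the layout, the sample RFC text, and the helper are all illustrative), a few lines can pull out the explicit decision statement that a tool like an MCP server would serve back:

```python
import re

# Hypothetical RFC layout: markdown with Context / Decision / Consequences sections.
SAMPLE_RFC = """\
# RFC-007: Caching strategy for read-heavy endpoints

## Context
Read-heavy endpoints are slowing down under load.

## Decision
Use application-level caching with an LRU eviction policy for
read-heavy endpoints. Reserve Redis for session management only.

## Consequences
No new infrastructure dependency during the current growth phase.
"""


def extract_section(rfc_text: str, heading: str) -> str:
    """Return the body of a `## <heading>` section, up to the next heading."""
    pattern = rf"^## {re.escape(heading)}\n(.*?)(?=^## |\Z)"
    match = re.search(pattern, rfc_text, re.MULTILINE | re.DOTALL)
    return match.group(1).strip() if match else ""


decision = extract_section(SAMPLE_RFC, "Decision")
print(decision)  # prints the decision paragraph
```

An RFC without an explicit decision section forces every reader, human or machine, to reconstruct the conclusion from prose. A structured one makes the conclusion a single lookup.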
The gap between "AI writes code" and "AI writes the right code for your team" is made of context. RFCs are that context, and the tooling to bridge the gap is here now.
Stop losing decisions in Slack and Docs
DesignDoc gives every RFC a structured workflow, inline reviews, and a permanent home.
Get Started