How to Write a Compelling RFC Motivation Section
The motivation section is the most important part of an RFC — here's a practical framework for writing one that gets people to care.
Every RFC has a motivation section, and in most of them, it's the weakest part. Engineers tend to rush past it — the problem feels obvious, the solution is what they're excited about, and they want to get to the design. But a weak motivation section sinks proposals more often than a weak design does.
Here's why: reviewers who aren't convinced the problem is worth solving will never engage seriously with your solution. They'll skim the design, leave a vague "looks fine" comment, and move on. Or they'll push back on the entire premise, and you'll spend the review cycle debating whether to do the work at all instead of discussing how to do it well.
A strong motivation section does the opposite. It makes the problem vivid and urgent, gives reviewers the context they need to evaluate your solution, and creates alignment before the design discussion even begins.
The Framework
Every motivation section should answer three questions:
- What's the problem? Describe the current state and what's wrong with it. Be specific.
- Who does it affect? Identify the people, teams, or systems that feel the pain.
- What happens if we do nothing? Explain the trajectory. Does the problem get worse? Does it stay the same? Is there a deadline or forcing function?
Let's see this in practice.
Good vs. Bad: An Example
Suppose you want to propose moving your team's CI pipeline from Jenkins to GitHub Actions.
Bad motivation:
Our CI/CD pipeline is outdated and slow. Jenkins is hard to maintain and doesn't integrate well with our modern development workflow. Moving to GitHub Actions would improve developer experience and reduce maintenance burden.
What's wrong with this? Everything is vague. "Outdated" compared to what? "Slow" — how slow? "Hard to maintain" — for whom, and how much time does it take? "Modern development workflow" — what does that mean concretely? A reader can't evaluate any of these claims.
Good motivation:
Our Jenkins instance requires an average of 6 hours per month of dedicated maintenance from the platform team — patching plugins, debugging flaky master-agent connections, and managing credentials. In Q4 alone, we had three incidents where broken Jenkins plugins blocked deployments for the entire engineering org, with the longest outage lasting 4 hours.
Build times on Jenkins average 18 minutes for our main repository, with p95 at 31 minutes. Developers report that the slow feedback loop is their top frustration in our quarterly eng survey (cited by 34 of 52 respondents).
Jenkins job configurations live outside our repositories in XML files on the Jenkins server. This means CI changes can't be reviewed in PRs, can't be tested locally, and can't be rolled back through git. When we onboarded the three new engineers in January, each spent roughly two days learning Jenkins pipeline syntax — time that would have been unnecessary with a YAML-based system that lives in the repo.
We have 14 active repositories. Migrating them is non-trivial, which is why we've deferred this decision twice already. But the maintenance cost is growing: our Jenkins version is two major versions behind, and the next upgrade requires Java 17, which will break three plugins we depend on.
Notice the difference. The good version has numbers (6 hours/month, 18-minute average builds, 3 incidents in Q4). It names who's affected (platform team, developers, new hires). It explains the trajectory (growing maintenance cost, forced Java upgrade). A reviewer reading this knows exactly what problem they're evaluating and can form their own opinion about whether it's worth solving.
Technique 1: Quantify the Pain
Vague problems get vague priority. Quantified problems get resources.
You don't need perfect data. Rough estimates are fine, as long as you're transparent about how you arrived at them. "Based on our last three incidents, we estimate..." is honest and useful. "Builds are slow" is neither.
Things worth quantifying:
- Time lost. How many engineer-hours per week/month does this problem cost? If builds take 18 minutes and each developer runs 6 builds per day, that's 1.8 hours of waiting per developer per day. For a team of 20, that's 36 hours of waiting daily.
- Incident frequency. How often does this problem cause outages, pages, or blocked deployments?
- Customer impact. Are there support tickets, churn signals, or complaints tied to this problem?
- Growth trajectory. Is the problem getting worse? At what rate?
You don't need all of these. Pick the metrics that make the strongest case for your specific problem.
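The time-lost arithmetic above is worth doing explicitly in your RFC, since reviewers will check it. A quick back-of-envelope sketch, using the illustrative figures from the Jenkins example (all inputs are hypothetical):

```python
# Back-of-envelope estimate of engineer-hours lost to build waits.
# All input numbers are illustrative, taken from the Jenkins example.
build_minutes = 18          # average build duration
builds_per_dev_per_day = 6  # builds a typical developer runs daily
team_size = 20

hours_per_dev_per_day = build_minutes * builds_per_dev_per_day / 60
team_hours_per_day = hours_per_dev_per_day * team_size

print(f"{hours_per_dev_per_day:.1f} hours waiting per developer per day")
print(f"{team_hours_per_day:.0f} engineer-hours waiting per day for the team")
# → 1.8 hours waiting per developer per day
# → 36 engineer-hours waiting per day for the team
```

Showing the calculation, not just the result, lets reviewers challenge your assumptions (maybe developers run 3 builds a day, not 6) without dismissing the whole estimate.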
Technique 2: Tell the Story of Doing Nothing
The "do nothing" scenario is underused and powerful. Most motivation sections implicitly assume that action is required, but spelling out what inaction looks like makes the case more compelling.
If we don't migrate off Jenkins in the next six months, we'll need to perform the Java 17 upgrade anyway to stay on a supported version. That upgrade will break three plugins (Blue Ocean, Pipeline Utility Steps, and the Slack notification plugin), each of which will need to be replaced or forked. The platform team estimates this work at 3-4 weeks. At that point, we'll have invested a month of work just to stay on Jenkins — work that doesn't improve build times, developer experience, or reliability.
This reframes the decision. It's no longer "should we migrate?" but "should we invest a month maintaining the current system or a month moving to a better one?"
Technique 3: Ground It in Real Incidents
Abstract problems feel theoretical. Specific incidents feel urgent.
On November 12, the Credentials Binding plugin updated automatically and broke all pipeline builds that use SSH keys. The platform team didn't identify the root cause for 3 hours because Jenkins' error messages pointed to the SSH configuration, not the plugin. During that window, 4 teams were unable to deploy, including the payments team, which had a time-sensitive regulatory fix queued.
One concrete story communicates more than a page of abstract arguments. Reviewers can picture this happening. They can imagine being on call when it happens again. That emotional connection matters — decisions aren't made purely on logic.
Don't fabricate incidents. Use real ones. If you don't have specific incidents, that might be a signal that the problem isn't as severe as you think — which is also valuable information.
Technique 4: Separate Problem from Solution
The most common structural mistake in motivation sections is smuggling the solution into the problem statement. Watch for sentences like:
- "We need to migrate to GitHub Actions because..."
- "The problem is that we're not using a modern CI system..."
- "Jenkins lacks the features we need, such as..."
These frame the problem in terms of the proposed solution. The motivation should describe the problem in a way that's solution-agnostic. Multiple solutions should plausibly address the problem you've described.
Solution-smuggling: "Jenkins doesn't support YAML-based pipeline definitions in the repository."
Problem-focused: "Our CI configuration lives outside the repository, which means changes can't be code-reviewed, tested locally, or rolled back through version control."
The second framing is better because it describes a real problem that multiple solutions could address (GitHub Actions, GitLab CI, CircleCI, or even Jenkins with its Jenkinsfile approach). This lets reviewers engage with the problem on its own terms before evaluating your specific solution.
Technique 5: Name the Audience
Who feels this pain? Be explicit.
- Platform team: 6 hours/month on Jenkins maintenance, on-call for CI incidents.
- All developers (52 engineers): 18-minute average build times, can't review or test CI changes locally.
- New hires (8 in the last quarter): ~2 days of onboarding time spent learning Jenkins-specific tooling.
When you name the audience, reviewers can self-identify. The platform team lead reads this and thinks "yes, this is exactly my experience." A new hire reads it and remembers their own frustrating first week. An engineering director reads it and sees 52 engineers waiting for slow builds.
Common Mistakes
Assuming shared context. You've been thinking about this problem for weeks. Your reviewers haven't. Spell out things that feel obvious to you — they probably aren't obvious to everyone who'll read the RFC.
Being too abstract. "Developer productivity is impacted" is abstract. "Developers wait an average of 18 minutes per build, running 6 builds per day" is concrete. Always prefer concrete.
Jumping to the solution. If your motivation section mentions your proposed solution by name, rewrite it. The motivation stands on its own.
Not quantifying. "Slow builds" is an opinion. "18-minute average, 31-minute p95" is a fact. Facts are harder to argue with.
Writing too much. A motivation section is typically half a page to a full page. If it's three pages, you're probably including design details that belong in a different section. State the problem, quantify it, explain the trajectory, and stop.
The Litmus Test
After writing your motivation section, try this: show it to someone who doesn't know about your proposed solution. Ask them two questions:
- Do you understand the problem?
- Does it seem worth solving?
If they say yes to both, your motivation section is doing its job. If they're confused or unconvinced, revise it before you spend time on the design. A brilliant solution to a poorly articulated problem doesn't get built.