How we used AI to streamline Product Requirements Document creation for an MCP server proof of concept
We all know that Product Requirements Documents (PRDs) are essential for any software project - they define expectations, technical specs, and success criteria. But let's be honest: creating comprehensive PRDs can be pretty time-consuming, especially when you're trying to make sure you haven't missed any technical details or edge cases.
This blog post walks through our real-world experience creating a PRD for an MCP (Model Context Protocol) dice roll server - a proof-of-concept project built to validate our organisation's adoption of MCP technology. What made this interesting was how we used AI assistance to speed up the whole process while keeping the quality high.
The bigger picture: We weren't just creating documentation for the sake of it. This PRD would serve as the foundation for AI-driven development, where detailed requirements enable AI agents to break down epics into tasks, generate implementation code, and automate infrastructure deployment. By investing in solid, structured requirements upfront, we create a blueprint that AI agents can execute with minimal human intervention.
Our goal seemed simple enough: create a lightweight MCP server that exposes a single authenticated tool called `roll_dice`, returning a cryptographically secure random integer between 1 and 6 inclusive (a minimal sketch of the tool follows the list below). But as anyone who's worked on "simple" projects knows, the devil's in the details:
- Authentication and authorisation patterns
- AWS deployment architecture
- Security compliance requirements
- Testing strategies
- Infrastructure automation
- Monitoring and observability
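Before getting into those, here's roughly what the core of the server looks like. This is a minimal sketch built on the FastMCP class bundled with the MCP Python SDK (more on that naming collision later); the OAuth integration from the mcp_auth_github module is deliberately left out, and the server name is illustrative:

```python
import secrets

from mcp.server.fastmcp import FastMCP

# "dice" is an illustrative server name, not the one from our PRD.
mcp = FastMCP("dice")

@mcp.tool()
def roll_dice() -> int:
    """Roll a six-sided die using a cryptographically secure RNG."""
    # secrets.randbelow(6) returns 0-5; shift into the 1-6 range.
    return secrets.randbelow(6) + 1

if __name__ == "__main__":
    mcp.run()
```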
We decided to use AI assistance from the start to make sure we captured all the necessary requirements and followed best practices while keeping development moving.
We kicked off the process with a simple, direct prompt to establish our foundation:
Human Prompt:
"Help me create a PRD for the following.
Want a MCP server created using the Python SDK. The MCP server must implement OAuth security from the src/py/mcp_auth_github module (exists in local repo). The MCP server will return a random dice roll as an integer between 1 and 6 inclusive."
What we got back was pretty neat - the AI generated a comprehensive PRD structure covering objectives, scope, technical requirements, and deployment considerations. It gave us a solid starting point that would have taken hours to create manually.
The reality check: Like any first draft, it wasn't quite right. The AI included some requirements that didn't fit our use case and missed others that were essential. We spent some time refining through follow-up conversations, gradually shaping it into something that matched our actual needs.
Once we had our initial draft, we did something that turned out to be pretty clever - we cleared the LLM's context completely. This gave the model a fresh perspective, free of any bias from our earlier conversation. Then we asked the AI to take an objective look at what we'd created:
Human Prompt:
"Review the PRD and critique it. Look to make improvements to the document structure, and analyse what the document contains and look to address gaps in functionality or inconsistencies."
The results were eye-opening. The AI spotted gaps we'd completely missed:
- Technical architecture diagrams
- API specification/contract details
- Monitoring/alerting requirements
- Rate limiting specifications
- Request/response schemas
- Error code definitions
This fresh review highlighted 15+ specific improvements needed, showing us just how much we'd overlooked in our initial approach.
Human Prompt:
"Drop any timeline reference from the PRD."
Part of the generated PRD included suggested timelines, which didn't make sense for our situation. We weren't about to commit to dates we couldn't control, so we asked the AI to remove them. What was cool was how efficiently it handled this - removing all timeline sections and renumbering everything else automatically. It's a nice example of how specific prompts can save you from tedious manual editing.
The AI gave us a list of recommendations, and one really caught our attention:
10. Add missing technical details: Lambda memory/timeout settings, API Gateway configuration, environment segregation strategy, etc.
This was exactly what we needed for solid implementation guidance, so we decided to pursue it.
Human Prompt:
"draft recommendation 10"
What happened next was pretty impressive - the AI churned out a comprehensive "Technical Implementation Details" section covering:
- AWS Lambda configuration (runtime, memory, timeout, environment variables)
- API Gateway setup (CORS, throttling, routes)
- Monitoring strategy (CloudWatch metrics, alarms, logging)
- Health check specifications
- Environment strategy (dev/staging/prod)
- Token management approaches
This single prompt generated some solid technical sections that would have taken us a while to complete on our own - and we probably wouldn't have been as thorough. We reviewed everything and tweaked it to match our specific requirements, but the heavy lifting was done.
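To give a sense of the level of detail, the Lambda configuration came down to concrete numbers. Purely as an illustration - the function name and values here are hypothetical, the real ones live in the PRD, and in practice they'd be applied through Terraform rather than ad hoc API calls - those settings map to something like this in boto3:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and values - the real numbers live in the PRD.
lambda_client.update_function_configuration(
    FunctionName="mcp-dice-roll",
    MemorySize=256,   # MB
    Timeout=30,       # seconds
    Environment={"Variables": {"LOG_LEVEL": "INFO"}},
)
```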
Here's where the PRD bumped up against organisational reality. During our review, we noticed it assumed we'd be creating and maintaining our own WAF. But that's not how our team works - we rely on existing WAFs provided by our Infrastructure and Security teams.
Human Prompt:
"We will not create or maintain the WAF and its allowlist. We need to work with Infra and Security to identify the appropriate existing WAF to leverage"
The AI handled this really well - it updated multiple sections of the document to:
- Remove references to creating new WAF resources
- Add coordination tasks with Infrastructure teams
- Update security sections to reflect our integration approach
- Modify pre-deployment activities
What was nice about this was how the AI understood the broader context and made consistent changes throughout the entire document, not just in one section.
A couple of quick organisational tweaks showed how responsive the AI was to our specific context:
Human Prompt:
"use lhv.com as base domain"
The AI automatically updated domain references throughout the document, changing examples from `dice.mcp.example.com` to `dice.mcp.lhv.com`. Nice attention to organisational context.
Human Prompt:
"MCP URL should be at
/mcp
"
Quick, precise update to API endpoint specifications - no detailed explanation needed.
As we dug deeper into the technical details, we discovered something that changed our approach:
Human Prompt:
"There are two 'FastMCP' implementations - one that is included in the Python SDK, and another standalone artifact intended for use with FastAPI..."
This was a game-changer. The AI created a comprehensive investigation section that covered:
- Both implementation options
- OAuth compatibility implications
- Risk assessments for each approach
- Updated dependencies and scope sections
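For anyone who hasn't run into this, the two classes even share a name; assuming both packages are installed, the difference is visible only in the import path:

```python
# The SDK-bundled implementation ships with the official MCP Python SDK.
from mcp.server.fastmcp import FastMCP as SdkFastMCP

# The standalone implementation is the separate `fastmcp` package on PyPI,
# with its own release cadence and framework integrations.
from fastmcp import FastMCP as StandaloneFastMCP
```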
The AI generated a complete initial PRD structure from a simple prompt, giving us a solid foundation that would normally take hours to create manually.
By clearing the AI's context and asking for a fresh review, we got an unbiased critique that spotted 15+ missing elements we would have overlooked. Pretty clever approach.
Complex technical sections that usually require significant research time were generated quickly with solid detail levels.
When requirements changed, the AI kept everything consistent across multiple sections, avoiding the contradictory information that often creeps into documents.
The AI suggested industry-standard approaches for testing, monitoring, and security that might not be immediately obvious to all team members.
The resulting PRD followed clear formatting standards with logical section organisation and proper cross-references.
Our AI-assisted process delivered a comprehensive 15-section PRD that covered all the bases:
- Business Context: Clear problem statement and success metrics
- Technical Architecture: Detailed AWS deployment specifications
- Security Requirements: OAuth integration and WAF configuration
- Test Strategy: Unit, integration, and security testing approaches
- Investigation Areas: Known unknowns requiring research
- Risk Assessment: Identified risks with mitigation strategies
- Operational Considerations: Monitoring, alerting, and incident response
The document works as a complete blueprint for implementation, needing minimal additional research or clarification.
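To give a flavour of what the test strategy means in practice, here's a hypothetical unit test for the dice tool - the module and function names are assumed for illustration, not taken from the actual implementation:

```python
# Hypothetical import - the real module name comes from the implementation.
from dice_server import roll_dice

def test_roll_dice_stays_in_range():
    # Every roll must land in the 1-6 range the PRD specifies.
    assert all(roll_dice() in range(1, 7) for _ in range(1000))

def test_roll_dice_covers_all_faces():
    # Over 1000 rolls, all six faces should appear with overwhelming probability.
    assert {roll_dice() for _ in range(1000)} == set(range(1, 7))
```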
After your initial creation, clear the AI's context and ask it to review your draft with fresh eyes. This helps catch gaps and issues that might slip by due to conversation bias.
Focused prompts work much better than broad requests. Instead of "improve the document," try "add technical implementation details for AWS Lambda configuration."
As requirements evolve, update the AI with new context and ask for consistent integration across the document.
When you have specific organisational details (domains, existing infrastructure, etc.), provide this information for accurate customisation.
AI suggestions should always be reviewed for accuracy and alignment with your organisational standards and technical feasibility.
In our experience, the AI excelled at:
- Structure and completeness: Spotting missing sections and ensuring comprehensive coverage
- Technical depth: Generating detailed specifications based on best practices
- Consistency maintenance: Keeping related sections aligned when changes occur
- Research synthesis: Combining technical knowledge into actionable specifications
Human judgement remained essential for:
- Business context: Understanding organisational priorities and constraints
- Technical feasibility: Validating that AI suggestions align with existing systems
- Risk assessment: Evaluating whether suggested approaches make sense
- Stakeholder alignment: Ensuring requirements meet actual business needs
Here's where this gets really interesting. This PRD creation process is just the first step in a broader AI-enhanced development workflow. The comprehensive documentation we've created serves as the foundation for the next phase: AI-driven task creation and implementation.
With our detailed PRD in place, we can now use AI agents to:
1. Break Down Epic into Actionable Tasks
- Convert PRD sections into specific Jira tickets
- Ensure proper task dependencies and sequencing
- Create solid acceptance criteria for each deliverable
2. Generate Implementation Code
- Use the technical specifications to guide AI code generation
- Leverage the defined architecture patterns and security requirements
- Implement based on the documented testing strategies
3. Automate Infrastructure Deployment
- Generate Terraform code based on the documented AWS architecture
- Create CI/CD pipelines following the specified requirements
- Implement monitoring and observability as defined in the PRD
The structured, detailed nature of our PRD makes it perfect for AI agent consumption. Each section provides clear, unambiguous requirements that AI agents can translate into working code, infrastructure, and test suites. This creates a pretty seamless pipeline:
Requirements → PRD → AI Agents → Implementation
Using AI assistance transformed our PRD creation from what could have been a week-long slog into a focused, iterative session that delivered a comprehensive, technically detailed document. The key was treating AI as a collaborative partner rather than trying to replace human expertise.
What's exciting is that this PRD now serves as a blueprint for AI-driven implementation. The detailed specifications, clear acceptance criteria, and comprehensive technical requirements give AI agents everything they need to translate business requirements into working software with minimal human intervention.
The resulting PRD gave us clear implementation guidance while identifying important investigation areas and risk factors. Most importantly, the process showed us how AI can help ensure nothing important gets overlooked while maintaining the speed and agility we need in modern software development.
For teams looking to improve their requirements documentation process, AI assistance offers real benefits in terms of completeness, consistency, and speed - as long as you combine it with appropriate human oversight and domain expertise. When you pair this with AI-driven implementation, this approach has the potential to dramatically accelerate the journey from concept to production-ready software.
The complete PRD created through this process can be found at `docs/prds/mcp-roll-dice.md`, and the associated Jira epic tracking implementation is available at AI-122.