Synthetic data generation is not about creating random test cases. It's about systematically surfacing specific failure modes in your LLM application.
Before generating any synthetic data, use your application yourself. Try different scenarios, edge cases, and realistic workflows. If you can't use it extensively, recruit 2-3 people to test it while you observe their interactions.
Create synthetic data only when you have a clear hypothesis about your application's failure modes. Synthetic data is most valuable for failure modes that:
- Require systematic testing across many variations to understand the pattern
- Occur infrequently in natural usage but have high impact when they do occur
- Involve complex interactions between multiple system components
When real user data is sparse, use structured generation rather than asking an LLM for "random queries."
Define 3-4 key dimensions that represent where your application is likely to fail. For a recipe bot:
- Recipe Type: Main dish, dessert, snack, side dish
- User Persona: Beginner cook, busy parent, fitness enthusiast, professional chef
- Constraint Complexity: Single constraint, multiple constraints, conflicting constraints
Create tuples by combining one value from each dimension:
- (Main dish, Beginner cook, Single constraint)
- (Dessert, Busy parent, Multiple constraints)
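A minimal sketch of this tuple-construction step, using the recipe-bot dimensions above (the dictionary keys and variable names are illustrative):

```python
import itertools

# Dimension values taken from the recipe-bot example above.
dimensions = {
    "recipe_type": ["main dish", "dessert", "snack", "side dish"],
    "persona": ["beginner cook", "busy parent", "fitness enthusiast", "professional chef"],
    "constraints": ["single constraint", "multiple constraints", "conflicting constraints"],
}

# Full cross product: one value from each dimension per tuple.
tuples = list(itertools.product(*dimensions.values()))

print(len(tuples))   # 4 * 4 * 3 = 48 combinations
print(tuples[0])     # ('main dish', 'beginner cook', 'single constraint')
```

You rarely need the full cross product; pick the 5-10 tuples that best match your failure hypotheses.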
Generate queries from tuples using a second LLM call. This two-step process produces more diverse, realistic queries than single-step generation.
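The second step might look like the sketch below: a prompt template that turns a tuple into a natural-language query. The prompt wording and the `llm_client` call are placeholders; swap in your own client and adapt the instructions to your application.

```python
def build_query_prompt(recipe_type: str, persona: str, constraints: str) -> str:
    """Build the second-step prompt that turns one tuple into a realistic user query.

    The exact wording is illustrative; tune it until the generated queries
    sound like your real users.
    """
    return (
        f"Write one realistic query a {persona} might send to a recipe bot, "
        f"asking for a {recipe_type} with {constraints}. "
        "Return only the query text, written in the user's natural voice."
    )

prompt = build_query_prompt("dessert", "busy parent", "multiple constraints")
# Pass the prompt to your LLM client of choice (hypothetical API):
# query = llm_client.complete(prompt)
```

Keeping tuple construction and query generation as separate calls is what makes the output diverse: the tuples force coverage of the dimension space, and the second call only has to make one combination sound natural.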
Start with around 100 synthetic examples to achieve sufficient coverage and approach theoretical saturation—the point where additional examples reveal few new failure modes. Focus on:
- High-impact failure modes: Problems that affect core user workflows
- Coverage gaps: Scenarios underrepresented in your real usage data
- Systematic failure patterns: Issues that require testing across multiple variations
Generate more examples only if initial testing reveals additional failure modes that need deeper exploration.
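One hedged way to reach a target count (say, ~100) without hand-picking every combination is to sample from the cross product, falling back to the full product when it is small. The function name and seeding choice are assumptions, not a prescribed method:

```python
import itertools
import random

def sample_tuples(dimensions: dict, n: int, seed: int = 0) -> list:
    """Draw up to n tuples from the cross product of dimension values.

    If the product has fewer than n combinations, return all of them;
    in that case, generate multiple query variants per tuple to reach
    the target example count instead.
    """
    all_tuples = list(itertools.product(*dimensions.values()))
    if len(all_tuples) <= n:
        return all_tuples
    rng = random.Random(seed)  # seeded so the test set is reproducible
    return rng.sample(all_tuples, n)
```

Weighting the sample toward tuples tied to high-impact failure hypotheses, rather than sampling uniformly, keeps the budget focused on the failures listed above.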
- Use your application extensively to build intuition about failure modes
- Define 3-4 dimensions based on observed or anticipated failures
- Create 5-10 structured tuples covering your priority failure scenarios
- Generate natural language queries from each tuple using a separate LLM call
- Scale to more examples across your most important failure hypotheses (we suggest at least ~100)
- Test and iterate on the most critical failure modes first
The goal is targeted failure discovery, not comprehensive test coverage. Generate synthetic data strategically to accelerate iteration on problems that matter for your users.