Meta Rule: When applying rules, explicitly state which rules are being followed in the output. You may abbreviate rule descriptions to key phrases.
This is a centralized ruleset for AI assistance across all projects. These rules define:
- Coding standards and best practices
- Project structure conventions
- Common patterns and anti-patterns
- Interaction preferences with AI
- These rules apply to all Windsurf projects by default: frontend, backend, infrastructure, test automation.
- They apply in Playwright, Node, React, and other projects unless explicitly overridden.
- For OSS contributions, these rules apply unless project-specific rules state otherwise (e.g., open-source-specific guidelines may override Branch Strategy or Commit Messages).
- When in doubt, prefer consistency with the project you are contributing to.
- Rules are applied globally across all projects unless explicitly overridden.
- Project-specific rules can be defined in separate files if needed.
- Rules are organized by category for easy reference.
- Examples are provided where helpful.
- Use functional and declarative programming patterns
- Prefer modular code over monoliths
- Follow DRY (Don't Repeat Yourself) principles
```typescript
// ✅ Good: Functional and modular
const processItems = (items: Item[]): ProcessedItem[] =>
  items.map(transformItem).filter(isValid);

// ❌ Bad: Imperative and repetitive
const processItems = (items: Item[]) => {
  const results = [];
  for (const item of items) {
    // Repetitive transformation logic
    // Repetitive validation logic
  }
  return results;
};
```
Refer to the Playwright docs for how to write tests, and adhere to Playwright best practices.
Definition of Done & Test Guidelines
- No Flaky Tests: Ensure reliability through proper async handling, explicit waits, and atomic test design.
- No Hard Waits/Sleeps: Use dynamic waiting strategies (e.g., polling, event-based triggers).
- Stateless & Parallelizable: Tests run independently; use cron jobs or semaphores only if unavoidable.
- No Order Dependency: Every it/describe/context block works in isolation (supports .only execution).
- Self-Cleaning Tests: Each test sets up its own data and automatically deletes/deactivates entities created during testing (see the example sketch after this list).
- Tests Live Near Source Code: Co-locate test files with the code they validate (e.g., *.spec.js alongside components).
- Shifted Left:
  - Start with local environments or ephemeral stacks.
  - Validate functionality across all deployment stages (local → dev → stage …).
- Low Maintenance: Minimize manual upkeep (e.g., avoid brittle selectors; don't repeat UI actions when an API can do the setup).
- Release Confidence:
  - Happy Path: Core user journeys are prioritized.
  - Edge Cases: Critical error/validation scenarios are covered.
  - Feature Flags: Test both enabled and disabled states where applicable.
- CI Execution Evidence: Integrate into pipelines with clear logs/artifacts.
- Visibility: Generate test reports (e.g., JUnit XML, HTML) for failures and trends.
- Test Design:
  - Assertions: Keep them explicit in tests; avoid abstracting them into helpers. Use parametrized tests for soft assertions.
  - Naming: Follow conventions (e.g., describe('Component'), it('should do X when Y')).
  - Size: Aim for files ≤200 lines; split/chunk large tests logically.
  - Speed: Target individual tests ≤1.5 mins; optimize slow setups (e.g., shared fixtures).
- Careful Abstractions: Favor readability over DRY when balancing helper reuse (e.g., page objects are okay, assertion logic is not).
- Test Cleanup: Ensure tests clean up resources they create (e.g., closing browsers, deleting test data).
- Tests should avoid conditionals (e.g., if/else) for flow control and try/catch blocks where possible, and should aim to work deterministically.
- Tests must not depend on hardcoded data → use factories and per-test setup.
- Always test both happy path and negative/error cases.
- API tests should run safely in parallel (no shared global state).
- Test idempotency where applicable (e.g. duplicate requests).
- Tests should clean up their data.
- Response logs should only be printed in case of failure.
- Auth tests must validate token expiration and renewal.
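To make several of these rules concrete (dynamic waiting, per-test data setup via a factory, self-cleaning), here is a minimal Playwright sketch; createUser/deleteUser, the route, and the field names are hypothetical placeholders rather than real project helpers.

```typescript
import { test, expect } from "@playwright/test";
// Hypothetical API factory helpers for per-test data setup/teardown
import { createUser, deleteUser } from "./helpers/userFactory";

test.describe("User profile", () => {
  test("should show the user's name when the profile page loads", async ({ page, request }) => {
    // Per-test setup via API factory: no shared state, safe to run in parallel
    const user = await createUser(request);

    await page.goto(`/users/${user.id}`);

    // Web-first assertion: Playwright retries until the locator resolves, so no hard waits
    await expect(page.getByRole("heading", { name: user.name })).toBeVisible();

    // Self-cleaning: remove the entity this test created
    await deleteUser(request, user.id);
  });
});
```

In a real suite the cleanup usually lives in a fixture or afterEach hook so it still runs when an assertion fails.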
- Prefer type aliases over interfaces for consistency
- Use explicit return types for functions
- Leverage discriminated unions for complex states
- Avoid exporting functions that are only used internally
```typescript
// ✅ Good: Discriminated union with explicit types
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: Error };

// ✅ Good: Const assertion
const ActionTypes = {
  CREATE: "create",
  UPDATE: "update",
  DELETE: "delete",
} as const;

// ❌ Bad: Avoid enums
enum ActionTypes {
  CREATE,
  UPDATE,
  DELETE,
}
```
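To show why the discriminated union above is preferred, here is a hedged sketch of consuming RequestState with an exhaustive switch; renderRequest is a hypothetical helper, and the `never` check makes the compiler flag any status that is added but not handled.

```typescript
// Hypothetical consumer of RequestState<T>; the compiler enforces that every status is handled
const renderRequest = <T>(state: RequestState<T>): string => {
  switch (state.status) {
    case "idle":
      return "Nothing requested yet";
    case "loading":
      return "Loading…";
    case "success":
      return `Loaded ${JSON.stringify(state.data)}`;
    case "error":
      return `Failed: ${state.error.message}`;
    default: {
      // Exhaustiveness check: adding a new status without a case fails to compile
      const unreachable: never = state;
      return unreachable;
    }
  }
};
```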
- components/: React components using PascalCase (UserProfile.tsx)
- utils/: Utility functions using camelCase (formatDate.ts)
- hooks/: Custom React hooks prefixed with 'use' (useAuth.ts)
- types/: TypeScript type definitions (UserTypes.ts)

- Components: PascalCase (e.g., DataTable.tsx)
- Utilities: camelCase (e.g., stringUtils.ts)
- Hooks: camelCase with 'use' prefix (e.g., useQueryState.ts)
- Constants: UPPER_SNAKE_CASE (e.g., MAX_RETRY_COUNT)
- Keep each file/module a cohesive unit; generally aim for fewer than 250 lines
- Avoid exporting functions that are only used internally
- Avoid overly long and complex functions, break them down to smaller, focused functions following a functional style
- Try to avoid mutations and global state wherever possible
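As a small sketch of the mutation rule (the names are illustrative, not from a real module), prefer producing new values over mutating shared ones:

```typescript
// ❌ Bad: mutates the shared array in place
const addTagMutating = (tags: string[], tag: string): string[] => {
  tags.push(tag);
  return tags;
};

// ✅ Good: returns a new array, leaving the input untouched
const addTag = (tags: readonly string[], tag: string): string[] => [...tags, tag];
```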
```tsx
// ✅ Good: Functional component with proper types
import React, { useEffect, useState } from "react";

type Props = {
  items: Item[];
  onSelect: (item: Item) => void;
};

const ItemList: React.FC<Props> = ({ items, onSelect }) => {
  const [selected, setSelected] = useState<Item | null>(null);

  useEffect(() => {
    return () => {
      // Cleanup on unmount (e.g., cancel subscriptions or timers)
    };
  }, []);

  // Item is assumed to expose `id` and `name` fields
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id} onClick={() => { setSelected(item); onSelect(item); }}>
          {selected === item ? `▶ ${item.name}` : item.name}
        </li>
      ))}
    </ul>
  );
};
```
- Use React Query for server state
- Context for global UI state
- Local state for component-specific data
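A minimal sketch of the server-state rule, assuming TanStack Query v5; fetchUser, UserCard, the query key, and the User shape are hypothetical placeholders.

```tsx
import { useQuery } from "@tanstack/react-query";

// Minimal shape for the sketch; a real project would import its own User type
type User = { id: string; name: string };

const fetchUser = async (id: string): Promise<User> => {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
};

const UserCard = ({ id }: { id: string }) => {
  // Server state lives in React Query; only UI concerns stay in local state
  const { data, isPending, error } = useQuery({
    queryKey: ["user", id],
    queryFn: () => fetchUser(id),
  });

  if (isPending) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;
  return <p>{data.name}</p>;
};
```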
```tsx
import React from "react";

// ErrorDisplay is assumed to be defined elsewhere in the project
type Props = { children: React.ReactNode };
type State = { hasError: boolean; error: Error | null };

class ErrorBoundary extends React.Component<Props, State> {
  state: State = { hasError: false, error: null };

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error };
  }

  render() {
    if (this.state.hasError && this.state.error) {
      return <ErrorDisplay error={this.state.error} />;
    }
    return this.props.children;
  }
}
```
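Typical usage is to wrap a subtree so render errors surface through the boundary instead of crashing the whole app (App here is a placeholder component):

```tsx
<ErrorBoundary>
  <App />
</ErrorBoundary>
```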
- ✅ Sanitize all user inputs
- ✅ Implement proper CSP headers
- ✅ Use HTTPS for all API calls
- ✅ Follow CORS best practices
- ✅ Store secrets only in env vars or secret managers; never hardcoded.
- ✅ Ensure error responses (esp. 4xx/5xx) do not leak stack traces or sensitive info.
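A minimal backend sketch of the last three points in the checklist above, assuming an Express service; the middleware and handler shown here are illustrative only, not a prescribed implementation.

```typescript
import express, { NextFunction, Request, Response } from "express";

const app = express();

// Secrets come from the environment or a secret manager, never from source code
const apiToken = process.env.API_TOKEN; // used for downstream HTTPS calls in a real service

// Basic CSP header; real projects typically manage this via middleware such as helmet
app.use((_req: Request, res: Response, next: NextFunction) => {
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  next();
});

// Error responses stay generic; full details go to server logs only
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  console.error(err);
  res.status(500).json({ message: "Internal server error" });
});
```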
✅ Good:

```
feat: add user authentication flow

Implements OAuth2 login process with Google

Related to #123
```

❌ Bad:

```
fixed stuff
```
- main: production code
- develop: integration branch
- feature/*: new features
- fix/*: bug fixes
- Use a PR template with checklist:
  - Has relevant tests
  - Does not introduce flakiness
  - No breaking change, or breaking change documented
  - Docs updated if needed
- All PRs must be reviewed by at least 1 engineer.
- Do not merge PRs if CI fails or E2E tests are red.
- Keep PRs <500 lines changed whenever possible; if larger, explain why.
- README.md with setup instructions
- API documentation for public interfaces
- Complex business logic explanations
- Security and permission requirements
- Use real codebase examples in documentation wherever possible
- When using generic examples, ensure they accurately represent implementation patterns
- Add commented reference implementations to maintain documentation-code alignment
- Document "why" not "what"
- Add links to relevant resources
- Explain complex algorithms
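A short illustration of the "why, not what" rule; the retry constant and the reasoning in the comment are made-up examples, not values from a real service.

```typescript
// ❌ Bad: restates what the code already says
// Set the retry count to 3
const maxRetries = 3;

// ✅ Good: explains why the value was chosen
// Three attempts absorbs transient upstream failures without blowing the
// request latency budget; raise only after revisiting that budget.
const MAX_RETRY_COUNT = 3;
```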