Overview
The Project AGENTS.md Generator Team creates a single, authoritative AGENTS.md at the project root that serves as the system prompt for every AI coding assistant working on the repository. The deliverable follows a fixed eight-chapter template covering AI quick context and tech stack, directory map, development guidelines, testing strategy, architectural decision records, AI interaction prompts, performance SLAs, and maintenance policy. The team blends automatic repository inspection (go.mod, package.json, pom.xml, pyproject.toml, directory tree) with targeted interviews to fill domain-specific gaps the code alone cannot reveal — user personas, business rules, naming conventions, hot paths, and non-goals. The result is a living constitution that keeps AI-generated code aligned with the project's real architecture and keeps human reviewers from chasing down the same context twice.
Team Members
1. Repository Analyst
- Role: Auto-discovers project type, tech stack, and directory structure
- Expertise: Language detection (Go/Java/Python/Node/Rust/frontend), build-tool fingerprints, dependency parsing, AST-level scanning
- Responsibilities:
- Detect language and framework from manifest files (`go.mod`, `package.json`, `pom.xml`, `build.gradle`, `pyproject.toml`, `Cargo.toml`)
- Extract language version, primary framework, ORM/data layer, cache, and message queue references from dependencies
- Generate a two-level directory tree excluding build artifacts (`node_modules`, `vendor`, `dist`, `target`, `.git`)
- Identify build/test/lint commands from `Makefile`, `package.json` scripts, or equivalents
- Flag auto-generated directories and lockfiles that the AI must not modify
- Hand off a structured tech-stack report to the Domain Knowledge Curator
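Manifest-based detection like the above can be sketched in a few lines. The mapping below is a hypothetical subset for illustration; a real analyst would also parse the manifest contents to extract versions and frameworks:

```python
from pathlib import Path

# Hypothetical manifest-to-language fingerprint table (not exhaustive).
MANIFEST_LANG = {
    "go.mod": "Go",
    "package.json": "Node",
    "pom.xml": "Java",
    "build.gradle": "Java",
    "pyproject.toml": "Python",
    "Cargo.toml": "Rust",
}

def detect_languages(root: str) -> list[str]:
    """Return languages whose manifest files exist at the project root."""
    found = []
    for manifest, lang in MANIFEST_LANG.items():
        if (Path(root) / manifest).is_file():
            found.append(lang)
    return found
```

A polyglot monorepo would return several entries, which is the signal to repeat detection per package directory.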
2. Domain Knowledge Curator
- Role: Captures business context the code cannot express
- Expertise: Requirements elicitation, domain modeling, glossary design, user-persona interviews
- Responsibilities:
- Interview the user for project name (bilingual), one-line purpose, business domain, and criticality tier (P0/P1/P2)
- Elicit 3–5 core user personas with scenarios and explicit non-goals ("what we don't do")
- Extract 2–3 core business rules and invariants that drive critical code paths
- Build the Chinese-to-English domain glossary that enforces AI naming discipline
- Mark every unresolved field with `<!-- TODO: 需要补充 -->` ("needs completion") rather than inventing answers
- Record upstream callers, downstream consumers, and shared data tables
3. Architecture Documenter
- Role: Writes the directory map, architecture narrative, and coding standards
- Expertise: Clean Architecture, DDD layering, MVC/MVVM, feature-based organization, data-flow modeling
- Responsibilities:
- Describe the architectural pattern in use and the strict layer-boundary rules
- Produce the `text`-fenced directory tree with inline role annotations for each layer
- Draw the canonical data-flow arrow chain (Input → Layer 1 → Layer 2 → Output)
- Populate Chapter 3 with real build/run/test/generate commands the user confirmed
- Define editable vs. forbidden zones (auto-generated files, lockfiles, vendored libraries)
- Author a Few-Shot Example in Chapter 3.3 that demonstrates the project's exact naming, error-handling, and logging style
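The two-level, artifact-excluding tree can be rendered mechanically and then annotated by hand. A minimal sketch, with placeholder directory names and the same exclusion set the Repository Analyst uses:

```python
from pathlib import Path

# Build artifacts and vendored code are excluded, matching the analyst's rules.
EXCLUDED = {"node_modules", "vendor", "dist", "target", ".git"}

def two_level_tree(root: str) -> str:
    """Render a two-level directory tree, skipping excluded directories."""
    lines = []
    top = sorted(p for p in Path(root).iterdir()
                 if p.is_dir() and p.name not in EXCLUDED)
    for d in top:
        lines.append(f"{d.name}/")
        for child in sorted(p for p in d.iterdir()
                            if p.is_dir() and p.name not in EXCLUDED):
            lines.append(f"  {child.name}/")
    return "\n".join(lines)
```

The documenter then appends the per-layer role annotations that no scanner can infer.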
4. Quality & Performance Steward
- Role: Locks down testing strategy, SLAs, ADRs, and maintenance policy
- Expertise: Test pyramid design, performance budgeting, ADR authoring, documentation lifecycle
- Responsibilities:
- Set unit/integration/E2E coverage targets and point to the canonical test directories
- Fill the Performance SLA table with P95/P99 latency, QPS, and resource-consumption thresholds
- Enforce mandatory optimization rules (no N+1, mandatory pagination, debounce/throttle, worker offloading)
- Draft starter ADR entries and a pitfalls list seeded from the user's known incidents
- Write Chapter 6 AI interaction prompt templates and pre-submit self-check items
- Define update triggers and name the responsible owner in Chapter 8
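The SLA table binds generated code, so it helps to express it as an executable gate. The thresholds below are placeholder budgets, not real Chapter 7 values:

```python
# Hypothetical SLA budgets in the shape of the Chapter 7 table.
SLA = {"p95_ms": 200, "p99_ms": 500, "min_qps": 100}

def check_sla(p95_ms: float, p99_ms: float, qps: float) -> list[str]:
    """Return a list of SLA violations; an empty list means the budget is met."""
    violations = []
    if p95_ms > SLA["p95_ms"]:
        violations.append(f"P95 {p95_ms}ms exceeds {SLA['p95_ms']}ms budget")
    if p99_ms > SLA["p99_ms"]:
        violations.append(f"P99 {p99_ms}ms exceeds {SLA['p99_ms']}ms budget")
    if qps < SLA["min_qps"]:
        violations.append(f"QPS {qps} below {SLA['min_qps']} floor")
    return violations
```

Wiring this into a load-test job turns "regressions require an explicit warning" from a policy sentence into a failing check.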
Key Principles
- Eight chapters, always — Never drop a section; mark unknowns with `<!-- TODO -->` instead of omitting.
- Auto first, ask second — Infer everything possible from the repository before interrupting the user.
- AI is the reader — Write executable directives, not human-friendly prose; forbid ambiguity and marketing language.
- Glossary is law — Enforce one canonical English term per business concept; the AI must not invent synonyms.
- Few-Shot beats rulebooks — A single idiomatic code example in 3.3 teaches the AI more than ten paragraphs of style rules.
- Performance is non-negotiable — SLAs in Chapter 7 bind generated code; regressions require an explicit warning.
- Living constitution — The document updates whenever tech stack, directory layout, glossary, or pitfalls change.
Workflow
- Auto-Detect — Repository Analyst scans manifests and directory tree, producing a tech-stack and build-command baseline.
- Interview — Domain Knowledge Curator fills gaps: project identity, personas, non-goals, business rules, glossary, dependencies.
- Architect — Architecture Documenter authors Chapters 1–3 (tech stack, directory map, workflows, boundaries, Few-Shot Example).
- Govern — Quality & Performance Steward writes Chapters 4–8 (testing, ADRs, AI prompts, SLAs, maintenance).
- Assemble — The team merges all chapters into a single `AGENTS.md` at the project root, preserving template order.
- Report & Handoff — The team emits a generation report listing filled sections, TODOs, and a checklist of fields requiring human confirmation.
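The Assemble and Report steps can be sketched together. The chapter titles below are an assumed rendering of the eight-chapter template, not its authoritative headings:

```python
import re

# Assumed chapter titles; the real template defines the canonical names.
CHAPTER_ORDER = [
    "AI Quick Context & Tech Stack",
    "Directory Map",
    "Development Guidelines",
    "Testing Strategy",
    "Architectural Decision Records",
    "AI Interaction Prompts",
    "Performance SLAs",
    "Maintenance Policy",
]

def assemble(chapters: dict[str, str]) -> tuple[str, int]:
    """Merge chapter bodies in template order; count TODO markers for the report."""
    body = "\n\n".join(
        f"## {title}\n{chapters.get(title, '<!-- TODO -->')}"
        for title in CHAPTER_ORDER
    )
    todo_count = len(re.findall(r"<!--\s*TODO", body))
    return body, todo_count
```

Missing chapters become `<!-- TODO -->` placeholders rather than being dropped, which is exactly the "eight chapters, always" principle in code.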
Output Artifacts
- `AGENTS.md` at the project root, conforming to the eight-chapter template.
- Auto-detected tech-stack summary (language, framework, build tool, data layer, cache, MQ).
- Domain glossary table mapping business terms to canonical code/DB identifiers.
- Directory tree with per-layer responsibility annotations and forbidden-zone flags.
- Generation report listing TODOs, required human edits, and suggested next commits.
Ideal For
- Teams onboarding AI coding assistants (Cursor, Windsurf, Claude Code, Copilot, OpenCode) onto an existing codebase.
- Greenfield projects that want to lock in architecture conventions before the first AI-generated PR.
- Polyglot monorepos needing one unified guideline across Go, Java, Python, Node, and frontend packages.
- Platform teams standardizing how every service exposes itself to AI tooling.
- Staff engineers who keep rewriting the same onboarding doc for every new repository.
Integration Points
- Drops
AGENTS.mdat the project root where Cursor, Windsurf, Claude Code, and OpenCode auto-load it as system context. - Pairs with
.cursorrules,CLAUDE.md, orcopilot-instructions.md— either as a single source of truth or referenced from them. - Feeds downstream documentation generators (JSDoc, GoDoc, Sphinx, Javadoc) with the glossary and architecture map.
- Integrates with PR templates by adding "AGENTS.md updated?" as a required checklist item.
- Complements code-review teams that enforce the conventions captured in the generated document.