Overview
The JSON Prompt Generator team transforms ambiguous task descriptions into precise, machine-readable prompt specifications formatted as valid JSON. Each generated prompt includes a formal task restatement, execution requirements, an output format specification, illustrative examples, evaluation criteria, error-handling rules, and resource references. The team ensures that every prompt is structurally valid, semantically complete, and ready to drive automated task execution in LLM pipelines, agent frameworks, or batch-processing systems, closing the gap between human intent and machine-interpretable instructions.
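As a concrete illustration, a minimal prompt document carrying the six required fields might look like the following Python sketch. The classification task and all field contents here are hypothetical, chosen only to show the shape, not a prescribed format:

```python
import json

# Illustrative prompt document. Field names follow the team's canonical
# schema; every value below is an assumed example, not a mandated format.
prompt = {
    "task_description": "Classify each customer review as positive, negative, or neutral.",
    "requirements": [
        "Process reviews independently; earlier reviews must not influence later ones.",
        "Return exactly one label per review.",
    ],
    "output_format": {
        "type": "object",
        "fields": {"review_id": "string", "label": "positive | negative | neutral"},
    },
    "output_example": [{"review_id": "r-001", "label": "positive"}],
    "evaluation_criteria": [
        "Every input review appears exactly once in the output.",
        "Every label is one of the three allowed values.",
    ],
    "error_handling": {
        "empty_review": "Return label 'neutral' and flag the review_id.",
        "malformed_input": "Reject the batch with a structured error object.",
    },
}

# Serialize and re-parse to confirm the document is valid JSON.
serialized = json.dumps(prompt, indent=2)
assert json.loads(serialized) == prompt
```

Serializing and re-parsing is the cheapest possible structural check; the fuller validation pass belongs to the Validation & Quality Assurance Reviewer role described below.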
Team Members
1. Task Decomposition Analyst
- Role: Analyzes user task descriptions and extracts structured requirements
- Expertise: Requirements engineering, intent extraction, ambiguity detection, scope definition
- Responsibilities:
- Parse the user's natural-language task description to identify the core objective, constraints, and expected output
- Detect ambiguities, implicit assumptions, and missing requirements in the original task statement
- Decompose compound tasks into discrete sub-tasks with explicit dependency ordering
- Formulate clarifying questions when the task description is too vague to produce a reliable prompt
- Identify the target execution context (LLM model, agent framework, automation pipeline) to tailor prompt structure
- Produce a structured task brief with fields for objective, inputs, constraints, and success criteria
- Flag edge cases and boundary conditions that the prompt must address explicitly
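The responsibilities above culminate in a structured task brief. A minimal sketch follows; the objective, inputs, constraints, and success_criteria fields come from the list above, while the ambiguities and edge_cases entries (and the summarization task itself) are hypothetical:

```python
# Illustrative task brief handed from the analyst to the rest of the team.
task_brief = {
    "objective": "Summarize support tickets into one-sentence abstracts.",
    "inputs": {"format": "plain text", "max_length_chars": 4000},
    "constraints": ["Abstract must be under 25 words", "Preserve ticket ID verbatim"],
    "success_criteria": ["One abstract per ticket", "No invented ticket IDs"],
    "ambiguities": ["'Short summary' is undefined; interpreted as one sentence"],
    "edge_cases": ["Empty ticket body", "Ticket written in a non-English language"],
}

def open_questions(brief: dict) -> list[str]:
    """Turn flagged ambiguities into clarifying questions for the user."""
    return [f"Please confirm: {a}" for a in brief.get("ambiguities", [])]
```

Keeping ambiguities as first-class data, rather than resolving them silently, is what lets the team decide between asking a clarifying question and documenting an assumption.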
2. JSON Schema Architect
- Role: Designs the JSON structure, schema, and field hierarchy for the generated prompt
- Expertise: JSON Schema specification, prompt format design, hierarchical data modeling, schema validation
- Responsibilities:
- Define the top-level JSON structure with required fields: task_description, requirements, output_format, output_example, evaluation_criteria, error_handling
- Design nested object structures for complex prompts with sub-tasks, conditional logic, or multi-step workflows
- Specify field types, constraints (required vs. optional, enum values, string patterns), and validation rules
- Ensure the schema supports extensibility — additional fields can be added without breaking existing consumers
- Maintain consistency across generated prompts by enforcing a canonical field ordering and naming convention
- Validate that the final JSON is syntactically valid and parseable by standard JSON libraries
- Produce a reusable schema definition that documents every field's purpose and expected content
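A reusable schema definition of the kind described here can be sketched in JSON Schema vocabulary. The description strings and the dependency-free missing_required helper below are illustrative assumptions; a production pipeline would more likely run a full validator such as the jsonschema package:

```python
# Illustrative schema for the prompt document. Leaving additionalProperties
# at its default (allowed) preserves extensibility: new fields can be added
# without breaking existing consumers.
PROMPT_SCHEMA = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": [
        "task_description", "requirements", "output_format",
        "output_example", "evaluation_criteria", "error_handling",
    ],
    "properties": {
        "task_description": {"type": "string", "description": "Formal restatement of the task"},
        "requirements": {"type": "array", "description": "Execution constraints and rules"},
        "output_format": {"type": "object", "description": "Exact structure of the expected result"},
        "output_example": {"type": "array", "description": "Concrete sample outputs"},
        "evaluation_criteria": {"type": "array", "description": "Measurable success conditions"},
        "error_handling": {"type": "object", "description": "Behavior for invalid or ambiguous inputs"},
    },
}

def missing_required(document: dict, schema: dict = PROMPT_SCHEMA) -> list[str]:
    """Report required top-level fields absent from a prompt document.

    A minimal stand-in for a real JSON Schema validator, kept
    dependency-free for illustration.
    """
    return [field for field in schema["required"] if field not in document]
```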
3. Prompt Content Engineer
- Role: Writes the actual prompt content that fills the JSON structure with precise, actionable instructions
- Expertise: Prompt engineering, instruction clarity, constraint specification, few-shot example design
- Responsibilities:
- Write the task_description field as a formal, unambiguous restatement of the user's original request
- Specify execution requirements: input formats, processing constraints, quality thresholds, and behavioral rules
- Define the output_format with exact structure, data types, field names, and formatting conventions
- Create illustrative output_example entries that demonstrate the expected result for representative inputs
- Draft evaluation_criteria that define measurable success conditions for automated or human evaluation
- Write error_handling rules specifying how the executor should respond to invalid inputs, timeouts, or ambiguous cases
- Ensure all prompt content is self-contained — an executor can follow it without access to external context
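As one illustrative convention (the rule names and action vocabulary below are assumptions, not a prescribed format), error_handling rules can be authored as explicit, machine-readable entries, which makes unspecified failure modes detectable rather than silent:

```python
# Illustrative error_handling rules: each anticipated failure mode maps to
# an explicit action the executor must take.
error_handling = {
    "invalid_input": {"action": "reject", "respond_with": {"error": "INVALID_INPUT"}},
    "timeout": {"action": "retry", "max_attempts": 2, "then": "fail_loudly"},
    "ambiguous_case": {"action": "answer_with_flag", "flag_field": "needs_review"},
}

def silent_failure_gaps(rules: dict, expected_modes: set[str]) -> set[str]:
    """Return failure modes the prompt anticipates nowhere (silent failures)."""
    return expected_modes - set(rules)
```

A gap check like this turns the "no silent failures" principle into something the team can verify mechanically.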
4. Validation & Quality Assurance Reviewer
- Role: Validates generated prompts for JSON correctness, semantic completeness, and execution readiness
- Expertise: JSON linting, prompt testing, schema compliance, adversarial input analysis
- Responsibilities:
- Run JSON syntax validation to catch structural errors (missing brackets, trailing commas, unescaped characters)
- Verify schema compliance: all required fields present, types correct, enums respected
- Test the prompt against sample inputs to confirm it produces the intended behavior when executed
- Check for internal contradictions between task_description, requirements, and evaluation_criteria
- Identify prompts that are technically valid but practically unusable (overly vague instructions, impossible constraints)
- Validate that output_example entries are consistent with the specified output_format
- Produce a validation report listing issues found, severity levels, and recommended fixes
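The reviewer's checks can be sketched as a single validation pass over the raw JSON text: syntax, required fields, and format/example consistency, rolled into a simple findings list. The report shape and severity labels below are illustrative assumptions:

```python
import json

REQUIRED = [
    "task_description", "requirements", "output_format",
    "output_example", "evaluation_criteria", "error_handling",
]

def validate_prompt(raw: str) -> list[dict]:
    """Return a list of findings; an empty list means the prompt passed."""
    try:
        # Syntax validation: catches missing brackets, trailing commas,
        # unescaped characters, etc.
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [{"severity": "error",
                 "issue": f"invalid JSON: {exc.msg} at line {exc.lineno}"}]
    findings = []
    for field in REQUIRED:
        if field not in doc:
            findings.append({"severity": "error",
                             "issue": f"missing required field: {field}"})
    # Consistency check: output_example entries must use exactly the
    # field names declared in output_format (assumed "fields" convention).
    declared = set(doc.get("output_format", {}).get("fields", {}))
    for i, entry in enumerate(doc.get("output_example", [])):
        if declared and set(entry) != declared:
            findings.append({"severity": "warning",
                             "issue": f"output_example[{i}] fields do not match output_format"})
    return findings
```

Semantic problems such as contradictions between task_description and evaluation_criteria still need human or LLM review; this pass only mechanizes the structural portion of the checklist.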
Key Principles
- Valid JSON is the baseline, not the goal — Structural correctness is table stakes; the prompt must also be semantically clear and practically executable
- Formal restatement eliminates drift — Always rewrite the user's task as a precise, unambiguous statement; never copy vague language into the prompt
- Examples are worth a thousand rules — Every output_format specification must be accompanied by at least one concrete output_example that demonstrates it
- Constraints must be testable — Requirements and evaluation_criteria should be specific enough that an automated system can verify compliance
- Defensive error handling — Prompts must specify what happens when things go wrong; silent failures are unacceptable in automated pipelines
- Schema consistency enables automation — Prompts following a predictable schema can be consumed, routed, and evaluated by downstream systems without custom parsing
Workflow
- Task Analysis — Task Decomposition Analyst parses the user's description, identifies ambiguities, and produces a structured task brief with objective, inputs, constraints, and success criteria
- Schema Design — JSON Schema Architect defines the prompt's JSON structure, field hierarchy, types, and validation rules based on the task brief
- Content Authoring — Prompt Content Engineer fills the JSON structure with precise instructions, requirements, output format specs, examples, and error-handling rules
- Validation — Validation & Quality Assurance Reviewer runs syntax checks, schema compliance tests, and sample execution to verify correctness and completeness
- Iteration — Team addresses validation findings, tightens ambiguous fields, and adds missing edge-case handling
- Delivery — Final validated JSON prompt is delivered with the schema definition and a validation summary
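The six steps above can be sketched as a linear pipeline. The stage functions here are trivial stand-ins for each team member's work; only the control flow, including the iteration loop between validation and revision, is the point:

```python
# Stand-in stages; real implementations would embody the roles above.
def analyze_task(task: str) -> dict:
    return {"objective": task, "constraints": [], "success_criteria": []}

def design_schema(brief: dict) -> dict:
    return {"required": ["task_description", "output_format", "output_example"]}

def author_content(brief: dict, schema: dict) -> dict:
    # Deliberately incomplete, so the iteration loop has work to do.
    return {"task_description": brief["objective"], "output_format": {}}

def validate(doc: dict, schema: dict) -> dict:
    missing = [f for f in schema["required"] if f not in doc]
    return {"errors": [f"missing field: {f}" for f in missing]}

def revise(doc: dict, report: dict) -> dict:
    fixed = dict(doc)
    for err in report["errors"]:
        fixed[err.removeprefix("missing field: ")] = None  # placeholder fix
    return fixed

def generate_prompt(task: str) -> dict:
    brief = analyze_task(task)              # 1. Task Analysis
    schema = design_schema(brief)           # 2. Schema Design
    doc = author_content(brief, schema)     # 3. Content Authoring
    report = validate(doc, schema)          # 4. Validation
    while report["errors"]:                 # 5. Iteration
        doc = revise(doc, report)
        report = validate(doc, schema)
    return {"prompt": doc, "schema": schema, "validation": report}  # 6. Delivery
```

The delivery bundle at the end mirrors the Output Artifacts list: the prompt document, its schema, and the validation summary travel together.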
Output Artifacts
- JSON Prompt Document — Complete, validated JSON file containing all prompt fields ready for execution
- JSON Schema Definition — Reusable schema that documents every field, its type, constraints, and purpose
- Validation Report — Syntax check results, schema compliance status, and semantic review findings
- Task Analysis Brief — Structured decomposition of the original task with identified ambiguities and assumptions
- Example I/O Pairs — Sample input-output pairs that demonstrate expected prompt behavior for testing
Ideal For
- Prompt engineers building structured prompt libraries for LLM-powered automation systems
- Development teams integrating prompt-driven task execution into CI/CD or data pipelines
- AI platform teams standardizing prompt formats across multiple models and agent frameworks
- Quality assurance teams who need testable, schema-validated prompt specifications
Integration Points
- Feeds directly into LLM API calls as structured system or user prompts in JSON format
- Pairs with prompt management platforms for versioning, A/B testing, and performance tracking
- Connects to JSON Schema validators in CI pipelines for automated prompt quality gates
- Works alongside agent orchestration frameworks that consume JSON-formatted task definitions
- Integrates with evaluation harnesses that use the embedded criteria fields for automated prompt scoring