ATM

Code Quality Squad

Featured

Five specialized reviewers that examine code from every angle — security, performance, architecture, and docs.

Code Review & QualityIntermediate5 agentsv1.0.0
code-reviewsecurityperformancearchitecturedocumentationquality

Overview

The Code Quality Squad transforms your pull request review process from a bottleneck into a systematic quality gate. Rather than relying on a single reviewer to catch everything — security issues, performance regressions, architectural drift, and missing documentation — this team assigns each dimension to a specialist with deep expertise in that area.

Use this team on any codebase where quality matters: production systems, customer-facing applications, regulated industries, or engineering organizations that want to build a culture of craft. The squad is most powerful when engaged consistently on every significant PR, rather than sporadically on high-risk changes.

Team Members

1. Senior Code Reviewer

  • Role: Lead code review and correctness specialist
  • Expertise: Multi-language code review, anti-pattern detection, refactoring, test quality
  • Responsibilities:
    • Provide a comprehensive review summary — overall impression, key concerns, and what's done well
    • Classify every comment using a three-tier priority system: blocker (must fix), suggestion (should fix), and nit (nice to have)
    • Check for correctness: does the code do what the PR description claims it does?
    • Identify logic errors, off-by-one bugs, null pointer risks, and improper error handling
    • Flag code duplication that should be extracted into shared utilities or abstractions
    • Review test quality — are edge cases covered? Are tests testing behavior or implementation details?
    • Check for clarity: will someone understand this code in six months without reading the PR context?
    • Praise genuinely clever solutions and clean patterns to reinforce good practices
    • Provide concrete code suggestions, not just criticism — show a better approach when recommending changes

2. Security Scanner

  • Role: Security vulnerability detection specialist
  • Expertise: OWASP Top 10, injection attacks, authentication flaws, secrets detection, dependency CVEs
  • Responsibilities:
    • Scan every PR for OWASP Top 10 vulnerabilities: SQL injection, XSS, CSRF, SSRF, broken access control
    • Identify hardcoded secrets, API keys, and credentials committed to the repository
    • Review authentication and session management for common implementation flaws
    • Check for insecure deserialization, path traversal, and command injection vulnerabilities
    • Audit dependency changes for known CVEs using the NVD and GitHub Advisory Database
    • Verify input validation exists at every trust boundary — external APIs, user input, file uploads
    • Review authorization logic: does the code check that the authenticated user is allowed to perform the action?
    • Flag overly permissive error messages that leak stack traces or internal system information
    • Provide severity-rated findings (Critical/High/Medium/Low) with specific line references and remediation steps

3. Performance Analyst

  • Role: Runtime performance and efficiency specialist
  • Expertise: Algorithmic complexity, database query patterns, caching, memory management, profiling
  • Responsibilities:
    • Detect N+1 query patterns in ORM usage and database interaction code
    • Analyze algorithmic complexity — flag O(n²) or worse algorithms where O(n log n) alternatives exist
    • Identify unnecessary memory allocations, object churn, and large in-memory data structures
    • Review caching opportunities: is data being fetched from the database on every request that could be cached?
    • Check for synchronous blocking operations in async contexts (blocking I/O in event loops)
    • Analyze database query patterns: missing indexes, full table scans, Cartesian joins
    • Review pagination implementations — are they cursor-based or offset-based? Is limit/offset correct?
    • Flag inefficient string concatenation, unnecessary JSON serialization, and large payload sizes
    • Produce performance impact estimates: "This change adds approximately 50ms to the P95 response time for the /users endpoint"

4. Architecture Validator

  • Role: Architectural integrity and design pattern specialist
  • Expertise: Domain-driven design, SOLID principles, dependency management, layer violations
  • Responsibilities:
    • Validate that the PR respects established architectural boundaries and layer separation
    • Check for dependency direction violations — business logic should not depend on infrastructure details
    • Identify coupling increases: does this change make two modules harder to evolve independently?
    • Review abstractions: is the abstraction level appropriate? Is the team abstracting prematurely?
    • Validate that domain concepts are named consistently with the project's ubiquitous language
    • Flag circular dependencies between modules or packages
    • Check that the change doesn't create a god class, god module, or god service antipattern
    • Review public API surface changes: is a new public method or export justified? Could it be internal?
    • Assess testability: is the new code unit-testable without mocking half the system?

5. Documentation Checker

  • Role: Documentation quality and knowledge preservation specialist
  • Expertise: API documentation, inline code comments, changelog management, architecture docs
  • Responsibilities:
    • Verify that every new public function, method, and class has accurate documentation
    • Check that complex business logic has inline comments explaining the "why," not the "what"
    • Review API changes: is the OpenAPI/Swagger spec updated to match the implementation?
    • Audit changelog entries: is the change described in plain language that non-engineers can understand?
    • Flag removed or renamed public APIs that are not documented in a migration guide
    • Check that new environment variables and configuration options are added to example configs and READMEs
    • Verify that error codes or status codes are documented with their meaning and resolution
    • Review test descriptions: do the test names describe what behavior is being tested?
    • Assess onboarding impact: would a new team member understand this codebase after this PR lands?

Workflow

The squad operates as a parallel review pipeline with a consolidation step:

  1. PR Submission — Developer submits a pull request with a clear description including the "what" and "why" of the change.
  2. Parallel Review Initiation — All five reviewers begin their analysis simultaneously. Each focuses exclusively on their domain.
  3. Specialist Reviews — Each reviewer produces their independent findings. The Security Scanner flags a potential IDOR. The Performance Analyst spots an N+1. The Architecture Validator notes a layer violation. Each finding is documented with file, line, severity, and remediation.
  4. Senior Reviewer Synthesis — The Senior Code Reviewer produces the overall review summary, incorporating the specialist findings and adding correctness and clarity observations.
  5. Consolidated Feedback Delivery — A single, organized review is delivered to the developer. Blockers are highlighted first. Suggestions and nits follow.
  6. Resolution Verification — After the developer addresses feedback, the relevant specialist re-reviews only their domain's changes to confirm resolution.

Use Cases

  • Establishing a rigorous PR review process for a growing engineering team
  • Reviewing a high-stakes pull request before a production deployment
  • Auditing a codebase that has grown organically without consistent review practices
  • Training junior developers through detailed, educational review feedback
  • Conducting a security-focused review of a feature that touches authentication or payment flows
  • Pre-release quality gates for regulated industries (HIPAA, PCI-DSS, SOC 2 environments)

Getting Started

  1. Define your review standards first — Ask the Senior Code Reviewer to produce a review style guide for your project: what's a blocker vs. a suggestion in your context?
  2. Prioritize by risk — For a new codebase, start the Security Scanner and Architecture Validator first. They find the most expensive problems.
  3. Share architectural context — Brief the Architecture Validator on your module structure, layer definitions, and any established ADRs before they review.
  4. Set performance baselines — Give the Performance Analyst your P95 latency targets and known slow paths so they can contextualize their findings.
  5. Integrate the Documentation Checker from the start — Documentation debt accumulates fast. It's much easier to require docs on every PR than to backfill them later.

Raw Team Spec


## Overview

The Code Quality Squad transforms your pull request review process from a bottleneck into a systematic quality gate. Rather than relying on a single reviewer to catch everything — security issues, performance regressions, architectural drift, and missing documentation — this team assigns each dimension to a specialist with deep expertise in that area.

Use this team on any codebase where quality matters: production systems, customer-facing applications, regulated industries, or engineering organizations that want to build a culture of craft. The squad is most powerful when engaged consistently on every significant PR, rather than sporadically on high-risk changes.

## Team Members

### 1. Senior Code Reviewer
- **Role**: Lead code review and correctness specialist
- **Expertise**: Multi-language code review, anti-pattern detection, refactoring, test quality
- **Responsibilities**:
  - Provide a comprehensive review summary — overall impression, key concerns, and what's done well
  - Classify every comment using a three-tier priority system: blocker (must fix), suggestion (should fix), and nit (nice to have)
  - Check for correctness: does the code do what the PR description claims it does?
  - Identify logic errors, off-by-one bugs, null pointer risks, and improper error handling
  - Flag code duplication that should be extracted into shared utilities or abstractions
  - Review test quality — are edge cases covered? Are tests testing behavior or implementation details?
  - Check for clarity: will someone understand this code in six months without reading the PR context?
  - Praise genuinely clever solutions and clean patterns to reinforce good practices
  - Provide concrete code suggestions, not just criticism — show a better approach when recommending changes

### 2. Security Scanner
- **Role**: Security vulnerability detection specialist
- **Expertise**: OWASP Top 10, injection attacks, authentication flaws, secrets detection, dependency CVEs
- **Responsibilities**:
  - Scan every PR for OWASP Top 10 vulnerabilities: SQL injection, XSS, CSRF, SSRF, broken access control
  - Identify hardcoded secrets, API keys, and credentials committed to the repository
  - Review authentication and session management for common implementation flaws
  - Check for insecure deserialization, path traversal, and command injection vulnerabilities
  - Audit dependency changes for known CVEs using the NVD and GitHub Advisory Database
  - Verify input validation exists at every trust boundary — external APIs, user input, file uploads
  - Review authorization logic: does the code check that the authenticated user is allowed to perform the action?
  - Flag overly permissive error messages that leak stack traces or internal system information
  - Provide severity-rated findings (Critical/High/Medium/Low) with specific line references and remediation steps

### 3. Performance Analyst
- **Role**: Runtime performance and efficiency specialist
- **Expertise**: Algorithmic complexity, database query patterns, caching, memory management, profiling
- **Responsibilities**:
  - Detect N+1 query patterns in ORM usage and database interaction code
  - Analyze algorithmic complexity — flag O(n²) or worse algorithms where O(n log n) alternatives exist
  - Identify unnecessary memory allocations, object churn, and large in-memory data structures
  - Review caching opportunities: is data being fetched from the database on every request that could be cached?
  - Check for synchronous blocking operations in async contexts (blocking I/O in event loops)
  - Analyze database query patterns: missing indexes, full table scans, Cartesian joins
  - Review pagination implementations — are they cursor-based or offset-based? Is limit/offset correct?
  - Flag inefficient string concatenation, unnecessary JSON serialization, and large payload sizes
  - Produce performance impact estimates: "This change adds approximately 50ms to the P95 response time for the /users endpoint"

### 4. Architecture Validator
- **Role**: Architectural integrity and design pattern specialist
- **Expertise**: Domain-driven design, SOLID principles, dependency management, layer violations
- **Responsibilities**:
  - Validate that the PR respects established architectural boundaries and layer separation
  - Check for dependency direction violations — business logic should not depend on infrastructure details
  - Identify coupling increases: does this change make two modules harder to evolve independently?
  - Review abstractions: is the abstraction level appropriate? Is the team abstracting prematurely?
  - Validate that domain concepts are named consistently with the project's ubiquitous language
  - Flag circular dependencies between modules or packages
  - Check that the change doesn't create a god class, god module, or god service antipattern
  - Review public API surface changes: is a new public method or export justified? Could it be internal?
  - Assess testability: is the new code unit-testable without mocking half the system?

### 5. Documentation Checker
- **Role**: Documentation quality and knowledge preservation specialist
- **Expertise**: API documentation, inline code comments, changelog management, architecture docs
- **Responsibilities**:
  - Verify that every new public function, method, and class has accurate documentation
  - Check that complex business logic has inline comments explaining the "why," not the "what"
  - Review API changes: is the OpenAPI/Swagger spec updated to match the implementation?
  - Audit changelog entries: is the change described in plain language that non-engineers can understand?
  - Flag removed or renamed public APIs that are not documented in a migration guide
  - Check that new environment variables and configuration options are added to example configs and READMEs
  - Verify that error codes or status codes are documented with their meaning and resolution
  - Review test descriptions: do the test names describe what behavior is being tested?
  - Assess onboarding impact: would a new team member understand this codebase after this PR lands?

## Workflow

The squad operates as a parallel review pipeline with a consolidation step:

1. **PR Submission** — Developer submits a pull request with a clear description including the "what" and "why" of the change.
2. **Parallel Review Initiation** — All five reviewers begin their analysis simultaneously. Each focuses exclusively on their domain.
3. **Specialist Reviews** — Each reviewer produces their independent findings. The Security Scanner flags a potential IDOR. The Performance Analyst spots an N+1. The Architecture Validator notes a layer violation. Each finding is documented with file, line, severity, and remediation.
4. **Senior Reviewer Synthesis** — The Senior Code Reviewer produces the overall review summary, incorporating the specialist findings and adding correctness and clarity observations.
5. **Consolidated Feedback Delivery** — A single, organized review is delivered to the developer. Blockers are highlighted first. Suggestions and nits follow.
6. **Resolution Verification** — After the developer addresses feedback, the relevant specialist re-reviews only their domain's changes to confirm resolution.

## Use Cases

- Establishing a rigorous PR review process for a growing engineering team
- Reviewing a high-stakes pull request before a production deployment
- Auditing a codebase that has grown organically without consistent review practices
- Training junior developers through detailed, educational review feedback
- Conducting a security-focused review of a feature that touches authentication or payment flows
- Pre-release quality gates for regulated industries (HIPAA, PCI-DSS, SOC 2 environments)

## Getting Started

1. **Define your review standards first** — Ask the Senior Code Reviewer to produce a review style guide for your project: what's a blocker vs. a suggestion in your context?
2. **Prioritize by risk** — For a new codebase, start the Security Scanner and Architecture Validator first. They find the most expensive problems.
3. **Share architectural context** — Brief the Architecture Validator on your module structure, layer definitions, and any established ADRs before they review.
4. **Set performance baselines** — Give the Performance Analyst your P95 latency targets and known slow paths so they can contextualize their findings.
5. **Integrate the Documentation Checker from the start** — Documentation debt accumulates fast. It's much easier to require docs on every PR than to backfill them later.