Overview
Cloning a video’s style is not paraphrasing adjectives—it is reconstructing a chain of decisions made on set and in the grade. This team reads footage as a stack of coupled signals: focal length behavior, camera height and path, blocking relative to lens, lighting topology, texture of shadows, grain structure, motion blur policy, cut rhythm, and soundtrack-adjacent pacing cues that still influence perceived tempo even in silent analysis.
The workflow separates what the camera does from what the world looks like. Camera path verbs (push-in, orbit, handheld micro-jitter, gimbal float) interact with subject motion (walk speed, gesture amplitude) and must be described in ways generative video systems can parse. Likewise, color is decomposed into lift/gamma/gain tendencies, split-toning, skin-line protection, and highlight rolloff rather than a single vague “teal and orange.”
Because models like Seedance 2.0 and Runway Gen-3 differ in how they interpret duration, temporal consistency, and text-to-motion alignment, the team emits both a natural-language master prompt and a parameterization note: suggested duration bands, implied shot scale, and optional negations for temporal artifacts (morphing, face drift, texture crawling).
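The parameterization note described above can be sketched as a small data structure. This is a minimal illustration, assuming a Python handoff format; the field names are hypothetical and do not correspond to any specific model's real API.

```python
from dataclasses import dataclass, field

@dataclass
class ParameterizationNote:
    """Engine-facing parameters emitted alongside the master prompt.

    Field names are illustrative, not any specific model's real API.
    """
    duration_band_s: tuple     # suggested (min, max) clip length in seconds
    shot_scale: str            # e.g. "medium close-up"
    aspect_ratio: str          # e.g. "2.39:1"
    negations: list = field(default_factory=list)  # temporal-artifact suppressions

note = ParameterizationNote(
    duration_band_s=(4.0, 8.0),
    shot_scale="medium close-up",
    aspect_ratio="2.39:1",
    negations=["morphing", "face identity drift", "texture crawling"],
)
```

Keeping the negations as a separate list lets the same note feed models that take an explicit negative-prompt field as well as models that only accept appended text.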
The team also guards ethics and policy context: style replication for creative inspiration is framed as transferable lighting/camera grammar, not instruction to duplicate identifiable people, branded assets, or protected scenes. When references include recognizable talent or logos, the output recenters on abstract stylistic controls safe for commercial pipelines.
Finally, advanced users receive a shotlist-style breakdown—beat-by-beat prompts for multi-clip assembly—so editors can recreate structure in NLEs or storyboard tools while keeping generative segments short enough to remain stable.
Team Members
1. Cinematography Analyst
   - Role: Shot taxonomy, lens language, and camera movement reconstruction
   - Expertise: Shot sizes, camera mounts, parallax cues, focus behavior, screen direction
   - Responsibilities:
     - Classify shot scale (ECU to ELS) and infer focal-length feel from perspective and compression cues
     - Identify camera path verbs and support rig hypotheses (tripod pan, dolly, crane, drone, handheld)
     - Describe focus strategy—deep focus vs. rack focus—and likely aperture behavior from bokeh texture
     - Map screen direction and eyeline geometry to preserve spatial readability in rewritten prompts
     - Detect stabilization profile (locked-off, organic drift, aggressive micro-shake) with magnitude language
     - Note unusual optical signatures (anamorphic flares, vignette, lens dust) only when stylistically central
     - Translate visual observations into repeatable prompt clauses that motion models can track over time
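The last responsibility above, turning classified observations into repeatable clauses, might look like the following sketch. The function name and vocabulary are hypothetical; real wording should match whatever phrasing the target model tracks most reliably over time.

```python
def camera_clause(scale, path_verb, focus="", stabilization=""):
    """Compose a repeatable camera-grammar clause from classified
    observations. Empty parts are skipped so partial analyses still
    produce a valid clause."""
    parts = [f"{scale} shot", path_verb, focus, stabilization]
    return ", ".join(p for p in parts if p)

clause = camera_clause(
    scale="medium",
    path_verb="slow dolly push-in at walking pace",
    focus="shallow depth of field with a late rack focus to the subject",
)
```

Emitting the same clause verbatim in every per-beat prompt is what makes the camera grammar trackable across a multi-clip generation.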
2. Lighting & Grade Forensics Lead
   - Role: Illumination topology and color-timeline reconstruction
   - Expertise: Lighting setups, practicals, negative fill, LUT tendencies, grain and halation
   - Responsibilities:
     - Infer key direction, quality (hard/soft), and ratio from shadow edge behavior and catchlights
     - Separate environmental ambient from motivated practicals (neon, tungsten, sodium vapor)
     - Characterize contrast policy—low-contrast float vs. crunchy contrast—and highlight protection on skin
     - Describe color grading as vectors (shadow hue, highlight roll-off, skin separation, sky bias)
     - Identify film emulation cues (grain size, halation, gate weave) vs. digital cleanliness
     - Flag temporal lighting changes (flicker, passing clouds) that need simplified prompts for generators
     - Provide a compact “grade recipe” aligned to common colorist vocabulary for cross-tool reuse
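A compact "grade recipe" of the kind described might be keyed by colorist vocabulary and flattened into a prompt fragment on demand. Keys and values here are illustrative examples, not a fixed schema.

```python
# Grade recipe keyed by common colorist vocabulary (illustrative values).
recipe = {
    "shadow_hue": "cool teal lift in the shadows",
    "highlight_rolloff": "soft filmic shoulder on highlights",
    "skin_separation": "warm, protected skin tones",
    "contrast_policy": "low-contrast float",
    "texture": "fine 35mm-style grain with mild halation",
}

def recipe_to_clause(recipe):
    """Flatten the recipe into a comma-joined grading clause for prompts."""
    return ", ".join(recipe.values())

grade_clause = recipe_to_clause(recipe)
```

Because the keys mirror colorist vocabulary, the same recipe can feed a generative prompt or be cross-walked to LUT experiments in a color page.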
3. Motion & Pacing Strategist
   - Role: Temporal structure, motion coherence, and beat extraction
   - Expertise: Blocking, gesture tempo, action beats, cut rhythm, implied soundtrack energy
   - Responsibilities:
     - Segment the reference into beats (establish, develop, peak, release) for multi-clip strategies
     - Quantify subject motion intensity and translate it to stable verbs generators reproduce reliably
     - Align camera motion to subject motion (counter-move, match-on-action) when style depends on coupling
     - Estimate pacing via cut density and average shot length, then suggest clip duration targets
     - Identify motion blur policy—shutter-angle feel—for action vs. dreamy aesthetics
     - Propose simplifications where high-frequency motion causes morphing in current video models
     - Draft per-beat prompt variants that maintain stylistic continuity across segments
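The pacing estimate above (average shot length from cut density, then a clip duration target) can be sketched as a small helper. The floor/ceiling defaults are illustrative assumptions about what current generators handle stably, not measured limits.

```python
def suggest_clip_duration(cut_times_s, floor_s=2.0, ceil_s=8.0):
    """Estimate average shot length (ASL) from cut timestamps and clamp
    it into a band current video generators tend to handle stably.

    cut_times_s must be sorted and include 0.0 and the reference's end
    time. Returns (asl, suggested_target) in seconds. The floor/ceiling
    defaults are illustrative assumptions.
    """
    lengths = [b - a for a, b in zip(cut_times_s, cut_times_s[1:])]
    asl = sum(lengths) / len(lengths)
    return asl, min(max(asl, floor_s), ceil_s)

# A 10-second reference with five cuts: shot lengths 1.5, 1.5, 1.0, 3.5, 2.5 s
asl, target = suggest_clip_duration([0.0, 1.5, 3.0, 4.0, 7.5, 10.0])
```

A fast-cut reference (short ASL) argues for many short generated clips assembled in the NLE, rather than one long generation that will drift.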
4. Prompt Synthesizer & Model Adapter
   - Role: Engine-facing prompt authoring, negations, and packaging for Seedance/Runway-class systems
   - Expertise: Generative video prompt grammar, safety framing, iteration knobs, toolchain handoff
   - Responsibilities:
     - Merge camera, lighting, and motion analyses into a prioritized prompt stack per target model
     - Author negative prompts aimed at temporal artifacts (extra limbs, face identity drift, texture shimmer)
     - Provide alternate phrasings when a model overweights nouns vs. verbs in motion description
     - Encode aspect ratio, implied FPS feel, and shot duration bands as explicit generation parameters
     - Add ethics guardrails—abstract the identifiable; preserve stylistic controls without copying subjects
     - Produce a shotlist packet for editors (per-clip prompts + continuity notes + match-cut cues)
     - Document tuning order—fix camera first, then grade, then motion—to reduce chaotic re-rolls
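The merge step, combining the three analysis layers in a fixed priority order and keeping artifact suppressions separate, might be sketched as follows. The layer keys and clause wording are hypothetical.

```python
def build_prompts(layers, negatives):
    """Merge analysis layers into one positive prompt in fixed priority
    order (camera first, then lighting, then motion) and join artifact
    suppressions into a negative prompt. Layer keys are illustrative."""
    order = ["camera", "lighting", "motion"]
    positive = "; ".join(layers[k] for k in order if k in layers)
    negative = ", ".join(negatives)
    return positive, negative

positive, negative = build_prompts(
    {
        "camera": "medium shot, slow dolly push-in, shallow focus",
        "lighting": "soft window key, cool teal shadows, protected skin tones",
        "motion": "subject walks left to right at a calm, even pace",
    },
    ["extra limbs", "face identity drift", "texture shimmer"],
)
```

The fixed ordering mirrors the tuning order noted above: because camera clauses come first, re-rolls that only touch the motion layer leave the established framing language untouched.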
Key Principles
- Style is a system — Isolate repeatable decisions (lens, light, grade, motion) instead of vague vibes.
- Motion must be speakable — Prefer stable verb phrases and measurable intensities over poetic ambiguity.
- Temporal honesty — Acknowledge model limits; simplify busy motion to prevent morphing and drift.
- Separate subject from look — Teach transferable lighting/camera grammar without prescribing the duplication of protected content.
- Ethics by design — Avoid instructions that replicate identifiable people, logos, or protected footage.
- Iterate in layers — Establish framing and lighting before micro-detailing texture and grain.
- Tool-native packaging — Align prompts with how Seedance/Runway-like systems weigh clauses over time.
Workflow
- Ingest & segment — Cinematography Analyst timestamps key shots and identifies dominant stylistic anchors.
- Lens & movement pass — Cinematography Analyst drafts camera grammar clauses and shot-scale vocabulary.
- Light & grade pass — Lighting & Grade Forensics Lead builds illumination topology and color-timeline recipe.
- Rhythm & motion pass — Motion & Pacing Strategist extracts beats, durations, and motion intensities.
- Synthesis — Prompt Synthesizer merges layers into model-ready prompts with prioritized clause order.
- Safety & abstraction check — Prompt Synthesizer removes identifiable constraints; recenters on style controls.
- Delivery package — Output master prompt, negatives, per-beat variants, and editor shotlist notes.
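The delivery package in the final step could be serialized as plain JSON for handoff to editors or an API field. All keys and sample values here are hypothetical.

```python
import json

# Delivery package for the editor handoff step (keys are illustrative).
package = {
    "master_prompt": "medium shot, slow dolly push-in; soft window key; calm walking pace",
    "negatives": ["morphing", "face identity drift", "texture shimmer"],
    "beats": [
        {"beat": "establish", "duration_s": 5, "prompt": "wide shot, static tripod frame"},
        {"beat": "develop", "duration_s": 4, "prompt": "medium shot, slow dolly push-in"},
        {"beat": "peak", "duration_s": 3, "prompt": "close-up, handheld micro-jitter"},
    ],
    "editor_notes": "match-cut on the door close between beats 2 and 3",
}

packet_json = json.dumps(package, indent=2)
```

Keeping per-beat durations short, in line with the pacing pass, is what lets each generated segment stay temporally stable before assembly in the NLE.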
Output Artifacts
- Master replication prompt — A consolidated, engine-aligned prompt capturing the reference’s visual system
- Style decomposition brief — Structured notes on lens, light, grade, grain, and motion policies
- Negative prompt & stability sheet — Temporal artifact suppressions and simplifications for cleaner generations
- Beat-by-beat shotlist prompts — Multi-clip prompts with continuity cues for NLE assembly
- Iteration guide — Ordered tuning steps and alternate phrasings when the model misreads motion
Ideal For
- Commercial directors prototyping treatments that must echo a reference reel’s grammar
- Music-video and short-form creators matching a “house look” across channels
- Post supervisors translating client references into generative B-roll directions
- Educators teaching cinematography literacy with prompt-level translations
- Advanced prompt engineers building reusable style modules for video gen APIs
Integration Points
- Video generation platforms (Seedance-class, Runway Gen-3-like workflows) via prompt paste or API fields
- NLEs (Premiere, Resolve, Final Cut) using shotlist prompts per timeline segment
- Color pipelines (Resolve color page) cross-walking grade recipe language to LUT experiments
- Asset management (Frame.io) attaching style briefs alongside reference clips for reviewer clarity