Writing Effective Playbook Positions
The quality of your playbook depends on how well you write each position. Good positions lead to accurate AI reviews and actionable recommendations. Poor positions lead to confusion and missed issues.
This guide shows you how to write positions that work.
The Anatomy of a Good Position
Every effective position answers three questions:
- What should the AI look for? (The subject matter)
- What’s acceptable vs. not acceptable? (Your standard)
- What should the reviewer do if there’s an issue? (The action)
If your position doesn’t clearly answer all three, the AI will struggle to give useful results.
Example: A Weak Position
Check the limitation of liability clause.
This fails all three questions:
- What to look for? Vaguely defined.
- What’s acceptable? Unknown.
- What to do? Not specified.
Example: A Strong Position
Limitation of Liability
Our liability should be capped at the fees paid in the 12 months preceding a claim. Mutual caps are preferred.
Acceptable:
- Caps up to 24 months of fees
- Separate caps for different damage types, as long as direct damages are capped
Not acceptable:
- Unlimited liability for direct damages
- One-sided caps where only we are capped
If non-compliant: Flag for negotiation. Propose our standard mutual cap language. If counterparty insists on unlimited liability, escalate to Legal Director.
This works because the AI knows exactly what to evaluate and the reviewer knows exactly what to do.
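The three questions map naturally onto a structured record. As a mental model only (the class and field names below are illustrative, not the product's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Position:
    """Illustrative sketch of a playbook position; field names are hypothetical."""
    title: str
    guidance: str                   # what to look for, and the preferred standard
    acceptable: list[str]           # compromises that still pass review
    not_acceptable: list[str]       # red lines
    action_if_noncompliant: str     # what the reviewer should do

liability = Position(
    title="Limitation of Liability",
    guidance="Cap liability at fees paid in the 12 months preceding a claim; prefer mutual caps.",
    acceptable=[
        "Caps up to 24 months of fees",
        "Separate caps for different damage types, as long as direct damages are capped",
    ],
    not_acceptable=[
        "Unlimited liability for direct damages",
        "One-sided caps where only we are capped",
    ],
    action_if_noncompliant="Flag for negotiation; propose standard mutual cap; escalate if rejected.",
)
```

If any field would be empty, the position is not ready: that is the gap the weak example above fails to fill.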
Writing the Guidance
The guidance field is the core of each position. Here’s how to structure it:
Start with Your Preferred Position
State what you want clearly and specifically:
| Weak | Strong |
|---|---|
| “Liability should be reasonable” | “Liability should be capped at 12 months of fees” |
| “Payment terms should be acceptable” | “Payment terms should be Net 30 or longer” |
| “Indemnification should be balanced” | “Indemnification should be mutual, covering each party’s negligence and willful misconduct” |
Avoid subjective words like “reasonable,” “standard,” “appropriate,” or “acceptable” without defining what they mean.
Define Acceptable Alternatives
Your preferred position won’t always be achievable. Tell the AI what compromises are okay:
Preferred: 12 months of fees
Acceptable with approval: Up to 24 months of fees (requires business owner sign-off)
Acceptable for enterprise deals: Up to 36 months for deals over $500K annual value
This helps the AI understand that “non-compliant” isn’t black and white—there are degrees.
Define Red Lines
Be explicit about what’s never acceptable:
Never acceptable:
- Unlimited liability for direct damages
- Indemnification for counterparty’s negligence
- Governing law of jurisdictions where we have no operations
Red lines should be truly non-negotiable. If you frequently grant exceptions to a “red line,” it’s not actually a red line—revise your guidance.
Specify Actions
Tell the reviewer what to do when something doesn’t comply:
If cap exceeds 24 months: Negotiate to reduce. Propose our standard mutual cap.
If counterparty rejects: Escalate to Legal Director before accepting.
If clause is missing entirely: Add our standard liability clause.
Clear actions prevent reviewers from getting stuck.
Common Guidance Patterns
Pattern 1: Threshold-Based
Use when you have specific numeric limits:
Payment terms must be Net 30 or longer.
- Net 45 or Net 60: Acceptable
- Net 15: Flag for negotiation—propose Net 30
- Net 7 or payment on receipt: Escalate to Finance Director
Pattern 2: Presence-Based
Use when a clause must or must not exist:
The contract must include a data processing addendum (DPA) or equivalent data protection terms.
- If present and covers GDPR requirements: Compliant
- If present but incomplete: Flag gaps for negotiation
- If missing entirely: Non-compliant—add our standard DPA
Pattern 3: Mutual vs. One-Sided
Use when balance matters:
Confidentiality obligations should be mutual.
- Mutual obligations: Compliant
- One-sided (only we are bound): Non-compliant—propose mutual obligations
- One-sided (only counterparty is bound): Compliant (favors us)
Pattern 4: Escalation Ladder
Use when approval levels vary by risk:
Insurance requirements:
- Up to $1M per occurrence: Acceptable
- $1M–$2M: Acceptable with Legal review
- $2M–$5M: Requires Director approval
- Over $5M: Requires VP approval and risk assessment
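An escalation ladder is just a series of thresholds checked in order. A minimal sketch (the function name is invented, and which tier owns an exact boundary value like $1M is an assumption; adjust to your policy):

```python
def insurance_approval(per_occurrence_usd: float) -> str:
    """Map an insurance requirement to the approval level in the ladder above.

    Thresholds are checked lowest-first, so each amount falls into
    exactly one tier. Boundary ownership (<= vs <) is an assumption.
    """
    if per_occurrence_usd <= 1_000_000:
        return "Acceptable"
    if per_occurrence_usd <= 2_000_000:
        return "Acceptable with Legal review"
    if per_occurrence_usd <= 5_000_000:
        return "Requires Director approval"
    return "Requires VP approval and risk assessment"
```

Writing the ladder with non-overlapping, lowest-first tiers like this also surfaces ambiguities in the prose version, such as which tier an exactly-$1M requirement belongs to.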
Keywords: Helping the AI Find Clauses
Keywords are search terms that help the AI locate relevant clauses. Good keywords improve accuracy; bad keywords cause missed clauses or false positives.
Choosing Keywords
Include:
- The common name of the clause type (“limitation of liability”)
- Variations in phrasing (“liability cap,” “cap on liability”)
- Key phrases that appear within these clauses (“shall not exceed,” “aggregate liability”)
For a Limitation of Liability position:
- “limitation of liability”
- “limit of liability”
- “liability cap”
- “cap on liability”
- “total liability shall not exceed”
- “maximum aggregate liability”
Keyword Types
| Type | Behavior | When to Use |
|---|---|---|
| Pattern | Matches variations (liability → liabilities) | Default; use for most keywords |
| Exact | Must match exactly | Use for specific phrases that shouldn’t be varied |
| Negative | Excludes clauses containing this term | Use to filter out false positives |
Example of negative keyword: If your “Limitation of Liability” position keeps matching insurance clauses (which also mention “liability”), add “insurance liability” as a negative keyword.
Testing Keywords
Run test reviews to check:
- Does the AI find the clause you expect?
- Does it find too many irrelevant clauses?
- Does it miss clauses that use different wording?
Adjust keywords based on results.
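These checks boil down to comparing the set of clauses the AI found against the set you expected. A minimal sketch, with hypothetical clause labels and an invented helper name:

```python
def keyword_report(found, expected):
    """Summarize a test review: what the keywords hit, missed, and over-matched."""
    return {
        "hits": sorted(found & expected),
        "missed": sorted(expected - found),            # broaden or add keywords
        "false_positives": sorted(found - expected),   # add negative keywords
    }

report = keyword_report(
    found={"§8 Limitation of Liability", "§12 Insurance"},
    expected={"§8 Limitation of Liability"},
)
# report["missed"] is empty; report["false_positives"] == ["§12 Insurance"]
```

Misses suggest adding phrasing variations; false positives suggest adding negative keywords.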
Fallbacks: Giving Reviewers Language to Propose
Fallbacks are pre-approved clause language. When the AI flags a non-compliant clause, reviewers can propose your fallback instead of drafting from scratch.
Writing Good Fallbacks
Title clearly: Name each fallback so reviewers understand when to use it.
- “Standard Mutual Liability Cap - 12 Months”
- “Alternative Liability Cap - 24 Months (Requires Approval)”
- “Minimum Acceptable - Cap at Total Fees”
Include complete language: Fallbacks should be ready to insert. Don’t use placeholders like “[PARTY NAME]”—the AI will adapt terminology when generating.
Order by preference: Put your preferred fallback first, acceptable alternatives second, minimum acceptable last.
Multiple Fallbacks
For complex positions, include several options:
Fallback 1: Preferred
The total aggregate liability of either party… shall not exceed the fees paid in the twelve (12) months preceding the event…
Fallback 2: Acceptable Compromise
The total aggregate liability of either party… shall not exceed the fees paid in the twenty-four (24) months preceding the event…
Fallback 3: Final Position
The total aggregate liability of either party… shall not exceed the total fees paid or payable under this Agreement.
This gives reviewers negotiation flexibility while keeping them within approved bounds.
Testing Your Positions
Before deploying a playbook, test each position:
Accuracy Check
- Run reviews on several real contracts
- For each position, check:
  - Did the AI find the right clause?
  - Is the compliance assessment correct?
  - Does the guidance make sense in context?
Clarity Check
Ask someone unfamiliar with the playbook to review a flagged issue:
- Do they understand what’s wrong?
- Do they know what to do next?
- Can they explain it to a counterparty?
If not, simplify your guidance.
Edge Case Check
Test with contracts that are:
- Missing the clause entirely
- Written in unusual language
- From different industries or jurisdictions
See how the AI handles variations.
Maintaining Positions Over Time
Positions need maintenance as your standards evolve:
When to Update
- Negotiation outcomes consistently differ from guidance
- New risks emerge (new regulations, new business models)
- Reviewers frequently override the same assessment
- Feedback indicates guidance is unclear
How to Update
- Create a new version of the playbook (never edit a published version directly)
- Revise the position
- Test against recent contracts
- Publish when confident
Document Changes
Add notes explaining why you changed a position. Future editors will thank you.
Quick Reference: Position Checklist
Before publishing a position, verify:
- Guidance states the preferred position clearly
- Acceptable alternatives are defined
- Red lines are explicit
- Actions are specified for each scenario
- Keywords cover common phrasings
- At least one fallback is provided
- Tested on real contracts
- Another person has reviewed for clarity
Next Steps
- Creating Your First Playbook — Build a complete playbook
- What is a Playbook? — Understand the concepts
- Quick Start: Reviewing Contracts in Word — See positions in action