AI Usage Policy

We see AI as both a powerful tool and a source of real risk. From early excitement to cautious adoption, we’ve learnt through hands-on trials where it can help and where it can harm. This policy sets out our principles, boundaries, and safeguards for using AI responsibly — balancing productivity with ethics, privacy, and respect for people and the planet.

Last reviewed: May 21, 2025

Purpose

At Harvey, we use AI tools selectively and responsibly, balancing productivity gains with ethical considerations. This policy outlines how we approach AI in our work, the options available to clients, and the safeguards we follow — and is informed by our own extensive trials and reflections, shared in our AI Usage: Risks, Ethics & Practicality article.

Scope

This policy applies to all Harvey team members and contractors using AI in client projects or internal work.

Our Approach

Purpose & Ethics

Our work is guided by our Code of Ethics and our purpose: "Unlock the potential of the new economy to leave behind a better world."

We weigh the productivity benefits of AI against potential harm — to people, the planet, and intellectual property. Decisions are made case-by-case, aiming to:

  • Reduce environmental impact (e.g. energy and water use in AI training)
  • Avoid unethical supply chains
  • Respect creators’ rights and intellectual property

Client Choice & Transparency

  • Clients choose their preferred AI usage level.
  • AI use is opt-in and recorded in project documentation.
  • Deliverables involving AI are clearly identified.

Default Setting

AI features are disabled in all tools by default. We only enable pre-approved AI tools where the benefits outweigh the risks.

AI Usage Modes

| Mode | Description | Risk Level | Examples |
| --- | --- | --- | --- |
| No AI | Disable AI in all software for the project. | No risk | AI features off in all apps |
| Researcher | Discrete, non-sensitive snippets or questions; use AI like a better search engine. | Very low | Claude Teams, ChatGPT Teams, Perplexity |
| Generator | Broader non-sensitive requests that produce outputs we refine. | Low | Claude Teams, ChatGPT Teams |
| Assistant (Third-party) | Share most non-sensitive info; AI edits or produces files in an external environment. | Moderate | Claude Code, Cursor, ChatGPT |
| Assistant (Private) | Share most non-sensitive info; AI edits or produces files in our private/local environment. | Low | Qwen 2.5 Coder, DeepSeek Coder V2 |

When We Do & Don’t Use AI

Suitable uses (high value / low risk):

  • Workshop note summaries
  • Desktop research and idea generation
  • Editing our own work or improving tone of voice
  • Wireframes or visual exploration
  • Best practice and trend research
  • Debugging or coding assistance (non-sensitive data)

Avoided uses (low value / high risk):

  • Analysing confidential client data without explicit approval
  • Recording or transcribing meetings without consent
  • Creating brand-new creative work that replaces human originality
  • Accessing or pushing to live systems or repos
  • Using unapproved AI tools on client work
  • Sharing security credentials or authentication details

Data Protection

  • Classify all inputs as sensitive or non-sensitive before sharing with AI.
  • Never input passwords, security keys, or other sensitive credentials.
  • All AI outputs are reviewed by a human before client delivery.

For more detail, see our Security & Data Protection Policy.

Training

Before using AI in client work, all team members must demonstrate an understanding of:

  • Sensitive data classification
  • This policy and our risk framework
  • Safe prompt practices
  • Tool-specific privacy settings

Training time can be logged under our Benefits & Professional Development Policy.