Responsible AI
How we apply Anthropic's Usage Policy and our own guardrails to customer work.
Principles we operate by
- AI augments human judgment; irreversible decisions always have a human in the loop.
- Every production AI call is observable: inputs, tool use, outputs, and costs are logged together (a minimal sketch follows this list).
- Models are not used to generate content that violates Anthropic's Usage Policy or applicable law.
- Customers own their data and the downstream outputs; we do not repurpose prompts or completions for training.
- We are explicit about model limitations and do not over-promise on reasoning or factuality.
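To make the observability principle concrete, here is a minimal sketch of a logged model call, assuming the official Anthropic Python SDK. The logger name and log fields are illustrative, not our production schema; tool-use events would be serialized into the same response record.

```python
import json
import logging

import anthropic

logger = logging.getLogger("ai_audit")  # illustrative logger name

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def observable_call(model: str, messages: list, max_tokens: int = 1024):
    """Call the model and log the request and response as one auditable pair."""
    logger.info("ai_call.request %s", json.dumps({"model": model, "messages": messages}))
    response = client.messages.create(model=model, max_tokens=max_tokens, messages=messages)
    logger.info("ai_call.response %s", json.dumps({
        "model": model,
        "output": [block.text for block in response.content if block.type == "text"],
        # token counts are the basis for per-call cost accounting
        "input_tokens": response.usage.input_tokens,
        "output_tokens": response.usage.output_tokens,
    }))
    return response
```

Logging the request and response under a shared prefix keeps each call reconstructable end to end, which is what makes after-the-fact review possible.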
Work we do not take
- ✕ Automated decisioning in contexts where human rights or safety outcomes are at stake, without qualified human oversight
- ✕ Mass surveillance or profiling outside a lawful, contractual basis
- ✕ Generation of deceptive content, disinformation, or impersonation
- ✕ Circumvention of provider safety systems or enterprise security controls
- ✕ Any use case inconsistent with Anthropic's Usage Policy
Human-in-the-loop patterns
For financial, GovTech, and healthcare-adjacent workflows we embed explicit approval steps before actions with real-world effect: transactions, document issuance, customer-facing communications. Claude proposes; a named human approves. The approval and the underlying context are recorded together.
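A minimal sketch of that propose/approve pattern follows. All names here (propose, approve_and_execute, the in-memory stores) are illustrative; a production system would persist proposals and audit records durably.

```python
import uuid
from datetime import datetime, timezone
from typing import Callable

# Proposals wait here until a named human acts on them.
PENDING: dict[str, dict] = {}
# Approvals are stored together with the context they were based on.
AUDIT_LOG: list[dict] = []


def propose(action: str, payload: dict, context: dict) -> str:
    """Record a model-proposed action. Nothing executes at this stage."""
    proposal_id = str(uuid.uuid4())
    PENDING[proposal_id] = {
        "action": action,
        "payload": payload,
        "context": context,  # the prompt/response trail the proposal came from
        "proposed_at": datetime.now(timezone.utc).isoformat(),
    }
    return proposal_id


def approve_and_execute(proposal_id: str, approver: str,
                        execute: Callable[[str, dict], None]) -> None:
    """A named human approves; approval and context are recorded as one unit."""
    proposal = PENDING.pop(proposal_id)  # raises KeyError if unknown or already handled
    AUDIT_LOG.append({
        **proposal,
        "approver": approver,
        "approved_at": datetime.now(timezone.utc).isoformat(),
    })
    execute(proposal["action"], proposal["payload"])  # the side effect runs only now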
Transparency with end users
When end users interact with an AI-assisted surface we build, we disclose it. Feedback channels and human-escalation paths are never hidden behind a chatbot.