Trust center
Security, privacy, and compliance
Transparency on how we operate, what we use, and how we protect your data.
Security posture
SOC 2 compliance
Not yet certified. Type I scoping in progress; we'll publish the auditor + target date here once the engagement is signed. Type II (12-month observation) follows Type I.
Encryption
AES-256 encryption at rest. TLS 1.3 for all in-transit communication.
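The in-transit claim is client-verifiable: Python's `ssl` module lets a caller refuse anything below TLS 1.3, so a handshake against an endpoint that can't negotiate 1.3 simply fails. A minimal sketch:

```python
import ssl

# Build a client context that refuses any protocol below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Any handshake through this context fails against an endpoint that
# cannot negotiate TLS 1.3, so the policy is checkable, not just asserted.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```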
Penetration testing
A third-party pen test is engaged for enterprise + regulated engagements (BAA, DPA, or stated security tier). Findings are remediated per the severity SLA in the engagement's compliance pack. Annual cadence becomes the default alongside the Type I audit; ask for the current letter under NDA.
Incident response
Best-effort same-business-day response on critical incidents during business hours. 24/7 on-call paging is available on retainer-tier engagements. Documented runbook covers escalation paths, named on-call contacts, detection signals, breach-notification timing, and communication plan — shared with prospective clients under mutual NDA before contract signature.
We respond within one business day with the NDA + the runbook package. The runbook itself is a PDF; specifics aren't published here because they reference internal escalation paths and named contacts.
Privacy and data
Data retention
Conversation logs and intermediate data: 90 days. Financial records: 7 years (per tax law).
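The 90-day window reduces to a date comparison at purge time; a sketch of the filter, with an illustrative record shape (not the production schema):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # conversation logs and intermediate data

def expired(created_at: datetime, now: datetime) -> bool:
    """True when a record has aged past the retention window."""
    return now - created_at > timedelta(days=RETENTION_DAYS)

# Example: a 100-day-old record is purgeable; a 10-day-old one is not.
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert expired(now - timedelta(days=100), now) is True
assert expired(now - timedelta(days=10), now) is False
```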
Right to delete
On written request, we delete personal data within 30 days (except where legal hold applies).
Data export
Client data is exportable on request in JSON or CSV format.
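Both export shapes can be produced from the same records with the standard library; field names here are illustrative, not the real export schema:

```python
import csv
import io
import json

records = [
    {"id": "r1", "created_at": "2025-01-02", "channel": "email"},
    {"id": "r2", "created_at": "2025-01-03", "channel": "voice"},
]

# JSON export: one document, lossless if fields later nest.
json_blob = json.dumps(records, indent=2)

# CSV export: flat rows, convenient for spreadsheets.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "created_at", "channel"])
writer.writeheader()
writer.writerows(records)
csv_blob = buf.getvalue()

assert json.loads(json_blob) == records
assert csv_blob.splitlines()[0] == "id,created_at,channel"
```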
Compliance
HIPAA
Business Associate Agreement available on request for healthcare data processing.
SOC 2
Not yet certified — Type I scoping in progress. See the Security posture section above for the current status.
State privacy laws
CCPA compliance posture under review. GDPR equivalence under review for any engagement that processes EU personal data.
AI governance
LLM providers
Anthropic Claude (Opus / Sonnet / Haiku — primary), OpenAI GPT-5 family (alternative routing), and optionally local Ollama / vLLM for air-gapped deployments. Per-engagement allowlist pinned in the deployment profile.
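The pinned allowlist reduces to a membership check at routing time. A sketch with an illustrative profile shape and model IDs (the real deployment profile differs):

```python
# Deployment profile pins which model IDs an engagement may route to.
# Profile shape and model IDs are illustrative, not the real config.
DEPLOYMENT_PROFILE = {
    "engagement": "acme-support",
    "model_allowlist": {"claude-opus", "claude-sonnet", "gpt-5"},
}

def route(model_id: str, profile: dict) -> str:
    """Reject any model not pinned in the engagement's profile."""
    if model_id not in profile["model_allowlist"]:
        raise PermissionError(
            f"{model_id} not allowlisted for {profile['engagement']}"
        )
    return model_id

assert route("claude-sonnet", DEPLOYMENT_PROFILE) == "claude-sonnet"
```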
Data flow
Client data is used for inference only. Never used to train or fine-tune models.
Prompt versioning
All system prompts are versioned, auditable, and tied to model outputs in logs.
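One way to tie outputs to prompt revisions is to log a content hash alongside every completion, so any byte-level prompt change yields a new version ID. A sketch; the log shape and IDs are illustrative:

```python
import hashlib

def prompt_version(prompt_text: str) -> str:
    """Stable short identifier for an exact system-prompt revision."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]

SYSTEM_PROMPT = "You are a support assistant. Answer only from the provided context."

log_entry = {
    "prompt_version": prompt_version(SYSTEM_PROMPT),
    "model": "claude-sonnet",   # illustrative model id
    "output_id": "out-0001",    # illustrative output id
}

# Any change to the prompt changes the version id, so logged outputs
# are auditable against the exact prompt text that produced them.
assert prompt_version(SYSTEM_PROMPT + " ") != log_entry["prompt_version"]
```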
Model guardrails
Guardrail layers: input validation, prompt-injection detection, output filtering, and tone + style compliance.
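The four layers compose as a pipeline where any layer can reject the request. A sketch with deliberately toy checks standing in for the real detectors:

```python
def validate_input(text: str) -> str:
    if not text.strip():
        raise ValueError("empty input")
    return text

def detect_injection(text: str) -> str:
    # Toy heuristic standing in for a real prompt-injection detector.
    if "ignore previous instructions" in text.lower():
        raise ValueError("possible prompt injection")
    return text

def filter_output(text: str) -> str:
    return text.replace("\x00", "")  # strip disallowed content

def check_tone(text: str) -> str:
    return text  # placeholder for tone + style compliance

def guarded(user_text: str, model_call) -> str:
    for layer in (validate_input, detect_injection):
        user_text = layer(user_text)
    out = model_call(user_text)
    for layer in (filter_output, check_tone):
        out = layer(out)
    return out

# Stub model call for illustration.
assert guarded("hello", lambda t: t.upper()) == "HELLO"
```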
Subprocessors
Who we partner with and what data they see:
| Vendor | Purpose | Data shared | BAA |
|---|---|---|---|
| Anthropic | LLM inference (primary — Claude family) | Prompts + context | On request |
| OpenAI | LLM inference (alternative — GPT-5 family) | Prompts + context | Available (Enterprise) |
| Google | LLM inference (alternative — Gemini family) | Prompts + context | Available (Workspace / Vertex AI) |
| Stripe | Billing + Checkout | Invoice metadata | N/A (PCI handled by Stripe) |
| Twilio | Voice intake (when used) | Call audio + transcripts | Available |
| Langfuse | LLM trace observability (self-hosted) | Prompt + completion traces | N/A (self-hosted) |
| Cloud hosting | Infrastructure (named per-engagement) | Encrypted data + backups | Per-engagement (BAA on request) |
Outcomes & methodology
The metrics we track and how we measure them. These are the same metrics used for internal Impact Reports.
Faithfulness (RAG)
Percentage of model outputs where all cited sources actually support the claim. Measured via human review.
Containment (Support)
Percentage of support requests resolved without human escalation. Measured via escalation events in support logs.
Task completion
Percentage of workflow tasks that complete without manual intervention. Measured via state machine logs.
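All three metrics above reduce to the same ratio over logged events; a sketch with illustrative log fields (the real schemas differ):

```python
def pct(events, passed) -> float:
    """Share of events satisfying the predicate, as a percentage."""
    events = list(events)
    return 100.0 * sum(1 for e in events if passed(e)) / len(events)

# Illustrative log rows for each metric.
rag_reviews = [{"all_sources_support": True}, {"all_sources_support": True},
               {"all_sources_support": False}, {"all_sources_support": True}]
tickets = [{"escalated": False}, {"escalated": False}, {"escalated": True}]
tasks = [{"manual_intervention": False}, {"manual_intervention": False}]

faithfulness = pct(rag_reviews, lambda r: r["all_sources_support"])
containment = pct(tickets, lambda t: not t["escalated"])
completion = pct(tasks, lambda t: not t["manual_intervention"])

assert faithfulness == 75.0
assert completion == 100.0
```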
Questions? Write to security@aagoai.com.