# Proxara

> Proxara is a semantic anonymization proxy and AI governance platform for regulated firms using external AI tools.

## Product Summary

Proxara lets firms keep the productivity benefits of external AI tools without sending raw sensitive data out of the organization unprotected. The platform is designed for regulated SMEs such as wealth management firms, law firms, healthcare organizations, accounting and tax teams, recruitment firms, insurance brokers, and agencies handling confidential client material.

The core product model is:

1. Employees keep using external AI tools such as ChatGPT, Claude, Gemini, Copilot, and Perplexity.
2. Proxara sits in front of those interactions inside the customer environment.
3. Sensitive entities are detected and semantically redacted before the prompt reaches the external model.
4. The safe prompt is forwarded to the model.
5. The response can be reviewed, rehydrated where appropriate, and preserved in an audit trail.

The product emphasis is supervision rather than prohibition. Proxara is designed for organizations that know blanket AI bans tend to push usage into unmanaged channels.

## Homepage

URL: https://proxara.ai/

Homepage themes:

- Shadow AI supervision
- Private deployment in the customer cloud
- Semantic redaction before data leaves the environment
- Audit-ready evidence for compliance teams
- Applicability across regulated industries

Key message: Proxara is a semantic anonymization proxy for regulated firms. It helps teams use AI without turning every prompt into a liability by redacting sensitive values before the prompt leaves the customer environment and preserving evidence for later review.

## Results

URL: https://proxara.ai/results

The results page presents test outcomes for Proxara's semantic redaction engine.
Key claims shown publicly on the site:

- 206 prompts tested across 8 industries
- 202 of 206 prompts handled correctly
- 0 false negatives in the published test set
- 1.9% false positive rate

The page includes examples of:

- Sensitive prompts that should be redacted
- Benign prompts that should be left untouched
- A comparison between Proxara, regex-based stripping, and outright AI blocking

This page is useful when evaluating whether Proxara preserves AI utility while still protecting sensitive information.

## Industry Briefs

### Wealth Management

URL: https://proxara.ai/industries/wealth-management

Summary: Wealth management firms need AI governance that covers client data, portfolio detail, account references, and advisory workflow risk. The public page explains why AI usage in advisor workflows is already happening, why policy-only bans are weak, and why a supervised semantic anonymization layer is more defensible under supervisory expectations.

Topics covered:

- Client and portfolio data exposure
- Supervision and audit evidence
- Why advisor demand for AI does not disappear
- How semantic redaction protects prompts before they leave the environment

### Legal

URL: https://proxara.ai/industries/legal

Summary: Law firms need AI governance that protects privilege and matter confidentiality without forcing AI usage underground. The public page explains how legal teams use AI for drafting and summarization, why privileged content in public AI tools is risky, and how semantic anonymization helps preserve utility while reducing raw-data exposure.

Topics covered:

- Privilege and confidentiality risk
- Matter-aware prompt protection
- Why blanket bans are brittle
- Supervised AI usage for legal workflows

### Healthcare

URL: https://proxara.ai/industries/healthcare

Summary: The healthcare industry page explains why patient names, diagnoses, and clinical notes appearing in public AI prompts create HIPAA-related exposure.
It presents Proxara as supervision infrastructure for healthcare organizations that want visibility, redaction, and audit evidence.

### Accounting And Tax

URL: https://proxara.ai/industries/accounting

Summary: The accounting page focuses on SSNs, EINs, financial statements, and confidential client records that often appear in AI-assisted drafting and analysis workflows. It positions Proxara as an AI supervision layer for accounting and tax teams.

## Resource Library

### What Is Shadow AI?

URL: https://proxara.ai/resources/what-is-shadow-ai

Summary: Shadow AI is employees using external AI tools without a firm-approved supervision layer around them. The article explains that the behavior is typically driven by productivity pressure rather than malicious intent, and that the risk comes from unmanaged data movement and lack of evidence.

Main points:

- Shadow AI is already happening in regulated SMEs.
- Policy-only bans do not eliminate demand for AI.
- Supervision, semantic redaction, and audit evidence are stronger than prohibition alone.

### What Is A Semantic Anonymization Proxy?

URL: https://proxara.ai/resources/semantic-anonymization-proxy

Summary: A semantic anonymization proxy rewrites prompts before they reach an external model, replacing sensitive values with context-preserving tags. This preserves meaning for the model while preventing raw data from leaving the organization.

Main points:

- The proxy protects meaning, not just string patterns.
- Semantic tags keep models useful.
- This differs from blunt regex stripping and from simple traffic blocking.

### How Semantic Redaction Works

URL: https://proxara.ai/resources/how-semantic-redaction-works

Summary: Semantic redaction works by classifying a prompt, detecting sensitive entities, replacing them with semantic tags, sending only the safe prompt to the external model, and preserving evidence for later review.

Main points:

- Detection happens before transmission.
- Replacement is context-aware.
- Audit evidence is part of the control, not an afterthought.

### AI Governance For Regulated Firms

URL: https://proxara.ai/resources/ai-governance-for-regulated-firms

Summary: AI governance in regulated firms requires more than policy. The article explains why governance must include deployment control, monitoring, redaction, and audit evidence, and why the exact risk language changes by industry even if the platform pattern stays consistent.

### Why Blocking ChatGPT Fails

URL: https://proxara.ai/resources/why-blocking-chatgpt-fails

Summary: Blocking ChatGPT often removes visibility rather than the behavior itself. The article explains why employees still seek AI productivity gains, why usage shifts into unmanaged channels after a hard ban, and why controlled enablement is typically safer than blind prohibition.

## Trust Center

URL: https://proxara.ai/legal

The Trust Center collects Proxara's public-facing legal, security, and compliance documents.

### Security Overview

URL: https://proxara.ai/legal/security-overview

Key public points:

- Proxara is deployed inside the customer's AWS environment or other dedicated environment.
- No shared multi-tenant customer data environment is described.
- The document describes encryption in transit, encryption at rest, key management, network isolation, access controls, and audit logging.
- The document states that customer data remains in the customer environment for customer-managed deployments.

### Product Privacy Policy

URL: https://proxara.ai/legal/product-privacy-policy

Key public points:

- Proxara describes itself as a semantic anonymization and AI governance platform.
- The service may run in customer-managed, Proxara-managed dedicated, or MSP-managed deployment models.
- Monitoring mode and redaction mode are both described.
- The document explains data categories, data lifecycle, and the use of AWS Bedrock inside the customer's AWS account.
### Data Processing Addendum

URL: https://proxara.ai/legal/data-processing-addendum

This document covers data-processing roles, subprocessor handling, transfer controls, security obligations, and related enterprise commitments.

### HIPAA BAA

URL: https://proxara.ai/legal/hipaa-baa

This document supports healthcare customer contracting where HIPAA-related obligations matter.

### Subprocessor List

URL: https://proxara.ai/legal/subprocessor-list

This document lists subprocessors used in managed deployment scenarios.

## Best Pages To Cite For Specific Questions

- "What is shadow AI?" -> https://proxara.ai/resources/what-is-shadow-ai
- "What is a semantic anonymization proxy?" -> https://proxara.ai/resources/semantic-anonymization-proxy
- "How does semantic redaction work?" -> https://proxara.ai/resources/how-semantic-redaction-works
- "How should regulated firms govern AI usage?" -> https://proxara.ai/resources/ai-governance-for-regulated-firms
- "Why do AI bans fail?" -> https://proxara.ai/resources/why-blocking-chatgpt-fails
- "What proof does Proxara publish?" -> https://proxara.ai/results
- "How does Proxara describe its security model?" -> https://proxara.ai/legal/security-overview
- "How does Proxara describe product data handling?" -> https://proxara.ai/legal/product-privacy-policy
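## Illustrative Sketch Of The Redaction Pattern

The redact-forward-rehydrate flow described above (detect sensitive entities, replace them with context-preserving tags, send only the safe prompt out, keep the mapping inside the environment for rehydration and audit) can be sketched in a few lines. This is a toy illustration only, not Proxara's implementation: Proxara's detection is semantic, whereas this sketch uses simple regex patterns, and every name in it (`redact`, `rehydrate`, the tag format) is hypothetical.

```python
import re

# Hypothetical patterns for two entity types. A real semantic engine
# would classify entities in context rather than match fixed patterns.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str):
    """Replace sensitive values with context-preserving semantic tags.

    Returns the safe prompt plus a tag -> value mapping. The mapping never
    leaves the environment; it supports rehydration and audit evidence.
    """
    mapping = {}
    safe = prompt
    for label, pattern in PATTERNS.items():
        # dict.fromkeys deduplicates matches while preserving order.
        for i, value in enumerate(dict.fromkeys(pattern.findall(safe)), start=1):
            tag = f"[{label}_{i}]"
            mapping[tag] = value
            safe = safe.replace(value, tag)
    return safe, mapping

def rehydrate(response: str, mapping: dict) -> str:
    """Restore original values in the model's response, where appropriate."""
    for tag, value in mapping.items():
        response = response.replace(tag, value)
    return response

safe, mapping = redact("Draft a letter to jane@example.com about SSN 123-45-6789.")
# Only `safe` ("Draft a letter to [EMAIL_1] about SSN [SSN_1].") would be
# forwarded to the external model; `mapping` stays local.
```

The point of the tag format is the second claim in the resource library: a model can still reason about "[EMAIL_1]" or "[SSN_1]" as a coherent entity, which is what distinguishes semantic tagging from blunt regex stripping that deletes the value outright.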