Design Partner Access Open
Stop sensitive data from reaching AI tools — before it leaves the workstation.
AI-Guardian blocks and redacts secrets, customer PII, and confidential text across ChatGPT, Claude, Cursor, Copilot, Gemini, and other AI surfaces—with on-device detection and metadata-first audit trails.
Private beta for security-conscious teams. Workstation and browser paths supported; scope varies by platform—validate in your pilot.
On-device detection · No prompt storage for scanning · Built for security and compliance teams
Guided pilot onboarding · Private beta for security-conscious teams · Enterprise pilot available by review
Coverage preview
- Browser extension: ChatGPT ▸ paste reviewed → BLOCKED · credential pattern
- Desktop agent: Cursor ▸ paste reviewed → REDACTED · policy rule
- Local policy engine
See AI-Guardian stop a leak before it happens
From clipboard to AI client to audit signal—without storing prompts for cloud-side scanning.
Someone copies sensitive material
An API key, token, customer record, credential, or confidential snippet lands on the clipboard—often from a terminal, ticket, or doc.
They paste into an AI surface
ChatGPT, Claude, Cursor, Copilot, Gemini, or another web or desktop AI client—the path most policies never instrument end-to-end.
Detection runs locally on the workstation
Policies and detectors execute on-device. Prompt content is not sent to AI-Guardian’s cloud for scanning.
Paste is blocked or redacted before send
The user sees a clear enforcement outcome before sensitive data can leave the machine toward an AI vendor.
Security sees metadata-first signals
Teams receive audit-style events focused on categories, policies, and surfaces—not a repository of full prompts.
Your AI tools are fast. Sensitive data moves faster.
Paste is a primary exfiltration path into ChatGPT, Claude, Cursor, Copilot, Gemini, and desktop AI clients—often dozens of times per day across browsers, IDEs, terminals, and native apps.
Most teams struggle to answer with evidence: which categories were exposed, which AI surfaces were involved, and whether policy matched the real workflow—not just the acceptable-use PDF.
Engineering & DevSecOps
Secrets move from terminals and logs into Cursor or Copilot-assisted workflows. A browser-only control never sees that path—risk still reaches the AI vendor.
Revenue & customer-facing teams
Account notes and PII get pasted into assistants to draft faster. That is contract and privacy exposure—even when the AI vendor is broadly trusted.
Legal & leadership
Confidential clauses and strategy text enter prompts for rewriting. The failure mode is lost secrecy and privilege—not “unsafe prompting.”
If you only monitor email attachments and file shares, you can miss the prompt box entirely.
Browser-only DLP has a hole: the desktop.
If the risky paste never touches the browser, browser-only tools cannot be your whole control for workstation AI usage.
Example: a developer copies credentials from a terminal log into Cursor. Browser DLP does not observe that path—but the sensitive payload can still leave the device toward an AI service.
- Native apps: Slack desktop, IDEs, terminals
- Same secret types: AWS keys, JWTs, SSH keys, API tokens
- Same failure mode: paste → send faster than policy training can react
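The paste-then-send race above comes down to pattern matching on the clipboard payload before anything leaves the machine. A minimal sketch of what on-device secret detection can look like; the regexes here are illustrative assumptions for this example, not AI-Guardian's actual detector set:

```python
import re

# Illustrative patterns only -- not the product's real detectors.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
    "ssh_private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
    "generic_api_token": re.compile(r"\b(?:sk|tok|key)[-_][A-Za-z0-9]{20,}\b"),
}

def scan_clipboard(text: str) -> list[str]:
    """Return the categories of secrets detected in pasted text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Because matching runs against the payload itself, it catches the same secret whether it came from a terminal, a log file, or a ticket.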
- Web ChatGPT / Claude in Chrome: the extension sees the tab
- Terminal → Cursor (native): the paste never touches the browser (the gap)
How AI-Guardian works
One policy posture across browser and workstation surfaces—so enforcement follows how teams actually work with AI.
Step 1
Deploy clients
Chrome extension for web AI apps; desktop agent for native clients—aligned to your rollout plan.
Step 2
Detect on-device
Sensitive patterns match locally—built-in detectors plus rules your team configures.
Step 3
Block or redact
Stop the send, strip sensitive segments, and emit metadata-first signals admins can review.
Data handling (detection)
Detection runs on the workstation. AI-Guardian is not architected to warehouse prompts for cloud-side scanning. Admin visibility emphasizes metadata—see our Privacy Policy for fields.
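A metadata-first event can carry enough for review without archiving prompts. A hypothetical sketch of such an event; the field names here are invented for illustration, and the actual schema is defined in the Privacy Policy:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event shape for illustration -- real fields are
# documented in the Privacy Policy, not defined by this sketch.
@dataclass
class AuditEvent:
    timestamp: str   # when enforcement fired
    surface: str     # e.g. "chatgpt-web", "cursor-desktop"
    category: str    # detector category, e.g. "credential"
    policy: str      # matched policy rule
    action: str      # "blocked" | "redacted" | "warned"
    # Deliberately absent: the prompt text itself.

def make_event(surface: str, category: str, policy: str, action: str) -> dict:
    return asdict(AuditEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        surface=surface, category=category, policy=policy, action=action,
    ))
```

The design choice is the missing field: reviewers get category, policy, surface, and outcome, while the sensitive content never leaves the workstation.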
Key use cases
What security and IT teams evaluate first when GenAI paste paths need governance—not an exhaustive feature matrix.
Smart redaction
Mask or remove PII and sensitive segments before content reaches an AI endpoint.
Example: email · token · internal label
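Redaction of this kind is typically a substitution pass over matched spans. An illustrative sketch for the example categories above; the patterns and placeholders are assumptions for this example, not the product's actual masking logic:

```python
import re

# Illustrative redaction pass -- not the product's real masking rules.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:sk|tok|key)[-_][A-Za-z0-9]{20,}\b"), "[TOKEN]"),
    (re.compile(r"\bINTERNAL[-_][A-Z0-9-]+\b"), "[INTERNAL-LABEL]"),  # hypothetical label format
]

def redact(text: str) -> str:
    """Replace sensitive segments with placeholders before anything is sent."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```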
Custom rules
Org-specific patterns—codenames, ticket formats, and identifiers your team defines.
Regex and policy packs
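Org-specific rules can be as simple as named regexes layered on top of the built-in detectors. A sketch using invented codenames and a hypothetical ticket format, purely for illustration:

```python
import re

# Sketch of org-defined rules -- every name and format here is hypothetical.
CUSTOM_RULES = {
    "project_codename": re.compile(r"\bPROJECT[-_](?:ATLAS|ORION)\b"),  # invented codenames
    "ticket_id": re.compile(r"\bSEC-\d{4,6}\b"),                        # invented ticket format
}

def match_custom_rules(text: str) -> list[str]:
    """Return the names of org-specific rules that match the text."""
    return [name for name, rule in CUSTOM_RULES.items() if rule.search(text)]
```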
Workstation coverage
Enforcement beyond the browser—IDEs, terminals, and desktop AI clients where paste risk concentrates.
Cursor · Copilot · terminal paths
Admin visibility
Dashboard views for events by category, policy, and surface during pilots.
Role-aware access in team tiers
Audit exports
Export metadata for reviews—fields align to what your policy defines for the pilot.
CSV / structured exports
Managed rollout
Align browser and workstation clients under one policy posture—packages and guidance for staged pilots.
Guided pilot onboarding
Role-specific briefs (CISO, DevSecOps, compliance, IT) expand on the For teams page—depth pages are rolling out separately so this page stays scannable.
Who moves first on AI paste risk
Same product—prioritized outcomes depend on who owns the decision. Deeper role pages live on the roadmap; start here or on For teams.
CISO & security leadership
Govern AI paste paths with evidence your board questions—without claiming maturity you have not shipped yet.
DevSecOps & platform teams
Meet developers where they work: terminals, IDEs, and desktop AI—without turning policy into shelfware.
Compliance & privacy
Metadata-first signals and exports sized for reviews—paired with honest limits documented in Privacy and DPA paths.
IT & AI governance
Rollouts that respect helpdesk reality—guided onboarding instead of a thousand silent installs on day one.
Built for security review
AI-Guardian is in active development with design partners. Below is what we optimize for when your team evaluates controls—not a claim of finished enterprise certification.
Browser-only AI controls vs. workstation-level enforcement
If your AI risk is limited to web sessions, a browser-centric approach may fit. If teams paste into Cursor, IDEs, terminals, desktop Slack, and native AI clients, the threat model usually requires workstation-level visibility—not every traditional control fits that shape.
| Dimension | Typical browser-first GenAI control | AI-Guardian |
|---|---|---|
| Native IDEs, terminals, desktop AI clients | — | ✓ |
| Major browser AI apps (where extension applies) | ✓ | ✓ |
| Single policy story across web + workstation paste paths | — | ✓ |
| Workstation-local detection architecture | varies | ✓ |
| GenAI paste / submit enforcement focus | varies | ✓ |
Illustrative comparison for procurement conversations—not an endorsement or critique of any named vendor. Confirm coverage and architecture in your environment during a pilot.
Pilot programs — guided onboarding
Start free on the extension. Team and enterprise pilots are intentional—we onboard your admins personally rather than leaving billing or scope ambiguous.
Commercial terms are discussed during the pilot path—not hidden behind a self-serve checkout that would underserve security buyers.
Free
$0
For individuals and early testing.
No account required for core extension flows
Team Pilot
Scoped to your team
For 5–50 users — access by application.
Guided onboarding — we align scope before credentials are issued
Enterprise Pilot
Workstation-grade programs
For regulated or security-sensitive teams.
DPA and security review support
Capability comparison
| Capability | Free | Team Pilot | Enterprise Pilot |
|---|---|---|---|
| Chrome extension | ✓ | ✓ | ✓ |
| Local detection core | ✓ | ✓ | ✓ |
| Block / warn / redact | ✓ | ✓ | ✓ |
| Admin dashboard | — | pilot | full |
| Team policies | — | ✓ | ✓ |
| CSV audit export | — | ✓ | ✓ |
| RBAC | — | core | expanded |
| MFA enforcement | — | — | ✓ |
| Desktop agent | — | by review | ✓ |
| DPA / security review | — | — | ✓ |
| SSO / SCIM | — | — | roadmap |
Details depend on platform and pilot agreement—validate during your walkthrough.
Questions security teams ask before a pilot
Answers reflect our current beta posture—validate specifics during your walkthrough.
Does AI-Guardian store prompts?
Detection is built around workstation-local execution. Admin-facing views emphasize metadata (categories, policies, surfaces)—not a prompt archive. Exact fields are documented in our Privacy Policy and evolve as the product matures.
Does detection happen locally?
Yes—the enforcement path is designed to evaluate sensitive patterns on the workstation before content reaches an AI vendor. Scope varies by client and platform; confirm in your pilot.
Which AI tools are supported?
We prioritize what your teams already use—major browser AI apps via the extension, plus desktop surfaces where the workstation agent applies (for example, Cursor-class workflows on supported platforms). We publish concrete compatibility details as they stabilize.
Is this only for browsers?
No—that is the gap we focus on. Browser-only controls miss paste paths through native IDEs, terminals, and desktop AI clients. AI-Guardian pairs a browser extension with a workstation agent so coverage follows real workflows.
How does the admin dashboard work?
During Team and Enterprise pilots, authorized admins see metadata-first events aligned to your policies—suited for review workflows rather than raw prompt replay. RBAC tightens who can view exports.
Is this ready for enterprise production?
We are in private beta / design-partner stage with guided pilots. Enterprise buyers should run their own diligence, scope surfaces explicitly, and treat roadmap items as roadmap—not finished attestations.
What is included in the beta / design partner program?
Structured onboarding, scoped pilot access, feedback loops with our team, and alignment on policies and exports. Commercial packaging is discussed explicitly rather than implied by a self-serve catalog.
Can we deploy to a small team first?
Yes—most organizations start with a bounded pilot (often tens of users), measure blocked or redacted attempts and operational friction, then widen rollout with IT.
What happens when sensitive data is detected?
Depending on policy, the paste or send can be blocked, redacted, or surfaced as a warning—before the payload reaches the AI service. Users see an explanation oriented toward remediation, not blame.
How do you compare to browser-only AI controls?
If risk is strictly in the browser, browser-centric tooling may suffice. If developers and power users paste into desktop AI paths, workstation-level enforcement belongs in scope.
How long does a pilot take to stand up?
Many teams begin in days once packages and accounts are aligned—timeline depends on identity, deployment method, and review cycles inside your org.
AI Usage Risk Checklist
A structured PDF for CISO and security leadership reviews—control prompts, stakeholder coverage, and practical checkpoints before you expand AI tools enterprise-wide.
- Stakeholder map across Security, IT, Legal, and Data
- Controls aligned to paste and submit paths—not only file shares
- Incident readiness prompts for AI-related exposure
- Vendor questionnaire you can adapt in one pass
We'll email the PDF. Optional short follow-ups on GenAI leakage controls—you can unsubscribe anytime.
Turn unmanaged AI usage into an auditable security control.
Deploy AI-Guardian with your team, measure real leakage attempts, and build an AI usage control layer before sensitive data reaches external tools.
Private beta — capabilities and coverage evolve; your walkthrough confirms fit for your stack.