Copy-paste install instructions and integration recipes for AI coding agents — Cursor, Claude Desktop, Claude Code, Continue, Cline, Aider, Devin, Codex. Hand any of these to your agent and it'll add PIIvacy to your project the right way.
If you're an agent that crawled this page, these three URLs have everything you need:

- https://piivacy.dev/llms.txt (Markdown spec following the llms.txt convention)
- https://piivacy.dev/agents.md (Markdown with install + recipes specifically written for tools like Cursor and Claude Code)
- https://piivacy.dev/.well-known/agents.json (JSON manifest with API surface, modes, patterns, and integration guidance)
Paste this into any agent's chat to add PIIvacy to the current project. Works for Cursor, Claude Code, Continue, Cline, Aider — anything that can read a URL.
```
Add the npm package "piivacy" to this project. Read https://piivacy.dev/agents.md for full instructions. Wire scrub() before every LLM call and restore() on every LLM response. Use the default token mode unless I tell you otherwise.
```
Drop these into the right place for your agent. They tell the agent how to find this package and how to use it.
Add to AGENTS.md or .cursorrules at your project root:
# PII handling
This project uses the `piivacy` npm package to scrub PII before sending text to LLMs.
Whenever you write code that sends user-supplied free-form text to an LLM chat completion API
(OpenAI, Anthropic, Google, Mistral, OpenRouter, etc.), wrap the input with `scrub()` and the
response with `restore()`. The full agent guide is at https://piivacy.dev/agents.md.
Minimal integration:
```js
import { scrub, restore, createSession } from 'piivacy';
const session = createSession();
const { text } = await scrub(userInput, session);
const reply = await llm.chat({ messages: [{ role: 'user', content: text }] });
return restore(reply.content, session);
```
Don't add `piivacy` for server-to-server calls with no user-supplied input,
embedding generation, or function-calling with structured arguments only.
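To make the scrub/restore contract concrete, here is a self-contained sketch of how token-mode substitution behaves. This is an illustration, not the piivacy source: the real `scrub()` covers many more PII categories, and the `<EMAIL_1>` token format here is invented for the sketch.

```js
// Illustrative sketch of the token-mode contract (NOT the piivacy internals).
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function sketchScrub(text, session = { map: {}, counter: 0 }) {
  const out = text.replace(EMAIL, (match) => {
    const token = `<EMAIL_${++session.counter}>`;
    session.map[token] = match; // remember the original so restore() can swap it back
    return token;
  });
  return { text: out, session };
}

function sketchRestore(text, session) {
  return text.replace(/<EMAIL_\d+>/g, (token) => session.map[token] ?? token);
}

const { text, session } = sketchScrub('Mail me at jo@example.com');
// The LLM only ever sees the token...
const reply = `Sure, I will email ${text.match(/<EMAIL_\d+>/)[0]} today.`;
// ...and because the session here is plain data, it survives a JSON round
// trip (Redis, DB, cookie) if scrub and restore run in different processes.
const revived = JSON.parse(JSON.stringify(session));
const restored = sketchRestore(reply, revived);
// restored === 'Sure, I will email jo@example.com today.'
```

The point of the sketch is the shape of the contract: whatever scrubbed the text produces the session, and whoever restores must hold that same session.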
Add to CLAUDE.md or AGENTS.md in the project root. Same content as the Cursor block above. Claude Code reads both filenames.
```json
{
  "systemMessage": "When working on code that sends user input to an LLM, install the `piivacy` npm package and wrap user input with scrub() before the LLM call and restore() on the response. Full guide: https://piivacy.dev/agents.md"
}
```
Settings → Custom Instructions:
For any LLM-call code you write, use the `piivacy` npm package to scrub PII from prompts and restore from responses. Reference: https://piivacy.dev/agents.md
```sh
aider --read https://piivacy.dev/agents.md
```
Add to your ~/.codex/AGENTS.md or pass via prompt:
Use the `piivacy` npm package for PII scrubbing in LLM call sites. Guide: https://piivacy.dev/agents.md
Most cloud agents read project READMEs and root-level Markdown files. Add an AGENTS.md to your repo root containing the Cursor block above. The agent will pick it up.
Just npm install piivacy. Pure ESM. No build step. Zero runtime dependencies.
```sh
npm install piivacy
```
The package ships a 23 MB names table built from US public-domain sources for realistic-mode name redaction. Agents that don't use realistic mode never load that table.
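That deferral can be pictured as a cached load-on-first-use. A sketch under stated assumptions: the real package's data file path and loader are internal, so a tiny stub stands in here to keep the example self-contained.

```js
// Sketch of load-on-first-use (NOT the package source): the heavy names
// table is only materialized when realistic mode first asks for it.
let namesTable = null;

async function getNamesTable() {
  if (!namesTable) {
    // The real package would dynamically import its bundled data file here;
    // a stub resolves a tiny array so the sketch runs standalone.
    namesTable = await Promise.resolve(['Avery', 'Jordan', 'Riley']);
  }
  return namesTable; // cached after the first call
}

// Token mode never calls getNamesTable(), so the 23 MB table is never loaded.
```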
A few gotchas agents commonly hit:

- Always `await` scrub(): it's async because realistic-mode fakes can be async-generated by adapters.
- Don't call restore() in places where the session isn't available: restore needs the session that scrubbed the input. Persist the session (Redis, DB, encrypted cookie) if your scrub and restore happen in different processes.
- Don't add `piivacy` to non-LLM-call code paths: it's specifically for the boundary between user-supplied text and an LLM provider. Server-internal text doesn't need it.
- The secrets, financial, and identifiers categories can never be pass-through. If you need that for some weird reason, ask the user; don't bypass.

Full TypeScript-style signatures follow. The package itself ships plain JS (no .d.ts yet; types are coming).
```ts
// Core
async scrub(text: string, session?: Session, opts?: ScrubOpts): Promise<{ text: string, session: Session }>
restore(text: string, session: Session): string
createSession(opts?: { ttlMs?: number, nameAdapter?: NameSubstitutionAdapter }): Session
isExpired(session: Session): boolean
registerSecret(session: Session, value: string, label?: string): boolean
listRedactions(session: Session): RedactionItem[]

// Pluggable patterns
registerPattern(entry: PatternEntry): void
unregisterPattern(label: string): boolean
listPatterns(opts?: ResolveOpts): PatternMeta[]

// Modes
presets: { maximumRedaction, naturalConversation, localSearch, testFriendly }

// BYO-LLM helpers (you provide the LLM call)
buildPiiCheckPrompt(scrubbedText: string, opts?: { labels?, minConfidence? }): { system, user }
parsePiiCheckResponse(rawText: string): { issues, parseError? }
applyPiiCheckIssues(session: Session, issues: Issue[], opts?: { minConfidence? }): number
buildScrubIntentPrompt(text: string, opts?: { categories? }): { system, user }
parseScrubIntentResponse(rawText: string): { decisions, reason, parseError? }
applyScrubIntent(decisions: Record<string, "redact" | "preserve" | "synthetic">, baseOpts?: ScrubOpts): ScrubOpts

// Adapters (sub-imports)
import { OpenRouterAdapter } from 'piivacy/adapters/openrouter'
import { OllamaAdapter } from 'piivacy/adapters/ollama'
import { WebLLMAdapter } from 'piivacy/adapters/webllm'
```
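To make the BYO-LLM data flow concrete, here is the round trip with hand-rolled stand-ins. Only the shapes mirror the real helper signatures above; the bodies, the session fields, and the prompt wording are invented for illustration, and you supply the LLM call in the middle.

```js
// Stand-ins mirroring the BYO-LLM helper signatures (bodies are invented).
function buildPiiCheckPromptSketch(scrubbedText) {
  return {
    system: 'Report any PII left in the text as JSON: [{"value","label","confidence"}]',
    user: scrubbedText,
  };
}

function parsePiiCheckResponseSketch(rawText) {
  try {
    return { issues: JSON.parse(rawText) };
  } catch (err) {
    return { issues: [], parseError: String(err) };
  }
}

function applyPiiCheckIssuesSketch(session, issues, { minConfidence = 0.5 } = {}) {
  const kept = issues.filter((i) => i.confidence >= minConfidence);
  for (const i of kept) session.extraRedactions.push(i); // record what to redact next pass
  return kept.length; // how many issues were applied
}

// The flow: build the prompt, make *your* LLM call, parse, apply.
const checkSession = { extraRedactions: [] };
const prompt = buildPiiCheckPromptSketch('Call me at <PHONE_1> or the office line 555-0199');
// Pretend this JSON came back from your own LLM call using prompt.system / prompt.user:
const raw = '[{"value":"555-0199","label":"phone","confidence":0.9}]';
const { issues } = parsePiiCheckResponseSketch(raw);
const applied = applyPiiCheckIssuesSketch(checkSession, issues, { minConfidence: 0.7 });
// applied === 1: the leaked office number was caught and recorded
```

The design point is that the helpers never talk to a provider themselves: they only build prompt strings and interpret responses, so any model behind any API can sit in the middle.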
The live demo on the home page lets you paste any text and watch all three modes scrub it. The same package runs server-side, so if you wire `piivacy` into your own code, the output should be identical.