AI access, responsibly

Welcome AI in, with seatbelts.

Open API access without controls is a CTO’s nightmare. So we built six primitives specifically for AI agents — and a security-first architecture that holds up to enterprise review.

Why this page exists

Giving AI write-access to your CRM is scary. Reasonably.

Hand an AI agent the keys to your CRM and three things might happen. Most days, nothing — it does what it was asked, the work gets done, your team moves faster. Some days, it hallucinates a confident-sounding action that’s flat wrong. Once in a blue moon, a single bad prompt cascades into a thousand bad operations and you’re restoring from backup.

The fear isn’t irrational. It’s the right reaction to a real risk. Conduyt’s job is to keep that risk bounded — so that even when an AI gets it wrong, the blast radius is small, the audit is complete, and the rollback is one query.

We built six primitives for this. Then we layered them under enterprise-grade auth and GDPR compliance.

Purpose-built for AI

Six things between your AI and your data.

Scoped API keys

Limit what each AI can see and do

Three permission tiers — read-only, write, admin. On top of those, per-domain restrictions: a key can be limited to specific modules (e.g., “Contacts and Deals only, no Users”). Add IP allowlisting (CIDR-based) and per-key expiration if you want belt-and-suspenders.

In practice: you give Claude a read-only key for analytics, a write-scoped key for one specific automation, and an admin key only to the engineer maintaining the integration. Each key’s audit log is separate.

# Create a scoped key (admin only)
curl -X POST https://conduyt.app/api/v1/api-keys \
  -H "Authorization: Bearer cdy_admin_key" \
  -d '{"scope": "read", "modules": ["contacts","deals"], "expires_in": 90}'

Dry-run mode

Preview every mutation before it runs

Append ?dry_run=true to any POST/PATCH/DELETE. The endpoint executes the validation and authorization logic, computes the result, and returns what would have changed — without persisting anything.

Use it to let an AI propose changes for human review before committing. Or to test integrations against production data without polluting it.

Currently live on 5 high-impact endpoints (bulk reassign, automation publish, smart-list updates, drip-campaign mutations, deal stage changes). Expanding to all mutating endpoints by Q3.
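For example, previewing a deal stage change (one of the live endpoints above) could look like this; the path and payload shape here are illustrative, not the documented API:

```shell
# Preview a stage change without persisting it (path and payload are illustrative)
curl -X PATCH "https://conduyt.app/api/v1/deals/d_42?dry_run=true" \
  -H "Authorization: Bearer cdy_write_key" \
  -d '{"stage": "closed_won"}'
# The response reports what would have changed; nothing is written.
```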

Sandbox flag

Full transactions, fully rolled back

Add ?sandbox=true and the entire request runs inside a Postgres transaction that gets rolled back at the end. Returns the full would-be response. Useful for AI agents simulating multi-step workflows: “what if I created this contact, enrolled them in this drip, and assigned them to this rep — what happens?”

Different from dry-run. Dry-run skips the writes. Sandbox does the writes, then undoes them. Pick the right tool for the question you’re asking.
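A sketch of the flag in use, with an illustrative endpoint and payload:

```shell
# Run a real insert inside a transaction that is rolled back server-side
# (endpoint path and payload are illustrative)
curl -X POST "https://conduyt.app/api/v1/contacts?sandbox=true" \
  -H "Authorization: Bearer cdy_write_key" \
  -d '{"name": "Jane Doe", "company": "Acme"}'
# The full would-be response comes back; the row never persists.
```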

Confirmation tokens

Two-step handshake on destructive actions

Some actions are too consequential for a preview alone. Bulk delete. Mass reassign. Account-level changes. For those, the API returns a confirmation token instead of executing. The token has a 5-minute TTL and is tied to the exact payload.

The flow: AI proposes the action → API returns a token → human (or a separate, more-trusted AI) reviews → human submits the token → action executes. Stops a single hallucinated DELETE from cascading into a real one.
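Sketched as two curl calls; the bulk-delete path, the X-Confirmation-Token header name, and the response shape are assumptions for illustration:

```shell
# Step 1: propose. Instead of executing, the API returns a token.
curl -X POST https://conduyt.app/api/v1/contacts/bulk-delete \
  -H "Authorization: Bearer cdy_admin_key" \
  -d '{"ids": ["c_101", "c_102"]}'
# assumed response shape: {"confirmation_token": "...", "expires_in": 300}

# Step 2: after review, resubmit the exact same payload with the token.
# The token expires after 5 minutes and only matches this payload.
curl -X POST https://conduyt.app/api/v1/contacts/bulk-delete \
  -H "Authorization: Bearer cdy_admin_key" \
  -H "X-Confirmation-Token: <token from step 1>" \
  -d '{"ids": ["c_101", "c_102"]}'
```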

AI action budgets

Rate limiting per key

Default is 120 requests per minute per API key, enforced via Upstash at the Edge middleware layer. A key that hits the limit gets a 429 with an X-RateLimit-Reset header.

The budget catches the runaway-loop scenario: an AI agent stuck in a retry loop or hallucinating a bulk operation gets stopped before it does real damage. Budgets are tunable per key, so a heavy-use admin key can have a higher limit and a customer-facing AI can have a lower one.
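On the client side, the sane response to a 429 is to wait out the window. A minimal sketch, assuming X-RateLimit-Reset carries a Unix epoch (check your own 429 responses to confirm the format):

```shell
# Compute how long to back off from the X-RateLimit-Reset header
reset_at=1767225660   # value read from the X-RateLimit-Reset header
now=1767225600        # $(date +%s) in a real client
wait_for=$(( reset_at - now ))
(( wait_for < 0 )) && wait_for=0
echo "rate limited: retrying in ${wait_for}s"
```

A retry loop that sleeps for wait_for seconds before reissuing the request turns a hard 429 into a graceful slowdown.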

Full audit trail

Every call logged. Fully queryable.

Every API call is logged: who called (key ID), what (endpoint + sanitized payload), when (millisecond timestamp), where (source IP), and the response status. PII is masked in the log itself — emails and phone numbers never get persisted in audit records.

90-day default retention, exportable as JSON or CSV for your SIEM. Query it with GET /admin/audit-log (admin keys only). Filter by key, date range, endpoint, or response code.
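A query might look like the following; the /api/v1 prefix and the filter parameter names are assumptions based on the filters listed above:

```shell
# Pull one key's 4xx responses for a date range (parameter names assumed)
curl -G https://conduyt.app/api/v1/admin/audit-log \
  -H "Authorization: Bearer cdy_admin_key" \
  --data-urlencode "key_id=key_7f3a" \
  --data-urlencode "from=2026-02-01" \
  --data-urlencode "to=2026-02-07" \
  --data-urlencode "status=4xx"
```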

Beyond the primitives

Security-first architecture and the boring stuff that matters.

Safety primitives are the application-layer story. Underneath them, the infrastructure has to hold up too.

Security-first architecture

We’re a startup, not a Fortune 500. We haven’t yet completed a SOC 2 audit. What we do have: scoped API keys, full audit logging, encryption at rest and in transit, GDPR-compliant data handling, and a security review process available on request for enterprise prospects. SOC 2 Type I is on the roadmap; we’ll publish the timeline once it’s locked.

GDPR endpoints

Two purpose-built endpoints for data subject rights: POST /gdpr-export returns a complete data package for a subject (machine-readable JSON), and POST /gdpr-forget triggers right-to-be-forgotten cascading deletion. Both are audit-logged.
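Sketched as curl calls; the /api/v1 prefix and the subject-identifier field are assumptions, not documented shapes:

```shell
# Export everything held on one data subject (identifier field assumed)
curl -X POST https://conduyt.app/api/v1/gdpr-export \
  -H "Authorization: Bearer cdy_admin_key" \
  -d '{"email": "subject@example.com"}'

# Right-to-be-forgotten: triggers cascading deletion, also audit-logged
curl -X POST https://conduyt.app/api/v1/gdpr-forget \
  -H "Authorization: Bearer cdy_admin_key" \
  -d '{"email": "subject@example.com"}'
```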

Data at rest, data in transit

All data encrypted at rest (Neon-managed Postgres encryption). All traffic over TLS 1.3. Database is in US-East with multi-AZ failover; EU data residency available on Enterprise. Sub-processors listed publicly at /trust/.

Honest about gaps

What we’re still building.

A safety page that doesn’t admit gaps is selling fiction. Two things we’re still working on:

Field-level encryption on PII columns. Today we rely on Neon’s at-rest encryption, which is industry-standard but doesn’t isolate sensitive fields from operators. Per-field encryption (where SSNs, financial data, etc. are encrypted with separate keys) is on the roadmap for Q3 2026.

OAuth2 client_credentials flow for AI agents. Today our scoped API keys cover the same use case, and most AI integrations work fine without OAuth. But for some enterprise SSO setups, OAuth2 machine-to-machine is the standard. Tracking it; not yet shipped.

If either of these is a blocker for your security review, talk to us. We can sometimes accelerate based on customer demand.

Your AI gets the access. You keep the controls.