Agent threats live at the semantic layer — which tools got called, which files got read, whether behavior matches stated intent. That's only visible in-process. Sentinel ships as a hooks-based SDK that enforces deterministic security checks at the moments that matter.
Every existing safety tool is either locked to one vendor or built on the same AI it's meant to protect. Nobody has built an independent, cross-vendor, deterministic agent security layer. The company that does owns the governance category.
No vendor's safety layer can compare intent to action. The platform never sees what the user asked for.
Each vendor's safety ends at its own session boundary. Anything outside is invisible — and exfiltratable.
No platform sees both halves. The attack lives in the seam between vendors that don't talk to each other.
Sentinel ships policy-less out of the box. No YAML to write. No rules to configure. Behavioral baselines do the work — Sentinel learns what normal looks like and flags what isn't. For teams that want explicit control, a single YAML file adds custom policy on top of the baseline. But the default state is zero configuration beyond install.
Hooks block, allow, or guide. No probabilistic classifiers. No LLM judgment calls. Deterministic enforcement at every checkpoint — the only architecture that can make a hard security guarantee.
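As a rough illustration of what "deterministic" means here, the sketch below evaluates a proposed action with nothing but plain pattern rules: same input, same verdict, every time, with no model call anywhere. The types, rule list, and function names are hypothetical, not Sentinel's internals.

```ts
// Illustrative sketch only: a deterministic pre-execution check.
// The rule list and names here are hypothetical, not Sentinel's API.
type Verdict = 'allow' | 'block' | 'guide';

interface ProposedAction {
  type: 'file_read' | 'file_write' | 'network_request';
  target: string; // file path or URL
}

// Hypothetical sensitivity rules: plain patterns, evaluated the same way every time.
const blockedPatterns = [/\.env$/, /\.ssh\//, /^\/etc\//, /id_rsa/];

function evaluate(action: ProposedAction): Verdict {
  // No LLM, no classifier score, no probability threshold:
  // the same input always produces the same verdict.
  if (blockedPatterns.some((p) => p.test(action.target))) return 'block';
  if (action.type === 'network_request') return 'guide'; // e.g. require confirmation
  return 'allow';
}
```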
Sentinel runs entirely in-process. No data leaves your environment. No external API calls. No cloud dependency. BYOK — your team pays infrastructure costs directly to your LLM providers. Sentinel never touches your keys, your credentials, or your agent data.
Enterprise security teams won't trust SaaS with security data at this stage. We know. Sentinel was built for that reality from day one — no hosted component, no data retention, no sub-processor disclosures required.
No code required to understand this section. If your organization is deploying AI agents in production — Cursor, Claude Code, internal LangChain agents, MCP tools — here's what changes with Sentinel installed.
Agents compromised by prompt injection read .env files, SSH keys, and production secrets — then POST them externally. Sentinel blocks the file read before it executes. The agent never sees the credential, so there is nothing to exfiltrate.
Agent A reads credentials. Agent B exfiltrates 30 seconds later. No single vendor sees both halves. Sentinel monitors all agents from one layer, regardless of vendor, and correlates behavior across the fleet.
"Summarize this code" becomes "delete the source directory." Sentinel scores every action against the declared task. When alignment drops, enforcement kicks in — before the destructive action runs.
Every agent action is logged to an append-only, SHA-256 hash-chained audit trail. Structured for SOC 2, GDPR, HIPAA, and EU AI Act. When the auditor asks "what did your agents do," you hand them a report — not a shrug.
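The hash-chain construction itself is simple. The sketch below shows the general idea in TypeScript using Node's built-in crypto module; the entry shape is hypothetical rather than Sentinel's actual record format. Each record embeds the SHA-256 of the previous record, so altering any historical entry breaks every hash after it.

```ts
// Illustrative sketch of an append-only, SHA-256 hash-chained log.
// Entry shape is hypothetical; it is not Sentinel's actual record format.
import { createHash } from 'node:crypto';

interface AuditEntry {
  timestamp: string;
  agentId: string;
  action: string;    // e.g. "file_read src/index.ts"
  prevHash: string;  // hash of the previous entry
  hash: string;      // SHA-256 over this entry's fields plus prevHash
}

const chain: AuditEntry[] = [];

function append(agentId: string, action: string): AuditEntry {
  const prevHash = chain.length ? chain[chain.length - 1].hash : 'GENESIS';
  const timestamp = new Date().toISOString();
  const hash = createHash('sha256')
    .update(`${timestamp}|${agentId}|${action}|${prevHash}`)
    .digest('hex');
  const entry = { timestamp, agentId, action, prevHash, hash };
  chain.push(entry);
  return entry;
}

// Verifying the chain: recompute each hash and check the links.
function verify(): boolean {
  return chain.every((e, i) => {
    const expectedPrev = i === 0 ? 'GENESIS' : chain[i - 1].hash;
    const recomputed = createHash('sha256')
      .update(`${e.timestamp}|${e.agentId}|${e.action}|${expectedPrev}`)
      .digest('hex');
    return e.prevHash === expectedPrev && e.hash === recomputed;
  });
}
```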
On-premises. In-process. BYOK. No data leaves your environment. No hosted component. No SaaS trust questions. No sub-processor disclosures. Your infrastructure, your keys, your control. Sentinel installs as an npm package and configures from a single YAML file — or runs policy-less with behavioral baselines active from minute one.
EU AI Act: automatic logging required for high-risk AI. Penalty: up to €35M or 7% of global revenue.
Announced at RSA 2026. Agent security is now a board-level priority — the window to move first is closing.
Claude's safety layer knows nothing about what Cursor did five minutes ago. Each vendor protects their own surface — and most enterprises run more than one agent. Cross-vendor coordination attacks are invisible to existing tools. Sentinel monitors all agents from a single layer.
The dominant approach runs an LLM to classify whether a prompt is dangerous. A prompt injection that gets through the agent also gets through the classifier. You're using the compromised surface to evaluate itself. Sentinel has no LLM in the monitoring pipeline. Deterministic enforcement cannot be prompt-injected.
Sandbox tools give agents isolated execution environments. They cannot tell you that the agent inside is doing something different from what the user asked. A monitoring tool is a security camera. Sentinel's pre_execution hook is a locked door.
Traditional security products require manual policy configuration. Policies get out of date. They require constant maintenance. They create friction developers route around. Sentinel ships policy-less by default — behavioral baselines do the work instead of manual policy management.
Install the SDK from npm. Fully local — your data never leaves your environment. No cloud signup. No procurement cycle.
```bash
# Install
npm install @tuent/sentinel
```

```js
// Initialize — policy-less, behavioral baseline active
const sentinel = await Sentinel.init('agent-id')
```
25 built-in sensitivity rules block credentials, SSH keys, and system files automatically. Behavioral baselines build over 30 days. No YAML required.
```js
// Sentinel learns what normal looks like.
// No policy file. No configuration. Just install and run.
// .env, .ssh, /etc — blocked automatically.

// Optional: add explicit policy when you want it
const sentinel = await Sentinel.fromPolicy('.sentinel.yaml')
```
Optional — when your team wants explicit control:
```yaml
# .sentinel.yaml — single file, full configuration
agent:
  id: coding-agent
  role: AI Coding Assistant

policy:
  allow:
    actions: [file_read, file_write]
    targets: ["src/**", "docs/**"]
  # .env, SSH, /etc — blocked automatically.

enforcement:
  restrictAfter: 2
  quarantineAfter: 3
```
Hook into any checkpoint. Sentinel evaluates the action before it executes. Violations escalate automatically: restricted at 2, quarantined at 3.
```js
sentinel.on(agentId, 'pre_execution', (ctx) => {
  // ctx.evaluationResult → allow | block | guide
  // Behavioral baseline handles the defaults.
  // Add hooks for application-specific policy.
})
```
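A hypothetical usage sketch, building on the snippet above: route each verdict to your own alerting or review path. The helper functions are simple placeholders, not part of the SDK.

```ts
// Hypothetical usage: reacting to the verdict from the hook above.
// notifySecurityChannel and logForReview are placeholders, not Sentinel APIs.
const notifySecurityChannel = (agentId: string, ctx: unknown) =>
  console.warn(`[sentinel] blocked action for ${agentId}`, ctx);
const logForReview = (agentId: string, ctx: unknown) =>
  console.info(`[sentinel] off-baseline action for ${agentId}`, ctx);

sentinel.on(agentId, 'pre_execution', (ctx) => {
  switch (ctx.evaluationResult) {
    case 'block':
      // The action never executes; repeated violations escalate automatically
      // (restricted at 2, quarantined at 3, per the enforcement config above).
      notifySecurityChannel(agentId, ctx);
      break;
    case 'guide':
      // Allowed but off-baseline: route to a human review queue.
      logForReview(agentId, ctx);
      break;
    default:
      // 'allow': proceed normally.
      break;
  }
})
```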
Three things enterprises actually do today to mitigate agent risk — and where each one stops working.
| | Claude Auto Mode | Sandbox (Docker, E2B) | LLM Classifiers (Lakera, etc.) | Sentinel |
|---|---|---|---|---|
| What it does | Ask-before-acting approval flow within one vendor's agent | Isolates agent execution to limit blast radius | Scans prompts and outputs for malicious content | Wraps every agent action with deterministic hooks, cross-vendor |
| Cross-vendor | No — Claude only | Partial — per-container | Partial — per-model | Yes — one SDK, all agents |
| Sees intent drift | No — no comparison of intent to action | No — blind inside the sandbox | Partial — prompt-level only | Yes — scores every action against declared task |
| Blocks before execution | Sometimes — approval fatigue bypasses it | No — contains damage after the fact | No — scans at submission, not runtime | Yes — pre_execution hook fires before the action runs |
| LLM in security layer | Yes — LLM polices itself | No | Yes — inherits prompt injection surface | No — fully deterministic |
| Behavioral baseline | No | No | No | Yes — per-agent, 30-day rolling window |
| Deployment | Built-in (vendor lock-in) | Infra team manages containers | SaaS API — data leaves your environment | npm install. On-prem. BYOK. 10 minutes. |
| Configuration | Vendor-defined, limited customization | Container policies, network rules | Hundreds of rules, manual maintenance | Policy-less by default. Optional YAML. |
| Runtime cost | Tokens for approval flow | Compute per container | Tokens per classification call | $0 — no external API, no tokens |
| Audit trail | Vendor logs only | Container logs — unstructured | Classification logs | Append-only, SHA-256 hash-chained. SOC 2, EU AI Act. |
Sentinel's enforcement logic sits on top of a complete behavioral observation engine — session classification, deviation detection, rolling baselines, cross-vendor data model. That engine was production-tested on human behavior for months before it ever touched an agent. Porting it took two weeks.
That's the moat. Not speed. Not a team. An observation foundation that any company entering this category has to build before they can ship their first line of enforcement logic. By the time they have it, Sentinel has 12 months of behavioral data inside customer environments.
This is capability traction, not commercial traction. Commercial traction is what we're raising to build next: a design-partner cohort of five named enterprise security and AI-platform teams deploying agents in production.
We spent months building a system that watched developer filesystem activity and rendered behavior as a living data structure. We thought we were building a productivity tool.
A semiconductor executive deploying AI at scale interrupted a demo fifteen minutes in. The same observation layer would work on agents — and no one was building it. Companies that rolled out Cursor or Claude Code had thousands of agents in production with no behavioral monitoring. The existing tools were either vendor-locked, LLM-based, or blind to intent.
"The same observation layer would work on AI agents — and no one is doing it."// The founding moment
Best friends since high school. Second venture together. Roles swapped. Charlie built the core codebase — hooks engine, evaluation pipeline, framework adapters — in three weeks. James leads GTM, business development, and partnership strategy. Same team, different seats, sharper the second time.
"We're not asking you to bet on potential. We're asking you to bet on the second rep."
Enterprise security is bought top-down: CISO mandate, procurement cycle, six-month deployment. That requires the problem to already be board-acknowledged. It isn't yet — not for AI agents.
The way this market gets built is bottom-up. A developer installs Sentinel, hooks their agent, and has enforcement running in 10 minutes. They demo it in a PR review. The team lead asks for shared visibility. The VP Engineering asks for SSO and compliance exports. The enterprise deal closes itself — and by then, Sentinel has 12 months of behavioral baselines no competitor can replicate without starting over. Sentinel is at step zero of this motion. The free SDK is the entry point.
The developer: npm install. Hooks their agent. Enforcement running in 10 minutes. Free, no account. Policy-less — behavioral baselines active immediately.
The team lead: asks for fleet visibility. Alerts in Slack. One dashboard across all agents.
The VP Engineering: SSO, compliance exports, policy repository, SLA. Procurement opens. Deal closes itself.
Customers cover their own infrastructure costs via bring-your-own-key. Clear data disclosure and LOI required. Minimum 1 month for meaningful evaluation — enterprise cycles can extend to 6 months. Pricing flexibility matters more than revenue at this stage.
Sales cycles of up to 6 months are typical for enterprise security products. The free SDK compresses the bottom-up adoption cycle. By the time procurement opens, Sentinel has months of behavioral data the buyer can't get from a fresh install of a competitor.
Director of Security Engineering or Head of AI Platform at a company deploying AI agents in production. Trigger event: an agent did something unexpected, unauthorized, or destructive — and leadership wants to know how to prevent it.
Sentinel gets more accurate with every day it monitors an agent. Behavioral baselines built over 30 days catch deviations a new deployment misses. Switching means losing months of behavioral history — and starting blind again.
EU AI Act deadline is August 2026. NIST frameworks are coming. Companies running Sentinel now arrive at compliance with a year of audit history already built — for free.
LLM-based competitors cannot remove the LLM from their pipeline without rebuilding their product. Sentinel's deterministic hooks were designed in from day one. That's not a feature gap — it's structural.
The breach window is open. 88% of orgs already report incidents. Standards don't exist. Monitoring is sparse. Companies who deploy Sentinel now build 12+ months of behavioral baselines before compliance becomes mandatory.
EU AI Act automatic logging requirement activates. Penalty: up to €35M or 7% of global revenue. Sentinel customers hand auditors structured compliance reports. Everyone else scrambles.
NIST standards formalize. Market consolidates. Enterprise buyers standardize on 2–3 vendors. Developer-first tools with the deepest behavioral data win. Baselines become the product.
The breach that sets the precedent. One high-profile AI agent incident changes every procurement conversation permanently. "Why didn't you have monitoring?" has no good answer.
Free while we build the design-partner cohort. No tiers. No contracts. Direct founder access.