Securing the Thinking Machines: Manifold’s $8M Bet on Protecting Autonomous AI Agents at the Endpoint
Autonomous AI agents are moving from labs into everyday enterprise workflows. A fresh $8 million injection into detection and response promises to make that transition safer — if architects and operators rethink security for a new class of autonomous software.
The new attack surface no one was built for
Enterprises are adopting autonomous AI agents that can read email, negotiate contracts, order supplies, remediate infrastructure and automate workflows across internal systems. These agents promise productivity leaps by moving decisions and actions closer to the data and people who need them. Yet that same autonomy creates an unfamiliar threat model: software that reasons, chooses, composes other services and persists state without traditional user-driven telemetry.
Manifold’s recent $8 million raise to build an AI detection and response platform targeted at autonomous agents running on enterprise endpoints signals a shift in how the security community must think about this emerging class of software. The problem is no longer only about protecting APIs or hardening models in the cloud; it is about monitoring and controlling intelligent processes that can span endpoints, cloud services and third-party APIs while acting with varying degrees of autonomy.
Why endpoints matter again — but differently
Endpoints were once the frontline of traditional antivirus and EDR strategies. Those defenses were designed around predictable binaries and signature patterns. Autonomous agents change the game in three ways:
- Behavioral plasticity: Agents can adapt behavior by calling external models, chaining prompts and changing strategies based on outcomes. Signature-based detection is ineffective against such fluid tactics.
- Hybrid execution: Workflows cross the boundary between local execution and cloud APIs. Sensitive data may never leave an endpoint as raw text, yet it can be reconstructed from a series of metadata calls, API parameters and derived outputs.
- Delegated authority: Agents may be granted limited privilege to act on behalf of users or services. That delegation reduces friction but amplifies the consequences when an agent behaves maliciously or incorrectly.
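Delegated authority is easiest to reason about when grants are explicit and default-deny. The sketch below is purely illustrative — the `Delegation` record and `is_permitted` check are hypothetical names, not any vendor's API — but it shows the shape of scoping what an agent may do on a user's behalf:

```python
from dataclasses import dataclass

# Hypothetical sketch of scoped delegation: an agent may only perform
# actions explicitly granted, within explicit caps. Names are illustrative.

@dataclass(frozen=True)
class Delegation:
    principal: str              # user or service the agent acts for
    allowed_actions: frozenset  # e.g. {"read_email", "order_supplies"}
    max_spend_usd: float = 0.0  # cap for purchasing actions

def is_permitted(grant: Delegation, action: str, spend_usd: float = 0.0) -> bool:
    """Deny by default: an action must be explicitly delegated and within caps."""
    if action not in grant.allowed_actions:
        return False
    if spend_usd > grant.max_spend_usd:
        return False
    return True

grant = Delegation("alice@example.com",
                   frozenset({"read_email", "order_supplies"}),
                   max_spend_usd=500.0)

print(is_permitted(grant, "order_supplies", spend_usd=120.0))  # True
print(is_permitted(grant, "wire_transfer"))                    # False
```

The default-deny posture matters: when an agent misbehaves, the blast radius is bounded by what was granted, not by what the underlying user account could do.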
Protecting these endpoints requires a new blend of runtime visibility, context-aware policies and real-time response — the very problems Manifold aims to address.
Detection: from signatures to intent-aware telemetry
Detecting misbehavior by autonomous agents demands signals that capture intent and flow, not just anomalous system calls. A well-rounded detection strategy will likely include:
- Process and API tracing: Mapping sequences of calls — which services were summoned, what data was requested, which models or chains were invoked — to reconstruct an agent’s plan.
- Semantic monitoring: Analyzing the content and role of inputs and outputs to spot risky patterns, such as repeated requests for credential-like tokens, unusual exfiltration-like transformations, or attempts to bypass policy-relevant redactions.
- Provenance and lineage: Tracking where data originated and how it was transformed across agent steps to detect suspicious recomposition or aggregation that could enable leakage or inference attacks.
- Behavioral baselining: Modeling typical agent behaviors for specific roles and detecting deviations that indicate compromise, misuse or overreach.
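To make the semantic-monitoring idea concrete, here is a minimal sketch, under assumed event and pattern definitions, of flagging an intent-level pattern — repeated credential-seeking requests — that no single event in the trace would trigger on its own:

```python
import re
from collections import Counter

# Illustrative only: scan an agent's call trace for credential-seeking
# requests and flag tools that are probed repeatedly. The event schema
# and regex are assumptions for this sketch, not a product's telemetry format.

CREDENTIAL_PATTERN = re.compile(r"(api[_-]?key|password|secret|token)", re.IGNORECASE)

def flag_credential_probing(trace, threshold=3):
    """Count credential-like requests per tool; flag tools at or above threshold."""
    hits = Counter()
    for event in trace:  # each event: {"tool": ..., "request": ...}
        if CREDENTIAL_PATTERN.search(event["request"]):
            hits[event["tool"]] += 1
    return {tool: n for tool, n in hits.items() if n >= threshold}

trace = [
    {"tool": "vault", "request": "read api_key for billing"},
    {"tool": "vault", "request": "read password for admin"},
    {"tool": "search", "request": "quarterly sales numbers"},
    {"tool": "vault", "request": "export secret rotation list"},
]
print(flag_credential_probing(trace))  # {'vault': 3}
```

A real system would operate on richer semantics than regexes, but the principle is the same: the detection unit is the sequence, not the individual call.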
These are not simple telemetry problems; they require instrumenting agents, endpoints and the services they call in a way that preserves privacy while enabling enforcement.
Response: containment without breaking workflows
When an agent is suspected of risky behavior, the response must be immediate yet proportionate. Traditional playbooks built on outright shutdowns and blanket quarantines will break business processes and erode trust in automation. Effective response mechanics include:
- Graceful containment: Isolating the agent instance, throttling outbound calls, and switching sensitive operations to safe tenants or stubbed endpoints to prevent damage while preserving non-sensitive functionality.
- Step-back and rewind: Rolling back or annotating recent actions taken by an agent so downstream systems and humans can reverse or review changes.
- Policy-driven overrides: Enforcing context-aware policies that can, for example, deny data access for specific data classes or require escalation for high-risk actions.
- Forensic playback: Recreating the chain of decisions in a human-readable timeline for audit and remediation without exposing raw secrets.
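The options above form a ladder, not a binary. A minimal sketch of that graduation, with illustrative risk thresholds and response names (none of which come from any specific product):

```python
from enum import Enum

# Hypothetical sketch: map a risk score to the least disruptive response
# that still contains the behavior. Thresholds are assumptions for illustration.

class Response(Enum):
    ALLOW = "allow"
    THROTTLE = "throttle_outbound_calls"
    ISOLATE = "isolate_instance"        # e.g. route sensitive ops to stubbed endpoints
    ESCALATE = "require_human_review"

def choose_response(risk_score: float, touches_sensitive_data: bool) -> Response:
    if risk_score < 0.3:
        return Response.ALLOW
    if risk_score < 0.6:
        return Response.THROTTLE
    # High risk on sensitive data goes to a human; otherwise contain in place.
    return Response.ESCALATE if touches_sensitive_data else Response.ISOLATE

print(choose_response(0.7, touches_sensitive_data=True).value)  # require_human_review
```

The design choice worth noting is that containment and escalation are distinct outcomes: isolation preserves non-sensitive functionality, while escalation buys human judgment for the cases where automation should not decide alone.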
Response mechanisms must be architected for speed and traceability. The goal is not only to stop harm but to maintain the continuity of business operations and preserve the ability to learn from incidents.
Design trade-offs and product challenges
Building a platform that detects and responds to autonomous agents at endpoints surfaces thorny trade-offs.
- Privacy vs. visibility: Collecting the telemetry needed for robust detection can expose sensitive content. Solutions will need local aggregation, differential privacy, or encrypted telemetry channels to strike the balance.
- Latency vs. depth: Deep semantic analysis can be computationally expensive. Architects must decide which signals are analyzed locally in real time, which are sampled, and which are analyzed asynchronously.
- Standardization vs. flexibility: The agent ecosystem is heterogeneous. A successful product must support multiple agent frameworks and orchestration patterns while enabling consistent policy enforcement.
- Usability vs. safety: Too many false positives will frustrate users and stall adoption, while false negatives leave real risk undetected. Calibrating systems to maintain trust is as much a product challenge as a technical one.
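The latency-versus-depth trade-off in particular tends to resolve into tiered routing. The sketch below is an assumption-laden illustration — the signal categories and sampling rate are invented for the example — of deciding which signals are analyzed inline and which are sampled or deferred:

```python
import random

# Illustrative tiered routing: cheap checks run inline on the endpoint;
# expensive semantic analysis is sampled for real-time treatment and
# otherwise queued asynchronously. Categories and rates are assumptions.

def route_signal(signal_type: str, sample_rate: float = 0.05) -> str:
    inline = {"api_call", "file_access", "policy_check"}        # microsecond-scale
    deep = {"semantic_output_scan", "lineage_reconstruction"}   # model-backed, slow
    if signal_type in inline:
        return "analyze_inline"
    if signal_type in deep:
        # Sample a fraction for real-time deep analysis; defer the rest.
        return "analyze_inline_deep" if random.random() < sample_rate else "queue_async"
    return "queue_async"  # unknown signals are never dropped, only deferred
```

The key property is that nothing is silently discarded: signals that cannot be afforded in real time still land in the asynchronous queue, preserving the forensic record.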
Enterprise implications and regulatory pressure
Organizations deploying autonomous agents will face legal and compliance obligations. Data protection rules, sector-specific security requirements and the need for auditable decision trails elevate the importance of detection and response tooling. Insurers and auditors are likely to ask for demonstrable controls around delegation, reasoned decision-making and protection of sensitive assets.
Manifold’s platform, by centering on detection and response at the endpoint, positions itself to become part of that compliance stack — providing not just alarms but documented evidence of how agents behaved and how the enterprise mitigated risk.
Beyond defense: enabling safe, scalable autonomy
Detection and response are defensive needs, but they also enable broader adoption of autonomous agents. When organizations can monitor and control agent actions with confidence, they are more likely to entrust higher-impact tasks to automation. The resulting feedback loop improves agent design, policy frameworks and operational playbooks.
What starts as a security product can become an operational fabric: policy-as-code for agent permissions, runtime primitives for trust, and shared telemetry standards that allow different vendors and internal teams to interoperate safely. The prize is not just fewer incidents — it is an infrastructure that supports robust, auditable, enterprise-grade autonomy.
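"Policy-as-code for agent permissions" can be as simple as declaring rules as data and evaluating them before every action. A minimal sketch, with a hypothetical schema loosely in the spirit of declarative authorization policies:

```python
# Hypothetical policy-as-code sketch: agent permissions declared as data,
# evaluated at runtime. The schema and agent names are illustrative.

POLICY = {
    "agent:procurement-bot": {
        "allow": [("order", "office_supplies"), ("read", "vendor_catalog")],
        "require_escalation": [("order", "hardware")],
    }
}

def evaluate(agent: str, action: str, resource: str) -> str:
    rules = POLICY.get(agent, {})
    if (action, resource) in rules.get("require_escalation", []):
        return "escalate"
    if (action, resource) in rules.get("allow", []):
        return "allow"
    return "deny"  # default-deny keeps delegation narrow and auditable

print(evaluate("agent:procurement-bot", "order", "office_supplies"))  # allow
print(evaluate("agent:procurement-bot", "order", "hardware"))         # escalate
print(evaluate("agent:procurement-bot", "delete", "vendor_catalog"))  # deny
```

Because the policy is data, it can be versioned, reviewed, and audited like any other code artifact — which is precisely what turns a security control into operational fabric.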
What success looks like
A successful detection and response platform for autonomous agents will do three things well:
- Make the invisible visible: Offer concise, contextual views into agent decisions and data flows without exposing raw secrets.
- Act with precision: Provide graduated responses that stop harmful actions but preserve legitimate, low-risk activity.
- Provide trust artifacts: Generate auditable timelines and attestations that satisfy compliance and governance needs.
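One plausible shape for such trust artifacts is a tamper-evident timeline: each decision is appended with a hash chaining it to the previous entry, storing only redacted summaries rather than raw secrets. A self-contained sketch (the record fields are assumptions for illustration):

```python
import hashlib
import json
import time

# Illustrative tamper-evident audit log: each entry commits to the previous
# one via a hash chain, so an auditor can verify ordering and integrity
# without access to raw secrets (only redacted summaries are stored).

def append_event(log, actor, summary):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "actor": actor,
             "summary": summary, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append_event(log, "agent:contracts", "drafted renewal, amount redacted")
append_event(log, "agent:contracts", "escalated signature to legal")
print(verify_chain(log))  # True
```

Any edit to an earlier entry breaks every subsequent link, which is what lets the timeline serve as evidence rather than merely a log.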
Products that hit these marks will accelerate responsible adoption of agents across finance, healthcare, legal and other domains where the stakes are high and the benefits of automation are greatest.
A collective moment for a new security paradigm
Manifold’s $8 million raise is more than a funding headline; it is a signal that the market is waking up to the need for a security stack designed for thinking software. The coming years will require cooperation between platform vendors, endpoint teams and policy makers to define telemetry standards, response playbooks and privacy guarantees that make autonomous agents trustworthy.
For the AI community, the ask is both practical and philosophical: build systems that are not only powerful but observable, controllable and accountable. Security will no longer be an afterthought bolted onto models — it must be a first-class concern integrated into agent design, deployment and operations.