Forging the New Perimeter: How Fortinet, SentinelOne and CrowdStrike Are Hardening Enterprise AI
The age of generative models and large-scale inference has turned the perimeter concept on its head. Traditional firewalls and endpoint controls were built for files, ports and signatures. Today’s enterprise AI workloads — live inference endpoints, training pipelines, model registries, and GPU-backed clusters — demand a reimagined security stack. In recent weeks, three of the industry’s dominant vendors signaled that reimagining is underway: Fortinet introduced a new firewall capability aimed squarely at AI traffic, while SentinelOne and CrowdStrike rolled out enhancements designed to protect models and the infrastructure that runs them. Together these moves show the market’s accelerating pivot from generic cyber defenses to model-aware, data-aware, and runtime-aware controls.
Why AI workloads need new defenses
AI systems change the game in several ways. Data flows that used to be benign now carry training labels and sensitive user prompts. Models become high-value assets that attackers want to steal, manipulate or poison. Inference services expose rich semantic APIs that can be probed for information leakage or trojanized through crafted inputs. And the compute backbone — GPUs, containers, orchestration platforms, and cloud-native storage — introduces fresh attack surfaces that are often not protected by existing endpoint policies.
As organizations rush to deploy models in production, they inherit a new set of threats:
- Model theft and extraction, where adversaries infer model parameters or behavior through repeated API queries.
- Data and label leakage during training and inference — embeddings and logs can reveal user data.
- Poisoning attacks on training pipelines, where tainted data or labels silently corrupt model behavior.
- Prompt injection and API abuse that subvert model logic or exfiltrate sensitive information.
- Runtime attacks on shared GPU resources, container escapes, and misconfigurations in orchestration layers.
Defending against these threats requires visibility into semantic traffic, continuity of control from dataset to deployment, and new controls that operate at the level of model inputs, outputs and provenance.
Fortinet’s new firewall capability: a perimeter for model traffic
Fortinet’s announcement centers on expanding the traditional firewall into an “AI-aware” control plane. At its core, this capability treats model endpoints and model-related APIs as first-class citizens. Instead of relying solely on port or IP rules, Fortinet layers in:
- Protocol and content inspection for model inference calls, spotting abnormal query patterns and high-volume probing attempts.
- Data-loss controls tailored to embeddings and unstructured responses, enabling teams to redact or block sensitive tokens in transit.
- Rate-limiting and challenge-response mechanisms tuned to guard against automated model extraction campaigns.
- Integration hooks to model registries and orchestration tooling so that policy follows the model lifecycle.
Viewed holistically, the move converts the firewall from a blunt instrument into a contextual gatekeeper that can differentiate an administrator’s model-scoring request from a scripted extraction attempt. That matters because the most damaging attacks against models don’t always look like classic malware — they look like many small, legitimate-seeming requests that gradually peel away a model’s secrets.
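To make that distinction concrete, the sketch below shows the kind of per-request inspection an AI-aware gateway might apply, in generic form: a per-client request budget plus redaction of sensitive-looking tokens in inference responses. It is an illustrative Python sketch, not Fortinet's implementation; the thresholds, the notion of a client_id, and the regex-based redaction are all assumptions.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative only: a generic "AI-aware" gateway check, not any vendor's product.
RATE_WINDOW_SECONDS = 60          # assumed sliding window for per-client rate limiting
MAX_REQUESTS_PER_WINDOW = 120     # assumed budget; tune against real traffic
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # toy example of a "sensitive token"

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str, now: float | None = None) -> bool:
    """Return False when this client exceeds its per-window request budget."""
    now = time.time() if now is None else now
    window = _request_log[client_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > RATE_WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True

def redact_response(text: str) -> str:
    """Redact sensitive-looking tokens (here, email addresses) from a model response."""
    return EMAIL_PATTERN.sub("[REDACTED]", text)

if __name__ == "__main__":
    print(allow_request("tenant-a"))                                   # True until the budget is spent
    print(redact_response("Contact jane.doe@example.com for access."))
```

The hard part in practice is the scoring, not the plumbing: telling a legitimate batch-scoring job apart from an extraction campaign needs richer features than a simple counter, which is where vendor-grade analytics earn their keep.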
SentinelOne: runtime telemetry and model integrity
SentinelOne’s enhancements emphasize continuous runtime protection across containers, virtual machines and GPU-based workloads. Key themes from the announcement include:
- Deep telemetry across model training and inference pipelines, surfacing anomalous process behavior and suspicious I/O patterns within GPU containers.
- Behavioral detection that recognizes signs of model tampering — unusual file writes to model artifacts, unexpected model weight changes, or foreign code injection into serving processes.
- Automation and rollback capabilities that can quarantine compromised model artifacts and restore a known-good model from registry snapshots.
- Stronger supply-chain visibility that tracks the provenance of pre-trained components, dependencies and third-party packages used during model development.
In practical terms, these enhancements aim to close the gap between instrumented visibility and actionable response. Detecting that a training job is opening suspicious network connections or that a serving process is suddenly writing out checkpoint files allows defenders to stop an attack before it compromises a model’s integrity or leaks sensitive training data.
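The artifact-integrity piece of that detection can be illustrated with a simple baseline-and-compare routine: hash every model artifact at deploy time, then periodically re-hash and flag drift. This is a minimal sketch of the general technique, not SentinelOne's agent; real runtime protection correlates findings like these with process and network telemetry before acting.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch of artifact integrity checking; the paths and manifest format are assumptions.

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(model_dir: Path, manifest: Path) -> None:
    """Record a hash for every artifact in the model directory at deploy time."""
    digests = {str(p): sha256_of(p) for p in sorted(model_dir.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def detect_tampering(model_dir: Path, manifest: Path) -> list[str]:
    """Return artifacts whose current hash no longer matches the recorded baseline."""
    baseline = json.loads(manifest.read_text())
    findings = []
    for path_str, expected in baseline.items():
        p = Path(path_str)
        if not p.exists() or sha256_of(p) != expected:
            findings.append(path_str)  # candidate for quarantine or rollback
    return findings
```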
CrowdStrike: protecting the cloud-native model lifecycle
CrowdStrike’s announcement focused on hardening the cloud-native aspects of model deployment. Several elements stood out:
- Broader cloud workload protection for cluster-level misconfigurations that can expose model artifacts or metadata stores.
- Enhanced identity and access monitoring specifically for model registries and inference endpoints, helping detect lateral movement or privilege escalation aimed at model theft.
- Real-time policy enforcement and application allowlisting at the container and function level, reducing the attack surface available to adversaries targeting inference containers.
- Telemetry and integrations for data governance systems so that access to training data and model artifacts is auditable and tied to enterprise policy.
By centering cloud workload controls and access telemetry, CrowdStrike is addressing the operational reality that models are rarely isolated. They’re woven into CI/CD pipelines, storage systems, monitoring stacks and identity systems. Compromise anywhere along that chain can lead to model exposure or misuse.
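One way to picture the access-monitoring piece is as an audit pass over registry events: every pull of a model artifact is checked against the principals approved for that model, and anything else is flagged for investigation. The event shape and policy mapping below are hypothetical, chosen to illustrate the pattern rather than any vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical event and policy shapes for illustrating registry access auditing.

@dataclass
class AccessEvent:
    principal: str      # identity that touched the registry
    model_name: str     # which model artifact was read or written
    action: str         # e.g. "pull", "push", "delete"

# Assumed policy: which identities may pull which models into production.
APPROVED_PULLERS = {
    "fraud-scoring-v3": {"svc-inference-prod", "ml-platform-deployer"},
}

def audit(events: list[AccessEvent]) -> list[AccessEvent]:
    """Flag pulls of model artifacts by principals outside the approved set."""
    flagged = []
    for event in events:
        allowed = APPROVED_PULLERS.get(event.model_name, set())
        if event.action == "pull" and event.principal not in allowed:
            flagged.append(event)  # feed into alerting and identity investigation
    return flagged

if __name__ == "__main__":
    suspicious = audit([AccessEvent("contractor-laptop", "fraud-scoring-v3", "pull")])
    print(f"{len(suspicious)} suspicious registry pulls")
```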
What these moves mean for enterprises
Taken together, the announcements reflect a maturing security posture across three common control layers: network and perimeter (Fortinet), runtime and endpoint (SentinelOne), and cloud/workload and identity (CrowdStrike). For enterprises that are serious about operationalizing AI safely, several practical implications follow:
- Security must follow the model lifecycle. Policy artifacts should attach to model metadata in registries and travel with model versions from dev to production. Firewalls and workload agents that can consume that metadata reduce friction and increase enforcement fidelity (a sketch of this pattern follows the list).
- Visibility must be semantic. Telemetry that only records IPs and file names isn’t enough. Observability must capture API intent, query patterns, embedding outputs and the provenance of training artifacts.
- Response mechanisms must be model-aware. Quarantining a host or revoking a certificate is important, but so is the ability to revoke model keys, freeze registries, and roll back to verified checkpoints without crippling inference services.
- Automation and orchestration are critical. AI deployments move fast. Runbooks that require manual intervention won’t keep pace; automated containment, policy enforcement and recovery become indispensable.
- Cross-functional controls will win. Security, ML engineers, platform teams and product owners must share a common language of threats, telemetry and remediation. Integrations across registry, CI/CD, logging and security platforms are table stakes.
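As a sketch of the first implication, policy can travel as part of the model's own metadata, so that whatever deploys the model can also enforce it. The record structure and the allowed_environments field below are hypothetical; a real registry would typically hold the same information as tags or version metadata on the model entry.

```python
from dataclasses import dataclass, field

# Hypothetical model-version record that carries its own security policy.

@dataclass
class ModelVersion:
    name: str
    version: str
    artifact_uri: str
    policy: dict = field(default_factory=dict)

def enforce_at_deploy(model: ModelVersion, environment: str) -> None:
    """Refuse deployment when the model's attached policy does not allow it."""
    allowed = model.policy.get("allowed_environments", [])
    if environment not in allowed:
        raise PermissionError(f"{model.name}:{model.version} is not approved for {environment}")

candidate = ModelVersion(
    name="churn-predictor",
    version="7",
    artifact_uri="s3://models/churn-predictor/7/model.tar.gz",
    policy={"allowed_environments": ["staging"]},
)
enforce_at_deploy(candidate, "staging")       # passes
# enforce_at_deploy(candidate, "production")  # would raise PermissionError
```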
Beyond product features: the evolving security mindset
Perhaps the most significant development is not a single firewall rule or detection algorithm but the shift in mindset these vendors embody. They are acknowledging that risk flows differently when models are central to business value. Defenders can no longer treat AI as just another application layer. Models are both business logic and a repository of sensitive information — and they require controls that reflect both realities.
That means investing in defenses that are adaptive, that reason about semantics and provenance, and that can operate at the temporal granularity that AI systems demand. A firewall that only logs requests every minute may miss a coordinated extraction campaign that runs at low-and-slow cadence. A DLP system that ignores embeddings will overlook a vector that leaks personally identifiable information. The new guardrails must be built for the peculiarities of AI.
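The low-and-slow point can be made concrete: a client can stay under every per-minute threshold while still accumulating an unusually broad set of distinct queries over hours. The sketch below tracks cumulative distinct queries per client across a long window; the window length and threshold are assumptions and would need tuning against real traffic.

```python
import time
from collections import defaultdict

# Illustrative low-and-slow extraction heuristic: no single minute looks abnormal,
# but cumulative distinct queries over a long horizon do. Thresholds are assumptions.
LONG_WINDOW_SECONDS = 6 * 3600       # look back six hours
DISTINCT_QUERY_THRESHOLD = 5_000     # unusually broad coverage of the input space

_history: dict[str, list[tuple[float, int]]] = defaultdict(list)

def record_query(client_id: str, query_text: str, now: float | None = None) -> bool:
    """Record a query; return True if the client resembles a slow extraction campaign."""
    now = time.time() if now is None else now
    _history[client_id].append((now, hash(query_text)))
    # Keep only events inside the long window.
    _history[client_id] = [(t, h) for (t, h) in _history[client_id]
                           if now - t <= LONG_WINDOW_SECONDS]
    distinct = len({h for (_, h) in _history[client_id]})
    return distinct > DISTINCT_QUERY_THRESHOLD
```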
What organizations should do now
For AI teams and security leaders looking to translate announcements into action, a pragmatic roadmap can help:
- Map your model landscape. Identify where models are trained, stored, served, and who has access. Catalog registries, orchestration layers, and inference endpoints.
- Introduce semantic telemetry. Capture API calls, query rates, embedding outputs (with privacy controls), and registry events so you can detect model-targeted anomalies.
- Adopt model-aware network controls. Use AI-aware firewalling to limit exposure, throttle abnormal query patterns, and inspect inference traffic for exfiltration attempts.
- Harden runtime environments. Enforce container allowlisting, GPU isolation, and continuous integrity checks for model artifacts and serving binaries.
- Automate recovery. Maintain signed model checkpoints and automated rollback procedures tied to your detection systems (a sketch of checkpoint signing and verification follows the list).
- Govern third-party models. Maintain provenance and risk scores for any pre-trained models or components you incorporate.
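For the recovery item, the sketch below shows one simple way to sign a checkpoint and verify it before rolling back, using an HMAC over the file contents. The key handling and file layout are assumptions for illustration; a production setup would pull the key from a secrets manager or use a dedicated signing and attestation service.

```python
import hashlib
import hmac
from pathlib import Path

# Minimal sketch: HMAC-sign checkpoints at publish time, verify before rollback.
# Hard-coding the key is for illustration only; use a secrets manager in practice.
SIGNING_KEY = b"replace-with-key-from-your-secrets-manager"

def sign_checkpoint(checkpoint: Path) -> str:
    """Compute a signature over the checkpoint bytes and store it next to the file."""
    digest = hmac.new(SIGNING_KEY, checkpoint.read_bytes(), hashlib.sha256).hexdigest()
    checkpoint.with_suffix(checkpoint.suffix + ".sig").write_text(digest)
    return digest

def verify_checkpoint(checkpoint: Path) -> bool:
    """Return True only if the stored signature matches the file's current contents."""
    sig_file = checkpoint.with_suffix(checkpoint.suffix + ".sig")
    if not sig_file.exists():
        return False
    expected = sig_file.read_text().strip()
    actual = hmac.new(SIGNING_KEY, checkpoint.read_bytes(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, actual)

def roll_back(candidates: list[Path]) -> Path | None:
    """Pick the newest checkpoint that still verifies; None means manual intervention."""
    for checkpoint in sorted(candidates, reverse=True):
        if verify_checkpoint(checkpoint):
            return checkpoint
    return None
```

Tying a routine like this to detection output is what turns "we noticed tampering" into "we restored a verified model within minutes," which is the availability-preserving response the announcements gesture toward.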
Where the market is headed
Expect the next wave of innovation to focus on tighter integrations between ML platforms and security telemetry, standardized signals for model integrity, and shared controls for model provenance. Look for features such as model attestation (cryptographic proofs about model origin and state), native hooks into model registries for policy enforcement, and industry consortiums developing common threat taxonomies for model attacks.
Vendors that can convert semantic model behavior into reliable detection signals, and then automate containment in a way that respects model availability and business continuity, will win enterprise trust. The product updates from Fortinet, SentinelOne and CrowdStrike are early markers on that roadmap — practical steps toward an architecture where model trust is measurable and enforceable.
Conclusion
AI is changing not just what software does, but what must be protected. Firewalls, endpoints and cloud controls are not obsolete; they must evolve. Fortinet’s move to bring context to network defenses, SentinelOne’s emphasis on runtime model integrity, and CrowdStrike’s focus on cloud-native model protections together sketch a future where defenses are built around the realities of machine learning.
For enterprises racing to deploy AI at scale, the imperative is clear: secure the model as you would any crown-jewel asset — with visibility, provenance, automated response and a perimeter that understands the language of models. The vendors’ announcements are an invitation to rethink security architectures for the era of AI, and a reminder that protecting tomorrow’s systems requires a different, more semantic kind of vigilance.

