NetSec Spotlight: AI Access Security

In the previous post, Data Loss Prevention (DLP) was introduced as a core network security capability for identifying sensitive information and enforcing policy to prevent exposure.

AI introduces a new access pattern that challenges traditional application-centric controls. AI-driven interactions span browsers, APIs, SaaS integrations, and internal data sources, often occurring within encrypted and otherwise sanctioned traffic flows.

This post examines AI Access Security and how the NetSec platform applies content-aware inspection and policy enforcement to govern AI-driven traffic. AI Access Security applies these controls natively, and can also align with DLP policies when both are deployed.

AI tools are being adopted faster than almost any other technology, and users are already:

  • Copying sensitive data into prompts
  • Connecting AI tools to internal systems
  • Using browser-based, API-based, and embedded AI features interchangeably

From a network security perspective, this represents a new access pattern and introduces distinct challenges:

  • How do you distinguish AI interactions from standard encrypted application traffic?
  • How do you detect AI features embedded within otherwise sanctioned SaaS applications?

Many organisations will respond by bolting on additional point controls, increasing complexity without guaranteeing improved outcomes.

Platform Context

AI Access Security is a shared inspection capability delivered through Cloud-Delivered Security Services to enable the safe adoption of GenAI.

AI Access Security identifies AI applications, scores their risk, applies granular access policies, and extends DLP inspection to prompts and responses.

It extends the inspection layer within the existing platform architecture, rather than introducing a new enforcement tier or control plane:

  • Policy authority remains centralised
  • Enforcement remains distributed
  • Telemetry remains normalised within the same data model

High-level visual mapping of AI Access Security within the NetSec platform

AI Access Security extends inspection beyond application and user context. Operationally, it provides:

  • Automatic identification of GenAI applications, including browser-based, API-driven, and AI features embedded inside SaaS platforms
  • Detection of AI usage within otherwise sanctioned applications
  • Inline inspection of prompts and responses without redirecting traffic to an external proxy tier
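A minimal sketch of the identification step, assuming classification by TLS SNI hostname and URL path against a small catalogue. The catalogue entries and the `classify_flow` helper are illustrative only; real platforms use far richer App-ID-style signatures:

```python
# Hypothetical sketch: classifying flows as GenAI traffic by matching the
# TLS SNI hostname against a small application catalogue. Names below are
# illustrative stand-ins for real application signatures.

GENAI_APP_CATALOGUE = {
    "chat.openai.com": {"app": "ChatGPT", "category": "genai-chat"},
    "api.openai.com": {"app": "OpenAI API", "category": "genai-api"},
    "gemini.google.com": {"app": "Gemini", "category": "genai-chat"},
}

# AI features embedded inside otherwise sanctioned SaaS apps,
# keyed by hostname with assumed URL path prefixes.
EMBEDDED_AI_PATHS = {
    "www.notion.so": ["/api/v3/assistant"],
}

def classify_flow(sni: str, path: str = "/") -> dict:
    """Return a classification for a flow based on SNI and URL path."""
    if sni in GENAI_APP_CATALOGUE:
        return {"genai": True, **GENAI_APP_CATALOGUE[sni]}
    for prefix in EMBEDDED_AI_PATHS.get(sni, []):
        if path.startswith(prefix):
            return {"genai": True, "app": sni, "category": "embedded-ai"}
    return {"genai": False, "app": sni, "category": "sanctioned-saas"}
```

The embedded-AI case is the important one: the same hostname can carry both sanctioned SaaS traffic and AI-assistant traffic, so hostname matching alone is not enough.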

Policy decisions are based on AI-specific risk signals, including data handling posture, model training behaviour, identity characteristics, and compliance alignment.

In practice, this means enforcing granular controls such as:

  • Blocking data uploads while allowing query-only usage
  • Restricting high-risk AI applications
  • Applying Data Loss Prevention policies to prompt content
  • Limiting AI access by user, device posture, or risk profile
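These controls can be thought of as one decision function over application risk and user/device context. The sketch below is a simplified, hypothetical model; `evaluate_ai_policy`, its parameters, and its thresholds are illustrative, not platform configuration:

```python
# Hypothetical sketch of a granular AI access-policy decision, assuming a
# per-application risk score (0-100) and simple device context. All names
# and thresholds here are illustrative.

def evaluate_ai_policy(app_risk: int, action: str,
                       device_compliant: bool, dlp_hit: bool) -> str:
    """Return 'allow', 'block', or 'restrict' for an AI interaction.

    action: 'query' (read-only prompt) or 'upload' (data leaves the org).
    """
    if app_risk >= 80:                  # high-risk AI application
        return "block"
    if not device_compliant:            # posture-based restriction
        return "restrict"
    if action == "upload" and dlp_hit:  # DLP match on prompt content
        return "block"
    if action == "upload":              # allow query-only usage
        return "restrict"
    return "allow"
```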

Importantly, inspection is applied inline and continuously, using the same enforcement model as the rest of the platform. This deepens inspection within the existing architecture rather than adding a reactive bolt-on tool.

Operational Scenario

Scenario: User copies internal financial data into a public AI chatbot.

Without AI-specific inspection:

  • Traffic decrypted and inspected inline
  • Application or web traffic identified
  • No prompt-level data inspection
  • Data exposure undetected

With AI Access Security:

  • Traffic decrypted and inspected inline
  • AI application identified
  • Prompt content inspected inline
  • DLP policy evaluated against user and device context
  • Upload blocked or restricted according to policy

Example screenshots of AI application discovery by use case in Strata Cloud Manager (SCM)
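The prompt-level inspection step in the scenario above can be sketched as pattern matching over prompt text before it leaves the network. This is a toy model: the regexes below are simplistic stand-ins for real DLP detectors, and the detector names are illustrative:

```python
import re

# Hypothetical sketch of inline prompt inspection: scanning prompt text
# for sensitive-data patterns before it reaches a public AI chatbot.
# These patterns are deliberately crude; real DLP detectors combine
# patterns with validation, context, and ML-based classification.

DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "revenue_figure": re.compile(r"(?i)\b(revenue|EBITDA)\b[^.\n]{0,40}\$?\d"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of DLP detectors that match the prompt."""
    return [name for name, rx in DLP_PATTERNS.items() if rx.search(prompt)]

def enforce(prompt: str) -> str:
    """Block the upload when any detector fires; otherwise allow it."""
    return "block" if inspect_prompt(prompt) else "allow"
```

In the scenario above, the financial figures in the pasted prompt would trigger a detector, so the upload is blocked inline rather than being discovered after the fact.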

Platform Outcomes

When AI Access Security is implemented as part of a NetSec platform, the outcomes include:

  • AI usage governed without introducing a parallel security stack
  • Clear visibility into AI usage patterns
  • Consistent data protection across AI and non-AI traffic
  • Policies that scale as AI capabilities evolve
  • No additional operational domain to manage
