As AI tools spread across the enterprise, security teams face unprecedented challenges in visibility and control. Traditional security was designed for a world of predictable data flows and defined boundaries, not the dynamic, often opaque world of modern AI. Witness AI is a pioneer in network-based observability and has capabilities that traditional approaches can’t match. This article explores how Witness AI is transforming AI security through network visibility.
Twain Taylor, editor at Software Plaza, sat down with Trevor Welsh, VP of Product at Witness AI, to discuss AI security for enterprises. This article distills the major insights from their conversation, focusing on how traditional security measures compare with Witness AI's approach to network-based observability.
Limitations of traditional security approaches
Enterprise security relies heavily on endpoint protection, browser-based monitoring tools, and data loss prevention (DLP) systems. These tools are effective for traditional IT environments, but they leave massive blind spots when applied to AI systems:
Browser-centric monitoring fails
Many organizations try to secure AI by adding browser extensions or monitoring tools. But as Welsh points out, AI is not solely a browser issue: a browser extension does nothing for the Copilot feature built into Windows 11, and it does nothing for thick clients.
This creates huge visibility gaps, because AI is increasingly embedded in the operating system and in applications rather than accessed only through the browser. When employees use Windows 11's built-in Copilot or thick-client AI apps, browser-based security is completely ineffective.
Static policy limitations
Traditional security tools rely on regex patterns or keyword matching—approaches that don’t work for the nuanced conversational nature of AI. Static policies can’t interpret the meaning behind prompts or recognize sensitive info shared through creative phrasing.
A DLP system might block messages containing specific customer information patterns, but miss a request asking an AI to "summarize the executive compensation details from the attached document," a request Witness AI would flag immediately through intent analysis.
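To make that gap concrete, here is a minimal sketch (hypothetical pattern and prompts, not Witness AI's implementation) of how a static rule catches literal data but misses the same risk expressed in natural language:

```python
import re

# Illustrative only: a typical DLP-style rule that looks for literal data patterns,
# such as 16-digit card numbers. The pattern and prompts are hypothetical examples.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){16}\b")

prompts = [
    "Process payment for card 4111 1111 1111 1111",                        # literal data: caught
    "Summarize the executive compensation details from the attached doc",  # sensitive intent: missed
]

for prompt in prompts:
    if CARD_PATTERN.search(prompt):
        print("BLOCKED by static pattern:", prompt)
    else:
        print("Allowed (pattern saw nothing):", prompt)
```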
Shadow AI blindness
A significant issue for traditional security is the proliferation of "Shadow AI," meaning unapproved or uncontrolled AI use. Employees access AI tools through their own accounts to get work done faster, creating huge security risks that traditional monitoring can't detect.
Witness AI’s network-based observability
Witness AI reimagines AI security through network-based observability. Instead of relying on endpoint agents or browser extensions, Witness AI sits in the network data path to see all AI traffic, regardless of origin.
Deep packet inspection for AI conversations
The technical advantage of Witness AI’s approach is its deep packet inspection specifically for AI. By integrating with Secure Access Service Edge (SASE) solutions, proxies, and network components, Witness AI can see encrypted AI traffic that’s invisible to traditional tools.
This allows security teams to see the entire AI conversation, both prompts and responses, even when it takes place through OS features or thick clients. Welsh explains that Witness AI can see the conversation between Windows 11 Copilot in the operating system and Microsoft's backend models and AI servers.
This visibility covers all major AI providers, including ChatGPT, Claude, Gemini, and internal models, something that’s not possible with traditional methods.
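As a rough illustration of what an in-path inspection point can recover once the SASE gateway or proxy has terminated TLS, the sketch below parses an OpenAI-style chat request body. The payload shape is an assumption for illustration, not Witness AI's parser, and other providers format their requests differently:

```python
import json

# Illustrative sketch: once a SASE gateway or forward proxy terminates TLS,
# an inspection hook sees the raw request body and can pull out the prompts.
def extract_prompts(request_body: bytes) -> list[str]:
    payload = json.loads(request_body)
    # Collect user-authored messages, ignoring system/assistant turns.
    return [m["content"] for m in payload.get("messages", []) if m.get("role") == "user"]

# Hypothetical request body in an OpenAI-style chat completions format.
body = json.dumps({
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the executive compensation details from the attached document."},
    ],
}).encode()

print(extract_prompts(body))
```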
Intent-based policy enforcement
The biggest innovation in Witness AI’s approach is the shift from static pattern matching to intent-based policies. While traditional tools search for specific strings or patterns, Witness AI looks at the semantic meaning behind the prompt to understand what the user is trying to do.
This allows policies based on user intent rather than specific keywords. For example, the demo showed how Witness AI identifies prompts trying to “summarize executive compensation” or “identify card number types” – intentions that would trigger the right security response regardless of the wording used.
For security teams, this is a giant leap beyond traditional approaches. Instead of constantly updating regex patterns to catch new variations, they can create policies for fundamental concerns like “prevent sharing of financial data” that work regardless of how the user phrases the prompt.
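One generic way to approximate this kind of intent matching is semantic similarity over sentence embeddings. The sketch below uses the open-source sentence-transformers library with made-up policy descriptions and a made-up threshold; Witness AI's actual classifiers are not public, so treat this purely as an illustration of the idea:

```python
# Illustrative sketch of intent-based matching using sentence embeddings,
# rather than keyword or regex matching. Policy texts and threshold are hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

policies = {
    "prevent sharing of financial data": "sharing salary, compensation, or financial figures",
    "prevent exposure of payment card data": "revealing or identifying credit card numbers",
}

prompt = "Can you pull together the exec comp figures from this spreadsheet?"

prompt_vec = model.encode(prompt, convert_to_tensor=True)
for name, description in policies.items():
    score = util.cos_sim(prompt_vec, model.encode(description, convert_to_tensor=True)).item()
    if score > 0.4:  # hypothetical threshold
        print(f"Policy '{name}' triggered (similarity={score:.2f})")
```

Note how the prompt never uses the words "financial" or "compensation policy"; the match comes from meaning, not syntax, which is the shift the intent-based approach makes.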
Real-time intervention and redirection
Traditional security tools detect violations after they happen. Witness AI enables real-time intervention by sitting in the data path. This architectural decision allows for things that are impossible with traditional approaches (a simplified decision flow is sketched after the list):
- Automatic redaction of sensitive info from prompts before they hit external AI systems
- Seamless redirection of queries to internal models based on content or intent
- Warning users about potential policy violations before they happen
- Preventing data exposure by blocking inappropriate interactions entirely
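The sketch below mirrors those four interventions as a simple in-path decision function. The intent labels and actions are hypothetical, not Witness AI's policy engine:

```python
from enum import Enum, auto

# Hypothetical sketch of an in-path decision flow mirroring the list above.
class Action(Enum):
    ALLOW = auto()
    REDACT = auto()    # strip sensitive spans, then forward to the external model
    REDIRECT = auto()  # route the query to an approved internal model instead
    WARN = auto()      # surface a policy warning to the user before forwarding
    BLOCK = auto()     # drop the interaction entirely

def decide(intent: str, contains_sensitive_data: bool) -> Action:
    """Map a classified intent (produced upstream) to an in-path action."""
    if intent == "exfiltrate_regulated_data":
        return Action.BLOCK
    if intent == "summarize_confidential_docs":
        return Action.REDIRECT
    if contains_sensitive_data:
        return Action.REDACT
    if intent == "borderline_policy_area":
        return Action.WARN
    return Action.ALLOW

# A prompt classified as touching confidential documents gets rerouted internally.
print(decide("summarize_confidential_docs", contains_sensitive_data=False))  # Action.REDIRECT
```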
Value proposition compared to traditional approaches
Here is how Witness AI's network-based observability outperforms traditional security:
Visibility
The biggest benefit is visibility. Traditional tools might track browser-based interactions with some AI platforms, but Witness AI builds a complete catalog of all AI applications, users, and usage patterns across the organization. This includes operating system integrations, thick clients, and API-based interactions that are invisible to traditional security tools.
With Witness AI, enterprises can answer questions impossible with traditional approaches: What AI tools are employees using? What data are they sharing? Which models are processing sensitive information?
Intent-based protection
Traditional regex-based approaches create both false positives (blocking legitimate activity) and false negatives (missing violations expressed in different wording). Witness AI's intent-based analysis protects more accurately by focusing on what users are trying to do rather than on specific syntax. This means stronger protection with fewer disruptions to legitimate work, a big win in environments where AI productivity is a strategic priority.
Enabling AI innovation securely
Possibly the most critical aspect of Witness AI is its comprehensive network-based observability, which enables security teams to become what Welsh refers to as "the CISO of Yes." This allows for AI innovation while maintaining the necessary controls. Traditional security approaches often force organizations to choose between innovation and security, but network-based observability gives security teams the visibility and control to do both.
As Welsh says, many CISOs are "less worried about AI as much as worried about competitors using it to be faster." By providing controls that enable rather than restrict innovation, network-based observability helps organizations capture the competitive advantage of AI without taking on unacceptable risk.
As Twain Taylor's conversation with Trevor Welsh revealed, the gap between traditional security measures and what's needed for effective AI governance continues to widen. Witness AI's network-based observability approach represents a significant evolution in how enterprises can secure their AI ecosystems without stifling innovation. To hear more, listen to the full podcast.