Modern infrastructure is already complex, characterized by distributed environments, multi-cloud deployments, and dynamic change. Now add Large Language Models (LLMs) to the mix, and the challenge grows exponentially.
Engineering leaders are under pressure to deliver innovation fast, while also safeguarding against breaches, misconfigurations, and human error. That’s why initiatives like eliminating static credentials, enforcing just-in-time access, and reducing SSH key sprawl are gaining traction. The rise of LLMs adds both opportunity and risk to this already dynamic environment.
The Impact of Model Context Protocol (MCP)
Historically, LLMs have been black boxes—you could feed them data, but they couldn’t interact meaningfully with external systems. That changed in late 2024 when Anthropic introduced the Model Context Protocol (MCP)—a standard for connecting LLMs to external data sources in a structured way.
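Under the hood, MCP runs over JSON-RPC 2.0: a client (typically the LLM application) discovers available tools with tools/list and invokes them with tools/call. As a rough sketch, a tools/call request has the following shape, rendered here as YAML for readability; the tool name and arguments are hypothetical:

# Shape of an MCP tools/call request (JSON-RPC 2.0), rendered as YAML.
# The tool name and its arguments are hypothetical examples.
jsonrpc: "2.0"
id: 1
method: tools/call
params:
  name: query_database          # a tool exposed by the MCP server (hypothetical)
  arguments:
    sql: SELECT * FROM products LIMIT 10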
Since then, major players like Microsoft, OpenAI, and Cloudflare have embraced MCP, accelerating its adoption and establishing it as a go-to method for bridging LLMs with proprietary systems.
For engineers, this solves a classic dilemma: which new standard is worth investing in? MCP is emerging as a clear winner.
Opening the Door to Innovation… and Risk
Standard interfaces like MCP open the door to innovation—but also to risk. Without strong access controls, LLMs can overreach: reading sensitive data they shouldn’t, bypassing audit trails, or becoming prime targets for attackers as overpermissioned accounts.
Just as with any other service interacting with your infrastructure, LLMs need to follow the same identity, access, and audit rules that govern other user types within an organization.
Teleport + MCP: Secure LLM Integration
Teleport’s Infrastructure Identity Platform provides that secure bridge. Since 2015, Teleport has integrated with a wide range of databases—PostgreSQL, MySQL, Cassandra, MongoDB, Oracle, DynamoDB, and more. It enforces fine-grained access control (down to table-level) through RBAC and ABAC, and logs every access event—human or machine.
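For example, an LLM's reach into a database can be narrowed to a specific environment, logical database, and database account with a role like the sketch below; the labels and names are hypothetical:

# Illustrative Teleport role scoping database access for LLM-driven queries.
# Label, database, and account names are hypothetical.
kind: role
metadata:
  name: llm-db-read
spec:
  allow:
    db_labels:
      env: dev            # only databases labeled env=dev
    db_names:
      - inventory         # only this logical database
    db_users:
      - reader            # connect only as this (presumed read-only) database account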
When LLMs are wired into infrastructure through Teleport, organizations get:
- Strict access control: LLMs can only access the data that the user identity is authorized for.
- Full audit trails: Every LLM-initiated request is logged for accountability and review.
LLM & Teleport Data Access Governance: A Query Example
Let’s walk through a simple scenario.
An LLM is tasked with retrieving data from a products table. Behind the scenes, Teleport governs access:
✅ When the requesting engineer has database read permissions, the LLM returns the data and an event is logged.
❌ When the user lacks those permissions, the LLM is blocked from accessing the table—and the denial is logged as well.
In this example, Claude (the LLM) examines the database to identify the products table:
The user is authorized with read access to the products table, so the LLM is able to retrieve the information requested:
The actions are logged as events, providing an audit trail:
Switching to a user “alice” without read privileges:
Access is now denied, and the LLM cannot return the data:
The failed access attempt is also captured and logged by Teleport:
LLM & Teleport Access Control: An MCP Server example
In another example, an MCP server has been authorized to query data but not modify it.
✅ A read request is approved and executed.
❌ A write attempt is denied by Teleport before it even reaches the database.
This demonstrates how Teleport enforces the principle of least privilege, even when an LLM is in the loop.
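In Teleport terms, that read-but-not-write constraint could be expressed with a role along these lines; the tool names are illustrative and depend on what the MCP server exposes:

# Illustrative role allowing only query/read-style MCP tools.
# Tool names vary by MCP server and are hypothetical here.
kind: role
metadata:
  name: mcp-query-only
spec:
  allow:
    mcp_tools:
      - query
      - list_*
      - describe_*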
In the following example, LLM permissions are scoped down to read-only filesystem access using the following Teleport role:
kind: role
metadata:
  name: llm-access
spec:
  allow:
    mcp_tools:
      - get_*
      - list_*
      - read_*
      - search_files
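(A role like this can be saved to a file and created with tctl create, then assigned to users directly or mapped from their identity-provider groups.)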
Because the user is authorized to search and read data, the LLM is able to respond to the request:
However, the request to change the data is denied:
This access request and its denial are also recorded in the audit logs.
Move Fast, Stay Secure
MCP is a powerful unlock for engineers building LLM applications that need to interact with real infrastructure. But with power comes risk.
Teleport ensures LLMs follow the same guardrails as any other service—strong authentication, governed authorization, and airtight audit logging—so teams can drive innovation at speed while maintaining their security posture.
Want to learn more about how Teleport can help you secure LLMs?