Why MCP Servers Should Not Be Built as API Wrappers
As the Model Context Protocol (MCP) gains widespread adoption in the AI ecosystem, a common architectural pattern has emerged: building MCP servers as thin wrappers over existing APIs. While this approach might seem pragmatic, it fundamentally misunderstands the distinct nature of MCP and the security implications of AI-driven interactions. The problem runs deeper than mere implementation convenience – it touches on the core differences between how developers consume APIs versus how AI agents operate on behalf of humans.
The Fundamental Difference: Developer Intent vs. AI Autonomy
APIs were created for developers to integrate into preprogrammed code. When a developer writes code that consumes an API, they carefully plan the sequence of calls, handle edge cases, implement rate limiting, and consider the impact of each operation. The developer acts as a responsible intermediary who understands both the capabilities of the API and the needs of their application. This human oversight is baked into the traditional API consumption model.
Key Insight: Traditional APIs assume a developer is orchestrating calls in predictable, well-tested patterns. MCP assumes an AI agent is making dynamic decisions based on natural language requests from users who may not understand the technical implications.
MCP, by contrast, is designed for AI agents to consume based on dynamic human requests. The person talking to the AI is usually not a developer and doesn't plan efficient, non-harmful scenarios. They express intent in natural language, and the AI interprets that intent and decides which tools to invoke. This introduces a fundamentally different risk profile.
As security researchers at Red Hat noted, "MCP servers pose significant security risks due to their ability to execute commands and perform API calls. One major concern is that even if a user doesn't intend a specific action, the LLM might decide it's the appropriate one." This autonomy creates scenarios where the gap between user intent and system action can have serious consequences.
API Security: Protecting the Provider, Not the Consumer
Traditional API security is primarily concerned with protecting the API provider. Authentication mechanisms like API keys and OAuth exist to ensure that only authorized clients can access the API, that rate limits prevent abuse of the provider's infrastructure, and that malicious actors cannot compromise the provider's systems.
The Provider-Centric Model: OAuth 2.0, the gold standard for API security, focuses on ensuring that applications have proper authorization to access resources on the provider's side. While it protects user credentials through token delegation, its primary goal is controlling access to the API provider's resources.
Consider the typical API security measures: rate limiting protects the provider from being overwhelmed, API keys identify consumers for billing and throttling, and OAuth scopes limit what actions a client application can perform. All of these mechanisms serve the provider's interests. While they provide some protection for consumers – preventing credential exposure, for instance – they don't address the question: "Should this particular request be made right now, given the user's context and potential consequences?"
API functionality is designed to allow developers to compose efficient communication, usually based on predefined scenarios. A developer integrating a payment API, for example, knows to implement confirmation steps, validate amounts, and provide clear user feedback. These safeguards are programmed into the application logic, not enforced by the API itself.
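These application-level safeguards can be sketched in a few lines. Everything here is illustrative, assuming a hypothetical payment SDK (`api.create_charge`) and an application-defined ceiling; it is not a real payment library's API:

```python
# Hypothetical application-layer safeguards wrapped around a payment API.
# The names api.create_charge and MAX_CHARGE_CENTS are illustrative.

MAX_CHARGE_CENTS = 50_000  # application-defined per-charge ceiling

def charge_customer(api, customer_id: str, amount_cents: int, confirmed: bool) -> dict:
    """Validate and gate a charge before the raw API call is made."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    if amount_cents > MAX_CHARGE_CENTS:
        raise ValueError("amount exceeds the application's per-charge limit")
    if not confirmed:
        # The UI layer must have shown the user the amount and obtained consent.
        raise PermissionError("charge requires explicit user confirmation")
    return api.create_charge(customer_id=customer_id, amount=amount_cents)
```

The point is that none of these checks live in the payment API itself; the developer writes them into the application, which is exactly the layer an API wrapper omits.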
MCP's Different Security Requirements
MCP needs to address an entirely different set of concerns. When AI assistants gain access to sensitive files, databases, or services via MCP, organizations must ensure those interactions are secure, authenticated, and auditable. But beyond organizational security, MCP must also protect individual users from their own requests when those requests might have unintended consequences.
Flexibility with Guardrails
MCP must provide more flexibility than traditional APIs because it cannot predict all possible user requests in advance. An AI agent might need to chain together multiple operations, access data in novel ways, or adapt its approach based on intermediate results. This flexibility is a core feature, not a bug.
However, this same flexibility creates risks. A legitimate user might submit a prompt they didn't write to the MCP client, perhaps one recommended by a malicious third party. Such a prompt could be obfuscated but still leak private information from the user's conversation or accessible tools. The AI might interpret an ambiguous request in a way the user didn't intend, potentially triggering destructive operations.
Consumer Protection as a First-Class Concern
Unlike APIs, where the assumption is that a knowledgeable developer is making careful decisions, MCP must assume that the end user making the request may not fully understand the implications. They might ask to "clean up my inbox" without realizing the AI could interpret this as permanently deleting thousands of emails. They might request to "update all customer records" without considering the scale of that operation.
The MCP Imperative: MCP servers should implement user consent flows, action previews, and impact assessments that help users understand what's about to happen before it happens. This level of user protection is not typically found in API design.
Security experts at Palo Alto Networks emphasize this concern: "These connections require credentials such as API keys. While the MCP's current implementation requires servers to run locally, this setup still represents another instance of credential storage that can lead to exposure. What's more, an MCP server will often request broad permission scopes to provide flexible functionality."
Why API Wrappers Fall Short
When you build an MCP server as a wrapper over an existing API, you inherit all of the API's design assumptions – assumptions that don't align with AI agent usage patterns.
Rate Limiting and Cost Control
APIs typically implement rate limits to protect the provider's infrastructure. But in an MCP context, you need rate limits to protect the consumer from accidentally triggering expensive operations. An AI agent could easily make hundreds of API calls while trying to "analyze all documents in the folder," potentially costing the user significant money in API fees or exceeding their quota.
Scope Granularity
OAuth scopes are designed for application-level permissions: "this app can read your emails" or "this app can post to your timeline." But MCP needs much more granular, context-aware permissions. Poor scope design magnifies the impact of token compromise, adds user friction, and obscures audit trails. Consider an attacker who obtains an access token carrying broad scopes (`files:*`, `db:*`, `admin:*`) that were granted up front because the MCP server exposed every scope in `scopes_supported` and the client requested them all.
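One way to narrow this is to bind each tool to the smallest scope that covers it and check the token on every call. A minimal sketch, with hypothetical tool and scope names:

```python
# Hypothetical per-tool scope map: each MCP tool is bound to the narrowest
# scope that covers it, instead of requesting files:* or db:* up front.
TOOL_SCOPES = {
    "read_invoice": {"files:invoices:read"},
    "list_customers": {"db:customers:read"},
    "delete_customer": {"db:customers:delete"},
}

def authorize_tool(tool_name: str, token_scopes: set[str]) -> bool:
    """Allow a tool call only if the token carries that tool's required scopes."""
    required = TOOL_SCOPES.get(tool_name)
    if required is None:
        return False  # unknown tools are denied by default
    return required.issubset(token_scopes)
```

A compromised read-only token then cannot reach `delete_customer` at all, which is precisely the blast-radius reduction coarse scopes fail to provide.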
Context and State Management
APIs are typically stateless – each request is independent. But AI conversations are inherently stateful: the AI needs to maintain context across multiple exchanges, remember previous operations, and adapt its behavior based on the conversation flow. One of MCP's core strengths is its ability to maintain this context across multiple calls, giving the model the background it needs to execute complex, multi-step workflows accurately.
An API wrapper doesn't naturally provide this context management. You end up building it as an afterthought, rather than designing it in from the start.
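Designing state in from the start can be as simple as a session-keyed store that every tool reads and writes. The class and method names below are illustrative, not part of the MCP specification:

```python
# A minimal conversational context store, assuming the MCP server keys
# state by session ID. Names are illustrative, not from the MCP spec.
from collections import defaultdict

class SessionContext:
    def __init__(self):
        self._state = defaultdict(dict)    # session_id -> arbitrary key/value state
        self._history = defaultdict(list)  # session_id -> operations performed

    def record(self, session_id: str, operation: str, summary: str) -> None:
        self._history[session_id].append((operation, summary))

    def remember(self, session_id: str, key: str, value) -> None:
        self._state[session_id][key] = value

    def recall(self, session_id: str, key: str, default=None):
        return self._state[session_id].get(key, default)

    def recent_operations(self, session_id: str, n: int = 5):
        return self._history[session_id][-n:]
```

With this in place, a tool handling "now filter those by region" can recall which result set "those" refers to, instead of treating every call as an isolated request.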
The Confused Deputy Problem
One of the most significant security concerns with API-wrapped MCP servers is what security researchers call the "confused deputy" problem. Ideally, when an MCP server performs an action triggered by a user's request, it should execute that action on behalf of the user and with the user's permission. This is not guaranteed, however; it depends on the implementation of the MCP server.
In a typical API integration, the application has its own credentials and acts under its own identity. When you wrap this in an MCP layer, you create a situation where the MCP server might use its own elevated permissions to perform actions that should be subject to per-user authorization. The AI becomes a deputy acting on the user's behalf, but with powers that exceed what the user should have.
Security Risk: Attackers can exploit MCP proxy servers that connect to third-party APIs, creating "confused deputy" vulnerabilities. This attack allows malicious clients to obtain authorization codes without proper user consent by exploiting the combination of static client IDs, dynamic client registration, and consent cookies.
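The core mitigation is to refuse to fall back on the server's own elevated credential: every downstream call must carry a token delegated by the requesting user. A sketch under that assumption, with hypothetical names throughout:

```python
# Sketch of per-user delegation to avoid the confused deputy problem.
# user_tokens maps users to tokens they have delegated to this server;
# the names execute_for_user and ConfusedDeputyError are hypothetical.

class ConfusedDeputyError(Exception):
    pass

def execute_for_user(user_id: str, action: str, user_tokens: dict, service_token: str) -> dict:
    token = user_tokens.get(user_id)
    if token is None:
        # Never silently substitute the server's own credential here.
        raise ConfusedDeputyError(f"no delegated token for user {user_id}")
    if token == service_token:
        raise ConfusedDeputyError("user calls must not reuse the service credential")
    return {"action": action, "token_used": token}
```

The failure mode this guards against is exactly the wrapper default: a server that holds one powerful API key and spends it on whatever any user (or any prompt) asks for.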
MCP as a Non-Visual User Interface
To truly understand why MCP servers shouldn't be built as API wrappers, we need to recognize what MCP fundamentally represents: a non-visual user interface. When a user interacts with an AI agent through MCP, they're using an interface – it happens to be conversational rather than graphical, but it's a UI nonetheless. This reframing has profound implications for how we should architect MCP servers.
The UI Protection Model
Traditional graphical user interfaces have evolved sophisticated mechanisms to protect users from self-harming actions. When you try to delete a file, the UI asks for confirmation. When you're about to perform a bulk operation, the interface shows you a preview of what will be affected. When you're making an irreversible change, the system might require you to type the name of what you're deleting to ensure you understand the consequences.
These protections aren't there because the underlying API can't handle the delete operation – the API will happily execute any valid command it receives. The protections exist because humans make mistakes, act impulsively, or misunderstand the scope of their actions. A good UI anticipates these human factors and builds guardrails accordingly.
MCP, as an interface for autonomous AI agent use, requires the same level – if not a higher level – of user protection. The conversational nature of AI interaction actually amplifies the risk: users express intent in natural language, which is inherently ambiguous. An AI agent must interpret that intent and translate it into concrete actions. The gap between what the user meant and what the AI understood can lead to destructive outcomes.
The UI Parallel: Just as a file manager UI doesn't directly expose raw filesystem API calls to users, an MCP server shouldn't directly expose raw API operations to AI agents without appropriate safeguards, context interpretation, and user protection mechanisms.
Why Public APIs Don't Make Good UI Foundations
While it's technically possible to build a user interface on top of a public API, this is rarely considered best practice in software development. The industry standard approach is to build UIs on top of dedicated backend APIs that are specifically designed to support UI requirements – not the same APIs offered to the public for programmatic integrations.
Why this separation? Because public APIs and UI-supporting APIs serve fundamentally different purposes:
- Public APIs are designed for efficient, programmatic access by developers who understand the technical details. They assume the consumer (the developer writing code) will handle validation, confirmation flows, error recovery, and user feedback in their application layer.
- UI-supporting APIs are designed with user interaction patterns in mind. They might return additional metadata about the consequences of an action, provide preview capabilities, or implement undo mechanisms. They're built to support the specific needs of the UI layer.
Consider how modern web applications are structured. A company might offer a public REST API for third-party integrations, but their own web interface typically communicates with a separate set of backend services. These internal APIs might aggregate data differently, provide UI-specific validation, or include additional safety checks that don't make sense in a public API context.
The same principle applies to MCP. An MCP server acts as the backend for a conversational UI. Building it as a wrapper over a public API is like building a web application that directly calls public API endpoints without any intermediate layer to handle UI concerns. It can work, but it's fragile, risky, and misses opportunities for appropriate user protection.
Context and Intent in UI Design
Good UIs understand context. They know what the user was doing before the current action, what state the system is in, and what the user is trying to accomplish. This contextual awareness enables smarter interactions and better safety mechanisms.
A traditional UI can disable a button if the current context makes that action invalid. It can change its behavior based on what the user has selected. It can maintain state across a series of related operations. Public APIs typically don't provide this kind of contextual support – they're designed to be stateless and respond to isolated requests.
An MCP server, as a UI backend, needs to maintain conversational context, understand the chain of operations the user is performing, and make intelligent decisions about when to prompt for confirmation or provide additional information. This context management can't be bolted onto an API wrapper as an afterthought – it needs to be a core architectural consideration.
Building MCP Right: An Independent Approach
Rather than wrapping existing APIs, MCP servers should be purpose-built with AI agent interactions in mind, following UI design principles rather than API consumption patterns. This means:
Intent Verification and Preview
MCP servers should implement mechanisms to verify user intent before executing potentially harmful operations:
- Show previews of what will be affected before taking action
- Require explicit confirmation for destructive operations
- Provide "undo" capabilities where possible
- Log all actions for audit trails
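The first three items can be combined into a two-phase "preview then confirm" flow, where a destructive tool never executes directly. All names below are illustrative; an actual server would tie the confirmation step to the MCP client's consent mechanism:

```python
# A minimal two-phase preview/confirm flow for destructive operations.
# PENDING holds previewed operations; AUDIT_LOG records every execution.
import uuid

PENDING: dict = {}
AUDIT_LOG: list = []

def preview_delete(item_ids: list) -> dict:
    """Phase 1: describe the impact and return a confirmation handle."""
    handle = str(uuid.uuid4())
    PENDING[handle] = {"op": "delete", "items": item_ids}
    return {"confirm_token": handle,
            "summary": f"This will permanently delete {len(item_ids)} items."}

def confirm_delete(confirm_token: str) -> dict:
    """Phase 2: execute only an operation the user has already previewed."""
    op = PENDING.pop(confirm_token, None)
    if op is None:
        raise KeyError("no pending operation for this token")
    AUDIT_LOG.append(op)  # every executed action is recorded for audit
    return {"deleted": len(op["items"])}
```

Because the confirmation token only exists after a preview, the AI cannot skip straight to the destructive step, and the audit log captures exactly what was executed.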
As Writer's engineering team emphasizes, "The actions performed by the MCP servers should always be confirmed by the users or restricted to reduce risk to an acceptable level."
Context-Aware Rate Limiting
Instead of simple request-per-minute limits, implement intelligent throttling that considers:
- The cost or impact of each operation
- The user's historical patterns
- The current context of the conversation
- The cumulative resource consumption of a task
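One way to sketch this is to charge each operation an estimated cost against a per-task budget rather than counting raw requests. The cost table and ceiling below are illustrative assumptions:

```python
# Sketch of cost-aware throttling: each operation debits an estimated
# cost from a per-task budget instead of counting requests per minute.
OPERATION_COST = {"read": 1, "search": 2, "bulk_export": 25}

class TaskBudget:
    def __init__(self, ceiling: int = 100):
        self.ceiling = ceiling
        self.spent = 0

    def allow(self, operation: str) -> bool:
        cost = OPERATION_COST.get(operation, 10)  # unknown ops assumed costly
        if self.spent + cost > self.ceiling:
            return False  # caller should pause and ask the user before continuing
        self.spent += cost
        return True
```

A hundred cheap reads and four bulk exports consume the same budget very differently, which is the distinction a flat requests-per-minute limit cannot make.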
Granular, Dynamic Permissions
Move beyond coarse-grained OAuth scopes to implement permission systems that understand:
- What data the user is trying to access and why
- Whether the operation aligns with the stated intent
- The potential blast radius of an action
- The sensitivity of the data involved
Not every tool should be available to every AI query or user. Define what actions the AI is allowed to perform via MCP and restrict everything else. For instance, if an AI assistant should only read customer data but never delete it, the MCP server itself should enforce that rule.
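That read-only rule belongs in the server's tool layer, enforced regardless of what any token happens to carry. A sketch of the customer-data example, with hypothetical resource and action names:

```python
# Server-side policy for the example above: customer data may be read
# or listed, never deleted, no matter what the caller's token allows.
ALLOWED_ACTIONS = {"customers": {"read", "list"}}

def check_policy(resource: str, action: str) -> None:
    allowed = ALLOWED_ACTIONS.get(resource, set())
    if action not in allowed:
        raise PermissionError(f"{action} on {resource} is not permitted")

def delete_customer(customer_id: str):
    check_policy("customers", "delete")  # always raises under this policy
```

Unlike the token-scope check, this is a hard server-side ceiling: even a token with delete rights cannot reach a destructive code path the server never exposes.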
The DataLabs.Store Approach: UI Principles in Practice
The AI Context Bridge from DataLabs.Store exemplifies this UI-oriented approach to MCP server design. Rather than wrapping HubSpot's public API – which was designed for developer integrations – it treats the MCP interface as what it truly is: a conversational UI for data exploration and analysis.
Key architectural decisions reflect UI design principles:
- Dedicated backend infrastructure: Rather than relying on the public API, it maintains a synchronized database optimized specifically for conversational, analytical queries – much like how web applications maintain dedicated backend services for their UI needs.
- Read-only by design: Recognizing that data analysis through conversation carries inherent risks of misinterpretation, the system eliminates destructive operations entirely. This is analogous to how many UIs offer separate "view" and "edit" modes.
- Semantic understanding: The system provides metadata that helps AI understand data relationships and business context – similar to how UI labels, tooltips, and help text guide human users.
- Query optimization for conversational patterns: The architecture anticipates how humans naturally explore data through conversation, rather than optimizing for predetermined programmatic access patterns.
- Context preservation: The system enables complex, multi-step analytical operations that build on previous queries – maintaining conversational state in a way that public APIs typically don't support.
This architecture recognizes that conversational interfaces have different requirements than programmatic integrations. By treating MCP as a UI concern rather than an API consumption concern, the Context Bridge avoids the pitfalls of API wrapping while providing capabilities that align with how humans actually want to interact with their data through AI.
Conclusion: Interfaces Need Interface-Appropriate Backends
APIs and MCP serve different purposes and should be built with different priorities. APIs can be a part of tools that are available via an MCP server. For example, when an LLM decides on the tool to use, that tool can include making a certain API request and then providing the response to the LLM. But this doesn't mean the MCP server itself should be designed as an API wrapper.
The key insight is recognizing MCP for what it truly is: a non-visual user interface. Just as we wouldn't build a modern web application by directly exposing public API endpoints to the browser without any intermediate layer, we shouldn't build MCP servers by directly wrapping public APIs without appropriate UI-oriented safeguards and context management.
The consumption model matters: APIs assume a responsible, knowledgeable developer is orchestrating calls within programmed logic. Traditional UIs assume a human user who needs protection from mistakes, requires confirmation for destructive actions, and benefits from contextual guidance. MCP combines the worst of both worlds if done incorrectly – the autonomous operation of programmatic API access with the ambiguity and error-prone nature of human communication.
Bottom Line: Build MCP servers as UI backends, not API wrappers. Apply the same level of user protection, context awareness, and safety mechanisms that we've learned to implement in traditional interfaces. The fact that the interface is conversational rather than graphical doesn't change the fundamental responsibility to protect users from unintended consequences.
As organizations increasingly adopt AI agents with MCP access, the architectural approach to MCP servers will determine not just their functionality, but their safety and trustworthiness. Taking the time to build them correctly – as purpose-built UI backends rather than thin API wrappers – will pay dividends in security, reliability, and user trust. The industry has decades of experience building safe, intuitive user interfaces. We should apply those lessons to conversational interfaces, not abandon them in pursuit of implementation convenience.