Remote MCP
Remote MCP servers are accessed over the network — typically over HTTPS. They connect AI agents to cloud APIs, SaaS platforms, third-party services, and any system that lives outside your local environment. This is where MCP's reach extends beyond your own infrastructure, and where the security considerations become most important.
What Remote MCP Means
A remote MCP server is one that communicates over the network using HTTP-based transport — either the newer Streamable HTTP transport or the older Server-Sent Events (SSE) transport it supersedes in the current specification. The server might be hosted by a third party, self-hosted in the cloud, or running on a different machine in your organisation.
Remote MCP is what makes AI agents truly capable of operating in the real world. Local MCP gives agents access to your files and development tools. Remote MCP gives them access to everything else: your CRM, your email service, your cloud storage, your project management tool, your payment processor, your analytics platform. Any service with an API can be wrapped in an MCP server and made available to your agents.
The transport mechanism matters here. Unlike local MCP, which uses stdio (standard input/output) for low-latency, same-machine communication, remote MCP goes over HTTP. That means network latency, connection management, authentication headers, and all the other realities of distributed systems. It also means the server and client don't have to be on the same machine — which is the whole point.
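Under the hood, MCP messages are JSON-RPC 2.0, so a remote tool call is a JSON body POSTed over HTTPS. A minimal sketch of what that body looks like — the endpoint, the `crm_lookup` tool, and the bearer token are all hypothetical placeholders:

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> dict:
    # JSON-RPC 2.0 envelope for an MCP tools/call request
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

payload = build_tool_call(1, "crm_lookup", {"email": "alice@example.com"})
body = json.dumps(payload)

# In production this body is POSTed over HTTPS with auth headers attached:
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <token>",  # placeholder, not a real credential
}
```

Everything the agent sends — tool name, arguments, auth headers — crosses the network in that request, which is why the security sections below matter.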
I build remote MCP integrations when clients need their AI systems to reach beyond local resources. The pattern is consistent: identify the external services the agent needs, build or configure MCP servers for each one, secure the connections, and expose them through an orchestration layer that manages authentication, rate limiting, and failure handling.
Common Use Cases
Remote MCP covers every external service your AI agents need to interact with. These are the patterns that come up most often.
SaaS Integrations
Google Workspace, Slack, Notion, Jira, Salesforce, HubSpot — the platforms your team already uses daily. Remote MCP servers wrap their APIs so agents can read emails, update tickets, search documents, and post messages through the same protocol they use for everything else.
Cloud Service APIs
AWS, Google Cloud, Azure, Vercel, Cloudflare — infrastructure and platform services. Remote MCP servers can manage deployments, query cloud databases, interact with storage buckets, and monitor services. The agent becomes a capable cloud operator without needing direct console access.
Data & Analytics Services
Analytics platforms, data warehouses, business intelligence tools, and reporting APIs. Remote MCP makes it possible for agents to pull metrics, generate reports, and answer questions about your business data by querying the services where that data actually lives.
Communication Services
Email (SendGrid, Resend, Gmail), messaging (Slack, Teams), SMS (Twilio), and notification platforms. Agents that can communicate on your behalf — sending updates, responding to inquiries, and routing messages — rely on remote MCP to reach those services.
Security Considerations
Remote MCP introduces trust boundaries that don't exist with local servers. Every connection crosses a network boundary, and that changes the security calculus significantly.
Authentication & Credentials
Remote MCP servers need credentials — API keys, OAuth tokens, service account keys. Managing those credentials securely is a first-order concern. I use secret management systems (not environment variables in plain text) and ensure credentials are scoped to the minimum permissions each server needs. Token rotation and expiry policies are non-negotiable.
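The scoping and expiry checks can be sketched in a few lines. This is an illustrative stand-in, not a real secret-manager client — reading from the environment here is for demonstration only, and the `crm:read` scope is a made-up example:

```python
import os
import time
from dataclasses import dataclass

@dataclass
class Credential:
    token: str
    scopes: tuple        # minimum permissions this server needs
    expires_at: float    # unix timestamp; expiry forces rotation

def load_credential(name: str) -> Credential:
    # Stand-in for a real secret manager (Vault, AWS Secrets Manager, etc.)
    return Credential(
        token=os.environ.get(name, "dev-placeholder"),
        scopes=("crm:read",),
        expires_at=time.time() + 3600,
    )

def require_valid(cred: Credential, needed_scope: str) -> str:
    # Refuse expired or over-broad use before any request is sent
    if time.time() >= cred.expires_at:
        raise PermissionError("credential expired; rotate before use")
    if needed_scope not in cred.scopes:
        raise PermissionError(f"credential lacks scope {needed_scope!r}")
    return cred.token
```

The point of the design is that every outbound call passes through `require_valid`, so an expired or under-scoped credential fails loudly instead of silently over-reaching.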
Data in Transit
Every tool call to a remote MCP server sends data over the network. That data might include customer information, internal documents, database queries, or any other sensitive content the agent is working with. TLS is the baseline, but you also need to think about what data is being sent and whether it should be leaving your environment at all.
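One concrete control is an egress filter that scrubs obvious sensitive patterns from tool arguments before they leave your environment. A minimal sketch — real deployments need much more robust data classification than two regexes, and the patterns below are illustrative only:

```python
import re

# Hypothetical redaction patterns for outbound MCP tool arguments
PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # card-number-like digit runs
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact(value: str) -> str:
    for pattern in PATTERNS:
        value = pattern.sub("[REDACTED]", value)
    return value

def scrub_arguments(arguments: dict) -> dict:
    # Apply redaction to every string field before the call goes out
    return {k: redact(v) if isinstance(v, str) else v
            for k, v in arguments.items()}
```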
Trust Boundaries
When you connect to a third-party MCP server, you're trusting that server with the data you send it and the actions it performs on your behalf. That trust needs to be evaluated carefully. Who operates the server? What do they do with the data? What happens if the server is compromised? For sensitive workloads, self-hosting remote MCP servers is often the right call.
Rate Limiting & Cost Control
Remote services charge money and enforce rate limits. An agent making uncontrolled API calls can rack up unexpected costs or get your account throttled. I build rate limiting into the MCP layer itself — not just relying on the downstream service's limits — so you have control over how aggressively your agents consume external resources.
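A token bucket is a common way to implement that client-side limit. A minimal sketch, enforced before the request ever reaches the downstream service:

```python
import time

class TokenBucket:
    """Client-side rate limiter for outbound MCP calls: at most `rate`
    calls per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Calls denied by the bucket can be queued, retried with backoff, or surfaced to the agent as a tool error, depending on how aggressive you want consumption to be.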
Architecture Patterns
How you deploy remote MCP servers affects security, performance, and manageability. There are three common patterns, and most production systems use a combination.
Direct connection. The AI client connects directly to the remote MCP server. This is the simplest pattern — suitable for development and simple single-user setups. The client handles authentication and error handling. The downside is that every client needs its own credentials, and there's no centralised control over which servers are accessible.
Gateway pattern. A gateway or proxy sits between the client and the remote MCP servers. The gateway handles authentication, rate limiting, logging, and routing. Clients connect to the gateway, which forwards requests to the appropriate MCP server. This is the pattern I recommend for teams and production deployments — it gives you a single enforcement point for security policy.
Self-hosted servers. Instead of connecting to a third party's MCP server, you run your own. You build an MCP server that wraps the external API, host it on your own infrastructure, and connect your agents to it. This gives you full control over the data flow, caching, and error handling — at the cost of maintaining the server yourself.
In practice, most systems I build use the gateway pattern as the backbone, with a mix of third-party and self-hosted MCP servers behind it. The gateway provides the consistent security layer. Individual servers are chosen based on the specific requirements of each integration — third-party when it's well-maintained and trustworthy, self-hosted when you need more control.
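The gateway's role as a single enforcement point can be sketched as a small router: authenticate the client, log the call, then pick the upstream server. The route table, client IDs, and internal hostnames below are all hypothetical:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-gateway")

# Hypothetical registry mapping tool-name prefixes to upstream MCP servers
ROUTES = {
    "crm_": "https://mcp.crm.internal",
    "mail_": "https://mcp.mail.internal",
}

ALLOWED_CLIENTS = {"support-agent"}

def route(client_id: str, tool: str, arguments: dict) -> tuple:
    # Single enforcement point: authenticate, log, then select the upstream
    if client_id not in ALLOWED_CLIENTS:
        raise PermissionError(f"unknown client {client_id!r}")
    for prefix, upstream in ROUTES.items():
        if tool.startswith(prefix):
            log.info("routing %s -> %s", tool, upstream)
            return upstream, json.dumps({"name": tool, "arguments": arguments})
    raise LookupError(f"no upstream registered for tool {tool!r}")
```

Because every call flows through `route`, adding rate limiting, redaction, or audit logging means changing one function rather than every client.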
Where It Fits in the Stack
Remote MCP is the reach layer — extending your AI system's capabilities beyond the local environment into the broader ecosystem of cloud services and third-party platforms.
If local MCP is the foundation and administrative MCP is the infrastructure layer, remote MCP is the integration layer. It's how your AI system participates in the broader ecosystem of tools and services that your organisation depends on.
Most production AI systems need all three types. An agent handling a customer inquiry might use local MCP to read internal documentation, administrative MCP to query the customer database, and remote MCP to update the CRM and send a follow-up email. The agent doesn't distinguish between them — the MCP protocol abstracts the location and transport — but the architecture behind each server type is different.
The key design decision with remote MCP is how much to expose. Start with read-only integrations (querying data, searching records, reading messages) and expand to write operations (creating records, sending messages, triggering actions) only after you've validated the safety controls. The blast radius of a misconfigured remote MCP server is bounded only by the permissions of the credentials it holds.
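That read-first, write-later progression reduces to a simple policy gate. A sketch with made-up tool names — the real lists would come from your own integrations, and `approved` would be wired to a human approval step:

```python
# Hypothetical tool lists: reads are exposed by default, writes are gated
READ_ONLY = {"crm_search", "mail_read", "docs_search"}
WRITE = {"crm_update", "mail_send"}

def authorize(tool: str, approved: bool = False) -> bool:
    if tool in READ_ONLY:
        return True         # read-only tools are allowed by default
    if tool in WRITE:
        return approved     # write tools require an explicit approval gate
    return False            # unknown tools are denied outright
```

Denying unknown tools by default matters as much as gating writes: it bounds the blast radius to the tools you have deliberately reviewed.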
Pairs With
Local MCP
Local and remote MCP work together in almost every production system. Local handles fast, private operations. Remote handles external services. The MCP protocol makes them interchangeable from the agent's perspective — the agent calls tools, not servers.
Safety
Remote MCP amplifies the need for safety controls. Every remote tool call is a potential data leak, a potential cost, and a potential side effect in an external system. Approval gates, output filtering, and scope constraints are essential when agents can reach the outside world.
Agents
Agents that operate in the real world — customer service, operations automation, data analysis — rely heavily on remote MCP. It's what turns an agent from a local assistant into a system that can actually interact with your business tools and external partners.
Observability
Remote MCP calls are the most important to monitor. Network latency, error rates, credential failures, rate limit hits, and unexpected responses all need tracking. The gateway pattern makes this straightforward — all traffic flows through a single instrumentation point.
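At its simplest, that instrumentation is a wrapper around each remote call that records latency and error counts per tool. A minimal sketch — exporting the metrics to something like Prometheus or OpenTelemetry is assumed to happen elsewhere:

```python
import time
from collections import defaultdict

# Per-tool counters: call volume, error count, cumulative latency
metrics = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def instrumented(tool: str, call):
    # Wrap a remote MCP call, recording latency and failures per tool
    start = time.monotonic()
    try:
        return call()
    except Exception:
        metrics[tool]["errors"] += 1
        raise
    finally:
        metrics[tool]["calls"] += 1
        metrics[tool]["total_ms"] += (time.monotonic() - start) * 1000
```

Rate-limit hits and credential failures surface here as error spikes on specific tools, which is usually the first signal that an upstream integration needs attention.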
Need to connect your AI to external services?
I build remote MCP integrations that are secure, reliable, and well-governed. From SaaS connectors to custom API wrappers, I can help you extend your AI system's reach without compromising your security posture.