MCP Servers

MCP servers expose tools to AI agents using the Model Context Protocol (MCP), an open protocol for structured AI tool access. In ProvenanceOne, every MCP tool call — whether the server is hosted externally or on the platform — routes through the MCP Gateway proxy. The gateway enforces tool allowlists and denylists, input and output data loss prevention (DLP), and rate limiting, and logs every invocation as an mcp.tool_called audit event.

MCP servers are the right choice when you need to expose a rich, multi-tool external service to agents and want centralised policy control over how those tools are used.


When to use MCP servers

Use an MCP server when:

  • You want to integrate an external tool provider that already supports MCP (e.g. GitHub MCP, Slack MCP, database query servers)
  • You are building a custom tool server that you want multiple agents across multiple workflows to share
  • You need centralised governance over which specific tools within a server agents are permitted to call
  • You want all tool interactions with a service to be audited in one place regardless of which agent or workflow triggers them

For simpler, single-purpose operations, a skill is often a better fit. See the FAQ below for a direct comparison.


Key concepts

Model Context Protocol (MCP) — an open protocol that defines a standard interface for AI systems to discover and invoke tools provided by external servers. ProvenanceOne implements MCP client capabilities in the agent runtime.

MCP Gateway — the mcpproxy serverless function that sits between all agents and all MCP servers. No agent connects directly to an MCP server. Every tool call passes through the gateway, where policies are evaluated before the call is forwarded and before the response is returned.

External vs. hosted — an MCP server can be external (running outside ProvenanceOne, accessed via a URL) or hosted (running as a container hosting service task managed by ProvenanceOne). Hosted servers have a deployment lifecycle managed within the platform.

Execution mode — hosted servers have an execution mode that controls when the Fargate task runs:

  • always_on — the Fargate task runs continuously, regardless of traffic
  • on_demand — the task starts when a tool call arrives and stops automatically after 15 minutes of idle time
  • disabled — the server will not accept connections

Gateway policies — workspace-level policies stored in the platform that the MCP Gateway evaluates on every tool call. Policies define tool allowlists and denylists, input and output DLP rules, and rate limits.


How it works

Request flow

  1. An agent step in a workflow run calls a tool on an MCP server.
  2. The call is intercepted by the MCP Gateway (mcpproxy serverless function).
  3. The gateway evaluates the workspace gateway policy: is the tool on the allowlist? Does the input violate DLP rules?
  4. If the policy permits the call, the gateway forwards it to the MCP server (external or hosted Fargate task).
  5. The server responds with the tool result.
  6. The gateway evaluates the response against output DLP rules.
  7. The (possibly redacted) result is returned to the agent.
  8. The mcp.tool_called audit event is emitted with the tool name, workspace, agent, and outcome.
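The policy checks in steps 3–6 can be sketched as a single evaluation function. This is an illustrative model, not the gateway's actual implementation; the policy shape (the allowlist, denylist, input_dlp, and output_dlp keys) and the forward callable are hypothetical.

```python
import re

def evaluate_call(policy, tool_name, args, forward):
    """Toy model of the gateway's per-call evaluation.

    policy  -- dict with hypothetical keys 'allowlist', 'denylist',
               'input_dlp', 'output_dlp' (lists of regex patterns)
    forward -- callable that actually invokes the MCP server
    """
    # Denylist takes precedence over the allowlist.
    if tool_name in policy["denylist"]:
        return {"outcome": "denied", "reason": "denylist"}
    if policy["allowlist"] and tool_name not in policy["allowlist"]:
        return {"outcome": "denied", "reason": "not_on_allowlist"}

    # Input DLP: redact matching patterns in arguments before forwarding.
    for pattern in policy["input_dlp"]:
        args = {k: re.sub(pattern, "[REDACTED]", v) for k, v in args.items()}

    result = forward(tool_name, args)

    # Output DLP: redact matching patterns before returning to the agent.
    for pattern in policy["output_dlp"]:
        result = re.sub(pattern, "[REDACTED]", result)

    return {"outcome": "allowed", "result": result}
```

Note that the DLP passes run on both sides of the forward call, which is why redaction applies even when the server itself behaves correctly.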

Warning: MCP servers are external services that can return arbitrary content, including content designed to manipulate agent behaviour (prompt injection). Always validate and sanitise MCP server outputs, especially when they will be used in further agent steps or actions.

Hosted server deployment lifecycle

For servers hosted on ProvenanceOne (the container hosting service), the deployment states are:

  • building — the server package is being built and containerised
  • deployed — the container is ready; the server can be started
  • active — the server is running and accepting tool calls
  • paused — the server is stopped; tool calls will fail
  • error — the server encountered an error during build or start

Deploy a hosted server by:

  1. Uploading the server package via POST /mcp-servers/{id}/upload-url
  2. Calling POST /mcp-servers/{id}/deploy
  3. Starting the server with the start endpoint once it reaches deployed status
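The lifecycle above can be modelled as a small state machine. The state names come from this page; the transition set, including error back to building on a re-deploy, is an assumption for illustration, not documented platform behaviour.

```python
# Hypothetical transition table for the hosted-server deployment states.
VALID_TRANSITIONS = {
    "building": {"deployed", "error"},   # build completes or fails
    "deployed": {"active", "error"},     # start endpoint brings it up
    "active":   {"paused", "error"},     # stop, or runtime failure
    "paused":   {"active"},              # restart a stopped server
    "error":    {"building"},            # assumption: re-deploy rebuilds
}

def transition(current, target):
    """Return the new state, or raise if the move is not allowed."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current} to {target}")
    return target
```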

on_demand idle timeout

When execution mode is on_demand, the Fargate task starts automatically when a tool call arrives and shuts down after 15 minutes of inactivity. The first tool call after an idle period incurs a cold start latency. If latency is critical for your use case, use always_on mode.
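A toy model of this behaviour, assuming only the 15-minute idle window documented above (the class and method names are hypothetical):

```python
IDLE_TIMEOUT_SECONDS = 15 * 60  # on_demand idle window

class OnDemandTask:
    """Toy model: the task starts on the first call after an idle
    period (cold start) and stops after 15 minutes without traffic."""

    def __init__(self):
        self.running = False
        self.last_call = None

    def handle_call(self, now):
        """Record a tool call; return True if it paid a cold start."""
        cold_start = not self.running
        self.running = True
        self.last_call = now
        return cold_start

    def tick(self, now):
        """Shut the task down once the idle window has elapsed."""
        if self.running and now - self.last_call >= IDLE_TIMEOUT_SECONDS:
            self.running = False
```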

Auth methods

MCP servers support the following authentication methods:

  • OAuth 2.0 — servers that support OAuth flows for user or service authorization
  • API Key — simple bearer token authentication
  • mTLS — mutual TLS for high-security server-to-server communication
  • Service Account — Google or cloud service account credentials
  • HMAC Webhook — webhook signature validation for inbound event verification

Configuration options

  • serverId (string, auto-generated) — unique identifier (mcp_*)
  • name (string, required) — internal name
  • displayName (string, required) — human-readable name shown in the UI
  • status (enum) — one of active, paused, error, building, deployed
  • executionMode (enum, required) — one of always_on, on_demand, disabled

Gateway policy fields

Gateway policies are workspace-level and apply to all tool calls. Each policy specifies:

  • Tool allowlist — explicit list of tool names that are permitted; all others are denied
  • Tool denylist — explicit list of tool names that are always denied, regardless of allowlist
  • Input DLP rules — patterns to detect and redact in tool call arguments before forwarding
  • Output DLP rules — patterns to detect and redact in tool call results before returning to the agent
  • Rate limits — maximum tool calls per time window per agent, workflow, or workspace

Note: when the allowlist and denylist are evaluated together, the denylist takes precedence: a tool that appears on both lists is always denied.
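The precedence rule is easiest to see as code. This is a hypothetical helper, not platform code:

```python
def is_permitted(tool, allowlist, denylist):
    """Denylist wins: a tool on both lists is always denied."""
    if tool in denylist:
        return False
    return tool in allowlist
```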


API endpoints

  • POST /mcp-servers/{id}/connect — register a connection to an external MCP server
  • POST /mcp-servers/{id}/authorize — initiate an OAuth authorization flow
  • POST /mcp-servers/{id}/deploy — deploy a hosted MCP server package
  • POST /mcp-servers/{id}/upload-url — get a pre-signed URL to upload a hosted server package
  • GET /mcp-servers/{id}/metrics — retrieve usage metrics for the server
  • GET /mcp-servers/{id}/logs — retrieve logs from the server
  • GET /gateway-policies — list workspace gateway policies
  • POST /gateway-policies — create a new gateway policy
  • PUT /gateway-policies/{id} — update a gateway policy
  • DELETE /gateway-policies/{id} — delete a gateway policy
  • GET /bus/mcp/resources — list MCP resources on the event bus
  • POST /bus/mcp/resources/{name}/call — call an MCP resource via the bus

Examples

GitHub MCP server (external, always_on)

  • Register the GitHub MCP server URL and configure OAuth 2.0 auth
  • Set execution mode to always_on for consistent latency
  • Create a gateway policy with an allowlist that includes only create_pull_request, list_issues, and get_file_contents
  • Deny delete_repository and update_team_membership explicitly in the denylist
  • Attach to a code review agent
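Assuming a policy document shaped roughly like the fields listed under "Gateway policy fields" (the toolAllowlist, toolDenylist, and rateLimits property names here are illustrative guesses, not a documented schema), the POST /gateway-policies body for this example might look like:

```json
{
  "toolAllowlist": ["create_pull_request", "list_issues", "get_file_contents"],
  "toolDenylist": ["delete_repository", "update_team_membership"],
  "rateLimits": [
    { "scope": "agent", "maxCalls": 60, "windowSeconds": 60 }
  ]
}
```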

Database query server (hosted, on_demand)

  • Upload a custom MCP server package that wraps a read-only database query interface
  • Deploy to the container hosting service with execution mode on_demand — queries are infrequent and cold start latency is acceptable
  • Configure input DLP to block SQL injection patterns before the query is forwarded
  • Configure output DLP to redact PII field names (e.g. SSN, credit card patterns) in query results
  • Attach to a data analysis agent
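The output DLP step in this example could be approximated with patterns like the following. These regexes are illustrative only; real DLP rules are configured in the gateway policy, and robust PII detection needs far more than two patterns.

```python
import re

# Hypothetical output-DLP patterns: US SSNs and 16-digit card numbers.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN, e.g. 123-45-6789
    re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),  # card number, 4x4 digits
]

def redact(text):
    """Replace every match of every pattern with a redaction marker."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```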

Common mistakes

  • Connecting agents directly to MCP servers without configuring gateway policies. Without a policy, all tools on the server are accessible. Define an explicit allowlist from the start.
  • Using always_on for servers with very low utilisation. always_on keeps a Fargate task running continuously, which incurs cost even when idle. Use on_demand for low-traffic servers.
  • Passing MCP server output to downstream steps without validating it. MCP servers can return malformed, unexpected, or adversarially crafted content. Add a validation or transform step after MCP steps in high-risk workflows.
  • Not reviewing mcp.tool_called audit events. These events are your primary visibility into what tools agents are actually calling. Review them regularly, especially when a new MCP server is first connected.
  • Forgetting that disabled mode blocks all calls. If an MCP server is set to disabled, agent steps that depend on it will fail. Check server status before running production workflows.

Troubleshooting

Server in error status — check the server logs via GET /mcp-servers/{id}/logs. Common causes: the uploaded package failed to build, a missing dependency, or a misconfigured entrypoint.

Tool calls failing with policy.violation — the gateway policy is blocking the call. Check the workspace gateway policy: the tool may be on the denylist or not on the allowlist. Update the policy via PUT /gateway-policies/{id}.

Auth failures on external MCP server — verify that the connection credentials are valid and have not expired. For OAuth connections, the token may need to be refreshed. Re-authorise via POST /mcp-servers/{id}/authorize.

High cold start latency on on_demand server — the Fargate task is starting from cold. Either accept the latency for low-frequency use cases, or switch to always_on if consistent latency is required.

Rate limit errors — a gateway policy rate limit is being exceeded. Review the policy rate limit settings and either increase the limit or reduce call frequency in the workflow logic.
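A common way to implement the per-window limit described in the policy table is a sliding window over recent call timestamps. This sketch is illustrative, not the gateway's actual implementation:

```python
from collections import deque

class SlidingWindowLimiter:
    """At most max_calls tool calls per window seconds, per scope."""

    def __init__(self, max_calls, window):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now):
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```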


Security and permissions

  • editor and admin can create, configure, and deploy MCP servers and manage gateway policies.
  • viewer can view server configuration and metrics.
  • ALL MCP tool calls route through the MCP Gateway (mcpproxy serverless function). There is no direct agent-to-server path.
  • The mcp.tool_called audit event is emitted for every tool invocation, regardless of outcome.
  • Gateway policy violations emit the policy.violation audit event.
  • Gateway policy changes emit gateway_policy.created, gateway_policy.updated, and gateway_policy.deleted audit events.
  • Auth credentials for MCP server connections are stored in the secrets vault and never exposed via the API.


FAQ

What is the difference between a skill and an MCP server?

A skill is a sandboxed serverless function you write and upload directly to ProvenanceOne. It is workspace-specific, has a defined JSON Schema, and runs in an isolated sandboxed environment. An MCP server is an external or hosted service that exposes tools via the Model Context Protocol. MCP servers are typically richer, multi-tool integrations that already exist or are maintained separately. Skills are best for custom, workspace-specific logic; MCP servers are best for integrating with established external tool providers.

Is MCP traffic audited?

Yes. The mcp.tool_called audit event is emitted for every tool invocation that passes through the MCP Gateway, regardless of which agent or workflow triggered it. The event records the tool name, workspace, agent, and outcome. Gateway policy violations emit a separate policy.violation event.

What is on_demand mode?

on_demand execution mode means the hosted MCP server's Fargate task starts automatically when a tool call arrives and stops after 15 minutes of idle time. This reduces cost for servers with intermittent traffic. The tradeoff is cold start latency on the first call after an idle period. Use always_on for latency-sensitive workflows.

How do gateway policies work?

Gateway policies are workspace-level rules evaluated by the MCP Gateway on every tool call. A policy can define a tool allowlist (only listed tools are permitted), a tool denylist (listed tools are always blocked), input DLP rules (patterns to redact in tool arguments), output DLP rules (patterns to redact in responses), and rate limits. The denylist takes precedence over the allowlist. Policies apply to all agents in the workspace.

Can I connect to an MCP server running in my own infrastructure?

Yes. Use POST /mcp-servers/{id}/connect to register an external MCP server by URL. Configure the appropriate authentication method (OAuth 2.0, API key, mTLS, service account, or HMAC webhook). All calls will still route through the ProvenanceOne MCP Gateway.