The insider guide to connecting your AI agent to hundreds of tools through a single integration point.
The average AI agent needs access to 8-12 external tools to be useful. Email, calendar, CRM, web search, code execution, browser automation, file storage, messaging. Each one requires its own API key, OAuth flow, token refresh logic, rate limit handling, and error normalization. For a solo developer, that is a week of plumbing before the agent does anything intelligent. For a team shipping a product with multi-tenant support (where each end-user authenticates their own accounts), that is months of infrastructure work.
This is the problem that tool gateways solve. One connection, many tools. Authenticate once, access everything.
The concept is not new. Unified APIs have existed in the B2B integration space for years. But the explosion of AI agents in late 2025 and 2026 has created a new category: platforms specifically designed to give LLMs and autonomous agents access to external tools through a single integration point, with authentication, schema generation, and execution handled for you.
This guide maps the entire tool gateway landscape as of April 2026. It covers how these platforms work architecturally, which ones exist, how they compare, and (most importantly) how to actually connect one so your agent goes from "chatbot that talks" to "agent that acts." We start with the structural forces driving this market, walk through a concrete implementation using the Nous Research Tool Gateway as an example, then catalog every major platform in the space.
Written by Yuma Heymans (@yumahey), who builds AI workforce infrastructure at O-mega.ai and tracks the autonomous agent ecosystem through the AI Agent Index.
Contents
- Why Tool Gateways Exist: The Structural Problem
- How Tool Gateways Work: Architecture and Mechanics
- The Nous Research Tool Gateway: A Walkthrough
- The Full Market Map: Every Tool Gateway in 2026
- Purpose-Built Agent Tool Platforms
- Unified API Platforms with Agent Support
- MCP Aggregators and Gateways
- Auth-Focused Platforms for Agent Tools
- Choosing the Right Gateway: Decision Framework
- Implementation Patterns and Code Examples
- Where This Market Is Heading
- Conclusion: The Integration Layer Is the Moat
1. Why Tool Gateways Exist: The Structural Problem
The fundamental economics of AI agents create a predictable infrastructure bottleneck. As LLM inference costs drop (Gemini 3.1 Flash Lite processes a million tokens for under a dollar, and open-source models run locally for free), the cost of making an agent "think" approaches zero. But the cost of making an agent "act" remains stubbornly high, because acting means connecting to external services, and each connection requires authentication plumbing, API translation, error handling, and ongoing maintenance.
This is not a cosmetic inconvenience. It is a structural constraint on what agents can do. Consider what happens when you build an agent that needs to read emails, check a calendar, search the web, create documents, and post to Slack. With direct integrations, you need to register OAuth apps with Google, Microsoft, Slack, and whatever search API you choose. You need to build consent screens, handle token exchange, store encrypted credentials, implement token refresh, handle rate limits, normalize errors across five different API styles, and maintain all of this as APIs change. For a single user, that is tedious. For a multi-tenant product where thousands of end-users each connect their own accounts, it is a serious engineering challenge - Nango.
The tool gateway pattern eliminates this by centralizing the integration burden. Instead of your agent connecting to N services directly, it connects to one gateway that handles connections to all N services on your behalf. The gateway manages OAuth flows, stores tokens, translates function calls into API requests, and returns normalized results. Your agent sees a clean list of available tools with standardized schemas. It calls them through a single API. The gateway handles everything else.
This pattern has existed in the B2B integration world for years through companies like Merge (unified HRIS, ATS, and CRM APIs) and Nango (white-label OAuth for SaaS integrations). But AI agents created a new consumption pattern: tools are not called by deterministic application code but by LLMs that generate function calls dynamically based on natural language instructions. This means tool definitions need to be formatted as function-calling schemas (OpenAI format, Anthropic format, or MCP tools), and the gateway needs to handle the translation between "LLM wants to call send_email with these parameters" and "here is the actual Gmail API request with the right OAuth token for this specific user."
The structural argument for why this market exists is straightforward. When intelligence is cheap and abundant (which it now is), the bottleneck shifts to capabilities. An LLM that cannot send emails, browse the web, or update a CRM is a chatbot. An LLM connected to those tools is an agent. Tool gateways are the infrastructure that turns chatbots into agents at scale. We covered this shift from passive AI to active AI agents extensively in our guide on how to make LLMs autonomous, where the tool-calling capability emerged as the single most important differentiator.
2. How Tool Gateways Work: Architecture and Mechanics
Every tool gateway, regardless of vendor, follows the same core architectural pattern. Understanding this pattern makes it much easier to evaluate and implement any specific platform. The architecture has five layers, each solving a distinct problem.
The first layer is the function schema registry. The gateway maintains a catalog of every tool it supports, defined as structured function schemas that LLMs can understand. In the OpenAI function-calling format, each tool has a name, description, and JSON Schema defining its parameters. In the Model Context Protocol (MCP) format, tools are defined with similar metadata but use a different transport mechanism. The schema registry is what makes discovery possible: your agent can query the gateway to find out what tools are available, what they do, and what parameters they accept. Some gateways (like ACI.dev) go further with "intent-aware dynamic tool discovery," where the gateway only returns tools relevant to the current task, preventing context window overload.
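To make the registry concrete, here is what a single entry looks like in the OpenAI function-calling format. The tool name and parameters below are illustrative, not copied from any specific gateway's catalog:

```python
import json

# A representative schema-registry entry in the OpenAI function-calling
# format. Tool name and parameters are illustrative.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email from the authenticated user's account.",
        "parameters": {
            "type": "object",
            "properties": {
                "recipient": {"type": "string", "description": "Destination address."},
                "subject": {"type": "string", "description": "Subject line."},
                "body": {"type": "string", "description": "Plain-text message body."},
            },
            "required": ["recipient", "body"],
        },
    },
}

# Discovery is just serving this catalog: the agent asks the gateway what
# exists and receives a list of schemas like the one above.
catalog = [send_email_tool]
print(json.dumps([t["function"]["name"] for t in catalog]))
```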
The second layer is authentication management. This is the core value proposition and the hardest engineering problem. The gateway stores OAuth client credentials (client_id, client_secret) for each supported service. When an end-user needs to connect their Gmail or Slack account, the gateway orchestrates the OAuth flow: redirect to the provider's consent screen, handle the callback, exchange the authorization code for access and refresh tokens, encrypt and store those tokens, and automatically refresh them before they expire. For a multi-tenant product, this means each of your end-users gets their own set of tokens, scoped to their own accounts, managed entirely by the gateway. Platforms like Composio support OAuth, API keys, and JWT authentication across 900+ integrations with full multi-tenant isolation.
The third layer is request translation. When your LLM generates a function call (for example, "call send_email with recipient='john@example.com' and body='Meeting at 3pm'"), the gateway translates that into the actual HTTP request to the target API. For Gmail, that means constructing a POST to https://gmail.googleapis.com/gmail/v1/users/me/messages/send with the right headers, body encoding (base64 for Gmail), and the user's OAuth bearer token injected. Different APIs have wildly different conventions (REST, GraphQL, SOAP, custom protocols), and the gateway abstracts all of this behind a uniform function-calling interface.
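The translation step for that Gmail example can be sketched as a pure function: abstract arguments in, concrete HTTP request out. The function name and return shape are illustrative; a real gateway also handles attachments, HTML bodies, and error mapping:

```python
import base64

def translate_send_email(args: dict, access_token: str) -> dict:
    """Sketch of the gateway's translation step for a Gmail-backed
    send_email call. Helper name and return shape are illustrative."""
    # Build an RFC 2822 message from the LLM's abstract arguments.
    raw = (
        f"To: {args['recipient']}\r\n"
        f"Subject: {args.get('subject', '')}\r\n"
        "\r\n"
        f"{args['body']}"
    )
    return {
        "method": "POST",
        "url": "https://gmail.googleapis.com/gmail/v1/users/me/messages/send",
        "headers": {
            "Authorization": f"Bearer {access_token}",  # this user's OAuth token
            "Content-Type": "application/json",
        },
        # Gmail expects the message base64url-encoded in the `raw` field.
        "json": {"raw": base64.urlsafe_b64encode(raw.encode()).decode()},
    }

req = translate_send_email(
    {"recipient": "john@example.com", "body": "Meeting at 3pm"}, "ya29.example-token"
)
```

The same pattern repeats for every backend: a Slack-backed tool would emit a different URL, auth header, and body encoding, but the LLM-facing interface never changes.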
The fourth layer is execution and error handling. The gateway actually makes the HTTP request, handles retries on rate limits (429 responses), normalizes errors into a format the LLM can understand, and returns the result. Good gateways also handle pagination (if the LLM asks to "list all files in my Drive" and there are thousands), timeout management, and idempotency (preventing duplicate actions if the LLM retries).
The fifth layer is transport. This is how the gateway communicates with your agent. The two dominant patterns in 2026 are REST APIs (where you call the gateway's HTTP endpoint with the function name and parameters) and MCP (Model Context Protocol), the open protocol launched by Anthropic in late 2024 that has become the de facto standard for tool connectivity - Anthropic. MCP uses a server-sent events (SSE) or HTTP transport where the gateway acts as an MCP server, and your agent connects as an MCP client. The advantage of MCP is that any MCP-compatible agent (Claude Desktop, Cursor, Windsurf, or any agent built with the MCP SDK) can connect to any MCP-compatible gateway without custom integration code.
The practical implication of this architecture is that integrating a tool gateway into your agent typically requires three steps: initialize the gateway client with your API key, fetch available tools (formatted as function schemas), and pass those schemas to your LLM alongside the user's prompt. When the LLM generates a function call, you forward it to the gateway for execution and return the result. The entire integration is usually under 20 lines of code. Our deep dive on multi-agent orchestration explores how this tool-calling pattern scales when multiple agents need coordinated access to the same external services.
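The three-step integration can be sketched as follows. `GatewayClient` and its methods are a hypothetical stand-in: real SDKs (Composio, ACI.dev, and others) differ in names but follow the same shape:

```python
class GatewayClient:
    """Hypothetical stand-in for a real gateway SDK client."""

    def __init__(self, api_key: str):
        self.api_key = api_key  # step 1: initialize with your gateway key

    def list_tools(self) -> list[dict]:
        # Step 2: fetch available tools as function schemas.
        return [{"type": "function",
                 "function": {"name": "send_email", "parameters": {"type": "object"}}}]

    def execute(self, name: str, arguments: dict) -> dict:
        # The gateway injects the user's token, translates the call into the
        # target API request, and returns a normalized result.
        return {"status": "ok", "tool": name}

def run_turn(gateway, llm_generate, prompt: str) -> dict:
    tools = gateway.list_tools()
    call = llm_generate(prompt, tools=tools)  # step 3: LLM picks a tool
    return gateway.execute(call["name"], call["arguments"])

# A fake LLM that always chooses send_email, to show only the control flow:
fake_llm = lambda prompt, tools: {"name": "send_email",
                                  "arguments": {"recipient": "john@example.com"}}
result = run_turn(GatewayClient("gw_key"), fake_llm,
                  "Email John about the 3pm meeting")
```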
3. The Nous Research Tool Gateway: A Walkthrough
The Nous Research Tool Gateway is one of the most instructive examples to study because it represents a fundamentally different approach from the SaaS-integration gateways. Rather than connecting to hundreds of business applications (Gmail, Slack, Jira), the Nous Gateway bundles the four core infrastructure capabilities that every AI agent needs: web search, image generation, text-to-speech, and browser automation. It launched on April 16, 2026, alongside Hermes Agent v0.10.0.
The philosophy behind this gateway is worth understanding. Nous Research observed that most agent developers were spending time managing API keys for the same four or five foundational services. Every agent needs to search the web. Many need to generate images. Some need browser automation. Almost all need text-to-speech for voice interfaces. Instead of making developers register accounts with Firecrawl, FAL, OpenAI, and Browser Use separately, the Nous Gateway bundles these under a single Nous Portal subscription.
What the Gateway Provides
The gateway includes four tool categories, each backed by production-grade providers. Web search and extraction is powered by Firecrawl, which replaces the need for separate Firecrawl, Exa, or Tavily API keys. Image generation runs through FAL with access to eight models including FLUX 2 Pro, GPT-Image, Ideogram, Recraft V4 Pro, Qwen, and Z-Image. Text-to-speech uses OpenAI's TTS models. Browser automation uses Browser Use for full headless browser control.
These four categories may sound narrow compared to Composio's 900+ integrations, but they cover a different problem space. Composio connects agents to SaaS applications (Gmail, Slack, Jira). The Nous Gateway provides agent infrastructure (search, generate, speak, browse). Most serious agents need both: the infrastructure capabilities from something like Nous Gateway, and the SaaS integrations from something like Composio or ACI.dev.
Setting It Up
The setup process illustrates how tool gateways simplify developer experience. Without the gateway, your Hermes Agent configuration file would look like this:
# ~/.hermes/config.yaml (WITHOUT gateway - managing keys yourself)
web:
  backend: firecrawl

# Plus separate .env or auth for:
# FIRECRAWL_API_KEY=fk_xxxxx
# FAL_KEY=fal_xxxxx
# OPENAI_API_KEY=sk_xxxxx
# BROWSERUSE_API_KEY=bu_xxxxx
With the gateway enabled, the configuration collapses to:
# ~/.hermes/config.yaml (WITH gateway - one subscription covers all)
web:
  backend: firecrawl
  use_gateway: true
The use_gateway: true flag tells the Hermes runtime to route API calls through Nous gateway endpoints instead of using direct provider keys. Authentication uses Nous Portal credentials stored in ~/.hermes/auth.json, which are set up once when you run hermes model and select Nous Portal as your provider.
For developers who want to self-host, the gateway supports environment variable overrides. Setting TOOL_GATEWAY_DOMAIN, TOOL_GATEWAY_SCHEME, and TOOL_GATEWAY_USER_TOKEN lets you point the agent at your own gateway instance rather than the Nous-hosted one.

The Mixed-Mode Pattern
One particularly thoughtful design decision in the Nous Gateway is mixed-mode support. You can enable the gateway for some tools while using your own API keys for others. If you already have a Firecrawl account with a generous quota, you can keep using your own key for web search while routing image generation through the gateway. The gateway takes precedence only when use_gateway: true is set for a specific tool category. Your existing .env keys remain intact and can be re-enabled by toggling the gateway off.
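A mixed-mode configuration might look like the following. Note that only the per-category `use_gateway` flag is documented above; the `image` category key is an illustrative assumption:

```yaml
# ~/.hermes/config.yaml (mixed mode - illustrative; the `image` key is an
# assumption, only the per-category use_gateway flag is documented)
web:
  backend: firecrawl   # keep using your own FIRECRAWL_API_KEY for search
image:
  use_gateway: true    # route image generation through the Nous gateway
```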
This non-destructive design matters because it avoids vendor lock-in. You can try the gateway, compare performance and cost against direct keys, and switch back at any time without reconfiguring anything. The practical workflow is:
# Interactive configuration
hermes model # Select Nous Portal
hermes tools # Enable/disable individual gateway tools
hermes status # Verify active configuration
Who This Is For
The Nous Gateway is purpose-built for Hermes Agent users who want the fastest path from "downloaded an agent" to "agent that can search, generate, and browse." It is not a general-purpose tool integration platform. If you need to connect to Gmail, Slack, or Salesforce, you still need a SaaS integration platform. But for the core agent infrastructure layer, the Nous Gateway eliminates four separate vendor relationships with a single subscription.
The gateway requires a paid Nous Portal subscription (free tier cannot access it). All accounts start with $5 in free credits, and model pricing ranges from $0.02 per million tokens for Hermes 2 Pro 8B to $1.00/$3.00 per million tokens for Hermes 4 405B. Tool usage bills against the same credit balance - Nous Research.
4. The Full Market Map: Every Tool Gateway in 2026
The tool gateway market in 2026 breaks into five distinct tiers, each solving a different slice of the "connect agents to tools" problem. Understanding these tiers prevents the common mistake of comparing platforms that serve fundamentally different purposes.
The first tier is purpose-built agent tool platforms. These are companies whose entire product is giving AI agents access to external tools. Composio, Arcade AI, ACI.dev, and Toolhouse live here. They provide hundreds of pre-built integrations, managed OAuth, LLM-optimized function schemas, and SDKs for every major AI framework.
The second tier is unified API platforms that added agent support. Companies like Nango, StackOne, Merge, Paragon, Unified.to, and Truto built integration infrastructure for B2B applications and then extended it for AI agents, often through MCP server support. These platforms tend to have deeper integration depth (more endpoints per service) but fewer AI-specific features.
The third tier is MCP aggregators and gateways. These are infrastructure tools that sit between MCP clients and multiple MCP servers, providing routing, governance, and management. MetaMCP, IBM ContextForge, the Linux Foundation's agentgateway, and Kong AI Gateway fall here. They do not provide integrations themselves but make it easier to manage many MCP-based integrations.
The fourth tier is auth-focused platforms that tackle the authentication problem specifically for AI agents. Stytch's Connected Apps product is the clearest example, providing OAuth and consent management designed for the agent pattern where a machine (not a human) needs delegated access to a user's accounts.
The fifth tier is workflow/automation platforms that enable tool connectivity through their existing connector ecosystems. Zapier, Make, n8n, Workato, and Pipedream have thousands of connectors built for traditional automation and have added AI agent capabilities on top. These are not pure tool gateways, but each can function as one.
The following sections break down each tier in detail.
5. Purpose-Built Agent Tool Platforms
These platforms were built from the ground up to solve the "agents need tools" problem. They are the most relevant category for developers building AI agents or autonomous systems.
Composio: The Market Leader
Composio has emerged as the dominant platform in this category, with the broadest tool catalog and deepest framework support. The platform provides 900+ pre-built integrations spanning 3,000+ individual tools across every major SaaS category: productivity (Google Workspace, Microsoft 365, Notion), developer tools (GitHub, GitLab, Jira), communication (Slack, Discord, WhatsApp, Instagram, TikTok), CRM (Salesforce, HubSpot), finance, marketing, and more.
What makes Composio particularly strong is its authentication infrastructure. The platform handles OAuth end-to-end for every supported service, including multi-tenant support where each of your end-users connects their own accounts. This means you do not need to register OAuth apps with Google, Slack, or any other provider. Composio has already done this and manages the entire consent, token exchange, and refresh cycle. The platform is SOC 2 Type II certified, which matters for enterprise deployments handling sensitive user data.
The SDK integrates with every major AI framework. Python and TypeScript SDKs are available, with native support for LangChain, CrewAI, OpenAI Assistants, Anthropic Claude, Vercel AI SDK, and LlamaIndex. Composio also provides an MCP server, making it compatible with any MCP client including Claude Desktop. An intelligent tool router (currently in beta) dynamically selects the right tools based on the agent's current task, which helps prevent context window overload when hundreds of tools are available.
Composio's pricing makes it accessible for startups and scales predictably. The free tier provides 20,000 tool calls per month, which is generous enough for development and small production workloads. The Growth plan at $29 per month covers 200,000 tool calls with overage at $0.299 per 1,000 calls. Enterprise pricing is custom. For context, our analysis of AI agent costs found that tool integration infrastructure is often the second-largest cost after LLM inference for production agents.
The primary trade-off with Composio is that it is a hosted service. Your agent's tool calls route through Composio's infrastructure, which adds latency and creates a dependency on their uptime. For teams that need self-hosted tool infrastructure, ACI.dev is the open-source alternative.
ACI.dev: The Open-Source Contender
ACI.dev (formerly Wildcard, a Y Combinator W25 company) provides 600+ pre-built integrations under an Apache 2.0 open-source license. This is the platform for teams that want the tool gateway pattern without the vendor dependency of a hosted service.
The most interesting technical feature of ACI.dev is intent-aware dynamic tool discovery. Instead of dumping all 600+ tool schemas into the LLM's context window (which would consume tens of thousands of tokens and degrade reasoning quality), ACI.dev uses semantic matching to return only the tools relevant to the current user request. If the user asks "send an email to John," the gateway returns email-related tools, not GitHub or Slack tools. This keeps the context window lean and improves tool selection accuracy.
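The idea behind intent-aware discovery can be sketched in a few lines. ACI.dev uses semantic (embedding-based) matching; here a simple word-overlap score stands in for embedding similarity so the sketch stays self-contained:

```python
# Toy sketch of intent-aware tool discovery. A bag-of-words overlap
# stands in for the embedding similarity a real system would use.

TOOL_DESCRIPTIONS = {
    "send_email": "send an email message to a recipient",
    "create_issue": "create a github issue in a repository",
    "post_message": "post a message to a slack channel",
}

def discover(request: str, top_k: int = 1) -> list[str]:
    req_words = set(request.lower().split())
    scored = sorted(
        TOOL_DESCRIPTIONS.items(),
        key=lambda kv: len(req_words & set(kv[1].split())),
        reverse=True,
    )
    # Only the best-matching tool schemas enter the LLM's context window.
    return [name for name, _ in scored[:top_k]]

print(discover("send an email to John"))  # → ['send_email']
```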
ACI.dev also introduced the agents.json specification, an open standard for API-agent contracts built on OpenAPI. The idea is that API providers can publish an agents.json file alongside their existing OpenAPI spec, explicitly defining which endpoints are safe for autonomous agent access, what parameters agents should use, and what rate limits apply. If adopted broadly, this would make it much easier for any tool gateway to onboard new APIs.
The multi-tenant authentication system supports OAuth flows and secrets management for both developers and end-users, with natural language permission boundaries. Instead of defining permissions as "read:messages, write:messages," you can define them as "can read emails but cannot delete them," and ACI.dev translates that into the appropriate OAuth scopes.
ACI.dev provides Python and TypeScript SDKs, plus a unified MCP server that exposes all tools through a single endpoint. Since it is open-source, you can self-host the entire stack, audit the code, and extend it with custom tools.
Arcade AI: The Authorization-First Platform
Arcade AI differentiates by treating authorization as a first-class concern rather than an afterthought. The platform provides 100+ integrations organized into three components: the Tool SDK (for building custom tools), the Engine (the execution runtime with built-in auth), and the Actor system (isolated execution environments for each tool).
Where Arcade stands out is its user challenge system. Instead of pre-authorizing all tool access upfront, Arcade can prompt the end-user for real-time consent when an agent wants to perform a sensitive action. If an agent wants to send an email on behalf of a user, Arcade can present a consent prompt (via webhook, in-app notification, or other channels) and wait for explicit user approval before executing. This "human-in-the-loop for tool authorization" pattern is increasingly important as agents gain more autonomy.
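The control flow of that pattern is simple to sketch. The names and the approval transport below are hypothetical; Arcade delivers the real consent prompt via webhooks or in-app notifications:

```python
# Sketch of the human-in-the-loop authorization pattern: sensitive tool
# calls block on explicit user approval. All names are hypothetical.

SENSITIVE = {"send_email", "delete_file"}

def execute_with_consent(tool: str, args: dict, ask_user) -> dict:
    if tool in SENSITIVE and not ask_user(f"Allow the agent to run {tool}?"):
        # Surface the denial as a normal result the LLM can reason about.
        return {"status": "denied", "tool": tool}
    return {"status": "executed", "tool": tool}

# Simulated end-user who declines the consent prompt:
result = execute_with_consent("send_email",
                              {"recipient": "john@example.com"},
                              ask_user=lambda prompt: False)
```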
The platform provides an OpenAI-compatible API endpoint, which means any code written for OpenAI's function-calling API works with Arcade by changing the base URL. This is a clever distribution strategy: developers do not need to learn a new SDK. Arcade also has an open-source MCP server for integration with MCP clients.
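The "change the base URL" claim is worth making concrete: the request payload stays identical and only the endpoint moves. The Arcade URL below is a placeholder, not the real endpoint:

```python
# Demonstrates the OpenAI-compatible pattern: identical payload, swapped
# base URL. "https://YOUR-ARCADE-ENDPOINT/v1" is a placeholder.

def chat_request(base_url: str, model: str, messages: list, tools: list) -> dict:
    return {
        "url": f"{base_url}/chat/completions",
        "json": {"model": model, "messages": messages, "tools": tools},
    }

msgs, tools = [{"role": "user", "content": "Email John"}], []
direct = chat_request("https://api.openai.com/v1", "gpt-4o", msgs, tools)
via_gateway = chat_request("https://YOUR-ARCADE-ENDPOINT/v1", "gpt-4o", msgs, tools)
assert direct["json"] == via_gateway["json"]  # payload identical; only the URL moved
```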
Pricing starts with a free tier (1,000 standard tool executions, 100 pro executions per month). The Growth plan at $25 per month provides 2,000 standard and 100 pro executions. Pro executions (which include browser automation and other compute-heavy tools) are $0.50 each or $0.01 with BYO credentials - Arcade.
Toolhouse: The Agent Backend-as-a-Service
Toolhouse takes a broader approach than pure tool gateways by positioning itself as a "backend-as-a-service for AI agents." Beyond tool integrations, the platform provides pre-built RAG pipelines, evaluation frameworks, memory systems, and caching. Tools are part of the package but not the only offering.
The platform is trusted by notable companies including Cloudflare, NVIDIA, Groq, and Snowflake, which speaks to its production readiness. Toolhouse integrates with Vercel AI SDK and LlamaIndex, and provides one-click deployment of agents as API endpoints. Their no-code builder lets non-developers create agents from natural language prompts.
Pricing is straightforward. The Sandbox tier is free with 50 agent runs per month. The Pro tier at $10 per month provides 1,000 runs. Enterprise pricing is custom. A "run" in Toolhouse terms encompasses the full agent execution cycle, not individual tool calls, which makes cost prediction simpler but can be more expensive per-tool-call for agents that make many calls per run.
6. Unified API Platforms with Agent Support
These platforms were not built for AI agents originally. They were built for B2B application integration, where one SaaS product needs to connect to another. But the AI agent explosion created a natural adjacent market: if your platform already manages OAuth tokens and API translations for hundreds of services, you can expose that same infrastructure to AI agents through function schemas and MCP servers. Several platforms have made this pivot successfully.
Nango: Open-Source Integration Infrastructure
Nango provides 700+ API integrations with a code-first, TypeScript-native approach. The platform is open-source and self-hostable, which makes it attractive for security-conscious teams. Integration logic is written as TypeScript functions that Nango executes, giving developers full control over data mapping and transformation.
Nango's authentication layer is particularly mature. It provides white-label managed OAuth for all 700+ supported APIs with automatic token refresh, which eliminates the most painful part of building integrations. The platform handles API keys, OAuth, and JWT authentication, with full multi-tenant support.
For AI agent use cases, Nango recently launched an MCP server that exposes all integrations as MCP tools. This means any MCP-compatible agent can connect to Nango and immediately access 700+ APIs through a single endpoint. The platform also integrates with LangChain and PydanticAI through dedicated SDKs.
Nango is SOC 2 Type II, HIPAA, and GDPR compliant and provides deep observability through OpenTelemetry integration. For teams that want the tool gateway pattern but need to own the infrastructure, Nango is the most mature open-source option for the B2B integration layer.
StackOne: Agent-First Integration Infrastructure
StackOne provides 200+ connectors and 10,000+ pre-built actions with a design philosophy specifically oriented toward AI agents. The key differentiator is that StackOne preserves each provider's real data model rather than flattening everything into a lowest-common-denominator unified schema. This matters because LLMs need rich, specific tool definitions to make good decisions about how to use them.
The platform supports REST API, A2A (Agent-to-Agent) protocol, and MCP for tool delivery. It handles OAuth, SAML, rate limiting, and retries across all connectors. StackOne also includes context window management features that compress tool definitions to prevent LLM overload, which becomes critical when an agent has access to thousands of actions.
StackOne published a comprehensive analysis of the AI agent tools landscape in early 2026 that is worth reading for anyone evaluating this space - StackOne.
Merge: The B2B Integration Leader
Merge has two products relevant to the tool gateway market. Merge Unified provides 220+ integrations across seven B2B categories (accounting, HRIS, file storage, ticketing, CRM, knowledge base, ATS) through a single API. These integrations are deeply normalized, meaning the same API call returns the same data structure regardless of whether the underlying provider is Salesforce, HubSpot, or Pipedrive.
Merge Agent Handler is the newer product, launched specifically for AI agents. It provides an MCP-based interface to all of Merge's integrations, giving agents access to thousands of tools across the B2B stack. The integration depth here is exceptional: Merge claims 4.7x more efficient data syncing than in-house integrations because their team has spent years handling the edge cases and API quirks of every supported provider.
The trade-off is that Merge focuses exclusively on B2B applications. If you need consumer service integrations (Instagram, TikTok, WhatsApp), you will need to supplement Merge with another platform.
Paragon (ActionKit): Single API for Agent Tools
Paragon's ActionKit provides 130+ SaaS connectors accessible through either a REST API or an MCP server. The platform supports both SSE and HTTP transport for MCP, and is listed on Anthropic's official MCP registry.
The authentication pattern uses "magic links" that simplify end-user onboarding. Instead of building custom OAuth consent screens, you send the user a link that handles the entire authentication flow. Multi-tenant MCP support lets you run one MCP server instance that serves multiple end-users, each with their own authenticated connections.
Paragon provides a self-hostable Docker container for the MCP server, with a one-click Heroku deployment option for testing. This hybrid model (hosted or self-hosted) gives teams flexibility in how they deploy the tool gateway.
Unified.to: Stateless Real-Time API
Unified.to takes a distinctive architectural approach with 429+ sources across 25 categories and a completely stateless design. The platform never stores end-customer data. Every API call is a real-time passthrough to the underlying service. There are no sync jobs, no cached data, no data-at-rest. This makes Unified.to attractive for data-sensitive use cases where storing customer data in a third-party platform is not acceptable.
The platform supports regionalized traffic routing (US, EU, Australia), which matters for GDPR and data residency requirements. MCP server support is available for AI agent integration.
Truto: Auto-Generated MCP from API Schemas
Truto solves a different problem in the tool gateway space: automatically generating MCP tool definitions from existing API schemas. Instead of hand-writing function definitions for each integration, Truto ingests an API's OpenAPI spec and dynamically generates the corresponding MCP tool schemas. This dramatically accelerates onboarding new APIs.
The platform provides both a Proxy API (which preserves the native API's shape) and a Unified API (which normalizes schemas across providers). For agent developers, the auto-generated MCP server means you can connect to any API that Truto supports without waiting for someone to manually write tool definitions. Truto integrates with LangChain and PydanticAI through dedicated connectors.
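The core transformation behind auto-generation is mechanical enough to sketch. The function below is a simplification of what any real generator must handle (auth schemes, nested schemas, path parameters), and its name is illustrative:

```python
# Sketch of deriving an MCP-style tool schema from one OpenAPI operation,
# the idea behind auto-generated tool definitions. Simplified: ignores
# auth, request bodies, and nested schemas.

def openapi_op_to_tool(path: str, method: str, op: dict) -> dict:
    props = {
        p["name"]: {"type": p["schema"]["type"],
                    "description": p.get("description", "")}
        for p in op.get("parameters", [])
    }
    return {
        "name": op["operationId"],
        "description": op.get("summary", f"{method.upper()} {path}"),
        "inputSchema": {  # MCP tools declare parameters under inputSchema
            "type": "object",
            "properties": props,
            "required": [p["name"] for p in op.get("parameters", [])
                         if p.get("required")],
        },
    }

op = {
    "operationId": "listMessages",
    "summary": "List messages in a mailbox",
    "parameters": [{"name": "maxResults", "schema": {"type": "integer"},
                    "required": False}],
}
tool = openapi_op_to_tool("/messages", "get", op)
```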
7. MCP Aggregators and Gateways
As the number of MCP servers has exploded (the Glama MCP registry now lists over 21,500 servers), a new infrastructure layer has emerged: platforms that aggregate, route, and govern access to multiple MCP servers. These are not tool gateways in the integration sense. They do not provide connections to Gmail or Slack. Instead, they sit between an MCP client and multiple MCP servers, providing a single connection point with management features.
This matters because a production AI agent might need to connect to five or ten MCP servers simultaneously (one for each tool gateway or service), and managing those connections individually becomes unwieldy. MCP aggregators solve this by presenting a single virtual MCP server that transparently routes tool calls to the correct underlying server.
MetaMCP: The Open-Source Aggregator
MetaMCP is an MCP proxy that dynamically aggregates multiple MCP servers into a single unified MCP server. It uses a three-level hierarchy (Servers, Namespaces, Endpoints) to organize tools from different sources and prevent naming collisions. The platform supports SSE, HTTP, and OpenAPI transports, with middleware support for adding custom logic (logging, rate limiting, authorization) to tool calls.
MetaMCP solves the practical problem of using multiple tool gateways simultaneously. If your agent uses Composio for SaaS integrations, the Nous Gateway for search and image generation, and a custom MCP server for internal tools, MetaMCP aggregates all three into a single endpoint. The agent sees one tool catalog. MetaMCP handles routing each call to the correct underlying server.
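The aggregation mechanic reduces to namespacing plus routing. The sketch below uses a `namespace__tool` separator and toy handlers; the server names and separator are illustrative, not MetaMCP's actual wire format:

```python
# Sketch of the MCP aggregation pattern: one virtual server namespaces
# tools from several underlying servers and routes calls back to them.
# Separator and server names are illustrative.

class Aggregator:
    def __init__(self, servers: dict):
        self.servers = servers  # namespace -> {tool_name: handler}

    def list_tools(self) -> list[str]:
        # Prefixing with the namespace prevents collisions between servers.
        return [f"{ns}__{tool}"
                for ns, tools in self.servers.items() for tool in tools]

    def call(self, qualified: str, args: dict):
        ns, tool = qualified.split("__", 1)
        return self.servers[ns][tool](args)  # route to the owning server

agg = Aggregator({
    "composio": {"send_email": lambda a: "sent"},
    "nous":     {"web_search": lambda a: ["result"]},
})
```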
Enterprise MCP Gateways
Several enterprise-grade MCP gateways launched in Q1 2026, each targeting different operational concerns. IBM ContextForge provides virtual servers and mDNS federation for nested aggregation, with the broadest transport support (7+ protocols). The Linux Foundation's agentgateway focuses on multi-tenant governance with v1.0 maturity and strong access control. MCPJungle provides tool groups with include/exclude filters and per-client allowlisting. Bifrost from Maxim AI offers virtual keys for client-level visibility and a "Code Mode" for token optimization. MCP Mesh from deco.cx provides virtual MCPs with multi-level role-based access control.
Kong AI Gateway deserves special mention as the most enterprise-ready option. Kong has been a dominant API gateway for over a decade, and their AI Gateway extends that infrastructure to handle LLM, MCP, and Agent-to-Agent traffic. If your organization already uses Kong for API management, the AI Gateway plugin adds MCP governance without introducing new infrastructure. This approach aligns with how enterprises are thinking about agent security and governance more broadly.
Cloudflare also entered this space with MCP Server Portals, providing hosting and aggregation infrastructure for MCP servers on their edge network. This gives MCP servers the same global distribution and DDoS protection that Cloudflare provides for web applications.
8. Auth-Focused Platforms for Agent Tools
Authentication is the hardest problem in the tool gateway space, and one company has built its entire product around solving it specifically for AI agents.
Stytch: Auth Infrastructure for AI Agents
Stytch launched its "AI Agent Ready" product suite to address the authentication gap that most tool gateways handle implicitly but few handle well. The product includes OAuth 2.0/OIDC specifically configured for machine-to-machine (agent-to-service) authentication, a consent management system that handles user authorization for agent actions, and token lifecycle management (issuance, refresh, revocation, audit logging).
The most interesting component is "Connected Apps," which provides the consent screens and authorization flows that agents need when accessing user resources. When an agent wants to read a user's Google Calendar, Stytch handles presenting the consent prompt, capturing the user's approval, scoping the OAuth token to the approved permissions, and maintaining an audit trail. This is the same problem that every tool gateway must solve, but Stytch provides it as a standalone service that can be used alongside any gateway.
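What that consent layer has to track can be modeled in a few lines: capture the user's approval, scope subsequent actions to exactly the approved permissions, and log everything. This is a conceptual sketch of the responsibility, not Stytch's actual API; every name below is hypothetical.

```python
# Conceptual model of a consent layer's job (not Stytch's actual API):
# capture approval, scope actions to approved permissions, keep an audit trail.

import time

class ConsentManager:
    def __init__(self):
        self.grants = {}      # (user, service) -> set of approved scopes
        self.audit_log = []

    def request_consent(self, user, service, requested_scopes, approved):
        # In production this is an interactive consent screen; here the
        # user's decision is passed in directly.
        granted = requested_scopes if approved else []
        self.grants[(user, service)] = set(granted)
        self.audit_log.append({
            "ts": time.time(), "user": user, "service": service,
            "requested": requested_scopes, "granted": granted,
        })
        return granted

    def authorize(self, user, service, scope):
        # Every agent action is checked against what the user approved
        allowed = scope in self.grants.get((user, service), set())
        self.audit_log.append({
            "ts": time.time(), "user": user, "service": service,
            "action": scope, "allowed": allowed,
        })
        return allowed


cm = ConsentManager()
cm.request_consent("alice", "google", ["calendar.read"], approved=True)
print(cm.authorize("alice", "google", "calendar.read"))   # approved scope
print(cm.authorize("alice", "google", "calendar.write"))  # never granted
```

The point of the sketch is the asymmetry: the agent can request anything, but execution is gated on what the user actually approved, and both the grant and every check land in the audit trail.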
Stytch's pricing is free for the first 10,000 active users and AI agents, making it accessible for startups. The platform also supports Remote MCP with dynamic registration and scoped access, which means it can serve as the authentication layer for custom MCP servers.
The distinction between Stytch and the tool gateway platforms is that Stytch does not provide tool definitions, API translations, or pre-built integrations. It provides the authentication and authorization infrastructure that tool gateways need internally. If you are building your own tool gateway or custom agent infrastructure, Stytch gives you production-grade auth without building it from scratch.
9. Choosing the Right Gateway: Decision Framework
With over twenty platforms mapped in this guide, choosing the right one requires understanding what you actually need. The decision comes down to four questions, each of which narrows the field significantly.
The first question is about scope. Are you connecting your agent to SaaS applications (Gmail, Slack, Salesforce, Jira), or do you need agent infrastructure tools (web search, image generation, browser automation, code execution)? If the answer is SaaS applications, you are looking at Composio, ACI.dev, Arcade, Merge, or one of the unified API platforms. If the answer is infrastructure tools, the Nous Gateway or a combination of specialized MCP servers is more appropriate. Most production agents need both, which means you may end up using two platforms (one for SaaS, one for infrastructure) connected through an MCP aggregator.
The second question is about hosting. Can you use a hosted service, or do you need to self-host? If hosted is acceptable, Composio is the broadest option with the most integrations and framework support. If you need self-hosted, ACI.dev (fully open-source) or Nango (open-source with optional hosted mode) are the primary options. This decision often comes down to compliance requirements: healthcare and financial services companies frequently require self-hosted infrastructure for handling user credentials, which is where Nango's self-hostable model fits well.
The third question is about multi-tenancy. Are you building a product where many end-users each connect their own accounts, or are you building an internal tool where only your team needs access? Multi-tenant support (where the gateway manages separate OAuth tokens for each of your end-users) is a hard engineering problem, and not all platforms handle it equally well. Composio, Nango, ACI.dev, and Stytch have the most mature multi-tenant implementations. If you are building an internal tool with a single set of credentials, the requirements are simpler and almost any platform works.
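The core of multi-tenant support is credential resolution: every tool call carries an end-user identity, and the gateway injects that user's token rather than a shared one. A minimal sketch of the pattern, with hypothetical names and not any specific platform's schema:

```python
# Illustrative sketch of multi-tenant credential resolution: each end-user
# holds their own OAuth token per service, injected at execution time.
# Not any specific platform's schema.

class TenantTokenStore:
    def __init__(self):
        self.tokens = {}  # (user_id, service) -> token

    def store(self, user_id, service, token):
        self.tokens[(user_id, service)] = token

    def resolve(self, user_id, service):
        try:
            return self.tokens[(user_id, service)]
        except KeyError:
            raise PermissionError(
                f"{user_id} has not connected {service}; start the OAuth flow"
            )


def execute_tool(store, user_id, service, action, args):
    token = store.resolve(user_id, service)  # per-user, never shared
    # A real gateway would now call the service API with this token
    return {"action": action, "args": args, "auth": token[:8] + "..."}


store = TenantTokenStore()
store.store("user_42", "slack", "xoxb-user42-secret-token")
print(execute_tool(store, "user_42", "slack", "send_message", {"channel": "#eng"}))
```

The hard engineering is everything the sketch omits: refresh-token rotation, revocation, per-tenant rate limits, and the failure path when a user's token expires mid-task. That is what separates the mature multi-tenant platforms from the rest.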
The fourth question is about depth. Do you need broad integration coverage (connect to hundreds of services with basic CRUD operations) or deep integration with a few services (complex workflows, webhooks, real-time sync)? Broad coverage favors Composio (900+), Nango (700+), or ACI.dev (600+). Deep integration favors Merge (deeply normalized B2B data) or StackOne (10,000+ actions preserving native data models).
The chart above illustrates the raw integration count across platforms, but numbers alone do not tell the full story. The Nous Gateway has only four integrations, but those four are deep, production-grade agent infrastructure tools (search, image generation, TTS, browser automation) that solve a fundamentally different problem than connecting to SaaS APIs. Composio's 900+ integrations cover the broadest range of SaaS applications, but many of those integrations provide basic CRUD operations rather than the deep workflow support that platforms like Merge or StackOne offer for their focused domains.
For teams building AI workforce platforms (like O-mega, where agents need both infrastructure capabilities and SaaS integrations at scale), the emerging pattern is a layered approach: a purpose-built agent platform handles the core agent infrastructure (search, browsing, code execution), a SaaS tool gateway like Composio or ACI.dev handles external application integrations, and an MCP aggregator ties everything together into a single tool catalog.
10. Implementation Patterns and Code Examples
Understanding the theory is useful, but what matters is implementation. This section walks through the three most common integration patterns: direct SDK integration with a tool gateway, MCP-based integration, and the multi-gateway aggregation pattern.
Pattern 1: Direct SDK Integration (Composio + OpenAI)
The most straightforward pattern is using a tool gateway's SDK directly with your LLM provider. Here is a complete example using Composio with OpenAI's function-calling API:
from composio_openai import ComposioToolSet, Action
from openai import OpenAI

# Initialize clients
openai_client = OpenAI()
toolset = ComposioToolSet()  # Uses COMPOSIO_API_KEY env var

# Get tool schemas for specific actions
tools = toolset.get_tools(actions=[
    Action.GMAIL_SEND_EMAIL,
    Action.SLACK_SEND_MESSAGE,
    Action.GITHUB_CREATE_ISSUE,
])

# Pass tools to the LLM
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Send a Slack message to #engineering saying the deploy is complete"}],
    tools=tools,  # Gateway-provided function schemas
)

# Execute the tool call through the gateway
result = toolset.handle_tool_calls(response)
The critical thing to notice is that the developer never interacts with the Slack API directly. The toolset.get_tools() call fetches LLM-formatted function schemas from Composio. The LLM generates a function call based on those schemas. The toolset.handle_tool_calls() method sends that function call to Composio, which handles authentication (injecting the user's Slack OAuth token), request translation (converting the function parameters to a Slack API request), execution, and result formatting. The entire integration fits in under twenty lines of code.
Pattern 2: MCP-Based Integration
MCP provides a framework-agnostic way to connect agents to tool gateways. Any MCP client can connect to any MCP server, which means you can swap tool gateways without changing your agent code. Here is the conceptual pattern:
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Connect to a tool gateway's MCP server
server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@composio/mcp-server"]  # Or any MCP-compatible gateway
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover available tools
            tools = await session.list_tools()
            # Execute a tool
            result = await session.call_tool(
                "gmail_send_email",
                arguments={"to": "john@example.com", "body": "Hello"}
            )

asyncio.run(main())
The MCP approach is particularly powerful when combined with agents that natively support MCP, like Claude Desktop or agents built with the Anthropic SDK. You configure the MCP server in the agent's settings, and the agent automatically discovers and uses the available tools. We explored this pattern in depth in our guide to the Anthropic ecosystem, which covers how MCP fits into the broader Anthropic tool stack.
Pattern 3: Multi-Gateway Aggregation
For production agents that need both infrastructure tools and SaaS integrations, the aggregation pattern uses an MCP aggregator to combine multiple gateways:
# MetaMCP configuration - aggregating multiple gateways
servers:
  - name: "nous-gateway"
    type: "sse"
    url: "https://gateway.nousresearch.com/mcp"
    # Provides: web_search, image_generation, tts, browser
  - name: "composio"
    type: "sse"
    url: "https://mcp.composio.dev"
    # Provides: gmail, slack, github, jira, salesforce, etc.
  - name: "internal-tools"
    type: "stdio"
    command: "python"
    args: ["./my_internal_mcp_server.py"]
    # Provides: custom internal tools
With this configuration, your agent connects to MetaMCP as a single MCP server and sees a unified tool catalog that includes Nous Gateway's search and image tools, Composio's SaaS integrations, and your custom internal tools. MetaMCP handles routing each tool call to the correct underlying server. This is the pattern that scales: as you add more gateways or custom tools, you add them to the aggregator configuration without changing your agent code.
Pattern 4: The Nous Gateway Integration
For Hermes Agent users, integrating the Nous Gateway is even simpler because it is built into the agent's configuration:
# One-time setup
hermes model # Select "Nous Portal" as provider
hermes tools # Toggle gateway tools on/off
# Verify configuration
hermes status
# Output:
# Model: hermes-4-405b (Nous Portal)
# Web Search: firecrawl (gateway)
# Image Gen: fal (gateway)
# TTS: openai (gateway)
# Browser: browser-use (gateway)
After this setup, the Hermes Agent automatically routes web search, image generation, TTS, and browser automation calls through the Nous Gateway. No SDK integration code is needed because the gateway is built into the agent runtime. This tight coupling between the agent and the gateway is what makes the Nous approach so smooth for Hermes users, even though it limits portability to other agent frameworks.
11. Where This Market Is Heading
The tool gateway market is evolving rapidly, and several structural trends will shape it over the next 12 to 18 months. Understanding these trends helps you make infrastructure decisions that will not need to be ripped out in a year.
MCP Becomes the Universal Transport
The Model Context Protocol has achieved escape velocity. With over 21,500 MCP servers listed on the Glama registry and adoption by Anthropic, OpenAI (via compatibility layers), Google, and every major AI framework, MCP is the de facto standard for tool connectivity. Every tool gateway launched in 2026 supports MCP. Every gateway that existed before 2026 has added MCP support.
The implication is that your agent should be MCP-native. If you are building a new agent, use MCP as the tool interface from the start. If you are using an existing agent framework, ensure it supports MCP clients. This gives you maximum flexibility to swap tool gateways, add new ones, and use the emerging MCP aggregation layer.
The deeper structural shift is that MCP turns tool gateways into commodity infrastructure. When every gateway exposes tools through the same protocol, the switching cost drops to near zero. Competition shifts from "do you support my framework?" (which MCP makes universal) to "how many integrations do you have?" and "how reliable are they?" This commoditization pressure will drive consolidation: the gateways with the most integrations and best reliability will absorb market share from smaller players.
Authentication Becomes a Standalone Layer
The most interesting structural development is the separation of authentication from tool execution. Stytch's "AI Agent Ready" product signals that auth for agents is becoming its own infrastructure category rather than a feature bundled into tool gateways. This makes sense architecturally: authentication is a security-critical concern that benefits from specialization, while tool execution is a functional concern that benefits from breadth.
The pattern that is emerging looks like this: a specialized auth platform (Stytch, Auth0, or a new entrant) manages OAuth tokens, consent flows, and audit logging for all agent-to-service connections. Tool gateways consume those tokens at execution time rather than managing their own auth flows. This separation lets each layer specialize: auth platforms focus on security, compliance, and token lifecycle, while tool gateways focus on integration breadth, schema quality, and execution reliability.
This is analogous to how web applications separated authentication (Auth0, Okta) from application logic a decade ago. The same structural forces (increasing security requirements, compliance burdens, and the desire to specialize) are driving the same separation in the agent tool stack.
Open Source Pressure and Commoditization
ACI.dev's Apache 2.0 release puts pressure on every proprietary tool gateway. When a fully open-source alternative exists with 600+ integrations and no vendor lock-in, the proprietary platforms must justify their pricing with superior reliability, support, and features. This is the same dynamic that played out in databases (PostgreSQL vs. Oracle), container orchestration (Kubernetes vs. proprietary), and API gateways (Kong vs. proprietary).
The likely outcome is that basic tool gateway functionality (OAuth management, function schema generation, request translation) becomes table stakes and effectively free. Value accrues to platforms that provide superior integration depth (handling the long tail of API quirks), better reliability (99.99%+ uptime with automatic failover), and enterprise features (audit logging, compliance certifications, multi-region deployment). This is consistent with the broader pattern in the AI infrastructure stack that we analyzed in the big pipe, where commoditized layers get compressed and value concentrates at the edges.
Agent-Native Tool Design
The current generation of tool gateways wraps existing human-designed APIs for agent consumption. The next generation will feature APIs designed from the ground up for agent use. ACI.dev's agents.json specification is an early step in this direction, where API providers explicitly declare which endpoints are safe for autonomous agent access.
The structural difference between human-designed and agent-designed APIs is significant. Human-designed APIs assume a developer reads documentation, understands the API model, and writes deterministic code. Agent-designed APIs assume an LLM reads function schemas, makes probabilistic decisions about which tools to use, and may need guardrails (confirmation prompts, rate limits, rollback capabilities) built into the tool definition itself.
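One way to picture the difference: an agent-native tool definition carries its guardrails alongside its schema, and the dispatcher enforces them generically. The field names below are illustrative, not part of any published specification.

```python
# Illustrative agent-native tool definition: guardrails (confirmation
# requirements, rate limits, reversibility) live in the tool definition
# itself rather than in calling code. Field names are hypothetical.

SEND_WIRE_TRANSFER = {
    "name": "send_wire_transfer",
    "parameters": {"to_account": "string", "amount_usd": "number"},
    "guardrails": {
        "requires_human_confirmation": True,   # pause for approval
        "max_calls_per_hour": 3,               # hard rate limit
        "reversible": False,                   # no rollback available
    },
}

def dispatch(tool, args, confirm):
    """Run a tool call, enforcing the guardrails declared on the tool."""
    rails = tool.get("guardrails", {})
    if rails.get("requires_human_confirmation") and not confirm(tool, args):
        return {"status": "blocked", "reason": "human denied confirmation"}
    return {"status": "executed", "tool": tool["name"], "args": args}

# A consequential action is held until a human approves it
print(dispatch(SEND_WIRE_TRANSFER, {"to_account": "123", "amount_usd": 500},
               confirm=lambda tool, args: False))
```

Because the guardrails are data rather than code, any runtime that reads the definition enforces the same policy: the governance travels with the tool, not with whichever agent happens to call it.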
Arcade AI's user challenge system is a preview of this pattern: the tool definition includes authorization checkpoints that trigger human approval for sensitive actions. As agents gain more autonomy and handle more consequential tasks, this "tool with built-in governance" pattern will become standard. We explored the governance implications of autonomous AI agents in our self-improving AI agents guide, where the tool access governance problem was identified as one of the top three risks in production deployments.
Vertical Tool Bundles
The Nous Gateway represents the first wave of a pattern we expect to see more of: tool gateways that bundle a curated set of tools for a specific use case rather than trying to be the universal connector. Imagine a "sales agent gateway" that bundles CRM, email, calendar, LinkedIn, and call recording tools with sales-specific schemas (lead qualification, pipeline updates, meeting scheduling). Or a "financial agent gateway" that bundles banking APIs, payment processors, and accounting tools with financial-specific guardrails (transaction limits, dual authorization).
These vertical bundles trade breadth for depth and usability. Instead of giving an agent 900 tools and hoping the LLM picks the right ones, vertical bundles give the agent exactly the tools it needs with schemas optimized for its domain. This reduces context window waste, improves tool selection accuracy, and enables domain-specific governance.
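The context-window argument can be made concrete: exposing a curated subset instead of the full catalog shrinks the schema payload the LLM has to reason over. A toy sketch, with made-up tool names:

```python
# Toy sketch of a vertical bundle: filter a large tool catalog down to a
# role-specific subset before handing schemas to the LLM. Tool names are
# illustrative.

FULL_CATALOG = {f"tool_{i}": {"name": f"tool_{i}"} for i in range(900)}
FULL_CATALOG.update({
    "crm_update_lead": {"name": "crm_update_lead"},
    "email_send": {"name": "email_send"},
    "calendar_book_meeting": {"name": "calendar_book_meeting"},
})

SALES_BUNDLE = ["crm_update_lead", "email_send", "calendar_book_meeting"]

def bundle_tools(catalog, bundle):
    """Return only the schemas named in the bundle, in bundle order."""
    return [catalog[name] for name in bundle]

tools = bundle_tools(FULL_CATALOG, SALES_BUNDLE)
print(len(FULL_CATALOG), "->", len(tools))
```

Three schemas in context instead of nine hundred is the entire pitch: less token waste, fewer wrong-tool selections, and a smaller surface to govern.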
The chart reflects how SaaS integration gateways (Composio, Nango, ACI.dev) reached maturity first because they built on existing B2B integration infrastructure. MCP aggregation is maturing rapidly as the protocol standardizes. Agent-specific authentication is the youngest layer but growing fastest, driven by the security and compliance requirements of production agent deployments.
12. Conclusion: The Integration Layer Is the Moat
The tool gateway market is not just an infrastructure convenience. It is becoming the strategic layer that determines how powerful an AI agent can be.
From first principles, the argument is clean. LLM inference is commoditizing. The cost of making an agent "think" drops every quarter as new models launch and prices fall. But the cost of making an agent "act" (connecting to real services, authenticating on behalf of users, executing actions with real consequences) remains high because it requires integration engineering that scales linearly with the number of services supported. Tool gateways convert that linear cost into a fixed cost: pay one platform, get access to hundreds of services.
The practical decision framework for 2026 comes down to your specific position in the ecosystem:
If you are building an AI agent from scratch and need it to interact with SaaS tools, start with Composio (broadest coverage, best framework support) or ACI.dev (open-source, self-hostable). Both provide MCP servers, so you can swap later without rewriting your agent.
If you are using Hermes Agent and need core infrastructure tools (search, images, TTS, browser), the Nous Gateway eliminates four vendor relationships with a single subscription toggle.
If you are building a B2B product where your end-users need to connect their own accounts, invest in a platform with strong multi-tenant auth: Composio, Nango, or Stytch (for the auth layer specifically).
If you already use multiple MCP servers and need to unify them, add MetaMCP or the Kong AI Gateway as an aggregation layer.
If you are building an AI workforce platform where multiple agents need coordinated tool access, the layered approach (infrastructure gateway + SaaS gateway + MCP aggregator) provides the most flexibility. Platforms like O-mega.ai have adopted this pattern, where each agent in the workforce gets access to a curated tool catalog appropriate for its role, with authentication and governance managed centrally.
The integration layer is where value concentrates because it is where capabilities concentrate. An LLM without tools is a chatbot. An LLM with a tool gateway is an agent. And as the agent economy continues to grow, the platforms that control tool access will shape what agents can and cannot do.
The most important thing to remember is that these platforms are evolving fast. What you build today should use MCP as the tool interface (it is the emerging universal standard), should not be tightly coupled to any single gateway (because the landscape will consolidate), and should separate authentication from tool execution (because the auth layer is becoming its own category). Build on those principles, and whatever specific platforms rise or fall, your agent infrastructure will adapt.
This guide reflects the LLM tool gateway landscape as of April 2026. Pricing, features, and platform availability change frequently. Verify current details on each platform's website before making purchasing decisions.