As enterprises race to operationalize AI, the challenge isn't only building and deploying large language models (LLMs); it's also integrating them seamlessly into existing API ecosystems while maintaining enterprise-level security, governance, and compliance. Apigee is committed to leading you on this journey, streamlining the integration of gen AI agents into applications by bolstering their security, scalability, and governance.
While the Model Context Protocol (MCP) has emerged as a de facto method of integrating discrete APIs as tools, the journey of turning your APIs into these agentic tools is broader than a single protocol. This post highlights the critical role of your existing API programs in this evolution and how Apigee is committed to supporting your success, regardless of the specific tool integration path you choose.
It's crucial to understand that having APIs is the foundation. If you aren't currently participating in an API program, establishing your APIs and engaging with the Apigee API hub should be your immediate next step.
MCP has emerged as a prominent concept in this space. However, the reality is that it is evolving rapidly and doesn't yet address enterprise requirements for authentication, authorization, and observability.
Apigee, Google Cloud's native API management platform, brings your existing enterprise APIs to AI. We will continue to guide your agentic journey through this changing AI landscape, and to bring first-class enterprise features to all of your AI workloads.
Leveraging MCP services across a network introduces specific security requirements. You may want to add authentication to the MCP server itself. Once calls to the MCP server are authenticated, you may want to authorize access to specific tools depending on the consuming application. You may also want first-class observability to track which tools are being used and by whom. Finally, you may want to ensure that the downstream APIs your MCP server exposes as tools carry the same minimum security guarantees outlined above.
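The layered checks described above can be sketched in a few lines of Python. This is a minimal illustration only, using hypothetical key and tool names rather than real Apigee or MCP SDK APIs: authenticate the caller, authorize the requested tool for that consumer, and record the invocation for observability.

```python
# Hypothetical registry: API key -> consuming application.
API_KEYS = {"key-weather-app": "weather-app", "key-ops-agent": "ops-agent"}

# Hypothetical authorization map: application -> tools it may invoke.
ALLOWED_TOOLS = {
    "weather-app": {"get_forecast"},
    "ops-agent": {"get_forecast", "restart_service"},
}

AUDIT_LOG = []  # observability: which app called which tool


def handle_tool_call(headers: dict, tool: str) -> str:
    # 1. Authenticate the call to the MCP server itself.
    app = API_KEYS.get(headers.get("x-api-key", ""))
    if app is None:
        return "401 Unauthorized"
    # 2. Authorize access to the requested tool for this consumer.
    if tool not in ALLOWED_TOOLS.get(app, set()):
        return "403 Forbidden"
    # 3. Record the invocation for observability.
    AUDIT_LOG.append({"app": app, "tool": tool})
    # 4. The downstream API behind the tool would be invoked here,
    #    itself protected by the same API management layer.
    return "200 OK"
```

In a real deployment these checks live in the API management layer in front of the MCP server, not in application code, but the control points are the same.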
Apigee is providing an open source example of an MCP server that delivers precisely this type of API security, and all of it is available and supported for your MCP services right now.
This example demonstrates the use of Apigee's API Products for authentication and authorization controls over your tools. Further, the APIs that ultimately sit behind the MCP server (deployed to Cloud Run in this case) are themselves hosted on Apigee, and therefore receive the same security, distribution, and observability features as every other API hosted on Apigee. It bridges the gap between managed APIs and exploratory AI interactions, applying Apigee's rich feature set to secure, scale, and govern your AI journey. This demonstration shows how you can get up and running with an MCP server right now while retaining the enterprise controls you need. And if the MCP standard changes, this setup is easy to adapt, because to Apigee it is ultimately just another set of backends being served.
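One way to picture API Products as the authorization boundary for tools is to filter the MCP tool listing by the product a consumer is subscribed to. The sketch below is illustrative only; the product and tool names are hypothetical and not taken from the reference repository.

```python
ALL_TOOLS = ["get_invoice", "list_customers", "issue_refund"]

# Each API Product bundles the subset of tools its subscribers may see and call.
API_PRODUCTS = {
    "finance-readonly": {"get_invoice", "list_customers"},
    "finance-admin": {"get_invoice", "list_customers", "issue_refund"},
}


def list_tools_for(product: str) -> list:
    """Return the MCP tool listing filtered by the caller's API Product."""
    allowed = API_PRODUCTS.get(product, set())
    return [t for t in ALL_TOOLS if t in allowed]
```

A consumer subscribed to `finance-readonly` would see only the two read-only tools, while a `finance-admin` subscriber would see all three; unknown consumers see nothing. The same product concept that governs API access today governs tool access here.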
As seen below, we bring API Products to these agents and MCP tools, making them AI products. These AI products have their own consumers and developers, just like any other API.
Our GitHub repository, containing a quick-start guide, sample artifacts, and documentation, will help you build and deploy the reference MCP serving architecture in Apigee and understand the steps involved in exposing your APIs as tools for AI agents by leveraging API Products.
This journey will adapt and change over time: MCP is evolving, as seen in its shift from no authentication initially to OAuth for authorization and resource serving. Google Apigee is committed to evolving with it.
Learn more about operationalizing generative AI apps with Apigee and explore our AI policies documentation.