Apigee announces general availability of APIM Extension Processor

April 15, 2025
Ishita Saxena Strategic Cloud Engineer
Sanjay Pujare Software Engineer

We are excited to announce the General Availability (GA) of the Apigee Extension Processor (version 1.0)! This powerful new capability significantly expands the reach and flexibility of Apigee, making it easier than ever to manage and secure a wider range of backend services and modern application architectures.

For developers embracing modern deployment models, the Extension Processor offers seamless integration with Cloud Run, allowing you to apply Apigee policies to your scalable containerized applications.

The Extension Processor also unlocks powerful new communication patterns. Now you can easily manage advanced real-time interactions with gRPC bidirectional streaming, enabling highly interactive and low-latency applications. Furthermore, for event-driven architectures, the Extension Processor provides a pathway to manage and secure Server-Sent Events (SSE), facilitating efficient data streaming to clients.

But the benefits extend beyond application deployment and communication protocols. The Apigee Extension Processor, coupled with Google Token Injection policies, dramatically simplifies secure access to your Google Cloud infrastructure. You can seamlessly connect and control access to powerful data services like Bigtable, and leverage the intelligence of Vertex AI for your machine learning workloads, all while maintaining Apigee's consistent security framework.

Finally, by integrating with the intelligent traffic management capabilities of Google's Cloud Load Balancing, the Extension Processor offers unparalleled flexibility in routing and managing diverse traffic flows. This powerful combination opens up countless possibilities for managing even the most complex API landscapes.

This blog outlines a powerful solution to a key challenge in today's landscape of high-performance and real-time applications: managing gRPC streaming within Apigee. While gRPC is a cornerstone of efficient microservices, its streaming nature presents a challenge for organizations leveraging Google Cloud's Apigee as an inline proxy (traditional mode). We'll explore how the Apigee Extension Processor enables Apigee's data plane to enforce policies on gRPC streaming traffic as it passes through the Application Load Balancer (ALB). This is achieved via a Service Extension (traffic extension), allowing for effective management and routing without the gRPC stream directly traversing the Apigee gateway.

Read along as we delve into the core elements of this solution, highlighting its benefits and providing a high-level overview of a real-world use case involving a Cloud Run backend.


Understanding the Apigee Extension Processor

The Apigee Extension Processor is a powerful traffic extension (a type of Service Extension) that lets Cloud Load Balancing send callouts to Apigee as part of request processing. This enables Apigee to apply API management policies to requests before the ALB forwards them to user-managed backend services, effectively extending Apigee's robust API management capabilities to workloads fronted by Cloud Load Balancing.
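Under the hood, a traffic extension callout speaks Envoy's external processing (ext-proc) gRPC protocol: the load balancer streams request and response metadata to an external processor and applies whatever mutations the processor returns. With the Apigee Extension Processor, the Apigee runtime plays the processor role for you, so you never write this code yourself. The sketch below, a minimal external processor built with the open-source go-control-plane bindings, is included only to illustrate the shape of that callout protocol; it is not Apigee's implementation, and the injected header is purely illustrative.

```go
package main

import (
	"io"
	"log"
	"net"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	extprocv3 "github.com/envoyproxy/go-control-plane/envoy/service/ext_proc/v3"
	"google.golang.org/grpc"
)

// processor is a minimal ext-proc server: it inspects each phase of the
// exchange and, on request headers, returns a header mutation before handing
// control back to the load balancer.
type processor struct {
	extprocv3.UnimplementedExternalProcessorServer
}

func (p *processor) Process(stream extprocv3.ExternalProcessor_ProcessServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}

		resp := &extprocv3.ProcessingResponse{}
		switch req.Request.(type) {
		case *extprocv3.ProcessingRequest_RequestHeaders:
			// Illustrative "policy decision": add a header to the request
			// before the load balancer forwards it to the backend.
			resp.Response = &extprocv3.ProcessingResponse_RequestHeaders{
				RequestHeaders: &extprocv3.HeadersResponse{
					Response: &extprocv3.CommonResponse{
						HeaderMutation: &extprocv3.HeaderMutation{
							SetHeaders: []*corev3.HeaderValueOption{{
								Header: &corev3.HeaderValue{Key: "x-policy-checked", Value: "true"},
							}},
						},
					},
				},
			}
		case *extprocv3.ProcessingRequest_ResponseHeaders:
			// Let the response continue unmodified.
			resp.Response = &extprocv3.ProcessingResponse_ResponseHeaders{
				ResponseHeaders: &extprocv3.HeadersResponse{},
			}
		}
		if err := stream.Send(resp); err != nil {
			return err
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":9002")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	extprocv3.RegisterExternalProcessorServer(s, &processor{})
	log.Fatal(s.Serve(lis))
}
```

In the GA product, the policies you attach to the targetless Apigee proxy determine what, if anything, comes back to the ALB in these responses.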


Infrastructure and Data Flow

The diagram below outlines the required components of an Apigee Extension Processor configuration:

The configuration involves several key components: an ALB, an Apigee instance with the Extension Processor enabled, and a Service Extension (traffic extension). For a detailed description of these components, please refer to the Apigee Extension Processor overview.

Data flow diagram for Apigee Service Extensions

The following numbered steps correspond to the numbered arrows in the flow diagram, illustrating the sequence of events:

1: The client sends a request to the ALB.

2: The ALB, acting as the Policy Enforcement Point (PEP), processes the traffic. As part of this processing, it calls out to Apigee via the configured Service Extension (traffic extension).

3: The Apigee Extension Processor, acting as the Policy Decision Point (PDP), receives the callout, applies the relevant API management policies to the request, and returns the processed request to the ALB (PEP).

4: The ALB completes processing and forwards the request to the backend service.

The backend service initiates the response, which is received by the ALB. The ALB may call out to Apigee again via the Service Extension to enforce policies on the response before forwarding it to the client.


Bridging the gap: Enabling gRPC streaming pass-through

Many modern applications rely on the power of gRPC streaming, but Apigee, used as an inline proxy, does not currently support streaming. This is where the Apigee Extension Processor becomes invaluable: it allows the ALB to process the streaming gRPC traffic and act as the PEP (Policy Enforcement Point), while the Apigee runtime acts as the PDP (Policy Decision Point).


Main elements needed to enable gRPC streaming pass-through with Apigee

To enable gRPC streaming pass-through using the Apigee Extension Processor, the following key elements are required. For detailed configuration instructions, please refer to Get started with the Apigee Extension Processor.

  • gRPC streaming backend service: A gRPC service implementing the necessary streaming capabilities (server, client, or bidirectional); see the backend sketch after this list.

  • Application Load Balancer (ALB): The entry point for client requests, configured to route traffic and call the Apigee Service Extension.

  • Apigee instance with Extension Processor enabled: An Apigee instance and environment configured with the Extension Processor feature, using a targetless API proxy for ext-proc processing of traffic from the Service Extension.

  • Service Extension configuration: A traffic extension (a type of Service Extension) acting as the bridge between the ALB and Apigee runtime (ideally using Private Service Connect (PSC)).

  • Network connectivity: Proper network setup allowing communication between all components (client to ALB, ALB to Apigee, ALB to backend).
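To make the first element above concrete, here is a minimal sketch of a server-streaming gRPC backend that could be deployed to Cloud Run. To keep it self-contained, it reuses the standard gRPC health-checking proto, whose Watch RPC happens to be server-streaming, as a stand-in for a real streaming API such as log delivery; an actual service would define its own proto. Reading the listening port from the PORT environment variable follows Cloud Run's convention.

```go
package main

import (
	"context"
	"log"
	"net"
	"os"
	"time"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// streamingServer implements the standard health service, using its
// server-streaming Watch RPC as a stand-in for a log-streaming API.
type streamingServer struct {
	healthpb.UnimplementedHealthServer
}

func (s *streamingServer) Check(ctx context.Context, req *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) {
	return &healthpb.HealthCheckResponse{Status: healthpb.HealthCheckResponse_SERVING}, nil
}

// Watch emits one message per second until the client disconnects,
// mimicking a "stream application logs" style RPC.
func (s *streamingServer) Watch(req *healthpb.HealthCheckRequest, stream healthpb.Health_WatchServer) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stream.Context().Done():
			return stream.Context().Err()
		case <-ticker.C:
			if err := stream.Send(&healthpb.HealthCheckResponse{
				Status: healthpb.HealthCheckResponse_SERVING,
			}); err != nil {
				return err
			}
		}
	}
}

func main() {
	// Cloud Run injects the port to listen on via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	lis, err := net.Listen("tcp", ":"+port)
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	healthpb.RegisterHealthServer(s, &streamingServer{})
	log.Fatal(s.Serve(lis))
}
```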

Use Case: Securing and managing gRPC streaming services on Cloud Run with Apigee

Consider a scenario where a customer develops a high-performance backend service with gRPC streaming capabilities, such as providing real-time application logs. For scalability and ease of management, this backend application is deployed on Google Cloud Run within their primary Google Cloud project. Now, the customer wants to expose this gRPC streaming service to their clients through a well-managed and secure API gateway. They choose Apigee for this purpose, leveraging its robust API management features like authentication, authorization, rate limiting and other policies.


The Challenge

As mentioned earlier, Apigee doesn't natively support gRPC streaming when used in the inline proxy mode. Direct exposure of the Cloud Run gRPC service through standard Apigee configurations would not support any of the streaming use cases: client, server, or bidirectional streaming.


Solution

The Apigee Extension Processor provides the necessary bridge to manage gRPC streaming traffic destined for a backend application deployed on Cloud Run within the same Google Cloud project.

Here's a simplified flow:


1: Client initiation

  • The client application initiates a gRPC streaming request.

  • This request is directed to the public IP address or DNS name of the ALB that serves as the entry point (see the client sketch following this step).
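As an illustration of this step, here is a hedged client sketch that dials the ALB's public endpoint over TLS and consumes a server stream until it closes. The address grpc.example.com:443 is a placeholder for your load balancer's DNS name, and the Watch RPC is the stand-in streaming method from the backend sketch earlier.

```go
package main

import (
	"context"
	"crypto/tls"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Placeholder: replace with the DNS name or IP of your ALB frontend.
	const albAddr = "grpc.example.com:443"

	// The ALB terminates TLS, so the client dials it like any public gRPC endpoint.
	conn, err := grpc.Dial(albAddr,
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Open the server-streaming RPC (the stand-in Watch method).
	client := healthpb.NewHealthClient(conn)
	stream, err := client.Watch(context.Background(), &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("watch: %v", err)
	}

	// Read streamed messages until the server closes the stream.
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		log.Printf("received: %v", msg.GetStatus())
	}
}
```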


2: Application Load Balancer processing and Service Extension callout

  • The ALB receives the incoming gRPC streaming request.

  • The ALB is configured with a backend service that uses a serverless Network Endpoint Group (NEG) pointing to the backend on Cloud Run.

  • The ALB is also configured with a Service Extension (Traffic extension) that has a specific Apigee runtime backend configured.

  • The ALB first calls out to this Service Extension for relevant traffic.


3: Apigee proxy processing

  • The gRPC request is forwarded to the designated Apigee API proxy via the Service Extension.

  • Within this Apigee X proxy, various API management policies are executed. This can include authentication, authorization, and rate limiting.

Note: The Apigee proxy in this scenario is a no-target proxy; that is, it doesn't have a Target Endpoint configured. It relies on the ALB for final routing.


4: Return to ALB

  • As the Apigee proxy has no target, after policy processing, control returns to the ALB via the Service Extension response.


5: Routing to the Cloud Run backend by the load balancer

  • The ALB, based on its backend service configuration, forwards the gRPC streaming request to the backend service mapped to the serverless NEG where the Cloud Run service resides.

  • The ALB handles the underlying routing to the Cloud Run instance.


6: Response handling

Response handling follows a similar pattern to the request flow. The backend initiates the response, which is then handled by the ALB. The ALB may call out to Apigee via the Service Extension (traffic extension) for policy enforcement before forwarding the response to the client.


This simplified use case demonstrates how the Apigee Extension Processor can be used to apply API management policies to gRPC streaming traffic destined for an application deployed on Cloud Run within the same Google Cloud project. The ALB primarily handles the routing to the Cloud Run service based on its NEG configuration.


Benefits of Leveraging the Apigee Extension Processor for gRPC Streaming

Utilizing the Apigee Extension Processor to manage gRPC streaming backend services offers several key advantages, extending Apigee's core strengths to this new application of the platform:

  • Extending Apigee's reach

This approach successfully extends Apigee's robust API management capabilities to gRPC streaming, a streaming communication pattern not natively supported by the Apigee platform's core proxy.

  • Leveraging existing investments

For organizations already using Apigee for their RESTful APIs, this solution enables them to manage their gRPC streaming services within Apigee. While requiring the use of the Extension Processor, it leverages familiar API management concepts and reduces the need for separate tools.

  • Centralized policy management

Apigee provides a centralized platform for defining and enforcing API management policies. By integrating gRPC streaming through the Extension Processor, you can maintain consistent governance and security across all your API endpoints.

  • Monetization potential

If you are exposing gRPC streaming services as a product, Apigee's Monetization features can be leveraged. You can generate revenue whenever your gRPC streaming APIs are used by adding rate plans to customized API products you create within Apigee.

  • Improved observability and traceability

While detailed gRPC protocol-level analytics might be limited in a pass-through scenario, Apigee still provides valuable insights into the traffic flowing to your streaming services, including connection attempts, error rates, and overall usage patterns. This observability is crucial for monitoring and troubleshooting.

Apigee's distributed tracing capabilities can help you track requests in distributed systems that involve your gRPC streaming services, providing end-to-end visibility across multiple applications, services, and databases.

  • Business intelligence

Apigee API Analytics collects the wealth of information flowing through your load balancer, providing data visualization in the UI or the ability to download data for offline analysis. This data can be invaluable for understanding usage patterns, identifying performance bottlenecks, and making informed business decisions.

By considering these benefits, it becomes clear that the Apigee Extension Processor offers a valuable and practical way to bring essential API management capabilities to gRPC streaming services on Google Cloud.


Looking Ahead

The Apigee Extension Processor represents a significant step forward in extending Apigee's capabilities. We envision a future where any gateway, anywhere can leverage the power of Apigee's policy enforcement capability. This will involve harnessing the ext-proc protocol and integrating with various Envoy-based load balancers and gateways, enabling them to act as Policy Enforcement Points (PEPs) with the Apigee runtime serving as the Policy Decision Point (PDP). This evolution will further empower organizations to consistently manage and secure their digital assets in increasingly distributed and heterogeneous environments.