Discontinuing support for JSON-RPC and global HTTP batch endpoints

JUL 13, 2020

UPDATES:

August 19, 2020: Revised guidance for Google Cloud client libraries and Apache Beam/Cloud Dataflow

August 12, 2020: Added guidance for the Dataproc Hadoop connector

July 13, 2020: Enumerate endpoints for JSON-RPC and Global HTTP Batch. Include examples of Non-Global HTTP Batch endpoints for contrast.

July 8, 2020: Limit usage of JSON-RPC and Global HTTP batch endpoints to existing projects only. Starting July 15 (JSON-RPC) and July 16 (Global HTTP Batch) we will no longer allow new projects to call these two endpoints. Projects with calls in the last 4 weeks will continue to work until the deadline of Aug 12, 2020.

Apr 23, 2020: gcloud min version has been updated

Apr 22, 2020: The error injection planned for Apr 28 is CANCELLED; the JSON-RPC and Global HTTP Batch endpoints will perform normally. The next error injection window will be on May 26 as scheduled.

We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes, we must address features that are no longer compatible with the latest architecture and business requirements.

The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch are two such features. Our support for these features was based on an architecture that used a single shared proxy to receive requests for all APIs. As we move towards a more distributed, high-performance architecture where requests go directly to the appropriate API server, we can no longer support these global endpoints.

We had originally planned to decommission these features by Mar 25, 2019. However, it came to our attention that a few highly impacted customers might not have received the earlier notification.

As a result, we are extending the deprecation timeline to Aug 12, 2020, when we will discontinue support for both of these features. Please note that the timeline extension to Aug 12, 2020 does not apply to the Places API. Please visit Places SDK for iOS migration for information on Places API migration.

Starting February 2020 and running through August 2020, we will periodically inject errors for short windows of time. Please see below for exact details and schedule of planned downtime windows.

We know that these changes have customer impact and have worked to make the transition steps as clear as possible. Please see the guidance below, which will help ease the transition.

Planned downtime

To enable customers to identify systems that depend on these deprecated features before the final turndown date is reached, there will be scheduled downtime for Global HTTP Batch and JSON-RPC starting from February 2020 and running through August 2020.

Below are the details and schedule of the periodic error injection windows. Details of the downtime windows in June and July will be confirmed nearer the time, so please check back on this blog post for the latest schedule.

How do I know if I should migrate?

JSON-RPC

To identify whether you use JSON-RPC, you can check whether you send requests to "https://www.googleapis.com/rpc" or "https://content.googleapis.com/rpc". If you do, you should migrate.

HTTP batch

A batch request is homogeneous if the inner requests are addressed to the same API, even if they are addressed to different methods of that API. Homogeneous batching will still be supported, but through API-specific batch endpoints. If you are currently forming homogeneous batch requests, whether you are using Google API Client Libraries, non-Google API client libraries, or no client library at all (i.e. making raw HTTP requests), you should migrate.

A batch request is heterogeneous if the inner requests go to different APIs. Heterogeneous batching will not be supported after the turndown of the Global HTTP Batch endpoint. If you are currently forming heterogeneous batch requests, you should migrate by changing your client code to send only homogeneous batch requests.

What do you need to do to migrate?

Clients will need to make the changes outlined below to migrate.

JSON-RPC

Endpoints

The following JSON-RPC endpoints will no longer be supported:

  1. https://www.googleapis.com/rpc
  2. https://content.googleapis.com/rpc

Using Client Libraries

If you are using JSON-RPC client libraries (either the Google-published libraries or other libraries), switch to REST client libraries and modify your application to work with them.

Example code for JavaScript
Before

// JSON-RPC request for the list method
gapi.client.rpcRequest('zoo.animals.list', 'v2', {name: 'giraffe'})
    .execute(x => console.log(x));
After
// REST request for the list method
gapi.client.zoo.animals.list({name: 'giraffe'}).then(x => console.log(x));
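Note that the REST-style call above assumes the API's REST interface has already been loaded into the JavaScript client. A minimal sketch, using the same fictional zoo API and assuming discovery-based loading with gapi.client.load:

// Load the zoo API's REST interface first, then call it (sketch only;
// 'zoo' and 'v2' are the fictional API and version used throughout this post)
gapi.client.load('zoo', 'v2').then(function() {
  gapi.client.zoo.animals.list({name: 'giraffe'}).then(x => console.log(x));
});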

Not Using Client Libraries (i.e. making raw HTTP requests)

If you are not using client libraries (i.e. making raw HTTP requests):

  1. Use the REST URLs, and
  2. Change how you form the request and parse the response.

    Example code

    Before
    // Request URL (JSON-RPC)
    POST https://content.googleapis.com/rpc?alt=json&key=xxx

    // Request Body (JSON-RPC)
    [{
      "jsonrpc": "2.0",
      "id": "gapiRpc",
      "method": "zoo.animals.list",
      "apiVersion": "v2",
      "params": {"name": "giraffe"}
    }]
    After
    // Request URL (JSON-REST)
    GET https://content.googleapis.com/zoo/v2/animals?name=giraffe&key=xxx

HTTP batch

Endpoints

The following Global HTTP Batch endpoints will no longer be supported:

  1. https://www.googleapis.com/batch
  2. https://content.googleapis.com/batch

Non-Global HTTP Batch endpoints that include the API name in the URI will continue to be supported. Examples include:

  1. https://www.googleapis.com/batch/storage/v1
  2. https://www.googleapis.com/batch/drive/v3

Heterogeneous batch requests

If you are currently forming heterogeneous batch requests, change your client code to send only homogeneous batch requests.

Example code
This example demonstrates how to split a heterogeneous batch request that spans two APIs (urlshortener and zoo) into two homogeneous batch requests.


Before
// Heterogeneous batch request example.

// Notice that the outer batch request contains inner API requests
// for two different APIs.

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

// Request to zoo API
request2 = gapi.client.zoo.animals.list();

// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

// Request to zoo API
request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

// Create a single heterogeneous batch request object
heterogeneousBatchRequest = gapi.client.newBatch();
// Add the four inner requests
heterogeneousBatchRequest.add(request1);
heterogeneousBatchRequest.add(request2);
heterogeneousBatchRequest.add(request3);
heterogeneousBatchRequest.add(request4);
// Execute the heterogeneous batch request and log the response
heterogeneousBatchRequest.then(x => console.log(x));

After
// Split the heterogeneous batch request into two homogeneous batch requests

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

// Request to zoo API
request2 = gapi.client.zoo.animals.list();

// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

// Request to zoo API
request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

// Create a homogeneous batch request object for urlshortener
homogeneousBatchUrlshortener = gapi.client.newBatch();
// Add the two urlshortener requests
homogeneousBatchUrlshortener.add(request1);
homogeneousBatchUrlshortener.add(request3);

// Create a homogeneous batch request object for zoo
homogeneousBatchZoo = gapi.client.newBatch();
// Add the two zoo requests
homogeneousBatchZoo.add(request2);
homogeneousBatchZoo.add(request4);

// Execute the two homogeneous batch requests and log the responses
Promise.all([homogeneousBatchUrlshortener, homogeneousBatchZoo])
    .then(x => console.log(x));

Homogeneous batch requests

Google API client libraries

If you are using Google API Client Libraries, note that these libraries have been regenerated so that they no longer make requests to the global HTTP batch endpoint. We recommend that clients using these libraries upgrade to the latest version if possible. Please see the language-specific guidance below for the minimum Google API Client Library version to upgrade to.

  1. Java
  2. Python
  3. PHP
  4. .NET
  5. JavaScript
  6. Objective-C
  7. Dart
  8. Ruby
  9. Node.js
  10. Go
  11. C++

Google Cloud Client Libraries

  1. Java

    Upgrade to v1.39.0 or later of the Google Cloud Java Client for Storage (released in v0.57.0 of the Google Cloud Client Library).

  2. Python

    Upgrade to v1.9.0 or later of the GCS Python client; see the dependency sketch after this list.
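As a rough sketch, assuming you consume these clients through the standard com.google.cloud:google-cloud-storage Maven artifact and the google-cloud-storage PyPI package, the minimum versions above could be expressed as:

    <!-- Maven (Java): Cloud Storage client v1.39.0 or later -->
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-storage</artifactId>
      <version>1.39.0</version>
    </dependency>

    # requirements.txt (Python): Cloud Storage client v1.9.0 or later
    google-cloud-storage>=1.9.0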

gcloud

Older versions of gcloud used global batch calls in some cases. You should update your version of gcloud; v148.0.0 and later is compatible with this deprecation.

In general this can be done by running "gcloud components update".
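For example, assuming the Cloud SDK is already installed, you can check the installed version and then update it:

    # Check the installed Cloud SDK version (should be 148.0.0 or later)
    gcloud version

    # Update all installed components to the latest release
    gcloud components update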

Apache Beam/Cloud Dataflow

Users with Dataflow pipelines written using version 2.4 or earlier of the Apache Beam (or Dataflow) Python or Java SDKs should update their pipelines to a later SDK version, ideally the latest available release.
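For example, for Python pipelines the upgrade might look like the following (apache-beam and its gcp extra are the standard package names; Java users would instead bump the Beam SDK version in their build file):

    # Upgrade to the latest Apache Beam SDK with the Google Cloud Platform extras
    pip install --upgrade "apache-beam[gcp]"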

Dataproc Hadoop connector

Users who run Apache Hadoop or Apache Spark jobs directly against data in Cloud Storage using the gcs-connector should update the connector to version 1.6.3 or later.
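For jobs that pull the connector in as a build dependency, this might look like the following Maven sketch (the com.google.cloud.bigdataoss coordinates are the standard ones; pick the artifact variant that matches your Hadoop major version):

    <!-- Maven: Cloud Storage connector for Hadoop 2, v1.6.3 or later -->
    <dependency>
      <groupId>com.google.cloud.bigdataoss</groupId>
      <artifactId>gcs-connector</artifactId>
      <version>1.6.3-hadoop2</version>
    </dependency>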

Non-Google API client libraries or no client library

If you are currently forming homogeneous batch requests and using non-Google API client libraries or no client library (i.e. making raw HTTP requests), then you should stop sending batch requests to the global batch endpoint and instead send them to the batch endpoint of the specific API you are calling, as sketched below.
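As a minimal sketch, using the fictional zoo API from the examples above: the only change to a homogeneous batch request is the endpoint it is sent to; the multipart/mixed request body stays the same.

Before
    // Request URL (Global HTTP Batch endpoint)
    POST https://www.googleapis.com/batch

After
    // Request URL (API-specific HTTP Batch endpoint)
    POST https://www.googleapis.com/batch/zoo/v2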

We’re here to help

For help on migration, consult the API documentation or post questions on Stack Overflow using the 'google-api' tag.