Don't Trust, Verify: Building End-to-End Confidential Applications on Google Cloud

DEC. 9, 2025

Alberto Gonzalez, Staff Software Engineer
Rene Kolga, Senior Product Manager

In today's data-driven world, many valuable insights depend on sensitive data: processing personally identifiable information (PII) for personalized services, collaborating on confidential datasets with partners, or analyzing sensitive financial information. The need to protect data not just at rest or in transit, but also during processing, has become a critical business requirement.

While encrypting data at rest (on disk) and in transit (over the network) are well-understood problems, the "data-in-use" challenge is often overlooked. This is where Confidential Computing comes in, providing hardware-level protection for data even while it is being processed.

This post demonstrates how, with Google Cloud’s Confidential Space, organizations can build an end-to-end confidential service. We will show how an end user of this confidential service can gain cryptographic assurance that their sensitive data is only ever processed by verified code running inside a secure, hardware-isolated environment—including scenarios where the developer has deployed this service using a scalable, load-balanced architecture.

The Challenge of Trust and Confidentiality at Scale

Running a confidential service in a modern, scalable cloud environment introduces two challenges:

  • Trust and Transparency: For customers to trust that their data is processed privately, they need a way to verify the privacy properties of the code that's running. One simple answer is to open-source the entire application, but this is a non-starter for businesses with valuable intellectual property, proprietary algorithms, or sensitive AI models to protect. This creates a fundamental tension: how can an operator prove their service is confidential without revealing the very source code that makes it valuable?
  • Scalability: Modern cloud applications are built for resilience and scale, which typically means running multiple service instances behind a load balancer. While the usual practice of terminating TLS at the load balancer simplifies key management and the risk profile of the business workload, it means sensitive data is exposed in plaintext at the load balancer for inspection and routing. This breaks the end-to-end confidentiality promise and expands the trusted computing base (TCB) to include the load-balancing infrastructure. The alternative, terminating TLS in each backend server task, would require securely distributing and managing TLS private keys across all application instances, making those keys part of the workload's attack surface and potentially vulnerable within the application itself.

Anchoring Trust with Google Cloud Confidential Space & Oak Functions

The solution starts with a strong, hardware-enforced foundation: Google Cloud Confidential Space. It is a hardened Trusted Execution Environment (TEE) built on state-of-the-art confidential computing hardware. It creates a hardware-isolated memory enclave where code and data are protected from the host OS, other tenants, the cloud provider, and even the cloud project owner. The key primitive it provides is attestation: a signed report from the platform itself that gives verifiable proof of the environment's integrity and the identity of any Open Container Initiative (OCI) container running inside.

For a user to trust that their data is handled privately, full transparency of the application's source code would normally be required. This is complex, and sometimes infeasible when the code or data is proprietary or sensitive. To achieve trust when full workload transparency is not possible, we run the application logic inside a containerized version of a verifiably private sandbox: Oak Functions. The sandbox prevents the business-logic code from logging, storing data to disk or outside the TEE boundary, creating network connections, or interacting with the untrusted host in any way other than what the sandbox explicitly allows in a controlled manner. This keeps a user's sensitive data private.

Thanks to this sandboxed architecture, the user's trust is anchored in the well-defined sandbox (which is open source and reproducibly buildable by anyone), and the sandbox developer’s endorsement of it, not the specific logic it executes. This architectural choice simplifies the trust story for the end-user. Instead of needing to audit and trust a complex, custom application, the user only needs to verify the small, transparent, and open-source Oak Functions container image.

Establishing Trust with Attested End-to-End Encryption with Oak Session

To establish a trusted channel over a load-balanced connection, we layer application-level encryption on top of the standard network-level TLS. For this, we use Oak Session, an open-source library that implements an end-to-end encrypted session protocol. It is designed to build a secure channel directly with the application logic inside the enclave, even when routed through untrusted intermediaries such as a load balancer. Oak Session achieves this through the following features:

Nested end-to-end encryption channel: An encrypted channel is opened inside the outer TLS connection, directly with the confidential workload. This way, even if the outer TLS connection is decrypted by the load balancer, the inner data remains protected. TLS could be used for this nested channel as well, but in Oak Functions we have opted for the Noise framework instead.

Noise is a flexible framework for building secure channel protocols based on Diffie-Hellman key exchange. A key advantage of Noise is its simplicity compared to a full-blown TLS stack: our implementation of the Noise handshakes and the resulting encrypted channel is about 2.5K lines of code, compared to roughly 1.2M for BoringSSL. This small footprint yields a more auditable set of cryptographic primitives, making the protocol easier to implement correctly and to verify.
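The core pattern Noise builds on can be sketched as follows. This is not the Noise protocol itself, only a toy illustration of its shape (ephemeral Diffie-Hellman key agreement plus transcript hashing) using an insecure demonstration group; real Noise implementations use X25519 or X448 and a full HKDF-based key schedule.

```python
# Toy sketch of the core Noise pattern: ephemeral Diffie-Hellman key
# agreement, then key derivation from a hash of the handshake transcript.
# NOT the Noise protocol: the group below is insecure and for
# demonstration only.
import hashlib
import secrets

P = 2**127 - 1  # small Mersenne prime, demo only; Noise uses X25519/X448
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# Each side generates an ephemeral key pair and sends its public key.
client_priv, client_pub = keypair()
server_priv, server_pub = keypair()

# Both sides arrive at the same shared secret.
client_secret = pow(server_pub, client_priv, P)
server_secret = pow(client_pub, server_priv, P)
assert client_secret == server_secret

# Noise hashes every handshake message into a transcript; the channel
# keys (and, later, the session token) are derived from it.
transcript = hashlib.sha256(
    client_pub.to_bytes(16, "big") + server_pub.to_bytes(16, "big")
).digest()
session_key = hashlib.sha256(
    transcript + client_secret.to_bytes(16, "big")
).digest()
```

Because the transcript covers every handshake message, any tampering by an intermediary changes the derived keys and the handshake fails.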

Attestation: The trusted execution environment provides a signed report confirming its integrity and the identity of the workload running within its hardware-isolated memory enclave. Oak Session was developed as a composable framework that allows assertions to be exchanged and verified. In the case of Confidential Space, the assertion consists of the following:

  1. A binding verification key (public key).
  2. A signature over a session token derived from the nested encryption handshake. This signature can be verified with the binding verification key; its role is explained in the Session Binding section below.
  3. An attestation JSON Web Token (JWT) signed by Google Cloud Attestation service. The JWT contains, among many other claims, the fingerprint of the verification key in the eat_nonce field.

The JWT is the first step toward establishing the identity of the platform, its parameters (system image, environment configuration, etc.), and the identity of the workload itself. The JWT alone is, however, not sufficient: it could be exfiltrated by a malicious operator and replayed by other workloads. The binding key and the session token (detailed below) prevent this, completing the picture.

Session Binding: This is the critical step that connects the secure channel to the attestation JWT. During the handshake, both parties derive a unique session token bound to the cryptographic identity of the session. The enclave signs this token with a private key (the binding key) that is cryptographically tied to its attestation assertion; in our case, the tie is the fingerprint of the verification key carried in the eat_nonce claim. By verifying this signature, the client confirms that the entity it performed the handshake with is the exact same one attested to by the hardware, preventing MITM and replay attacks in which an old or invalid attestation could bootstrap a new, malicious session. In our example, the session token is the hash of the Noise handshake transcript; for a TLS channel, it could be derived from the Exported Keying Material.

While the session token could be included in the JWT eat_nonce directly, this would require one attestation per connection, which would add to latency and would require increased attestation quota (by default, attestation is limited to 5 QPS per project). Introducing the binding key allows us to decouple JWT requests from the critical serving flows.
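The binding and the decoupling can be sketched together. The HMAC below is a stand-in for the asymmetric signature a real enclave would produce, and all key handling and encodings are illustrative assumptions, not the Confidential Space wire format:

```python
# Sketch: one attestation covers many sessions. The enclave attests the
# binding key's fingerprint once (in eat_nonce), then signs each
# per-session token with that key. An HMAC stands in for the asymmetric
# signature a real enclave produces; encodings are illustrative.
import hashlib
import hmac

binding_key = b"enclave-held binding key material"  # stand-in for a private key
# Fingerprint the enclave places in the JWT's eat_nonce claim (done once).
attested_fingerprint = hashlib.sha256(binding_key).hexdigest()

def bind_session(handshake_transcript: bytes) -> tuple[bytes, bytes]:
    # The session token is derived from the handshake transcript, so it is
    # unique to this channel; signing it binds the channel to the attestation.
    session_token = hashlib.sha256(handshake_transcript).digest()
    signature = hmac.new(binding_key, session_token, hashlib.sha256).digest()
    return session_token, signature

def client_verifies(session_token: bytes, signature: bytes,
                    eat_nonce: str, key: bytes) -> bool:
    # The client checks that the key matches the attested fingerprint, then
    # checks the signature over its own view of the session token.
    if hashlib.sha256(key).hexdigest() != eat_nonce:
        return False
    expected = hmac.new(key, session_token, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

A replayed JWT fails here: a different session produces a different transcript and token, so a signature lifted from an old session does not verify against the new channel.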

Establishing trust: The JWT extract below shows how it carries information about the platform setup and the workload identity. Crucially, it contains the identity of the OCI registry the image was fetched from, as well as the digest of the image itself.

{
  "aud": "oak://session/attestation",
  "iss": "https://confidentialcomputing.googleapis.com",
  "sub": "https://www.googleapis.com/compute/v1/projects/oak-functions/zones/us-west1-b/instances/oak_functions",
  "eat_nonce": "d3ee341dbbf8986b11e14db61bece35c02a18943ac6bbcd3868cb00676210fa4",
  "submods": {
    "container": {
      "image_reference": "europe-west1-docker.pkg.dev/oak-examples/c0n741n3r-1m4635/echo_enclave_app:latest",
      "image_digest": "sha256:2f81b55712a288bc4cefe6d56d00501ca1c15b98d49cb0c404370cae5f61021a"
    }
  }
}

To establish trust in the workload, the client needs to perform the following verification steps:

  1. Verify the validity of the JWT, including its signature and expiration date.
  2. Verify that the binding verification key matches the fingerprint in the eat_nonce field.
  3. Verify the session token signature with the binding verification key.
  4. Verify the setup and parameters of the workload in JWT match expected values.
  5. Verify that the container field matches an expected value.
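The steps above can be sketched as follows. Step 1 (verifying the JWT signature against Google's published keys) is stubbed out, since it needs a real JWT library; the claim names follow the extract above, while the function name and expected values are illustrative assumptions.

```python
# Sketch of the client-side verification flow (claim names follow the JWT
# extract above; helper names, stubs, and expected values are illustrative).
import hashlib
import time

EXPECTED_ISSUER = "https://confidentialcomputing.googleapis.com"
EXPECTED_IMAGE_DIGEST = (
    "sha256:2f81b55712a288bc4cefe6d56d00501ca1c15b98d49cb0c404370cae5f61021a"
)

def verify_attestation(claims: dict, binding_pub_key: bytes,
                       token_signature_ok: bool) -> bool:
    # Step 1: JWT signature (stubbed here; use a JWT library against
    # Google's published keys) and expiration.
    if claims.get("exp", 0) < time.time():
        return False
    # Step 2: the binding verification key must match the eat_nonce
    # fingerprint.
    if hashlib.sha256(binding_pub_key).hexdigest() != claims["eat_nonce"]:
        return False
    # Step 3: the session token signature, verified with the binding key
    # (the result is passed in here for brevity).
    if not token_signature_ok:
        return False
    # Step 4: the platform setup matches expected, well-documented values.
    if claims["iss"] != EXPECTED_ISSUER:
        return False
    # Step 5: the workload identity matches the expected container digest.
    container = claims["submods"]["container"]
    return container["image_digest"] == EXPECTED_IMAGE_DIGEST
```

Only when every step passes does the client release sensitive data into the channel; a failure at any step aborts the session.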

After this last step, the client has established trust in the platform, its setup, and the workload. A few details remain, though: what can serve as the expected values? In some cases, this is easy. The public key used to verify the JWT is well known, and the majority of claims in the JWT are well documented, including the system images that Confidential Space uses. That leaves one last value: the expected container identity.

Oak Functions is reproducibly buildable, so the client can build it themselves and use the resulting OCI image digest as the reference value, since it will be identical to the image digest in the JWT. This simple approach has a drawback: since Oak Functions is released periodically, verifying and building each release can become impractical. The solution is a classic of software engineering: introduce one level of indirection. For Oak Functions, we trust any OCI image loaded from a trusted OCI registry and repository combination to which only the Oak team can publish. This is a form of endorsement. More advanced forms of endorsement exist, usually involving cryptographic signatures and provenance statements; a well-known example is Cosign, which is natively supported in Confidential Space.
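The two ways of pinning the expected container identity can be sketched as follows. The digest and registry path come from the JWT extract above; the function name and the idea of treating the registry prefix as the trust anchor are illustrative assumptions.

```python
# Two ways to pin the expected container identity: an exact digest from a
# reproducible build, or endorsement via a publisher-controlled registry
# path. Values come from the JWT extract above; names are illustrative.
TRUSTED_DIGESTS = {
    "sha256:2f81b55712a288bc4cefe6d56d00501ca1c15b98d49cb0c404370cae5f61021a",
}
TRUSTED_REGISTRY_PREFIX = "europe-west1-docker.pkg.dev/oak-examples/"

def workload_is_endorsed(image_reference: str, image_digest: str) -> bool:
    # Exact pin: a digest reproduced by building Oak Functions yourself.
    if image_digest in TRUSTED_DIGESTS:
        return True
    # One level of indirection: trust anything published to the registry
    # and repository that only the Oak team can write to.
    return image_reference.startswith(TRUSTED_REGISTRY_PREFIX)
```

The digest check stays valid across registry moves, while the prefix check survives new releases without rebuilding; a client can use either, or both.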

Conclusion: Your Path to Trusted Confidential Computing and Beyond

Google Cloud’s confidential computing architecture and the technologies developed by Project Oak give organizations the best of both worlds: standard, scalable infrastructure and verifiable, end-to-end data confidentiality. The following diagram illustrates the main components and the sequence of interactions until the channel is open.

[Oak Session diagram]

Google Cloud, in collaboration with open-source security tools from Project Oak, provides a complete solution to protect your most sensitive data-in-use within standard, scalable cloud architectures.

What's Next: Powering Secure Generative AI and Agentic Experiences

The principles we’ve discussed allow you to unlock new business opportunities through secure data collaboration and to provide cryptographic, auditable proof of your security and privacy posture to customers and regulators, even when the provider's workload must remain proprietary for intellectual-property reasons, for example in AI, healthcare, and genomic research.

As businesses increasingly adopt Generative AI, the need to protect proprietary models, sensitive prompts, and confidential data processed by AI agents becomes paramount. With the availability of GPUs in Confidential Space, Google Cloud is extending these hardware protections to demanding AI workloads. Imagine a future where an AI agent processes your confidential corporate data to provide insights. By combining Confidential Space with GPUs running open-source models like Gemma, and using Oak Functions and Oak Session to secure the prompts and responses, you can build agentic experiences that are not only powerful but also verifiably secure and private. This framework provides the trust necessary to deploy GenAI in high-stakes enterprise environments.

Call to Action