Confidential Computing, Part 1: Tackling the Challenge of Multi-cloud, Distributed Security at Scale

Post by John Manferdelli, original post is found here.

In this three-part series, readers will learn about Confidential Computing, an emerging standard for distributed security at scale. Though Confidential Computing environments provide obvious benefits, adoption and implementation barriers loom large. With the introduction of VMware's open source Certifier Framework project, those barriers diminish, putting the benefits of Confidential Computing within reach for more applications and environments. It's an especially powerful construct for today's multi-cloud world because it enables true end-to-end data security: data at rest, in flight and in use.

Part 1 defines Confidential Computing and provides a high-level overview of the challenges and key factors. Part 2 will address the nuts and bolts of a Confidential Computing environment. The series closes with Part 3, introducing the open source Certifier Framework for Confidential Computing.   

What is Confidential Computing? 

As multi-cloud becomes the de facto strategy for computing, the need to secure programs and their data in those third-party managed, shared environments grows urgent. Securing data depends not only on encrypting it at rest and in flight but also while it is in use. Today, data is most commonly encrypted at rest (in storage) and in flight (across the network), but not while in use (in memory). Security is often bolstered with secure key management and trust establishment, but these practices can fail without flawless operations, and they demand unconditional (and unverifiable) reliance on the operators of the computing resources. More fundamentally, they leave a critical gap: when data is in use, as a program consumes and manipulates it, it is vulnerable. This is the phase where security threats and privacy breaches are most profound, and often the infrastructure operator and insiders are the weak link.
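The gap described above can be made concrete with a toy sketch. The XOR "cipher" below is illustrative only, not real cryptography, and the record contents are invented; the point is that a record encrypted at rest must be decrypted into ordinary process memory before a program can compute on it.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher: stands in for real encryption, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)
record = b"patient_id=1234,diagnosis=flu"

# At rest: the record is stored in encrypted form.
stored = xor_cipher(record, key)
assert stored != record

# In use: to compute on the record, the program must first decrypt it,
# so the plaintext sits unprotected in ordinary memory, visible to
# anyone (or any malware) able to read this process's address space.
in_memory = xor_cipher(stored, key)
assert in_memory == record
```

Confidential Computing closes this last gap by keeping the in-use working set inside hardware-protected memory.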

According to the Confidential Computing Consortium, an industry group dedicated to open source solutions, “Confidential Computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data.”

Today’s conventional infrastructure makes encrypting in-use data challenging. The program and the hardware platform must work in unison; if both are not equally enabled, the ability to encrypt and protect in-use data fails. While layering on more security products and practices may address a portion of the risk, this strategy may actually increase risk by expanding the attack surface and the points of failure. Rather than solving the problem, these additional products make it worse. Shrinking the attack surface requires a principled, simplified systems-level approach to security and privacy that enforces security end to end and removes the cloud provider, or any third party, from the chain of trust. This is exactly what Confidential Computing aims to deliver.

Background: The evolution of Confidential Computing  

The concept of Confidential Computing starts with the hardware, specifically the chip providers. In 2013, Intel introduced its take on a trusted execution environment (TEE) with its Software Guard Extensions (SGX). The TEE concept proved so compelling that every major processor design today incorporates the key ideas. AMD offers Secure Encrypted Virtualization (SEV), Arm offers a Confidential Computing Architecture (CCA), RISC-V is exploring Keystone, and NVIDIA is building Confidential Computing into its Hopper GPU architecture.

But for Confidential Computing to deliver its benefits, developers must make changes in the software to form a complete environment. The hardware must work in concert with software. 

So, what does it do?  

Confidential Computing practices offer platform-based mechanisms for protecting the software and the data it uses wherever the software runs. It relies on both the hardware and the software running on it to work in concert to provide these additional protections. These measures are effective even in the presence of malware or when the software is run on a computer managed by an untrustworthy platform administrator.  

Confidential Computing protection is principled and verifiable across a distributed computing substrate, in the sense that it can unconditionally safeguard the integrity and confidentiality of a program’s processing and its data within certain trust assumptions. When deployed in a multi-cloud setting, Confidential Computing promises a whole new vision of distributed security, enabling new guarantees and new privacy-preserving workloads and services. The attestation, verification and encoded “handshakes” between programs and their platforms (processors) ensure a secure computational environment: data at rest, in flight and in use. Finally, since it enables verifiable security properties, Confidential Computing opens the door to new opportunities (like protected data sharing) while reducing the cost of security by replacing ad hoc and ineffective protections with more effective ones.
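The attestation handshake mentioned above can be sketched abstractly. Everything in this sketch is a hypothetical simplification, not any vendor's actual protocol: a real platform signs quotes with a hardware-protected key rooted in the vendor's certificate chain, whereas here an HMAC over a shared key stands in for that signature, and the names (`measure`, `attest`, `verify`) are illustrative.

```python
import hmac
import hashlib
import secrets

# Hypothetical hardware root-of-trust key, fixed at manufacture. In real
# systems the verifier checks a signature chain instead of sharing a key.
PLATFORM_KEY = secrets.token_bytes(32)

def measure(program: bytes) -> bytes:
    # "Measurement": a cryptographic hash of the code loaded into the TEE.
    return hashlib.sha256(program).digest()

def attest(program: bytes, nonce: bytes) -> bytes:
    # The platform authenticates the measurement plus a fresh nonce,
    # proving which code is running and that the quote is not a replay.
    return hmac.new(PLATFORM_KEY, measure(program) + nonce, hashlib.sha256).digest()

def verify(program: bytes, nonce: bytes, quote: bytes, trusted: bytes) -> bool:
    # A relying party accepts only if the quote is genuine AND the
    # measurement matches the code it expects (its trust policy).
    expected = hmac.new(PLATFORM_KEY, trusted + nonce, hashlib.sha256).digest()
    return measure(program) == trusted and hmac.compare_digest(quote, expected)

program = b"confidential-workload-v1"
nonce = secrets.token_bytes(16)
quote = attest(program, nonce)

assert verify(program, nonce, quote, measure(program))           # expected code passes
assert not verify(b"tampered", nonce, quote, measure(program))   # modified code fails
```

The nonce prevents replaying an old quote, and the measurement check is what lets a remote party trust the code without trusting the platform's operator.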

A whole new world  

With Confidential Computing practices in place, applications in a multi-cloud environment become more secure, and some become possible for the first time:

  1. Collaborative machine learning and data sharing: CC allows many different entities to pool training and analytic data without disclosing it to any other party in the pool or to a trusted third party. A related application is selective, policy-controlled data sharing, often called a data economy application.
  2. Privacy-protected services, including server-assisted motion planning: CC enables services with privacy guarantees. For example, if a robot manufacturer communicates with a robot on your factory floor to do motion planning, CC can ensure the manufacturer operates the service without exposing your operational data.
  3. Secure Kubernetes management, including data protection that holds unconditionally against infrastructure providers: CC allows you to run your applications in a multi-cloud environment while assuring that cloud providers cannot see or change your data.
  4. Privacy-protected data processing with auditable rules that enforce specific government regulations or legal requirements, such as GDPR protections even outside sovereign boundaries: CC can ensure that sensitive data, including PII or health information, is processed under strictly enforced policy wherever it resides. A sovereign cloud can be established in a data center anywhere and still ensure compliance with the privacy rules of the jurisdiction where the data originated.
  5. Hardware security modules without additional hardware; secure key and data services: Among the low-hanging fruit for CC is the ability for organizations to provide a protected key service and protected, policy-controlled data access anywhere in the cloud.
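The key-service idea in item 5 can be sketched as a tiny policy check. This is a hypothetical simplification (the `KeyBroker` class and `release_key` method are invented for illustration): in a real deployment, the measurement would come from a verified attestation quote rather than being passed in directly, and the key would be released over an encrypted channel terminating inside the TEE.

```python
import hashlib
import hmac
import secrets

class KeyBroker:
    """Hypothetical key-release service: holds a data key and releases it
    only to a workload whose attested measurement matches its policy."""

    def __init__(self, allowed_measurement: bytes):
        self.allowed = allowed_measurement
        self.data_key = secrets.token_bytes(32)

    def release_key(self, measurement: bytes) -> bytes:
        # Policy check: only the expected workload receives the key.
        # In practice, `measurement` comes from a verified attestation quote.
        if not hmac.compare_digest(measurement, self.allowed):
            raise PermissionError("attestation does not match policy")
        return self.data_key

trusted = hashlib.sha256(b"workload-v1").digest()
broker = KeyBroker(trusted)

assert broker.release_key(trusted) == broker.data_key  # policy match: key released
try:
    broker.release_key(hashlib.sha256(b"malware").digest())
    raise AssertionError("unexpected release")
except PermissionError:
    pass  # policy mismatch: key withheld
```

Because the release decision is bound to an attested measurement rather than to credentials an operator could copy, the service behaves like an HSM without dedicated hardware.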

Challenges ahead 

Pairing a Confidential Computing-enabled program with an equally enabled hardware platform produces an entirely new way to secure workloads and cloud environments. Because Confidential Computing principles are embedded in the platform and, to a certain extent, immutable, this combination of hardware and software offers stronger assurances than standalone security products or practices.

But employing Confidential Computing requires some significant changes to the cloud environment (data center server farms), as well as the software programs. While the processor manufacturers enjoy a head start thanks to Intel’s early work, the software and cloud providers need to play catch-up. 

Stay tuned to the Open Source Blog for Part 2 and Part 3. Follow us on Twitter for more deep dives into the world of open source contributing.
