Organizations want to lock down tricky workloads or customer data in the cloud. Confidential computing is one way to do that, albeit with some performance and portability limitations.
Confidential computing is a maturing technology that organizations and cloud providers can use to secure data not only in transit and at rest, but while it’s being processed, too. The goal is complete, end-to-end data encryption. With this technology, organizations can migrate and house sensitive data in the cloud and securely collaborate with other companies.
Confidential computing could also alleviate some of the anxiety cloud customers have about a cloud provider's staff eavesdropping on or tampering with their workloads, said Steve Riley, a Gartner analyst. Confidential computing implementations essentially lock the provider out of customer workloads, leaving it no access to or view of the data.
Still, confidential computing has yet to reach maturity and relies on hardware providers such as Intel, AMD and Arm to fully invest in and support the technology. The Confidential Computing Consortium (CCC), a collaborative Linux Foundation project between hardware and cloud providers, aims to accelerate confidential computing maturity and adoption. Its principal founding members include Alibaba, Arm, Google, Huawei, Intel, Microsoft and Red Hat.
While market leader AWS hasn’t fully embraced confidential computing, other major providers believe it’s where the cloud is headed.
“As these technologies mature and get more flexible, they’re going to be easier to adopt,” said Microsoft Azure CTO Mark Russinovich. “And what we ultimately believe and what we’re pushing for is that confidential computing, at least as a basic capability, will be eventually ubiquitous.”
Microsoft was the first public cloud provider to get in on confidential computing in 2017. Since then, the other major cloud providers have taken a few different strategies to offer this capability to users.
As we’ll discuss below, Microsoft, Google Cloud and IBM have deeper engagement in this technology than AWS. This is one area where the other providers could look to distinguish themselves from market leader AWS.
How does confidential computing actually work?
While data encryption at rest and in transit has become standard operating procedure, attackers still have a window: when an application is in use, its data must be decrypted in memory, and that is where they can strike. Confidential computing closes this window by encrypting data in use, running workloads inside a trusted execution environment (TEE), also known as an enclave.
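The gap described above can be sketched with a toy example. The cipher here is illustrative only, not real cryptography, and the data is made up; the point is that a conventional program must decrypt data into ordinary process memory before it can compute on it:

```python
# Toy illustration (NOT real cryptography) of why "encryption in use" matters.
# Data can sit encrypted on disk and travel encrypted on the wire, but a
# normal program has to decrypt it into plain process memory to work with it.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR against a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
at_rest = xor_cipher(b"salary=90000", key)   # encrypted at rest / in transit

# To process the record, we must decrypt it. The plaintext now sits in RAM,
# where a privileged attacker, or the host operator, could read it. A TEE
# keeps this memory encrypted and inaccessible even during computation.
in_use = xor_cipher(at_rest, key)
assert in_use == b"salary=90000"
```

The TEE's job, described next, is to make that decrypted working memory unreadable to everything outside the enclave.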
A TEE, as defined by the CCC, is an environment that guarantees a level of assurance in data integrity, data confidentiality and code integrity. It is, in effect, a secured box. Once the TEE is set up, no unauthorized entity, whether the OS, the application host, a system admin or the cloud provider, can access or change the data inside it.
TEE technology is embedded in hardware, and there are two specific implementations cloud providers are using in their offerings — AMD’s Secure Encrypted Virtualization (SEV) technology and Intel’s Software Guard Extensions (SGX). SEV offers better portability and performance for complex and legacy workloads when compared to SGX, but it lacks memory integrity protection. It also doesn’t shield code from platform owners, which Intel’s technology does, Riley said.
AMD has an updated offering called Secure Nested Paging (SEV-SNP) that supports memory integrity protection, but it isn’t used by cloud providers at this time, according to Riley. Google’s offerings are supported by SEV, while Microsoft and IBM use SGX. The strengths and weaknesses of the hardware are reflected in the cloud providers’ offerings.
Attestation is another key confidential computing concept. It is a cryptographic verification process that proves a secure environment is genuine and running the expected code before any secrets are entrusted to it. Organizations verify their environments via their hardware provider.
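The core idea of attestation can be sketched in a few lines. This is a simplified, hypothetical illustration, not any vendor's actual protocol; the function names and inputs are invented, and real schemes such as Intel's SGX remote attestation rely on hardware-rooted signatures rather than a bare hash comparison:

```python
import hashlib
import hmac

# Hypothetical sketch of attestation's core idea: the hardware "measures"
# (hashes) the code loaded into the enclave, and a verifier releases secrets
# only if that measurement matches a published known-good value.

def measure(enclave_code: bytes) -> str:
    """Stand-in for the hardware's measurement of loaded enclave code."""
    return hashlib.sha256(enclave_code).hexdigest()

def verify_and_release(measurement: str, expected: str, secret: str) -> str:
    """Hand over a secret only to an enclave whose measurement checks out."""
    if hmac.compare_digest(measurement, expected):
        return secret
    raise PermissionError("enclave measurement mismatch: attestation failed")

code = b"def process(records): ..."
expected = measure(code)  # known-good hash published by the workload owner

# The genuine enclave gets the key; tampered code does not.
assert verify_and_release(measure(code), expected, "db-key") == "db-key"
```

In practice this check is what lets an organization confirm the "secured box" really is locked before moving sensitive data into it.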
Let’s look at what each cloud provider offers for confidential computing technology.
Microsoft Azure began offering confidential computing VMs in 2017 and has close ties to the open source side of confidential computing. Microsoft contributed the Open Enclave SDK, a single unified abstraction layer to build TEE-based applications, to the CCC. By using the Open Enclave SDK, your code within an enclave is portable between Intel and ARM hardware and both Linux and Windows.
Microsoft offers SGX-enabled confidential computing VMs, currently its DCsv2 series. These SGX-enabled VMs act as an abstraction layer between the hardware and your application and fully remove Microsoft from the trusted computing base, meaning Microsoft cannot access or view the data in any way. Because of this, SGX-enabled VMs draw a harder line between you and the cloud provider than AMD-based offerings, such as Google Cloud Confidential VMs.
And while these Azure VMs are more portable across different compute frameworks, you can't simply use them with any workload, as you can with Google Cloud Confidential VMs. To make an existing workload confidential, you'll have to refactor the application code and make major software changes.
IBM Cloud first announced its confidential computing capabilities in 2018 and is on its fourth generation of the technology. IBM Cloud has a variety of services in the space, offering both enclave and managed confidential computing services.
IBM Secure Execution for Linux, which runs on IBM z15, is a TEE service that isolates large workloads in the cloud for data integrity and is a good fit for hybrid cloud environments. IBM Cloud Data Shield is an SGX-based service that secures containerized workloads on Kubernetes and Red Hat OpenShift.
On the managed side, IBM's Hyper Protect Services portfolio embeds secure enclave technology in database, key management, virtual server and financial services offerings. IBM's Hyper Protect Services is the only public cloud service with FIPS 140-2 Level 4 certification.
IBM's confidential computing road map also includes Red Hat's Enarx, an open source project that aims to make it easier to build confidential computing applications that run across different providers and platforms by abstracting away the underlying TEE hardware. Enarx is not yet available.
IBM's confidential computing managed services and scale make it a fit for large enterprises that need to protect large amounts of regulated or confidential data in the cloud.
Announced in July 2020, Google’s Confidential Computing portfolio is the most recent cloud offering in this space. At time of publication, it includes Confidential VMs and Google’s confidential computing open source framework, Asylo. With these tools, users can make an existing Google VM confidential with one click in the Google Cloud Platform console. Confidential Google Kubernetes Engine nodes, currently in preview, will have similar capabilities.
Because these services run on AMD’s hardware, you can make existing workloads and applications confidential without changing the code. Essentially, any Google workload you’re running can be made confidential.
Confidential VMs also include a few additional features:
- Audit reports for compliance;
- Policy controls to define privileges for Confidential VMs;
- Integration with additional security measures like Shared VPCs and firewalls to separate Confidential and regular VMs within projects; and
- A virtual Trusted Platform Module to share secret keys for encrypted data within the Confidential VM.
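The last item in the list, sealing keys to a virtual Trusted Platform Module, rests on a simple idea: a secret is derived from a measurement of the system's state, so a tampered system cannot reproduce the key. The sketch below is a loose illustration with made-up function names and inputs; real vTPMs bind keys to hardware-backed platform configuration registers (PCRs), not a bare hash like this:

```python
import hashlib

# Hypothetical sketch of "sealing": a key only exists for a system whose
# measured boot state matches what the key was sealed against.

def seal_key(master_secret: bytes, boot_measurement: bytes) -> bytes:
    """Derive a key bound to a specific measured system state."""
    return hashlib.sha256(master_secret + boot_measurement).digest()

good_boot = hashlib.sha256(b"expected bootloader + kernel").digest()
tampered  = hashlib.sha256(b"patched bootloader + kernel").digest()

# The untampered VM rederives the same key; a modified one cannot.
assert seal_key(b"master", good_boot) == seal_key(b"master", good_boot)
assert seal_key(b"master", good_boot) != seal_key(b"master", tampered)
```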
Google's offering works for the most traditional confidential computing use case, securing sensitive data in the cloud, but it also works when multiple independent parties must share data. Organizations can always store sensitive data on premises, but companies trying to collaborate and share data sets may not trust one another's data centers. Keeping this data in Confidential VMs offers a solution.
Google’s offering is the easiest to implement, but until SEV can provide memory integrity protection and fully shield code from the platform owner, these offerings won’t be as secure as the SGX implementations.
Unlike the other public clouds with confidential computing offerings, AWS is not a member of the CCC. AWS’ offering, Nitro Enclaves, is in preview at time of publication. Nitro Enclaves is built with AWS’ Nitro Hypervisor technology and is a VM that attaches to an EC2 instance to create secure isolated environments.
A Nitro Enclave inherits some of the CPU and RAM from its parent EC2 instance, which gives you an array of compute and memory options to process sensitive workloads. Cryptographic attestation is performed through the Nitro Hypervisor, so only the enclave can process this data. The service also integrates with AWS Key Management Service.
While the Nitro Enclaves service isn’t built on Intel SGX or AMD SEV, it does shield data from its attached EC2 instance. The hardware virtualization traps and blocks any attempt by the instance to randomly read from or overwrite the memory allocated to the enclave, Riley said.
It serves various use cases, such as securing confidential healthcare or financial data. If multiple independent parties want to process the same set of sensitive data in a way that benefits each party, they can also do so with Nitro Enclaves. This is one of the primary use cases for confidential computing today, Riley said.