Building and running modern cloud-native applications has its risks. One of the biggest is that you’re sharing computing resources with an unknown number of other users. Your memory and CPU are shared, and there’s always a possibility that data may accidentally leak across boundaries, where it can be accessed from outside your organization.
A breach, even an accidental one, is still a breach, and if you’re using Azure or another cloud platform to work with personally identifiable information or even your own financial data, a leak could put you in breach of whatever compliance regulations apply. It’s not only user or financial data that could be at risk; your code is your intellectual property and could be key to future operations. Errors happen, even on well-managed systems, and a networking problem or a container failure could expose your application’s memory to the outside world.
Then there’s the risk of bad actors. Although Azure has patched its servers to deal with known CPU-level bugs that can leak data through processor caches, microcode-level issues are still being discovered, and it’s not hard to imagine nation-states or organized cybercriminals using them to snoop through co-tenants’ systems.
Azure’s cybersecurity infrastructure is one of the best. It uses a wide range of signals, with machine learning-based threat detection, to look for malicious activity and quickly flag possible areas for investigation. Security and encryption are built into its underlying platform. Even so, some customers want more than the defaults, as good as they may be. They’re businesses building cutting-edge financial technology in the cloud or using it to process and manage health data. They may even be governments or the military.
Introducing Azure confidential computing
By default, Azure ensures that data is secured when it’s at rest and in transit. We’re familiar with using encrypted storage and network connections, but in most cases we still have to process data in the clear, decrypting it at exactly the point where it’s most at risk of leaking. That’s where confidential computing comes in, drawing on a mix of hardware and software, along with work from Microsoft Research, to build and operate trusted execution environments (TEEs). These TEEs are perhaps best thought of as secure containers that protect both the compute and memory resources your application needs, shielding them from other users by preventing untrusted code from running in that memory space.
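To make the idea concrete, here’s a minimal sketch of what the trusted side of a TEE application can look like, assuming Microsoft’s open source Open Enclave SDK, which Azure documents for confidential computing development. The secrets.edl interface and process_secret function are illustrative names invented for this sketch, not part of any shipping product:

```c
/* enclave.c -- a minimal trusted-side sketch, assuming the Open Enclave SDK.
 * The boundary is declared in a hypothetical secrets.edl file:
 *
 *   enclave {
 *       trusted {
 *           public int process_secret([in, string] const char* input);
 *       };
 *   };
 */
#include <string.h>
#include <openenclave/enclave.h>
#include "secrets_t.h" /* generated from secrets.edl by oeedger8r */

int process_secret(const char* input)
{
    /* Anything decrypted here lives only in enclave-protected memory,
     * which the host OS, the hypervisor, and co-tenants cannot read. */
    char plaintext[256];
    strncpy(plaintext, input, sizeof(plaintext) - 1);
    plaintext[sizeof(plaintext) - 1] = '\0';

    /* ... decrypt and compute on the sensitive data here ... */

    /* Scrub the buffer before returning so no plaintext lingers. */
    memset(plaintext, 0, sizeof(plaintext));
    return 0;
}
```

The only way in or out of the enclave is through the functions declared in that EDL file; everything else the host process does stays outside the protected memory.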
Protecting both CPU and memory makes it possible to provide authorization methods that lock down compute, ensuring that only your own trusted code runs, and that prevent outside code from crossing memory boundaries into the protected space. When an application frees up a TEE, it’s flushed, ensuring that no data is left behind in processor caches or in memory. External applications can’t read that memory, and they can’t modify it either, so they’re unable to inject code across the protection boundary.
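That “only trusted code runs” guarantee rests on attestation: the enclave can produce a CPU-signed report of exactly what code it’s running, which a remote party verifies before releasing any secrets to it. Here’s an enclave-side sketch, again assuming the Open Enclave SDK; it uses the SDK’s report API, and newer releases expose a similar flow through oe_get_evidence:

```c
/* attest.c -- enclave-side sketch of producing an attestation report
 * with the Open Enclave SDK. Treat this as illustrative, not canonical. */
#include <openenclave/enclave.h>

oe_result_t make_remote_report(uint8_t** report, size_t* report_size)
{
    /* The returned report is signed by the CPU and binds the enclave's
     * measurement (a hash of its code) and signer identity, so a remote
     * verifier knows exactly what code it is talking to before handing
     * over keys or data. The buffer is freed with oe_free_report. */
    return oe_get_report(
        OE_REPORT_FLAGS_REMOTE_ATTESTATION,
        NULL, 0, /* optional user data bound into the report */
        NULL, 0, /* optional parameters */
        report,
        report_size);
}
```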
Using SGX in Azure
Azure offers two different TEE models: Virtual Secure Mode and Intel’s SGX (Software Guard Extensions). The first is based on Microsoft’s own Hyper-V, using a modified version that increases isolation by preventing code from crossing hypervisor boundaries. That includes code injected into the TEE by Azure administrators, blocking insider attacks that might otherwise go undetected. SGX adds hardware protection to TEEs, and Azure offers access to SGX-enabled servers for applications that don’t trust Microsoft, or for multiparty applications where only the application itself is trusted and no party may access another’s data (for example, machine learning over health care data from multiple providers).
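On the host side, which remains untrusted, the application loads the signed enclave image and calls into it through wrappers generated from the EDL interface. Here’s a sketch pairing with the hypothetical secrets.edl above, again assuming the Open Enclave SDK on an SGX-capable Azure VM:

```c
/* host.c -- untrusted host-side sketch, assuming the Open Enclave SDK
 * and the hypothetical secrets.edl interface sketched earlier. */
#include <stdio.h>
#include <openenclave/host.h>
#include "secrets_u.h" /* generated from secrets.edl by oeedger8r */

int main(int argc, char* argv[])
{
    oe_enclave_t* enclave = NULL;
    int retval = 1;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s enclave_image\n", argv[0]);
        return 1;
    }

    /* Load the signed enclave image into an SGX-protected region.
     * OE_ENCLAVE_FLAG_DEBUG is for development only; production
     * enclaves drop it so the memory stays non-inspectable. */
    oe_result_t result = oe_create_secrets_enclave(
        argv[1], OE_ENCLAVE_TYPE_AUTO, OE_ENCLAVE_FLAG_DEBUG,
        NULL, 0, &enclave);
    if (result != OE_OK)
    {
        fprintf(stderr, "oe_create_secrets_enclave: %s\n",
                oe_result_str(result));
        return 1;
    }

    /* The only entry points into the TEE are the EDL-declared calls. */
    result = process_secret(enclave, &retval, "encrypted-blob");
    if (result != OE_OK)
        fprintf(stderr, "process_secret: %s\n", oe_result_str(result));

    /* Tearing down the enclave releases its protected memory. */
    oe_terminate_enclave(enclave);
    return retval;
}
```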