Compute Confidential: In Hardware We Trust

Technically Speaking with Chris Wright
00:01 — Chris Wright

Cloud providers are great. They give us access to all the infrastructure we need to run our applications without having to worry about maintaining it ourselves. But there's one big catch: if we don't control the hardware, we can't be sure it hasn't been compromised. Now, suppose you do trust your cloud provider, but you have an edge device sitting out in public. How do you tell whether it's been compromised, even if it came from a reliable source? Traditionally, running code exposes its data to the lower hardware layers. So if that hardware is vulnerable, how do we know the environment our code runs in is confidential and the integrity of our data is protected?

00:36 — INTRO ANIMATION
00:45 — Chris Wright

When it comes to integrity, we've made strides in code signing, like the Sigstore project, to verify the provenance of software and certify the code's trustworthiness. And to keep our data confidential, cryptographic techniques like secure multi-party computation ensure that no individual party can access another party's data by shipping the computation to the data source, though this requires trust assertions in both directions. Ensuring that trust takes system-level support, and when we don't control the system ourselves, we need a reliable way to tell us, or attest, that the runtime environment is, in fact, trustworthy. Attestation gives us a way to authenticate: it's a means for one system to make reliable statements about the software running on another system. The remote party can then make authorization decisions based on that information. It makes sense that we protect sensitive data at rest using full-disk encryption, and in transit using TLS and HTTPS. But we've only recently developed the technical capacity to encrypt data during runtime as well. A trusted execution environment is a secure enclave in which code runs protected from the host. In this way, we have a level of assurance that our data will remain confidential and tamper-free while executing in a cloud environment. A TEE is defined by the Confidential Computing Consortium as "an environment that provides a level of assurance of data confidentiality, data integrity, and code integrity." And all of this relies on a hardware root of trust. To shed some more light on this topic, we have Lily Sturmann, an engineer on Red Hat's Emerging Technologies team. Hey, Lily, good to see you.
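To make that idea a bit more concrete, here is a minimal Python sketch of the attestation decision flow. It is a toy illustration only, not a real TEE or TPM quote format: the HMAC and the shared ATTESTATION_KEY stand in for the hardware-backed signature a real device would produce, and all function and variable names are made up for the example.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for a hardware-backed attestation key. In a real TEE or TPM,
# the evidence is signed by a key rooted in the hardware, not a shared secret.
ATTESTATION_KEY = secrets.token_bytes(32)

def measure(software: bytes) -> bytes:
    """Measurement (hash) of the software the remote system claims to run."""
    return hashlib.sha256(software).digest()

def make_attestation_report(software: bytes) -> dict:
    """Attester side: report a measurement plus evidence that it is genuine."""
    m = measure(software)
    evidence = hmac.new(ATTESTATION_KEY, m, hashlib.sha256).digest()
    return {"measurement": m, "evidence": evidence}

def authorize(report: dict, expected_measurement: bytes) -> bool:
    """Relying party: verify the evidence, then decide whether to trust."""
    expected_evidence = hmac.new(
        ATTESTATION_KEY, report["measurement"], hashlib.sha256
    ).digest()
    genuine = hmac.compare_digest(report["evidence"], expected_evidence)
    return genuine and report["measurement"] == expected_measurement

# Example: only release a secret to a system running the software we expect.
trusted_build = b"my-signed-workload-v1.2"
report = make_attestation_report(trusted_build)
if authorize(report, measure(trusted_build)):
    print("Attestation passed; release the workload secret.")
else:
    print("Attestation failed; withhold the secret.")
```

In a real deployment the evidence is a quote signed by a key rooted in the hardware and verified against the vendor's certificate chain, but the relying party's authorization decision follows this same shape.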

02:34 — Lily Sturmann

Hi, Chris, it's great to be here.

02:36 — Chris Wright

So if I wanna build a secure system, I know that it needs a hardware root of trust. I think about it as these sorts of layers, or levels, of attestation, where one layer can leverage the trust of another. And I'm familiar with TPMs, and there are differences between a TPM and a TEE, or trusted execution environment. Can you break down how a TEE's hardware root of trust is distinct from past approaches with TPMs?

03:05 — Lily Sturmann

Yeah, definitely. This is a really good point to start on. So both TPMs and TEEs use a hardware root of trust, and that's really important. Like you were saying with the different layers of trust, I like to think of it as a chain of trust. Each link in the chain signs off on, or verifies, the next link. And the root of that trust chain is obviously really important: everything else in the chain hinges on that root being trustworthy. This is why, for security-sensitive purposes and applications, it's really important to have a hardware root of trust, because hardware is more difficult to tamper with, and more tamper-evident, than software generally is. So when we have a hardware root of trust for a TPM, that can help attest the state of the system and let you know that your system is running with integrity, in the way that you expect, but it doesn't actually provide confidentiality. The TEE is different because the TEE uses the CPU itself as the hardware root of trust. And this lets you know that your system is setting up a TEE, which is a confidential area of memory where your sensitive workloads can run, and the CPU is the root of trust that lets you know that this is happening correctly.
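To picture the chain of trust Lily describes, here is a simplified Python sketch modeled loosely on how a TPM extends a Platform Configuration Register: each link's measurement is folded into a running hash, so tampering anywhere in the chain changes the final value. The component names are invented for the example, and this illustrates the idea rather than the actual TPM command interface.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Fold one link of the chain into the running value,
    in the style of a TPM PCR extend: new = SHA-256(old || measurement)."""
    return hashlib.sha256(register + measurement).digest()

# Each layer measures the next component before handing off control to it.
boot_chain = [b"firmware-v7", b"bootloader-v2.06", b"kernel-6.1", b"initrd"]

register = bytes(32)  # measurement registers start out zeroed
for component in boot_chain:
    register = extend(register, hashlib.sha256(component).digest())

print("final measurement:", register.hex())
# A verifier that knows the expected final value can tell whether every
# link in the chain was what it expected -- integrity, but not confidentiality.
```

As Lily notes, this kind of measurement chain gives you integrity evidence; a TEE's CPU-based root of trust adds the confidentiality piece by running sensitive workloads in an area of memory protected from the host.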

04:37 — Chris Wright

Yeah, that makes sense. There's a lot of work going on under the hood, and making sure you're applying this technology to the right problem domains makes a lot of sense. My mind goes immediately to critical infrastructure, things like PKI, or regulated industries like healthcare and the financial sector, where you may care about secure multi-party computation, where you can safely bring code and data together without actually giving access to the data. I really see a need for confidential computing there, but I don't see a clear de facto standard emerging. There are a lot of different hardware models and different approaches at the software layer. So how do you see this evolving into the future?

05:26 — Lily Sturmann

So these different hardware models of TEEs, like Intel SGX, or TDX, which is the newer one, or AMD SEV, have very divergent ways that you interact with them. In order to make that easier for people, what we can do at the software level is give people a predictable interface and a familiar set of tools they can use to take advantage of the capabilities that are provided in the hardware. So at Red Hat, we have a couple of different initiatives right now around confidential computing. One of them is called CoCo, or Confidential Containers, and this is actually an integration with Kubernetes. At the same time, we have another project called Confidential Workloads, and this is more of an integration with Podman, so that you can run confidential services. As you might guess, both of these can have really important applications at the edge as well. Furthermore, there is a group at the Linux Foundation called the Confidential Computing Consortium, and Red Hat is a member of the CCC. A lot of important conversations are happening there around how to make confidential computing more visible, accessible, and useful to people. This is a group that brings together hardware vendors, cloud providers, and software engineers to disambiguate the different terms and ways of doing confidential computing, come to more of a consensus, and make it more accessible.
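As a rough sketch of what the Kubernetes side of this can look like for a user, Confidential Containers lets a pod opt into a TEE-backed runtime through a Kubernetes RuntimeClass. The example below writes the manifest as a plain Python dict (the same structure you would express in YAML); the runtime class name, pod name, and image are placeholders, and the exact class name depends on how Confidential Containers is installed on your cluster.

```python
import json

# Hypothetical pod definition: the only confidential-computing-specific line
# is runtimeClassName, which asks Kubernetes to schedule this pod onto a
# TEE-backed runtime installed by the Confidential Containers tooling.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sensitive-workload"},
    "spec": {
        "runtimeClassName": "kata-cc",  # placeholder; depends on your CoCo install
        "containers": [
            {
                "name": "app",
                "image": "registry.example.com/sensitive-app:1.0",
            }
        ],
    },
}

print(json.dumps(pod, indent=2))  # review, then pipe into `kubectl apply -f -`
```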

Chris Wright

I think that's key: the proliferation of low-level details in the hardware, and then the accessibility at the higher level, really gives me confidence that we'll build this secure future that I think we all depend on, as data and IT become an intrinsic part of society. So Lily, thank you so much. I really appreciate this conversation.

Lily Sturmann

Thank you very much for having me here.

07:07 — Chris Wright

With hardware roots of trust, we can run our most sensitive workloads in secure enclaves anywhere, in the cloud, on premises, at the edge, wherever. With confidential compute, we can better protect our privacy, create more trusted systems, and continue to generate data-driven insights. The innovation in hardware combined with the accessibility of consistent software gives us the tools we need to build the trusted future we want.

07:56 — OUTRO ANIMATION

Keywords: Security

Meet the guests

Lily Sturmann

Software Engineer
Red Hat

Keep exploring

Trusted Execution Environment landscape

Learn how Trusted Execution Environments (TEEs) help to maintain data confidentiality and integrity during runtime, regardless of who might own or have access to the machine on which the software is running.

Read the article

5 security considerations for edge

With edge deployments increasingly in demand, security teams need to adjust for an attack surface that extends beyond the datacenter.

Read the blog post


