Podcast
Root Causes 455: PQC Standardization in IETF


Hosted by: Tim Callan, Chief Compliance Officer, and Jason Soroko, Fellow
Original broadcast date: January 10, 2025
We talk with guest Sofia Celi of Brave Browser, who leads the IETF PQC standardization effort, about the process of setting standards for PQC-compatible digital certificates. We learn about expected timelines, hybrid strategies, the NIST PQC onramp's role, and more.
Podcast Transcript
Lightly edited for flow and brevity.
And it is true that eventually, once these become standards that are also incorporated into the protocols that secure the transport of the internet, a lot of companies and organizations adopt those standards and push them. The big CDNs and the server side of things will adopt them, and on the client side the browsers will start adopting them too, and, of course, so will all of the parallel organizations that exist around them, which means things like certificate management, key management, even UI concerns. There are a lot of places that the internet eventually touches.
For example, one of the biggest drafts we currently have at the TLS Working Group is how to add a KEM into the key exchange of the TLS 1.3 protocol. That draft has existed for years now, so we were almost on par, but we were still waiting for NIST to make the final decision on who the winners were and actually publish the set of standards so that we could also turn those drafts into RFCs.
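As a rough illustration of what that draft means in practice, here is a minimal client sketch, assuming Go 1.24 or later, whose crypto/tls package ships the draft hybrid group X25519MLKEM768. The constant name and the classical fallback shown are properties of that Go release, not something specified in the conversation.

```go
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// Prefer the hybrid post-quantum group first, with a classical
	// fallback for peers that do not support it yet.
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS13,
		CurvePreferences: []tls.CurveID{
			tls.X25519MLKEM768, // hybrid ML-KEM + X25519 group (Go 1.24+)
			tls.X25519,         // classical fallback
		},
	}

	conn, err := tls.Dial("tcp", "example.com:443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Printf("negotiated TLS version: %x", conn.ConnectionState().Version)
}
```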
At the moment, because those drafts already exist and have been parked for a while, what is currently happening is that we're trying to transform them into proper RFCs so people can use them. There are a bunch of drafts currently: a lot of them at the TLS Working Group, some at the MLS Working Group, one at the ACME Working Group. A lot of working groups have already started preparing. The focus has mostly been on the confidentiality parts of the protocols. The way in which we establish a shared secret is where we are putting most of our focus, because we consider that there is a concrete attack against the confidentiality part.
Someone can harvest now and decrypt later. There has been less of a focus on the authentication part, though I do think that next year we will see the rise of the authentication part, mostly because two or three weeks ago Amazon published that they are going to be issuing certificates for their internal, private CA with ML-DSA. That means the industry is already trying to issue these certificates, even for private use cases. This is a private CA, but it already signals that there is interest in actually experimenting with and deploying the authentication part.
There was a really early experiment launched between Cloudflare and Google Chrome in 2016, I think, that showed it was feasible to deploy a hybrid key exchange. I think they used an isogeny-based algorithm plus an elliptic curve algorithm. And it showed that it was okay, that nothing really broke. There were some impacts on the round trip at the TCP layer, and there were some middleboxes that couldn't handle it, but it was something we could solve. And we saw that quite a lot for the key exchange part. For the confidentiality part, there were a bunch of experiments performed by a lot of companies, and similar experiments done by Amazon. But we saw that less with the authentication part, because it's really difficult to run these emulated experiments: you have to have a way to issue these certificates, and you would have to do a coordinated experiment between public CAs and the server and the client. That's much more difficult to launch. But it's great to see this experimentation, and it's exciting, at least from Amazon with a private CA, even when they control both endpoints, because if they succeed and show us that nothing went terribly wrong, then maybe the public CAs will be able to start migrating, because now we have assurances from the experimentation and actual data and actual numbers showing us that everything was fine.
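A minimal sketch of the hybrid pattern those experiments tested appears below: run a classical key exchange and a post-quantum KEM side by side and derive the session key from both secrets, so the connection stays secure as long as either component survives. The KEM shared secret here is a hypothetical placeholder (random bytes); in a real deployment it would come from an ML-KEM encapsulation, and both secrets would feed the protocol's key schedule (HKDF in TLS 1.3) rather than a single hash.

```go
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	curve := ecdh.X25519()

	// Classical component: an ordinary X25519 exchange.
	clientKey, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	serverKey, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	ecdhSecret, err := clientKey.ECDH(serverKey.PublicKey())
	if err != nil {
		panic(err)
	}

	// Post-quantum component: hypothetical stand-in for a KEM shared secret
	// (in practice this would be the output of an ML-KEM encapsulation).
	kemSecret := make([]byte, 32)
	if _, err := rand.Read(kemSecret); err != nil {
		panic(err)
	}

	// Hybrid combination: both secrets contribute to the derived key, so a
	// break of either algorithm alone does not expose the session key.
	material := append(append([]byte("hybrid-demo"), ecdhSecret...), kemSecret...)
	sessionKey := sha256.Sum256(material)
	fmt.Printf("hybrid session key (truncated): %x\n", sessionKey[:8])
}
```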
On the authentication part, on the contrary, the reason why I think it has been delayed is that it's much more difficult to migrate the authentication systems. You have to coordinate with the different Certificate Authorities. You have to come up with a way to expire the now-old certificates, and how we expire certificates has always been done horribly. We have to coordinate with a lot of parties, and the majority of the signature algorithms that NIST has proposed to standardize have public key sizes that are much larger compared with their classical counterparts. We still don't know, because we send so many signatures at the TLS layer, for example. I think we send six signatures in a handshake. Because we send so many, the impact of the signatures at the authentication level is felt much more than at the key exchange level.
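To make the size concern concrete, the back-of-envelope sketch below compares algorithm-level sizes, using figures from RFC 8032 (Ed25519), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA). The six-signatures-per-handshake count follows the figure mentioned above, and the two-transmitted-public-keys count is an illustrative assumption; real handshakes vary with chain length, SCTs, and OCSP staples.

```go
package main

import "fmt"

// Rough public-key and signature sizes in bytes. These are algorithm-level
// sizes, not full certificate sizes.
type sigAlg struct {
	name    string
	pubKey  int
	sigSize int
}

func main() {
	algs := []sigAlg{
		{"Ed25519", 32, 64},
		{"ML-DSA-44", 1312, 2420},
		{"ML-DSA-65", 1952, 3309},
		{"SLH-DSA-128s", 32, 7856},
	}

	// Illustrative assumption: six signatures and two transmitted public keys
	// per handshake (leaf and intermediate); actual counts vary by deployment.
	const sigsPerHandshake = 6
	const keysPerHandshake = 2

	for _, a := range algs {
		total := sigsPerHandshake*a.sigSize + keysPerHandshake*a.pubKey
		fmt.Printf("%-13s pk=%5d B  sig=%5d B  approx. handshake auth bytes=%6d B\n",
			a.name, a.pubKey, a.sigSize, total)
	}
}
```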
For example, one of the proposals, Merkle Tree Certificates, is to rethink the way certificates are handled so that they are more efficient, without having to worry so much about the size of the public key. Or, for example, there's another proposal that I have co-authored, called KEM-TLS, which, instead of using signatures, uses KEMs to arrive at authentication. That's another route you could take. There are other things you could also use for authentication.
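The KEM-TLS idea can be sketched in a few lines: the server's certificate carries a KEM public key instead of a signature verification key, the client encapsulates to it, and authentication is implicit because only the holder of the matching private key can decapsulate and derive the same session keys. The sketch below uses X25519 as a stand-in DH-based KEM so it runs with only the Go standard library; an actual KEM-TLS deployment would use a post-quantum KEM such as ML-KEM and integrate with the TLS 1.3 key schedule, so this is a conceptual illustration, not the protocol itself.

```go
package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// encapsulate returns a ciphertext (an ephemeral public key) and the shared
// secret derived against the server's long-term KEM public key.
func encapsulate(serverPub *ecdh.PublicKey) (ciphertext, shared []byte, err error) {
	eph, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	ss, err := eph.ECDH(serverPub)
	if err != nil {
		return nil, nil, err
	}
	return eph.PublicKey().Bytes(), ss, nil
}

// decapsulate recovers the same shared secret using the server's private key.
func decapsulate(serverPriv *ecdh.PrivateKey, ciphertext []byte) ([]byte, error) {
	ephPub, err := ecdh.X25519().NewPublicKey(ciphertext)
	if err != nil {
		return nil, err
	}
	return serverPriv.ECDH(ephPub)
}

func main() {
	// The server's long-term KEM key pair; the public half would be carried
	// in its certificate in place of a signature verification key.
	serverKey, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Client: encapsulate to the certificate's KEM public key.
	ct, clientSecret, err := encapsulate(serverKey.PublicKey())
	if err != nil {
		panic(err)
	}

	// Server: decapsulate; being able to do this is what authenticates it.
	serverSecret, err := decapsulate(serverKey, ct)
	if err != nil {
		panic(err)
	}

	// Both sides derive the same handshake key with no online signature.
	clientDerived := sha256.Sum256(clientSecret)
	serverDerived := sha256.Sum256(serverSecret)
	fmt.Println("implicit authentication succeeded:",
		bytes.Equal(clientDerived[:], serverDerived[:]))
}
```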
At the moment, we seem to be fixated on signatures, but maybe it turns out that we cannot handle those signatures in the real world, and hence we have to move to other authentication mechanisms. Or maybe it means that some protocols will not be able to handle them at all. Take the case of DNSSEC: if it turns out that we cannot just completely migrate the signatures of DNSSEC to postquantum ones, then maybe we will have to redo the whole DNSSEC protocol. That's also an approach we could take.
There are a lot of open avenues. At the moment, they are all stopped because we are focusing more on the key exchange part. But if, for example, some of the experiments that Amazon or Cloudflare are running at the moment really show us that the signatures make connections choke, then maybe we should think of other authentication mechanisms.
If you want that to actually become a standard of the IETF, then it's different, because what you have to do is present the proposal and get it standardized by the IETF. But nothing prevents people from assigning a code point to an algorithm and using it in their infrastructure. It is something that we definitely keep in mind, but the IETF independently forms its own opinions. It is a point that we take into account, but we also independently decide what the consensus of the different parts of the internet is.
There's another one that is based on hashes. It's a hash-based algorithm that was called SPHINCS+. I don't remember anymore what NIST is calling that one, I don't remember the new name, but it's SPHINCS+, and that one, for example, has been really interesting for other protocols. It has not been discussed at the TLS level, for example, but some people have actually proposed using it at the DNSSEC layer, because it has some sizes that are much more amenable to being used in DNSSEC. But it would mean changing, in a way, how the DNSSEC protocol works. There's still some discussion about whether that's the best path forward.
Then there have also been some people talking about what this will mean for the future of a new TLS. One of the things that proposal will, for sure, have to have is quantum security, and some people have also claimed that it would be great to incorporate the new privacy mechanisms that certain extensions of TLS 1.3 have. Maybe having good mechanisms to check for expiration of certificates, or to check whether certificates are included in transparency logs, while preserving privacy when you're performing those checks, should be considered as well. Those are open avenues as well if we want to think of a new TLS.
If you have more exposure, for example, to how the internet works in different regions of the world, because you see different types of censorship, or you see internet degradation because the connectivity is not great, that is also something the IETF is deeply interested in, and it touches a lot on postquantum cryptography, because the majority of the experiments we have done, we have always done under perfect internet conditions. But that's not the whole world. If you have actual experience of when the connectivity of the internet is not great or it's heavily censored, and you are trying to deploy postquantum cryptography, and you failed because it makes things even worse, then we would really love to hear that specific perspective. The perspective of the IETF, in my opinion, at the moment, is actually about listening more to all of the different cases that the world has.

