Podcast
Root Causes 460: The State of PQC with Michele Mosca


Hosted by
Tim Callan
Chief Compliance Officer
Jason Soroko
Fellow
Original broadcast date
January 28, 2025
In this episode we are joined by Dr. Michele Mosca. We discuss his pioneering work identifying the need for post-quantum cryptography, where PQC stands today, and what the future may hold.
Podcast Transcript
Lightly edited for flow and brevity.
So we are recording this in the late days of 2024. You'll probably be listening to it, listeners, in early 2025, and there was just a very important announcement from Google. So for starters, tell us what that was.
This has been an almost 30-year journey. In 1996, researchers said that if these nearly impossible assumptions are met, we could do fault-tolerant error correction. To us in academia, that was a big breakthrough, because going from impossible to nearly impossible is a big advance. We knew, of course, that we were going to have to remove a lot of these assumptions or make them more practical. But again, it went from impossible to "you could do it" under assumptions like 10 to the minus 6 error rates and every qubit being able to interact with every other qubit. Then, years later, the theory improved way beyond my expectations: nearest-neighbor interactions sufficed, and tolerable error rates started approaching even 1%. So we've come a long way in the theory as well as the experiments. But the thing with fault tolerance is, if the devices doing the error correction are noisy and the noise is too high, it actually makes things worse. That's because of the way error-correcting codes work, even with classical information: if this channel here is noisy, we try to fix it by repeating the same message, somehow introducing redundancy. But if our memory is bad and we're writing things down poorly, then when we try to error correct, we actually make things worse. There are a lot of moving parts when you try to implement quantum error-correcting codes. We were improving all the parts, and often they would individually meet the necessary thresholds and quality criteria, but they have to do that in the same system at the same time, so that when you do the error correction and extract the syndromes in real time, the end game is that you get a lower error rate. You get a logical error rate lower than your physical error rate. The theory tells you that if you go to a bigger code, the error gets suppressed even lower. That's what the theory says.
Google's experiments showed that: when they went to a bigger code, they actually had a lower logical error rate than when they had a smaller code, or no code at all. That's just amazing, really beautiful, and it is really taking us into this near-final phase toward a cryptographically relevant quantum computer. It started, again, in the early days with: find me a qubit and show me some basic functionality. Get me multi-qubit systems. Start implementing the building blocks of error correction and so on. Now we're in, as John Preskill called it, the era of quantum error correction. The next phase is really large-scale logical qubit systems, which, among other things, will be cryptographically relevant.
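To make the scaling described above concrete, here is a minimal Python sketch using the widely quoted approximate scaling rule for surface-code-style error correction, where the logical error rate falls roughly as (p/p_th) raised to the power (d+1)/2 once the physical error rate p is below the threshold p_th. The threshold and prefactor values used here are illustrative assumptions, not figures from the episode or from Google's experiments.

```python
# Minimal sketch of the threshold behavior described above.
# Assumed scaling rule: p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# The constants A and p_th below are illustrative placeholders only.

def logical_error_rate(p_physical: float, distance: int,
                       p_threshold: float = 0.01, prefactor: float = 0.1) -> float:
    """Approximate logical error rate for a distance-d code."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

if __name__ == "__main__":
    for p in (0.005, 0.002):            # physical error rates below threshold
        for d in (3, 5, 7):             # increasing code distance
            print(f"p={p:.3f}  d={d}  p_logical~{logical_error_rate(p, d):.2e}")
    # Below threshold, each increase in distance suppresses the logical error
    # rate further; above threshold, bigger codes would make things worse.
```

Running the sketch shows the qualitative effect being described: below threshold, going to a bigger code gives a lower logical error rate, while above threshold the same trick backfires.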
So it's been a long journey, but it's really come a long way, and this is one of the exciting things. It's kind of like the four-minute mile: it's not just one player; there are several now making amazing advances here alongside this Google news. Just yesterday, I think, Physics World gave a prize to Google, and I think to QuEra (I don't know exactly how they pronounce it), who do neutral atoms, a completely different platform, also doing amazing things. Then, of course, you have the Quantinuum results and the IBM results, all, you know, super impressively advancing this frontier of quantum error correction.
Very importantly, when you make a bigger computer, you have to make sure your error rates don't go up. A natural way to do that is to have modular systems, so that once you get each module working and the networking pieces working, then it's a matter of "just" integrating. But that's many moving parts and many achievements, starting with getting a qubit that kind of works, and two qubits that work, and then reducing the error rates and getting all these other functionalities, like measurement and feedback. Now we're at the cusp where we have these architected into platforms where you can do everything in the same system at the same time, fast enough and well enough that you can start entering the scaling era, which will be sort of that final, fifth phase, where you can start implementing algorithms on logical qubits.
Now, the number of logical qubits we have is still not enough; you could still simulate them on a laptop. But remember, 5 to 10 years ago, say 10 years ago, a laptop could outperform even the physical qubits we had. That day has come and gone. We're now at the point where you can't simulate the current quantum computers at the physical qubit level on a laptop, or even a supercomputer. We're still looking for useful applications of these limited-size quantum computations, but they've already passed the threshold where we can't just simulate them on classical devices, and people are working hard on doing useful things with them. Now we're not so much switching as evolving. We still have the physical qubits. We still have NISQ. People are still looking for commercially useful applications, and now we're starting to count in logical qubits. We're actually talking about how many quarters, not how many years or decades, before our logical qubit systems can't be simulated by classical computers. Then when we enter that era and ask what they can do that's useful, we're not limited anymore to the algorithms that are noise tolerant. We can do larger and larger error-corrected quantum computations on these devices. There'll be useful applications before we get to cryptographic relevance, and there'll be even more useful applications beyond computers big enough to break RSA-2048.
My background is actually in classical cryptography. In the early 90s, I was trying to break the public key algorithms of the time, back when it was really a niche and a curiosity for most people. So I've had many mentors and colleagues who've been part of implementing real-world cryptography, deploying it, and getting it standardized, and so I understand. I saw from them that what they're doing is a hard exercise. The theory is hard. The implementation is way harder. There are so many aspects to it where, from academia, we'll come to it and say, oh, why don't you just do it this way? Then you get the 10 reasons why you can't just do it that way. There are a lot of real-world considerations, like deprecation, not wanting to lose a significant fraction of your customer base, interoperability, implementation errors, and all this real-world stuff that we have to be very respectful of. So what I'm trying to do is be useful to the community. Really, what are the practitioners doing? They're giving us the platform, a safe foundation so we can run, so we can race ahead, because we couldn't have had the last 30 years of economic growth without public key cryptography. We're going to leverage digital platforms more than we ever have, and for all this value creation. That's the fun part that people like to talk about. But you can't do that if you don't have the robust guardrails, the secure ways to do all these things. So we need that. We want to enable that.
So again, what we're trying to do is frame this risk amongst all the other ones in a way where we don't understate it and we don't overstate it. But the fact is, we don't know exactly when it's going to happen, and we don't know the full extent of what else quantum computers can break. So the best we can do is try to honestly portray what we do know. My practitioner friends for years have been saying, look, if you gave us a timeline, we'd be better positioned to manage this risk alongside all the other risks we have to manage. Academics would tend to say, well, we can't possibly know, so I'm not going to answer that question. I'm like, look, we don't have to give definite answers, but there's a material difference between "unlikely" meaning less than a 1-in-1,000 chance and "unlikely" meaning 49%. Let's start to give a bit of color there, let's be honest about where we disagree, and let's see where we do agree.
So we're trying to give a realistic timeline for when these devices will come, and then organizations will have to take their risk tolerance and match it against these estimates. These are estimates. It's not perfect. It's not super precise, but it's far better than anything else I've seen. It's really meant as a guide.
Back in 1996, my master's thesis committee at Oxford very cruelly asked me, when do you think we're going to have a quantum computer? I had just started learning about quantum computing, but I had gotten to meet a lot of the pioneers before that, and some of them have since won Nobel Prizes. I was starting to get an intuition for what the deal was. I tried not to answer the question, but eventually did. I said, look, it's going to be more than 20 years, but in 20 years we're going to have 20 qubits, and then it'll start to become clearer.
Translating that into a risk timeline or threat timeline approach, what I was saying is that the probability is effectively zero for the next 20 years, and then it kind of starts to crawl up. So all the probability mass was 20 years into the future. Of course, if you asked many people, you'd get different distributions, but for most of us it was many years in the future. Some people will say, oh, it's always 20 years in the future. Of course, now some of them are saying it's always 10 years in the future. Well, which one is it? Was it always 10 years in the future or always 20 years in the future? I think the fact that it went from always 20 years in the future to always 10 is saying there was progress. What's happened, gradually over the last 25-plus years, is that the probability mass went from being mostly 20 years in the future and beyond to, I would say, anywhere between a fifth and a third of it being less than 10 years out. So it went from being negligible in the next 20 years to there being a one-third chance or higher in the next 5 to 10 years, which, given what's at stake, is an unacceptably high risk to face without some sort of mitigation plan.
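As a toy illustration of the shift in probability mass described above, the sketch below compares two hypothetical discretized timelines. The numbers are assumptions chosen only to match the rough shape described here (negligible within 20 years back then; roughly a fifth to a third within 10 years now); they are not survey results.

```python
# Toy illustration of how the estimated arrival-time distribution for a
# cryptographically relevant quantum computer (CRQC) has shifted.
# All probabilities are illustrative assumptions, not survey data.

circa_1996 = {"0-10 yrs": 0.00, "10-20 yrs": 0.02, "20+ yrs": 0.98}
today      = {"0-10 yrs": 0.30, "10-20 yrs": 0.45, "20+ yrs": 0.25}

for label, dist in (("circa 1996", circa_1996), ("today", today)):
    within_10 = dist["0-10 yrs"]
    within_20 = dist["0-10 yrs"] + dist["10-20 yrs"]
    print(f"{label}: P(CRQC within 10 years) ~ {within_10:.0%}, "
          f"P(within 20 years) ~ {within_20:.0%}")
```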
I understand why the White House and others are saying, get this done by 2030. Do we know for sure it's going to happen by 2030? No. But even being one day too late could have catastrophic consequences. This is not like Y2K, where we knew the date with perfect precision. There's going to be a spectrum of uncertainty here. As time goes on, the probability distribution becomes tighter and tighter: it moves earlier, and the variance shrinks. But we'd be super lucky to even have a 5-year variance, which we don't have.
So if we get this done in 2030 and the cryptanalysis starts in 2032, then we'll have been super lucky, because if it happens in 2032, that means there was a non-negligible chance it could have happened in 2028. I think they're right to be pushing us to have at least our more critical systems ready, and to have the tools ready, by 2030. And hopefully the cryptanalysis doesn't start in 2030, because that would have meant we were playing it way too close.
I always hear this talk: what date? What is the date? What is the date of cryptographic relevance? It always strikes me as a funny question, because it seems to me it shouldn't be a date. It should be a spectrum of time, with a concentration in the middle and kind of a fading relevance on either end. How do you think about that?
I think you have to be ready. It's one thing to say, I'm going to wait and see and decide how to turn the knob, when you already have the solutions and they're deployed, certified, and scalable, and you're just deciding when to turn it on, deploy it, or scale it. That's very different from only then starting to think about all the real-world issues of deploying it. So you have to be ready to act way before we even get to that first phase. Again, for certain not-super-critical systems, you can wait before switching on or scaling an existing, trustworthy solution, but there's decades of work to get to that point. So that work has to get done.
The other thing I would emphasize is that it's not necessarily going to be a 10-year gap between the first phase, where it's a small set of machines in the hands of state actors, and the point where there are many of these things and everyday cybercriminals get access. It's not like there's one threat actor who has to own the quantum computer and who is coming after your bank. Just like today, there are going to be people who sell malware, people who sell DDoS as a service, and all these other things. There are going to be criminal services built out of quantum code-breaking capabilities. Most criminals aren't going to go buy private keys. They're going to buy a criminal service that is built on a criminal service that's built on somebody who can get private keys.
Another assumption some people make is that, well, of course, the first few commercially available quantum computers are going to have know-your-client controls, and only nice people doing nice things are going to have access to them. That's not how criminals work. They're certainly going to bypass those controls. It's just a matter of how hard it is for them, and the harder you make it for criminals to get in, the harder you're making it for legitimate users to get in and use these machines to create value.
So we want strong foundations, so that this is just another high performance computer (HPC), not some weapon of mass destruction. We don't want it to be that, because we want to benefit from quantum computers. We have to get ahead of it. We want them to stay online and be accessible. Of course, there's going to be export control and so on, but cyber actors will get onto these platforms by either hacking people who have legitimate access or pretending to be people who do. They have the playbook, so we don't have to write it for them, and they'll come up with new ones. So again, that assumption I often hear, that we're going to be able to hold the bad actors off these commercially available computers, I don't think it's really going to be the case. And commercial availability is the end game. We really want to make these broadly available, we want the world to benefit from the quantum value they create, and we want to create economic prosperity for the world using these platforms. So we don't want to unduly control and slow down their deployment.
People often look at the literature and say, oh, but it'll take this massive computer 24 hours, or 8 hours, to get one key. These are just benchmarks. In our threat timeline survey, we say 24 hours just to have some benchmark that is not too big and not too small, because for most of my colleagues it doesn't much matter whether it's 24 hours, two hours, or one week. This is still very coarse-grained estimation. I didn't want to overcomplicate it, so we just picked something meaningful, not too big, not too small. But adversaries, first of all, don't tell you when they've come up with a better algorithm. A very modest algorithmic advance could shave a factor of 10 or 100 off the real-world cost. So I agree with you, this is going to be something that ramps up, but we don't know the exact start time, because adversaries might not tell us. For the secret quantum computers, we don't know when those are going to be available. For the publicly known ones, we don't know exactly how fast they'll progress, and we don't know the algorithmic advances that will let people get more cryptanalytic power out of a finite, fixed-size quantum computer.
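The arithmetic behind that point is easy to sketch. Taking the 24-hours-per-key figure as the starting point (a deliberately coarse benchmark, as noted above), here is what a hypothetical factor-of-10 or factor-of-100 improvement does to the number of keys a single machine could recover per year:

```python
# Back-of-the-envelope sketch: keys recoverable per year at different
# per-key costs. The 24-hour baseline is the survey's coarse benchmark;
# the speedup factors are hypothetical.

HOURS_PER_YEAR = 24 * 365

def keys_per_year(hours_per_key: float, machines: int = 1) -> float:
    """Keys recoverable per year by `machines` devices at the given per-key cost."""
    return machines * HOURS_PER_YEAR / hours_per_key

baseline_hours = 24.0
for speedup in (1, 10, 100):
    rate = keys_per_year(baseline_hours / speedup)
    print(f"{speedup:>3}x faster: ~{rate:,.0f} keys per year per machine")
```

At the baseline that is roughly 365 keys a year per machine; a factor of 100 takes it into the tens of thousands, which is why the 24-hour figure is a benchmark, not a comfort.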
So we do need to be ready. I think you just have to take the balanced approach of, yes, the sky isn't necessarily going to fall the moment the first quantum computer arrives, but don't use that as a reason to delay being ready.
Which is what everyone really cares about: when will cryptographic relevance occur? This is a hugely debated topic, and you study this very closely. In your mind, if I made you give me an over/under date, what are you going to pick?
You asked for the 50% number, but I don't know if you will take the 1/3 number. That's my personal estimate. And of course, for any moderately critical system, that's obviously not acceptable. You could factor in maybe a year or two, depending on which threat actors you're worried about. But to a rough order, that means we really, really, really need to be ready in less than 10 years.
Then, I put ECC at about a 1% chance, and RSA at about 5 times that. So small, but not that small. Honestly, we should have been mitigating that risk all along. Not everywhere; for most systems, pretty good is good enough. But as you bet more and more of our lives and economy on it, and you were saying you wouldn't bet your life, well, in some cases we are literally betting lives. In some cases it's not just an IT security issue, it's a safety issue. And not to over-dramatize, but there are places, especially as health technologies and IoT and so on come in, where it really can become a safety issue. So sometimes it's not just a metaphor, and the economic stakes are just way too high to take a 1% to 5% chance.
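One way to see why even a small break probability is hard to accept at high stakes is a simple expected-loss comparison. The sketch below uses the roughly 1% (ECC) and 5% (RSA) figures quoted above; the impact and mitigation costs are purely hypothetical placeholders, not estimates from the episode.

```python
# Toy expected-loss framing of the 1%-5% break estimates above.
# The impact and mitigation figures are hypothetical placeholders.

def expected_loss(p_break: float, impact_cost: float) -> float:
    """Expected cost of accepting the risk unmitigated."""
    return p_break * impact_cost

impact = 1_000_000_000      # hypothetical cost of a break, in dollars
mitigation = 5_000_000      # hypothetical cost of adding agility / migrating

for name, p in (("ECC, ~1% break chance", 0.01),
                ("RSA, ~5% break chance", 0.05)):
    print(f"{name}: expected loss ~ ${expected_loss(p, impact):,.0f} "
          f"vs mitigation ~ ${mitigation:,.0f}")
```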
Again, these are my personal estimates. Some people might put them lower, but I don't think there's a rationale for making them that much lower. So I think we really should have been ready for this day. That's where this "quantum threat" is actually a blessing, because this was something we should have been doing anyway. The way I sometimes put it is, back in the 90s, the cryptographic foundation we architected was good for a three-story building. We'd been living in bungalows in the 80s, and now we were going up to these awesome three-story buildings. So we built pretty good foundations. We had RSA and Diffie-Hellman. We had alternatives, we had some diversity. It was all great. But then we built a high-rise, a 100-story building, and now we're talking about much more massive complexes on top of more or less the same foundation. If it weren't for quantum, we'd still be kind of letting it ride, just hoping for the best. And hope is not a good strategy when the stakes are unacceptably high.
And I'm not suggesting, oh, well, it's risky, so we shouldn't use it. No, no. Absolutely use it for all the wealth creation it's enabled already and will continue to enable. But let's build in those guardrails, that defense in depth, that agility, especially for the more critical systems, so that we can confidently continue to leverage everything that digital and connected technologies will enable.

