Podcast Apr 23, 2024

Root Causes 380: What If Quantum Supremacy Comes Earlier Than We Thought?

Repeat guest Bruno Coulliard gives us an update on the US government's migration to post-quantum cryptography (PQC). We talk about the challenges to migration, the possibility of a black swan event in achieving quantum supremacy, and what happens if we all respond by pressing the "panic button" at the same time.

  • Original Broadcast Date: April 23, 2024

Episode Transcript

Lightly edited for flow and brevity.

  • Tim Callan

    Okay, so today we want to introduce what is a rather complex topic, one with too much in it for us to cover completely in one episode. Let's set the framework, and then, as you and I discussed, Jason, we'll return to it, as we sometimes do with complicated topics, and make sure we fill in the blanks over time. That topic is the new European Union Artificial Intelligence Act. Big topic.

  • Jason Soroko

    Big, huge topic, which we're obviously going to return to, because the implications in some of its sections are so huge. I doubt anybody could even read through all of it. Let's just talk about it briefly here right now, and then we'll get into some of the interesting bits. So, the AI Act out of the EU. Now, you all remember GDPR, and a lot of people thought to themselves, oh my God, either this will never have teeth as it stands - which turned out to be totally false - or it's only going to affect European companies or people in Europe. False.

  • Tim Callan

    Which turned out to be totally false. Yep.

  • Jason Soroko

    Let's talk about the elephant in the room. Yes, this AI Act is going to affect everybody, and I think that's the absolute truth. I don't think we can look at this European law, put our heads in the sand in North America or anywhere else in the world, and think this won't affect all of us. Because, Tim, nobody's going to build two AIs, one for Europe and one for everybody else.

  • Tim Callan

    Yes. That's a good point. All right. So at a high level, what does this Act require? Or what does it command?

  • Jason Soroko

    The EU has taken an interesting tack, Tim, in that they're looking specifically at some of the risks around AI that they want to mitigate. So the regulations mostly tackle problems, or potential problems, associated with AI. Now, you and I have done a lot of legislation and regulation podcasts, especially in IoT and other areas, and quite often the rule is you must put in this level of security, or something similarly prescriptive. Tim, you and I have been on record many times saying those kinds of legislations, those kinds of regulatory write-ups, quite often become obsolete really quickly.

    We've seen in the United States that legislation is sometimes very brief, and it says, okay, refer to guidance by NIST. Whatever the latest guidance is, refer to it. And therefore the legislation doesn't pretend that it can be prescriptive in a legal sense. I think what the Europeans have done here is taken a different tack, saying, okay, what are the things we're actually worried about? We're going to make legislation around that.

    I'll give you one really good example. And then we can get into some of the other nitty-gritties.

    Imagine, Tim, a country in Europe that decides, you know what, the law can kind of be codified, because it kind of already is code. How about we let AI determine whether a person should be arrested, whether a policeman should go and arrest them, and, when they come to court, whether they are guilty or innocent.

    You can see that happening. Right?

  • Tim Callan

    Oh, you can totally imagine that. Yes, absolutely. Maybe a little bit at a time. It's kind of a slippery slope kind of thing, I'm sure.

  • Jason Soroko

    You got it. And so the European Union is looking into the future and saying, well, that is a risk outcome that we're going to ban outright, right now.

  • Tim Callan

    Actually, yeah, that's one of the things I've seen from this: exactly that. Choices about arrest, prosecution, and guilt cannot be delegated to an AI, and that's going to be by law within the European Union. Right?

  • Jason Soroko

    Exactly, exactly right, Tim. So it's different, isn't it, from a lot of technology legislation, in that it's not dictating exactly how you implement your technology. It's dictating an outcome, and I think that's different. Now, is it entirely like that? Well, the answer is no. And in fact, Tim, here's the other elephant in the room - and this is where I think you and I will have to come back and revisit this a number of times on the podcast - some pretty darn smart people that I respect are saying that some of the requirements, as of right now, are actually technically impossible to comply with.

    MIT Technology Review actually has a few good articles on this that I would recommend people read. The first draft of the bill actually said that all datasets that go into an AI must be free of errors, and that humans must be able to fully understand how the AI systems work. That's gonna be impossible, right?

  • Tim Callan

    That's not possible. Right. First of all, datasets that are free of errors - there is no way you can guarantee that every dataset, or any dataset, is free of errors. But then the second one: by the nature of how AIs work, they are self-teaching. That's part of the reason they sometimes do such funny things, like hallucinate or make hands with 12 fingers - they are working this out on their own, by themselves, without actually being programmed by a person, which means we don't know what goes on in there.

  • Jason Soroko

    Tim, exactly right. There are also some other things - very important things - like the requirements around having to hand over AI source code and algorithms to auditors. Well, first of all, I'm not sure there are many auditors who are fully capable of understanding the implications of some of that source code and those algorithms.

  • Tim Callan

    Yes. For sure. 100%.

  • Jason Soroko

    And as well, some of these things are, as you just said, self-creating. So how do you audit that? Never mind AI - let me bring up another topic. Consider cellular automata, where you might not understand the full implications of the outcome until its millionth iteration. It's the same thing with a non-deterministic AI. How in the world could an auditor, looking at an algorithm that generates itself, determine whether something is good or not good - checkmark, no checkmark - just by looking at it, without having run it through to its conclusion? And God knows what the outcome is. This is where I think the EU might be at its most egregious, in the sense that, goodness, do they actually understand how this works?
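The cellular automaton point can be made concrete with a few lines of code. This is an illustrative aside, not something discussed in the episode: elementary Rule 110 is a one-dimensional automaton whose local update rule is trivial to state, yet it is known to be Turing-complete, so in general the only way to learn what it will produce is to run it.

```python
# Elementary cellular automaton Rule 110: each cell's next state depends
# only on its left neighbor, itself, and its right neighbor.
# Despite this trivial local rule, Rule 110 is Turing-complete, so its
# long-run behavior generally cannot be predicted without running it.

RULE = 110  # the 8-bit lookup table, encoded as an integer


def step(cells):
    """Advance one generation (the row wraps around at the edges)."""
    n = len(cells)
    return [
        # The 3-cell neighborhood forms a number 0-7; that number
        # indexes a bit of RULE, which is the cell's next state.
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


# Start from a single live cell and watch structure emerge.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Even an "auditor" who fully understands those few lines cannot say, by inspection alone, what patterns the millionth generation will contain.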

  • Tim Callan

    Yeah. And this sounds like a theme, one that's too common, that we talked about in our recent lookback episode for 2023 and also predicted in our recent prediction episode for 2024: government trying to regulate technologies it doesn't understand, using information that is fundamentally inadequate for what it's attempting to do. This sounds like a great example of career legislators and career politicians declaring things without understanding how the technology actually works.

  • Jason Soroko

    Exactly. Exactly, Tim. There are some interesting things we're going to get into. There's such a richness here. There are some things where all of us can agree, hey, AI should never be allowed to do X, Y, or Z, and banning that upfront is probably not a bad idea. And then there are some things where, my God, the regulation is impossible, or the regulation just flat out doesn't help us. This is wrong. I don't think you should stifle innovation in the way this legislation might. And I think you and I can tackle a lot of these things.

    Let me just bring up another example, Tim, where I don't know whether this is good or bad, but there perhaps might be a ban on emotion recognition AI.

    Now, you might think from a standpoint of privacy or whatnot, geez, maybe I don't want a computer measuring my emotions. But on the other hand, there are some use cases that some innovators have come up with that are really good. So a blanket ban - I don't know.

  • Tim Callan

    Sure. Yeah. I could imagine lots of utility for emotion recognition AI. You stick something on the suicide hotline, and you get a sense for how much risk this person is at - if it works, right? Or sentiment. If we're talking about not voice-to-voice but text-based stuff, sentiment measuring has been around for a long time and is considered very important. You don't just want to look at how much your brand is being discussed online; you want to look at the nature of the discussion. Now imagine carrying that same kind of thing to something that's looking at a face or listening to a voice. It seems completely credible that that would have lots of utility, right? This person is talking about me on TikTok - are they saying I'm good, or are they saying I'm bad? Yeah. There absolutely is utility for that. That's clear.
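As a toy illustration of the text-based sentiment measuring Tim describes (a hypothetical sketch, not from the episode, with made-up word lists), the simplest approach is lexicon-based: count positive versus negative words and score the difference.

```python
# Toy lexicon-based sentiment scorer: the simplest form of
# "sentiment measuring" - count positive vs. negative words.
# The word lists here are tiny illustrative examples; real systems
# use large lexicons or trained models.
POSITIVE = {"good", "great", "love", "excellent", "amazing"}
NEGATIVE = {"bad", "awful", "hate", "terrible", "broken"}


def sentiment(text):
    """Return a score in [-1, 1]: below zero is negative, above is positive."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


print(sentiment("I love this brand, it is great!"))   # 1.0
print(sentiment("Terrible support, awful product."))  # -1.0
```

This is exactly the kind of "nature of the discussion" measurement that could fall under, or alongside, an emotion recognition ban, depending on how the law is drafted.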

  • Jason Soroko

    Tim, you and I have talked a lot about this, especially with respect to generative AI. I think I was quoted as saying generative AI is basically mathematics plus plagiarism. I think that's pretty close to what I said on a previous podcast, and you laughed exactly the same way when I said it. But Tim, there might actually be restrictions on this - in other words, requirements that generative AI be labeled when it's using copyrighted material to train its large language models. I had a gut feel, way back when we first looked at this, that, geez, this looks an awful lot like a very handy plagiarism tool to me. The European Union also seems to be recognizing this, which is quite interesting.

  • Tim Callan

    Yeah. The plagiarism one is also particularly thorny. I'm skeptical. I understand there are attempts to put things in place to detect what was AI-written and what was not, but the plagiarism thing gets really, really hard. If I can read articles on a topic and then write my own original assessment of them, that isn't plagiarism - so how come an AI doing the same thing for me is? I also don't know how they're going to deal with that one. It's easy to just kind of say it. But if you're going to get down to the brass tacks of what we're going to dictate for a technology product - something that will actually work, that is objective, that is measurable, where I can look at a product and say it is compliant or it is not compliant, and that's not a matter of opinion, and other parties who are making or using these things can reliably look at it and feel confident they know whether it will be deemed compliant - that all strikes me as insanely difficult to do in the real world.

  • Jason Soroko

    Oh, Tim, you're so right. Especially in a world where you do want to allow commentary on copyrighted material. I think even the people who put up copyrighted material want those kinds of things. It's part of its usage.

  • Tim Callan

    Right, and you want the propagation of information, right? If I learn something, if I buy a book, I can't reprint that book, but I absolutely can use what I learned from that book. 100%. That's fine.

  • Jason Soroko

    It is fine. And so this is where - - isn't it interesting, Tim, and maybe this is the conclusion to the first podcast on this: AI is absolutely going to be transformative in the ways we consume and disseminate information among ourselves. So many new business models, so many new government models of how to deal with people. And all these old ways of doing things, which either have been settled for years or are still up in the air - how do we actually deal with this? How do we codify this into law? Wow, AI changes everything. The EU is now trying to tackle, hey, where are the big risk items here? But in doing that, in trying to handle risk even in a good way, are they writing up risk in a way that isn't helpful, or that actually hurts things? I think it would be very easy to hurt things if you write it up just the wrong way. What a task, trying to write this law, Tim.

  • Tim Callan

    Yeah. I mean, you can imagine a consequence of this being that well-meaning technology providers, who are doing everything they can to create products that are useful and not harmful, will still not be able to have confidence that they won't be penalized - and fined, presumably, if GDPR is any indication - by the European Union. And if that's really the case, if I'm creating AI products and I don't think I can safely make these products available in Europe, do I just not?

  • Jason Soroko

    There you go.

  • Tim Callan

    And is Europe willing to live with that? You know, you and I have talked in the past about not being able to get Google Bard inside the borders of Canada, and scratching our heads over why we think that makes your life better. I think this is the same thing here. If there are vast benefits to be gained from AI, and the EU succeeds in preventing really excellent products from being available to its citizens and businesses while they are available to others outside its borders, I'm hard pressed to understand how that ultimately winds up being to the benefit of the European citizen.

  • Jason Soroko

    Yeah, Tim. It is a huge topic. I think you and I will cherry-pick some of the areas of the legislation that have been written up, and we're going to talk through some of the more interesting implications that are global. I think we're going to have a real richness in some of those podcasts coming up, Tim.

  • Tim Callan

    All right. I think that sounds great. I look forward to it. This is a meaty, interesting topic, and we will definitely return to it.

  • Jason Soroko

    You got it.

  • Tim Callan

    All right. Thank you, Jason.

  • Jason Soroko

    Thank you.

  • Tim Callan

    This has been Root Causes.