Podcast Feb 07, 2025

Root Causes 465: Twelve Bugzilla Sins for CAs to Avoid

In the wake of the Bugzilla Bloodbath, we list and describe twelve sins CAs commit on Bugzilla and its like, why they're detrimental, and how CAs should avoid them.

  • Original Broadcast Date: February 7, 2025

Episode Transcript

Lightly edited for flow and brevity.

  • Tim Callan

    So Jason, we're here at the Toronto sessions, season three. One of the things that I really want to talk about - we all know that 2024 was a big, big year for Bugzilla, and I had the opportunity to observe firsthand a lot of really bad practices from public CAs on Bugzilla. We have a non-trivial community of people who watch this podcast and also pay close attention to WebPKI and public CA media like MDSP, Bugzilla, CCADB, etc. For their sake, we thought it might be useful to go down this list and put on everybody's radar what good behavior and bad behavior look like on Bugzilla.

  • Jason Soroko

    I think this makes a lot of sense, and the timing is right. We as an industry, the Certificate Authorities, we hold a lot of trust. People hold a lot of trust in us, and I think it is very important to reeducate the public, reeducate ourselves about what it means to be a good CA.

  • Tim Callan

    It’s a good reminder. Like even good CAs can get in bad habits, can make mistakes, and it's good to be reminded of this stuff, I think.

    Here I have a list. So I have 12 things. CAs avoid these 12 things in your Bugzilla incidents. Here's what I was thinking, Jay. Let's go down the list and define them. Then let's go back down them and talk about why they're a problem and how to rectify that.

  • Jason Soroko

    Tim, right before you get to the list, how about, for some of the listeners who are not living and breathing WebPKI on a daily basis - a little bit of background?

  • Tim Callan

    I think that’s great. Thank you. That's a good suggestion.

    So there are a number of sets of governing rules that public CAs need to follow, and one of them is that all root programs have some kind of root program requirements. One of the big sets of root program requirements that we really care about is Mozilla's. They were early on. They were kind of the first browser with a real codified, public set of root program requirements, and Mozilla's online properties that host this dialog are fundamentally open. There's an expectation that everything is done in public, because Mozilla is one of the world's largest open source projects. So, as a result, if you want to be in the Mozilla root store, you're expected to be very transparent and very public in your dealings as a public CA. Of the tools that I just rattled off, there are probably three that are important to us here.

    One of them is called Bugzilla, and this is the Mozilla bug base. Every time a CA has an incident that meets certain criteria, that CA has to open an incident report, respond to public questions, and follow a certain set of codified behaviors, which include fixing the problem and demonstrating that the problem should be expected not to occur again. Then, at the end of that, you close out the incident. All of this is done in public, and anybody with a browser can come and create an account and ask questions and challenge assumptions and do things along those lines.

    There are a couple of other ones that are noteworthy. One of them is called MDSP, which is Mozilla's online message board. For things concerning the Mozilla root program, and Mozilla as a browser, that's where we go to have discussions that aren't CA incidents. Then there's another one called CCADB, which is the same kind of thing, but instead of being focused on Firefox specifically, it's focused on the whole body of CCADB-participating browsers - Firefox, Chrome, Apple. If you have a topic that's bigger than just Firefox, you bring it to CCADB. If you have a topic that's specific to Firefox, you bring it to MDSP. And if it is a misissuance or another CA operational error, you bring it to Bugzilla.

    So I sort of said avoid these 12 things in your Bugzilla incidents just because Bugzilla is the most frequently used of these, but what we're talking about here applies to all of them. So why don't we run down the list? We'll say what each one is, and then we'll come back and comment on them. And again, I've got thoughts on why each is bad and what to do about it.

    I'll say a couple of other things before we get going. Just by the nature of a list like this, there's a certain amount of overlap. If we look at any individual episode of bad behavior, there's a very good chance that it ticks more than one of these boxes. But by having these boxes, we've got a really useful codification of what to do and not to do. This is my first pass at it. Feel free to go on LinkedIn and tell me what I missed, and maybe we'll update this over time. But this is where we're sitting right now.

    The second thing I'm going to say is that I tried to batch these 12 items. You shouldn't do any of these things, but the first three are really unforgivable because they're fundamentally dishonest in their dealings with the WebPKI, and the last three are really unforgivable too. The ones in the middle - you still shouldn't do them.

    Then the last point I'm going to make is that I have seen all of these things occur multiple times in the last year.

  • Jason Soroko

    So not just in the history of WebPKI, but in the past year.

  • Tim Callan

    So this isn't theoretical. This is going on. This is CAs - come on, guys. Get better.

  • Jason Soroko

    That helps to explain why you've chosen things not to do, rather than a list of good behaviors. This is a warning list of don't do these things.

  • Tim Callan

    Correct. We could flip these. Every one of these could have been written as a good behavior. I did give that some thought. And the reason I went this way is because I am seeing these behaviors happening and part of what I want to say to the community is there isn't really an excuse for any of this. You should look at yourself and you should ask yourself, honestly, am I doing this? If the answer is yes, you need to change.

    So number one on the list - obfuscation. People trying to look like they're telling everything, but really not telling everything. Carefully crafted statements, weasel-wordy stuff, partial answers to questions. If there are four questions in a post, maybe you answer one, two, and four, and just kind of forget about three. I see a lot of this sort of stuff going on.

  • Jason Soroko

    I just have to share a thought here, Tim, which is that a lot of the problems that come up on Bugzilla are just everyday issues. Mistakes are inevitable, and these things are expected, so nothing looks better to me than when a CA basically says, yep, here was the problem - just complete transparency.

  • Tim Callan

    Absolutely. I liken it a lot to software errors. Like bugs. Software bugs, traditional bugs - everybody knows that every software package of any complexity has bugs, and that it's not realistic to ship bug-free software. What the good vendors do is they admit it; they own it; they encourage the public to tell them; they fix it quickly; they talk about what happened and why; they have blameless postmortems; and they push patches. That is what we need CAs to do. And honestly, any CA that's doing its earnest best is going to get a lot of breaks from the browsers and the community, because we know it's hard to be a CA. The CAs that are going to get themselves in trouble are the ones that aren't really doing their earnest best. One of the ways I've framed this in the past, Jay, is that you have competence problems and integrity problems. Of the distrusts that have ever occurred in the WebPKI - and there have been something like 14 of them - I think all but one had some kind of integrity problem in the mix.

    So if you're having integrity problems, this is bad. Without your integrity, you can't be a steward of the WebPKI. So obfuscation, that's the first one. Trying to hide stuff.

    Second one - obstruction. Trying to prevent the process from happening. This takes a variety of forms, like refusing to answer questions, deliberately answering questions that were not the questions that were asked, or hiding behind excuses. Things like, well, we have a customer NDA, so we won't discuss that. Or, my legal team says I can't discuss that. Silly things that are fundamentally obstructionist.

  • Jason Soroko

    It defeats the whole purpose of the Bugzilla forum.

  • Tim Callan

    It's anti-transparent. We're going to get to why it's bad. We're going to get back to that.

    Then the third one - I think this is connected to the first two - is letting negative emotions take the driver's seat. And this happens. It's kind of crazy, but you can look at some of these posts from CAs, and they come across as defensive or angry or whiny, and that's not the point. If there is an error in your operation, that's a fact, and if it's upsetting to you that other people understand what the facts are, you've got to get over that. You're a public CA. These are our jobs. We're grown-ups. And in particular - I made a list of specific things I noticed.

    First of all, being insulting to other community members. For the second one, for B, I wrote churlishness. Just general churlishness. Just nastiness. And the third one is pettiness. I'll give you an example of pettiness - I'm not going to name any names on this episode, by the way - there's an ongoing set of incidents right now where we've got a CA that is entrenched in a position about a major browser's stated policy, and this is as indefensible as positions come. Their stated policy is just plain language, but this CA does not want to admit that they were wrong. So what they're claiming is that this browser's policy changed between when the bug was written up, almost a year ago, and today, as a way to get out of admitting they were wrong while also closing the bug. They're saying, well, they changed their policy, so moving forward we're going to do it the way they want. No, they always did it that way. You just screwed up. The only reason I can come up with is that they just don't want to publicly say that they got something wrong. It's just petty.

    So all of this stuff looks bad. These first three, these are a bad look. And the reason these are unforgivable, CAs, is just because we have to question your commitment to the process when you do these things. When you do these things, I don't believe that this is somebody who is really openly trying to improve.

  • Jason Soroko

    I think what we've learned in 2024, Tim, is that the process is why the CAs are trusted.

  • Tim Callan

    Absolutely. There is a mantra of continuous improvement that is built into all of this. This is a fairly new thing. Until 2012, there weren't any rules. Everybody just did whatever they wanted. We all knew it was going to be a learning process for the entire WebPKI community, and there's an expectation built in, even today, 13 years later, that we haven't gotten to a perfect end state. And of course, we're never going to get to a perfect end state, because the world keeps changing. But even just making up for that deficit of having no rules at all 13 years ago - everybody understands that isn't done yet, and that means there's an expectation that everybody - root programs, CAs, auditors, and other parts of the ecosystem - is committed to continual improvement. The need for continual improvement is not a personal insult, and don't act like it is. So that's one, two, and three.

    Number four - lateness. There are a few places where things are supposed to happen in specific time frames. When you find out you have a misissuance error - or any Bugzilla-reportable incident, which isn't always misissuance - you're supposed to report it within 72 hours. Not 73 hours. 72 hours. If there are questions on Bugzilla, you need to answer them within one week. Exactly seven days, not eight days. You also need to maintain a weekly cadence of some kind of update. When an incident occurs, you have two weeks to file a full incident report. Other details may get filled in later, but you need a complete incident report, to the full extent of your knowledge, within two weeks. And the last one is what we call a next update date, where Mozilla can set a date and say, look, we don't need to hear from you until this day. Let's say I say, look, it's all done, but we've got an engineering project to fix this one thing, and it's going to take 45 days, Mozilla. Mozilla will pick a date in the future and say, okay, next update here. When you have a next update date, you have to update by that date, even if it's to say we're not done yet. These are well understood. They're well codified. There's no ambiguity on any of this. And we just routinely see people miss these deadlines.

  • Jason Soroko

    When there are openings for the bad guys, when there is potential to have problems on the internet, timeliness is important.

  • Tim Callan

    And for a lot of these, I'd say, for your average Bugzilla bug, my concern is less about security and more about interoperability. Interoperability is the biggest thing we get out of this, because every machine has got to connect to every other machine, and it's just got to work. But either way interoperability is critical. Without that, all of our digital systems stop.

    Number five - improper markdown, which can include no markdown at all. Now, these are text-based bulletin board systems, and sometimes we're putting up big, complicated messages. There is a set of markdown - very common, very HTML-ish - that you can use to get bolds and font sizes and bullets and all that good stuff. There are published rules around how the markdown works, so you can go read exactly what your options are, and there's a preview mode so you can see what your post is actually going to look like before you commit to it. Then, for certain actions, like an incident report or closing an incident, there is a specific format they want you to use, including the markdown, so that readers can navigate it. We just frequently see people either do terrible, horrible markdown - like, what were you thinking? - or none at all. Maybe that's a little bit of a judgment call if it's in your own bug and you're just supposed to use the markdown to help people navigate. But in the prescribed, formatted artifacts - a preliminary incident report, a final incident report, a bug closing statement - it's clear that the markdown is codified, and if you don't use it, you're just making it harder for everybody. That might go back to obstruction or obfuscation. It's a way to obfuscate.
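    For illustration - the heading set below is modeled on the CCADB incident reporting template, but treat the exact wording, and all the dates and numbers, as hypothetical - a properly marked-up report looks something like this:

```markdown
## Incident Report

### Summary
One or two sentences: what went wrong, when, and how it was discovered.

### Impact
Plain facts about scope, e.g. "1,204 certificates issued between
2025-01-10 and 2025-01-13," with the full list attached to the bug.

### Timeline
All times are UTC.
- **2025-01-13 09:12** Certificate problem report received
- **2025-01-13 10:30** Issuance halted pending investigation

### Root Cause Analysis
The process failure that allowed the error to exist, not just the
proximate software bug.

### Lessons Learned
#### What went well
#### What didn't go well
#### Where we got lucky

### Action Items
| Action Item                          | Kind    | Due Date   |
|--------------------------------------|---------|------------|
| Add pre-issuance linting for field X | Prevent | 2025-02-15 |
```

    The point isn't the specific headings; it's that the structure is prescribed, and using it - with the preview mode to check how it renders - is what lets the community and the browsers navigate the report.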

  • Jason Soroko

    So sometimes it's not laziness, it's people hiding things.

  • Tim Callan

    It could be, right. And that flows right into number six: failure to follow clear procedures. Those are clear procedures. My favorite example of this going on right now is the closing statement. When you want to close a bug, you're supposed to put certain facts in a certain way with certain markdown, and this CA just doesn't seem to want to. They keep putting up these messages saying, okay, we're ready to close this bug, when there is a clear closing statement format. It's not ambiguous. It's what you've been told to do, and they just don't follow that procedure.

    So again, some of that could be a bad attitude, like having negative emotions in the driver's seat. Some of it could be obfuscation, or it could be the next one on the list, number seven, which is failure to understand the expectations of a public CA. We might put this on the unforgivable list, too. Kind of the poster child for this one was one of the two CAs that was distrusted last year, eCommerce Monitoring, in, I'm going to say, the spring. Part of the incident report is a section called root causes. You like that, Jay. And what do you think the section called root causes is supposed to do?

  • Jason Soroko

    It’s supposed to get to the root cause of what the problem was.

  • Tim Callan

    Yes. That's absolutely what it's for. So they had a bug that they posted, and under root causes, the whole thing said, roots not affected. Head explodes. Boom. It's funny, in a way, because they're using a different sense of the word root. It's kind of a PKI pun - not an intentional one - but it shows this egregious lack of understanding of what the community and the browsers - in this case particularly Mozilla, and also the CCADB community, because they're all bought into Bugzilla as a concept - are expecting you to do. I know the baseline requirements are long and complicated, but the root program requirements really are not. If you don't understand what the root programs expect of you, you're just not reading, because they're not that long and they're not that complicated, and you really should be able to keep in mind everything the root programs are saying. So that's a crazy one.

    That was number seven. Number eight - one of the things that the browsers do expect, and that CCADB does expect, is for CAs to follow and extrapolate from all Bugzilla bugs. As a CA you're supposed to follow the resources I mentioned before - Bugzilla, MDSP, CCADB - and not just the stuff that's about you, because the idea is that if CA A has an error, CA B is supposed to go look at its own operations and say, oh, we're making the same mistake. Let's go fix it. One of the things that browsers get really grouchy about, and commentators on the public CA lists get very grouchy about, is when the same mistake gets repeated, and they get extra specially grouchy when the same mistake gets repeated by the same CA. So not only do we see CAs that aren't learning from each other's bugs, we see CAs that aren't learning from their own bugs, who do the same thing again. I've got open bugs right now that I'm monitoring where that's going on, and it sort of undermines the purpose of the whole thing.

  • Jason Soroko

    Learning from your mistakes should be a part of this.

  • Tim Callan

    Correct. So failure to extrapolate from all Bugzilla bugs or even your own bugs. That's number eight.

    Number nine - see this a lot - shallow root cause analysis. You've got a software error, and it causes a certificate problem report to get lost, and the report doesn't get dealt with. And they come back and say, well, there was a software error that caused the report to get lost. Bug's been fixed. That's my action item. Root cause analysis done. And then the browsers will come back and say, no - do you have QA? Was there some other flaw? Was this QA'd? Maybe your test beds weren't designed correctly. Maybe your initial architecture wasn't correct. How come this happened in the first place? What's going on in your process that allowed that software error to exist? You've got to go fix that. So what they're looking for is a deeper root cause analysis. What they don't want is a bunch of whack-a-mole. They want a deeper root cause analysis that will cause meaningful quality improvement in the CA, where - we know there are still going to be bugs - but those bugs happen for new, undiscovered reasons, not the reasons we knew about but didn't do anything about. So shallow root cause analysis, you see a lot.

    Now the last three - 10, 11, and 12 - once again, I'm back into unforgivable. And I think you'll see why.

    Number 10 - lies and cover ups. I can't prove it, but I follow Bugzilla very closely. We have resources. We mine CT logs. We look at other things like this. I am quite confident that in 2024 I'm aware of four outright lies that were told on these public forums by public CAs - where I just know they were lying. Or cover ups, where the thinking is, I don't think anyone's going to find out I did this bad thing, so I'm just going to not say it. And that's an integrity problem. I'm not going to get into the specific lies here, but lies and cover ups - it's a real thing.

    Number 11 - refusal to admit your errors. I gave an example of that earlier. They just won't admit it was an error. It's an error. Everybody knows it's an error. It's there in black and white. It's proven factually that it was an error, and you've got a CA that just won't admit it. Just won't. Keeps saying no, keeps saying no, keeps saying no. This is part of what got Entrust distrusted. Won't admit it's an error. And then, directly connected to that, number 12 - refusal to change. That was also part of what got Entrust distrusted. Just, nope. We're not going to change. We're just not going to change. We won't change. So that's the 12.

    So obfuscation and obstruction. Let's put those together. Why are these a problem?

  • Jason Soroko

    The whole point of Bugzilla, the whole point of the WebPKI process, is transparency.

  • Tim Callan

    And it depends on the CA being not only a willing participant, but the primary source of that transparency. We self-report almost every bug. The CA has to be the primary source of that transparency. We've got a goal, which is that we want all the information to be in that initial incident report, so that there are no meaningful questions anyone can ask.

  • Jason Soroko

    This is not like general cybersecurity, where companies typically don't find out on their own, oh, I have a vulnerability, or, I had a breach. Quite often that's found by a third party. And that's not good. It shows that there's still a real lack of maturity. We can't afford to have that lack of maturity in the CAs. They need to be the source of that information most of the time.

  • Tim Callan

    Or even if they're not, sometimes other people discover - -

  • Jason Soroko

    It’s perfectly fine.

  • Tim Callan

    And they might send the CA a certificate problem report, and that's okay, and then the CA has got to write that up, or the CA might just self-report. Sometimes people report it on Bugzilla before they tell the CA. That's probably not the best way to do it, but when it happens, a CA can still embrace it and handle it with grace and get in there and figure out what's going on. So obfuscation and obstruction. That's why those are bad.

    Putting negative emotions in the driver's seat. Why is that bad?

  • Jason Soroko

    In any kind of process, human nature is going to show up. But it's almost like another form of obfuscation.

  • Tim Callan

    And it interferes. It interferes with this open discovery of information. It interferes with this dialog about making things better. And then it's just small. Like, what are we, 12? Is this the playground? Really? Do you have to find sneaky little ways to shoot barbs at your critics? I think it's a bad look. The other thing I'm going to say is, for any CA, if you're doing this, or you're letting whoever you have posting do this because they can't rein it in and control their own emotions, it looks bad for your whole organization. It looks like you aren't in control of your people. It looks like you're not committed to the right things - transparency, self-improvement, and excellence. It's just a really bad look for a CA.

    Lateness. We kind of got into that one. The whole reason we have these rules around these cadences is that things have got to move forward. Things have got to get solved.

  • Jason Soroko

    Absolutely, Tim. As you talked about from the interoperability standpoint, timeliness is critical in the process.

  • Tim Callan

    Absolutely. That's probably pretty straightforward.

    Improper markdown. We kind of got into the problem with it. The problem with improper markdown is that it makes it harder for the community to navigate and work with these bugs. Big walls of text are just hard to deal with. Perhaps it is deliberate obfuscation and deliberate obstruction, and I do think sometimes that goes on. Perhaps it's just laziness, or lack of understanding of how to use the tools, or lack of understanding of the expectations. But whichever reason it is, at the end of the day, the effect is to impair the public dialog, the public understanding, and the learning of the whole community. That's the biggest point - not the only point, but the biggest point - of Bugzilla and MDSP and CCADB: that ability for the community to learn.

  • Jason Soroko

    It's not like markdown is difficult. That's not the barrier. There's something else going on.

  • Tim Callan

    Exactly. That's why that's bad. And then, connected - the next one is kind of the same thing reframed. Failure to follow clear procedures has the same impact and the same problem. We're expecting things to happen with these procedures. They're there for a reason. An incident report needs to carry all these sections, and those sections have specific content they're supposed to include. For example, there's another area in the incident report called impact. What that's supposed to be is just a list of all the affected certificates and services. Here are the seven certificates that were affected, or, I'm including an attachment with the 90,000 certificates that were affected. Or, these are the systems that were affected. And you regularly see CAs using the impact section to try to talk down the importance of the bug. Oh, there wasn't really impact here. Nobody was affected. Guys, we have a clear procedure. It's clearly codified what you're supposed to do with this section, and if it becomes the spin zone rather than a place to report facts relevant to the incident, then you're interfering with the process working correctly.

    Failure to follow and extrapolate from all Bugzilla bugs. We kind of discussed this. The point is that if another CA is having an error, I may be having the same error and not know it, and when I read that bug, it's a chance for me to go fix my error. Maybe that means I get lucky and fix it before I have a misissuance incident. I say, oh good, that would have gotten me too, but now I can fix it. If I try to fix it but fix it wrong and have an incident later, the community will be forgiving of that. But if I just didn't know about it because I wasn't reading other bugs or asking how they apply to me, then this multiplying effect we're looking for from these tools is lost. Shallow root cause analysis. What's the problem with that?

  • Jason Soroko

    Again, that sounds like obfuscation. It's another case of it, isn't it?

  • Tim Callan

    It could be. Or laziness. But at the end of the day, you don't really fix the problem. You're stuck in this whack-a-mole thing. Again, what the browsers don't want is: hey, I had a misissuance because I had a period in the wrong place. Did you fix it? I've got some software that looks for periods in the wrong place. And then next week: well, I had a misissuance because I had a comma in the wrong place. Well, hold on - you wrote the period software and you didn't deal with commas? They want you to think bigger. They want you to ask, what are the lessons? What can I learn that will make me operationally better? When that doesn't happen, it just slows down the improvement of everything. And - if you go back to our episode about the privilege of being a public CA - one of the things that episode showed is that a lot of these rules wouldn't have to exist if every CA was committed to being its best self. The fact that these rules exist at all says that there are a lot of CAs that aren't committed to being their best selves, and shallow root cause analysis indicates that you're not committed to being your best self.

  • Jason Soroko

    Being your best self, Tim, I think I'd like to return to that at the very end.

  • Tim Callan

    I think that's kind of a theme that runs through everything on this list.

    Lies and coverups. Where do we begin? You are a steward of the public trust. You are one of approximately 50 organizations on the globe who has been given the opportunity to vouch for public identity, and you're going to tell lies in public? Really? This is horrible. This is like being a crooked cop. This is like being a judge who takes bribes. It is the opposite of what you're here to be.

  • Jason Soroko

    It is utterly unacceptable. I work for a CA. You do, too. The whole point of this is that we want to stick up for our industry as a whole. We want to be trusted as a whole. And for any of the players doing these things - it really is like being the good cop or the good judge who looks at the bad guys and says, you're making a bad name for all of us.

  • Tim Callan

    Absolutely. It smears everybody. If you go back to the analogy we've talked about on root causes in the past about if the WebPKI is this wall keeping the invaders out of the safe zone, any bad CA is a hole in that wall, and it's bad for the wall and it's bad for everybody behind the wall, regardless of which CA it is or where it is. So lies are a huge problem.

    Refusal to admit your errors.

  • Jason Soroko

    This one is truly on the unforgivable list, because the process is the thing. The problems aren't the thing.

  • Tim Callan

    Continual improvement. If you won't admit that you made a mistake, continual improvement will not occur. You are throwing away the concept of improvement. You're just refusing to get better. And refusal to change. Same thing.

  • Jason Soroko

    It’s on the same level.

    So Tim, I’d like to talk about being the best CA that you can be. We've both been around a long time in this industry, and we've known some of the personalities that are behind some of the human side of this because if it was truly an automated process - -

  • Tim Callan

    There’s a very real human side of this for sure.

  • Jason Soroko

    Let's talk about the human side. Anytime you have a process, human beings can stick their noses into it in a way that degrades it, and that affects the company overall as a CA - and some of these operations are large commercial operations with shareholders and other interests. And the thing is - I just want to get your take on this, Tim - because sometimes there are bad apples with these bad habits going on: you don't have to be a 30-year WebPKI expert to understand what this list means.

    I think, Tim, you reading off this list today and explaining it very well is useful for people who are in compliance programs of other Certificate Authorities.

    I'm going to suggest that perhaps it's time to organize your company as a CA so that there's an adult in the room overseeing this process for the benefit of your shareholders and other people who have a stake.

  • Tim Callan

    I think that's an interesting point, Jay. I guess I can't state this as factual knowledge, but I strongly suspect that a lot of these CAs have one or very few individuals who do these tasks by themselves in a back room, and there's just no visibility.

    My brain starts to spin every time they start throwing out these words. My eyes glaze over. I don't understand what they're saying.

  • Jason Soroko

    But your eyes shouldn't glaze over when you're looking at that list.

  • Tim Callan

    You should be able to understand the concept of obfuscation. You should be able to understand the concept of knowingly telling a lie.

  • Jason Soroko

    Perhaps one way of doing it - just an idea - is to poll other CAs. Hey, how are we doing? Pick an executive member of a team, perhaps your chief counsel or something like that, and actually go off to the other CAs, have a dialog, and say, what's your take on our compliance?

  • Tim Callan

    You've just got me thinking with that comment, Jay. One of the things I never thought about in these terms, but one of the things we're trying to do at Sectigo, is - we understand that we have a level of resourcing and experience that is quite atypical. There are very few CAs who can point to the number of 20-plus-year veterans that we have. Very few, and we appreciate that. My very esteemed colleague, Rob Stradling, likes to say, you can't become a 20-year PKI expert until you've been doing it for 20 years. And of course, he's correct. So part of what we do with things like this, and other frameworks we've had - and again, go back to that privilege of being a public CA episode and some other ones we've done - is try to take these thoughts and ideas and abstract them a level up from, here are the browser requirements. It's almost the deeper root cause analysis thing: how do you learn these more general lessons that you can then take away and turn into your process and your culture? If a CA is watching this, my suggestion is, write down that list. It's easy for you to get. We numbered the items. We said what they were. Go make it a KPI. Say, we're going to get a green checkmark on every one of these things at the end of the year, and you will be better for it.

  • Jason Soroko

    Exactly. That's very understandable to any organization. These are normal practices in most of the rest of your business. Yet compliance programs sometimes end up being these rogue bubbles that disrupt our entire industry, that's just allowed to happen, and catastrophic things happen because of that behavior. This list is not hard to understand, and building KPIs to help avoid these behaviors is not difficult to do. A little bit of oversight, I think, can go a long way.

  • Tim Callan

    I think so. And it's ironic that it's a governance function that needs governance. But I think it does.

  • Jason Soroko

    Yes. I can tell you that here in Canada - and those of you who are judges, maybe you can tell me differently - that's one of the problems with judges: they're kind of self-policing, but they're kind of not, and there's really nobody looking at them.

  • Tim Callan

    Anyway, this framework here - I don't think this is the last time we're going to be discussing it.

  • Jason Soroko

    No. This needs to be something we keep on our list.

  • Tim Callan

    And maybe - I'm just thinking - as we report things that are going on in Bugzilla, we'll return to this framework and say things like, look, see here, you're kind of getting number five and number seven wrong.