Podcast
Root Causes 492: When Mandatory Security Training Sucks


Hosted by Tim Callan, Chief Compliance Officer, and Jason Soroko, Fellow
Original broadcast date: May 5, 2025
In this episode we get excited about errors we see in mandatory security trainings.
Podcast Transcript
Lightly edited for flow and brevity.
So here's the first one, and I'm quoting from the on-screen training. This is talking about phishing, how to detect phishing. Here's the quote: "Error-free emails. Gen AI chatbots can produce output free of the telltale spelling or grammar mistakes that normally help identify phishing lures." Jason, when is the last time you saw a phishing attack come in that had misspellings of common English words?
So those are the ones I captured. When I look at the theme here, I see a couple of things. One is this kind of robotic repetition of stuff that other people said to you. There's an old story I've been told a bunch of times. A couple gets married, and they go to bake a ham. One of them cuts the ends off the ham before putting it in the oven. The other asks, why do you cut the ends off the ham? And the answer is, I don't know, that's what my dad always did. So a little later they're at a family gathering, and the dad's there. Let's say it's the woman who is marrying into the family. She says to the dad, hey, why do you cut the ends off the ham? And he goes, I dunno, it's what my mom always did. Then Christmas rolls around, everyone gets together, and grandma's there. So she goes to grandma and says, why do you cut the ends off the ham? And grandma says, well, because my pan is only this big. So what happens is you get this kind of robotic obedience to these very specific prescriptions and proscriptions without any understanding of the underlying reasons. Because we're all capable of looking at these pieces of advice and understanding why they're bad. And yet somebody's getting a paycheck, and not just one somebody. There are people who approve the content. There are people who lay it out. There are a lot of humans involved, and nobody said, by the way guys, I'm sorry, but this is effing dumb.
But there's a reliability problem with humans. Their neural nets are weird, they're unpredictable, and they make weird decisions, and training that out of them is biologically infeasible. Computers are nice because they are extremely predictable. When you get them right, they will do it right one time or a million times. So I'm all in favor of training, but one of the rules we have for ourselves as a public CA is: remove the human judgment everywhere you can. There's a certain amount of human judgment you're stuck with, but anywhere you can replace human judgment with a clear set of rules that always comes to a correct conclusion, you must do that. We just wrote up a bug on Bugzilla against ourselves, and one of the takeaways was that this was a place where human judgment could have been replaced with automation, but we didn't observe that it was possible until there was an error. Then we went and put that automation in place. That's how we think about it as a CA, for our own employees and our own systems. And I think that's a huge part of it.
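The "replace human judgment with a clear set of rules" idea can be sketched as a deterministic pre-issuance check: instead of a reviewer eyeballing a request, the same rules run the same way every time. This is an illustrative sketch only; the field names and the specific rules here are invented for the example and are not Sectigo's actual checks (the 398-day figure is a real-world validity limit for public TLS certificates, used here just as a sample rule).

```python
# Hypothetical sketch: a deterministic rule check standing in for a
# human review step. Field names and rules are illustrative, not a
# real CA's ruleset.

def check_request(request: dict) -> list[str]:
    """Return a list of rule violations; an empty list means pass."""
    violations = []

    # Sample rule: validity period must not exceed 398 days.
    if request.get("validity_days", 0) > 398:
        violations.append("validity_days exceeds 398-day maximum")

    # Sample rule: the common name must also appear in the SAN list.
    if request.get("common_name") not in request.get("sans", []):
        violations.append("common_name missing from SAN list")

    return violations


ok = {"validity_days": 365, "common_name": "example.com",
      "sans": ["example.com"]}
bad = {"validity_days": 825, "common_name": "example.com", "sans": []}

print(check_request(ok))   # passes: no violations
print(check_request(bad))  # fails both sample rules
```

The point of the sketch is the shape, not the rules: once a judgment can be expressed this way, it produces the same answer on the first request and the millionth.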