Why we should start code signing LLMs


AI models are thinking. It’s time we start signing them to ensure trust, integrity, and security at the edge.
We’ve spent years talking about code signing. It’s a well-understood practice: sign your code so you know what you’re running. But what happens when the “code” isn’t just procedural logic anymore? What happens when a system starts to think and act autonomously?
In one of our latest Root Causes Toronto Sessions podcasts, Tim and I explored a topic that’s been quietly brewing beneath the surface: model signing. As artificial intelligence becomes more embedded in our everyday devices, from smartphones to IoT sensors, we’re seeing a shift from large, cloud-based language models to small and even nano language models running offline at the edge.
These models are efficient, specific, and increasingly powerful. But here’s the question: Do you know if the model running on your device is the one you intended?
The hidden risk beneath the waterline
Think of AI as an iceberg. The flashy, cloud-based LLMs are the tip, visible and well-known. But the bulk of AI’s future lies below the waterline: small language models embedded in devices, often at the point of manufacture. These models are static, contain statistical weights, and are rarely, if ever, signed.
That’s a problem.
If we’re not signing these models, we’re leaving the door open to manipulation, malicious or accidental. And unlike traditional code, a model is an opaque blob of statistical weights rather than readable logic, and its behavior is statistical rather than deterministic, making tampering harder to detect and potentially more dangerous.
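What would signing a model actually look like? In outline, it’s the same discipline we apply to code: hash the artifact, sign the hash, and ship the signature alongside the weights. Here is a minimal sketch in Python using the cryptography library, assuming an Ed25519 keypair and a hypothetical weights file named model.safetensors. A real scheme would also need key management, a manifest covering every file in the model package, and a trust chain back to the manufacturer.

```python
# A minimal sketch of detached-signature model signing.
# Assumptions: an Ed25519 keypair and a hypothetical weights
# file "model.safetensors"; real deployments would manage keys
# properly rather than generating them inline.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path: str) -> bytes:
    """Stream the file so multi-gigabyte weights never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Sign the digest of the model artifact, then ship the model,
# the detached signature, and the public key together.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(sha256_file("model.safetensors"))

with open("model.safetensors.sig", "wb") as f:
    f.write(signature)
```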
Why model signing matters
We already know the risks of unsigned firmware. Now imagine those risks applied to AI models that influence decisions, automate tasks, and interact with sensitive data. The implications are staggering.
So I ask you:
- Are your edge devices running trusted models?
- Do you have a mechanism to verify model integrity? (A sketch of what that check could look like follows this list.)
- Is your organization prepared for the “Wild West” of model deployment?
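On that second question: the verification half is just as simple in outline. Below is a sketch of the check an edge device could run before loading a model, assuming the signer’s public key was provisioned onto the device at manufacture and the detached signature ships next to the weights. The file names and the provisioned_key variable are hypothetical.

```python
# A minimal verify-before-load sketch for an edge device, assuming the
# signer's Ed25519 public key was provisioned at manufacture.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def sha256_file(path: str) -> bytes:
    """Stream the file so large weights never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

def model_is_trusted(model_path: str, sig_path: str,
                     public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature over the model's digest verifies."""
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, sha256_file(model_path))
        return True
    except InvalidSignature:
        return False

# Refuse to load anything that fails the check:
# if not model_is_trusted("model.safetensors", "model.safetensors.sig", provisioned_key):
#     raise RuntimeError("Model failed signature verification; refusing to load")
```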
Because right now, there’s no consortium, no rules, no infrastructure. There’s not even a shared vocabulary for model signing. It’s time we start building one.
This isn’t just about technology. It’s about trust. As AI becomes more pervasive, model signing must become a standard practice, just as code signing did years ago. The difference is that we’re no longer just signing code; we’d be signing the machines that think.
The machines are thinking, and it’s time we start signing them. This is only the beginning of the conversation. And at Sectigo, we’re committed to leading it. Stay tuned for more on this topic.
