Model Checking Security Properties of AI Assistant Gateways: A TLA+ Case Study of OpenClaw
This paper presents a formal verification case study of OpenClaw, an AI assistant gateway, using TLA+ and the TLC model checker. The authors verified 91 security-critical properties, uncovered three latent bugs in the system's implementation, and prevented two regressions. The work shows how lightweight formal methods can provide meaningful security assurance for AI infrastructure.
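To give a flavor of the kind of property involved, the following is a minimal TLA+ sketch of a safety invariant that TLC can check exhaustively. The state variables, actions, and the `NoGhostAuth` invariant here are illustrative assumptions, not taken from the OpenClaw specification itself:

```tla
---- MODULE GatewayAuth ----
\* Hypothetical gateway state: which sessions exist, and which are authorized.
VARIABLES sessions, authorized

Init == sessions = {} /\ authorized = {}

\* A session can be opened at any time.
Open(s) == sessions' = sessions \cup {s} /\ UNCHANGED authorized

\* Authorization may only be granted to a session that already exists.
Grant(s) == s \in sessions
            /\ authorized' = authorized \cup {s}
            /\ UNCHANGED sessions

Next == \E s \in 1..3 : Open(s) \/ Grant(s)

\* Safety invariant checked by TLC: no session is ever
\* authorized without first existing.
NoGhostAuth == authorized \subseteq sessions
====
```

Running TLC with `NoGhostAuth` as an invariant explores every reachable state of this small model; a verification effort like the one described would check many such invariants (plus temporal properties) against a far more detailed specification of the gateway's behavior.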