I’m a Software Engineer based in London, specialising in software correctness, systems reliability, and formal verification.
As AI increasingly writes and operates software, correctness risks are shifting to machine scale. I integrate rigorous testing and verification into software systems, including AI and agentic pipelines, to build software that is provably correct, particularly in cloud infrastructure, distributed systems, and other settings where bugs are costly or carry reputational, regulatory, or even criminal liability.
I lead a team of research engineers on provable reliability at Huawei R&D and previously worked in the Automated Reasoning Group at AWS. I hold a PhD in logics for verification.
In less than two years at Huawei, my team and I prevented 6 critical atomicity and consistency bugs from reaching production in a core cloud database; unblocked the launch of a key IAM security feature through logic-encoding optimisations and the detection of 7 soundness bugs; and found 15 bugs (4 high-severity) in DPDK's eBPF validator. All through rigorous verification applied to real codebases.
Model Checking: CBMC, ESBMC
Code-level Verification: Dafny, Verus, Prusti, Nagini, Gobra
Model-based Specification & Verification: TLA+, P
Solvers & Provers: Z3, CVC5, Isabelle, Coq
Cauli, Ortiz, Piterman: Actions over Core-closed Knowledge Bases, IJCAR 2022 [Paper]
Cauli, Ortiz, Piterman: Closed- and Open-world Reasoning for Cloud Infrastructure Security, KR 2021 [Paper]
Cauli, Li, Piterman, Tkachuk: Pre-deployment Security Assessment for Cloud Services through Semantic Reasoning, CAV 2021 [Paper] [5-min] [25-min]
Tkachuk, Cauli, Rungta, Bolignano, Hortala, Maher: Automatically Generating a Machine-Readable Threat Model Using a Template Associated with an Application or Service, US Patent 11,128,653, Amazon Technologies Inc. [Patent]
Cauli, Piterman: Equivalence of μp-Calculus and p-Automata, CIAA 2017 [Paper]
