
Verified, Secure Machine Learning
About
Verisimilar is a research project on Verified, Secure Machine Learning.
The name “Verisimilar” is how you might pronounce the acronym “VeriSMLear”, which would be a poor way to abbreviate the phrase “Verified, Secure Machine Learning”.
This project is one (of many) that seeks to understand how formal methods can be applied to produce secure machine learning-based systems.
The word “Verisimilar” is an adjective that means “having the appearance of truth”. It is an apt name for a project focused on applying formal methods, which are traditionally conceived as methods for establishing truth about programs. The so-called “guarantees” established by formal methods are not truths but more akin to formal claims and evidence with much fine print. However, they often have the appearance of truths.
This project was originally funded as an activity within the larger CATCH MURI project, under the AUSMURI award AUSMURIIV000001.
Activities
The Verisimilar project currently comprises the following activities.
Verified Certified Robustness
The Problem: Modern machine learning systems, especially neural networks, are vulnerable to tiny, carefully crafted changes in their input—called adversarial examples—that can cause them to make dangerous mistakes.
Aims: This activity aims to build tools that provide strong, trustworthy guarantees that a neural network is robust to such attacks.
Our Approach: We focus on a new approach that combines the scalability of robustness certifiers, which efficiently check that small input perturbations cannot change a network’s predictions, with the rigor of formal verification, which uses mathematical proof to guarantee that the certifier itself is correct. A sketch of the basic certification idea appears at the end of this activity.
Results: Our key innovation is a certifier that has itself been formally verified using the Dafny programming language, ensuring that its robustness claims are sound. The certifier is usefully applicable to small image-recognition models, such as those trained on the MNIST and CIFAR-10 datasets.
Outlook: This work is a step toward making machine learning systems more reliable and secure. Scaling our methods to more expressive and complicated neural networks is a key future research challenge.
Find Out More: For an accessible overview, see the blog post “Verified or Certified Robustness? Why don’t we have both?”, which is Part I of a series on this project.
For detailed information, check out our CAV 2025 paper.
The software is available on GitHub or via the artifact accompanying the CAV 2025 paper.
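To illustrate how certifiers of this kind can work, below is a minimal sketch of one standard certification idea: a margin check based on a Lipschitz bound on the network. Everything in the sketch (the function names, the use of a single per-logit Lipschitz constant, the example numbers) is an assumption made for illustration only; it is not the algorithm implemented by our verified certifier, whose details are in the CAV 2025 paper.

```python
import numpy as np

def certify_l2_robust(logits: np.ndarray, lipschitz_bound: float, eps: float) -> bool:
    """Toy robustness check based on an assumed Lipschitz bound.

    If each logit of the network is K-Lipschitz (with respect to the L2 norm
    on inputs), then perturbing the input by at most eps changes each logit
    by at most K * eps, and so shrinks the gap between the predicted class's
    logit and any other logit by at most 2 * K * eps.  A gap larger than that
    certifies that the prediction cannot change within the eps-ball.
    """
    sorted_logits = np.sort(logits)[::-1]          # logits in descending order
    margin = sorted_logits[0] - sorted_logits[1]   # gap to the runner-up class
    return margin > 2.0 * lipschitz_bound * eps

# Example: a 3-class prediction, an assumed bound K = 1.5, and budget eps = 0.5.
logits = np.array([4.1, 1.3, 0.2])
print(certify_l2_robust(logits, lipschitz_bound=1.5, eps=0.5))  # True: 2.8 > 1.5
```

The point of the sketch is that the check itself is cheap to run; the hard part is ensuring that the check, and the bound it relies on, are actually sound. That is the gap that formally verifying the certifier is intended to close.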
Verified Oblivious Algorithms
The Problem: How can we securely train machine learning models in the cloud, while ensuring that sensitive training data is kept confidential, including from a cloud provider who might attempt to exploit side channels, such as the algorithm’s memory-access pattern, to infer sensitive information?
Aims: This activity aims to develop methods to formally verify the security of probabilistic oblivious algorithms, which prevent such leaks by using randomisation to hide how data is accessed. A toy illustration of this kind of leak appears at the end of this activity.
Our Approach: We develop a novel program logic that blends classical Hoare-logic reasoning with probabilistic independence reasoning and that can handle advanced programming idioms such as secret-dependent control flow and random loops. We apply this logic to reason about perfectly oblivious approximations of oblivious algorithms.
Results: Our logic is formally proved sound in Isabelle/HOL and has been used to carry out the first systematic formal verification of various real-world oblivious algorithms that serve as building blocks for secure machine learning training.
Outlook: This work paves the way toward trusted, side-channel-resistant machine learning infrastructure, with broad applications in privacy-preserving cloud computing and secure data processing. Extending these methods to reason explicitly about encryption in oblivious algorithms is a key future research challenge.
Find Out More: Check out our FM 2024 paper and accompanying artifact.
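To make the memory-access side channel concrete, the toy sketch below contrasts a direct lookup, whose single access reveals the secret index to anyone who can observe which memory locations are touched, with an oblivious lookup that touches every element in a fixed order. The code is purely illustrative (Python is not constant-time, and the algorithms verified in the FM 2024 paper are probabilistic and far more sophisticated); the function names are invented for the example.

```python
def leaky_lookup(table, secret_index):
    # A single access to table[secret_index]: the memory-access pattern
    # depends on, and therefore leaks, the secret index.
    return table[secret_index]

def oblivious_lookup(table, secret_index):
    # Touch every element in the same fixed order regardless of the secret,
    # selecting the wanted value arithmetically rather than by branching.
    result = 0
    for i, value in enumerate(table):
        is_match = int(i == secret_index)  # equals 1 exactly once
        result = is_match * value + (1 - is_match) * result
    return result

table = [10, 20, 30, 40]
assert leaky_lookup(table, 2) == oblivious_lookup(table, 2) == 30
```

Scanning everything on every access is safe but slow. Probabilistic oblivious algorithms instead use randomisation (for example, shuffling data) so that the distribution of memory accesses is independent of the secrets, which is exactly the kind of independence our program logic is designed to reason about.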
People

Publications
A Formally Verified Robustness Certifier for Neural Networks.
James Tobler, Hira Taqdees Syeda, Toby Murray.
International Conference on Computer Aided Verification (CAV), 2025.
[extended version (arXiv)]
Combining Classical and Probabilistic Independence Reasoning to Verify the Security of Oblivious Algorithms.
Pengbo Yan, Toby Murray, Olga Ohrimenko, Van-Thuan Pham, Robert Sison.
International Symposium on Formal Methods (FM), 2024.
[extended version (arXiv)]
Software
The Verisimilar project has produced the following software and artifacts.