AI lacks transparency and auditability
As the AI boom intensifies and regulation tightens, tracing which datasets and algorithms were used to train an AI model becomes key. However, providing technical proof of weights provenance remains an unsolved problem.
Even when a model and its full training procedure (software, hardware, and data) are open-sourced, randomness introduced by the software and hardware makes training non-reproducible, so there is no way to prove that a model came from a specific training procedure.
There is therefore no trustworthy way today to establish the provenance of AI models. This poses regulatory and security issues: models can contain backdoors or be trained on PII, non-consented, or copyrighted data, which is non-compliant with the EU AI Act.
An open-source framework to ensure a trustworthy supply chain for AI using secure hardware
AICert is the first AI provenance solution to provide cryptographic proof that a model is the result of the application of a specific algorithm on a specific training set.
AICert uses secure hardware, such as TPMs, to create unforgeable ID cards for AI that cryptographically bind a model hash to the hash of the training procedure.
This ID card serves as irrefutable proof to trace the provenance of a model to ensure it comes from a trustworthy and unbiased training procedure.
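At its simplest, such an ID card is a record that binds the hash of the model weights to the hashes of the training code and data. The sketch below illustrates the idea with Python's standard `hashlib`; the byte strings stand in for real artifacts, and the field names are hypothetical, not AICert's actual certificate format (which is produced and signed inside secure hardware).

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical placeholders for the real training artifacts.
model_weights = b"...serialized model weights..."
training_code = b"def train(data): ..."
training_data = b"...training set bytes..."

# The "ID card": a record binding the model hash to the
# hashes of the training procedure that produced it.
id_card = {
    "model_hash": sha256_hex(model_weights),
    "code_hash": sha256_hex(training_code),
    "data_hash": sha256_hex(training_data),
}

# Canonical serialization of the record, ready to be signed
# (in AICert's case, by a key held in secure hardware).
id_card_bytes = json.dumps(id_card, sort_keys=True).encode()
```

Because SHA-256 is collision-resistant, any change to the weights, code, or data produces a different record, so a signed copy of this record pins the model to one specific training procedure.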
Future-proof your AI models
Prove absence of copyrighted data for training
Prove absence of biased data for training
Prove usage of safety procedures during training
It works in 4 simple steps:
Cryptographic proof with secure hardware
At the core of our traceability solution is the use of secure hardware. Secure hardware such as TPMs or secure enclaves has code integrity properties, i.e. it can provide proof that a specific software stack is loaded, from the BIOS through the OS all the way up to the application.
Because the code and data used for training can be attested inside the secure hardware, we can create a certificate that binds the weights to the training code and data. This certificate is unforgeable and can be stored on a public ledger to prove that a specific model was trained using a specific training set and code.
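The code integrity property rests on the TPM's measurement mechanism: each component in the boot chain is hashed and "extended" into a Platform Configuration Register (PCR), where the new PCR value is the hash of the old value concatenated with the new measurement. Below is a minimal sketch of that extend operation in Python; the component list is a hypothetical stand-in for a real measured software stack.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = SHA-256(old_pcr || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Hypothetical software stack, measured in boot order.
stack = [
    b"BIOS image",
    b"OS kernel",
    b"training application",
    b"training code",
    b"training data",
]

pcr = bytes(32)  # PCRs start zeroed at boot
for component in stack:
    pcr = extend(pcr, hashlib.sha256(component).digest())
```

Because each step chains over the previous PCR value, the final value commits to every component and its order: changing any element of the stack yields a different PCR, so a hardware-signed quote over this register attests the entire software stack that produced the weights.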
Developed by privacy and AI experts
AICert is developed by Mithril Security, a startup dedicated to making AI more privacy-friendly and transparent. We previously developed BlindBox for confidential AI serving, and we are now launching AICert to cover the traceability of AI models.