Private Llama 2 70B integration coming soon. Join our alpha!

AICert
Open-source tool to trace AI models' provenance

Provide technical guarantees that your model comes from trustworthy sources, making AI compliance easier


AI lacks transparency and auditability

As AI booms and regulation intensifies, tracing which datasets and algorithms were used to train an AI model becomes key. However, providing technical proof of weights provenance remains an unsolved problem.

Even when a model and its full training procedure (software, hardware, and data) are open-sourced, randomness induced by software and hardware makes training non-reproducible: retraining will not yield bit-identical weights, so reproduction alone cannot prove that a model comes from a specific training procedure.

There is therefore no way today to establish trustworthy provenance for AI models. This poses regulatory and security issues: models can contain backdoors, or be trained on PII, non-consented, or copyrighted data, in violation of the EU AI Act.

Introducing AICert
An open-source framework to ensure a trustworthy supply chain for AI using secure hardware

AICert is the first AI provenance solution to provide cryptographic proof that a model is the result of applying a specific algorithm to a specific training set.

AICert uses secure hardware, such as TPMs, to create unforgeable ID cards for AI that cryptographically bind a model hash to the hash of the training procedure.

This ID card serves as verifiable proof of a model's provenance, letting anyone trace it back to a trustworthy training procedure.
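
To make this binding concrete, here is a minimal sketch in Python of what such an ID card could contain. The file names and certificate fields are hypothetical, and in AICert the resulting statement would be signed by keys rooted in the secure hardware rather than by application code:

```python
import hashlib
import json

def file_hash(path: str) -> str:
    """Return the SHA-256 digest of a file (model weights, dataset, or code image)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical "AI ID card": a statement binding the model to its inputs.
id_card = {
    "model_hash": file_hash("model_weights.bin"),
    "training_code_hash": file_hash("training_image.tar"),  # Docker image of the training code
    "dataset_hash": file_hash("dataset.tar"),
}

# In AICert, this statement would be covered by a signature from keys
# rooted in secure hardware (e.g. a TPM), so it cannot be forged later.
payload = json.dumps(id_card, sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest())  # digest a hardware quote could cover
```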

Future-proof your AI models

Prove the absence of copyrighted data in training

Prove the absence of biased data in training

Prove that safety procedures were used during training

Designed for AI Teams

AICert is designed for data science teams to easily create certificates that contain the information needed to trace a model's provenance.

It works in 4 simple steps:
1. Provide your training code as a Docker image, along with the training set
2. Provision the right machines with the secure hardware and software stack
3. Run the training procedure on the secure hardware to produce a certificate
4. Share the certificate with users, who can verify the provenance of your model (see the sketch below)
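
For step 4, a user who receives the weights and the certificate could check the binding along the following lines. The certificate format and file names are a hypothetical simplification, and a full check would also verify the hardware signature (see the next section):

```python
import hashlib
import json

def file_hash(path: str) -> str:
    """SHA-256 digest of a file, matching how the certificate hashes the model."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Load the certificate shared alongside the model (hypothetical format).
with open("model_certificate.json") as f:
    certificate = json.load(f)

# The downloaded weights must hash to the value bound in the certificate.
if file_hash("model_weights.bin") == certificate["model_hash"]:
    print("Model matches the certified training run.")
else:
    print("Mismatch: these weights were NOT produced by the certified run.")
```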

Cryptographic proof with secure hardware

At the core of our traceability solution is the use of secure hardware. Secure hardware such as TPMs or secure enclaves has code integrity properties: it can prove that a specific software stack was loaded, from the BIOS through the OS all the way up to the application.

Because the code and data used for training can be attested inside the secure hardware, we can create a certificate that binds the weights to the training code and data. This certificate is unforgeable and can be stored on a public ledger to prove that a specific model was trained using a specific training set and code.
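
The attestation behind this certificate rests on measured boot. In TPM terms, each loaded component is "extended" into a platform configuration register (PCR), so the final register value commits to every component and to the order in which they were loaded. Below is a minimal sketch of that extend operation, with placeholder component contents:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = H(old_pcr || H(component)).
    The result depends on every component measured and on their order."""
    return sha256(pcr + sha256(component))

# Measure a (simplified) boot chain: BIOS -> OS -> training application.
pcr = b"\x00" * 32  # PCRs start zeroed at boot
for component in [b"bios-image", b"os-kernel", b"training-app"]:
    pcr = extend(pcr, component)

# The TPM signs this register value in a "quote"; a verifier who recomputes
# the same value from known-good components knows exactly which stack ran.
print(pcr.hex())
```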

Developed by privacy and AI experts

AICert is developed by Mithril Security, a startup dedicated to making AI more privacy-friendly and transparent. We previously developed BlindBox for confidential AI serving, and we are now launching AICert to cover the traceability of AI models.

Supported by

Confidential Computing Consortium · Berkeley SkyDeck · The Linux Foundation

Join the community

Discord

Join the community, share your ideas and talk with Mithril’s team.

Join the discussion
GitHub

Contribute to our project, and check out open issues and PRs.

Start Contributing