Mithril Security

Our mission

AI is a promising technology with the potential to revolutionise many sectors, such as healthcare, biometrics and industry. However, security and privacy concerns currently prevent AI teams from accessing the sensitive data they need to train and deploy their models. The lack of tooling that provides technical guarantees hinders AI adoption, as data owners fear uncontrolled use of their data.
That is why we created BlindAI, an open-source solution that helps AI teams leverage sensitive data while giving data owners and regulators the technical guarantee that data shared with AI models remains protected at all times.
By combining Confidential Computing with existing AI tooling such as ONNX and PyTorch, we make AI privacy easy for data teams.

Protect data at all times

With BlindAI, leverage state-of-the-art security to benefit from AI predictions without putting users’ data at risk. Even when dealing with sensitive data, AI predictions can be obtained with the guarantee that the data is never disclosed or exposed in the clear. Thanks to the hardware protection provided by Intel SGX, combined with the software layer we provide, data remains protected end to end: at rest, in transit and in use.
The figure above shows what happens before and after BlindAI: a regular AI identification system exposes data, while with our solution the data remains protected.

BlindAI in action

Step #1 – Launch the inference server

Deploy our confidential inference server with a single Docker command. We provide one image for production workloads, with all the security guarantees but requiring specific hardware (Intel SGX), and one simulation image that can run anywhere for testing.

docker run -p 50051:50051 -p 50052:50052 --device /dev/sgx/enclave --device /dev/sgx/provision blindai-server /root/start.sh $PCCS_API_KEY

Step #2 – Load the AI model

PyTorch or TensorFlow models can be loaded into the confidential inference server using the ONNX format. Simply export your model to ONNX and use our SDK to upload it securely to the inference server.
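As an illustration of the export step, here is a minimal sketch of converting a Hugging Face DistilBERT model to ONNX with torch.onnx.export, assuming the same (1, 8) int64 input shape used in the upload below; the model and tensor names are only examples, so adapt them to your own use case.

import torch
from transformers import DistilBertForSequenceClassification

# Illustrative model; torchscript=True makes it return plain tuples,
# which keeps tracing-based export straightforward
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", torchscript=True
)
model.eval()

# Dummy input matching the (1, 8) int64 token-id shape expected by the server
dummy_input = torch.zeros(1, 8, dtype=torch.long)

# Export to ONNX; "input_ids" / "logits" are illustrative tensor names
torch.onnx.export(
    model,
    (dummy_input,),
    "distilbert-base-uncased.onnx",
    input_names=["input_ids"],
    output_names=["logits"],
)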
				
from blindai.client import BlindAiClient, ModelDatumType

# Create the connection
client = BlindAiClient()
client.connect_server(
    "localhost",
    policy="policy.toml",
    certificate="host_server.pem"
)

# Upload the model to the server
response = client.upload_model(model="./distilbert-base-uncased.onnx", shape=(1, 8), dtype=ModelDatumType.I64)

Step #3 – Perform confidential inference on sensitive data

Use our client SDK to verify that the remote inference server has the expected security properties (i.e. the right code and the right model loaded) before sending your data to be analysed with end-to-end protection.
				
from blindai.client import BlindAiClient
from transformers import DistilBertTokenizer
 
# Create the connection
client = BlindAiClient()
client.connect_server(
    "localhost",
    policy="policy.toml",
    certificate="host_server.pem"
)

# Prepare the inputs
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
sentence = "I love AI and privacy"
inputs = tokenizer(sentence, padding="max_length", max_length=8)["input_ids"]
 
# Get prediction
response = client.run_model(inputs)
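To give an idea of how the result can be used, here is a small post-processing sketch. The response.output field name is an assumption about the reply object returned by run_model and may differ between SDK versions.

import numpy as np

# Assumption: the reply exposes the model's raw logits as a flat list
# under `response.output`; adjust to your version of the SDK
logits = np.array(response.output)
predicted_class = int(np.argmax(logits))
print(f"Predicted class id: {predicted_class}")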

				
			
For more details, have a look at our first article, Deploy Transformers with Confidentiality. Many other examples are available, from speech-to-text with privacy, to ResNets for facial recognition, to Word2Vec for zero-trust search.

For AI teams:

Fast

Easy to deploy

Our inference solution provides end-to-end protection with little impact on inference speed. Thanks to our packaged server images and client SDK, confidential inference can be deployed easily, without requiring deep security expertise.

For data owners:

Secure

Seamless

Thanks to the end-to-end protection, end users can benefit from AI inference without ever having to worry about sharing sensitive data, and clients can adopt new AI providers’ solutions without fearing disclosure of that data.

Compatibility

PyTorch / TensorFlow models are supported by converting them to ONNX

No accuracy loss

Predictions are as accurate as non-confidential ones

Proof of execution

Cryptographic proof can be provided that only authorised code, with a specific model, was used on the data

Join the community

Github

Contribute to our project, and check out open issues and PRs.

Discord

Join the community, share your ideas and talk with Mithril’s team.

Roadmap

Follow and contribute to our upcoming projects.