AI is a promising technology with the potential to revolutionise many sectors, such as healthcare, biometrics and industry. However, security and privacy concerns currently prevent AI teams from accessing the sensitive data they need to train and deploy their models. The lack of tooling that provides technical guarantees hinders AI adoption, as data owners fear uncontrolled use of their data.
That is why we have created BlindAI, an open-source solution that helps AI teams leverage sensitive data while giving data owners and regulators the technical guarantee that data shared with AI models remains protected at all times.
By leveraging Confidential Computing and existing AI tooling like ONNX and PyTorch, we make AI privacy easy for data teams.
Protect data at all times
With BlindAI, you can leverage state-of-the-art security to benefit from AI predictions without putting users’ data at risk. Even when dealing with sensitive data, predictions can be obtained with the guarantee that the data is never disclosed or exposed in the clear. Thanks to the hardware protection provided by Intel SGX, on top of the software layer we provide, data remains protected end to end: at rest, in transit and in use.
The figure above shows what happens before and after BlindAI: a regular AI identification system exposes data in the clear, while with our solution the data remains protected throughout.
BlindAI in action
Step #1 – Launch the inference server
Deploy our confidential inference server in a single command using Docker. We provide two images: a production image with all the security guarantees, which requires specific hardware, and a simulation image that can run anywhere for testing.
docker run -p 50051:50051 -p 50052:50052 --device /dev/sgx/enclave --device /dev/sgx/provision blindai-server /root/start.sh $PCCS_API_KEY
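For testing without SGX hardware, the simulation image can be launched in the same way, minus the SGX device mappings and the PCCS key. The exact image name below is an assumption for illustration; check the project repository for the published tag.

```shell
# Simulation mode: no SGX devices or PCCS API key needed.
# Image name is illustrative -- substitute the published simulation image.
docker run -p 50051:50051 -p 50052:50052 blindai-server-sim
```

Note that the simulation image provides no hardware security guarantees; it is intended only for development and integration testing.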
Step #2 – Load the AI model
PyTorch or TensorFlow models can be loaded into the confidential inference server using the ONNX format. Simply export your model to ONNX and use our SDK to upload it securely to the inference server.
from blindai.client import BlindAiClient, ModelDatumType

# Create the connection
client = BlindAiClient()
client.connect_server(
    "localhost",
    policy="policy.toml",
    certificate="host_server.pem"
)

# Upload the model to the server
response = client.upload_model(
    model="./distilbert-base-uncased.onnx",
    shape=(1, 8),
    dtype=ModelDatumType.I64
)
Step #3 – Perform confidential inference on sensitive data
Use our client SDK to check that the remote inference server has the right security properties (i.e. the right code and the right model loaded) before sending your data to be analysed with end-to-end protection.
from blindai.client import BlindAiClient
from transformers import DistilBertTokenizer

# Create the connection
client = BlindAiClient()
client.connect_server(
    "localhost",
    policy="policy.toml",
    certificate="host_server.pem"
)

# Prepare the inputs
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
sentence = "I love AI and privacy"
inputs = tokenizer(sentence, padding="max_length", max_length=8)["input_ids"]

# Get prediction
response = client.run_model(inputs)
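Conceptually, the verification performed against `policy.toml` boils down to comparing the enclave's attested identity with expected values. The sketch below is a simplified illustration of that idea; the field names and logic are assumptions for clarity, not the actual SDK internals:

```python
# Illustrative sketch only: field names and checks are simplified
# assumptions, not the real BlindAI attestation code.
def verify_enclave(report: dict, policy: dict) -> bool:
    # The attestation report includes a hash of the code loaded into
    # the enclave (MRENCLAVE in SGX terminology); it must match the
    # value pinned in the policy.
    if report["mr_enclave"] != policy["mr_enclave"]:
        return False
    # Reject enclaves running in debug mode unless explicitly allowed.
    if report["debug"] and not policy["allow_debug"]:
        return False
    return True

ok = verify_enclave(
    {"mr_enclave": "abc123", "debug": False},
    {"mr_enclave": "abc123", "allow_debug": False},
)
```

If any check fails, the client refuses to send data, so sensitive inputs never reach an unverified server.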