Our vision at Mithril Security: make AI private again


In recent years we have seen enormous progress in AI algorithms. Computer vision has been dominated by deep learning, which shows excellent performance on tasks as diverse as radiography analysis and biometric identification. We have also witnessed the emergence of smart assistants using voice recognition, helping us in our everyday lives and in our homes.

While these results are quite promising, one question remains: can we trust these AI models? There are several facets of trust that must be addressed, such as algorithmic fairness, transparency, and respect for confidentiality.

At Mithril Security we chose to focus on this last aspect: privacy. How can we strike the right balance between performance and confidentiality? Usually we either have a model that performs well and truly makes our lives easier but comes with potential privacy leaks, or we have a very private model with poorer performance.

To see why there is a trade-off, let us focus on the two ends of the spectrum:

Performance first

The approach that usually provides the most performance, but the least privacy, is to use cloud AI solutions, sometimes known as AI-as-a-service. In that case the user has nothing to install, does not need to provide computing power, does not need to perform any major updates, and benefits from state-of-the-art AI models hosted on a remote cloud.

However, users currently have to trust the service provider: the data is sent, hopefully encrypted in transit, but it is decrypted before being fed to the AI model. This means the service provider can access all the data you send to be analyzed. While this may seem far-fetched, it has actually happened: conversations with smart assistants were recorded and reviewed by employees without users' knowledge!

Privacy first

On the other end of the spectrum, we find very privacy-friendly solutions. One way of achieving this is to run the voice recognition algorithm directly on the user's device. This way, people's data never leaves their device and there is little risk of a privacy leak.

Nonetheless this approach has several drawbacks:

  • First, the burden shifts to the user's device: users need to install the app, the device's computational power is taxed, and energy consumption increases. In addition, models are often downgraded to run on a consumer device, which means accuracy might be much lower than with the cloud solution.
  • Second, the service provider has to ship its model to users' devices, so there is a high risk of reverse engineering, as people have unlimited access to the model and its predictions. The service provider also needs to push users to upgrade the application, whereas updates are seamless with the cloud solution.

We see here that finding a good balance between privacy and performance is a very tricky task. It is hard to imagine a solution succeeding with one but not the other: a poorly performing solution will not attract many adopters, but a performant solution raising serious privacy concerns will not gain the trust of its users.

So the question is: how can we build a solution that provides both?

Privacy Enhancing Technologies

There might be a third way to reconcile privacy and performance: enable the use of state-of-the-art AI models with technology that protects data in use.

I have written several articles explaining how these technologies work, in my series on homomorphic encryption and the recent one on confidential computing. You can have a look to learn more, but the basic premise of these solutions is the same: enable the inference of deep learning models on encrypted data.

By doing so, the user who sends her data to the service provider no longer needs to expose it in the clear, and therefore to privacy leaks, while still being able to benefit from the AI model.
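To make this less abstract, here is a minimal sketch of encrypted inference using homomorphic encryption with the open-source TenSEAL library. The "model" is just a toy linear layer with a polynomial activation, and all weights and inputs are illustrative values, not a real deployment:

```python
import tenseal as ts  # pip install tenseal

# --- Client side: create keys and encrypt the input ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for vector-matrix products

user_input = [0.1, 0.2, 0.3, 0.4]                  # private data
encrypted_input = ts.ckks_vector(context, user_input)

# --- Server side: compute on ciphertexts, never sees the plaintext ---
weights = [[0.5, -0.1],
           [0.3,  0.8],
           [-0.2, 0.4],
           [0.7,  0.6]]                            # toy 4x2 linear layer
bias = [0.05, -0.03]

encrypted_out = encrypted_input.matmul(weights) + bias
# Polynomial stand-in for an activation function: x + x^2
encrypted_out = encrypted_out.polyval([0.0, 1.0, 1.0])

# --- Client side: only the secret key holder can decrypt ---
print(encrypted_out.decrypt())
```

The key point is that the server computes directly on ciphertexts: only the client, who holds the secret key, can decrypt the result.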

Mithril Security

At Mithril Security, we believe that AI shows great promise for the future, but we should not trade privacy for convenience.

That is why Mithril Security is developing the infrastructure to make AI deployment safer.

We want to offer a seamless experience to machine learning engineers who wish to deploy their AI models, protecting both their users' data and their model weights, even when deployed on an untrusted public cloud.

To do so, we leverage confidential computing to provide a fast, secure, and easy-to-use AI inference server (a simplified sketch of the client-side flow is given after the list below). Our vision is that this technology can meet AI's twin needs of privacy and performance, and we are therefore on a mission to democratize it through:

  • Education: we want to provide easy-to-read resources for beginners, and we have therefore started a series explaining how one can use this technology
  • Open source: we will open-source our code so that anyone can try and play with this solution in an easy manner
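To illustrate, here is a hypothetical, deliberately simplified sketch of what the client-side flow can look like with a confidential computing inference server. None of the names below are a real API; the enclave is mocked in-process so the example runs, and the point is the order of operations: attest the code first, and only then send sensitive data.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Enclave:
    """In-process stand-in for a remote hardware enclave (not a real API)."""
    code: bytes  # the inference server code loaded in the enclave

    def attestation_report(self) -> str:
        # Real hardware signs this measurement; here we just hash the code.
        return hashlib.sha256(self.code).hexdigest()

    def run_inference(self, data: bytes) -> bytes:
        # Runs inside the enclave: the host OS and the cloud provider
        # cannot observe `data` while it is being processed.
        return b"prediction-for:" + data


# Measurement of the audited inference server, published by the
# service provider so clients know which code to expect.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-model-server-v1").hexdigest()


def query(enclave: Enclave, data: bytes) -> bytes:
    # 1. Attest: refuse to send anything to unexpected code.
    if enclave.attestation_report() != EXPECTED_MEASUREMENT:
        raise RuntimeError("attestation failed: untrusted enclave")
    # 2. Only then send the sensitive input for inference.
    return enclave.run_inference(data)


enclave = Enclave(code=b"audited-model-server-v1")
print(query(enclave, b"my private medical record"))
```

In a real deployment the attestation report is signed by the hardware (for example, Intel SGX), and the data travels over a TLS channel whose private key never leaves the enclave, so not even the cloud provider hosting the machine can read it.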

In the next posts, we will present how we tackle private AI inference with our solution Blind AI, so stay tuned!
