BastionAI: Secure multi-party AI training with Confidential Computing
About this event
The hybrid-format webinar "BastionAI: Secure multi-party AI training with Confidential Computing" was held at the Sorbonne Center for Artificial Intelligence (SCAI) on October 17 at 6:30 pm.
Multi-party learning is the key to accessing more data and developing more capable AI. In healthcare, for instance, data is often scarce and siloed in small datasets. Yet a high level of data protection is crucial to convincing more organizations to collaborate.
Techniques such as Federated Learning (FL) have emerged to reduce the risk of training models on multiple private datasets. Yet their deployment complexity and heavy overhead make them a poor fit for the needs of secure multi-party training. That is why we, at Mithril Security, are building BastionAI, a frictionless, privacy-friendly deep learning framework.
This webinar explains why we need secure AI training solutions and gives an overview of Federated Learning. We then present BastionAI, our new solution for secure training, and give a live demo of fine-tuning a DistilBERT model on a small private dataset with Differential Privacy. The webinar is organized as follows:
- Introduction - Why do we need secure AI training?
- Overview of Federated Learning
- Presentation of BastionAI, our multi-party secure training framework project
- Q&A session
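The demo fine-tunes DistilBERT with Differential Privacy. As a self-contained illustration of the DP-SGD mechanism commonly used for such training (per-example gradient clipping plus calibrated Gaussian noise), here is a NumPy sketch on a toy logistic-regression task. The model, data, and hyperparameters are illustrative assumptions, not the webinar's actual code.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step: clip each example's gradient, average, add Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Per-example gradients of the logistic loss: (sigmoid(Xw) - y) * x_i.
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X                      # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip (bounds any one example's influence).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Average, then add noise scaled to the clipping bound and batch size.
    noisy = grads.mean(axis=0) + rng.normal(0.0, noise_mult * clip / len(X), size=w.shape)
    return w - lr * noisy

# Toy private dataset: 64 examples with a linearly separable label.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (X @ true_w > 0).astype(float)

w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)

acc = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The privacy budget actually spent depends on the noise multiplier, batch size, and number of steps via a privacy accountant, which this sketch omits for brevity.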
Organized by Daniel HUYNH, CEO of Mithril Security
Hosted by: SCAI
Contribute to our project, and check out the open issues and PRs.
Join the community, share your ideas, and talk with Mithril’s team.
Follow and contribute to our upcoming projects.