A unified framework for confidential data science collaboration
The first open-source privacy toolkit for data exploration, AI training, and deployment that effortlessly fits in your workflow
Open access to your data while staying in control
BastionLab helps data owners grant access to their data while minimizing exposure, by filtering the queries data scientists run.
Selective dataset sharing
Share only the data you choose; no need to open up your whole database.
Approved data queries
All results shown to data scientists are sanitized.
Your favorite frameworks
Leverage your usual data science tools.
Flexibility
Change the privacy policies while collaborating and approve individual requests.
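The idea behind approved, sanitized queries can be pictured with a small sketch. This is plain Python, not BastionLab's actual API; the class and function names and the minimum-group-size rule are illustrative assumptions. The data owner attaches a policy to a dataset, and any result aggregating too few rows is filtered out before the data scientist sees it.

```python
# Hypothetical sketch of query filtering, NOT the actual BastionLab API:
# the owner attaches a policy, and every result is checked against it
# before being released to the data scientist.

from dataclasses import dataclass

@dataclass
class AggregationPolicy:
    """Release a result only if it aggregates at least `min_group_size` rows."""
    min_group_size: int = 10

def run_query(rows, group_key, value_key, policy):
    """Group rows and return per-group averages, dropping groups
    smaller than the policy threshold (sanitized output)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row[value_key])
    return {
        key: sum(vals) / len(vals)
        for key, vals in groups.items()
        if len(vals) >= policy.min_group_size   # the policy filter
    }

# A group with fewer rows than the threshold never reaches the analyst.
rows = [{"dept": "a", "salary": 100}] * 12 + [{"dept": "b", "salary": 200}] * 3
result = run_query(rows, "dept", "salary", AggregationPolicy(min_group_size=10))
# "b" has only 3 rows, so it is filtered out; only "a" is released.
```

Loosening or tightening the policy mid-collaboration then amounts to swapping the attached policy object, which matches the flexibility described above.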
Allow AI models to access confidential data
BastionLab lets data scientists train AI models on confidential data without exposing it to leakage risks.
End-to-end encryption
Data is never accessed in the clear, thanks to Confidential Computing technologies.
Multi-party learning
Train AI models securely on multiple datasets.
Compatible with the latest models
From ResNet to GPT models.
High performance
Close privacy gaps without degrading performance.
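Multi-party learning can take several forms; one minimal sketch of the core idea is generic federated averaging, shown here in plain Python. This is not BastionLab's actual protocol, just an illustration under that assumption: each party computes an update on its own data, and only model parameters, never raw rows, are shared and averaged.

```python
# A minimal sketch of one multi-party idea: each party trains locally and
# only shares model parameters, which are averaged into a joint model.
# This is generic federated averaging, NOT BastionLab's actual protocol.

def local_step(weights, data, lr=0.1):
    """One gradient step of 1-D linear regression y ~ w * x on a party's data."""
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(party_weights):
    """Aggregation step: plain average of each party's parameters."""
    return sum(party_weights) / len(party_weights)

# Two parties hold disjoint samples of y = 2x; raw rows never leave a party.
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(50):
    w = federated_average([local_step(w, party_a), local_step(w, party_b)])
# w converges to the slope 2.0 without either dataset being pooled.
```

In a confidential-computing deployment, the aggregation step would additionally run inside an enclave so no party sees the others' updates in the clear.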

Run AI Models on Private Data
With BlindAI, AI models can run on confidential data without that data ever being exposed in the clear.
End-to-end encryption
Third parties cannot access data in the clear, thanks to Confidential Computing.
Simple deployment
Deploy models with privacy in a few lines of code.
High performance
Provide privacy with minimal slowdown.
Compatible with the latest models
ONNX compatibility supports the latest models, such as Whisper, GPT, and BERT.
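The end-to-end property above can be illustrated with a toy sketch in plain Python. This is NOT BlindAI's real cryptography, which relies on attested TLS to a hardware enclave; here a one-time-pad XOR stands in for the cipher, and the key provisioning step is assumed. The point is only that any third party relaying the ciphertext never sees the data in the clear.

```python
# Toy illustration of the end-to-end property (NOT BlindAI's real crypto):
# data is encrypted on the client, and only the enclave holds the key,
# so a relay between them sees nothing in the clear.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: sound only if the key is random and used once."""
    return bytes(d ^ k for d, k in zip(data, key))

# Assumed: this key was provisioned to the enclave over an attested channel.
enclave_key = secrets.token_bytes(32)

plaintext = b"patient record"
ciphertext = xor_bytes(plaintext, enclave_key)   # client side
decrypted = xor_bytes(ciphertext, enclave_key)   # inside the enclave
```

Only the holder of `enclave_key` can recover `plaintext`; the infrastructure in between handles ciphertext exclusively.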

Use cases
Discover how our solution can help you address the most common AI privacy concerns
Diagnostic Assistant
Private Speech Transcription
Confidential Document Analysis
Public Space Monitoring
Facial Recognition
Zero-Trust Search

Join the community