PyData Global 2022

BastionAI: Towards an Easy-to-use Privacy-preserving Deep Learning Framework
12-02, 08:00–08:30 (UTC), Talk Track I

We present BastionAI, a new framework for privacy-preserving deep learning leveraging secure enclaves and Differential Privacy.
We provide promising first results on fine-tuning a BERT model on the SMS Spam Collection Data Set within a secure enclave with Differential Privacy.
The library is available at https://github.com/mithril-security/bastionai.


Confidential training of deep learning models on sensitive data, potentially involving multiple data owners such as hospitals or banks, remains a major milestone on the path to AI adoption in critical industries. Federated Learning approaches have been proposed, but the hardware and software deployment complexity on each node, the communication cost, and the high level of Differential Privacy noise (when applied at all) in decentralized setups make them difficult to adopt in practice.
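The noise penalty of decentralized setups can be illustrated with the standard Gaussian mechanism. Under local Differential Privacy each participant perturbs its own contribution before sharing it, so independent noise terms accumulate across participants, whereas a trusted central aggregator (here, the enclave) can add noise once after aggregation. A minimal sketch of this comparison (generic DP arithmetic, not BastionAI code; the parameter values are illustrative):

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise scale of the Gaussian mechanism for an (epsilon, delta) budget."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

n_participants = 100
sigma = gaussian_sigma(epsilon=1.0, delta=1e-5)

# Central DP: the trusted aggregator adds noise once to the aggregate.
central_noise_std = sigma

# Local DP: each of the n participants adds its own noise; independent
# Gaussians add in variance, so the aggregate std grows as sqrt(n).
local_noise_std = sigma * math.sqrt(n_participants)

print(central_noise_std, local_noise_std)
```

With 100 participants the local-DP aggregate carries 10x the noise standard deviation of the central-DP aggregate at the same per-record budget, which is the gap a trusted enclave lets one avoid.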
We present BastionAI, a new framework for privacy-preserving deep learning leveraging secure enclaves and Differential Privacy. The library departs from traditional decentralized Federated Learning and proposes a Fortified Learning approach, in which computations are centralized in a Trusted Execution Environment. This allows faster training, lower Differential Privacy noise for the same privacy budget, and simpler deployment, as each participating node only needs a lightweight client to verify the security features of the remote enclave.
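Inside the enclave, differentially private training typically follows the DP-SGD recipe: clip each example's gradient to bound its sensitivity, average, add Gaussian noise calibrated to the clipping bound, then update. A schematic NumPy version of one such step (an illustration of the general technique only; function and parameter names are hypothetical, not BastionAI's actual API):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip per-example gradients, average, add noise, update."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    grad = np.mean(clipped, axis=0)
    # Noise std is calibrated to the clipping bound (the per-example sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=grad.shape)
    return params - lr * (grad + noise)

params = np.zeros(4)
grads = [np.ones(4) * 3.0, -np.ones(4)]  # a batch of two per-example gradients
new_params = dp_sgd_step(params, grads)
```

Because clipping bounds each example's influence on the update, the added noise yields a formal per-step privacy guarantee that can be accounted for across training.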
We provide promising first results on fine-tuning a BERT model on the SMS Spam Collection Data Set within a secure enclave with Differential Privacy.
The library is available at https://github.com/mithril-security/bastionai.


Prior Knowledge Expected

No previous knowledge expected

CEO of Mithril Security.
Our goal: democratization of privacy in AI 🧠
https://github.com/mithril-security