Concrete ML v1.0.0 features a stable API, better inference performance, and user-friendly error reporting. Most importantly, tools have been added to make your model deployment in cloud environments hassle-free. This release is offered along with our second Hugging Face application that showcases encrypted image filtering.
Stable API and harmonization with Concrete
We made the Concrete ML API more forward-compatible and consistent with Concrete (formerly Concrete Numpy). This may require users to update their code, so we wrote a guide, Upgrading Concrete ML in Your Project, to simplify the transition. Of course, support is still available on the FHE.org Discord and within the Zama community.
Better assistance to design FHE-compatible models
Concrete ML and the underlying stack now return more information, helping you first make ML models FHE-compatible, then debug and optimize them. By leveraging insights about their models, users can more quickly and effectively convert them to FHE.
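For instance, a compiled model exposes its FHE circuit, which you can inspect to see whether accumulators fit the supported bit-widths. Below is a minimal sketch assuming the built-in LogisticRegression model; the exact introspection attributes may differ across versions.

```python
# Minimal sketch: compile a built-in model and inspect the resulting FHE circuit.
# The introspection attributes (circuit.graph, maximum_integer_bit_width) are
# assumptions and may differ across Concrete / Concrete ML versions.
import numpy as np
from concrete.ml.sklearn import LogisticRegression

X = np.random.uniform(-1, 1, size=(100, 4))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression(n_bits=8)
model.fit(X, y)

# compile() quantizes the model and builds the FHE circuit
circuit = model.compile(X)

# Large accumulators are the main source of FHE incompatibility and slow PBS,
# so checking the maximum integer bit-width is a good first debugging step
print(circuit.graph.maximum_integer_bit_width())
```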
Improved performance
Version 1.0.0 introduces roundPBS, which reduces the latency of non-linear functions in FHE. This is achieved by rounding off the least significant bits of the accumulators when applying the Programmable Bootstrapping (PBS) operations that implement activation functions. Rounding off these bits causes no loss of accuracy, while latency improves dramatically. The feature is currently only available in simulation, but it will soon be available in FHE execution.
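To illustrate the idea, here is a plain NumPy sketch (not the Concrete ML API): rounding away the low-order bits of an accumulator shrinks the bit-width that the PBS table lookup has to cover.

```python
# Plain NumPy illustration of the rounding idea behind roundPBS (not the
# Concrete ML API): removing low-order bits before a table lookup reduces
# the effective bit-width the PBS has to handle.
import numpy as np

def round_to_fewer_bits(accumulator: np.ndarray, lsbs_to_remove: int) -> np.ndarray:
    """Round away the least significant bits, returning lower bit-width values."""
    half = 1 << (lsbs_to_remove - 1)
    return (accumulator + half) >> lsbs_to_remove

acc = np.array([1000, -517, 2000, -2048])        # 12-bit accumulator values
reduced = round_to_fewer_bits(acc, lsbs_to_remove=4)
print(reduced)                                   # 8-bit values feed the table lookup
```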
There is now also a convenient way to find the best PBS error probability, for example the largest value that does not significantly diminish accuracy. FHE programs run faster when a larger error probability is used.
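The sketch below shows the kind of search this enables. It assumes that compile() accepts a p_error argument and that predict() supports a simulation mode; check the exact parameter and mode names against the documentation for your version.

```python
# Hedged sketch: sweep candidate PBS error probabilities and keep the largest
# one that preserves accuracy in simulation. The p_error argument and the
# fhe="simulate" mode are assumptions to verify against your version.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from concrete.ml.sklearn import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
model = DecisionTreeClassifier(n_bits=6)
model.fit(X, y)

reference = accuracy_score(y, model.predict(X))

best_p_error = None
for p_error in [1e-5, 1e-3, 1e-2, 0.05, 0.1]:
    model.compile(X, p_error=p_error)
    simulated = accuracy_score(y, model.predict(X, fhe="simulate"))
    if reference - simulated <= 0.01:   # tolerate at most 1 point of accuracy loss
        best_p_error = p_error          # larger p_error means faster FHE execution

print("Largest acceptable p_error:", best_p_error)
```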
Structured pruning has also been introduced for built-in neural networks. This ML technique reduces the number of neurons in a model, which makes it an excellent way to accelerate inference time in FHE.
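Structured pruning is a generic technique; the PyTorch sketch below (not the Concrete ML built-in mechanism) removes whole neurons from a linear layer, which is what shrinks the accumulators and the number of PBS operations once the model runs in FHE.

```python
# Generic PyTorch illustration of structured pruning (not the Concrete ML
# built-in mechanism): removing whole neurons shrinks the matrix multiplications,
# and therefore the accumulators and PBS count, when the model runs in FHE.
import torch
from torch import nn
from torch.nn.utils import prune

layer = nn.Linear(in_features=64, out_features=32)

# Remove 50% of the output neurons (dim=0 prunes whole rows of the weight matrix)
prune.ln_structured(layer, name="weight", amount=0.5, n=2, dim=0)
prune.remove(layer, "weight")  # make the pruning permanent

# Count how many neurons survived
kept = int((layer.weight.abs().sum(dim=1) != 0).sum())
print(f"{kept} of 32 neurons kept")
```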
Easy deployment
Concrete ML fine-tunes models to make them FHE-friendly and precise, but it also makes them easy to deploy. Creating a server that runs FHE executions and setting up client functions to encrypt and decrypt private data is now simple, as sketched below. Our tutorials show you how to deploy models easily on your AWS machines.
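Here is a condensed sketch of the client/server flow using the classes in concrete.ml.deployment. Directory paths are placeholders, and the method names should be checked against the version you deploy with.

```python
# Condensed sketch of the deployment flow with concrete.ml.deployment.
# Paths are placeholders; verify method names against your installed version.
import numpy as np
from concrete.ml.sklearn import LogisticRegression
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Train and compile a small model to deploy (any compiled Concrete ML model works)
X = np.random.uniform(-1, 1, size=(100, 4))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression(n_bits=8).fit(X, y)
model.compile(X)

# --- Development machine: save the compiled model artifacts ---
FHEModelDev(path_dir="deployment_files", model=model).save()

# --- Client side: generate keys and encrypt the input ---
client = FHEModelClient(path_dir="deployment_files", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X[:1])

# --- Server side: run the FHE execution on encrypted data ---
server = FHEModelServer(path_dir="deployment_files")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# --- Client side: decrypt and de-quantize the result ---
prediction = client.deserialize_decrypt_dequantize(encrypted_result)
print(prediction)
```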
Demos and technical articles based on Concrete ML
Our new Hugging Face Space demonstrates how to apply filters to encrypted images. You can also review our blog post, which explains how it all works under the hood.
Finally, to help advanced users and scientists gain a deeper understanding of how ML works with TFHE on encrypted data, we’ve published two technical articles: Privacy-Preserving Tree-Based Inference with Fully Homomorphic Encryption, which discusses tree-based models, and Deep Neural Networks for Encrypted Inference with TFHE—accepted to CSCML ‘23—about neural networks in FHE.
Additional links
- Star the Concrete ML GitHub repository to endorse our work.
- Review the Concrete ML documentation.
- Get support on our community channels.
- Help advance the FHE space with the Zama Bounty Program.