Announcing Concrete ML v0.2

April 21, 2022
The Zama Team

Today, we are announcing the release of Concrete ML as a public alpha. The package is built on top of Concrete Numpy.

The goal is to allow data scientists without any prior knowledge of cryptography to automatically turn classical machine learning (ML) models into their FHE equivalent. This release provides APIs which are as close as possible to what data scientists are already using.

Easy-to-use APIs to compile ML models to FHE

A major goal of this release is to make adopting Concrete ML as simple as possible for users of popular machine learning frameworks. Here's an example with a very basic linear model in scikit-learn.

In scikit-learn, it would be something like this:

from sklearn.linear_model import LinearRegression as SklearnLinearRegression
linreg = SklearnLinearRegression()
linreg.fit(x_train, y_train)
y_pred = linreg.predict(x_test)

The equivalent in Concrete ML, with the same model but operating on encrypted data, is:

from concrete.ml.sklearn import LinearRegression as ConcreteLinearRegression

# Quantized linear regression: n_bits sets the quantization bit-widths
q_linreg = ConcreteLinearRegression(n_bits={"inputs": 6, "weights": 1})
q_linreg.fit(x_train, y_train)

# Compile the trained model to its FHE equivalent, using calib_data for calibration
q_linreg.compile(calib_data)

# Inference on encrypted data
y_test_pred_fhe = q_linreg.predict(x_test, execute_in_fhe=True)

where q_linreg stands for quantized linear regression, calib_data is a representative unlabeled dataset, and n_bits sets the number of bits used for quantization. compile() is the function that turns the model into its FHE equivalent.
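For reference, the snippets above assume that a dataset has already been prepared. A minimal, hypothetical setup, using standard scikit-learn utilities and simply reusing the training inputs as calibration data, could look like this:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Hypothetical data setup: any regression dataset split into
# train and test sets would work the same way
X, y = make_regression(n_samples=200, n_features=2, noise=0.1, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# A representative unlabeled dataset for calibration; here we reuse
# the training inputs
calib_data = x_train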

We emphasize that, for linear models and trees, we don't reimplement model training in Concrete ML, so you can use any variation of these models that scikit-learn supports. This lets you enjoy all the features of scikit-learn, and use pipelines or grid search with Concrete ML models, as shown in this tutorial and sketched below. Once the models are trained, they can be compiled to FHE regardless of the training settings passed to scikit-learn.
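As a quick illustration of this compatibility, here is a minimal sketch of a scikit-learn grid search over a Concrete ML model. The candidate values for n_bits are arbitrary choices made for this example:

from sklearn.model_selection import GridSearchCV
from concrete.ml.sklearn import LinearRegression as ConcreteLinearRegression

# Search over the quantization configuration; the candidate values
# below are illustrative
param_grid = {"n_bits": [{"inputs": 4, "weights": 1}, {"inputs": 6, "weights": 1}]}
search = GridSearchCV(ConcreteLinearRegression(), param_grid, cv=3)
search.fit(x_train, y_train)
print(search.best_params_)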

Overview of our FHE ML classifiers

In the following illustrations, we compare models between scikit-learn and Concrete ML. You can reproduce these experiments with the tutorial from the documentation. The tutorial trains classifiers on three datasets and generates graphs of the decision boundaries, along with the accuracy obtained on each test set. The accuracies of the Concrete ML classifiers, shown as percentages on each plot, are measured on encrypted data (i.e., in FHE), while, to reduce the execution time of the notebook, the red/blue decision-function contours are computed without FHE (through a simulator called the Virtual Lib). The first dataset is make_moons, the second is make_circles, and the third is a simple, almost linearly separable dataset. Let's look at the results of this tutorial in more detail.

Linear models

For these simple 2D linear models, accuracy in FHE is good and similar to that of their unencrypted scikit-learn counterparts. However, in the current release, the accuracy of our heavily quantized classifiers degrades rapidly as the number of dimensions increases. This will be improved in future releases.

Decision-tree models

Tree-based classifiers in Concrete ML achieve excellent accuracy on encrypted data. Although tree models require comparisons, which are non-linear operations on encrypted data, Zama's Programmable Bootstrapping makes these operations possible in FHE. Thus, accuracy in FHE for tree-based models matches that of their scikit-learn/XGBoost counterparts. This holds even on high-dimensional datasets, and tree-based models are generally among the best performers on tabular data. Once deployment APIs are integrated, you will be able to put our Decision Trees, Random Forests, and Gradient Boosted Trees into production.
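In practice, these models follow the same workflow as the linear example above. A sketch with a decision tree, assuming the tree classifiers expose the same fit/compile/predict API as the linear models, could look like this:

from concrete.ml.sklearn import DecisionTreeClassifier

# Train in the clear, then compile the trained tree to FHE
clf = DecisionTreeClassifier(max_depth=4)
clf.fit(x_train, y_train)
clf.compile(x_train)

# Inference on encrypted data
y_pred_fhe = clf.predict(x_test, execute_in_fhe=True)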

If you’re interested in going further with this, have a look at this tutorial which uses a decision tree to classify a spam dataset or at this tutorial which uses XGBoost on a diabetes prediction task.

Neural network models

Finally, neural networks are also available in Concrete ML. Since each additional layer increases the number of computations, the effect of precision loss compounds, and these classifiers currently perform poorly in FHE. Later versions of the package will improve these results, notably through quantization-aware training options and the additional precision that future versions of the Concrete Framework will offer.

What about deep learning?

We have also made efforts to support generic, user-provided torch models. For this use case, tutorials are available for a Fully Connected Neural Network and for a Convolutional Neural Network. Let's look at a usage example.

from concrete.ml.torch.compile import compile_torch_model

# Quantize the trained torch model post-training and compile it to FHE
quantized_compiled_module = compile_torch_model(net, X_train, n_bits=3)

# Quantize a clear input sample x, then run it through the FHE circuit
q_input = quantized_compiled_module.quantize_input(x)
quantized_compiled_module.forward_fhe.run(q_input)

Here, the torch network net is assumed to have been trained by the user through any kind of torch training pipeline. The compile_torch_model() function quantizes the network post-training, using the X_train dataset for calibration, and then compiles it to FHE.
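Putting the pieces together, an end-to-end version of this example could look like the following sketch. The tiny network architecture and the random calibration data are hypothetical, chosen only to make the snippet self-contained; note that the output of the FHE execution is still in the quantized domain:

import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Hypothetical tiny network; in practice, net would be trained with a
# regular torch pipeline before compilation
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(2, 3)
        self.fc2 = torch.nn.Linear(3, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet()
X_train = numpy.random.uniform(-1, 1, size=(100, 2)).astype(numpy.float32)

# Post-training quantization and compilation to FHE, with X_train
# used for calibration
quantized_compiled_module = compile_torch_model(net, X_train, n_bits=3)

# Quantize one clear sample and run it through the FHE circuit
q_input = quantized_compiled_module.quantize_input(X_train[:1])
q_output = quantized_compiled_module.forward_fhe.run(q_input)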

While the accuracy of networks quantized post-training degrades rapidly as network complexity increases (currently, 2-3 neurons are supported at most), our efforts for now are focused on feature-completeness. To this end, Concrete ML already supports a wide array of operators in torch networks through an ONNX conversion pipeline. We are also working on making bigger networks perform well under FHE constraints.

Concrete ML offers an appealing solution for the private computation of ML models. Tree-based classifiers are highly performant and are especially well suited to FHE thanks to Zama's Programmable Bootstrapping, and Concrete ML makes them easy to use. We are also working to bring linear models and neural networks to this standard in the near future.

Additional Links

- Release notes

- Github repo

- Documentation

- List of contributors
