Quantization of Neural Networks for Fully Homomorphic Encryption

January 26, 2022
Jordan Frery

🚨 Attention: the code in this blog post is deprecated.
Check out the latest version of Concrete ML here.

Machine Learning and the Need for Privacy

In machine learning, a neural network is a circuit of artificial neurons or nodes. Neural networks are the standard approach for producing high-quality models over unstructured data (images, sound, and text). When it comes to solving tasks like image segmentation, raw audio generation, or machine translation, machine learning models based on neural networks provide by far the best performance. The rise of neural networks can be summarized in two main points:

  1. Hardware infrastructure (GPUs and distributed training, enabling fast inference even for huge models)
  2. Model complexity (essentially, the number of parameters)

With ever more complexity and overhead, it’s increasingly difficult for machine learning specialists to build, maintain, and serve these models, and as a result a few corporations own most of the artificial intelligence (AI) space. Most ML models running in the cloud are therefore only accessible via an API. Trust and privacy are becoming important topics: end users have no visibility into what happens to their data, and their privacy is at stake.

In this article, we’re going to look at how Fully Homomorphic Encryption offers a unique and unmatched solution to the problem of trust and privacy in data and AI. We’ll focus on quantization, which maps values from a continuous, infinite domain to a smaller set of discrete, finite values for efficiency reasons. Quantization is also a way to make existing models more FHE-friendly, and we’ll explain what that means below.

Fully Homomorphic Encryption

Fully Homomorphic Encryption (FHE) allows you to compute on encrypted data. The computation can be done with a set of publicly available keys without endangering security. The encrypted result is returned to the legitimate owner, who is the only one able to decrypt it with the private key. Mathematically, it is computationally infeasible for the untrusted server to decrypt anything, since it does not ‘know’ the private key.

Here at Zama, we use FHE to turn neural networks into equivalent computations that work end to end over encrypted data. The goal is to make this as efficient as possible while limiting any drop in accuracy. Programmable Bootstrapping (PBS) is the technique that makes this possible: it lets us evaluate nonlinear functions over encrypted data, which is critical in machine learning since activation functions are essentially nonlinear. In the rest of this post, we’ll treat a PBS as a simple table lookup over encrypted data.
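To make this picture concrete, here is a minimal sketch in plain NumPy (not Concrete’s API) of how a nonlinear activation over a small integer domain can be precomputed as a table and then applied with a single lookup per value. The 7-bit precision, the sigmoid, and the scaling constants are illustrative choices.

```python
import numpy as np

P = 7  # bits of precision: a table lookup over 2**7 = 128 possible values

# Precompute a table for a quantized sigmoid over the 128 possible inputs.
inputs = np.arange(2**P)
# Map the integer domain [0, 127] back to a float range, apply the
# nonlinearity, then re-quantize the result to [0, 127].
float_inputs = (inputs - 64) / 16.0
table = np.round(1 / (1 + np.exp(-float_inputs)) * (2**P - 1)).astype(np.int64)

def quantized_sigmoid(x_q: np.ndarray) -> np.ndarray:
    """Apply the nonlinearity with a single table lookup per value."""
    return table[x_q]

print(quantized_sigmoid(np.array([0, 64, 127])))  # approximately [2, 64, 125]
```

A PBS plays the role of `table[x_q]`, except that the indexing happens homomorphically on encrypted values.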

Quantization and FHE-friendly Neural Networks

FHE supports a wide range of operations that can be applied effectively to machine learning models. In a previous post, we showed you a practical example for a linear model; neural networks work in a similar way.

Zama initially used an approximate approach to convert neural networks into their FHE counterparts. The problem with that approach was that handling the FHE parameters for different neural network architectures is not trivial, so we shifted toward an exact paradigm where FHE provides exactly the same results as computation in the clear.

The limitations of this new paradigm are twofold:

  • Only multiplication by integers is possible.
  • The precision of the PBS is limited (i.e. the table lookups we can apply have a maximum size, typically 2⁷ = 128 different values; the higher the precision needed, the slower the inference).

We can tackle these limitations with neural network quantization. Quantization is generally used to improve the efficiency or compression of neural networks, but it is also well adapted to circumventing some of the limits of FHE:

  • Quantization replaces floating-point multiplications with integer multiplications, which are doable in FHE.
  • Quantization can even reduce values to small integers, which works around the limited precision of the PBS.

Quantization in Neural Networks

Quantization of neural networks is a subdomain of neural network compression. Such techniques are mainly used to improve neural network inference time on embedded hardware or in browser-like environments. The goal is to replace costly operations on floats with much more efficient, reasonably accurate approximations on small integers. Quantization is currently implemented in some of the best-known deep learning libraries. A common approach is affine quantization over p bits of precision. In a nutshell, when you want to go below 8 bits of precision, Quantization Aware Training is used to keep the accuracy of the quantized network close to that of its floating-point counterpart.

Here’s a brief overview of how quantization works. Let X be a set of weights, inputs, or activation outputs we want to quantize. The example below uses unsigned quantization (i.e. the quantization outputs integers that are greater than or equal to 0).
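Here is a minimal NumPy sketch of that scheme (the quantize and dequantize helper names are ours for illustration, not a library API): the scale and zero point are derived from the observed range of X, and quantized values are rounded and clipped to [0, 2^p - 1].

```python
import numpy as np

def quantize(X: np.ndarray, p: int = 8):
    """Unsigned affine quantization of X over p bits.

    scale      s = (max(X) - min(X)) / (2**p - 1)
    zero_point z = round(-min(X) / s)
    X_q          = clip(round(X / s) + z, 0, 2**p - 1)
    """
    scale = (X.max() - X.min()) / (2**p - 1)
    zero_point = int(np.round(-X.min() / scale))
    X_q = np.clip(np.round(X / scale) + zero_point, 0, 2**p - 1).astype(np.int64)
    return X_q, scale, zero_point

def dequantize(X_q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Approximate reconstruction of the original floats."""
    return scale * (X_q.astype(np.float64) - zero_point)

X = np.random.randn(4, 4)
X_q, s, z = quantize(X, p=8)
print(np.abs(X - dequantize(X_q, s, z)).max())  # small quantization error
```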

With the above equations, you can transform the entire neural network from floating-point values to integers over p bits of precision, and adapt the matrix multiplication done in a fully connected layer or in a convolutional neural network (CNN) by replacing real values with their quantized counterparts.
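As a hedged illustration of that adaptation (names and shapes below are ours, not the framework’s API), here is how a single fully connected layer can be computed: with x ≈ s_x (x_q − z_x) and W ≈ s_w (W_q − z_w), the product x Wᵀ becomes an integer accumulation followed by a rescaling by s_x s_w.

```python
import numpy as np

# Sketch of a quantized fully connected layer. The integer matrix product is
# the part that maps naturally to FHE; the final rescaling brings the scales
# and zero points back in (in a real circuit it is folded into a PBS).

def quantize(X, p=8):
    # Same affine helper as in the previous sketch, repeated for completeness.
    scale = (X.max() - X.min()) / (2**p - 1)
    zero_point = int(np.round(-X.min() / scale))
    X_q = np.clip(np.round(X / scale) + zero_point, 0, 2**p - 1).astype(np.int64)
    return X_q, scale, zero_point

def quantized_linear(x, W, b, p=8):
    x_q, s_x, z_x = quantize(x, p)  # x ≈ s_x * (x_q - z_x)
    W_q, s_w, z_w = quantize(W, p)  # W ≈ s_w * (W_q - z_w)
    # Integer-only core computation (the part done under encryption).
    acc = (x_q - z_x) @ (W_q - z_w).T
    # Rescale to floats and add the bias.
    return s_x * s_w * acc + b

rng = np.random.default_rng(0)
x, W, b = rng.normal(size=(1, 16)), rng.normal(size=(8, 16)), rng.normal(size=8)
print(np.abs(quantized_linear(x, W, b) - (x @ W.T + b)).max())  # small error
```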

FHE-friendly Neural Network

Moving forward, we’ll use PyTorch and NumPy for our example. We also make use of Concrete Numpy and some preliminary versions of the tools that allow us to directly compile torch models. Note that the torch compilation functions are at a very early stage of development for now, while the numpy compilation functions are more robust. The next version of our framework will contain more stable torch compilation functions. The compilation also handles quantization transparently for the user.

Now let’s start coding by defining a standard PyTorch model and using concrete.torch to convert it to a full numpy model.
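Here is a minimal sketch, in standard PyTorch, of the kind of fully connected MNIST classifier used. The architecture and layer sizes are assumptions, and the torch-to-numpy conversion itself is left to the (now deprecated) concrete.torch tooling mentioned above rather than shown as a specific call.

```python
import torch
import torch.nn as nn

# Illustrative fully connected MNIST classifier (layer sizes are assumptions).
class FullyConnectedNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 10)
        self.act = nn.Sigmoid()

    def forward(self, x):
        x = x.flatten(1)           # (batch, 784)
        x = self.act(self.fc1(x))
        x = self.act(self.fc2(x))
        return self.fc3(x)

model = FullyConnectedNet()
# After standard training on MNIST, the (deprecated) concrete.torch helpers
# convert this module into an equivalent pure-numpy model.
```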

Once converted, we have a fully connected network in numpy that reaches 97% accuracy on the MNIST test set.

Then, in the next snippet, we can start playing with the quantization provided in the framework to transform our model to integers. We select a precision of 6 bits for our weights, activations, and layer outputs.
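As an illustration of what this step does (the helper and layer names below are stand-ins, not the framework’s quantization API), post-training quantization maps every tensor to 6-bit integers while recording its scale and zero point; activations and layer outputs are handled the same way, using ranges observed on a calibration set.

```python
import numpy as np

P = 6  # bits for weights, activations and layer outputs

def quantize_tensor(X, p=P):
    """Affine quantization of one tensor to [0, 2**p - 1] (illustrative helper)."""
    scale = (X.max() - X.min()) / (2**p - 1)
    zero_point = int(np.round(-X.min() / scale))
    X_q = np.clip(np.round(X / scale) + zero_point, 0, 2**p - 1).astype(np.int64)
    return {"q": X_q, "scale": scale, "zero_point": zero_point}

# Random stand-ins for the trained numpy weights of the three layers.
rng = np.random.default_rng(0)
numpy_weights = {
    "fc1.weight": rng.normal(size=(128, 784)),
    "fc2.weight": rng.normal(size=(128, 128)),
    "fc3.weight": rng.normal(size=(10, 128)),
}
quantized_weights = {name: quantize_tensor(W) for name, W in numpy_weights.items()}
print({name: qw["q"].max() for name, qw in quantized_weights.items()})  # all <= 63
```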


Post-training quantization with 6 bits of precision lets the model keep essentially its full performance (only a 0.07% drop in accuracy, which is almost negligible). As expected, all layers, activations, and inputs have been converted to integers, which removes one of the constraints of FHE: the restriction to integer multiplications is no longer an issue. The core computation (the matrix multiplication) is done entirely in integers, and the rest (the floating-point scales and the zero-point values) can be folded into a table lookup along with the activation function (basically done with a PBS).
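Here is a minimal sketch of that folding: the dequantization scales and the activation together form a single univariate function of the integer accumulator, so they can be tabulated once. The scales, accumulator range, and ReLU choice below are illustrative assumptions, not values from the actual model; in practice the accumulator must also fit the PBS precision, which is exactly the challenge discussed just below.

```python
import numpy as np

# Fold the float rescaling (scales, zero points) and the activation into one
# lookup table per layer; a PBS evaluates this table homomorphically.
s_x, s_w = 0.05, 0.02            # illustrative input and weight scales
acc_min, acc_max = -2000, 2000   # illustrative integer accumulator range
out_bits = 6

def build_table(activation):
    accs = np.arange(acc_min, acc_max + 1)
    outs = activation(s_x * s_w * accs)        # dequantize, then activate
    scale = (outs.max() - outs.min()) / (2**out_bits - 1)
    return np.round((outs - outs.min()) / scale).astype(np.int64)

table = build_table(lambda v: np.maximum(v, 0.0))  # ReLU folded with rescaling
acc = np.array([-500, 0, 800])                     # integer accumulator values
print(table[acc - acc_min])                        # e.g. [0 0 25]
```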

Finally, in classical quantization, the inputs, first layer, and last layer are often left in floating point to preserve accuracy. For FHE, we need integers even in the first and last layers, so we apply the affine quantization process to the input and to the first and last layers as well as to the rest of the network. Note that in some tasks, such as NLP, the input is tokenized and is thus already a good representation for FHE (a one-hot vector combined with the learnable embedding layer).

Another challenge is that the major ML frameworks commonly use a 32-bit accumulator for the core operation in a neural network (the multiply-add). Due to the low precision you can expect from a PBS (currently 7 bits in Concrete Numpy), this is a much harder problem for the moment. It will soon be solved by upcoming breakthroughs in FHE!
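A quick back-of-the-envelope calculation shows why: a product of two p-bit values needs up to 2p bits, and summing N such products adds roughly log2(N) more. For the 6-bit MNIST network sketched above, with 784 inputs to the first layer, that is already far beyond what a 7-bit PBS can index.

```python
import math

# Accumulator growth for one output neuron: each p-bit x p-bit product needs
# up to 2p bits, and summing n of them adds roughly log2(n) more bits.
p = 6          # bits per quantized input / weight
n = 784        # number of inputs for the first MNIST layer
accumulator_bits = 2 * p + math.ceil(math.log2(n))
print(accumulator_bits)  # 22 bits, well above the ~7 bits a PBS handles today
```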

Stay tuned to our blog to see how this work develops!

Get the latest news about homomorphic encryption and what we do at Zama: subscribe to our newsletter.

We are hiring! Join Zama and help us safeguard privacy by making the internet encrypted end-to-end. All the info here: jobs.zama.ai

We’re open source — follow Zama on Github here: github.com/zama-ai
