360° Privacy for Machine Learning

December 21, 2022
The Zama Team

How homomorphic encryption enables both the user data and the model parameters to remain private

From various security discussions with users and people working in the machine learning (ML) space, we have learned that user data privacy at inference time is indeed a subject of interest, but it is far from being the only one. In addition to the usual concerns about secure training, we have often heard that protecting the model is critical for companies to differentiate themselves from the competition. As we know, datasets are very valuable and very costly to build and sanitize, so companies invest time and money to keep them secret. It may be sad to say, but several companies value the immediate interest of protecting their intellectual property more than protecting their users’ assets.

Fortunately, fully homomorphic encryption (FHE) can be used to protect both the model and the user’s privacy. The principle of FHE is to use the homomorphic properties of the encryption scheme to perform computations directly over encrypted data, without any secret-key material. FHE cryptographic schemes typically offer homomorphism for additions and multiplications. Then, depending on the scheme, non-linear functions (typically, the activations used in ML) are either approximated with Chebyshev polynomials (e.g., as in BFV, BGV, or CKKS) or tabulated if the scheme offers a table-lookup operation over ciphertexts (e.g., as in TFHE). With additions, multiplications, and management of non-linear functions, virtually any ML model can be turned into an FHE model, where inputs, intermediate values, and outputs are encrypted under a key that only the legitimate user possesses.
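As a minimal illustration of this principle, here is a sketch of a toy neuron evaluated homomorphically with Zama’s concrete-python package; the API usage is assumed from its documented interface and may differ between versions, and the constants W and B stand in for clear model parameters.

```python
# Minimal sketch, assuming the concrete-python API: a toy neuron y = relu(W*x + B)
# evaluated over an encrypted input x. Addition and multiplication by clear
# constants use the scheme's homomorphic properties; the ReLU non-linearity is
# handled by a table lookup over ciphertexts.
import numpy as np
from concrete import fhe

W, B = 3, 2  # clear model constants (the weights are not encrypted)

@fhe.compiler({"x": "encrypted"})
def neuron(x):
    return np.maximum(W * x + B, 0)

inputset = range(-8, 8)             # representative inputs used to calibrate the circuit
circuit = neuron.compile(inputset)

assert circuit.encrypt_run_decrypt(5) == max(W * 5 + B, 0)
```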

Let’s start by explaining why we believe encrypting the model is far more costly than encrypting user data only. Then, we will explain how FHE can be used to protect user data, and how installing the FHE model in a trusted environment (i.e., trusted by the model owner, not by the user) serves the second purpose, which is to protect the intellectual property of the company. Finally, we will take the concrete example of a Hugging Face Space that we’ve created, which performs sentiment analysis on encrypted messages; by having the model stored on a trusted server, the goals of protecting both the user’s privacy and the company’s interests are fulfilled at the same time.

Why FHE-encrypting the model is very expensive

Initially, it seems possible to encrypt both the inputs of the model and the model itself. Then, all constants in the model (typically, the weights of neurons or the thresholds of the decision nodes used in trees) would be replaced by encrypted values. However, such a choice would have major implications.

First, we would need to know under which key the constants are encrypted. They would be encrypted under a model-owner key, and then (i) the FHE model would be multi-key, since the user also has her own key, and (ii) we would have to make sure that the output of the model can be decrypted with the user’s key alone.

Second, having encrypted weights multiplied by encrypted activations or inputs would make the execution much slower, since it would imply many multiplications between ciphertexts. Multiplications between two ciphertexts are orders of magnitude slower than multiplications between a clear constant and a ciphertext. Note that, for a multi-layer neural network, this point is also relevant when the inputs are not encrypted: the activation outputs would already be encrypted after the first layer, as the result of multiplying the (clear) inputs by the (encrypted) weights.
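The sketch below, again assuming the concrete-python API, contrasts the two situations on a single product: with a clear weight, the circuit only contains a constant-times-ciphertext multiplication, whereas an encrypted weight forces a ciphertext-times-ciphertext multiplication.

```python
# Illustrative sketch, assuming the concrete-python API: the same product computed
# with a clear versus an encrypted weight. Both circuits compute 'weight * x', but
# the second one requires a ciphertext-ciphertext multiplication, which is far
# more costly to execute.
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def clear_weight(x):
    return 3 * x                      # clear constant times encrypted value

@fhe.compiler({"x": "encrypted", "w": "encrypted"})
def encrypted_weight(x, w):
    return w * x                      # encrypted times encrypted

fast_circuit = clear_weight.compile(range(16))
slow_circuit = encrypted_weight.compile([(x, w) for x in range(16) for w in range(4)])
```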

Third, black-box attacks against machine learning models are a real concern. Even if the previous points were solved, the user could make any number of queries to the black-box model to extract knowledge from it. Encrypting the weights while allowing a user to decrypt the result of a neural network, without controlling the number of queries, is thus not sufficient to ensure the privacy of the model owner.
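To make the threat concrete, here is a toy illustration of such an extraction attack, independent of FHE; remote_predict is a hypothetical stand-in for the decrypted answers a user would collect from the service, and a surrogate model is fitted on nothing but queries and responses.

```python
# Toy black-box extraction attack: with unlimited queries, an attacker can fit a
# surrogate model that mimics the hidden one, without ever seeing its weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def remote_predict(x):
    # Hypothetical black box: the secret weights are never revealed directly
    secret_w = np.array([1.5, -2.0, 0.7])
    return (x @ secret_w > 0).astype(int)

queries = rng.normal(size=(10_000, 3))   # the attacker chooses its own inputs...
answers = remote_predict(queries)        # ...and only observes the outputs

surrogate = LogisticRegression().fit(queries, answers)
# The surrogate now closely approximates the secret decision boundary, which is
# why limiting or monitoring the number of queries matters.
```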

FHE-friendly models with clear weights are already significantly slower than clear models, so we believe that, currently, it is better to keep the weights in the clear for a user-friendly experience. Furthermore, this does not come at the cost of the user’s privacy at all, since the latter does not depend on whether the weights are encrypted. Let’s explain in the next section how we also support the company’s interest: protecting its model.

On-premise FHE execution

We believe (without claiming it is particularly new, merely that it seems unrealized by some stakeholders) that FHE models can simply be hosted on servers which are trusted by the company. Here, we insist that user privacy does not rely on this trust at all. The trust only matters to the company, which wants to know that its model and the corresponding intellectual property are kept away from competitors’ eyes.

This setting is very appealing for the following reasons:

  • the user data are protected (by FHE encryption), since they are never in the clear except on the user’s device; the user data are never at risk, as long as the cryptographic scheme remains secure and the user’s key remains on their device;
  • the company’s model is protected (i.e., never outside of the trusted environment) from competition by the system setup, which is arguably weaker than security implied by cryptography, but can still give good results since it is in the company’s best interest to ensure the protection of its model;
  • the execution is relatively fast (i.e., much faster than if the model were encrypted);
  • the model can be executed on adequate hardware (which is much less obvious when multiple users with different setups want to run the encrypted model on their own hardware).

It’s also worth noting that the model can be trained securely with other techniques, like Federated Learning or Differential Privacy. In other words, fully homomorphic encryption is completely orthogonal to secure training, and both of them can be used at the same time, without interference or speed degradation.

A production-like example on Hugging Face

Following the guidelines above, we implemented a complete production-like example on Hugging Face Spaces. Our implementation uses Zama Concrete-ML, an open-source Python package for doing machine learning over encrypted data, without needing to know how FHE works under the hood.

In the online demo, people can test our sentiment analyzer, which works directly over (short) encrypted messages. A detailed Jupyter notebook is also available.

A transformer (namely, BERT) is chosen by the company to pre-process a dataset, which remains secret. An FHE-friendly model is then trained over this dataset using Zama Concrete-ML.
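A minimal sketch of this training step is shown below. The exact transformer checkpoint, pooling strategy, classifier, and hyper-parameters used in the demo may differ; they are assumptions here, and train_texts / train_labels are placeholders for the company’s private dataset.

```python
# Sketch of the training step (assumed APIs and placeholder data): a transformer
# encodes each message into a hidden representation, then an FHE-friendly,
# quantized classifier from Concrete-ML is trained and compiled on those features.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from concrete.ml.sklearn import XGBClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def featurize(texts):
    # Mean-pool the last hidden state into one fixed-size vector per message
    tokens = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**tokens).last_hidden_state
    return hidden.mean(dim=1).numpy()

train_texts = ["great product", "terrible service"]   # placeholder for the private dataset
train_labels = np.array([1, 0])

X_train = featurize(train_texts)
model = XGBClassifier(n_bits=3)     # low-bit quantization keeps the model FHE-friendly
model.fit(X_train, train_labels)
model.compile(X_train)              # produce the FHE circuit that the server will run
```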

During a one-time setup, the user needs to generate a key pair: the public part is sent to the server for the FHE execution, while the private part is kept secret locally on the device. On the Hugging Face Space, both the client and the server side run on the same machine, but our code would work just as well if executed on two separate machines.
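A sketch of this setup with Concrete-ML’s deployment helpers follows; the class and method names are taken from the Concrete-ML deployment API as we understand it and may differ between versions, and the "deployment/" and "keys/" paths are placeholders.

```python
# Client-side, one-time setup (assumed Concrete-ML deployment API; paths are placeholders).
from concrete.ml.deployment import FHEModelClient

client = FHEModelClient("deployment/", key_dir="keys/")
client.generate_private_and_evaluation_keys()              # private key never leaves the device
evaluation_keys = client.get_serialized_evaluation_keys()  # public material, sent to the server
```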

Once the setup is complete, the client writes a message, which is then encoded using the transformer’s hidden representation. Zama Concrete-ML is used to quantize the data and encrypt it. The encrypted (serialized) data is then sent to the server. There, the model is executed, killing two birds with one stone: first, the data is encrypted, so user privacy is ensured; second, the model remains on the trusted server, so the intellectual property is retained. Finally, the encrypted result is sent back to the client, who can use the private key to recover the clear value.
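Continuing with the same assumed API and the objects from the previous sketches, the full round trip could look like this; it runs in a single process here, whereas in production the serialized payloads would travel over the network between the client and the trusted server.

```python
# End-to-end inference sketch (assumed Concrete-ML deployment API). `featurize`,
# `client`, and `evaluation_keys` come from the previous sketches.
from concrete.ml.deployment import FHEModelServer

# Client side: encode the message, then quantize, encrypt, and serialize it
features = featurize(["I really enjoyed this demo!"])
encrypted_input = client.quantize_encrypt_serialize(features)

# Server side: run the FHE model on the encrypted payload, never seeing clear data
server = FHEModelServer("deployment/")
server.load()
encrypted_prediction = server.run(encrypted_input, evaluation_keys)

# Client side: only the holder of the private key can decrypt the result
prediction = client.deserialize_decrypt_dequantize(encrypted_prediction)
```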

In terms of performance on Hugging Face servers, we get:

  • a key generation of about 8 seconds
  • an almost instantaneous transformer encoding and encryption step
  • an FHE execution of about 12 seconds
  • an almost instantaneous decryption

Conclusion

Fully homomorphic encryption is a powerful tool that can be used to protect user privacy. We have shown a setting that keeps the model’s intellectual property secret from users, while preserving their complete privacy at the same time. Furthermore, we gave a working example of a sentiment analysis model, made available through an on-premise API on a trusted server, which executes predictions on encrypted data. This allows the company to keep its model secret and to manage the number of queries in order to counter black-box attacks, while still allowing users to have computations performed on their data in a high-standard, privacy-preserving manner.
