Hybrid Large Language Models To Improve On-premise Deployments with Concrete ML

October 30, 2023
Jordan Frery

The Trade-Off: Cloud vs On-Premise

Large Language Models (LLMs) can enable large productivity gains when unleashed on the confidential data that companies store in their knowledge bases. However, since users may leak personal or confidential data, companies and sometimes governments have put policies in place that forbid access to cloud-based LLMs. The alternative is on-premise deployment, where the model runs fully user-side. In this setting, developers of proprietary LLMs want to ensure their model IP is protected, that the usage of their LLM complies with license agreements, and that per-token revenue tracking is streamlined. Weighing the pros and cons of cloud versus on-premise can be a brain teaser, but a new hybrid FHE LLM feature in Concrete ML addresses many concerns of on-premise deployments.

You can find the full code for this use case at https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/hybrid_model

Hybrid Deployment: Best of Both Worlds

FHE can preserve the privacy of the user even when remote servers process their sensitive data. Thus, some computation tasks can be offloaded to the cloud; in the case of on-premise LLM deployment, that means running some LLM layers on an untrusted cloud machine, on encrypted data. It’s a win-win: user privacy and the performance of on-premise deployment are maintained, while sensitive LLM weights are not shared with the user. Users need to query the server to generate each token, which additionally allows usage and license compliance monitoring.

Hybrid Deployment with Concrete ML

Concrete ML provides a HybridFHEModel class that converts any PyTorch model into a hybrid one. Layers that are kept local are executed on-premise, as before, using accelerators such as one or more GPUs. The user can specify which layers are to be moved to the cloud where they will run on encrypted data.

Putting it into Practice: Model conversion

At model development time, the first step is to load a model, for example from the Hugging Face repository.

model_name = "microsoft/phi-1_5"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

Then the model developer will call the Concrete ML hybrid model converter, specifying which layers are to be computed by the server on encrypted data. 

from concrete.ml.torch.hybrid_model import HybridFHEModel

# Fully qualified name of the submodule that will be computed remotely on encrypted data
module_names = "layers.1.mixer.WqKV"
hybrid_model = HybridFHEModel(model, module_names)

It’s now time to turn the PyTorch LLM into a Concrete ML hybrid LLM. A hybrid model contains: 

  • a copy of the original model, with the selected layers removed, to be deployed on-premise
  • FHE circuits that compute the selected layers on encrypted data, on the server. 
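As a quick sanity check, plain PyTorch introspection can show which submodules live under the selected block and how they are represented after conversion (this is not part of the Concrete ML API, just a standard named_modules traversal):

# List the submodules of the selected block and print their classes.
for name, module in model.named_modules():
    if "layers.1.mixer" in name:
        print(name, type(module).__name__)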

To produce the circuits that execute on the server, just call the compile function. 

hybrid_model.compile_model(
    inputs,
    n_bits=8,
)

The arguments to the compile function are: inputs - a set of different prompts used to calibrate the model to FHE requirements, and n_bits - the precision used for quantized weights and activations. Using 8 bits is quite common for LLMs.
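For illustration, here is one plausible way to build inputs, assuming the compile step accepts the same batched token ids that the model's forward pass expects; the prompts below are arbitrary examples:

# Hypothetical calibration prompts; any text representative of real usage works.
calibration_prompts = [
    "FHE makes it possible to compute on encrypted data",
    "The quick brown fox jumps over the lazy dog",
]

# Pad to a common length so the encoded prompts can be stacked into one batch.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(calibration_prompts, return_tensors="pt", padding=True).input_ids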
It is easy to check if the hybrid FHE version of the LLM is correct with respect to the original model. Simply compare predictions between FHE simulation and floating point inference of the full model!

simulated_predictions = hybrid_model(x, fhe="simulate")
original_predictions = hybrid_model(x, fhe="disable")
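Assuming the hybrid model returns the same output object as the underlying Hugging Face model (with a logits field), the deviation can be quantified with a few lines of PyTorch:

import torch

# Maximum absolute deviation between simulated-FHE logits and floating point logits.
max_diff = (simulated_predictions.logits - original_predictions.logits).abs().max().item()
print(f"Maximum logit deviation: {max_diff:.4f}")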

Finally, check the execution time when FHE is enabled:

fhe_predictions = hybrid_model(x, fhe="execute")
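A rough wall-clock measurement can be obtained with Python's standard time module:

import time

# Naive end-to-end timing of one encrypted forward pass.
start = time.time()
fhe_predictions = hybrid_model(x, fhe="execute")
print(f"Encrypted inference took {time.time() - start:.1f}s")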

Hybrid Model Deployment

To use a hybrid model, the client instantiates the HybridFHEModel class as above, but this time loads weights only for the layers that are computed on-premise. The client also needs to configure the remote endpoint that performs the computation in FHE. The model developer zeros out the sensitive weights and sends this sanitized model to the client. A minimal sketch of that sanitization step, using plain PyTorch (the full use case example may handle this differently), could look like this:
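import torch

# Developer side (sketch): zero out the weights of the layer that will run in FHE,
# then save the sanitized model that is shipped to the client.
with torch.no_grad():
    for name, module in model.named_modules():
        if name.endswith("layers.1.mixer.WqKV"):
            for param in module.parameters():
                param.zero_()

torch.save(model, "on_premise_FHE_phi-1_5")

The client then loads the sanitized model and points the hybrid wrapper at the FHE endpoint: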

model_name = "microsoft/phi-1_5"
module_names = "layers.1.mixer.WqKV"
sanitized_model_name = "on_premise_FHE_phi-1_5"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = torch.load(sanitized_model_name)


hybrid_model = HybridFHEModel(
    model,
    module_names,
    server_remote_address="http://0.0.0.0:8000",  # FHE endpoint run by the model developer
    model_name=model_name,
    verbose=False,
)

Next, they can run the LLM to generate tokens with the familiar syntax:

# prompt, device, num_tokens and streamer are assumed to be defined as in a
# standard Hugging Face generation loop.
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device=device)
output_ids = model.generate(
    input_ids, max_new_tokens=num_tokens, use_cache=True, streamer=streamer
)
generated = tokenizer.decode(output_ids[0])
print(generated)

Some things to keep in mind:

  • One or more layers can be converted to FHE. To convert several, simply pass them to the HybridFHEModel class as a list, as sketched below.
  • It might seem tempting to make full attention layers private. However, such layers contain computationally expensive operations. Instead, moving only the projection heads of these layers to FHE provides a good trade-off.
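For example, protecting the same projection in several transformer blocks could look like this; the module paths follow the single-layer example above and depend on the model architecture:

# Hypothetical list of submodules to run in FHE; exact paths depend on the model.
module_names = [
    "layers.1.mixer.WqKV",
    "layers.2.mixer.WqKV",
    "layers.3.mixer.WqKV",
]
hybrid_model = HybridFHEModel(model, module_names)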

Results

You can find the full code for this use case on the Concrete ML GitHub repository.

This implementation was used to benchmark the phi-1.5 model in a hybrid deployment setting. The model is available on Hugging Face, and the experiment was run on two machines with the same hardware configuration: AWS EC2 m6i.metal instances with 64 CPU cores.

In this experiment, the encrypted ciphertext of a token's 2048-dimensional embedding vector has a size of 20MB, and the encrypted output of the projections is 60MB. The time spent transferring data between client and server was about 1 second, and the FHE execution time was around 1.5 seconds.
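Summing the transfer and execution times gives the per-token overhead of the protected layer:

# Rough per-token overhead from the measurements above.
transfer_s = 1.0       # ~80MB of ciphertexts per token (20MB up, 60MB down)
fhe_execution_s = 1.5  # encrypted projection computed on the server
print(f"~{transfer_s + fhe_execution_s:.1f}s per token")  # prints ~2.5s per token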

This scenario protects a single attention layer, which contains about 1% of the total weights. Extra layers could easily be protected by specifying them at compilation time.

A generation latency of 2.5 seconds per token was achieved. For comparison, a fully on-premise inference needs around 50ms per token on the same machine. The slow-down introduced by the FHE computation and communication will be improved upon in later Concrete ML versions: ciphertext seeding will decrease the size of the ciphertexts by a factor of roughly 1,000, and using GPUs for the server-side computation should reduce its latency many times over.

The hybrid LLM model discussed in this blog addresses many of the issues of pure on-premise deployment: it adds IP protection of model weights and easy usage monitoring.
