
Running Friendli Container

Introduction

Friendli Container enables you to effortlessly deploy your generative AI model on your own machine. This tutorial guides you through the process of running a Friendli Container. The current version supports most major generative language models.

Prerequisites

  • Before you begin, make sure you have signed up for Friendli Suite. You can use Friendli Containers free of charge for four weeks.
  • Friendli Container currently only supports NVIDIA GPUs, so please prepare proper GPUs and a compatible driver by referring to our required CUDA compatibility guide.
  • Prepare a Personal Access Token following this guide.
  • Prepare a Friendli Container Secret following this guide.
  • Install Hugging Face CLI with pip install -U "huggingface_hub[cli]".

Preparing Personal Access Token

A Personal Access Token (PAT) is the user credential for logging in to our container registry.

  1. Sign in to Friendli Suite.
  2. Go to User Settings > Tokens and click 'Create new token'.
  3. Save your created token value.

Preparing Container Secret

The container secret is a code used to activate Friendli Container. Pass it as an environment variable when running the container image.

  1. Sign in to Friendli Suite.
  2. Go to Container > Container Secrets and click 'Create secret'.
  3. Save your created secret value.
info

🔑 Secret Rotation

You can rotate the container secret for security reasons. When you rotate it, a new secret is created and the previous secret is automatically revoked after 30 minutes.

Pulling Friendli Container Image

  1. Log in to the Docker client using the personal access token created as outlined in Preparing Personal Access Token.

    export FRIENDLI_PAT="YOUR PERSONAL ACCESS TOKEN"
    docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT
  2. Pull the image:

    docker pull registry.friendli.ai/[your_repository]:[your_tag]
info

💰 4-Week Free Trial

During the 4-week free trial period, you can use registry.friendli.ai/trial image only, which can be pulled with docker pull registry.friendli.ai/trial.

Running Friendli Container with Hugging Face Models

If your model is in the safetensors format, which is compatible with Hugging Face transformers, you can serve the model directly with Friendli Container.

The current version of Friendli Container supports direct loading of safetensors checkpoints for the following models (and corresponding Hugging Face transformers classes):

  • GPT (GPT2LMHeadModel)
  • GPT-J (GPTJForCausalLM)
  • MPT (MPTForCausalLM)
  • OPT (OPTForCausalLM)
  • BLOOM (BloomForCausalLM)
  • GPT-NeoX (GPTNeoXForCausalLM)
  • Llama (LlamaForCausalLM)
  • Falcon (FalconForCausalLM)
  • Mistral (MistralForCausalLM)
  • Mixtral (MixtralForCausalLM)
  • Qwen2 (Qwen2ForCausalLM)
  • Gemma (GemmaForCausalLM)
  • Starcoder2 (Starcoder2ForCausalLM)

If your model does not belong to one of the above model types, please ask for support by sending an email to Support.

note

If your model is in the above list but has a pickle format, you can follow this guide to convert a pickle format checkpoint to a safetensors format checkpoint.

info

When Friendli Engine loads a model, the following files are required along with the safetensors files:

  • config.json
  • tokenizer.json
  • tokenizer_config.json (Optional)
  • model.safetensors.index.json (Required only if the checkpoint is sharded)

If you use huggingface-cli download, the required files above will be downloaded.
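Before launching the container, it can help to sanity-check that the download actually produced the files listed above. A minimal sketch, assuming `MODEL_DIR` is set to the local download path used in the commands below:

```shell
# Check that the checkpoint directory contains the files the engine needs.
for f in config.json tokenizer.json; do
  if [ ! -f "$MODEL_DIR/$f" ]; then
    echo "missing required file: $f" >&2
  fi
done
# At least one safetensors weight file must be present.
ls "$MODEL_DIR"/*.safetensors >/dev/null 2>&1 || echo "no .safetensors files found" >&2
```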

Here are the instructions to run Friendli Container to serve a Hugging Face model:

# Fill the values of following variables.
export HF_MODEL_NAME="" # Hugging Face model name (e.g., "meta-llama/Llama-2-7b-chat-hf")
export MODEL_DIR="" # Path to download model files.
export FRIENDLI_CONTAINER_SECRET="" # Friendli container secret
export FRIENDLI_CONTAINER_IMAGE="" # Friendli container image (e.g., "registry.friendli.ai/trial")
export GPU_ENUMERATION="" # GPUs (e.g., '"device=0,1"')

huggingface-cli download $HF_MODEL_NAME \
--local-dir $MODEL_DIR \
--local-dir-use-symlinks False

docker run \
--gpus $GPU_ENUMERATION --network=host -v $MODEL_DIR:/model \
-e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
$FRIENDLI_CONTAINER_IMAGE \
--ckpt-path /model \
--ckpt-type hf_safetensors \
[LAUNCH_OPTIONS]

The [LAUNCH_OPTIONS] should be replaced with Launch Options for Friendli Container.

By running the above command, you will have a running Docker container that exports an HTTP endpoint for handling inference requests.

note

You can customize the docker run command if you are familiar with Docker. For example, you can connect to the container through a virtual network interface instead of specifying --network=host, which grants the container access to all network interfaces on the host.
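For instance, instead of `--network=host` you can publish only the web server port. A sketch, with variables as in the command above; note that the published container port must match the value passed to `--web-server-port`:

```shell
docker run \
  --gpus '"device=0"' -p 8000:8000 -v $MODEL_DIR:/model \
  -e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
  $FRIENDLI_CONTAINER_IMAGE \
  --web-server-port 8000 \
  --ckpt-path /model \
  --ckpt-type hf_safetensors
```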

Example: Llama 2 7B Chat

This is an example of running the Llama-2-7b-chat model on a single GPU.

export HF_MODEL_NAME="meta-llama/Llama-2-7b-chat-hf"
export MODEL_DIR=$PWD/meta-llama--Llama-2-7b-chat-hf
export FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET"
export FRIENDLI_CONTAINER_IMAGE="registry.friendli.ai/trial"
export GPU_ENUMERATION='"device=0"'

huggingface-cli download $HF_MODEL_NAME \
--local-dir $MODEL_DIR \
--local-dir-use-symlinks False

docker run \
--gpus $GPU_ENUMERATION --network=host -v $MODEL_DIR:/model \
-e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
$FRIENDLI_CONTAINER_IMAGE \
--web-server-port 6000 \
--metrics-port 7912 \
--ckpt-path /model \
--ckpt-type hf_safetensors

Multi-GPU Serving

Friendli Container supports tensor parallelism and pipeline parallelism for multi-GPU inference.

Tensor Parallelism

Tensor parallelism is employed when serving large models that exceed the memory capacity of a single GPU, by distributing parts of the model's weights across multiple GPUs. To leverage tensor parallelism with the Friendli Container:

  1. Specify multiple GPUs for $GPU_ENUMERATION (e.g., '"device=0,1,2,3"').
  2. Use --num-devices (or -d) option to specify the tensor parallelism degree (e.g., --num-devices 4).
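The two steps above combine into a launch command like the following sketch (variables as in the earlier examples, serving across GPUs 0-3 with 4-way tensor parallelism):

```shell
docker run \
  --gpus '"device=0,1,2,3"' --network=host -v $MODEL_DIR:/model \
  -e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
  $FRIENDLI_CONTAINER_IMAGE \
  --ckpt-path /model \
  --ckpt-type hf_safetensors \
  --num-devices 4
```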

Pipeline Parallelism

Pipeline parallelism splits a model into multiple segments that are processed across different GPUs, enabling the deployment of larger models that would not otherwise fit on a single GPU. To exploit pipeline parallelism with the Friendli Container:

  1. Specify multiple GPUs for $GPU_ENUMERATION (e.g., '"device=0,1,2,3"').
  2. Use --num-workers (or -n) option to specify the pipeline parallelism degree (e.g., --num-workers 4).
info

🆚 Choosing between Tensor Parallelism and Pipeline Parallelism

When deploying models with the Friendli Container, you have the flexibility to combine tensor parallelism and pipeline parallelism. We recommend exploring a balance between the two, based on their distinct characteristics. While tensor parallelism involves "expensive" all-reduce operations to aggregate partial results across all devices, pipeline parallelism relies on "cheaper" peer-to-peer communication. Thus, in limited network setups, such as PCIe-based systems, pipeline parallelism is preferable. Conversely, in rich network setups like NVLink, tensor parallelism is recommended due to its superior parallel computation efficiency.
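As a sketch of such a combination, the following launches a four-GPU deployment with 2-way tensor parallelism inside each of 2 pipeline stages (variables as in the earlier examples):

```shell
docker run \
  --gpus '"device=0,1,2,3"' --network=host -v $MODEL_DIR:/model \
  -e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
  $FRIENDLI_CONTAINER_IMAGE \
  --ckpt-path /model \
  --ckpt-type hf_safetensors \
  --num-devices 2 \
  --num-workers 2
```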

Advanced: Serving AWQ Models

Running quantized models requires an additional step to search for an execution policy. See Serving AWQ Models to learn how to create an inference endpoint for an AWQ model.

Advanced: Serving MoE Models

Running MoE (Mixture of Experts) models requires an additional step to search for an execution policy. See Serving MoE Models to learn how to create an inference endpoint for a MoE model.

Sending Inference Requests

We can now send inference requests to the running Friendli Container. For information on all parameters that can be used in an inference request, please refer to this document.

Examples

curl -X POST http://0.0.0.0:6000/v1/completions \
-H "Content-Type: application/json" \
-d '{"prompt": "Python is a popular", "max_tokens": 30, "stream": true}'
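With `"stream": true`, the response arrives incrementally. Below is a minimal Python sketch for consuming such a stream, assuming an OpenAI-style server-sent-events format (`data: {json}` lines ending with `data: [DONE]`); the exact event schema may differ, so verify it against the API reference:

```python
import json

def parse_stream(lines):
    """Yield text fragments from an SSE-style completion stream.

    Assumes each event line looks like 'data: {...}' and the stream
    ends with 'data: [DONE]' (OpenAI-style; verify against the API docs).
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        yield event["choices"][0]["text"]

# Hypothetical usage with the requests library against the endpoint above:
# import requests
# resp = requests.post(
#     "http://0.0.0.0:6000/v1/completions",
#     json={"prompt": "Python is a popular", "max_tokens": 30, "stream": True},
#     stream=True,
# )
# print("".join(parse_stream(resp.iter_lines(decode_unicode=True))))
```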

Options for Running Friendli Container

General Options

| Options | Type | Summary | Default | Required |
|---|---|---|---|---|
| `--version` | - | Print Friendli Container version. | - | |
| `--help` | - | Print Friendli Container help message. | - | |

Launch Options

| Options | Type | Summary | Default | Required |
|---|---|---|---|---|
| `--web-server-port` | INT | Web server port. | - | |
| `--metrics-port` | INT | Prometheus metrics export port. | 8281 | |
| `--ckpt-path` | TEXT | Absolute path of the model checkpoint. If not specified, uninitialized (garbage) values are used for model parameters. | - | |
| `--ckpt-type` | TEXT | Checkpoint file type. Choose one of {hdf5\|safetensors\|hf_safetensors}. If not specified, the type is guessed from the checkpoint filename extension, falling back to HDF5. | hdf5 | |
| `--tokenizer-file-path` | TEXT | Absolute path of the tokenizer file. Not needed when tokenizer.json is located under the path specified at `--ckpt-path`. | - | |
| `--tokenizer-add-special-tokens` | BOOLEAN | Whether to add special tokens in tokenization. Equivalent to Hugging Face Tokenizer's `add_special_tokens` argument. | false | |
| `--tokenizer-skip-special-tokens` | BOOLEAN | Whether to remove special tokens in detokenization. Equivalent to Hugging Face Tokenizer's `skip_special_tokens` argument. | true | |
| `--dtype` | CHOICE: [bf16, fp16, fp32] | Checkpoint data type. Must be equal to the data type of the model checkpoint being used. Not needed when config.json is located under the path specified at `--ckpt-path`. | fp16 | |
| `--bad-stop-file-path` | TEXT | JSON file path that contains stop sequences or bad words/tokens. | - | |
| `--num-request-threads` | INT | Thread pool size for handling HTTP requests. | 4 | |
| `--timeout-microseconds` | INT | Server-side timeout for client requests, in microseconds. | 0 (no timeout) | |
| `--ignore-nan-error` | BOOLEAN | If true, ignore NaN errors. Otherwise, respond with a 400 status code if NaN values are detected while processing a request. | - | |
| `--max-batch-size` | INT | Max number of sequences that can be processed in a batch. | 384 | |
| `--num-devices`, `-d` | INT | Number of devices to use in tensor parallelism (tensor parallelism degree). | 1 | |
| `--num-workers`, `-n` | INT | Number of workers per pipeline (pipeline parallelism degree). | 1 | |
| `--search-policy` | BOOLEAN | Search the best engine policy for the given combination of model, hardware, and parallelism degree. Learn more at Optimizing Inference with Policy Search. | - | |
| `--algo-policy-dir` | TEXT | Path to the directory containing the policy file. Learn more at Optimizing Inference with Policy Search. | - | |