
Efficient LLM inference on CPU using Transformers-based API

Posted Date: 2024-01-23

What is Intel Extension for Transformers?

Intel Extension for Transformers is an innovative toolkit launched by Intel that can significantly accelerate Transformers-based large language models (LLMs) on Intel architecture platforms, especially the 4th Gen Intel Xeon Scalable processors (codenamed Sapphire Rapids, SPR). Its main features include:

A seamless model compression experience, provided by extending the Hugging Face transformers API and leveraging Intel Neural Compressor;

An LLM inference runtime with low-bit quantization kernels (NeurIPS 2023: Efficient LLM Inference on CPUs), supporting common LLMs such as Falcon, LLaMA, MPT, Llama 2, BLOOM, OPT, ChatGLM2, GPT-J-6B, Baichuan-13B-Base, Baichuan2-13B-Base, Qwen-7B, Qwen-14B, and Dolly-v2-3B;

An advanced compression-aware runtime (NeurIPS 2022: Fast DistilBERT on CPUs and QuaLA-MiniLM: Quantized Length-Adaptive MiniLM; NeurIPS 2021: Prune Once for All: Sparse Pre-Trained Language Models).

This article focuses on the LLM inference runtime (referred to as "LLM Runtime"): how to use the Transformers-based API to achieve more efficient LLM inference on Intel Xeon Scalable processors, and how to address the application problems LLMs face in chat scenarios.

01 LLM Runtime

The LLM Runtime provided by Intel Extension for Transformers is a lightweight but efficient LLM inference runtime. It is inspired by GGML and is compatible with llama.cpp. It has the following characteristics:

The kernel has been optimized for the various AI acceleration technologies built into Intel Xeon CPUs (such as AMX, VNNI), as well as the AVX512F and AVX2 instruction sets;

More quantization options, such as different granularities (per channel or per group) and different group sizes (e.g., 32/128); a minimal sketch of group-wise quantization follows this list;

Has better KV cache access and memory allocation strategies;

Tensor parallelism capabilities to facilitate distributed inference on multi-socket systems.
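To make the granularity options above concrete, here is a minimal NumPy sketch of group-wise weight-only quantization (symmetric INT4, group size 32). It illustrates the general technique only; the function names are made up for illustration and this is not LLM Runtime's actual kernel code.

import numpy as np

def quantize_int4_groupwise(row, group_size=32):
    """Symmetric 4-bit quantization of one weight row, with one scale per group."""
    w = row.reshape(-1, group_size)                       # split the row into groups
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0   # INT4 range is [-8, 7]
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    return (q * scales).reshape(-1)

row = np.random.randn(128).astype(np.float32)
q, scales = quantize_int4_groupwise(row, group_size=32)
print("max abs error:", np.abs(dequantize(q, scales) - row).max())

Per-channel quantization is the limiting case in which the group spans an entire channel, i.e., one scale per output channel; smaller groups such as 32 trade a little extra scale metadata for better accuracy.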

The simplified architecture diagram of LLM Runtime is as follows:

Figure 1. Simplified architecture diagram of LLM Runtime of Intel Extension for Transformers

02 Using Transformers-based API

Implementing efficient LLM inference on CPU

With fewer than 9 lines of code, you can achieve better LLM inference performance on the CPU. Users can easily enable a Transformers-like API for quantization and inference: just set 'load_in_4bit' to True and load the model from a Hugging Face model ID or a local path. Sample code to enable weight-only INT4 quantization is provided below:

from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)

By default, weights are stored in 4 bits and computation is done in 8 bits. Other combinations of compute data type (dtype) and weight data type are also supported, and users can modify the settings as needed. Sample code showing how to use this feature is provided below:

from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"
woq_config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)

03 Performance Testing

After continuous optimization, the INT4 performance of the scheme described above has improved significantly. This article compares its performance with llama.cpp on a system equipped with an Intel Xeon Platinum 8480+ processor. System configuration details: 3.8 GHz, 56 cores per socket, hyper-threading enabled, turbo enabled, 256 GB total memory (16 x 16 GB DDR5 4800 MT/s), BIOS 3A14.TEL2P1, microcode 0x2b0001b0, CentOS Stream 8.

The inference performance test results when the input size is 32, the output size is 32, and the beam is 1 are detailed in the table below:

Table 1. LLM Runtime and llama.cpp inference performance comparison (input size = 32, output size = 32, beam = 1)

The inference performance test results when the input size is 1024, the output size is 32, and the beam is 1 are detailed in the table below:

Table 2. LLM Runtime and llama.cpp inference performance comparison (input size = 1024, output size = 32, beam = 1)

As can be seen from Table 2, compared with llama.cpp running on the same 4th Gen Intel Xeon Scalable processor, LLM Runtime significantly reduces latency for both the first token and subsequent tokens: first-token and next-token inference speed improve by up to 40x (Baichuan-13B, input size 1024) and 2.68x (MPT-7B, input size 1024), respectively. The llama.cpp tests use its default code base.

Combining the results in Table 1 and Table 2, it can be concluded that, compared with llama.cpp running on the same 4th Gen Intel Xeon Scalable processor, LLM Runtime significantly improves the overall performance of many common LLMs: with an input size of 1024, it achieves a 3.58x to 21.5x improvement; with an input size of 32, it achieves a 1.76x to 3.43x improvement.

04 Accuracy Testing

Intel Extension for Transformers leverages quantization methods such as SignRound, RTN, and GPTQ from Intel Neural Compressor, and verifies INT4 inference accuracy on the lambada_openai, piqa, winogrande, and hellaswag datasets. The table below compares the averages of the test results with FP32 accuracy.

Table 3. INT4 and FP32 accuracy comparison

As can be seen from Table 3, the accuracy loss from INT4 inference with LLM Runtime is very small and almost negligible across multiple models. We verified many models, but only some are listed here due to space limitations. For more information or details, please visit: https://medium.com/@NeuralCompressor/llm-performance-of-intel-extension-for-transformers-f7d061556176.

05 More Advanced Features: Meeting LLM Application Needs in More Scenarios

LLM Runtime also provides tensor parallelism across dual-socket CPUs and is one of the first products to offer this capability. Dual-node support will be added in the future.

However, LLM Runtime's advantages are not limited to better performance and accuracy. We have also invested significant effort in enhancing its functionality for chat application scenarios and in solving the following problems that LLMs may encounter in chat:

01 Dialogue is not only about LLM inference; the dialogue history also matters.

02 Limited output length: LLM pre-training is mainly done with limited sequence lengths, so accuracy degrades when the sequence length exceeds the attention window size used during pre-training.

03 Inefficiency: during the decoding phase, Transformer-based LLMs store the key-value states (KV cache) of all previously generated tokens, resulting in excessive memory usage and increased decoding latency (see the rough estimate below).
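To give a sense of the scale of problem 03, the following back-of-the-envelope estimate uses an assumed 7B-class configuration (32 layers, 32 attention heads, head dimension 128, FP16 KV cache); the exact numbers depend on the model and the cache precision.

# Rough KV cache size for an assumed 7B-class model configuration.
n_layers, n_heads, head_dim = 32, 32, 128
bytes_per_elem = 2                       # FP16
seq_len = 2048                           # tokens kept in the cache

# Factor of 2 accounts for both the K and the V tensors in every layer.
kv_bytes = 2 * n_layers * n_heads * head_dim * bytes_per_elem * seq_len
print(f"{kv_bytes / 2**30:.2f} GiB")     # ~1 GiB for a single 2048-token sequence

The cache grows linearly with sequence length and batch size, which is why long chat sessions quickly become memory-bound.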

Regarding the first issue, LLM Runtime's dialogue functionality addresses it by incorporating more dialogue history data and generating longer outputs, which llama.cpp currently does not handle well.

Regarding the second and third issues, we integrated Streaming LLM into Intel Extension for Transformers, which can significantly optimize memory usage and reduce inference latency.

06 Streaming LLM

Unlike the traditional KV cache algorithm, our method combines an attention sink (the 4 initial tokens) to improve the stability of attention computation with a rolling KV cache that keeps the most recent tokens, which is crucial for language modeling. The design is highly flexible and can be seamlessly integrated into autoregressive language models that use rotary position embedding (RoPE) or relative position encoding (ALiBi).

Figure 2. KV cache of Streaming LLM (image source: Efficient Streaming Language Models with Attention Sinks)

In addition, unlike llama.cpp, this optimization introduces parameters such as "n_keep" and "n_discard" to enhance the Streaming LLM strategy. Users can use the former to specify the number of tokens to keep in the KV cache and the latter to determine how many of the generated tokens to discard. To better balance performance and accuracy, the system by default discards half of the recently generated tokens in the KV cache once the length threshold is reached.
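The sketch below is a simplified Python model of this eviction policy (keep the n_keep attention-sink tokens, drop a block of n_discard tokens that follows them once the cache reaches n_ctx); the evict() helper is hypothetical and is not the actual LLM Runtime implementation.

def evict(kv_cache, n_ctx, n_keep, n_discard):
    """Drop n_discard cached tokens (those right after the attention sinks)
    once the cache has reached the context limit n_ctx."""
    if len(kv_cache) < n_ctx:
        return kv_cache                          # still room, nothing to discard
    if n_discard == -1:                          # default: half of the non-sink tokens
        n_discard = (len(kv_cache) - n_keep) // 2
    return kv_cache[:n_keep] + kv_cache[n_keep + n_discard:]

cache = list(range(16))                          # token positions 0..15
print(evict(cache, n_ctx=16, n_keep=4, n_discard=-1))
# [0, 1, 2, 3, 10, 11, 12, 13, 14, 15]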

To further improve performance, we also added Streaming LLM support to the fused MHA (multi-head attention) path. If the model uses rotary position embedding (RoPE), then only a "shift operation" needs to be applied to the existing K-cache, avoiding repeated computation on previously generated tokens that have not been discarded. This approach not only takes full advantage of the full context size when generating long text, but also incurs no additional overhead until the KV cache context is completely filled.

A "shift operation" relies on commutativity and associativity of rotations, or complex multiplication. For example: If the initial placement position of a token's K-tensor is m and it is rotated m×θi for i∈0,d/2, then when it needs to move to the position m-1, it can be rotated back to - 1×θi for i∈0,d/2. This is exactly what happens every time a cache of n_discard tokens is discarded, at which point each remaining token needs to be "moved" n_discard positions. The figure below takes "n_keep = 4, n_ctx = 16, n_discard = 1" as an example to show this process.

Figure 3. Working principle of Ring-Buffer KV-Cache and Shift-RoPE

Note that the fused attention layer does not need to be aware of this process. As long as the K-cache and the V-cache are shuffled identically, the attention layer produces almost the same output (there may be tiny differences due to floating-point error).
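This claim is easy to verify with a self-contained NumPy sketch (single query, single head, no masking), which is not LLM Runtime code: permuting the rows of K and V identically leaves the attention output unchanged up to floating-point error.

import numpy as np

rng = np.random.default_rng(0)
q = rng.standard_normal((1, 64))        # one query vector
K = rng.standard_normal((16, 64))       # 16 cached keys
V = rng.standard_normal((16, 64))       # 16 cached values

def attention(q, K, V):
    scores = q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

perm = rng.permutation(len(K))          # shuffle the K and V caches identically
print(np.allclose(attention(q, K, V), attention(q, K[perm], V[perm])))  # True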

You can start Streaming LLM with the following code:

from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

model_name = "Intel/neural-chat-7b-v1-1"  # Hugging Face model_id or local model
woq_config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4")
prompt = "Once upon a time, a little girl"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config, trust_remote_code=True)

# Recommended: n_keep=4 to use attention sinks (the four initial tokens) and n_discard=-1
# to drop half of the recent tokens when the length threshold is reached
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300, ctx_size=100, n_keep=4, n_discard=-1)

Conclusion and Outlook

Based on the practical experience above, this article presents a solution for efficient low-bit (INT4) LLM inference on Intel Xeon Scalable processors, verifies its generality across a range of common LLMs, and demonstrates its performance advantages over other open-source CPU-based solutions. In the future, we will further improve the CPU tensor library and cross-node parallel performance.

You are welcome to try Intel Extension for Transformers and run LLM inference more efficiently on Intel platforms! Feel free to submit pull requests, issues, or questions to the code repository. We look forward to your feedback!


