Which Quantization Method is Right for You? (GPTQ vs. GGUF vs. AWQ)

Throughout the last year, we have seen the Wild West of Large Language Models (LLMs). The pace at which new techniques and models were released was astounding! As a result, we have many different standards and ways of working with LLMs.
In this article, we will explore one such topic, namely loading your local LLM through several (quantization) standards. With sharding, quantization, and different saving and compression strategies, it is not easy to know which method is suitable for you.
Throughout the examples, we will use Zephyr 7B, a fine-tuned variant of Mistral 7B that was trained with Direct Preference Optimization (DPO).
🔥 TIP: After each example of loading an LLM, it is advised to restart your notebook to prevent OutOfMemory errors. Loading multiple LLMs requires significant RAM/VRAM. You can reset memory by deleting the models and resetting your cache like so:
# Delete any models previously created
del model, tokenizer, pipe

# Empty VRAM cache
import torch
torch.cuda.empty_cache()
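If VRAM does not drop after clearing the cache, it can also help to run Python's garbage collector first so lingering references are actually released. This extra step is optional and not part of the snippet above:

import gc

# Release dangling Python references before clearing the CUDA cache
gc.collect()
torch.cuda.empty_cache()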
You can also follow along with the Google Colab Notebook to make sure everything works as intended.
The most straightforward, and vanilla, way of loading your LLM is through 🤗 Transformers. HuggingFace has created a large suite of packages that allow us to do amazing things with LLMs!
We will start by installing HuggingFace, among others, from its main branch to support newer models:
# Latest HF transformers version for Mistral-like models
pip install git+https://github.com/huggingface/transformers.git
pip install accelerate bitsandbytes xformers
After installation, we can use the following pipeline to easily load our LLM:
from torch import bfloat16
from transformers import pipeline

# Load your LLM without any compression tricks
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=bfloat16,
    device_map="auto"
)
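To check that the model loaded correctly, you can generate a short completion. The snippet below is a minimal sketch: it assumes Zephyr's chat template is available through the pipeline's tokenizer via apply_chat_template, and the sampling parameters are illustrative rather than prescriptive:

# Build a prompt using the model's own chat template
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Tell me a funny joke about Large Language Models."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate a short response (sampling values chosen for illustration)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])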