Convert
This document provides technical guidance on converting and quantizing large language models (LLMs) for deployment and optimization on diverse hardware, including but not limited to NVIDIA Jetson, AMD, Intel, and Qualcomm. It covers both foundation LLMs and fine-tuned LLMs.
Currently, this version only supports converting models to the GGUF format. Support for more platforms and formats will be added in the future.
Supported source models:
Foundation LLMs
Fine-tuned LLMs
This guide covers the following target platforms and formats:
GGUF: A model file format for efficient inference, particularly on CPU, used by the llama.cpp library.
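For context, the sketch below shows what such a conversion typically looks like when performed by hand with llama.cpp tooling: first export an unquantized GGUF file, then quantize it. The directory layout, the convert_hf_to_gguf.py script name, and the llama-quantize binary path are assumptions about a recent local llama.cpp checkout and may differ between versions; this tool performs the equivalent steps for you.

```python
# Hypothetical, hand-run equivalent of the GGUF conversion flow (paths are assumptions).
# Requires a local llama.cpp checkout with its convert script and a built llama-quantize binary.
import subprocess
from pathlib import Path

HF_MODEL_DIR = Path("./my-model")        # hypothetical: directory containing Hugging Face weights
LLAMA_CPP_DIR = Path("./llama.cpp")      # hypothetical: local llama.cpp checkout
F16_GGUF = Path("./my-model-f16.gguf")
Q4_GGUF = Path("./my-model-q4_k_m.gguf")

# Step 1: convert the Hugging Face checkpoint to an unquantized (f16) GGUF file.
subprocess.run(
    [
        "python", str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        str(HF_MODEL_DIR),
        "--outfile", str(F16_GGUF),
        "--outtype", "f16",
    ],
    check=True,
)

# Step 2: quantize the f16 GGUF down to 4-bit (q4_k_m) to shrink the file and speed up inference.
subprocess.run(
    [
        str(LLAMA_CPP_DIR / "build" / "bin" / "llama-quantize"),
        str(F16_GGUF), str(Q4_GGUF), "Q4_K_M",
    ],
    check=True,
)
```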
Select Source Model: Choose from the available foundation LLMs or fine-tuned LLMs.
Model Quantization: (Optional) Apply quantization techniques to reduce model size and improve inference speed.
Quantization is the process of converting model weights from floating-point numbers (e.g., FP32) to lower-precision formats (e.g., INT8). This can significantly reduce model size and improve inference speed, but may slightly decrease accuracy.
Common quantization types include:
q4_k_m: A 4-bit k-quant method (medium variant), producing a smaller model at a modest cost in accuracy.
q6_k: A 6-bit k-quant method, larger than q4_k_m but closer to the original model's accuracy.
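To give an intuition for the size/accuracy trade-off, here is a minimal, self-contained sketch of symmetric INT8 quantization in Python. It is illustrative only: the q4_k_m and q6_k types used for GGUF are block-wise k-quant schemes, not this simple per-tensor scheme.

```python
# Minimal illustration of quantization (not the actual GGUF q4_k_m/q6_k algorithms):
# symmetric per-tensor INT8 quantization of FP32 weights.
import numpy as np

weights = np.random.randn(4096).astype(np.float32)    # toy FP32 weight tensor

scale = np.abs(weights).max() / 127.0                  # map the largest magnitude to 127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale             # approximate reconstruction at inference time

print("FP32 size:", weights.nbytes, "bytes")           # 4 bytes per weight
print("INT8 size:", q.nbytes, "bytes")                 # 1 byte per weight (~4x smaller)
print("Max round-trip error:", np.abs(weights - dequantized).max())
```

Lower-bit types shrink the model further (4-bit stores roughly half as much as 8-bit) but introduce larger rounding error, which is why higher-bit options such as q6_k stay closer to the original accuracy.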
Name: Enter a name for the converted model (letters, numbers, . - _ only).
Description: Provide an optional description for the model (limit 20 characters).
Source Model: Select the base model from the dropdown menu.
Quantization Type: Select the desired quantization type from the dropdown menu.
Convert: Click the "Convert" button to begin the conversion process.