✅ From-scratch implementations ✅ No abstractions ✅ Beginner friendly ✅ Flash attention ✅ FSDP ✅ LoRA, QLoRA, Adapter ✅ Reduce GPU memory (fp4/8/16/32) ✅ 1-1000+ GPUs/TPUs ✅ 20+ LLMs

---

[License](https://github.com/Lightning-AI/lit-stablelm/blob/master/LICENSE) • [Discord](https://discord.gg/VptPCZkGNa)
Quick start • Models • Finetune • Deploy • All workflows • Features • Recipes (YAML) • Lightning AI • Tutorials
Finetune • Pretrain • Continued pretraining • Evaluate • Deploy • Test
Use the command-line interface to run advanced workflows such as pretraining or finetuning on your own data.

## All workflows

After installing LitGPT, select the model and the workflow to run (finetune, pretrain, evaluate, deploy, etc.):

```bash
# litgpt [action] [model]
litgpt serve meta-llama/Llama-3.2-3B-Instruct
litgpt finetune meta-llama/Llama-3.2-3B-Instruct
litgpt pretrain meta-llama/Llama-3.2-3B-Instruct
litgpt chat meta-llama/Llama-3.2-3B-Instruct
litgpt evaluate meta-llama/Llama-3.2-3B-Instruct
```

----

## Finetune an LLM

Finetuning is the process of taking a pretrained AI model and further training it on a smaller, specialized dataset tailored to a specific task or application.

```bash
# 0) Set up your dataset
curl -L https://huggingface.co/datasets/ksaw008/finance_alpaca/resolve/main/finance_alpaca.json -o my_custom_dataset.json

# 1) Finetune a model (auto-downloads weights)
litgpt finetune microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --data.val_split_fraction 0.1 \
  --out_dir out/custom-model

# 2) Test the model
litgpt chat out/custom-model/final

# 3) Deploy the model
litgpt serve out/custom-model/final
```

[Read the full finetuning docs](tutorials/finetune.md)

----

## Deploy an LLM

Deploy a pretrained or finetuned LLM to use it in real-world applications. Deploying automatically sets up a web server that can be accessed by a website or app.

```bash
# Deploy an out-of-the-box LLM
litgpt serve microsoft/phi-2

# Deploy your own trained model
litgpt serve path/to/microsoft/phi-2/checkpoint
```
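Once the server is running, you can query it over HTTP. A minimal stdlib-only client sketch is shown below; the default host/port, the `/predict` route, and the `"prompt"`/`"output"` JSON fields are assumptions based on the LitServe-style API that `litgpt serve` exposes, so check `litgpt serve --help` if your setup differs:

```python
import json
from urllib import request

# Assumed default address of the server started by `litgpt serve`;
# the /predict route and JSON schema are assumptions, not guaranteed.
URL = "http://127.0.0.1:8000/predict"

def query(prompt: str) -> str:
    """POST a prompt to the running LitGPT server and return its reply."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(URL, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["output"]

if __name__ == "__main__":
    # Requires `litgpt serve ...` to be running in another terminal.
    print(query("What do Llamas eat?"))
```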
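For reference, the finetuning workflow above feeds a JSON file to `--data JSON`. A sketch of what one record might look like, assuming the Alpaca-style `instruction`/`input`/`output` layout used by the `finance_alpaca` dataset:

```python
import json

# Hypothetical single-record dataset in the assumed Alpaca-style layout:
# each record has "instruction", "input", and "output" fields
# ("input" may be an empty string).
records = [
    {
        "instruction": "What is compound interest?",
        "input": "",
        "output": "Interest earned on both the principal and previously accumulated interest.",
    },
]

# Write it in the same shape as the downloaded my_custom_dataset.json.
with open("my_custom_dataset.json", "w") as f:
    json.dump(records, f, indent=2)
```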