{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Train LLMs using QLoRA on Amazon SageMaker\n", "\n", "In this sagemaker example, we are going to learn how to apply [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314) \n", "to fine-tune Falcon 40B. QLoRA is an efficient finetuning technique that quantizes a pretrained language model to 4 bits and attaches small “Low-Rank Adapters” which are fine-tuned. This enables fine-tuning of models with up to 65 billion parameters on a single GPU; despite its efficiency, QLoRA matches the performance of full-precision fine-tuning and achieves state-of-the-art results on language tasks.\n", "\n", "In our example, we are going to leverage Hugging Face [Transformers](https://huggingface.co/docs/transformers/index), [Accelerate](https://huggingface.co/docs/accelerate/index), and [PEFT](https://github.com/huggingface/peft). \n", "\n", "In Detail you will learn how to:\n", "1. Setup Development Environment\n", "2. Load and prepare the dataset\n", "3. Fine-Tune Falcon 40B with QLoRA on Amazon SageMaker\n", "\n", "### Quick intro: PEFT or Parameter Efficient Fine-tuning\n", "\n", "[PEFT](https://github.com/huggingface/peft), or Parameter Efficient Fine-tuning, is a new open-source library from Hugging Face to enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. PEFT currently includes techniques for:\n", "\n", "- (Q)LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/pdf/2106.09685.pdf)\n", "- Prefix Tuning: [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)\n", "- P-Tuning: [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf)\n", "- Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf)\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "!pip install \"transformers==4.30.2\" \"datasets[s3]==2.13.0\" sagemaker --upgrade --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "import boto3\n", "sess = sagemaker.Session()\n", "# sagemaker session bucket -> used for uploading data, models and logs\n", "# sagemaker will automatically create this bucket if it not exists\n", "sagemaker_session_bucket=None\n", "if sagemaker_session_bucket is None and sess is not None:\n", " # set to default bucket if a bucket name is not given\n", " sagemaker_session_bucket = sess.default_bucket()\n", "\n", "try:\n", " role = sagemaker.get_execution_role()\n", "except ValueError:\n", " iam = boto3.client('iam')\n", " role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']\n", "\n", "sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)\n", "\n", "print(f\"sagemaker role arn: {role}\")\n", "print(f\"sagemaker bucket: {sess.default_bucket()}\")\n", "print(f\"sagemaker session region: {sess.boto_region_name}\")\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "## 2. Load and prepare the dataset\n",
  "\n",
  "We will use [dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k), an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT paper](https://arxiv.org/abs/2203.02155), including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.\n",
  "\n",
  "```python\n",
  "{\n",
  "  \"instruction\": \"What is world of warcraft\",\n",
  "  \"context\": \"\",\n",
  "  \"response\": \"World of warcraft is a massive online multi player role playing game. It was released in 2004 by bizarre entertainment\"\n",
  "}\n",
  "```\n",
  "\n",
  "To load the `databricks/databricks-dolly-15k` dataset, we use the `load_dataset()` method from the 🤗 Datasets library."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "from datasets import load_dataset\n",
  "from random import randrange\n",
  "\n",
  "# Load dataset from the hub\n",
  "dataset = load_dataset(\"databricks/databricks-dolly-15k\", split=\"train\")\n",
  "\n",
  "print(f\"dataset size: {len(dataset)}\")\n",
  "print(dataset[randrange(len(dataset))])\n",
  "# dataset size: 15011\n"
 ] },
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "To instruction-tune our model, we need to convert our structured examples into a collection of tasks described via instructions. We define a formatting function `format_dolly` that takes a sample and returns a string with our formatted instruction."
 ] },
 { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [
  "def format_dolly(sample):\n",
  "    instruction = f\"### Instruction\\n{sample['instruction']}\"\n",
  "    context = f\"### Context\\n{sample['context']}\" if len(sample[\"context\"]) > 0 else None\n",
  "    response = f\"### Answer\\n{sample['response']}\"\n",
  "    # join all the parts together\n",
  "    prompt = \"\\n\\n\".join([i for i in [instruction, context, response] if i is not None])\n",
  "    return prompt\n"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "Let's test our formatting function on a random example."
 ] },
 { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [
  { "name": "stdout", "output_type": "stream", "text": [
   "### Instruction\n",
   "Who is the most decorated olympian of all time?\n",
   "\n",
   "### Answer\n",
   "Michael Phelps is the most decorated olympian winning a total of 28 medals.\n"
  ] }
 ], "source": [
  "from random import randrange\n",
  "\n",
  "print(format_dolly(dataset[randrange(len(dataset))]))"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "In addition to formatting our samples, we also want to pack multiple samples into one sequence for more efficient training."
 ] },
 { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [
  "from transformers import AutoTokenizer\n",
  "\n",
  "model_id = \"tiiuae/falcon-40b\"  # sharded weights\n",
  "tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\n",
  "tokenizer.pad_token = tokenizer.eos_token"
 ] },
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "We define some helper functions to pack our samples into sequences of a given length and then tokenize them; the toy example below illustrates the idea."
 ] },
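 { "cell_type": "markdown", "metadata": {}, "source": [
  "As a toy illustration (with hypothetical token IDs and a chunk length of 4 instead of 2048), packing boils down to concatenating all tokenized samples and cutting the result into fixed-size chunks, carrying any leftover tokens over to the next batch:\n",
  "\n",
  "```python\n",
  "from itertools import chain\n",
  "\n",
  "# three short samples with made-up token IDs\n",
  "batch = {\"input_ids\": [[1, 2, 3], [4, 5, 6, 7, 8], [9, 10]]}\n",
  "chunk_length = 4\n",
  "\n",
  "concatenated = list(chain(*batch[\"input_ids\"]))                     # [1, 2, ..., 10]\n",
  "usable_length = (len(concatenated) // chunk_length) * chunk_length  # 8\n",
  "chunks = [concatenated[i : i + chunk_length] for i in range(0, usable_length, chunk_length)]\n",
  "\n",
  "print(chunks)                        # [[1, 2, 3, 4], [5, 6, 7, 8]]\n",
  "print(concatenated[usable_length:])  # [9, 10] -> kept as remainder for the next batch\n",
  "```\n",
  "\n",
  "The next cell implements exactly this for the real dataset, using the tokenizer output and a chunk length of 2048."
 ] },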
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from random import randint\n", "from itertools import chain\n", "from functools import partial\n", "\n", "\n", "\n", "# template dataset to add prompt to each sample\n", "def template_dataset(sample):\n", " sample[\"text\"] = f\"{format_dolly(sample)}{tokenizer.eos_token}\"\n", " return sample\n", "\n", "\n", "# apply prompt template per sample\n", "dataset = dataset.map(template_dataset, remove_columns=list(dataset.features))\n", "# print random sample\n", "print(dataset[randint(0, len(dataset))][\"text\"])\n", "\n", "# empty list to save remainder from batches to use in next batch\n", "remainder = {\"input_ids\": [], \"attention_mask\": [], \"token_type_ids\": []}\n", "\n", "def chunk(sample, chunk_length=2048):\n", " # define global remainder variable to save remainder from batches to use in next batch\n", " global remainder\n", " # Concatenate all texts and add remainder from previous batch\n", " concatenated_examples = {k: list(chain(*sample[k])) for k in sample.keys()}\n", " concatenated_examples = {k: remainder[k] + concatenated_examples[k] for k in concatenated_examples.keys()}\n", " # get total number of tokens for batch\n", " batch_total_length = len(concatenated_examples[list(sample.keys())[0]])\n", "\n", " # get max number of chunks for batch\n", " if batch_total_length >= chunk_length:\n", " batch_chunk_length = (batch_total_length // chunk_length) * chunk_length\n", "\n", " # Split by chunks of max_len.\n", " result = {\n", " k: [t[i : i + chunk_length] for i in range(0, batch_chunk_length, chunk_length)]\n", " for k, t in concatenated_examples.items()\n", " }\n", " # add remainder to global variable for next batch\n", " remainder = {k: concatenated_examples[k][batch_chunk_length:] for k in concatenated_examples.keys()}\n", " # prepare labels\n", " result[\"labels\"] = result[\"input_ids\"].copy()\n", " return result\n", "\n", "\n", "# tokenize and chunk dataset\n", "lm_dataset = dataset.map(\n", " lambda sample: tokenizer(sample[\"text\"]), batched=True, remove_columns=list(dataset.features)\n", ").map(\n", " partial(chunk, chunk_length=2048),\n", " batched=True,\n", ")\n", "\n", "# Print total number of samples\n", "print(f\"Total number of samples: {len(lm_dataset)}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "After we processed the datasets we are going to use the new [FileSystem integration](https://huggingface.co/docs/datasets/filesystems) to upload our dataset to S3. We are using the `sess.default_bucket()`, adjust this if you want to store the dataset in a different S3 bucket. We will use the S3 path later in our training script." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# save train_dataset to s3\n", "training_input_path = f's3://{sess.default_bucket()}/processed/dolly/train'\n", "lm_dataset.save_to_disk(training_input_path)\n", "\n", "print(\"uploaded data to:\")\n", "print(f\"training dataset to: {training_input_path}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Fine-Tune Falcon 40B with QLoRA on Amazon SageMaker\n", "\n", "We are going to use the recently introduced method in the paper \"[QLoRA: Quantization-aware Low-Rank Adapter Tuning for Language Generation](https://arxiv.org/abs/2106.09685)\" by Tim Dettmers et al. 
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "## 3. Fine-Tune Falcon 40B with QLoRA on Amazon SageMaker\n",
  "\n",
  "We are going to use the recently introduced method from the paper \"[QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)\" by Tim Dettmers et al. QLoRA is a new technique to reduce the memory footprint of large language models during finetuning, without sacrificing performance. The TL;DR of how QLoRA works is:\n",
  "\n",
  "* Quantize the pretrained model to 4 bits and freeze it.\n",
  "* Attach small, trainable adapter layers (LoRA).\n",
  "* Finetune only the adapter layers, while using the frozen quantized model for context.\n",
  "\n",
  "We prepared a [run_clm.py](./scripts/run_clm.py) script, which implements QLoRA using PEFT to train our model (a minimal sketch of this setup is shown further below). The script also merges the LoRA weights into the model weights after training. That way you can use the model as a normal model without any additional code.\n",
  "\n",
  "In order to create a SageMaker training job, we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks and manages the infrastructure for us: SageMaker takes care of starting and managing all the required EC2 instances, provides the correct Hugging Face container, uploads the provided scripts, and downloads the data from our S3 bucket into the container at `/opt/ml/input/data`. Then, it starts the training job by running the training script with the provided hyperparameters.\n"
 ] },
 { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [
  "import time\n",
  "# define Training Job Name\n",
  "job_name = f'huggingface-qlora-{time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.localtime())}'\n",
  "\n",
  "from sagemaker.huggingface import HuggingFace\n",
  "\n",
  "# hyperparameters, which are passed into the training job\n",
  "hyperparameters = {\n",
  "    'model_id': model_id,                           # pre-trained model\n",
  "    'dataset_path': '/opt/ml/input/data/training',  # path where sagemaker mounts the training dataset\n",
  "    'epochs': 3,                                    # number of training epochs\n",
  "    'per_device_train_batch_size': 4,               # batch size for training\n",
  "    'lr': 2e-4,                                     # learning rate used during training\n",
  "}\n",
  "\n",
  "# create the Estimator\n",
  "huggingface_estimator = HuggingFace(\n",
  "    entry_point          = 'run_clm.py',      # training script\n",
  "    source_dir           = 'scripts',         # directory which includes all the files needed for training\n",
  "    instance_type        = 'ml.g5.12xlarge',  # instance type used for the training job\n",
  "    instance_count       = 1,                 # the number of instances used for training\n",
  "    base_job_name        = job_name,          # the name of the training job\n",
  "    role                 = role,              # IAM role used in the training job to access AWS resources, e.g. S3\n",
  "    volume_size          = 300,               # the size of the EBS volume in GB\n",
  "    transformers_version = '4.28',            # the transformers version used in the training job\n",
  "    pytorch_version      = '2.0',             # the pytorch version used in the training job\n",
  "    py_version           = 'py310',           # the python version used in the training job\n",
  "    hyperparameters      = hyperparameters,\n",
  "    environment          = { \"HUGGINGFACE_HUB_CACHE\": \"/tmp/.cache\" },  # set env variable to cache models in /tmp\n",
  ")"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "We can now start our training job with the `.fit()` method, passing our S3 path to the training script."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "# define a data input dictionary with our uploaded s3 uris\n",
  "data = {'training': training_input_path}\n",
  "\n",
  "# starting the training job with our uploaded datasets as input\n",
  "huggingface_estimator.fit(data, wait=True)"
 ] },
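 { "cell_type": "markdown", "metadata": {}, "source": [
  "For reference, here is a minimal sketch of the QLoRA setup a training script like `run_clm.py` applies inside the training container: load the base model quantized to 4 bits with bitsandbytes, attach small trainable LoRA adapters with PEFT, train only the adapters, and merge them back into the base weights afterwards. The concrete values below (rank, alpha, dropout) are illustrative assumptions and not necessarily the ones used in our script:\n",
  "\n",
  "```python\n",
  "import torch\n",
  "from transformers import AutoModelForCausalLM, BitsAndBytesConfig\n",
  "from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training\n",
  "\n",
  "# 1. load the base model quantized to 4-bit (NF4) with bitsandbytes\n",
  "bnb_config = BitsAndBytesConfig(\n",
  "    load_in_4bit=True,\n",
  "    bnb_4bit_use_double_quant=True,\n",
  "    bnb_4bit_quant_type=\"nf4\",\n",
  "    bnb_4bit_compute_dtype=torch.bfloat16,\n",
  ")\n",
  "model = AutoModelForCausalLM.from_pretrained(\n",
  "    \"tiiuae/falcon-40b\",\n",
  "    quantization_config=bnb_config,\n",
  "    device_map=\"auto\",\n",
  "    trust_remote_code=True,\n",
  ")\n",
  "\n",
  "# 2. freeze the quantized weights and attach small trainable LoRA adapters\n",
  "model = prepare_model_for_kbit_training(model)\n",
  "lora_config = LoraConfig(\n",
  "    r=64,                                # adapter rank (assumed value)\n",
  "    lora_alpha=16,\n",
  "    lora_dropout=0.1,\n",
  "    target_modules=[\"query_key_value\"],  # attention projection module in Falcon\n",
  "    task_type=\"CAUSAL_LM\",\n",
  ")\n",
  "model = get_peft_model(model, lora_config)\n",
  "model.print_trainable_parameters()       # only a small fraction of the parameters is trainable\n",
  "\n",
  "# 3. train with the Hugging Face Trainer as usual, then merge the adapters back\n",
  "# merged_model = model.merge_and_unload()\n",
  "```"
 ] },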
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "In our example, the SageMaker training job took `53405 seconds`, which is about `14.8 hours`. The ml.g5.12xlarge instance we used costs `$7.09 per hour` for on-demand usage. As a result, the total cost for training our fine-tuned Falcon-40B model was only ~`$105`."
 ] },
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [
  "## Next Steps\n",
  "\n",
  "You can deploy your fine-tuned model to a SageMaker endpoint and use it for inference. Check out the [Deploy Falcon 7B & 40B on Amazon SageMaker](https://www.philschmid.de/sagemaker-falcon-llm) and [Securely deploy LLMs inside VPCs with Hugging Face and Amazon SageMaker](https://www.philschmid.de/sagemaker-llm-vpc) blog posts for more details."
 ] }
], "metadata": { "kernelspec": { "display_name": "pytorch", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" }, "vscode": { "interpreter": { "hash": "2d58e898dde0263bc564c6968b04150abacfd33eed9b19aaa8e45c040360e146" } } }, "nbformat": 4, "nbformat_minor": 4 }