{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "(gptj_deepspeed_finetune)=\n", "\n", "# GPT-J-6B Fine-Tuning with Ray AIR and DeepSpeed\n", "\n", "In this example, we will showcase how to use the Ray AIR for **GPT-J fine-tuning**. GPT-J is a GPT-2-like causal language model trained on the Pile dataset. This particular model has 6 billion parameters. For more information on GPT-J, click [here](https://huggingface.co/docs/transformers/model_doc/gptj).\n", "\n", "We will use Ray AIR (with the 🤗 Transformers integration) and a pretrained model from Hugging Face hub. Note that you can easily adapt this example to use other similar models.\n", "\n", "This example focuses more on the performance and distributed computing aspects of Ray AIR. If you are looking for a more beginner-friendly introduction to Ray AIR 🤗 Transformers integration, see {doc}`this example `.\n", "\n", "It is highly recommended to read [Ray Train Key Concepts](train-key-concepts) and [Ray Data Key Concepts](data_key_concepts) before starting this example.\n", "\n", "```{note}\n", "To run this example, make sure your Ray cluster has access to at least one GPU with 16 or more GBs of memory. The required amount of memory depends on the model. This notebook is tested with 16 g4dn.4xlarge instances (including the head node). If you wish to use a CPU head node, turn on [cloud checkpointing](tune-cloud-checkpointing) to avoid OOM errors that may happen due to the default behavior of syncing the checkpoint files to the head node.\n", "```\n", "\n", "In this notebook, we will:\n", "1. [Set up Ray](#setup)\n", "2. [Load the dataset](#load)\n", "3. [Preprocess the dataset with Ray AIR](#preprocess)\n", "4. [Run the training with Ray AIR](#train)\n", "5. [Generate text from prompt with Ray AIR](#predict)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Uncomment and run the following line in order to install all the necessary dependencies (this notebook is being tested with `transformers==4.26.0`):" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "#! pip install \"datasets\" \"evaluate\" \"accelerate==0.18.0\" \"transformers>=4.26.0\" \"torch>=1.12.0\" \"deepspeed==0.8.3\"" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import os" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Set up Ray \n", "\n", "First, let's set some global variables. We will use 16 workers, each being assigned 1 GPU and 8 CPUs." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "model_name = \"EleutherAI/gpt-j-6B\"\n", "use_gpu = True\n", "num_workers = 16\n", "cpus_per_worker = 8" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We will use `ray.init()` to initialize a local cluster. By default, this cluster will be comprised of only the machine you are running this notebook on. You can also run this notebook on an Anyscale cluster.\n", "\n", "We define a {ref}`runtime environment ` to ensure that the Ray workers have access to all the necessary packages. You can omit the `runtime_env` argument if you have all of the packages already installed on each node in your cluster." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Ray

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "\n", "
Python version:3.8.16
Ray version: 3.0.0.dev0
Dashboard:http://console.anyscale-staging.com/api/v2/sessions/ses_sedlspnpy16naa5lm9kf2cmi2y/services?redirect_to=dashboard
\n", "
\n", "
\n" ], "text/plain": [ "RayContext(dashboard_url='console.anyscale-staging.com/api/v2/sessions/ses_sedlspnpy16naa5lm9kf2cmi2y/services?redirect_to=dashboard', python_version='3.8.16', ray_version='3.0.0.dev0', ray_commit='4ddbbb3c4b19c2d27bbf54f8c5ffc100dceafbcf', address_info={'node_ip_address': '10.0.30.196', 'raylet_ip_address': '10.0.30.196', 'redis_address': None, 'object_store_address': '/tmp/ray/session_2023-03-06_15-55-37_997701_162/sockets/plasma_store', 'raylet_socket_name': '/tmp/ray/session_2023-03-06_15-55-37_997701_162/sockets/raylet', 'webui_url': 'console.anyscale-staging.com/api/v2/sessions/ses_sedlspnpy16naa5lm9kf2cmi2y/services?redirect_to=dashboard', 'session_dir': '/tmp/ray/session_2023-03-06_15-55-37_997701_162', 'metrics_export_port': 8085, 'gcs_address': '10.0.30.196:6379', 'address': '10.0.30.196:6379', 'dashboard_agent_listen_port': 52365, 'node_id': '77de483c435bf4987fd6f1e91d47602554e876fd41230d8d50c05333'})" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ray\n", "\n", "ray.init(\n", " runtime_env={\n", " \"pip\": [\n", " \"datasets\",\n", " \"evaluate\",\n", " # Latest combination of accelerate==0.19.0 and transformers==4.29.0\n", " # seems to have issues with DeepSpeed process group initialization,\n", " # and will result in a batch_size validation problem.\n", " # TODO(jungong) : get rid of the pins once the issue is fixed.\n", " \"accelerate==0.16.0\",\n", " \"transformers==4.26.0\",\n", " \"torch>=1.12.0\",\n", " \"deepspeed==0.9.2\",\n", " ]\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "# THIS SHOULD BE HIDDEN IN DOCS AND ONLY RAN IN CI\n", "# Download the model from our S3 mirror as it's faster\n", "\n", "import ray\n", "import subprocess\n", "import ray.util.scheduling_strategies\n", "\n", "\n", "def force_on_node(node_id: str, remote_func_or_actor_class):\n", " scheduling_strategy = ray.util.scheduling_strategies.NodeAffinitySchedulingStrategy(\n", " node_id=node_id, soft=False\n", " )\n", " options = {\"scheduling_strategy\": scheduling_strategy}\n", " return remote_func_or_actor_class.options(**options)\n", "\n", "\n", "def run_on_every_node(remote_func_or_actor_class, **remote_kwargs):\n", " refs = []\n", " for node in ray.nodes():\n", " if node[\"Alive\"] and node[\"Resources\"].get(\"GPU\", None):\n", " refs.append(\n", " force_on_node(node[\"NodeID\"], remote_func_or_actor_class).remote(\n", " **remote_kwargs\n", " )\n", " )\n", " return ray.get(refs)\n", "\n", "\n", "@ray.remote(num_gpus=1)\n", "def download_model():\n", " from transformers.utils.hub import TRANSFORMERS_CACHE\n", "\n", " path = os.path.expanduser(\n", " os.path.join(TRANSFORMERS_CACHE, \"models--EleutherAI--gpt-j-6B\")\n", " )\n", " subprocess.run([\"mkdir\", \"-p\", os.path.join(path, \"snapshots\", \"main\")])\n", " subprocess.run([\"mkdir\", \"-p\", os.path.join(path, \"refs\")])\n", " if os.path.exists(os.path.join(path, \"refs\", \"main\")):\n", " return\n", " subprocess.run(\n", " [\n", " \"aws\",\n", " \"s3\",\n", " \"sync\",\n", " \"--no-sign-request\",\n", " \"s3://large-dl-models-mirror/models--EleutherAI--gpt-j-6B/main/\",\n", " os.path.join(path, \"snapshots\", \"main\"),\n", " ]\n", " )\n", " with open(os.path.join(path, \"snapshots\", \"main\", \"hash\"), \"r\") as f:\n", " f_hash = f.read().strip()\n", " with open(os.path.join(path, \"refs\", \"main\"), \"w\") as f:\n", " f.write(f_hash)\n", " os.rename(\n", " 
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Loading the dataset <a name=\"load\"></a>\n", "\n", "We will be fine-tuning the model on the [`tiny_shakespeare` dataset](https://huggingface.co/datasets/tiny_shakespeare), which consists of 40,000 lines of Shakespeare from a variety of his plays. The aim is to make the GPT-J model better at generating text in the style of Shakespeare." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading tiny_shakespeare dataset\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Found cached dataset tiny_shakespeare (/home/ray/.cache/huggingface/datasets/tiny_shakespeare/default/1.0.0/b5b13969f09fe8707337f6cb296314fbe06960bd9a868dca39e713e163d27b5e)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "65894225f3b84e5caa117c4d08d9f99d", "version_major": 2, "version_minor": 0 }, "text/plain": [ "  0%|          | 0/3 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from datasets import load_dataset\n", "\n", "print(\"Loading tiny_shakespeare dataset\")\n", "current_dataset = load_dataset(\"tiny_shakespeare\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We will use [Ray Data](data) for distributed preprocessing and data ingestion. The dataset obtained from the Hugging Face Hub can be converted into Ray Datasets with {meth}`ray.data.from_huggingface`." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "import ray.data\n", "\n", "ray_datasets = ray.data.from_huggingface(current_dataset)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Preprocessing <a name=\"preprocess\"></a>\n", "\n", "Because the dataset is represented by a single large string, we will need to do some preprocessing. For that, we will define two Ray AIR Preprocessors using the {class}`~ray.data.preprocessors.BatchMapper` API, allowing us to define functions that will be applied on batches of data.\n", "\n", "The `split_text` function will take the single string and split it into separate lines, removing empty lines and character names ending with ':' (e.g. 'ROMEO:'). The `tokenize` function will take the lines and tokenize them using the 🤗 Tokenizer associated with the model, ensuring that each entry has the same length (`block_size`) by padding and truncating. This is necessary for training.\n", "\n", "```{note}\n", "This preprocessing can be done in other ways. A common pattern is to tokenize first, and then split the obtained tokens into equally-sized blocks.\n", "```" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "block_size = 512" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "from ray.data.preprocessors import BatchMapper\n", "\n", "\n", "def split_text(batch: pd.DataFrame) -> pd.DataFrame:\n", "    text = list(batch[\"text\"])\n", "    flat_text = \"\".join(text)\n", "    split_text = [\n", "        x.strip()\n", "        for x in flat_text.split(\"\\n\")\n", "        if x.strip() and not x.strip()[-1] == \":\"\n", "    ]\n", "    return pd.DataFrame(split_text, columns=[\"text\"])\n", "\n", "\n", "def tokenize(batch: pd.DataFrame) -> dict:\n", "    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)\n", "    tokenizer.pad_token = tokenizer.eos_token\n", "    ret = tokenizer(\n", "        list(batch[\"text\"]),\n", "        truncation=True,\n", "        max_length=block_size,\n", "        padding=\"max_length\",\n", "        return_tensors=\"np\",\n", "    )\n", "    ret[\"labels\"] = ret[\"input_ids\"].copy()\n", "    return dict(ret)\n", "\n", "\n", "splitter = BatchMapper(split_text, batch_format=\"pandas\")\n", "tokenizer = BatchMapper(tokenize, batch_format=\"pandas\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Fine-tuning the model with Ray AIR <a name=\"train\"></a>\n", "\n", "We can now configure Ray AIR's {class}`~ray.train.huggingface.TransformersTrainer` to perform distributed fine-tuning of the model. In order to do that, we specify a `trainer_init_per_worker` function, which creates a 🤗 Transformers `Trainer` that will be distributed by Ray using Distributed Data Parallelism (using the PyTorch Distributed backend internally). This means that each worker has its own copy of the model but operates on different data. At the end of each step, all the workers sync gradients.\n", "\n", "Because GPT-J is a relatively large model, it may not be possible to fit it on smaller GPU types (<=16 GB of GPU memory). To deal with that issue, we can use [DeepSpeed](https://github.com/microsoft/DeepSpeed), a library that optimizes the training process and allows us to (among other things) offload and partition optimizer and parameter states, reducing GPU memory usage. Furthermore, DeepSpeed ZeRO Stage 3 allows us to load large models without running out of memory.\n", "\n", "The 🤗 Transformers and Ray AIR integration ({class}`~ray.train.huggingface.TransformersTrainer`) allows you to easily configure and use DDP and DeepSpeed. 
All you need to do is specify the DeepSpeed configuration in the [`TrainingArguments`](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments) object.\n", "\n", "```{tip}\n", "There are many DeepSpeed settings that allow you to trade off speed for memory usage. The settings used below are tailored to the cluster setup used (16 g4dn.4xlarge nodes) and a per device batch size of 16. Some things to keep in mind:\n", "- If your GPUs support bfloat16, use that instead of float16 mixed precision to get better performance and prevent overflows. Replace `fp16=True` with `bf16=True` in `TrainingArguments`.\n", "- If you are running out of GPU memory, try reducing the batch size (defined in the cell below the next one) or setting `\"overlap_comm\": False` in the DeepSpeed config.\n", "- If you are running out of RAM, try adding more nodes to your cluster, using nodes with more RAM, setting `\"pin_memory\": False` in the DeepSpeed config, reducing the batch size, or removing `\"offload_param\"` from the DeepSpeed config.\n", "\n", "For more information on DeepSpeed configuration, refer to the [Hugging Face documentation](https://huggingface.co/docs/transformers/main_classes/deepspeed) and the [DeepSpeed documentation](https://www.deepspeed.ai/docs/config-json/).\n", "\n", "Additionally, if you prefer a lower-level API, the logic below can be expressed as an [Accelerate training loop](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py) distributed by a Ray AIR {class}`~ray.train.torch.torch_trainer.TorchTrainer`.\n", "```\n", "\n", "#### Training speed\n", "\n", "As we are using data parallelism, each worker operates on its own shard of the data. The batch size set in `TrainingArguments` is the **per device batch size** (per worker batch size). By changing the number of workers, we can change the **effective batch size** and thus the time needed for training to complete. The effective batch size is calculated as `per device batch size * number of workers * number of gradient accumulation steps` (see the short sanity check after the timings below). As we add more workers, the effective batch size rises, and we therefore need less time to complete a full epoch. While the speedup is not exactly linear due to extra communication overhead, in many cases it can be close to linear.\n", "\n", "The preprocessed dataset has 1348 examples. We have set the per device batch size to 16.\n", "\n", "* With 16 g4dn.4xlarge nodes, the effective batch size was 256, which comes out to 85 steps per epoch. One epoch took **~2440 seconds** (including initialization time).\n", "\n", "* With 32 g4dn.4xlarge nodes, the effective batch size was 512, which comes out to 43 steps per epoch. One epoch took **~1280 seconds** (including initialization time)."
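, "\n", "As a quick sanity check of the formula above, here is a minimal sketch (plain Python, not part of the training code) that reproduces the effective batch sizes from the timings just listed:\n", "\n", "```python\n", "# Effective batch size = per device batch size * number of workers\n", "#                        * number of gradient accumulation steps.\n", "per_device_batch_size = 16\n", "gradient_accumulation_steps = 1\n", "\n", "for workers in (16, 32):\n", "    effective = per_device_batch_size * workers * gradient_accumulation_steps\n", "    print(f\"{workers} workers -> effective batch size {effective}\")\n", "\n", "# 16 workers -> effective batch size 256\n", "# 32 workers -> effective batch size 512\n", "```"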
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "import evaluate\n", "from transformers import Trainer, TrainingArguments\n", "from transformers import (\n", "    GPTJForCausalLM,\n", "    AutoTokenizer,\n", "    default_data_collator,\n", ")\n", "from transformers.utils.logging import disable_progress_bar, enable_progress_bar\n", "import torch\n", "\n", "from ray import train\n", "\n", "\n", "def trainer_init_per_worker(train_dataset, eval_dataset=None, **config):\n", "    # Use the actual number of CPUs assigned by Ray\n", "    os.environ[\"OMP_NUM_THREADS\"] = str(\n", "        train.get_context().get_trial_resources().bundles[-1].get(\"CPU\", 1)\n", "    )\n", "    # Enable tf32 for better performance\n", "    torch.backends.cuda.matmul.allow_tf32 = True\n", "\n", "    batch_size = config.get(\"batch_size\", 4)\n", "    epochs = config.get(\"epochs\", 2)\n", "    warmup_steps = config.get(\"warmup_steps\", 0)\n", "    learning_rate = config.get(\"learning_rate\", 0.00002)\n", "    weight_decay = config.get(\"weight_decay\", 0.01)\n", "\n", "    deepspeed = {\n", "        \"fp16\": {\n", "            \"enabled\": \"auto\",\n", "            \"initial_scale_power\": 8,\n", "        },\n", "        \"bf16\": {\"enabled\": \"auto\"},\n", "        \"optimizer\": {\n", "            \"type\": \"AdamW\",\n", "            \"params\": {\n", "                \"lr\": \"auto\",\n", "                \"betas\": \"auto\",\n", "                \"eps\": \"auto\",\n", "            },\n", "        },\n", "        \"zero_optimization\": {\n", "            \"stage\": 3,\n", "            \"offload_optimizer\": {\n", "                \"device\": \"cpu\",\n", "                \"pin_memory\": True,\n", "            },\n", "            \"offload_param\": {\n", "                \"device\": \"cpu\",\n", "                \"pin_memory\": True,\n", "            },\n", "            \"overlap_comm\": True,\n", "            \"contiguous_gradients\": True,\n", "            \"reduce_bucket_size\": \"auto\",\n", "            \"stage3_prefetch_bucket_size\": \"auto\",\n", "            \"stage3_param_persistence_threshold\": \"auto\",\n", "            \"gather_16bit_weights_on_model_save\": True,\n", "            \"round_robin_gradients\": True,\n", "        },\n", "        \"gradient_accumulation_steps\": \"auto\",\n", "        \"gradient_clipping\": \"auto\",\n", "        \"steps_per_print\": 10,\n", "        \"train_batch_size\": \"auto\",\n", "        \"train_micro_batch_size_per_gpu\": \"auto\",\n", "        \"wall_clock_breakdown\": False,\n", "    }\n", "\n", "    print(\"Preparing training arguments\")\n", "    training_args = TrainingArguments(\n", "        \"output\",\n", "        per_device_train_batch_size=batch_size,\n", "        logging_steps=1,\n", "        save_strategy=\"no\",\n", "        per_device_eval_batch_size=batch_size,\n", "        learning_rate=learning_rate,\n", "        weight_decay=weight_decay,\n", "        warmup_steps=warmup_steps,\n", "        label_names=[\"input_ids\", \"attention_mask\"],\n", "        num_train_epochs=epochs,\n", "        push_to_hub=False,\n", "        disable_tqdm=True,  # declutter the output a little\n", "        fp16=True,\n", "        gradient_checkpointing=True,\n", "        deepspeed=deepspeed,\n", "    )\n", "    disable_progress_bar()\n", "\n", "    tokenizer = AutoTokenizer.from_pretrained(model_name)\n", "    tokenizer.pad_token = tokenizer.eos_token\n", "\n", "    print(\"Loading model\")\n", "\n", "    model = GPTJForCausalLM.from_pretrained(model_name, use_cache=False)\n", "    model.resize_token_embeddings(len(tokenizer))\n", "\n", "    print(\"Model loaded\")\n", "\n", "    enable_progress_bar()\n", "\n", "    metric = evaluate.load(\"accuracy\")\n", "\n", "    def compute_metrics(eval_pred):\n", "        logits, labels = eval_pred\n", "        predictions = np.argmax(logits, axis=-1)\n", "        return metric.compute(predictions=predictions, references=labels)\n", "\n", "    trainer = Trainer(\n", "        model=model,\n", "        args=training_args,\n", "        train_dataset=train_dataset,\n",
"        eval_dataset=eval_dataset,\n", "        compute_metrics=compute_metrics,\n", "        tokenizer=tokenizer,\n", "        data_collator=default_data_collator,\n", "    )\n", "    return trainer" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "With our `trainer_init_per_worker` complete, we can now instantiate the {class}`~ray.train.huggingface.TransformersTrainer`. Aside from the function, we set the `scaling_config`, controlling the number of workers and the resources used, and the `datasets` we will use for training and evaluation.\n", "\n", "We pass the preprocessors we have defined earlier as an argument, wrapped in a {class}`~ray.data.preprocessors.chain.Chain`. The preprocessor will be included with the returned {class}`~ray.train.Checkpoint`, meaning it will also be applied during inference.\n", "\n", "```{note}\n", "Since this example runs with multiple nodes, we need to persist checkpoints\n", "and other outputs to some external storage for access after training has completed.\n", "**You should set up cloud storage or NFS, then replace `storage_path` with your own cloud bucket URI or NFS path.**\n", "\n", "See the [storage guide](tune-storage-options) for more details.\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "storage_path = \"s3://your-bucket-here\"  # TODO: Set up cloud storage\n", "# storage_path = \"/mnt/path/to/nfs\"  # TODO: Alternatively, set up NFS" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "storage_path = \"/mnt/cluster_storage\"" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "from ray.train.huggingface import TransformersTrainer\n", "from ray.train import RunConfig, ScalingConfig\n", "from ray.data.preprocessors import Chain\n", "\n", "\n", "trainer = TransformersTrainer(\n", "    trainer_init_per_worker=trainer_init_per_worker,\n", "    trainer_init_config={\n", "        \"batch_size\": 16,  # per device\n", "        \"epochs\": 1,\n", "    },\n", "    scaling_config=ScalingConfig(\n", "        num_workers=num_workers,\n", "        use_gpu=use_gpu,\n", "        resources_per_worker={\"GPU\": 1, \"CPU\": cpus_per_worker},\n", "    ),\n", "    datasets={\"train\": ray_datasets[\"train\"], \"evaluation\": ray_datasets[\"validation\"]},\n", "    preprocessor=Chain(splitter, tokenizer),\n", "    run_config=RunConfig(storage_path=storage_path),\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we call the {meth}`~ray.train.huggingface.TransformersTrainer.fit` method to start training with Ray AIR. We will save the {class}`~ray.train.Result` object to a variable so we can access metrics and checkpoints." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2023-03-06 17:18:41
Running for: 00:43:11.46
Memory: 31.9/62.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Resources requested: 0/256 CPUs, 0/16 GPUs, 0.0/675.29 GiB heap, 0.0/291.99 GiB objects (0.0/16.0 accelerator_type:T4)\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc iter total time (s) loss learning_rate epoch
TransformersTrainer_f623d_00000TERMINATED10.0.30.196:30861 85 2579.30.0715 4.70588e-07 1
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) 2023-03-06 16:36:00,447\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1964, ip=10.0.26.83) /tmp/ray/session_2023-03-06_15-55-37_997701_162/runtime_resources/py_modules_files/_ray_pkg_f864ba6869d6802c/ray/train/_internal/dataset_iterator.py:64: UserWarning: session.get_dataset_shard returns a ray.data.DataIterator instead of a Dataset/DatasetPipeline as of Ray v2.3. Use iter_torch_batches(), to_tf(), or iter_batches() to iterate over one epoch. See https://docs.ray.io/en/latest/data/api/dataset_iterator.html for full DataIterator docs.\n", "(RayTrainWorker pid=1964, ip=10.0.26.83) warnings.warn(\n", "(RayTrainWorker pid=1964, ip=10.0.26.83) 2023-03-06 16:36:00,453\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1963, ip=10.0.54.163) /tmp/ray/session_2023-03-06_15-55-37_997701_162/runtime_resources/py_modules_files/_ray_pkg_f864ba6869d6802c/ray/train/_internal/dataset_iterator.py:64: UserWarning: session.get_dataset_shard returns a ray.data.DataIterator instead of a Dataset/DatasetPipeline as of Ray v2.3. Use iter_torch_batches(), to_tf(), or iter_batches() to iterate over one epoch. See https://docs.ray.io/en/latest/data/api/dataset_iterator.html for full DataIterator docs.\n", "(RayTrainWorker pid=1963, ip=10.0.54.163) warnings.warn(\n", "(RayTrainWorker pid=1963, ip=10.0.54.163) 2023-03-06 16:36:00,452\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1954, ip=10.0.15.115) /tmp/ray/session_2023-03-06_15-55-37_997701_162/runtime_resources/py_modules_files/_ray_pkg_f864ba6869d6802c/ray/train/_internal/dataset_iterator.py:64: UserWarning: session.get_dataset_shard returns a ray.data.DataIterator instead of a Dataset/DatasetPipeline as of Ray v2.3. Use iter_torch_batches(), to_tf(), or iter_batches() to iterate over one epoch. See https://docs.ray.io/en/latest/data/api/dataset_iterator.html for full DataIterator docs.\n", "(RayTrainWorker pid=1954, ip=10.0.15.115) warnings.warn(\n", "(RayTrainWorker pid=1954, ip=10.0.15.115) 2023-03-06 16:36:00,452\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1955, ip=10.0.58.255) /tmp/ray/session_2023-03-06_15-55-37_997701_162/runtime_resources/py_modules_files/_ray_pkg_f864ba6869d6802c/ray/train/_internal/dataset_iterator.py:64: UserWarning: session.get_dataset_shard returns a ray.data.DataIterator instead of a Dataset/DatasetPipeline as of Ray v2.3. Use iter_torch_batches(), to_tf(), or iter_batches() to iterate over one epoch. 
See https://docs.ray.io/en/latest/data/api/dataset_iterator.html for full DataIterator docs.\n", "(RayTrainWorker pid=1955, ip=10.0.58.255) warnings.warn(\n", "(RayTrainWorker pid=1955, ip=10.0.58.255) 2023-03-06 16:36:00,453\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1942, ip=10.0.57.85) 2023-03-06 16:36:00,452\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1963, ip=10.0.29.205) 2023-03-06 16:36:00,452\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n", "(RayTrainWorker pid=1942, ip=10.0.51.113) 2023-03-06 16:36:00,454\tINFO bulk_executor.py:41 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[BatchMapper]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) Preparing training arguments\n", "(RayTrainWorker pid=31281) Loading model\n", "(RayTrainWorker pid=31281) [2023-03-06 16:37:21,252] [INFO] [partition_parameters.py:415:__exit__] finished initializing model with 6.05B parameters\n", "(RayTrainWorker pid=31281) Model loaded\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) Using cuda_amp half precision backend\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) [2023-03-06 16:38:03,431] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed info: version=0.8.1, git-hash=unknown, git-branch=unknown\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:03,450] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) ***** Running training *****\n", "(RayTrainWorker pid=31281) Num examples = 1348\n", "(RayTrainWorker pid=31281) Num Epochs = 1\n", "(RayTrainWorker pid=31281) Instantaneous batch size per device = 16\n", "(RayTrainWorker pid=31281) Total train batch size (w. parallel, distributed & accumulation) = 256\n", "(RayTrainWorker pid=31281) Gradient Accumulation steps = 1\n", "(RayTrainWorker pid=31281) Total optimization steps = 85\n", "(RayTrainWorker pid=31281) Number of trainable parameters = 0\n", "(RayTrainWorker pid=31281) /home/ray/anaconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py:2387: UserWarning: torch.distributed._all_gather_base is a private function and will be deprecated. 
Please use torch.distributed.all_gather_into_tensor instead.\n", "(RayTrainWorker pid=31281) warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,024] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,024] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed using client callable to create LR scheduler\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,025] [INFO] [logging.py:75:log_dist] [Rank 0] DeepSpeed LR Scheduler = \n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,025] [INFO] [logging.py:75:log_dist] [Rank 0] step=0, skipped=0, lr=[2e-05], mom=[[0.9, 0.999]]\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,025] [INFO] [config.py:1009:print] DeepSpeedEngine configuration:\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,026] [INFO] [config.py:1013:print] activation_checkpointing_config {\n", "(RayTrainWorker pid=31281) \"partition_activations\": false, \n", "(RayTrainWorker pid=31281) \"contiguous_memory_optimization\": false, \n", "(RayTrainWorker pid=31281) \"cpu_checkpointing\": false, \n", "(RayTrainWorker pid=31281) \"number_checkpoints\": null, \n", "(RayTrainWorker pid=31281) \"synchronize_checkpoint_boundary\": false, \n", "(RayTrainWorker pid=31281) \"profile\": false\n", "(RayTrainWorker pid=31281) }\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,026] [INFO] [config.py:1013:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,026] [INFO] [config.py:1013:print] amp_enabled .................. False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,026] [INFO] [config.py:1013:print] amp_params ................... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] autotuning_config ............ 
{\n", "(RayTrainWorker pid=31281) \"enabled\": false, \n", "(RayTrainWorker pid=31281) \"start_step\": null, \n", "(RayTrainWorker pid=31281) \"end_step\": null, \n", "(RayTrainWorker pid=31281) \"metric_path\": null, \n", "(RayTrainWorker pid=31281) \"arg_mappings\": null, \n", "(RayTrainWorker pid=31281) \"metric\": \"throughput\", \n", "(RayTrainWorker pid=31281) \"model_info\": null, \n", "(RayTrainWorker pid=31281) \"results_dir\": \"autotuning_results\", \n", "(RayTrainWorker pid=31281) \"exps_dir\": \"autotuning_exps\", \n", "(RayTrainWorker pid=31281) \"overwrite\": true, \n", "(RayTrainWorker pid=31281) \"fast\": true, \n", "(RayTrainWorker pid=31281) \"start_profile_step\": 3, \n", "(RayTrainWorker pid=31281) \"end_profile_step\": 5, \n", "(RayTrainWorker pid=31281) \"tuner_type\": \"gridsearch\", \n", "(RayTrainWorker pid=31281) \"tuner_early_stopping\": 5, \n", "(RayTrainWorker pid=31281) \"tuner_num_trials\": 50, \n", "(RayTrainWorker pid=31281) \"model_info_path\": null, \n", "(RayTrainWorker pid=31281) \"mp_size\": 1, \n", "(RayTrainWorker pid=31281) \"max_train_batch_size\": null, \n", "(RayTrainWorker pid=31281) \"min_train_batch_size\": 1, \n", "(RayTrainWorker pid=31281) \"max_train_micro_batch_size_per_gpu\": 1.024000e+03, \n", "(RayTrainWorker pid=31281) \"min_train_micro_batch_size_per_gpu\": 1, \n", "(RayTrainWorker pid=31281) \"num_tuning_micro_batch_sizes\": 3\n", "(RayTrainWorker pid=31281) }\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] bfloat16_enabled ............. False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] checkpoint_parallel_write_pipeline False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] checkpoint_tag_validation_enabled True\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] checkpoint_tag_validation_fail False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] comms_config ................. \n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] communication_data_type ...... None\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] curriculum_enabled_legacy .... 
False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] curriculum_params_legacy ..... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] data_efficiency_enabled ...... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] dataloader_drop_last ......... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] disable_allgather ............ False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] dump_state ................... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] dynamic_loss_scale_args ...... {'init_scale': 256, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_enabled ........... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_gas_boundary_resolution 1\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_layer_name ........ bert.encoder.layer\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_layer_num ......... 0\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_max_iter .......... 100\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_stability ......... 1e-06\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_tol ............... 0.01\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] eigenvalue_verbose ........... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] elasticity_enabled ........... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] flops_profiler_config ........ {\n", "(RayTrainWorker pid=31281) \"enabled\": false, \n", "(RayTrainWorker pid=31281) \"profile_step\": 1, \n", "(RayTrainWorker pid=31281) \"module_depth\": -1, \n", "(RayTrainWorker pid=31281) \"top_modules\": 1, \n", "(RayTrainWorker pid=31281) \"detailed\": true, \n", "(RayTrainWorker pid=31281) \"output_file\": null\n", "(RayTrainWorker pid=31281) }\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] fp16_auto_cast ............... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] fp16_enabled ................. True\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] fp16_master_weights_and_gradients False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] global_rank .................. 0\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] grad_accum_dtype ............. 
None\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,027] [INFO] [config.py:1013:print] gradient_accumulation_steps .. 1\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] gradient_clipping ............ 1.0\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] gradient_predivide_factor .... 1.0\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] initial_dynamic_scale ........ 256\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] load_universal_checkpoint .... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] loss_scale ................... 0\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] memory_breakdown ............. False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] nebula_config ................ {\n", "(RayTrainWorker pid=31281) \"enabled\": false, \n", "(RayTrainWorker pid=31281) \"persistent_storage_path\": null, \n", "(RayTrainWorker pid=31281) \"persistent_time_interval\": 100, \n", "(RayTrainWorker pid=31281) \"num_of_version_in_retention\": 2, \n", "(RayTrainWorker pid=31281) \"enable_nebula_load\": true, \n", "(RayTrainWorker pid=31281) \"load_path\": null\n", "(RayTrainWorker pid=31281) }\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] optimizer_legacy_fusion ...... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] optimizer_name ............... adamw\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] optimizer_params ............. {'lr': 2e-05, 'betas': [0.9, 0.999], 'eps': 1e-08}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] pld_enabled .................. False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] pld_params ................... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] prescale_gradients ........... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] scheduler_name ............... None\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] scheduler_params ............. None\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] sparse_attention ............. None\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] sparse_gradients_enabled ..... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] steps_per_print .............. 10\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] train_batch_size ............. 
256\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] train_micro_batch_size_per_gpu 16\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] use_node_local_storage ....... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] wall_clock_breakdown ......... False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] world_size ................... 16\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] zero_allow_untested_optimizer False\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=16777216 allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='cpu', nvme_path=None, buffer_count=5, buffer_size=100,000,000, max_in_cpu=1,000,000,000, pin_memory=True) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=True, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False) sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=15099494 param_persistence_threshold=40960 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=True stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=True\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] zero_enabled ................. True\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,028] [INFO] [config.py:1013:print] zero_optimization_stage ...... 
3\n", "(RayTrainWorker pid=31281) [2023-03-06 16:38:25,029] [INFO] [config.py:998:print_user_config] json = {\n", "(RayTrainWorker pid=31281) \"fp16\": {\n", "(RayTrainWorker pid=31281) \"enabled\": true, \n", "(RayTrainWorker pid=31281) \"initial_scale_power\": 8\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"bf16\": {\n", "(RayTrainWorker pid=31281) \"enabled\": false\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"optimizer\": {\n", "(RayTrainWorker pid=31281) \"type\": \"AdamW\", \n", "(RayTrainWorker pid=31281) \"params\": {\n", "(RayTrainWorker pid=31281) \"lr\": 2e-05, \n", "(RayTrainWorker pid=31281) \"betas\": [0.9, 0.999], \n", "(RayTrainWorker pid=31281) \"eps\": 1e-08\n", "(RayTrainWorker pid=31281) }\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"zero_optimization\": {\n", "(RayTrainWorker pid=31281) \"stage\": 3, \n", "(RayTrainWorker pid=31281) \"offload_optimizer\": {\n", "(RayTrainWorker pid=31281) \"device\": \"cpu\", \n", "(RayTrainWorker pid=31281) \"pin_memory\": true\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"offload_param\": {\n", "(RayTrainWorker pid=31281) \"device\": \"cpu\", \n", "(RayTrainWorker pid=31281) \"pin_memory\": true\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"overlap_comm\": true, \n", "(RayTrainWorker pid=31281) \"contiguous_gradients\": true, \n", "(RayTrainWorker pid=31281) \"reduce_bucket_size\": 1.677722e+07, \n", "(RayTrainWorker pid=31281) \"stage3_prefetch_bucket_size\": 1.509949e+07, \n", "(RayTrainWorker pid=31281) \"stage3_param_persistence_threshold\": 4.096000e+04, \n", "(RayTrainWorker pid=31281) \"gather_16bit_weights_on_model_save\": true, \n", "(RayTrainWorker pid=31281) \"round_robin_gradients\": true\n", "(RayTrainWorker pid=31281) }, \n", "(RayTrainWorker pid=31281) \"gradient_accumulation_steps\": 1, \n", "(RayTrainWorker pid=31281) \"gradient_clipping\": 1.0, \n", "(RayTrainWorker pid=31281) \"steps_per_print\": 10, \n", "(RayTrainWorker pid=31281) \"train_batch_size\": 256, \n", "(RayTrainWorker pid=31281) \"train_micro_batch_size_per_gpu\": 16, \n", "(RayTrainWorker pid=31281) \"wall_clock_breakdown\": false\n", "(RayTrainWorker pid=31281) }\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) Model weights saved in output/checkpoint-85/pytorch_model.bin\n", "(RayTrainWorker pid=31281) tokenizer config file saved in output/checkpoint-85/tokenizer_config.json\n", "(RayTrainWorker pid=31281) Special tokens file saved in output/checkpoint-85/special_tokens_map.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) [2023-03-06 17:18:13,320] [INFO] [engine.py:3516:save_16bit_model] Saving model weights to output/checkpoint-85/pytorch_model.bin\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:13,320] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving output/checkpoint-85/pytorch_model.bin...\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:29,075] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved output/checkpoint-85/pytorch_model.bin.\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:29,087] [INFO] [logging.py:75:log_dist] [Rank 0] [Torch] Checkpoint global_step85 is begin to save!\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:29,109] [INFO] [logging.py:75:log_dist] [Rank 0] Saving model checkpoint: output/checkpoint-85/global_step85/zero_pp_rank_0_mp_rank_00_model_states.pt\n", "(RayTrainWorker pid=31281) 
[2023-03-06 17:18:29,109] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving output/checkpoint-85/global_step85/zero_pp_rank_0_mp_rank_00_model_states.pt...\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:37,982] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved output/checkpoint-85/global_step85/zero_pp_rank_0_mp_rank_00_optim_states.pt.\n", "(RayTrainWorker pid=31281) [2023-03-06 17:18:37,984] [INFO] [engine.py:3407:_save_zero_checkpoint] zero checkpoint saved output/checkpoint-85/global_step85/zero_pp_rank_0_mp_rank_00_optim_states.pt\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) \n", "(RayTrainWorker pid=31281) \n", "(RayTrainWorker pid=31281) Training completed. Do not forget to share your model on huggingface.co/models =)\n", "(RayTrainWorker pid=31281) \n", "(RayTrainWorker pid=31281) \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(RayTrainWorker pid=31281) [2023-03-06 17:18:38,143] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step85 is ready now!\n", "(RayTrainWorker pid=31281) {'train_runtime': 2413.1243, 'train_samples_per_second': 0.559, 'train_steps_per_second': 0.035, 'train_loss': 0.32492108064539293, 'epoch': 1.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2023-03-06 17:18:41,018\tINFO tune.py:825 -- Total run time: 2591.59 seconds (2591.46 seconds for the tuning loop).\n" ] } ], "source": [ "results = trainer.fit()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "You can use the returned {class}`~ray.train.Result` object to access metrics and the Ray AIR {class}`~ray.train.Checkpoint` associated with the last iteration." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TransformersCheckpoint(local_path=/home/ray/ray_results/TransformersTrainer_2023-03-06_16-35-29/TransformersTrainer_f623d_00000_0_2023-03-06_16-35-30/checkpoint_000000)" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "checkpoint = results.checkpoint\n", "checkpoint" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Generate text from prompt <a name=\"predict\"></a>\n", "\n", "We can use the {class}`~ray.train.huggingface.huggingface_predictor.TransformersPredictor` to generate predictions from our fine-tuned model.\n", "\n", "```{tip}\n", "For large-scale batch inference, see {ref}`End-to-end: Offline Batch Inference `.\n", "```\n", "\n", "Because the {class}`~ray.train.huggingface.huggingface_predictor.TransformersPredictor` uses a 🤗 Transformers [`pipeline`](https://huggingface.co/docs/transformers/en/main_classes/pipelines) under the hood, we disable the tokenizer AIR Preprocessor we have used for training and let the `pipeline` tokenize the data itself." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "checkpoint.set_preprocessor(None)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We also set `device_map=\"auto\"` so that the model is automatically placed on the right device, and set the `task` to `\"text-generation\"`. The `predict` method passes the arguments to a 🤗 Transformers `pipeline` call."
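, "\n", "For reference, the `predict` call in the next cell is roughly equivalent to the following plain 🤗 Transformers usage (a minimal sketch, assuming the fine-tuned weights and tokenizer have been saved to a local directory; `checkpoint_dir` is a hypothetical path, not a variable defined in this notebook):\n", "\n", "```python\n", "import torch\n", "from transformers import pipeline\n", "\n", "# Hypothetical: a local directory containing the fine-tuned model and\n", "# tokenizer, e.g. the contents of the checkpoint returned by trainer.fit().\n", "checkpoint_dir = \"output/checkpoint-85\"\n", "\n", "pipe = pipeline(\n", "    task=\"text-generation\",\n", "    model=checkpoint_dir,\n", "    torch_dtype=torch.float16,\n", "    device_map=\"auto\",\n", ")\n", "print(pipe(\"Romeo and Juliet\", do_sample=True, temperature=0.9, min_length=32, max_length=128))\n", "```"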
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from ray.train.huggingface import TransformersPredictor\n", "import pandas as pd\n", "import torch\n", "\n", "prompts = pd.DataFrame([\"Romeo and Juliet\", \"Romeo\", \"Juliet\"], columns=[\"text\"])\n", "\n", "# Predict on the head node.\n", "predictor = TransformersPredictor.from_checkpoint(\n", "    checkpoint=checkpoint,\n", "    task=\"text-generation\",\n", "    torch_dtype=torch.float16 if use_gpu else None,\n", "    device_map=\"auto\",\n", "    use_gpu=use_gpu,\n", ")\n", "prediction = predictor.predict(\n", "    prompts,\n", "    do_sample=True,\n", "    temperature=0.9,\n", "    min_length=32,\n", "    max_length=128,\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
generated_text
0Romeo and Juliet, they are married: and it is ...
1Romeo, thou art Romeo and a Montague; for only...
2Juliet's name; but I do not sound an ear to na...
\n", "
" ], "text/plain": [ " generated_text\n", "0 Romeo and Juliet, they are married: and it is ...\n", "1 Romeo, thou art Romeo and a Montague; for only...\n", "2 Juliet's name; but I do not sound an ear to na..." ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prediction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "orphan": true, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "vscode": { "interpreter": { "hash": "3c0d54d489a08ae47a06eae2fd00ff032d6cddb527c382959b7b2575f6a8167f" } } }, "nbformat": 4, "nbformat_minor": 2 }