{ "cells": [ { "cell_type": "markdown", "id": "8475ffad-e54c-440e-b735-3bb51a54f1a1", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "(kuberay-rayjob-quickstart)=\n", "\n", "# RayJob Quickstart\n", "\n", "## Prerequisites\n", "\n", "* KubeRay v0.6.0 or higher\n", " * KubeRay v0.6.0 or v1.0.0: Ray 1.10 or higher.\n", " * KubeRay v1.1.1 or newer is highly recommended: Ray 2.8.0 or higher.\n", "\n", "## What's a RayJob?\n", "\n", "A RayJob manages two aspects:\n", "\n", "* **RayCluster**: A RayCluster custom resource manages all Pods in a Ray cluster, including a head Pod and multiple worker Pods.\n", "* **Job**: A Kubernetes Job runs `ray job submit` to submit a Ray job to the RayCluster.\n", "\n", "## What does the RayJob provide?\n", "\n", "With RayJob, KubeRay automatically creates a RayCluster and submits a job when the cluster is ready. You can also configure RayJob to automatically delete the RayCluster once the Ray job finishes.\n", "\n", "To understand the following content better, you should understand the difference between:\n", "* RayJob: A Kubernetes custom resource definition provided by KubeRay.\n", "* Ray job: A Ray job is a packaged Ray application that can run on a remote Ray cluster. See [this document](jobs-overview) for more details.\n", "* Submitter: The submitter is a Kubernetes Job that runs `ray job submit` to submit a Ray job to the RayCluster.\n", "\n", "## RayJob Configuration\n", "\n", "* RayCluster configuration\n", " * `rayClusterSpec` - Defines the **RayCluster** custom resource to run the Ray job on.\n", " * `clusterSelector` - Use existing **RayCluster** custom resources to run the Ray job instead of creating a new one. See [ray-job.use-existing-raycluster.yaml](https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-job.use-existing-raycluster.yaml) for example configurations.\n", "* Ray job configuration\n", " * `entrypoint` - The submitter runs `ray job submit --address ... --submission-id ... -- $entrypoint` to submit a Ray job to the RayCluster.\n", " * `runtimeEnvYAML` (Optional): A runtime environment that describes the dependencies the Ray job needs to run, including files, packages, environment variables, and more. Provide the configuration as a multi-line YAML string.\n", " Example:\n", "\n", " ```yaml\n", " spec:\n", " runtimeEnvYAML: |\n", " pip:\n", " - requests==2.26.0\n", " - pendulum==2.1.2\n", " env_vars:\n", " KEY: \"VALUE\"\n", " ```\n", "\n", " See {ref}`Runtime Environments ` for more details. _(New in KubeRay version 1.0.0)_\n", " * `jobId` (Optional): Defines the submission ID for the Ray job. If not provided, KubeRay generates one automatically. See {ref}`Ray Jobs CLI API Reference ` for more details about the submission ID.\n", " * `metadata` (Optional): See {ref}`Ray Jobs CLI API Reference ` for more details about the `--metadata-json` option.\n", " * `entrypointNumCpus` / `entrypointNumGpus` / `entrypointResources` (Optional): See {ref}`Ray Jobs CLI API Reference ` for more details.\n", " * `backoffLimit` (Optional, added in version 1.2.0): Specifies the number of retries before marking this RayJob failed. Each retry creates a new RayCluster. The default value is 0.\n", "* Submission configuration\n", " * `submissionMode` (Optional): `submissionMode` specifies how RayJob submits the Ray job to the RayCluster. In \"K8sJobMode\", the KubeRay operator creates a submitter Kubernetes Job to submit the Ray job. 
In \"HTTPMode\", the KubeRay operator sends a request to the RayCluster to create a Ray job. The default value is \"K8sJobMode\".\n",
"  * `submitterPodTemplate` (Optional): Defines the Pod template for the submitter Kubernetes Job. This field is only effective when `submissionMode` is \"K8sJobMode\".\n",
"    * `RAY_DASHBOARD_ADDRESS` - The KubeRay operator injects this environment variable into the submitter Pod. The value is `$HEAD_SERVICE:$DASHBOARD_PORT`.\n",
"    * `RAY_JOB_SUBMISSION_ID` - The KubeRay operator injects this environment variable into the submitter Pod. The value is the `RayJob.Status.JobId` of the RayJob.\n",
"    * Example: `ray job submit --address=http://$RAY_DASHBOARD_ADDRESS --submission-id=$RAY_JOB_SUBMISSION_ID ...`\n",
"    * See [ray-job.sample.yaml](https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-job.sample.yaml) for more details.\n",
"  * `submitterConfig` (Optional): Additional configurations for the submitter Kubernetes Job.\n",
"    * `backoffLimit` (Optional, added in version 1.2.0): The number of retries before marking the submitter Job as failed. The default value is 2.\n",
"* Automatic resource cleanup\n",
"  * `shutdownAfterJobFinishes` (Optional): Determines whether to recycle the RayCluster after the Ray job finishes. The default value is false.\n",
"  * `ttlSecondsAfterFinished` (Optional): Only works if `shutdownAfterJobFinishes` is true. The KubeRay operator deletes the RayCluster and the submitter `ttlSecondsAfterFinished` seconds after the Ray job finishes. The default value is 0.\n",
"  * `activeDeadlineSeconds` (Optional): If the RayJob doesn't transition the `JobDeploymentStatus` to `Complete` or `Failed` within `activeDeadlineSeconds`, the KubeRay operator transitions the `JobDeploymentStatus` to `Failed`, citing `DeadlineExceeded` as the reason.\n",
"  * `DELETE_RAYJOB_CR_AFTER_JOB_FINISHES` (Optional, added in version 1.2.0): Set this environment variable for the KubeRay operator, not the RayJob resource. If you set this environment variable to true and also set `shutdownAfterJobFinishes` to true, KubeRay deletes the RayJob custom resource itself. Note that KubeRay deletes all resources created by the RayJob, including the Kubernetes Job.\n",
"* Others\n",
"  * `suspend` (Optional): If `suspend` is true, KubeRay deletes both the RayCluster and the submitter. Note that Kueue also implements scheduling strategies by mutating this field. Avoid manually updating this field if you use Kueue to schedule RayJob custom resources.\n",
"  * `deletionPolicy` (Optional, alpha in v1.3.0): Indicates which resources of the RayJob are deleted upon job completion. Valid values are `DeleteCluster`, `DeleteWorkers`, `DeleteSelf`, or `DeleteNone`. If unset, the deletion policy is based on `spec.shutdownAfterJobFinishes`. This field requires the `RayJobDeletionPolicy` feature gate to be enabled.\n",
"    * `DeleteCluster` - Deletion policy to delete the RayCluster custom resource, and its Pods, on job completion.\n",
"    * `DeleteWorkers` - Deletion policy to delete only the worker Pods on job completion.\n",
"    * `DeleteSelf` - Deletion policy to delete the RayJob custom resource (and all associated resources) on job completion.\n",
"    * `DeleteNone` - Deletion policy to delete no resources on job completion.\n",
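"\n",
"Putting several of these fields together, a minimal RayJob manifest looks roughly like the following sketch. It's illustrative rather than a complete, runnable example: the container names, image tag, and worker-group sizing are placeholder values, and [ray-job.sample.yaml](https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-job.sample.yaml) contains a full manifest, including the ConfigMap that provides the sample entrypoint script.\n",
"\n",
"```yaml\n",
"apiVersion: ray.io/v1\n",
"kind: RayJob\n",
"metadata:\n",
"  name: rayjob-sample\n",
"spec:\n",
"  # Ray job to run on the cluster defined in rayClusterSpec.\n",
"  entrypoint: python /home/ray/samples/sample_code.py\n",
"  # Delete the RayCluster 10 seconds after the Ray job finishes.\n",
"  shutdownAfterJobFinishes: true\n",
"  ttlSecondsAfterFinished: 10\n",
"  # Runtime environment for the Ray job, as a multi-line YAML string.\n",
"  runtimeEnvYAML: |\n",
"    pip:\n",
"      - requests==2.26.0\n",
"  # RayCluster to create for this job.\n",
"  rayClusterSpec:\n",
"    headGroupSpec:\n",
"      rayStartParams: {}\n",
"      template:\n",
"        spec:\n",
"          containers:\n",
"            - name: ray-head\n",
"              image: rayproject/ray:2.9.0  # Placeholder tag; use the Ray version you need.\n",
"    workerGroupSpecs:\n",
"      - groupName: small-group\n",
"        replicas: 1\n",
"        minReplicas: 1\n",
"        maxReplicas: 5\n",
"        rayStartParams: {}\n",
"        template:\n",
"          spec:\n",
"            containers:\n",
"              - name: ray-worker\n",
"                image: rayproject/ray:2.9.0  # Keep the worker image identical to the head image.\n",
"```\n",
"\n",
"You could save a manifest like this and submit it with `kubectl apply -f`, but the quickstart below applies the published sample manifests instead.\n",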
"\n", "\n",
"## Example: Run a simple Ray job with RayJob\n",
"\n",
"## Step 1: Create a Kubernetes cluster with Kind\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "bceaec38-3f73-44ca-ad9a-4ecf512bd4e5", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-output" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating cluster \"kind\" ...\n", " \u001b[32m✓\u001b[0m Ensuring node image (kindest/node:v1.26.0) 🖼\n", " \u001b[32m✓\u001b[0m Preparing nodes 📦\n", " \u001b[32m✓\u001b[0m Writing configuration 📜\n", " \u001b[32m✓\u001b[0m Starting control-plane 🕹️\n", " \u001b[32m✓\u001b[0m Installing CNI 🔌\n", " \u001b[32m✓\u001b[0m Installing StorageClass 💾\n", "Set kubectl context to \"kind-kind\"\n", "You can now use your cluster with:\n", "\n", "kubectl cluster-info --context kind-kind\n", "\n", "Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/\n" ] } ], "source": [ "kind create cluster --image=kindest/node:v1.26.0" ] }, { "cell_type": "markdown", "id": "3e276964-354d-4589-b4a3-60e9fbf37c77", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 2: Install the KubeRay operator\n", "\n", "Follow the [RayCluster Quickstart](kuberay-operator-deploy) to install the latest stable KubeRay operator from the Helm repository.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "e905fc2a-3da4-4614-b98b-7123a8e14ef3", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NAME: kuberay-operator\n", "LAST DEPLOYED: Wed Apr 9 18:28:39 2025\n", "NAMESPACE: default\n", "STATUS: deployed\n", "REVISION: 1\n", "TEST SUITE: None\n", "deployment.apps/kuberay-operator condition met\n" ] } ], "source": [ "../scripts/doctest-utils.sh install_kuberay_operator" ] }, { "cell_type": "markdown", "id": "1e91f0dd-3fe3-4d2d-bda2-f0d78cc21e06", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 3: Install a RayJob\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "73de86e5-15a4-4a62-b13a-0c81d8a0fb13", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-output" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "rayjob.ray.io/rayjob-sample created\n", "configmap/ray-job-code-sample created\n" ] } ], "source": [ "kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.3.0/ray-operator/config/samples/ray-job.sample.yaml" ] }, { "cell_type": "markdown", "id": "0d4df7b4-492d-419c-b1bc-a4adb01b75a7", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 4: Verify the Kubernetes cluster status" ] }, { "cell_type": "code", "execution_count": 4, "id": "e7b6beab-d1ff-44ae-b896-dd8fc1379096", "metadata": { "editable": true,
"slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "raycluster.ray.io/rayjob-sample-raycluster-7965c condition met\n" ] } ], "source": [ "kubectl wait --for=condition=RayClusterProvisioned raycluster/$(kubectl get rayjob rayjob-sample -o jsonpath='{.status.rayClusterName}') --timeout=500s" ] }, { "cell_type": "code", "execution_count": 5, "id": "829be0de-102c-4873-be19-2f8ad2a1dfa5", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "pod/rayjob-sample-74pmj condition met\n" ] } ], "source": [ "kubectl wait --for=condition=ready pod -l job-name=rayjob-sample --timeout=500s" ] }, { "cell_type": "code", "execution_count": 6, "id": "1d7fd6fe-c4c5-4e57-9a86-58bfa1090dd8", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NAME JOB STATUS DEPLOYMENT STATUS RAY CLUSTER NAME START TIME END TIME AGE\n", "rayjob-sample Running rayjob-sample-raycluster-7965c 2025-04-09T10:29:17Z 117s\n" ] } ], "source": [ "# Step 4.1: List all RayJob custom resources in the `default` namespace.\n", "kubectl get rayjob" ] }, { "cell_type": "code", "execution_count": 7, "id": "0400d6df-3d55-45e4-9acd-fcf41c9abab8", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NAME DESIRED WORKERS AVAILABLE WORKERS CPUS MEMORY GPUS STATUS AGE\n", "rayjob-sample-raycluster-7965c 1 1 400m 0 0 ready 117s\n" ] } ], "source": [ "# Step 4.2: List all RayCluster custom resources in the `default` namespace.\n", "kubectl get raycluster" ] }, { "cell_type": "code", "execution_count": 8, "id": "92e04822-2911-4e22-a4fa-a5449c113fc9", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NAME READY STATUS RESTARTS AGE\n", "kuberay-operator-6bc45dd644-tlsfn 1/1 Running 0 2m26s\n", "rayjob-sample-raycluster-7965c-head-n6nj8 1/1 Running 0 117s\n", "rayjob-sample-raycluster-7965c-small-group-worker-nlzwx 1/1 Running 0 117s\n", "rayjob-sample-74pmj 1/1 Running 0 2s\n" ] } ], "source": [ "# Step 4.3: List all Pods in the `default` namespace.\n", "# The Pod created by the Kubernetes Job will be terminated after the Kubernetes Job finishes.\n", "kubectl get pods --sort-by='.metadata.creationTimestamp'" ] }, { "cell_type": "code", "execution_count": 9, "id": "be0caa06-5fd1-41a0-8247-31ae851dee31", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "job.batch/rayjob-sample condition met\n" ] } ], "source": [ "kubectl wait --for=condition=complete job/rayjob-sample --timeout=500s" ] }, { "cell_type": "code", "execution_count": 10, "id": "b5ffba32-64a2-47c1-846f-46898fc81c97", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SUCCEEDED\n" ] } ], "source": [ "# Step 4.4: Check the status of the RayJob.\n", "# The field `jobStatus` in the RayJob custom resource will be updated to `SUCCEEDED` and `jobDeploymentStatus`\n", "# should be `Complete` once the job finishes.\n", "kubectl get rayjobs.ray.io 
rayjob-sample -o jsonpath='{.status.jobStatus}'" ] }, { "cell_type": "code", "execution_count": 11, "id": "3c6e14d1-9e1e-4da2-bd21-c91531b75b73", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Complete\n" ] } ], "source": [ "kubectl get rayjobs.ray.io rayjob-sample -o jsonpath='{.status.jobDeploymentStatus}'" ] }, { "cell_type": "markdown", "id": "25a03c35-da92-4c4c-b671-0a8c50286cde", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "The KubeRay operator creates a RayCluster custom resource based on the `rayClusterSpec` and a submitter Kubernetes Job to submit a Ray job to the RayCluster.\n", "In this example, the `entrypoint` is `python /home/ray/samples/sample_code.py`, and `sample_code.py` is a Python script stored in a Kubernetes ConfigMap mounted to the head Pod of the RayCluster.\n", "Because the default value of `shutdownAfterJobFinishes` is false, the KubeRay operator doesn't delete the RayCluster or the submitter when the Ray job finishes.\n", "\n", "## Step 5: Check the output of the Ray job\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "b169dc31-e023-4f6c-99d7-9a39729cc05b", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2025-04-09 03:31:23,810\tINFO worker.py:1654 -- Connecting to existing Ray cluster at address: 10.244.0.6:6379...\n", "2025-04-09 03:31:23,818\tINFO worker.py:1832 -- Connected to Ray cluster. View the dashboard at \u001b[1m\u001b[32m10.244.0.6:8265 \u001b[39m\u001b[22m\n", "test_counter got 1\n", "test_counter got 2\n", "test_counter got 3\n", "test_counter got 4\n", "test_counter got 5\n", "2025-04-09 03:31:32,204\tSUCC cli.py:63 -- \u001b[32m-----------------------------------\u001b[39m\n", "2025-04-09 03:31:32,204\tSUCC cli.py:64 -- \u001b[32mJob 'rayjob-sample-jrrd2' succeeded\u001b[39m\n", "2025-04-09 03:31:32,204\tSUCC cli.py:65 -- \u001b[32m-----------------------------------\u001b[39m\n" ] } ], "source": [ "kubectl logs -l=job-name=rayjob-sample" ] }, { "cell_type": "markdown", "id": "87b8c082-784a-43fd-88d8-c4e3b646aeeb", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "The Python script `sample_code.py` used by `entrypoint` is a simple Ray script that executes a counter's increment function 5 times.\n", "\n", "## Step 6: Delete the RayJob\n" ] }, { "cell_type": "code", "execution_count": 13, "id": "9a1e25a5-720b-4ec2-a125-475b838faaed", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-output" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "rayjob.ray.io \"rayjob-sample\" deleted\n", "configmap \"ray-job-code-sample\" deleted\n" ] } ], "source": [ "kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.3.0/ray-operator/config/samples/ray-job.sample.yaml" ] }, { "cell_type": "markdown", "id": "add2827a-70e7-44e3-814d-bba03bd16cb9", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 7: Create a RayJob with `shutdownAfterJobFinishes` set to true" ] }, { "cell_type": "code", "execution_count": 14, "id": "de9c1595-34fc-45c7-8ebf-2683bd1f4ad7", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-output" ] }, "outputs": [ { "name": "stdout", 
"output_type": "stream", "text": [ "rayjob.ray.io/rayjob-sample-shutdown created\n", "configmap/ray-job-code-sample created\n" ] } ], "source": [ "kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.3.0/ray-operator/config/samples/ray-job.shutdown.yaml" ] }, { "cell_type": "markdown", "id": "83b9d4ae-a7c9-498f-8a12-169c488bf51c", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "The `ray-job.shutdown.yaml` defines a RayJob custom resource with `shutdownAfterJobFinishes: true` and `ttlSecondsAfterFinished: 10`.\n", "Hence, the KubeRay operator deletes the RayCluster 10 seconds after the Ray job finishes. Note that the submitter job isn't deleted\n", "because it contains the ray job logs and doesn't use any cluster resources once completed. In addition, the RayJob cleans up the submitter job\n", "when the RayJob is eventually deleted due to its owner reference back to the RayJob.\n", "\n", "## Step 8: Check the RayJob status\n" ] }, { "cell_type": "code", "execution_count": 15, "id": "b943432f-615e-4636-94fe-9894fb72c5bc", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "raycluster.ray.io/rayjob-sample-shutdown-raycluster-pfqsf condition met\n" ] } ], "source": [ "kubectl wait --for=condition=RayClusterProvisioned raycluster/$(kubectl get rayjob rayjob-sample-shutdown -o jsonpath='{.status.rayClusterName}') --timeout=500s" ] }, { "cell_type": "code", "execution_count": 16, "id": "766a61bb-e398-4d36-9d35-4c877a23172d", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "job.batch/rayjob-sample-shutdown condition met\n" ] } ], "source": [ "kubectl wait --for=condition=complete job/rayjob-sample-shutdown --timeout=500s" ] }, { "cell_type": "code", "execution_count": 17, "id": "26da8449-c907-42f4-8316-468c7b330eb5", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Complete\n" ] } ], "source": [ "# Wait until `jobStatus` is `SUCCEEDED` and `jobDeploymentStatus` is `Complete`.\n", "kubectl get rayjobs.ray.io rayjob-sample-shutdown -o jsonpath='{.status.jobDeploymentStatus}'" ] }, { "cell_type": "code", "execution_count": 18, "id": "82fd3db4-9891-48c7-ad6b-81baa8f829ac", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SUCCEEDED\n" ] } ], "source": [ "kubectl get rayjobs.ray.io rayjob-sample-shutdown -o jsonpath='{.status.jobStatus}'" ] }, { "cell_type": "markdown", "id": "a362a1ad-1999-47ee-bf48-d7e0b1a77572", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 9: Check if the KubeRay operator deletes the RayCluster\n" ] }, { "cell_type": "code", "execution_count": 19, "id": "db3b0795-39cc-4808-b772-51125262a8f5", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "NAME DESIRED WORKERS AVAILABLE WORKERS CPUS MEMORY GPUS STATUS AGE\n", "rayjob-sample-shutdown-raycluster-pfqsf 1 1 400m 0 0 ready 45s\n" ] } ], "source": [ "# List the RayCluster custom resources in the `default` namespace. 
The RayCluster\n", "# associated with the RayJob `rayjob-sample-shutdown` is deleted once the 10-second\n", "# `ttlSecondsAfterFinished` TTL elapses, so it may still appear immediately after the job completes.\n", "kubectl get raycluster" ] }, { "cell_type": "markdown", "id": "bec66321-69ec-4f00-a242-f40059dec006", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Step 10: Clean up\n" ] }, { "cell_type": "code", "execution_count": 20, "id": "b0e7abbd-5e22-4b05-a67d-4956ec03eab5", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [ "nbval-ignore-output", "remove-output" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Deleting cluster \"kind\" ...\n", "Deleted nodes: [\"kind-control-plane\"]\n" ] } ], "source": [ "kind delete cluster" ] }, { "cell_type": "markdown", "id": "ca0fabe4-d266-4d62-9c03-72e711abb445", "metadata": { "editable": true, "slideshow": { "slide_type": "" }, "tags": [] }, "source": [ "## Next steps\n", "\n", "* [RayJob Batch Inference Example](kuberay-batch-inference-example)\n", "* [Priority Scheduling with RayJob and Kueue](kuberay-kueue-priority-scheduling-example)\n", "* [Gang Scheduling with RayJob and Kueue](kuberay-kueue-gang-scheduling-example)" ] } ], "metadata": { "kernelspec": { "display_name": "Bash", "language": "bash", "name": "bash" }, "language_info": { "codemirror_mode": "shell", "file_extension": ".sh", "mimetype": "text/x-sh", "name": "bash" } }, "nbformat": 4, "nbformat_minor": 5 }