openapi: 3.1.0 info: title: Databricks REST API description: >- The Databricks REST API provides programmatic access to manage Databricks workspace resources including clusters, jobs, and workspace objects. All API endpoints require authentication using a personal access token or OAuth token passed via the Authorization header. The base URL is specific to your Databricks workspace deployment region. version: 2.1.0 contact: name: Databricks url: https://www.databricks.com/company/contact email: support@databricks.com license: name: Apache 2.0 url: https://www.apache.org/licenses/LICENSE-2.0 termsOfService: https://www.databricks.com/legal/terms-of-use servers: - url: https://{workspace_host}/api description: Databricks workspace API endpoint variables: workspace_host: default: adb-1234567890123456.7.azuredatabricks.net description: >- The hostname of your Databricks workspace. Format varies by cloud provider (e.g., adb-..azuredatabricks.net for Azure, .cloud.databricks.com for AWS). security: - bearerAuth: [] tags: - name: Clusters description: >- Manage Databricks clusters for running data engineering and data science workloads on Apache Spark. - name: Jobs description: >- Create and manage automated workloads including notebooks, JARs, Python scripts, and multi-task workflows. - name: Workspace description: >- Manage workspace objects such as notebooks, folders, and libraries. paths: /2.0/clusters/create: post: operationId: createCluster summary: Databricks Create a New Cluster description: >- Creates a new Spark cluster. This method acquires new instances from the cloud provider and starts the Spark driver and worker processes. The cluster is created asynchronously; use the cluster_id returned to poll for status. 
tags: - Clusters requestBody: required: true content: application/json: schema: $ref: '#/components/schemas/CreateClusterRequest' examples: CreateclusterRequestExample: summary: Default createCluster request x-microcks-default: true value: cluster_name: example_value spark_version: example_value node_type_id: '500123' driver_node_type_id: '500123' num_workers: 10 autoscale: min_workers: 10 max_workers: 10 spark_conf: example_value aws_attributes: first_on_demand: 10 availability: SPOT zone_id: '500123' instance_profile_arn: example_value spot_bid_price_percent: 10 ebs_volume_type: GENERAL_PURPOSE_SSD ebs_volume_count: 10 ebs_volume_size: 10 azure_attributes: first_on_demand: 10 availability: SPOT_AZURE spot_bid_max_price: 42.5 gcp_attributes: use_preemptible_executors: true google_service_account: example_value availability: GCP_PREEMPTIBLE custom_tags: example_value spark_env_vars: example_value autotermination_minutes: 10 enable_elastic_disk: true instance_pool_id: '500123' policy_id: '500123' enable_local_disk_encryption: true runtime_engine: STANDARD data_security_mode: NONE single_user_name: example_value init_scripts: - workspace: {} volumes: {} dbfs: {} ssh_public_keys: - example_value responses: '200': description: Cluster creation initiated successfully. content: application/json: schema: type: object properties: cluster_id: type: string description: The unique identifier of the newly created cluster. 
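# Usage sketch (non-normative comment): a minimal createCluster call with curl.
# The hostname, token, and all field values below are illustrative placeholders,
# not values defined by this specification.
#
#   curl -X POST "https://<workspace-host>/api/2.0/clusters/create" \
#     -H "Authorization: Bearer <personal-access-token>" \
#     -H "Content-Type: application/json" \
#     -d '{"cluster_name": "demo-cluster", "spark_version": "13.3.x-scala2.12",
#          "node_type_id": "Standard_DS3_v2", "num_workers": 2,
#          "autotermination_minutes": 60}'
#
# The call returns a cluster_id immediately; cluster creation itself is asynchronous.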
examples: - '1234-567890-abcde123' examples: Createcluster200Example: summary: Default createCluster 200 response x-microcks-default: true value: cluster_id: '500123' '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '403': $ref: '#/components/responses/Forbidden' '429': $ref: '#/components/responses/TooManyRequests' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/list: get: operationId: listClusters summary: Databricks List All Clusters description: >- Returns information about all clusters in the workspace, including terminated clusters. Clusters are ordered by cluster_id. tags: - Clusters parameters: - name: can_use_client in: query required: false description: Filter clusters by client compatibility. schema: type: string example: example_value responses: '200': description: Successfully retrieved the list of clusters. content: application/json: schema: type: object properties: clusters: type: array items: $ref: '#/components/schemas/ClusterDetails' examples: Listclusters200Example: summary: Default listClusters 200 response x-microcks-default: true value: clusters: - cluster_id: '500123' cluster_name: example_value spark_version: example_value node_type_id: '500123' driver_node_type_id: '500123' num_workers: 10 state: PENDING state_message: example_value start_time: 10 terminated_time: 10 last_state_loss_time: 10 last_activity_time: 10 last_restarted_time: 10 creator_user_name: example_value cluster_source: UI spark_conf: example_value custom_tags: example_value spark_env_vars: example_value autotermination_minutes: 10 enable_elastic_disk: true instance_pool_id: '500123' policy_id: '500123' data_security_mode: example_value single_user_name: example_value runtime_engine: example_value default_tags: example_value cluster_log_status: last_attempted: 10 last_exception: example_value termination_reason: code: example_value type: 
example_value parameters: example_value disk_spec: disk_count: 10 disk_size: 10 disk_type: {} executors: - {} jdbc_port: 10 spark_context_id: '500123' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/get: get: operationId: getCluster summary: Databricks Get Cluster Details description: >- Retrieves detailed information about a cluster, including its current state, configuration, and runtime properties. tags: - Clusters parameters: - name: cluster_id in: query required: true description: The unique identifier of the cluster. schema: type: string example: '500123' responses: '200': description: Successfully retrieved cluster details. content: application/json: schema: $ref: '#/components/schemas/ClusterDetails' examples: Getcluster200Example: summary: Default getCluster 200 response x-microcks-default: true value: cluster_id: '500123' cluster_name: example_value spark_version: example_value node_type_id: '500123' driver_node_type_id: '500123' num_workers: 10 autoscale: min_workers: 10 max_workers: 10 state: PENDING state_message: example_value start_time: 10 terminated_time: 10 last_state_loss_time: 10 last_activity_time: 10 last_restarted_time: 10 creator_user_name: example_value cluster_source: UI spark_conf: example_value custom_tags: example_value spark_env_vars: example_value autotermination_minutes: 10 enable_elastic_disk: true instance_pool_id: '500123' policy_id: '500123' data_security_mode: example_value single_user_name: example_value runtime_engine: example_value default_tags: example_value cluster_log_status: last_attempted: 10 last_exception: example_value termination_reason: code: example_value type: example_value parameters: example_value disk_spec: disk_count: 10 disk_size: 10 disk_type: azure_disk_volume_type: example_value ebs_volume_type: example_value driver: private_ip: example_value public_dns: example_value node_id: 
'500123' instance_id: '500123' start_timestamp: 10 host_private_ip: example_value executors: - private_ip: example_value public_dns: example_value node_id: '500123' instance_id: '500123' start_timestamp: 10 host_private_ip: example_value jdbc_port: 10 spark_context_id: '500123' '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/start: post: operationId: startCluster summary: Databricks Start a Terminated Cluster description: >- Starts a terminated cluster given its cluster_id. This is similar to creating a cluster except it uses the configuration of the previously terminated cluster. tags: - Clusters requestBody: required: true content: application/json: schema: type: object required: - cluster_id properties: cluster_id: type: string description: The cluster to start. examples: StartclusterRequestExample: summary: Default startCluster request x-microcks-default: true value: cluster_id: '500123' responses: '200': description: Cluster start initiated successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/restart: post: operationId: restartCluster summary: Databricks Restart a Cluster description: >- Restarts a Spark cluster given its cluster_id. If the cluster is not in a RUNNING state, nothing happens. tags: - Clusters requestBody: required: true content: application/json: schema: type: object required: - cluster_id properties: cluster_id: type: string description: The cluster to restart. 
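# Polling sketch (non-normative comment): because create/start/restart are
# asynchronous, callers typically poll /2.0/clusters/get until the cluster state
# reaches RUNNING (or a terminal error state). Hostname, token, and cluster ID
# are placeholders; the 30-second interval is an arbitrary choice.
#
#   while true; do
#     state=$(curl -s "https://<workspace-host>/api/2.0/clusters/get?cluster_id=<id>" \
#       -H "Authorization: Bearer <personal-access-token>" | jq -r '.state')
#     [ "$state" = "RUNNING" ] && break
#     sleep 30
#   done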
examples: RestartclusterRequestExample: summary: Default restartCluster request x-microcks-default: true value: cluster_id: '500123' responses: '200': description: Cluster restart initiated successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/delete: post: operationId: terminateCluster summary: Databricks Terminate a Cluster description: >- Terminates a Spark cluster given its cluster_id. The cluster is terminated asynchronously; its configuration is retained so the cluster can be started again later. Use the permanent-delete endpoint if you want to remove the cluster configuration entirely. tags: - Clusters requestBody: required: true content: application/json: schema: type: object required: - cluster_id properties: cluster_id: type: string description: The cluster to terminate. examples: TerminateclusterRequestExample: summary: Default terminateCluster request x-microcks-default: true value: cluster_id: '500123' responses: '200': description: Cluster termination initiated successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/permanent-delete: post: operationId: permanentDeleteCluster summary: Databricks Permanently Delete a Cluster description: >- Permanently deletes a Spark cluster. If the cluster is running, it is terminated and resources are asynchronously removed. If the cluster is terminated, it is immediately removed. A cluster can only be permanently deleted by an admin or the cluster creator. tags: - Clusters requestBody: required: true content: application/json: schema: type: object required: - cluster_id properties: cluster_id: type: string description: The cluster to permanently delete.
examples: PermanentdeleteclusterRequestExample: summary: Default permanentDeleteCluster request x-microcks-default: true value: cluster_id: '500123' responses: '200': description: Cluster permanently deleted. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/edit: post: operationId: editCluster summary: Databricks Edit Cluster Configuration description: >- Edits the configuration of a cluster to match the provided attributes. The cluster must be in a RUNNING or TERMINATED state. If the cluster is running, it will be restarted to apply the changes. tags: - Clusters requestBody: required: true content: application/json: schema: $ref: '#/components/schemas/EditClusterRequest' examples: EditclusterRequestExample: summary: Default editCluster request x-microcks-default: true value: cluster_id: '500123' cluster_name: example_value spark_version: example_value node_type_id: '500123' driver_node_type_id: '500123' num_workers: 10 autoscale: min_workers: 10 max_workers: 10 spark_conf: example_value custom_tags: example_value spark_env_vars: example_value autotermination_minutes: 10 enable_elastic_disk: true instance_pool_id: '500123' policy_id: '500123' data_security_mode: NONE single_user_name: example_value runtime_engine: STANDARD responses: '200': description: Cluster configuration updated successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/clusters/events: post: operationId: listClusterEvents summary: Databricks List Cluster Events description: >- Retrieves a list of events about the activity of a cluster. Events are returned in reverse chronological order. 
This endpoint can be used to audit cluster activity and monitor lifecycle changes. tags: - Clusters requestBody: required: true content: application/json: schema: type: object required: - cluster_id properties: cluster_id: type: string description: The ID of the cluster to retrieve events for. start_time: type: integer format: int64 description: Start timestamp in milliseconds for the event query range. end_time: type: integer format: int64 description: End timestamp in milliseconds for the event query range. order: type: string enum: - DESC - ASC description: Sort order for results. event_types: type: array items: type: string enum: - CREATING - DID_NOT_EXPAND_DISK - EXPANDED_DISK - FAILED_TO_EXPAND_DISK - INIT_SCRIPTS_STARTING - INIT_SCRIPTS_FINISHED - STARTING - RESTARTING - TERMINATING - EDITED - RUNNING - RESIZING - UPSIZE_COMPLETED - NODES_LOST - DRIVER_HEALTHY - DRIVER_NOT_RESPONDING - DRIVER_UNAVAILABLE - SPARK_EXCEPTION - PINNED - UNPINNED description: Filter by specific event types. offset: type: integer format: int64 description: Offset for pagination. limit: type: integer format: int64 description: Maximum number of events to return (max 500). examples: ListclustereventsRequestExample: summary: Default listClusterEvents request x-microcks-default: true value: cluster_id: '500123' start_time: 10 end_time: 10 order: DESC event_types: - CREATING offset: 10 limit: 10 responses: '200': description: Successfully retrieved cluster events. 
content: application/json: schema: type: object properties: events: type: array items: $ref: '#/components/schemas/ClusterEvent' next_page: type: object properties: cluster_id: type: string end_time: type: integer format: int64 offset: type: integer format: int64 total_count: type: integer format: int64 examples: Listclusterevents200Example: summary: Default listClusterEvents 200 response x-microcks-default: true value: events: - cluster_id: '500123' timestamp: 10 type: example_value details: current_num_workers: 10 target_num_workers: 10 previous_attributes: example_value attributes: example_value previous_cluster_size: example_value cluster_size: example_value cause: example_value reason: {} next_page: cluster_id: '500123' end_time: 10 offset: 10 total_count: 10 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/create: post: operationId: createJob summary: Databricks Create a New Job description: >- Creates a new job with the provided settings. The job can be configured with a single task or multiple tasks with dependencies forming a directed acyclic graph (DAG). tags: - Jobs requestBody: required: true content: application/json: schema: $ref: '#/components/schemas/CreateJobRequest' examples: CreatejobRequestExample: summary: Default createJob request x-microcks-default: true value: name: Example Title tasks: - task_key: example_value description: A sample description. 
depends_on: {} existing_cluster_id: '500123' job_cluster_key: example_value notebook_task: {} spark_jar_task: {} spark_python_task: {} spark_submit_task: {} pipeline_task: {} python_wheel_task: {} sql_task: {} dbt_task: {} run_if: ALL_SUCCESS timeout_seconds: 10 max_retries: 10 min_retry_interval_millis: 10 retry_on_timeout: true libraries: {} job_clusters: - job_cluster_key: example_value email_notifications: on_start: - {} on_success: - {} on_failure: - {} on_duration_warning_threshold_exceeded: - {} no_alert_for_skipped_runs: true webhook_notifications: on_start: - {} on_success: - {} on_failure: - {} on_duration_warning_threshold_exceeded: - {} notification_settings: no_alert_for_skipped_runs: true no_alert_for_canceled_runs: true timeout_seconds: 10 schedule: quartz_cron_expression: example_value timezone_id: '500123' pause_status: PAUSED continuous: pause_status: PAUSED trigger: pause_status: PAUSED file_arrival: url: https://www.example.com min_time_between_triggers_seconds: 10 wait_after_last_change_seconds: 10 max_concurrent_runs: 10 git_source: git_url: https://www.example.com git_provider: gitHub git_branch: example_value git_tag: example_value git_commit: example_value tags: example_value format: SINGLE_TASK queue: enabled: true parameters: - name: Example Title default: example_value run_as: user_name: example_value service_principal_name: example_value access_control_list: - user_name: example_value group_name: example_value service_principal_name: example_value permission_level: CAN_MANAGE responses: '200': description: Job created successfully. content: application/json: schema: type: object properties: job_id: type: integer format: int64 description: The canonical identifier for the newly created job. 
examples: - 11223344 examples: Createjob200Example: summary: Default createJob 200 response x-microcks-default: true value: job_id: 500123 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/list: get: operationId: listJobs summary: Databricks List All Jobs description: >- Retrieves a list of jobs in the workspace. Results are paginated and can be filtered by name. tags: - Jobs parameters: - name: limit in: query required: false description: >- The number of jobs to return. This value must be greater than 0 and less than or equal to 25. The default value is 20. schema: type: integer default: 20 maximum: 25 example: 10 - name: offset in: query required: false description: The offset of the first job to return. schema: type: integer default: 0 example: 10 - name: name in: query required: false description: >- A filter on the list based on the exact (case-insensitive) job name. schema: type: string example: Example Title - name: expand_tasks in: query required: false description: Whether to include task and cluster details in the response. schema: type: boolean default: false example: true responses: '200': description: Successfully retrieved the list of jobs. content: application/json: schema: type: object properties: jobs: type: array items: $ref: '#/components/schemas/Job' has_more: type: boolean description: Whether there are more jobs to list.
examples: Listjobs200Example: summary: Default listJobs 200 response x-microcks-default: true value: jobs: - job_id: 500123 creator_user_name: example_value run_as_user_name: example_value created_time: 10 has_more: true '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/get: get: operationId: getJob summary: Databricks Get a Job description: >- Retrieves the details of a single job, including its settings, creator, and run history summary. tags: - Jobs parameters: - name: job_id in: query required: true description: The canonical identifier of the job to retrieve. schema: type: integer format: int64 example: 500123 responses: '200': description: Successfully retrieved the job. content: application/json: schema: $ref: '#/components/schemas/Job' examples: Getjob200Example: summary: Default getJob 200 response x-microcks-default: true value: job_id: 500123 creator_user_name: example_value run_as_user_name: example_value settings: name: Example Title tasks: - {} job_clusters: - {} timeout_seconds: 10 max_concurrent_runs: 10 tags: example_value format: SINGLE_TASK queue: enabled: true continuous: pause_status: PAUSED parameters: - {} run_as: user_name: example_value service_principal_name: example_value created_time: 10 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/update: post: operationId: updateJob summary: Databricks Partially Update a Job description: >- Adds, updates, or removes specific settings of an existing job. Use the jobs/reset endpoint to overwrite all settings.
tags: - Jobs requestBody: required: true content: application/json: schema: type: object required: - job_id properties: job_id: type: integer format: int64 description: The canonical identifier of the job to update. new_settings: $ref: '#/components/schemas/JobSettings' fields_to_remove: type: array items: type: string description: >- A list of fields to remove from the job settings. examples: UpdatejobRequestExample: summary: Default updateJob request x-microcks-default: true value: job_id: 500123 new_settings: name: Example Title tasks: - {} job_clusters: - {} email_notifications: on_start: {} on_success: {} on_failure: {} on_duration_warning_threshold_exceeded: {} no_alert_for_skipped_runs: true webhook_notifications: on_start: {} on_success: {} on_failure: {} on_duration_warning_threshold_exceeded: {} timeout_seconds: 10 schedule: quartz_cron_expression: example_value timezone_id: '500123' pause_status: PAUSED max_concurrent_runs: 10 git_source: git_url: https://www.example.com git_provider: gitHub git_branch: example_value git_tag: example_value git_commit: example_value tags: example_value format: SINGLE_TASK queue: enabled: true continuous: pause_status: PAUSED parameters: - name: Example Title default: example_value run_as: user_name: example_value service_principal_name: example_value fields_to_remove: - example_value responses: '200': description: Job updated successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/delete: post: operationId: deleteJob summary: Databricks Delete a Job description: >- Deletes a job and all its associated runs. The job is permanently removed and cannot be recovered.
tags: - Jobs requestBody: required: true content: application/json: schema: type: object required: - job_id properties: job_id: type: integer format: int64 description: The canonical identifier of the job to delete. examples: DeletejobRequestExample: summary: Default deleteJob request x-microcks-default: true value: job_id: 500123 responses: '200': description: Job deleted successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/run-now: post: operationId: runJobNow summary: Databricks Trigger a Job Run description: >- Triggers an immediate run of a job. One run is triggered for each call. Runs triggered by run-now are considered triggered runs. Parameters can be passed to override the job defaults. tags: - Jobs requestBody: required: true content: application/json: schema: type: object required: - job_id properties: job_id: type: integer format: int64 description: The canonical identifier of the job to run. idempotency_token: type: string description: >- An optional token to guarantee the idempotency of job run requests. If a run with the same token already exists, the request does not create a new run but returns the existing run's ID instead. jar_params: type: array items: type: string description: Parameters for JAR tasks. notebook_params: type: object additionalProperties: type: string description: A map of key-value pairs for notebook parameters. python_params: type: array items: type: string description: Parameters for Python tasks. spark_submit_params: type: array items: type: string description: Parameters for Spark submit tasks. python_named_params: type: object additionalProperties: type: string description: A map of named parameters for Python tasks. pipeline_params: type: object properties: full_refresh: type: boolean description: Whether to run a full refresh of the pipeline. description: Parameters for pipeline tasks.
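# Usage sketch (non-normative comment): triggering a run with overridden notebook
# parameters and an idempotency token, so a retried request cannot start a second
# run. Hostname, token, job ID, and parameter values are illustrative placeholders.
#
#   curl -X POST "https://<workspace-host>/api/2.1/jobs/run-now" \
#     -H "Authorization: Bearer <personal-access-token>" \
#     -H "Content-Type: application/json" \
#     -d '{"job_id": 11223344, "idempotency_token": "deploy-2024-01-01",
#          "notebook_params": {"env": "staging"}}'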
sql_params: type: object additionalProperties: type: string description: A map of named parameters for SQL tasks. examples: RunjobnowRequestExample: summary: Default runJobNow request x-microcks-default: true value: job_id: 500123 idempotency_token: example_value jar_params: - example_value notebook_params: example_value python_params: - example_value spark_submit_params: - example_value python_named_params: example_value pipeline_params: full_refresh: true sql_params: example_value responses: '200': description: Job run triggered successfully. content: application/json: schema: type: object properties: run_id: type: integer format: int64 description: The globally unique ID of the newly triggered run. number_in_job: type: integer format: int64 description: >- A unique identifier for this run within the job. This is an auto-incrementing value. examples: Runjobnow200Example: summary: Default runJobNow 200 response x-microcks-default: true value: run_id: 500123 number_in_job: 10 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/runs/list: get: operationId: listJobRuns summary: Databricks List Job Runs description: >- Lists runs from most recently started to least recently started. Results are paginated. Runs can be filtered by job, active state, and completion status. tags: - Jobs parameters: - name: job_id in: query required: false description: The job for which to list runs. If omitted, lists runs for all jobs. schema: type: integer format: int64 example: 500123 - name: active_only in: query required: false description: >- If true, only active runs are included in the results; otherwise lists both active and completed runs. schema: type: boolean default: false example: true - name: completed_only in: query required: false description: If true, only completed runs are included in the results.
schema: type: boolean default: false example: true - name: offset in: query required: false description: The offset of the first run to return. schema: type: integer default: 0 example: 10 - name: limit in: query required: false description: The number of runs to return (max 25). schema: type: integer default: 25 maximum: 25 example: 10 - name: run_type in: query required: false description: The type of runs to return. schema: type: string enum: - JOB_RUN - WORKFLOW_RUN - SUBMIT_RUN example: JOB_RUN - name: expand_tasks in: query required: false description: Whether to include task details in each run. schema: type: boolean default: false example: true - name: start_time_from in: query required: false description: Filter runs started after this time (epoch milliseconds). schema: type: integer format: int64 example: 10 - name: start_time_to in: query required: false description: Filter runs started before this time (epoch milliseconds). schema: type: integer format: int64 example: 10 responses: '200': description: Successfully retrieved the list of runs. content: application/json: schema: type: object properties: runs: type: array items: $ref: '#/components/schemas/Run' has_more: type: boolean description: Whether there are more runs to list. 
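# Pagination sketch (non-normative comment): runs/list returns at most 25 runs per
# page; advance offset by the page size until has_more is false. Hostname and
# token are placeholders.
#
#   offset=0
#   while : ; do
#     page=$(curl -s "https://<workspace-host>/api/2.1/jobs/runs/list?limit=25&offset=$offset" \
#       -H "Authorization: Bearer <personal-access-token>")
#     echo "$page" | jq -r '.runs[].run_id'
#     [ "$(echo "$page" | jq '.has_more')" = "true" ] || break
#     offset=$((offset + 25))
#   done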
examples: Listjobruns200Example: summary: Default listJobRuns 200 response x-microcks-default: true value: runs: - job_id: 500123 run_id: 500123 run_name: example_value number_in_job: 10 original_attempt_run_id: 500123 state: life_cycle_state: PENDING result_state: SUCCESS state_message: example_value user_cancelled_or_timedout: true tasks: - {} job_clusters: - {} cluster_spec: existing_cluster_id: '500123' libraries: {} cluster_instance: cluster_id: '500123' spark_context_id: '500123' start_time: 10 setup_duration: 10 execution_duration: 10 cleanup_duration: 10 end_time: 10 trigger: PERIODIC run_type: JOB_RUN attempt_number: 10 creator_user_name: example_value run_page_url: https://www.example.com format: SINGLE_TASK has_more: true '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/runs/get: get: operationId: getJobRun summary: Databricks Get a Job Run description: >- Retrieves the metadata of a run, including start time, end time, status, and task details. tags: - Jobs parameters: - name: run_id in: query required: true description: The canonical identifier of the run. schema: type: integer format: int64 example: 500123 - name: include_history in: query required: false description: Whether to include the repair history in the response. schema: type: boolean example: true - name: include_resolved_values in: query required: false description: Whether to include resolved parameter values in the response. schema: type: boolean example: true responses: '200': description: Successfully retrieved run details.
content: application/json: schema: $ref: '#/components/schemas/Run' examples: Getjobrun200Example: summary: Default getJobRun 200 response x-microcks-default: true value: job_id: '500123' run_id: '500123' run_name: example_value number_in_job: 10 original_attempt_run_id: '500123' state: life_cycle_state: PENDING result_state: SUCCESS state_message: example_value user_cancelled_or_timedout: true schedule: quartz_cron_expression: example_value timezone_id: '500123' pause_status: PAUSED tasks: - run_id: '500123' task_key: example_value description: A sample description. state: {} depends_on: {} existing_cluster_id: '500123' notebook_task: {} spark_jar_task: {} spark_python_task: {} sql_task: {} start_time: 10 setup_duration: 10 execution_duration: 10 cleanup_duration: 10 end_time: 10 cluster_instance: {} attempt_number: 10 libraries: {} job_clusters: - job_cluster_key: example_value cluster_spec: existing_cluster_id: '500123' new_cluster: cluster_name: example_value spark_version: example_value node_type_id: '500123' driver_node_type_id: '500123' num_workers: 10 spark_conf: example_value custom_tags: example_value spark_env_vars: example_value autotermination_minutes: 10 enable_elastic_disk: true instance_pool_id: '500123' policy_id: '500123' enable_local_disk_encryption: true runtime_engine: STANDARD data_security_mode: NONE single_user_name: example_value init_scripts: {} ssh_public_keys: {} libraries: - {} cluster_instance: cluster_id: '500123' spark_context_id: '500123' start_time: 10 setup_duration: 10 execution_duration: 10 cleanup_duration: 10 end_time: 10 trigger: PERIODIC run_type: JOB_RUN attempt_number: 10 creator_user_name: example_value run_page_url: https://www.example.com format: SINGLE_TASK git_source: git_url: https://www.example.com git_provider: gitHub git_branch: example_value git_tag: example_value git_commit: example_value '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: 
'#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/runs/cancel: post: operationId: cancelJobRun summary: Databricks Cancel a Job Run description: >- Cancels an active run. The run is canceled asynchronously, and the request returns immediately. The canceled run transitions to a TERMINATING state. tags: - Jobs requestBody: required: true content: application/json: schema: type: object required: - run_id properties: run_id: type: integer format: int64 description: The canonical identifier of the run to cancel. examples: CanceljobrunRequestExample: summary: Default cancelJobRun request x-microcks-default: true value: run_id: 500123 responses: '200': description: Run cancellation initiated. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.1/jobs/runs/get-output: get: operationId: getJobRunOutput summary: Databricks Get Run Output description: >- Retrieves the output and metadata of a single task run. This endpoint can be used for notebook, JAR, Python, and Spark submit task outputs. tags: - Jobs parameters: - name: run_id in: query required: true description: The canonical identifier of the run. schema: type: integer format: int64 example: 500123 responses: '200': description: Successfully retrieved run output. content: application/json: schema: type: object properties: notebook_output: type: object properties: result: type: string description: The value passed to dbutils.notebook.exit(). truncated: type: boolean error: type: string description: Error message if the run failed. error_trace: type: string description: Stack trace of the error.
metadata: $ref: '#/components/schemas/Run' examples: Getjobrunoutput200Example: summary: Default getJobRunOutput 200 response x-microcks-default: true value: notebook_output: result: example_value truncated: true error: example_value error_trace: example_value metadata: job_id: 500123 run_id: 500123 run_name: example_value number_in_job: 10 original_attempt_run_id: 500123 state: life_cycle_state: PENDING result_state: SUCCESS state_message: example_value user_cancelled_or_timedout: true schedule: quartz_cron_expression: example_value timezone_id: UTC pause_status: PAUSED tasks: - {} job_clusters: - {} cluster_spec: existing_cluster_id: '500123' libraries: - {} cluster_instance: cluster_id: '500123' spark_context_id: '500123' start_time: 10 setup_duration: 10 execution_duration: 10 cleanup_duration: 10 end_time: 10 trigger: PERIODIC run_type: JOB_RUN attempt_number: 10 creator_user_name: example_value run_page_url: https://www.example.com format: SINGLE_TASK git_source: git_url: https://www.example.com git_provider: gitHub git_branch: example_value git_tag: example_value git_commit: example_value '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/list: get: operationId: listWorkspaceObjects summary: Databricks List Workspace Objects description: >- Lists the contents of a directory in the workspace, or the object if it is not a directory. If the input path does not exist, a RESOURCE_DOES_NOT_EXIST error is returned. tags: - Workspace parameters: - name: path in: query required: true description: >- The absolute path of the workspace directory to list. Must start with /. schema: type: string example: example_value - name: notebooks_modified_after in: query required: false description: >- If provided, only notebooks modified after this timestamp (epoch seconds) are returned. 
schema: type: integer format: int64 example: 10 responses: '200': description: Successfully listed workspace objects. content: application/json: schema: type: object properties: objects: type: array items: $ref: '#/components/schemas/WorkspaceObject' examples: Listworkspaceobjects200Example: summary: Default listWorkspaceObjects 200 response x-microcks-default: true value: objects: - object_type: NOTEBOOK path: example_value language: SCALA object_id: '500123' created_at: '2026-01-15T10:30:00Z' modified_at: '2026-01-15T10:30:00Z' resource_id: '500123' size: 10 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/get-status: get: operationId: getWorkspaceObjectStatus summary: Databricks Get Workspace Object Status description: >- Gets the status of an object or a directory in the workspace. If the object does not exist, a RESOURCE_DOES_NOT_EXIST error is returned. tags: - Workspace parameters: - name: path in: query required: true description: The absolute path of the workspace object. Must start with /. schema: type: string example: example_value responses: '200': description: Successfully retrieved object status. 
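The list endpoint wraps its results in an `objects` array of WorkspaceObject entries. To illustrate how a client might consume that response, here is a sketch with a hypothetical helper (`notebook_paths` is not part of the API) that pulls out notebook paths while skipping directories:

```python
# Illustrative only: filter NOTEBOOK entries out of a /2.0/workspace/list
# response body. The payload shape follows the WorkspaceObject schema.
def notebook_paths(list_response: dict) -> list:
    """Return the path of every object whose object_type is NOTEBOOK."""
    return [
        obj["path"]
        for obj in list_response.get("objects", [])
        if obj.get("object_type") == "NOTEBOOK"
    ]

# Sample payload mirroring the schema (values are illustrative).
sample = {
    "objects": [
        {"object_type": "NOTEBOOK", "path": "/Users/a@example.com/etl",
         "language": "PYTHON"},
        {"object_type": "DIRECTORY", "path": "/Users/a@example.com/archive"},
    ]
}
print(notebook_paths(sample))  # -> ['/Users/a@example.com/etl']
```

Note that an empty directory yields `{"objects": []}` (or omits the key), which the `.get(..., [])` default handles.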
content: application/json: schema: $ref: '#/components/schemas/WorkspaceObject' examples: Getworkspaceobjectstatus200Example: summary: Default getWorkspaceObjectStatus 200 response x-microcks-default: true value: object_type: NOTEBOOK path: example_value language: SCALA object_id: '500123' created_at: '2026-01-15T10:30:00Z' modified_at: '2026-01-15T10:30:00Z' resource_id: '500123' size: 10 '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/export: get: operationId: exportWorkspaceObject summary: Databricks Export a Workspace Object description: >- Exports a notebook or the contents of an entire directory. Notebooks can be exported in SOURCE, HTML, JUPYTER, DBC, or R_MARKDOWN format. tags: - Workspace parameters: - name: path in: query required: true description: >- The absolute path of the object or directory to export. Must start with /. schema: type: string example: example_value - name: format in: query required: false description: >- The format in which to export the notebook. The default is SOURCE. schema: type: string enum: - SOURCE - HTML - JUPYTER - DBC - R_MARKDOWN default: SOURCE example: SOURCE - name: direct_download in: query required: false description: >- Whether to download the file directly. If true, the response is the exported content. schema: type: boolean default: false example: true responses: '200': description: Successfully exported the workspace object. content: application/json: schema: type: object properties: content: type: string description: >- The base64-encoded content of the exported object. file_type: type: string description: The type of the exported file. 
examples: Exportworkspaceobject200Example: summary: Default exportWorkspaceObject 200 response x-microcks-default: true value: content: example_value file_type: example_value '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/import: post: operationId: importWorkspaceObject summary: Databricks Import a Workspace Object description: >- Imports a notebook or the contents of an entire directory. If the object already exists and overwrite is set to false, a RESOURCE_ALREADY_EXISTS error is returned. tags: - Workspace requestBody: required: true content: application/json: schema: type: object required: - path properties: path: type: string description: >- The absolute path of the object to import. Must start with /. format: type: string enum: - SOURCE - HTML - JUPYTER - DBC - R_MARKDOWN - AUTO description: >- The format of the content. AUTO will try to detect the format automatically. language: type: string enum: - SCALA - PYTHON - SQL - R description: >- The language of the object. Required for SOURCE format when the object type is a notebook. content: type: string description: >- The base64-encoded content to import. Limit is 10 MB. overwrite: type: boolean description: >- If true, the existing object is overwritten. Default is false. default: false examples: ImportworkspaceobjectRequestExample: summary: Default importWorkspaceObject request x-microcks-default: true value: path: example_value format: SOURCE language: SCALA content: example_value overwrite: true responses: '200': description: Object imported successfully. 
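Both export and import move content as base64-encoded strings, and the import schema caps the decoded payload at 10 MB. A sketch of the encode/decode step on the client side (the path and notebook source are hypothetical; only the encoding and the size limit come from this spec):

```python
import base64

TEN_MB = 10 * 1024 * 1024  # import limit stated in the import schema

def encode_for_import(source: str) -> str:
    """Base64-encode notebook source for the /2.0/workspace/import body."""
    raw = source.encode("utf-8")
    if len(raw) > TEN_MB:
        raise ValueError("content exceeds the 10 MB import limit")
    return base64.b64encode(raw).decode("ascii")

def decode_export(response_body: dict) -> str:
    """Decode the 'content' field of a /2.0/workspace/export response."""
    return base64.b64decode(response_body["content"]).decode("utf-8")

notebook = "print('hello from a notebook')"
import_body = {
    "path": "/Users/a@example.com/demo",  # hypothetical workspace path
    "format": "SOURCE",
    "language": "PYTHON",
    "content": encode_for_import(notebook),
    "overwrite": False,
}
# Round-trip check: an export of what we imported decodes back to the source.
assert decode_export({"content": import_body["content"]}) == notebook
```

For DBC exports the decoded bytes are an archive rather than UTF-8 text, so a real client would skip the `.decode("utf-8")` step in that case.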
'400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/delete: post: operationId: deleteWorkspaceObject summary: Databricks Delete a Workspace Object description: >- Deletes an object or a directory and optionally all of its contents. If the path does not exist, a RESOURCE_DOES_NOT_EXIST error is returned. If path is a non-empty directory and recursive is false, a DIRECTORY_NOT_EMPTY error is returned. tags: - Workspace requestBody: required: true content: application/json: schema: type: object required: - path properties: path: type: string description: >- The absolute path of the object to delete. Must start with /. recursive: type: boolean description: >- If true, all contents of the directory are recursively deleted. Required for non-empty directories. default: false examples: DeleteworkspaceobjectRequestExample: summary: Default deleteWorkspaceObject request x-microcks-default: true value: path: example_value recursive: true responses: '200': description: Object deleted successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '404': $ref: '#/components/responses/NotFound' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK /2.0/workspace/mkdirs: post: operationId: createWorkspaceDirectory summary: Databricks Create a Directory description: >- Creates the given directory and necessary parent directories if they do not exist. If there is an object (not a directory) at any prefix of the input path, a RESOURCE_ALREADY_EXISTS error is returned. tags: - Workspace requestBody: required: true content: application/json: schema: type: object required: - path properties: path: type: string description: >- The absolute path of the directory to create. Must start with /. 
examples: CreateworkspacedirectoryRequestExample: summary: Default createWorkspaceDirectory request x-microcks-default: true value: path: example_value responses: '200': description: Directory created successfully. '400': $ref: '#/components/responses/BadRequest' '401': $ref: '#/components/responses/Unauthorized' '500': $ref: '#/components/responses/InternalServerError' x-microcks-operation: delay: 0 dispatcher: FALLBACK components: securitySchemes: bearerAuth: type: http scheme: bearer bearerFormat: PAT description: >- Databricks personal access token (PAT) or OAuth M2M token. Pass the token in the Authorization header as 'Bearer '. schemas: CreateClusterRequest: type: object required: - cluster_name - spark_version - node_type_id properties: cluster_name: type: string description: >- A human-readable name for the cluster. This does not need to be unique. examples: - my-data-cluster spark_version: type: string description: >- The runtime version of the cluster. You can retrieve a list of available runtime versions using the Runtime Versions API. examples: - 14.3.x-scala2.12 node_type_id: type: string description: >- The node type for worker nodes. This field determines the cloud provider instance type. examples: - i3.xlarge driver_node_type_id: type: string description: >- The node type for the Spark driver. If not specified, defaults to the same value as node_type_id. example: '500123' num_workers: type: integer description: >- Number of worker nodes for a fixed-size cluster. A cluster has one Spark driver and num_workers executors. Set to 0 for a single-node cluster. examples: - 2 autoscale: $ref: '#/components/schemas/AutoScale' spark_conf: type: object additionalProperties: type: string description: >- A map of Spark configuration key-value pairs. These override the default Spark configuration values. 
example: example_value aws_attributes: $ref: '#/components/schemas/AwsAttributes' azure_attributes: $ref: '#/components/schemas/AzureAttributes' gcp_attributes: $ref: '#/components/schemas/GcpAttributes' custom_tags: type: object additionalProperties: type: string description: >- Additional tags for cluster resources. Tags are propagated to the cloud provider for cost tracking. example: example_value spark_env_vars: type: object additionalProperties: type: string description: >- Environment variables for all Spark processes. Use {{secrets/scope/key}} to reference secrets. example: example_value autotermination_minutes: type: integer description: >- Minutes of inactivity after which the cluster is automatically terminated. 0 disables auto-termination. default: 120 example: 10 enable_elastic_disk: type: boolean description: >- Whether to autoscale local storage. When enabled, Databricks monitors disk usage and attaches additional disks as needed. example: true instance_pool_id: type: string description: >- The optional ID of the instance pool to use for cluster nodes. example: '500123' policy_id: type: string description: >- The ID of the cluster policy to apply. Cluster policies constrain the configuration settings. example: '500123' enable_local_disk_encryption: type: boolean description: Whether to encrypt data on local disks. example: true runtime_engine: type: string enum: - STANDARD - PHOTON description: >- The runtime engine. PHOTON enables the Photon vectorized query engine for faster performance. example: STANDARD data_security_mode: type: string enum: - NONE - SINGLE_USER - USER_ISOLATION - LEGACY_TABLE_ACL - LEGACY_PASSTHROUGH - LEGACY_SINGLE_USER - LEGACY_SINGLE_USER_STANDARD description: Data security mode for the cluster. example: NONE single_user_name: type: string description: >- The user name (email) of the single user for SINGLE_USER access mode. 
example: example_value init_scripts: type: array items: $ref: '#/components/schemas/InitScriptInfo' description: Init scripts to run when the cluster starts. example: [] ssh_public_keys: type: array items: type: string description: SSH public keys to add to each Spark node. example: [] EditClusterRequest: type: object required: - cluster_id - cluster_name - spark_version - node_type_id properties: cluster_id: type: string description: The unique identifier of the cluster to edit. example: '500123' cluster_name: type: string description: The new name for the cluster. example: example_value spark_version: type: string description: The runtime version. example: example_value node_type_id: type: string description: The node type for worker nodes. example: '500123' driver_node_type_id: type: string description: The node type for the Spark driver. example: '500123' num_workers: type: integer description: Number of worker nodes. example: 10 autoscale: $ref: '#/components/schemas/AutoScale' spark_conf: type: object additionalProperties: type: string example: example_value custom_tags: type: object additionalProperties: type: string example: example_value spark_env_vars: type: object additionalProperties: type: string example: example_value autotermination_minutes: type: integer example: 10 enable_elastic_disk: type: boolean example: true instance_pool_id: type: string example: '500123' policy_id: type: string example: '500123' data_security_mode: type: string enum: - NONE - SINGLE_USER - USER_ISOLATION - LEGACY_TABLE_ACL - LEGACY_PASSTHROUGH - LEGACY_SINGLE_USER - LEGACY_SINGLE_USER_STANDARD example: NONE single_user_name: type: string example: example_value runtime_engine: type: string enum: - STANDARD - PHOTON example: STANDARD ClusterDetails: type: object properties: cluster_id: type: string description: The unique identifier of the cluster. example: '500123' cluster_name: type: string description: The human-readable name of the cluster. 
example: example_value spark_version: type: string description: The runtime version of the cluster. example: example_value node_type_id: type: string description: The node type for worker nodes. example: '500123' driver_node_type_id: type: string description: The node type for the Spark driver. example: '500123' num_workers: type: integer description: Number of worker nodes. example: 10 autoscale: $ref: '#/components/schemas/AutoScale' state: type: string enum: - PENDING - RUNNING - RESTARTING - RESIZING - TERMINATING - TERMINATED - ERROR - UNKNOWN description: The current state of the cluster. example: PENDING state_message: type: string description: A message about the state of the cluster. example: example_value start_time: type: integer format: int64 description: The time the cluster was started in epoch milliseconds. example: 10 terminated_time: type: integer format: int64 description: The time the cluster was terminated in epoch milliseconds. example: 10 last_state_loss_time: type: integer format: int64 description: >- The time when the cluster driver last lost its state in epoch milliseconds. example: 10 last_activity_time: type: integer format: int64 description: The time of the last user activity on the cluster. example: 10 last_restarted_time: type: integer format: int64 description: The time the cluster was last restarted. example: 10 creator_user_name: type: string description: The email of the user who created the cluster. example: example_value cluster_source: type: string enum: - UI - API - JOB - MODELS - PIPELINE - PIPELINE_MAINTENANCE - SQL - SOME_OTHER_SOURCE description: The source that created the cluster. example: UI spark_conf: type: object additionalProperties: type: string description: Spark configuration key-value pairs. example: example_value custom_tags: type: object additionalProperties: type: string description: Tags applied to the cluster. 
example: example_value spark_env_vars: type: object additionalProperties: type: string example: example_value autotermination_minutes: type: integer description: Auto-termination idle timeout in minutes. example: 10 enable_elastic_disk: type: boolean example: true instance_pool_id: type: string example: '500123' policy_id: type: string example: '500123' data_security_mode: type: string example: example_value single_user_name: type: string example: example_value runtime_engine: type: string example: example_value default_tags: type: object additionalProperties: type: string description: Default tags applied by Databricks. example: example_value cluster_log_status: type: object properties: last_attempted: type: integer format: int64 last_exception: type: string example: example_value termination_reason: type: object properties: code: type: string description: Status code for the termination reason. type: type: string description: Termination type. parameters: type: object additionalProperties: type: string example: example_value disk_spec: type: object properties: disk_count: type: integer disk_size: type: integer disk_type: type: object properties: azure_disk_volume_type: type: string ebs_volume_type: type: string example: example_value driver: $ref: '#/components/schemas/SparkNode' executors: type: array items: $ref: '#/components/schemas/SparkNode' example: [] jdbc_port: type: integer description: Port on the driver for JDBC/ODBC connections. example: 10 spark_context_id: type: integer format: int64 description: The canonical Spark context identifier. 
example: 500123 SparkNode: type: object properties: private_ip: type: string example: example_value public_dns: type: string example: example_value node_id: type: string example: '500123' instance_id: type: string example: '500123' start_timestamp: type: integer format: int64 example: 10 host_private_ip: type: string example: example_value AutoScale: type: object properties: min_workers: type: integer description: The minimum number of workers the cluster can scale down to. example: 10 max_workers: type: integer description: The maximum number of workers the cluster can scale up to. example: 10 description: >- Autoscaling configuration. When set, num_workers is ignored and the cluster scales between min_workers and max_workers. AwsAttributes: type: object properties: first_on_demand: type: integer description: Number of on-demand instances to place first. example: 10 availability: type: string enum: - SPOT - ON_DEMAND - SPOT_WITH_FALLBACK example: SPOT zone_id: type: string description: The availability zone identifier (e.g., us-west-2a). example: us-west-2a instance_profile_arn: type: string description: IAM instance profile ARN for the cluster instances. example: example_value spot_bid_price_percent: type: integer description: Max bid price as percentage of on-demand price. 
example: 10 ebs_volume_type: type: string enum: - GENERAL_PURPOSE_SSD - THROUGHPUT_OPTIMIZED_HDD example: GENERAL_PURPOSE_SSD ebs_volume_count: type: integer example: 10 ebs_volume_size: type: integer example: 10 AzureAttributes: type: object properties: first_on_demand: type: integer example: 10 availability: type: string enum: - SPOT_AZURE - ON_DEMAND_AZURE - SPOT_WITH_FALLBACK_AZURE example: SPOT_AZURE spot_bid_max_price: type: number example: 42.5 GcpAttributes: type: object properties: use_preemptible_executors: type: boolean example: true google_service_account: type: string example: example_value availability: type: string enum: - GCP_PREEMPTIBLE - GCP_ON_DEMAND example: GCP_PREEMPTIBLE InitScriptInfo: type: object properties: workspace: type: object properties: destination: type: string example: example_value volumes: type: object properties: destination: type: string example: example_value dbfs: type: object properties: destination: type: string deprecated: true example: example_value ClusterEvent: type: object properties: cluster_id: type: string example: '500123' timestamp: type: integer format: int64 example: 10 type: type: string example: example_value details: type: object properties: current_num_workers: type: integer target_num_workers: type: integer previous_attributes: type: object attributes: type: object previous_cluster_size: type: object cluster_size: type: object cause: type: string reason: type: object properties: code: type: string type: type: string parameters: type: object additionalProperties: type: string example: example_value CreateJobRequest: type: object properties: name: type: string description: The name of the job. examples: - my-etl-job tasks: type: array items: $ref: '#/components/schemas/TaskSettings' description: >- A list of task specifications to be executed by this job. For single-task jobs, use the top-level task fields instead. 
example: [] job_clusters: type: array items: $ref: '#/components/schemas/JobCluster' description: >- A list of job cluster specifications that can be shared and reused by tasks in this job. example: [] email_notifications: $ref: '#/components/schemas/JobEmailNotifications' webhook_notifications: $ref: '#/components/schemas/WebhookNotifications' notification_settings: type: object properties: no_alert_for_skipped_runs: type: boolean no_alert_for_canceled_runs: type: boolean example: example_value timeout_seconds: type: integer description: >- Maximum allowed duration for the job. If exceeded, the job is set to a TIMED_OUT life cycle state. 0 means no timeout. default: 0 example: 10 schedule: $ref: '#/components/schemas/CronSchedule' continuous: type: object properties: pause_status: type: string enum: - PAUSED - UNPAUSED description: Continuous job settings for streaming workloads. example: example_value trigger: type: object properties: pause_status: type: string enum: - PAUSED - UNPAUSED file_arrival: type: object properties: url: type: string min_time_between_triggers_seconds: type: integer wait_after_last_change_seconds: type: integer description: Trigger settings for file arrival-based jobs. example: example_value max_concurrent_runs: type: integer description: >- Maximum number of concurrent runs for the job. Setting to 1 ensures only one run at a time. default: 1 example: 10 git_source: $ref: '#/components/schemas/GitSource' tags: type: object additionalProperties: type: string description: Tags for the job. example: example_value format: type: string enum: - SINGLE_TASK - MULTI_TASK description: The format of the job. example: SINGLE_TASK queue: type: object properties: enabled: type: boolean description: Queue settings for the job. example: example_value parameters: type: array items: type: object properties: name: type: string default: type: string description: Job-level parameter definitions. 
example: [] run_as: type: object properties: user_name: type: string service_principal_name: type: string description: Identity to run the job as. example: example_value access_control_list: type: array items: $ref: '#/components/schemas/AccessControlRequest' example: [] TaskSettings: type: object required: - task_key properties: task_key: type: string description: >- A unique key for the task within the job. Used to reference the task in dependencies and logging. examples: - etl_task_1 description: type: string description: A description of the task. example: A sample description. depends_on: type: array items: type: object properties: task_key: type: string outcome: type: string description: >- An array of objects specifying the task dependencies. Each dependency is identified by its task_key. example: [] existing_cluster_id: type: string description: An existing cluster to run the task on. example: '500123' new_cluster: $ref: '#/components/schemas/CreateClusterRequest' job_cluster_key: type: string description: Reference to a job_clusters entry. example: example_value notebook_task: type: object properties: notebook_path: type: string description: The absolute path of the notebook in the workspace. source: type: string enum: - WORKSPACE - GIT description: The source of the notebook. base_parameters: type: object additionalProperties: type: string description: >- Base parameters to pass to the notebook. These can be overridden at run time. example: example_value spark_jar_task: type: object properties: main_class_name: type: string parameters: type: array items: type: string jar_uri: type: string example: example_value spark_python_task: type: object properties: python_file: type: string description: URI of the Python file to execute. 
parameters: type: array items: type: string source: type: string enum: - WORKSPACE - GIT example: example_value spark_submit_task: type: object properties: parameters: type: array items: type: string example: example_value pipeline_task: type: object properties: pipeline_id: type: string full_refresh: type: boolean example: example_value python_wheel_task: type: object properties: package_name: type: string entry_point: type: string parameters: type: array items: type: string named_parameters: type: object additionalProperties: type: string example: example_value sql_task: type: object properties: query: type: object properties: query_id: type: string dashboard: type: object properties: dashboard_id: type: string alert: type: object properties: alert_id: type: string warehouse_id: type: string parameters: type: object additionalProperties: type: string example: example_value dbt_task: type: object properties: project_directory: type: string commands: type: array items: type: string schema: type: string warehouse_id: type: string catalog: type: string profiles_directory: type: string example: example_value run_if: type: string enum: - ALL_SUCCESS - AT_LEAST_ONE_SUCCESS - NONE_FAILED - ALL_DONE - AT_LEAST_ONE_FAILED - ALL_FAILED description: Condition to run this task. example: ALL_SUCCESS timeout_seconds: type: integer description: Timeout for this individual task. example: 10 max_retries: type: integer description: Maximum number of retries for a failed task. example: 10 min_retry_interval_millis: type: integer description: Minimum interval between retry attempts. example: 10 retry_on_timeout: type: boolean description: Whether to retry when the task times out. example: true email_notifications: $ref: '#/components/schemas/JobEmailNotifications' libraries: type: array items: $ref: '#/components/schemas/Library' description: Libraries to install on the cluster running this task. 
example: [] JobCluster: type: object required: - job_cluster_key - new_cluster properties: job_cluster_key: type: string description: >- A unique key for the job cluster. Referenced by tasks using job_cluster_key. example: example_value new_cluster: $ref: '#/components/schemas/CreateClusterRequest' JobEmailNotifications: type: object properties: on_start: type: array items: type: string description: Email addresses to notify when a run starts. example: [] on_success: type: array items: type: string description: Email addresses to notify when a run succeeds. example: [] on_failure: type: array items: type: string description: Email addresses to notify when a run fails. example: [] on_duration_warning_threshold_exceeded: type: array items: type: string example: [] no_alert_for_skipped_runs: type: boolean example: true WebhookNotifications: type: object properties: on_start: type: array items: type: object properties: id: type: string example: [] on_success: type: array items: type: object properties: id: type: string example: [] on_failure: type: array items: type: object properties: id: type: string example: [] on_duration_warning_threshold_exceeded: type: array items: type: object properties: id: type: string example: [] CronSchedule: type: object required: - quartz_cron_expression - timezone_id properties: quartz_cron_expression: type: string description: >- A Quartz cron expression describing the schedule. See http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html examples: - 0 0 8 * * ? timezone_id: type: string description: A Java timezone ID. The schedule is resolved relative to this timezone. examples: - America/Los_Angeles pause_status: type: string enum: - PAUSED - UNPAUSED description: Whether the schedule is paused. example: PAUSED GitSource: type: object required: - git_url - git_provider properties: git_url: type: string description: URL of the Git repository. 
example: https://www.example.com git_provider: type: string enum: - gitHub - bitbucketCloud - gitLab - azureDevOpsServices - gitHubEnterprise - bitbucketServer - gitLabEnterpriseEdition - awsCodeCommit description: The Git provider. example: gitHub git_branch: type: string description: Branch to use. example: example_value git_tag: type: string description: Tag to use. example: example_value git_commit: type: string description: Commit hash to use. example: example_value Library: type: object properties: jar: type: string description: URI of a JAR to install. example: example_value egg: type: string description: URI of an egg to install (deprecated). example: example_value whl: type: string description: URI of a wheel to install. example: example_value pypi: type: object properties: package: type: string description: >- The name of the PyPI package to install. An optional version specification can be included. repo: type: string description: The repository URL. example: example_value maven: type: object properties: coordinates: type: string description: Maven coordinates in the form groupId:artifactId:version. repo: type: string exclusions: type: array items: type: string example: example_value cran: type: object properties: package: type: string description: The name of the CRAN package. repo: type: string example: example_value requirements: type: string description: Path to a requirements.txt file. 
example: example_value AccessControlRequest: type: object properties: user_name: type: string example: example_value group_name: type: string example: example_value service_principal_name: type: string example: example_value permission_level: type: string enum: - CAN_MANAGE - CAN_MANAGE_RUN - CAN_VIEW - IS_OWNER example: CAN_MANAGE JobSettings: type: object properties: name: type: string example: Example Title tasks: type: array items: $ref: '#/components/schemas/TaskSettings' example: [] job_clusters: type: array items: $ref: '#/components/schemas/JobCluster' example: [] email_notifications: $ref: '#/components/schemas/JobEmailNotifications' webhook_notifications: $ref: '#/components/schemas/WebhookNotifications' timeout_seconds: type: integer example: 10 schedule: $ref: '#/components/schemas/CronSchedule' max_concurrent_runs: type: integer example: 10 git_source: $ref: '#/components/schemas/GitSource' tags: type: object additionalProperties: type: string example: example_value format: type: string enum: - SINGLE_TASK - MULTI_TASK example: SINGLE_TASK queue: type: object properties: enabled: type: boolean example: true continuous: type: object properties: pause_status: type: string enum: - PAUSED - UNPAUSED example: PAUSED parameters: type: array items: type: object properties: name: type: string default: type: string example: [] run_as: type: object properties: user_name: type: string service_principal_name: type: string example: example_value Job: type: object properties: job_id: type: integer format: int64 description: The canonical identifier for this job. example: 500123 creator_user_name: type: string description: The email of the user who created the job. example: example_value run_as_user_name: type: string description: The email of the user the job runs as. example: example_value settings: $ref: '#/components/schemas/JobSettings' created_time: type: integer format: int64 description: The time the job was created in epoch milliseconds. 
          example: 10
    Run:
      type: object
      properties:
        job_id:
          type: integer
          format: int64
          example: 500123
        run_id:
          type: integer
          format: int64
          description: The canonical identifier of the run.
          example: 500123
        run_name:
          type: string
          example: example_value
        number_in_job:
          type: integer
          format: int64
          example: 10
        original_attempt_run_id:
          type: integer
          format: int64
          example: 500123
        state:
          type: object
          properties:
            life_cycle_state:
              type: string
              enum:
                - PENDING
                - RUNNING
                - TERMINATING
                - TERMINATED
                - SKIPPED
                - INTERNAL_ERROR
                - BLOCKED
                - WAITING_FOR_RETRY
                - QUEUED
              description: The current life cycle state of the run.
            result_state:
              type: string
              enum:
                - SUCCESS
                - FAILED
                - TIMEDOUT
                - CANCELED
                - MAXIMUM_CONCURRENT_RUNS_REACHED
                - EXCLUDED
                - SUCCESS_WITH_FAILURES
                - UPSTREAM_FAILED
                - UPSTREAM_CANCELED
              description: The result state of the run, if completed.
            state_message:
              type: string
              description: A descriptive message for the current state.
            user_cancelled_or_timedout:
              type: boolean
              example: true
        schedule:
          $ref: '#/components/schemas/CronSchedule'
        tasks:
          type: array
          items:
            $ref: '#/components/schemas/RunTask'
          example: []
        job_clusters:
          type: array
          items:
            $ref: '#/components/schemas/JobCluster'
          example: []
        cluster_spec:
          type: object
          properties:
            existing_cluster_id:
              type: string
            new_cluster:
              $ref: '#/components/schemas/CreateClusterRequest'
            libraries:
              type: array
              items:
                $ref: '#/components/schemas/Library'
              example: []
        cluster_instance:
          type: object
          properties:
            cluster_id:
              type: string
            spark_context_id:
              type: string
              example: example_value
        start_time:
          type: integer
          format: int64
          description: The start time of the run in epoch milliseconds.
          example: 10
        setup_duration:
          type: integer
          format: int64
          description: Setup duration in milliseconds.
          example: 10
        execution_duration:
          type: integer
          format: int64
          description: Execution duration in milliseconds.
          example: 10
        cleanup_duration:
          type: integer
          format: int64
          description: Cleanup duration in milliseconds.
          example: 10
        end_time:
          type: integer
          format: int64
          description: End time in epoch milliseconds.
          example: 10
        trigger:
          type: string
          enum:
            - PERIODIC
            - ONE_TIME
            - RETRY
            - RUN_JOB_TASK
            - FILE_ARRIVAL
          example: PERIODIC
        run_type:
          type: string
          enum:
            - JOB_RUN
            - WORKFLOW_RUN
            - SUBMIT_RUN
          example: JOB_RUN
        attempt_number:
          type: integer
          example: 10
        creator_user_name:
          type: string
          example: example_value
        run_page_url:
          type: string
          description: URL of the run page in the Databricks workspace.
          example: https://www.example.com
        format:
          type: string
          enum:
            - SINGLE_TASK
            - MULTI_TASK
          example: SINGLE_TASK
        git_source:
          $ref: '#/components/schemas/GitSource'
    RunTask:
      type: object
      properties:
        run_id:
          type: integer
          format: int64
          example: 500123
        task_key:
          type: string
          example: example_value
        description:
          type: string
          example: A sample description.
        state:
          type: object
          properties:
            life_cycle_state:
              type: string
            result_state:
              type: string
            state_message:
              type: string
              example: example_value
        depends_on:
          type: array
          items:
            type: object
            properties:
              task_key:
                type: string
          example: []
        existing_cluster_id:
          type: string
          example: '500123'
        new_cluster:
          $ref: '#/components/schemas/CreateClusterRequest'
        notebook_task:
          type: object
          properties:
            notebook_path:
              type: string
            source:
              type: string
            base_parameters:
              type: object
              additionalProperties:
                type: string
                example: example_value
        spark_jar_task:
          type: object
          properties:
            main_class_name:
              type: string
            parameters:
              type: array
              items:
                type: string
                example: example_value
        spark_python_task:
          type: object
          properties:
            python_file:
              type: string
            parameters:
              type: array
              items:
                type: string
                example: example_value
        sql_task:
          type: object
          properties:
            query:
              type: object
              properties:
                query_id:
                  type: string
            warehouse_id:
              type: string
              example: example_value
        start_time:
          type: integer
          format: int64
          example: 10
        setup_duration:
          type: integer
          format: int64
          example: 10
        execution_duration:
          type: integer
          format: int64
          example: 10
        cleanup_duration:
          type: integer
          format: int64
          example: 10
        end_time:
          type: integer
          format: int64
          example: 10
        cluster_instance:
          type: object
          properties:
            cluster_id:
              type: string
            spark_context_id:
              type: string
              example: example_value
        attempt_number:
          type: integer
          example: 10
        libraries:
          type: array
          items:
            $ref: '#/components/schemas/Library'
          example: []
    WorkspaceObject:
      type: object
      properties:
        object_type:
          type: string
          enum:
            - NOTEBOOK
            - DIRECTORY
            - LIBRARY
            - FILE
            - REPO
            - DASHBOARD
          description: The type of the workspace object.
          example: NOTEBOOK
        path:
          type: string
          description: The absolute path of the object in the workspace.
          example: example_value
        language:
          type: string
          enum:
            - SCALA
            - PYTHON
            - SQL
            - R
          description: >-
            The language of the notebook, if applicable. Only present for
            notebook objects.
          example: SCALA
        object_id:
          type: integer
          format: int64
          description: The unique identifier of the object.
          example: 500123
        created_at:
          type: integer
          format: int64
          description: The creation time in epoch milliseconds.
          example: 1768473000000
        modified_at:
          type: integer
          format: int64
          description: The last modified time in epoch milliseconds.
          example: 1768473000000
        resource_id:
          type: string
          description: Unique identifier for SCIM compliance.
          example: '500123'
        size:
          type: integer
          format: int64
          description: The file size in bytes (for FILE objects).
          example: 10
    ErrorResponse:
      type: object
      properties:
        error_code:
          type: string
          description: A machine-readable error code.
          examples:
            - RESOURCE_DOES_NOT_EXIST
        message:
          type: string
          description: A human-readable error message.
          examples:
            - Cluster 1234-567890-abcde123 does not exist
  responses:
    BadRequest:
      description: The request is malformed or contains invalid parameters.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
    Unauthorized:
      description: >-
        The request lacks valid authentication credentials. Verify your access
        token.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
    Forbidden:
      description: >-
        The authenticated user does not have permission to perform this
        action.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
    NotFound:
      description: The requested resource does not exist.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
    TooManyRequests:
      description: >-
        Too many requests have been sent in a given amount of time. Retry
        after the period specified in the Retry-After header.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
    InternalServerError:
      description: >-
        An unexpected error occurred on the server. If the problem persists,
        contact Databricks support.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      description: >-
        Databricks personal access token or OAuth token passed in the
        Authorization header.
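The request and error conventions above can be illustrated with a short client sketch. This is not part of the OpenAPI document and makes no calls to a real workspace: it only shows how a caller might construct the documented `POST /2.0/clusters/create` request with the bearer `Authorization` header, and compute a backoff from the `Retry-After` header that the `TooManyRequests` response describes. The host, token, payload contents, and default delay are placeholder assumptions.

```python
# Illustrative sketch only -- not part of the spec. Builds the documented
# createCluster request and derives a retry delay for 429 responses.
import json
import urllib.request
from typing import Optional


def build_create_cluster_request(
    workspace_host: str, token: str, payload: dict
) -> urllib.request.Request:
    """Build POST /2.0/clusters/create per the server URL template and
    the bearer-token security scheme described in the spec."""
    url = f"https://{workspace_host}/api/2.0/clusters/create"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # bearer token auth
            "Content-Type": "application/json",
        },
        method="POST",
    )


def retry_delay(
    status: int, retry_after: Optional[str], default: float = 1.0
) -> Optional[float]:
    """Seconds to wait before retrying, or None when the response is not
    a 429. The spec says 429 responses carry a Retry-After header; the
    1-second fallback is an assumption for when it is absent."""
    if status != 429:
        return None
    try:
        return float(retry_after) if retry_after is not None else default
    except ValueError:
        return default
```

Because the spec notes that cluster creation is asynchronous, a real client would follow up by polling with the returned `cluster_id`; that polling endpoint is outside this excerpt.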