{
"cells": [
{
"cell_type": "markdown",
"id": "fdfcf286",
"metadata": {},
"source": [
"# PyCaret Fugue Integration\n",
"\n",
"[Fugue](https://github.com/fugue-project/fugue) is a low-code unified interface for computing frameworks such as Spark, Dask, and Pandas. PyCaret uses Fugue to support distributed computing scenarios.\n",
"\n",
"# Hello World\n",
"\n",
"# Classification\n",
"\n",
"Let's start with the most standard example. The code is exactly the same as the local version; there is no magic."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "398b0e09",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Description</th><th>Value</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>Session id</td><td>4292</td></tr>\n",
"    <tr><th>1</th><td>Target</td><td>Purchase</td></tr>\n",
"    <tr><th>2</th><td>Target type</td><td>Binary</td></tr>\n",
"    <tr><th>3</th><td>Target mapping</td><td>CH: 0, MM: 1</td></tr>\n",
"    <tr><th>4</th><td>Original data shape</td><td>(1070, 19)</td></tr>\n",
"    <tr><th>5</th><td>Transformed data shape</td><td>(1070, 19)</td></tr>\n",
"    <tr><th>6</th><td>Transformed train set shape</td><td>(748, 19)</td></tr>\n",
"    <tr><th>7</th><td>Transformed test set shape</td><td>(322, 19)</td></tr>\n",
"    <tr><th>8</th><td>Ordinal features</td><td>1</td></tr>\n",
"    <tr><th>9</th><td>Numeric features</td><td>17</td></tr>\n",
"    <tr><th>10</th><td>Categorical features</td><td>1</td></tr>\n",
"    <tr><th>11</th><td>Preprocess</td><td>True</td></tr>\n",
"    <tr><th>12</th><td>Imputation type</td><td>simple</td></tr>\n",
"    <tr><th>13</th><td>Numeric imputation</td><td>mean</td></tr>\n",
"    <tr><th>14</th><td>Categorical imputation</td><td>constant</td></tr>\n",
"    <tr><th>15</th><td>Maximum one-hot encoding</td><td>5</td></tr>\n",
"    <tr><th>16</th><td>Encoding method</td><td>None</td></tr>\n",
"    <tr><th>17</th><td>Fold Generator</td><td>StratifiedKFold</td></tr>\n",
"    <tr><th>18</th><td>Fold Number</td><td>10</td></tr>\n",
"    <tr><th>19</th><td>CPU Jobs</td><td>1</td></tr>\n",
"    <tr><th>20</th><td>Use GPU</td><td>False</td></tr>\n",
"    <tr><th>21</th><td>Log Experiment</td><td>False</td></tr>\n",
"    <tr><th>22</th><td>Experiment Name</td><td>clf-default-name</td></tr>\n",
"    <tr><th>23</th><td>USI</td><td>9c46</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pycaret.datasets import get_data\n",
"from pycaret.classification import *\n",
"\n",
"setup(data=get_data(\"juice\", verbose=False), target='Purchase', n_jobs=1)\n",
"\n",
"test_models = models().index.tolist()[:5]"
]
},
{
"cell_type": "markdown",
"id": "37b1957a",
"metadata": {},
"source": [
"`compare_models` is also exactly the same if you don't want to use a distributed system."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c8cc5a40",
"metadata": {},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Model</th><th>Accuracy</th><th>AUC</th><th>Recall</th><th>Prec.</th><th>F1</th><th>Kappa</th><th>MCC</th><th>TT (Sec)</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>lr</th><td>Logistic Regression</td><td>0.8330</td><td>0.8975</td><td>0.7532</td><td>0.8097</td><td>0.7791</td><td>0.6451</td><td>0.6475</td><td>0.3270</td></tr>\n",
"    <tr><th>dt</th><td>Decision Tree Classifier</td><td>0.7715</td><td>0.7625</td><td>0.7224</td><td>0.7058</td><td>0.7106</td><td>0.5224</td><td>0.5256</td><td>0.0780</td></tr>\n",
"    <tr><th>nb</th><td>Naive Bayes</td><td>0.7608</td><td>0.8337</td><td>0.7802</td><td>0.6693</td><td>0.7179</td><td>0.5129</td><td>0.5206</td><td>0.0780</td></tr>\n",
"    <tr><th>knn</th><td>K Neighbors Classifier</td><td>0.7594</td><td>0.7989</td><td>0.6093</td><td>0.7323</td><td>0.6620</td><td>0.4782</td><td>0.4856</td><td>0.1080</td></tr>\n",
"    <tr><th>svm</th><td>SVM - Linear Kernel</td><td>0.4881</td><td>0.0000</td><td>0.7590</td><td>0.3346</td><td>0.4628</td><td>0.0615</td><td>0.1061</td><td>0.0590</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processing: 0%| | 0/26 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, l1_ratio=None, max_iter=1000,\n",
" multi_class='auto', n_jobs=None, penalty='l2',\n",
" random_state=4292, solver='lbfgs', tol=0.0001, verbose=0,\n",
" warm_start=False),\n",
" DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features=None, max_leaf_nodes=None,\n",
" min_impurity_decrease=0.0, min_samples_leaf=1,\n",
" min_samples_split=2, min_weight_fraction_leaf=0.0,\n",
" random_state=4292, splitter='best')]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_models(include=test_models, n_select=2)"
]
},
{
"cell_type": "markdown",
"id": "86aa67d8",
"metadata": {},
"source": [
"Now let's make it distributed, as a toy case, on Dask. The only change is the additional `parallel` parameter, which takes a `FugueBackend`."
]
},
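{
"cell_type": "markdown",
"id": "d1b2e3a4",
"metadata": {},
"source": [
"As an optional, hypothetical setup step (not required by PyCaret): `FugueBackend(\"dask\")` can create a local Dask execution engine on its own, but you may also start an explicit Dask client first, for example to inspect the Dask dashboard or to point at a real cluster:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e2c3f4b5",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: start an explicit local Dask client before using FugueBackend(\"dask\").\n",
"# FugueBackend(\"dask\") can also create a local Dask execution engine on its own.\n",
"from dask.distributed import Client\n",
"\n",
"client = Client(processes=False)  # local, in-process cluster for this toy case"
]
},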
{
"cell_type": "code",
"execution_count": 4,
"id": "e7e649ce",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"lr Logistic Regression 0.8330 0.8975 0.7532 0.8097 0.7791 \n",
"dt Decision Tree Classifier 0.7715 0.7625 0.7224 0.7058 0.7106 \n",
"nb Naive Bayes 0.7608 0.8337 0.7802 0.6693 0.7179 \n",
"knn K Neighbors Classifier 0.7594 0.7989 0.6093 0.7323 0.6620 \n",
"svm SVM - Linear Kernel 0.4881 0.0000 0.7590 0.3346 0.4628 \n",
"\n",
" Kappa MCC TT (Sec) \n",
"lr 0.6451 0.6475 0.214 \n",
"dt 0.5224 0.5256 0.078 \n",
"nb 0.5129 0.5206 0.209 \n",
"knn 0.4782 0.4856 0.134 \n",
"svm 0.0615 0.1061 0.058 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, l1_ratio=None, max_iter=1000,\n",
" multi_class='auto', n_jobs=None, penalty='l2',\n",
" random_state=4292, solver='lbfgs', tol=0.0001, verbose=0,\n",
" warm_start=False),\n",
" DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features=None, max_leaf_nodes=None,\n",
" min_impurity_decrease=0.0, min_samples_leaf=1,\n",
" min_samples_split=2, min_weight_fraction_leaf=0.0,\n",
" random_state=4292, splitter='best')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pycaret.parallel import FugueBackend\n",
"\n",
"compare_models(include=test_models, n_select=2, parallel=FugueBackend(\"dask\"))"
]
},
{
"cell_type": "markdown",
"id": "3953dc74",
"metadata": {},
"source": [
"To use Spark as the execution engine, you must have access to a Spark cluster and a `SparkSession`. Let's initialize a local Spark session."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "998bd694",
"metadata": {},
"outputs": [],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"spark = SparkSession.builder.getOrCreate()"
]
},
{
"cell_type": "markdown",
"id": "0f5d91d6",
"metadata": {},
"source": [
"Now just pass this session object to `FugueBackend` to make it run on Spark. Keep in mind this is a toy case: in a real scenario, you need a `SparkSession` pointing to a real Spark cluster to enjoy the power of Spark."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "87834c91",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"lr Logistic Regression 0.8330 0.8975 0.7532 0.8097 0.7791 \n",
"dt Decision Tree Classifier 0.7715 0.7625 0.7224 0.7058 0.7106 \n",
"nb Naive Bayes 0.7608 0.8337 0.7802 0.6693 0.7179 \n",
"knn K Neighbors Classifier 0.7594 0.7989 0.6093 0.7323 0.6620 \n",
"svm SVM - Linear Kernel 0.4881 0.0000 0.7590 0.3346 0.4628 \n",
"\n",
" Kappa MCC TT (Sec) \n",
"lr 0.6451 0.6475 0.678 \n",
"dt 0.5224 0.5256 0.208 \n",
"nb 0.5129 0.5206 0.213 \n",
"knn 0.4782 0.4856 0.573 \n",
"svm 0.0615 0.1061 0.059 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, l1_ratio=None, max_iter=1000,\n",
" multi_class='auto', n_jobs=None, penalty='l2',\n",
" random_state=4292, solver='lbfgs', tol=0.0001, verbose=0,\n",
" warm_start=False),\n",
" DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features=None, max_leaf_nodes=None,\n",
" min_impurity_decrease=0.0, min_samples_leaf=1,\n",
" min_samples_split=2, min_weight_fraction_leaf=0.0,\n",
" random_state=4292, splitter='best')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_models(include=test_models, n_select=2, parallel=FugueBackend(spark))"
]
},
{
"cell_type": "markdown",
"id": "c490458a",
"metadata": {},
"source": [
"At the end, you can call `pull` to get the metrics table."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f74ca178",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"lr Logistic Regression 0.8330 0.8975 0.7532 0.8097 0.7791 \n",
"dt Decision Tree Classifier 0.7715 0.7625 0.7224 0.7058 0.7106 \n",
"nb Naive Bayes 0.7608 0.8337 0.7802 0.6693 0.7179 \n",
"knn K Neighbors Classifier 0.7594 0.7989 0.6093 0.7323 0.6620 \n",
"svm SVM - Linear Kernel 0.4881 0.0000 0.7590 0.3346 0.4628 \n",
"\n",
" Kappa MCC TT (Sec) \n",
"lr 0.6451 0.6475 0.678 \n",
"dt 0.5224 0.5256 0.208 \n",
"nb 0.5129 0.5206 0.213 \n",
"knn 0.4782 0.4856 0.573 \n",
"svm 0.0615 0.1061 0.059 "
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pull()"
]
},
{
"cell_type": "markdown",
"id": "76a1c5be",
"metadata": {},
"source": [
"# Regression\n",
"\n",
"It follows the same pattern as classification."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "917c6ac4",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Description</th><th>Value</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>Session id</td><td>3514</td></tr>\n",
"    <tr><th>1</th><td>Target</td><td>charges</td></tr>\n",
"    <tr><th>2</th><td>Target type</td><td>Regression</td></tr>\n",
"    <tr><th>3</th><td>Data shape</td><td>(1338, 10)</td></tr>\n",
"    <tr><th>4</th><td>Train data shape</td><td>(936, 10)</td></tr>\n",
"    <tr><th>5</th><td>Test data shape</td><td>(402, 10)</td></tr>\n",
"    <tr><th>6</th><td>Ordinal features</td><td>2</td></tr>\n",
"    <tr><th>7</th><td>Numeric features</td><td>3</td></tr>\n",
"    <tr><th>8</th><td>Categorical features</td><td>3</td></tr>\n",
"    <tr><th>9</th><td>Preprocess</td><td>True</td></tr>\n",
"    <tr><th>10</th><td>Imputation type</td><td>simple</td></tr>\n",
"    <tr><th>11</th><td>Numeric imputation</td><td>mean</td></tr>\n",
"    <tr><th>12</th><td>Categorical imputation</td><td>constant</td></tr>\n",
"    <tr><th>13</th><td>Maximum one-hot encoding</td><td>5</td></tr>\n",
"    <tr><th>14</th><td>Encoding method</td><td>None</td></tr>\n",
"    <tr><th>15</th><td>Fold Generator</td><td>KFold</td></tr>\n",
"    <tr><th>16</th><td>Fold Number</td><td>10</td></tr>\n",
"    <tr><th>17</th><td>CPU Jobs</td><td>1</td></tr>\n",
"    <tr><th>18</th><td>Use GPU</td><td>False</td></tr>\n",
"    <tr><th>19</th><td>Log Experiment</td><td>False</td></tr>\n",
"    <tr><th>20</th><td>Experiment Name</td><td>reg-default-name</td></tr>\n",
"    <tr><th>21</th><td>USI</td><td>478f</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pycaret.datasets import get_data\n",
"from pycaret.regression import *\n",
"\n",
"setup(data=get_data(\"insurance\", verbose=False), target='charges', n_jobs=1)\n",
"\n",
"test_models = models().index.tolist()[:5]"
]
},
{
"cell_type": "markdown",
"id": "4356758c",
"metadata": {},
"source": [
"`compare_models` is also exactly the same if you don't want to use a distributed system."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bf87f67b",
"metadata": {},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Model</th><th>MAE</th><th>MSE</th><th>RMSE</th><th>R2</th><th>RMSLE</th><th>MAPE</th><th>TT (Sec)</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>lar</th><td>Least Angle Regression</td><td>4215.3750</td><td>36942784.9091</td><td>6056.6512</td><td>0.7412</td><td>0.5944</td><td>0.4301</td><td>0.0540</td></tr>\n",
"    <tr><th>lr</th><td>Linear Regression</td><td>4216.0692</td><td>36946939.1774</td><td>6057.0115</td><td>0.7412</td><td>0.5956</td><td>0.4303</td><td>0.1540</td></tr>\n",
"    <tr><th>lasso</th><td>Lasso Regression</td><td>4216.0766</td><td>36944721.4684</td><td>6056.8051</td><td>0.7412</td><td>0.5943</td><td>0.4303</td><td>0.0590</td></tr>\n",
"    <tr><th>ridge</th><td>Ridge Regression</td><td>4226.7264</td><td>36949983.8412</td><td>6057.1250</td><td>0.7413</td><td>0.5923</td><td>0.4319</td><td>0.0550</td></tr>\n",
"    <tr><th>en</th><td>Elastic Net</td><td>7260.0035</td><td>90321787.1218</td><td>9448.8041</td><td>0.3861</td><td>0.7217</td><td>0.8981</td><td>0.0540</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processing: 0%| | 0/26 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[Lars(copy_X=True, eps=2.220446049250313e-16, fit_intercept=True, fit_path=True,\n",
" jitter=None, n_nonzero_coefs=500, normalize='deprecated',\n",
" precompute='auto', random_state=3514, verbose=False),\n",
" LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1,\n",
" normalize='deprecated', positive=False)]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_models(include=test_models, n_select=2, sort=\"MAE\")"
]
},
{
"cell_type": "markdown",
"id": "8cc73849",
"metadata": {},
"source": [
"Now let's make it distributed, as a toy case, on Dask. The only change is the additional `parallel` parameter, which takes a `FugueBackend`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ee333586",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model MAE MSE RMSE R2 \\\n",
"lar Least Angle Regression 4215.3750 3.694278e+07 6056.6512 0.7412 \n",
"lr Linear Regression 4216.0692 3.694694e+07 6057.0115 0.7412 \n",
"lasso Lasso Regression 4216.0766 3.694472e+07 6056.8051 0.7412 \n",
"ridge Ridge Regression 4226.7264 3.694998e+07 6057.1250 0.7413 \n",
"en Elastic Net 7260.0035 9.032179e+07 9448.8041 0.3861 \n",
"\n",
" RMSLE MAPE TT (Sec) \n",
"lar 0.5944 0.4301 0.055 \n",
"lr 0.5956 0.4303 0.054 \n",
"lasso 0.5943 0.4303 0.056 \n",
"ridge 0.5923 0.4319 0.111 \n",
"en 0.7217 0.8981 0.236 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[Lars(copy_X=True, eps=2.220446049250313e-16, fit_intercept=True, fit_path=True,\n",
" jitter=None, n_nonzero_coefs=500, normalize='deprecated',\n",
" precompute='auto', random_state=3514, verbose=False),\n",
" LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1,\n",
" normalize='deprecated', positive=False)]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pycaret.parallel import FugueBackend\n",
"\n",
"compare_models(include=test_models, n_select=2, sort=\"MAE\", parallel=FugueBackend(\"dask\"))"
]
},
{
"cell_type": "markdown",
"id": "38ad1ddb",
"metadata": {},
"source": [
"To use Spark as the execution engine, you must have access to a Spark cluster and a `SparkSession`. Let's initialize a local Spark session."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "8221c7c3",
"metadata": {},
"outputs": [],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"spark = SparkSession.builder.getOrCreate()"
]
},
{
"cell_type": "markdown",
"id": "1ad84f4b",
"metadata": {},
"source": [
"Now just pass this session object to `FugueBackend` to make it run on Spark. Keep in mind this is a toy case: in a real scenario, you need a `SparkSession` pointing to a real Spark cluster to enjoy the power of Spark."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2ce39e6d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model MAE MSE RMSE R2 \\\n",
"lar Least Angle Regression 4215.3750 3.694278e+07 6056.6512 0.7412 \n",
"lr Linear Regression 4216.0692 3.694694e+07 6057.0115 0.7412 \n",
"lasso Lasso Regression 4216.0766 3.694472e+07 6056.8051 0.7412 \n",
"ridge Ridge Regression 4226.7264 3.694998e+07 6057.1250 0.7413 \n",
"en Elastic Net 7260.0035 9.032179e+07 9448.8041 0.3861 \n",
"\n",
" RMSLE MAPE TT (Sec) \n",
"lar 0.5944 0.4301 0.098 \n",
"lr 0.5956 0.4303 0.100 \n",
"lasso 0.5943 0.4303 0.094 \n",
"ridge 0.5923 0.4319 0.053 \n",
"en 0.7217 0.8981 0.092 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[Lars(copy_X=True, eps=2.220446049250313e-16, fit_intercept=True, fit_path=True,\n",
" jitter=None, n_nonzero_coefs=500, normalize='deprecated',\n",
" precompute='auto', random_state=3514, verbose=False),\n",
" LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1,\n",
" normalize='deprecated', positive=False)]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_models(include=test_models, n_select=2, sort=\"MAE\", parallel=FugueBackend(spark))"
]
},
{
"cell_type": "markdown",
"id": "789fd969",
"metadata": {},
"source": [
"At the end, you can call `pull` to get the metrics table."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "ecdd02a4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model MAE MSE RMSE R2 \\\n",
"lar Least Angle Regression 4215.3750 3.694278e+07 6056.6512 0.7412 \n",
"lr Linear Regression 4216.0692 3.694694e+07 6057.0115 0.7412 \n",
"lasso Lasso Regression 4216.0766 3.694472e+07 6056.8051 0.7412 \n",
"ridge Ridge Regression 4226.7264 3.694998e+07 6057.1250 0.7413 \n",
"en Elastic Net 7260.0035 9.032179e+07 9448.8041 0.3861 \n",
"\n",
" RMSLE MAPE TT (Sec) \n",
"lar 0.5944 0.4301 0.098 \n",
"lr 0.5956 0.4303 0.100 \n",
"lasso 0.5943 0.4303 0.094 \n",
"ridge 0.5923 0.4319 0.053 \n",
"en 0.7217 0.8981 0.092 "
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pull()"
]
},
{
"cell_type": "markdown",
"id": "981a9c79",
"metadata": {},
"source": [
"As you can see, the results from the distributed versions can differ from your local results. In later sections, we will show how to make them identical.\n",
"\n",
"# Time Series\n",
"\n",
"It follows the same pattern as classification.\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "ac63eb2e",
"metadata": {},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Description</th><th>Value</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>session_id</td><td>42</td></tr>\n",
"    <tr><th>1</th><td>Target</td><td>Number of airline passengers</td></tr>\n",
"    <tr><th>2</th><td>Approach</td><td>Univariate</td></tr>\n",
"    <tr><th>3</th><td>Exogenous Variables</td><td>Not Present</td></tr>\n",
"    <tr><th>4</th><td>Original data shape</td><td>(144, 1)</td></tr>\n",
"    <tr><th>5</th><td>Transformed data shape</td><td>(144, 1)</td></tr>\n",
"    <tr><th>6</th><td>Transformed train set shape</td><td>(132, 1)</td></tr>\n",
"    <tr><th>7</th><td>Transformed test set shape</td><td>(12, 1)</td></tr>\n",
"    <tr><th>8</th><td>Rows with missing values</td><td>0.0%</td></tr>\n",
"    <tr><th>9</th><td>Fold Generator</td><td>ExpandingWindowSplitter</td></tr>\n",
"    <tr><th>10</th><td>Fold Number</td><td>3</td></tr>\n",
"    <tr><th>11</th><td>Enforce Prediction Interval</td><td>False</td></tr>\n",
"    <tr><th>12</th><td>Seasonal Period(s) Tested</td><td>12</td></tr>\n",
"    <tr><th>13</th><td>Seasonality Present</td><td>True</td></tr>\n",
"    <tr><th>14</th><td>Seasonalities Detected</td><td>[12]</td></tr>\n",
"    <tr><th>15</th><td>Primary Seasonality</td><td>12</td></tr>\n",
"    <tr><th>16</th><td>Target Strictly Positive</td><td>True</td></tr>\n",
"    <tr><th>17</th><td>Target White Noise</td><td>No</td></tr>\n",
"    <tr><th>18</th><td>Recommended d</td><td>1</td></tr>\n",
"    <tr><th>19</th><td>Recommended Seasonal D</td><td>1</td></tr>\n",
"    <tr><th>20</th><td>Preprocess</td><td>False</td></tr>\n",
"    <tr><th>21</th><td>CPU Jobs</td><td>-1</td></tr>\n",
"    <tr><th>22</th><td>Use GPU</td><td>False</td></tr>\n",
"    <tr><th>23</th><td>Log Experiment</td><td>False</td></tr>\n",
"    <tr><th>24</th><td>Experiment Name</td><td>ts-default-name</td></tr>\n",
"    <tr><th>25</th><td>USI</td><td>49cf</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pycaret.datasets import get_data\n",
"from pycaret.time_series import *\n",
"\n",
"exp = TSForecastingExperiment()\n",
"exp.setup(data=get_data('airline', verbose=False), fh=12, fold=3, fig_kwargs={'renderer': 'notebook'}, session_id=42)\n",
"\n",
"test_models = exp.models().index.tolist()[:5]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "cbb457fe",
"metadata": {},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr><th></th><th>Model</th><th>MASE</th><th>RMSSE</th><th>MAE</th><th>RMSE</th><th>MAPE</th><th>SMAPE</th><th>R2</th><th>TT (Sec)</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>arima</th><td>ARIMA</td><td>0.6830</td><td>0.6735</td><td>20.0069</td><td>22.2199</td><td>0.0501</td><td>0.0507</td><td>0.8677</td><td>0.3200</td></tr>\n",
"    <tr><th>snaive</th><td>Seasonal Naive Forecaster</td><td>1.1479</td><td>1.0945</td><td>33.3611</td><td>35.9139</td><td>0.0832</td><td>0.0879</td><td>0.6072</td><td>0.0200</td></tr>\n",
"    <tr><th>polytrend</th><td>Polynomial Trend Forecaster</td><td>1.6523</td><td>1.9202</td><td>48.6301</td><td>63.4299</td><td>0.1170</td><td>0.1216</td><td>-0.0784</td><td>0.0167</td></tr>\n",
"    <tr><th>naive</th><td>Naive Forecaster</td><td>2.3599</td><td>2.7612</td><td>69.0278</td><td>91.0322</td><td>0.1569</td><td>0.1792</td><td>-1.2216</td><td>1.0600</td></tr>\n",
"    <tr><th>grand_means</th><td>Grand Means Forecaster</td><td>5.5306</td><td>5.2596</td><td>162.4117</td><td>173.6492</td><td>0.4000</td><td>0.5075</td><td>-7.0462</td><td>1.2700</td></tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processing: 0%| | 0/27 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[ARIMA(maxiter=50, method='lbfgs', order=(1, 0, 0), out_of_sample_size=0,\n",
" scoring='mse', scoring_args=None, seasonal_order=(0, 1, 0, 12),\n",
" start_params=None, suppress_warnings=False, trend=None,\n",
" with_intercept=True),\n",
" NaiveForecaster(sp=12, strategy='last', window_length=None),\n",
" PolynomialTrendForecaster(degree=1, regressor=None, with_intercept=True)]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"best_baseline_models = exp.compare_models(include=test_models, n_select=3)\n",
"best_baseline_models"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "d99c5131",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" Model MASE RMSSE MAE RMSE \\\n",
"arima ARIMA 0.683 0.6735 20.0069 22.2199 \n",
"snaive Seasonal Naive Forecaster 1.1479 1.0945 33.3611 35.9139 \n",
"polytrend Polynomial Trend Forecaster 1.6523 1.9202 48.6301 63.4299 \n",
"naive Naive Forecaster 2.3599 2.7612 69.0278 91.0322 \n",
"grand_means Grand Means Forecaster 5.5306 5.2596 162.4117 173.6492 \n",
"\n",
" MAPE SMAPE R2 TT (Sec) \n",
"arima 0.0501 0.0507 0.8677 0.1267 \n",
"snaive 0.0832 0.0879 0.6072 0.0367 \n",
"polytrend 0.117 0.1216 -0.0784 0.0133 \n",
"naive 0.1569 0.1792 -1.2216 0.0200 \n",
"grand_means 0.4 0.5075 -7.0462 0.0233 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[ARIMA(maxiter=50, method='lbfgs', order=(1, 0, 0), out_of_sample_size=0,\n",
" scoring='mse', scoring_args=None, seasonal_order=(0, 1, 0, 12),\n",
" start_params=None, suppress_warnings=False, trend=None,\n",
" with_intercept=True),\n",
" NaiveForecaster(sp=12, strategy='last', window_length=None),\n",
" PolynomialTrendForecaster(degree=1, regressor=None, with_intercept=True)]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pycaret.parallel import FugueBackend\n",
"\n",
"best_baseline_models = exp.compare_models(include=test_models, n_select=3, parallel=FugueBackend(\"dask\"))\n",
"best_baseline_models"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "45e191f9",
"metadata": {},
"outputs": [],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"spark = SparkSession.builder.getOrCreate()"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "ed579ca3",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" MASE | \n",
" RMSSE | \n",
" MAE | \n",
" RMSE | \n",
" MAPE | \n",
" SMAPE | \n",
" R2 | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" naive | \n",
" Naive Forecaster | \n",
" 2.3599 | \n",
" 2.7612 | \n",
" 69.0278 | \n",
" 91.0322 | \n",
" 0.1569 | \n",
" 0.1792 | \n",
" -1.2216 | \n",
" 2.5600 | \n",
"
\n",
" \n",
" grand_means | \n",
" Grand Means Forecaster | \n",
" 5.5306 | \n",
" 5.2596 | \n",
" 162.4117 | \n",
" 173.6492 | \n",
" 0.4 | \n",
" 0.5075 | \n",
" -7.0462 | \n",
" 2.5267 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model MASE RMSSE MAE RMSE \\\n",
"naive Naive Forecaster 2.3599 2.7612 69.0278 91.0322 \n",
"grand_means Grand Means Forecaster 5.5306 5.2596 162.4117 173.6492 \n",
"\n",
" MAPE SMAPE R2 TT (Sec) \n",
"naive 0.1569 0.1792 -1.2216 2.5600 \n",
"grand_means 0.4 0.5075 -7.0462 2.5267 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[NaiveForecaster(sp=1, strategy='last', window_length=None),\n",
" NaiveForecaster(sp=1, strategy='mean', window_length=None)]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pycaret.parallel import FugueBackend\n",
"\n",
"best_baseline_models = exp.compare_models(include=test_models[:2], n_select=3, parallel=FugueBackend(spark))\n",
"best_baseline_models"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "3eb73043",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" MASE | \n",
" RMSSE | \n",
" MAE | \n",
" RMSE | \n",
" MAPE | \n",
" SMAPE | \n",
" R2 | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" naive | \n",
" Naive Forecaster | \n",
" 2.3599 | \n",
" 2.7612 | \n",
" 69.0278 | \n",
" 91.0322 | \n",
" 0.1569 | \n",
" 0.1792 | \n",
" -1.2216 | \n",
" 2.5600 | \n",
"
\n",
" \n",
" grand_means | \n",
" Grand Means Forecaster | \n",
" 5.5306 | \n",
" 5.2596 | \n",
" 162.4117 | \n",
" 173.6492 | \n",
" 0.4 | \n",
" 0.5075 | \n",
" -7.0462 | \n",
" 2.5267 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model MASE RMSSE MAE RMSE \\\n",
"naive Naive Forecaster 2.3599 2.7612 69.0278 91.0322 \n",
"grand_means Grand Means Forecaster 5.5306 5.2596 162.4117 173.6492 \n",
"\n",
" MAPE SMAPE R2 TT (Sec) \n",
"naive 0.1569 0.1792 -1.2216 2.5600 \n",
"grand_means 0.4 0.5075 -7.0462 2.5267 "
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"exp.pull()"
]
},
{
"cell_type": "markdown",
"id": "c910b81c",
"metadata": {},
"source": [
"# A more practical case\n",
"\n",
"The above examples are pure toys, to make things work perfectly in a distributed system you must be careful about a few things\n",
"\n",
"# Use a lambda instead of a dataframe in setup\n",
"\n",
"If you directly provide a dataframe in `setup`, this dataset will need to be sent to all worker nodes. If the dataframe is 1G, you have 100 workers, then it is possible your dirver machine will need to send out up to 100G data (depending on specific framework's implementation), then this data transfer becomes a bottleneck itself. Instead, if you provide a lambda function, it doesn't change the local compute scenario, but the driver will only send the function reference to workers, and each worker will be responsible to load the data by themselves, so there is no heavy traffic on the driver side.\n",
"\n",
"# Be deterministic\n",
"\n",
"You should always use `session_id` to make the distributed compute deterministic.\n",
"\n",
"# Set n_jobs\n",
"\n",
"It is important to be explicit on n_jobs when you want to run something distributedly, so it will not overuse the local/remote resources. This can also avoid resrouce contention, and make the compute faster."
]
},
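{
"cell_type": "markdown",
"id": "a7f3c2d1",
"metadata": {},
"source": [
"To see why shipping a function beats shipping the data, here is a toy sketch with the standard `pickle` module (a plain list stands in for a large dataframe, and `load_data` is a hypothetical loader, not a PyCaret API): the serialized function reference is tiny compared to the serialized data itself.\n",
"\n",
"```python\n",
"import pickle\n",
"\n",
"data = list(range(1_000_000))  # stands in for a large dataframe\n",
"\n",
"def load_data():\n",
"    # each worker rebuilds the data itself instead of receiving it\n",
"    return list(range(1_000_000))\n",
"\n",
"payload_data = len(pickle.dumps(data))       # several megabytes\n",
"payload_func = len(pickle.dumps(load_data))  # just a name reference\n",
"assert payload_func < payload_data / 1000\n",
"```"
]
},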
{
"cell_type": "code",
"execution_count": 1,
"id": "1d76ddae",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"\n",
"\n",
" \n",
" \n",
" | \n",
" Description | \n",
" Value | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" Session id | \n",
" 0 | \n",
"
\n",
" \n",
" 1 | \n",
" Target | \n",
" Purchase | \n",
"
\n",
" \n",
" 2 | \n",
" Target type | \n",
" Binary | \n",
"
\n",
" \n",
" 3 | \n",
" Target mapping | \n",
" CH: 0, MM: 1 | \n",
"
\n",
" \n",
" 4 | \n",
" Original data shape | \n",
" (1070, 19) | \n",
"
\n",
" \n",
" 5 | \n",
" Transformed data shape | \n",
" (1070, 19) | \n",
"
\n",
" \n",
" 6 | \n",
" Transformed train set shape | \n",
" (748, 19) | \n",
"
\n",
" \n",
" 7 | \n",
" Transformed test set shape | \n",
" (322, 19) | \n",
"
\n",
" \n",
" 8 | \n",
" Ordinal features | \n",
" 1 | \n",
"
\n",
" \n",
" 9 | \n",
" Numeric features | \n",
" 17 | \n",
"
\n",
" \n",
" 10 | \n",
" Categorical features | \n",
" 1 | \n",
"
\n",
" \n",
" 11 | \n",
" Preprocess | \n",
" True | \n",
"
\n",
" \n",
" 12 | \n",
" Imputation type | \n",
" simple | \n",
"
\n",
" \n",
" 13 | \n",
" Numeric imputation | \n",
" mean | \n",
"
\n",
" \n",
" 14 | \n",
" Categorical imputation | \n",
" constant | \n",
"
\n",
" \n",
" 15 | \n",
" Maximum one-hot encoding | \n",
" 5 | \n",
"
\n",
" \n",
" 16 | \n",
" Encoding method | \n",
" None | \n",
"
\n",
" \n",
" 17 | \n",
" Fold Generator | \n",
" StratifiedKFold | \n",
"
\n",
" \n",
" 18 | \n",
" Fold Number | \n",
" 10 | \n",
"
\n",
" \n",
" 19 | \n",
" CPU Jobs | \n",
" 1 | \n",
"
\n",
" \n",
" 20 | \n",
" Use GPU | \n",
" False | \n",
"
\n",
" \n",
" 21 | \n",
" Log Experiment | \n",
" False | \n",
"
\n",
" \n",
" 22 | \n",
" Experiment Name | \n",
" clf-default-name | \n",
"
\n",
" \n",
" 23 | \n",
" USI | \n",
" ae18 | \n",
"
\n",
" \n",
"
\n"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pycaret.datasets import get_data\n",
"from pycaret.classification import *\n",
"\n",
"setup(data_func=lambda: get_data(\"juice\", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=1);"
]
},
{
"cell_type": "markdown",
"id": "2fc80912",
"metadata": {},
"source": [
"# Set the appropriate batch_size\n",
"\n",
"`batch_size` parameter helps adjust between load balence and overhead. For each batch, setup will be called only once. So\n",
"\n",
"| Choice |Load Balance|Overhead|Best Scenario|\n",
"|---|---|---|---|\n",
"|Smaller batch size|Better|Worse|`training time >> data loading time` or `models ~= workers`|\n",
"|Larger batch size|Worse|Better|`training time << data loading time` or `models >> workers`|\n",
"\n",
"The default value is set to `1`, meaning we want the best load balance.\n",
"\n",
"# Display progress\n",
"\n",
"In development, you can enable visual effect by `display_remote=True`, but meanwhile you must also enable [Fugue Callback](https://fugue-tutorials.readthedocs.io/tutorials/advanced/rpc.html) so that the driver can monitor worker progress. But it is recommended to turn off display in production."
]
},
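{
"cell_type": "markdown",
"id": "b8e4d3c2",
"metadata": {},
"source": [
"Because `setup` runs once per batch, the number of `setup` calls (the overhead) shrinks as `batch_size` grows, while larger batches give the scheduler fewer units to balance. A small sketch of the arithmetic (`num_setup_calls` is a hypothetical helper for illustration, not a PyCaret API):\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def num_setup_calls(n_models: int, batch_size: int) -> int:\n",
"    # setup runs once per batch of models\n",
"    return math.ceil(n_models / batch_size)\n",
"\n",
"assert num_setup_calls(14, 1) == 14   # best load balance, most overhead\n",
"assert num_setup_calls(14, 3) == 5\n",
"assert num_setup_calls(14, 14) == 1   # least overhead, worst load balance\n",
"```"
]
},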
{
"cell_type": "code",
"execution_count": 9,
"id": "9775c4f4",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" Accuracy | \n",
" AUC | \n",
" Recall | \n",
" Prec. | \n",
" F1 | \n",
" Kappa | \n",
" MCC | \n",
" DUMMY | \n",
" DUMMY2 | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" ridge | \n",
" Ridge Classifier | \n",
" 0.8383 | \n",
" 0.0000 | \n",
" 0.7802 | \n",
" 0.8085 | \n",
" 0.7896 | \n",
" 0.6585 | \n",
" 0.6637 | \n",
" 0.0 | \n",
" 0.0 | \n",
" 0.099 | \n",
"
\n",
" \n",
" lda | \n",
" Linear Discriminant Analysis | \n",
" 0.8329 | \n",
" 0.8986 | \n",
" 0.7701 | \n",
" 0.8044 | \n",
" 0.7824 | \n",
" 0.6472 | \n",
" 0.6522 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.132 | \n",
"
\n",
" \n",
" lr | \n",
" Logistic Regression | \n",
" 0.8303 | \n",
" 0.8959 | \n",
" 0.7530 | \n",
" 0.8053 | \n",
" 0.7748 | \n",
" 0.6391 | \n",
" 0.6433 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.271 | \n",
"
\n",
" \n",
" gbc | \n",
" Gradient Boosting Classifier | \n",
" 0.8195 | \n",
" 0.8982 | \n",
" 0.7562 | \n",
" 0.7870 | \n",
" 0.7656 | \n",
" 0.6193 | \n",
" 0.6260 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.263 | \n",
"
\n",
" \n",
" lightgbm | \n",
" Light Gradient Boosting Machine | \n",
" 0.8047 | \n",
" 0.8828 | \n",
" 0.7492 | \n",
" 0.7585 | \n",
" 0.7482 | \n",
" 0.5893 | \n",
" 0.5950 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.128 | \n",
"
\n",
" \n",
" ada | \n",
" Ada Boost Classifier | \n",
" 0.7968 | \n",
" 0.8789 | \n",
" 0.7326 | \n",
" 0.7499 | \n",
" 0.7388 | \n",
" 0.5727 | \n",
" 0.5751 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.178 | \n",
"
\n",
" \n",
" rf | \n",
" Random Forest Classifier | \n",
" 0.7955 | \n",
" 0.8731 | \n",
" 0.7256 | \n",
" 0.7500 | \n",
" 0.7338 | \n",
" 0.5682 | \n",
" 0.5727 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.243 | \n",
"
\n",
" \n",
" dt | \n",
" Decision Tree Classifier | \n",
" 0.7795 | \n",
" 0.7711 | \n",
" 0.7328 | \n",
" 0.7168 | \n",
" 0.7201 | \n",
" 0.5389 | \n",
" 0.5441 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.082 | \n",
"
\n",
" \n",
" et | \n",
" Extra Trees Classifier | \n",
" 0.7714 | \n",
" 0.8479 | \n",
" 0.6951 | \n",
" 0.7213 | \n",
" 0.7038 | \n",
" 0.5183 | \n",
" 0.5225 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.214 | \n",
"
\n",
" \n",
" nb | \n",
" Naive Bayes | \n",
" 0.7621 | \n",
" 0.8255 | \n",
" 0.7255 | \n",
" 0.6825 | \n",
" 0.7009 | \n",
" 0.5039 | \n",
" 0.5074 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.080 | \n",
"
\n",
" \n",
" knn | \n",
" K Neighbors Classifier | \n",
" 0.7528 | \n",
" 0.8053 | \n",
" 0.6231 | \n",
" 0.7208 | \n",
" 0.6642 | \n",
" 0.4703 | \n",
" 0.4770 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.083 | \n",
"
\n",
" \n",
" qda | \n",
" Quadratic Discriminant Analysis | \n",
" 0.6510 | \n",
" 0.6349 | \n",
" 0.4546 | \n",
" 0.7617 | \n",
" 0.4426 | \n",
" 0.2377 | \n",
" 0.3086 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.077 | \n",
"
\n",
" \n",
" dummy | \n",
" Dummy Classifier | \n",
" 0.6096 | \n",
" 0.5000 | \n",
" 0.0000 | \n",
" 0.0000 | \n",
" 0.0000 | \n",
" 0.0000 | \n",
" 0.0000 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.072 | \n",
"
\n",
" \n",
" svm | \n",
" SVM - Linear Kernel | \n",
" 0.5677 | \n",
" 0.0000 | \n",
" 0.2690 | \n",
" 0.2077 | \n",
" 0.1901 | \n",
" 0.0290 | \n",
" 0.0396 | \n",
" 0.0 | \n",
" 0.0 | \n",
" 0.201 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model Accuracy AUC Recall Prec. \\\n",
"ridge Ridge Classifier 0.8383 0.0000 0.7802 0.8085 \n",
"lda Linear Discriminant Analysis 0.8329 0.8986 0.7701 0.8044 \n",
"lr Logistic Regression 0.8303 0.8959 0.7530 0.8053 \n",
"gbc Gradient Boosting Classifier 0.8195 0.8982 0.7562 0.7870 \n",
"lightgbm Light Gradient Boosting Machine 0.8047 0.8828 0.7492 0.7585 \n",
"ada Ada Boost Classifier 0.7968 0.8789 0.7326 0.7499 \n",
"rf Random Forest Classifier 0.7955 0.8731 0.7256 0.7500 \n",
"dt Decision Tree Classifier 0.7795 0.7711 0.7328 0.7168 \n",
"et Extra Trees Classifier 0.7714 0.8479 0.6951 0.7213 \n",
"nb Naive Bayes 0.7621 0.8255 0.7255 0.6825 \n",
"knn K Neighbors Classifier 0.7528 0.8053 0.6231 0.7208 \n",
"qda Quadratic Discriminant Analysis 0.6510 0.6349 0.4546 0.7617 \n",
"dummy Dummy Classifier 0.6096 0.5000 0.0000 0.0000 \n",
"svm SVM - Linear Kernel 0.5677 0.0000 0.2690 0.2077 \n",
"\n",
" F1 Kappa MCC DUMMY DUMMY2 TT (Sec) \n",
"ridge 0.7896 0.6585 0.6637 0.0 0.0 0.099 \n",
"lda 0.7824 0.6472 0.6522 0.0 1.0 0.132 \n",
"lr 0.7748 0.6391 0.6433 0.0 1.0 0.271 \n",
"gbc 0.7656 0.6193 0.6260 0.0 1.0 0.263 \n",
"lightgbm 0.7482 0.5893 0.5950 0.0 1.0 0.128 \n",
"ada 0.7388 0.5727 0.5751 0.0 1.0 0.178 \n",
"rf 0.7338 0.5682 0.5727 0.0 1.0 0.243 \n",
"dt 0.7201 0.5389 0.5441 0.0 1.0 0.082 \n",
"et 0.7038 0.5183 0.5225 0.0 1.0 0.214 \n",
"nb 0.7009 0.5039 0.5074 0.0 1.0 0.080 \n",
"knn 0.6642 0.4703 0.4770 0.0 1.0 0.083 \n",
"qda 0.4426 0.2377 0.3086 0.0 1.0 0.077 \n",
"dummy 0.0000 0.0000 0.0000 0.0 1.0 0.072 \n",
"svm 0.1901 0.0290 0.0396 0.0 0.0 0.201 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processing: 0%| | 0/14 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[RidgeClassifier(alpha=1.0, class_weight=None, copy_X=True, fit_intercept=True,\n",
" max_iter=None, normalize='deprecated', positive=False,\n",
" random_state=0, solver='auto', tol=0.001),\n",
" LinearDiscriminantAnalysis(covariance_estimator=None, n_components=None,\n",
" priors=None, shrinkage=None, solver='svd',\n",
" store_covariance=False, tol=0.0001)]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pycaret.parallel import FugueBackend\n",
"\n",
"fconf = {\n",
" \"fugue.rpc.server\": \"fugue.rpc.flask.FlaskRPCServer\", # keep this value\n",
" \"fugue.rpc.flask_server.host\": \"0.0.0.0\", # the driver ip address workers can access\n",
" \"fugue.rpc.flask_server.port\": \"3333\", # the open port on the dirver\n",
" \"fugue.rpc.flask_server.timeout\": \"2 sec\", # the timeout for worker to talk to driver\n",
"}\n",
"\n",
"be = FugueBackend(\"dask\", fconf, display_remote=True, batch_size=3, top_only=False)\n",
"compare_models(n_select=2, parallel=be)"
]
},
{
"cell_type": "markdown",
"id": "d697e56c",
"metadata": {},
"source": [
"# Custom Metrics\n",
"\n",
"You can add custom metrics like before. But in order to make the scorer distributable, it must be serializable. A common function should be fine, but if inside the function, it is using some global variables that are not serializable (for example an `RLock` object), it can cause issues. So try to make the custom function independent from global variables."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2614b869",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Name DUMMY\n",
"Display Name DUMMY\n",
"Score Function \n",
"Scorer make_scorer(score_dummy, greater_is_better=False)\n",
"Target pred\n",
"Args {}\n",
"Greater is Better False\n",
"Multiclass True\n",
"Custom True\n",
"Name: mydummy, dtype: object"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def score_dummy(y_true, y_pred, axis=0):\n",
" return 0.0\n",
"\n",
"add_metric(id = 'mydummy',\n",
" name = 'DUMMY',\n",
" score_func = score_dummy,\n",
" target = 'pred',\n",
" greater_is_better = False,\n",
" )"
]
},
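{
"cell_type": "markdown",
"id": "c9f5e4d3",
"metadata": {},
"source": [
"A quick sanity check for distributability is to try serializing the scorer. This sketch uses the standard `pickle` module (the actual backend may use a different serializer, so treat it as a heuristic); `make_bad_scorer` and `is_serializable` are hypothetical helpers for illustration:\n",
"\n",
"```python\n",
"import pickle\n",
"import threading\n",
"\n",
"def score_dummy(y_true, y_pred, axis=0):\n",
"    # a plain top-level function: serializable, safe to distribute\n",
"    return 0.0\n",
"\n",
"def make_bad_scorer():\n",
"    lock = threading.RLock()  # RLock objects cannot be pickled\n",
"    def score(y_true, y_pred, axis=0):\n",
"        with lock:\n",
"            return 0.0\n",
"    return score\n",
"\n",
"def is_serializable(fn):\n",
"    try:\n",
"        pickle.dumps(fn)\n",
"        return True\n",
"    except Exception:\n",
"        return False\n",
"\n",
"assert is_serializable(score_dummy)\n",
"assert not is_serializable(make_bad_scorer())  # local closure: not picklable\n",
"```"
]
},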
{
"cell_type": "markdown",
"id": "7ccaa531",
"metadata": {},
"source": [
"Adding a function in a class instance is also ok, but make sure all member variables in the class are serializable."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "83576a2d",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" Accuracy | \n",
" AUC | \n",
" Recall | \n",
" Prec. | \n",
" F1 | \n",
" Kappa | \n",
" MCC | \n",
" DUMMY | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" dt | \n",
" Decision Tree Classifier | \n",
" 0.7795 | \n",
" 0.7711 | \n",
" 0.7328 | \n",
" 0.7168 | \n",
" 0.7201 | \n",
" 0.5389 | \n",
" 0.5441 | \n",
" 0.0 | \n",
" 0.240 | \n",
"
\n",
" \n",
" lr | \n",
" Logistic Regression | \n",
" 0.8303 | \n",
" 0.8959 | \n",
" 0.7530 | \n",
" 0.8053 | \n",
" 0.7748 | \n",
" 0.6391 | \n",
" 0.6433 | \n",
" 0.0 | \n",
" 0.306 | \n",
"
\n",
" \n",
" nb | \n",
" Naive Bayes | \n",
" 0.7621 | \n",
" 0.8255 | \n",
" 0.7255 | \n",
" 0.6825 | \n",
" 0.7009 | \n",
" 0.5039 | \n",
" 0.5074 | \n",
" 0.0 | \n",
" 0.130 | \n",
"
\n",
" \n",
" knn | \n",
" K Neighbors Classifier | \n",
" 0.7528 | \n",
" 0.8053 | \n",
" 0.6231 | \n",
" 0.7208 | \n",
" 0.6642 | \n",
" 0.4703 | \n",
" 0.4770 | \n",
" 0.0 | \n",
" 0.097 | \n",
"
\n",
" \n",
" svm | \n",
" SVM - Linear Kernel | \n",
" 0.5677 | \n",
" 0.0000 | \n",
" 0.2690 | \n",
" 0.2077 | \n",
" 0.1901 | \n",
" 0.0290 | \n",
" 0.0396 | \n",
" 0.0 | \n",
" 0.102 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"dt Decision Tree Classifier 0.7795 0.7711 0.7328 0.7168 0.7201 \n",
"lr Logistic Regression 0.8303 0.8959 0.7530 0.8053 0.7748 \n",
"nb Naive Bayes 0.7621 0.8255 0.7255 0.6825 0.7009 \n",
"knn K Neighbors Classifier 0.7528 0.8053 0.6231 0.7208 0.6642 \n",
"svm SVM - Linear Kernel 0.5677 0.0000 0.2690 0.2077 0.1901 \n",
"\n",
" Kappa MCC DUMMY TT (Sec) \n",
"dt 0.5389 0.5441 0.0 0.240 \n",
"lr 0.6391 0.6433 0.0 0.306 \n",
"nb 0.5039 0.5074 0.0 0.130 \n",
"knn 0.4703 0.4770 0.0 0.097 \n",
"svm 0.0290 0.0396 0.0 0.102 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features=None, max_leaf_nodes=None,\n",
" min_impurity_decrease=0.0, min_samples_leaf=1,\n",
" min_samples_split=2, min_weight_fraction_leaf=0.0,\n",
" random_state=0, splitter='best'),\n",
" LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, l1_ratio=None, max_iter=1000,\n",
" multi_class='auto', n_jobs=None, penalty='l2',\n",
" random_state=0, solver='lbfgs', tol=0.0001, verbose=0,\n",
" warm_start=False)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_models = models().index.tolist()[:5]\n",
"compare_models(include=test_models, n_select=2, sort=\"DUMMY\", parallel=FugueBackend(\"dask\"))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "04d5e7c9",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" Accuracy | \n",
" AUC | \n",
" Recall | \n",
" Prec. | \n",
" F1 | \n",
" Kappa | \n",
" MCC | \n",
" DUMMY | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" dt | \n",
" Decision Tree Classifier | \n",
" 0.7795 | \n",
" 0.7711 | \n",
" 0.7328 | \n",
" 0.7168 | \n",
" 0.7201 | \n",
" 0.5389 | \n",
" 0.5441 | \n",
" 0.0 | \n",
" 0.240 | \n",
"
\n",
" \n",
" lr | \n",
" Logistic Regression | \n",
" 0.8303 | \n",
" 0.8959 | \n",
" 0.7530 | \n",
" 0.8053 | \n",
" 0.7748 | \n",
" 0.6391 | \n",
" 0.6433 | \n",
" 0.0 | \n",
" 0.306 | \n",
"
\n",
" \n",
" nb | \n",
" Naive Bayes | \n",
" 0.7621 | \n",
" 0.8255 | \n",
" 0.7255 | \n",
" 0.6825 | \n",
" 0.7009 | \n",
" 0.5039 | \n",
" 0.5074 | \n",
" 0.0 | \n",
" 0.130 | \n",
"
\n",
" \n",
" knn | \n",
" K Neighbors Classifier | \n",
" 0.7528 | \n",
" 0.8053 | \n",
" 0.6231 | \n",
" 0.7208 | \n",
" 0.6642 | \n",
" 0.4703 | \n",
" 0.4770 | \n",
" 0.0 | \n",
" 0.097 | \n",
"
\n",
" \n",
" svm | \n",
" SVM - Linear Kernel | \n",
" 0.5677 | \n",
" 0.0000 | \n",
" 0.2690 | \n",
" 0.2077 | \n",
" 0.1901 | \n",
" 0.0290 | \n",
" 0.0396 | \n",
" 0.0 | \n",
" 0.102 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"dt Decision Tree Classifier 0.7795 0.7711 0.7328 0.7168 0.7201 \n",
"lr Logistic Regression 0.8303 0.8959 0.7530 0.8053 0.7748 \n",
"nb Naive Bayes 0.7621 0.8255 0.7255 0.6825 0.7009 \n",
"knn K Neighbors Classifier 0.7528 0.8053 0.6231 0.7208 0.6642 \n",
"svm SVM - Linear Kernel 0.5677 0.0000 0.2690 0.2077 0.1901 \n",
"\n",
" Kappa MCC DUMMY TT (Sec) \n",
"dt 0.5389 0.5441 0.0 0.240 \n",
"lr 0.6391 0.6433 0.0 0.306 \n",
"nb 0.5039 0.5074 0.0 0.130 \n",
"knn 0.4703 0.4770 0.0 0.097 \n",
"svm 0.0290 0.0396 0.0 0.102 "
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pull()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8f1d99c5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Name DUMMY2\n",
"Display Name DUMMY2\n",
"Score Function \n",
"\n",
"\n",
" \n",
" \n",
" | \n",
" Model | \n",
" Accuracy | \n",
" AUC | \n",
" Recall | \n",
" Prec. | \n",
" F1 | \n",
" Kappa | \n",
" MCC | \n",
" DUMMY | \n",
" DUMMY2 | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" dt | \n",
" Decision Tree Classifier | \n",
" 0.7795 | \n",
" 0.7711 | \n",
" 0.7328 | \n",
" 0.7168 | \n",
" 0.7201 | \n",
" 0.5389 | \n",
" 0.5441 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.237 | \n",
"
\n",
" \n",
" lr | \n",
" Logistic Regression | \n",
" 0.8303 | \n",
" 0.8959 | \n",
" 0.7530 | \n",
" 0.8053 | \n",
" 0.7748 | \n",
" 0.6391 | \n",
" 0.6433 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.399 | \n",
"
\n",
" \n",
" nb | \n",
" Naive Bayes | \n",
" 0.7621 | \n",
" 0.8255 | \n",
" 0.7255 | \n",
" 0.6825 | \n",
" 0.7009 | \n",
" 0.5039 | \n",
" 0.5074 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.077 | \n",
"
\n",
" \n",
" knn | \n",
" K Neighbors Classifier | \n",
" 0.7528 | \n",
" 0.8053 | \n",
" 0.6231 | \n",
" 0.7208 | \n",
" 0.6642 | \n",
" 0.4703 | \n",
" 0.4770 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.082 | \n",
"
\n",
" \n",
" svm | \n",
" SVM - Linear Kernel | \n",
" 0.5677 | \n",
" 0.0000 | \n",
" 0.2690 | \n",
" 0.2077 | \n",
" 0.1901 | \n",
" 0.0290 | \n",
" 0.0396 | \n",
" 0.0 | \n",
" 0.0 | \n",
" 0.104 | \n",
"
\n",
" \n",
"
\n",
""
],
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"dt Decision Tree Classifier 0.7795 0.7711 0.7328 0.7168 0.7201 \n",
"lr Logistic Regression 0.8303 0.8959 0.7530 0.8053 0.7748 \n",
"nb Naive Bayes 0.7621 0.8255 0.7255 0.6825 0.7009 \n",
"knn K Neighbors Classifier 0.7528 0.8053 0.6231 0.7208 0.6642 \n",
"svm SVM - Linear Kernel 0.5677 0.0000 0.2690 0.2077 0.1901 \n",
"\n",
" Kappa MCC DUMMY DUMMY2 TT (Sec) \n",
"dt 0.5389 0.5441 0.0 1.0 0.237 \n",
"lr 0.6391 0.6433 0.0 1.0 0.399 \n",
"nb 0.5039 0.5074 0.0 1.0 0.077 \n",
"knn 0.4703 0.4770 0.0 1.0 0.082 \n",
"svm 0.0290 0.0396 0.0 0.0 0.104 "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features=None, max_leaf_nodes=None,\n",
" min_impurity_decrease=0.0, min_samples_leaf=1,\n",
" min_samples_split=2, min_weight_fraction_leaf=0.0,\n",
" random_state=0, splitter='best'),\n",
" LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n",
" intercept_scaling=1, l1_ratio=None, max_iter=1000,\n",
" multi_class='auto', n_jobs=None, penalty='l2',\n",
" random_state=0, solver='lbfgs', tol=0.0001, verbose=0,\n",
" warm_start=False)]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_models(include=test_models, n_select=2, sort=\"DUMMY2\", parallel=FugueBackend(\"dask\"))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ee4e174b",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Model | \n",
" Accuracy | \n",
" AUC | \n",
" Recall | \n",
" Prec. | \n",
" F1 | \n",
" Kappa | \n",
" MCC | \n",
" DUMMY | \n",
" DUMMY2 | \n",
" TT (Sec) | \n",
"
\n",
" \n",
" \n",
" \n",
" dt | \n",
" Decision Tree Classifier | \n",
" 0.7795 | \n",
" 0.7711 | \n",
" 0.7328 | \n",
" 0.7168 | \n",
" 0.7201 | \n",
" 0.5389 | \n",
" 0.5441 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.237 | \n",
"
\n",
" \n",
" lr | \n",
" Logistic Regression | \n",
" 0.8303 | \n",
" 0.8959 | \n",
" 0.7530 | \n",
" 0.8053 | \n",
" 0.7748 | \n",
" 0.6391 | \n",
" 0.6433 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.399 | \n",
"
\n",
" \n",
" nb | \n",
" Naive Bayes | \n",
" 0.7621 | \n",
" 0.8255 | \n",
" 0.7255 | \n",
" 0.6825 | \n",
" 0.7009 | \n",
" 0.5039 | \n",
" 0.5074 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.077 | \n",
"
\n",
" \n",
" knn | \n",
" K Neighbors Classifier | \n",
" 0.7528 | \n",
" 0.8053 | \n",
" 0.6231 | \n",
" 0.7208 | \n",
" 0.6642 | \n",
" 0.4703 | \n",
" 0.4770 | \n",
" 0.0 | \n",
" 1.0 | \n",
" 0.082 | \n",
"
\n",
" \n",
" svm | \n",
" SVM - Linear Kernel | \n",
" 0.5677 | \n",
" 0.0000 | \n",
" 0.2690 | \n",
" 0.2077 | \n",
" 0.1901 | \n",
" 0.0290 | \n",
" 0.0396 | \n",
" 0.0 | \n",
" 0.0 | \n",
" 0.104 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" Model Accuracy AUC Recall Prec. F1 \\\n",
"dt Decision Tree Classifier 0.7795 0.7711 0.7328 0.7168 0.7201 \n",
"lr Logistic Regression 0.8303 0.8959 0.7530 0.8053 0.7748 \n",
"nb Naive Bayes 0.7621 0.8255 0.7255 0.6825 0.7009 \n",
"knn K Neighbors Classifier 0.7528 0.8053 0.6231 0.7208 0.6642 \n",
"svm SVM - Linear Kernel 0.5677 0.0000 0.2690 0.2077 0.1901 \n",
"\n",
" Kappa MCC DUMMY DUMMY2 TT (Sec) \n",
"dt 0.5389 0.5441 0.0 1.0 0.237 \n",
"lr 0.6391 0.6433 0.0 1.0 0.399 \n",
"nb 0.5039 0.5074 0.0 1.0 0.077 \n",
"knn 0.4703 0.4770 0.0 1.0 0.082 \n",
"svm 0.0290 0.0396 0.0 0.0 0.104 "
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pull()"
]
},
{
"cell_type": "markdown",
"id": "c7e34629",
"metadata": {},
"source": [
"# Notes\n",
"\n",
"# Spark settings\n",
"\n",
"It is highly recommended to have only 1 worker on each Spark executor, so the worker can fully utilize all cpus (set `spark.task.cpus`). Also when you do this you should explicitly set `n_jobs` in `setup` to the number of cpus of each executor.\n",
"\n",
"```python\n",
"executor_cores = 4\n",
"\n",
"spark = SparkSession.builder.config(\"spark.task.cpus\", executor_cores).config(\"spark.executor.cores\", executor_cores).getOrCreate()\n",
"\n",
"setup(data=get_data(\"juice\", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=executor_cores)\n",
"\n",
"compare_models(n_select=2, parallel=FugueBackend(spark))\n",
"```\n",
"\n",
"# Databricks\n",
"\n",
"On Databricks, `spark` is the magic variable representing a SparkSession. But there is no difference to use. You do the exactly same thing as before:\n",
"\n",
"```python\n",
"compare_models(parallel=FugueBackend(spark))\n",
"```\n",
"\n",
"But Databricks, the visualization is difficult, so it may be a good idea to do two things:\n",
"\n",
"* Set `verbose` to False in `setup`\n",
"* Set `display_remote` to False in `FugueBackend`\n",
"\n",
"# Dask\n",
"\n",
"Dask has fake distributed modes such as the default (multi-thread) and multi-process modes. The default mode will just work fine (but they are actually running sequentially), and multi-process doesn't work for PyCaret for now because it messes up with PyCaret's global variables. On the other hand, any Spark execution mode will just work fine.\n",
"\n",
"# Local Parallelization\n",
"\n",
"For practical use where you try non-trivial data and models, local parallelization (The eaiest way is to use local Dask as backend as shown above) normally doesn't have performance advantage. Because it's very easy to overload the CPUS on training, increasing the contention of resources. The value of local parallelization is to verify the code and give you confidence that the distributed environment will provide the expected result with much shorter time.\n",
"\n",
"# How to develop \n",
"\n",
"Distributed systems are powerful but you must follow some good practices to use them:\n",
"\n",
"1. **From small to large:** initially, you must start with a small set of data, for example in `compare_model` limit the models you want to try to a small number of cheap models, and when you verify they work, you can change to a larger model collection.\n",
"2. **From local to distributed:** you should follow this sequence: verify small data locally then verify small data distributedly and then verify large data distributedly. The current design makes the transition seamless. You can do these sequentially: `parallel=None` -> `parallel=FugueBackend()` -> `parallel=FugueBackend(spark)`. In the second step, you can replace with a local SparkSession or local dask."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee7d43a6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}