{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "81e0620e", "metadata": {}, "source": [ "Last updated: 16 Feb 2023\n", "\n", "# 👋 PyCaret Multiclass Classification Tutorial\n", "\n", "PyCaret is an open-source, low-code machine learning library in Python that automates machine learning workflows. It is an end-to-end machine learning and model management tool that exponentially speeds up the experiment cycle and makes you more productive.\n", "\n", "Compared with the other open-source machine learning libraries, PyCaret is an alternate low-code library that can be used to replace hundreds of lines of code with a few lines only. This makes experiments exponentially fast and efficient. PyCaret is essentially a Python wrapper around several machine learning libraries and frameworks, such as scikit-learn, XGBoost, LightGBM, CatBoost, spaCy, Optuna, Hyperopt, Ray, and a few more.\n", "\n", "The design and simplicity of PyCaret are inspired by the emerging role of citizen data scientists, a term first used by Gartner. Citizen Data Scientists are power users who can perform both simple and moderately sophisticated analytical tasks that would previously have required more technical expertise.\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "8116e19d", "metadata": {}, "source": [ "# 💻 Installation\n", "\n", "PyCaret is tested and supported on the following 64-bit systems:\n", "- Python 3.7 – 3.10\n", "- Python 3.9 for Ubuntu only\n", "- Ubuntu 16.04 or later\n", "- Windows 7 or later\n", "\n", "You can install PyCaret with Python's pip package manager:\n", "\n", "`pip install pycaret`\n", "\n", "PyCaret's default installation will not install all the extra dependencies automatically. 
For that, you will have to install the full version:\n", "\n", "`pip install pycaret[full]`\n", "\n", "or, depending on your use case, you may install one of the following variants:\n", "\n", "- `pip install pycaret[analysis]`\n", "- `pip install pycaret[models]`\n", "- `pip install pycaret[tuner]`\n", "- `pip install pycaret[mlops]`\n", "- `pip install pycaret[parallel]`\n", "- `pip install pycaret[test]`" ] }, { "cell_type": "code", "execution_count": 1, "id": "d7142a33", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'3.0.0'" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# check installed version\n", "import pycaret\n", "pycaret.__version__" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fb66e98d", "metadata": {}, "source": [ "# 🚀 Quick start" ] }, { "attachments": {}, "cell_type": "markdown", "id": "00347d44", "metadata": {}, "source": [ "PyCaret’s Classification Module is a supervised machine learning module that is used for classifying elements into groups. The goal is to predict categorical class labels, which are discrete and unordered.\n", "\n", "Some common use cases include predicting customer default (Yes or No), predicting customer churn (customer will leave or stay), and detecting a disease (positive or negative).\n", "\n", "This module can be used for binary or multiclass problems. It provides several pre-processing features that prepare the data for modeling through the setup function. It has over 18 ready-to-use algorithms and several plots to analyze the performance of trained models.\n", "\n", "A typical workflow in PyCaret consists of the following 5 steps, in this order:\n", "\n", "## **Setup** ➡️ **Compare Models** ➡️ **Analyze Model** ➡️ **Prediction** ➡️ **Save Model**" ] }, { "cell_type": "code", "execution_count": 2, "id": "956dfdab", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
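The five workflow steps above can also be sketched in plain scikit-learn, which is roughly what PyCaret orchestrates behind its `setup()` / `compare_models()` / `predict_model()` / `save_model()` calls. This is an illustrative sketch only, not PyCaret's API: the variable names and the small candidate list below are our own, and PyCaret compares many more models with richer preprocessing.

```python
# Rough scikit-learn equivalent of the 5-step PyCaret workflow (illustrative only).
import pickle
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# "Setup": stratified 70/30 train/test split (105/45 rows, as in the setup table)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=123)

# "Compare Models": 10-fold stratified CV, rank candidates by mean accuracy
candidates = {
    "lr": LogisticRegression(max_iter=1000, random_state=123),
    "nb": GaussianNB(),
    "dt": DecisionTreeClassifier(random_state=123),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)
scores = {name: cross_val_score(est, X_train, y_train, cv=cv).mean()
          for name, est in candidates.items()}
best_name = max(scores, key=scores.get)

# "Analyze Model" / "Prediction": refit the winner, score on the hold-out set
best = candidates[best_name].fit(X_train, y_train)
holdout_acc = best.score(X_test, y_test)

# "Save Model": persist the fitted estimator
with open("best_model.pkl", "wb") as f:
    pickle.dump(best, f)
```

PyCaret adds preprocessing (imputation, encoding, optional normalization), a much larger model zoo, and pipeline persistence on top of this skeleton, as the output tables in this notebook show.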
\n", " | sepal_length | \n", "sepal_width | \n", "petal_length | \n", "petal_width | \n", "species | \n", "
---|---|---|---|---|---|
0 | \n", "5.1 | \n", "3.5 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "
1 | \n", "4.9 | \n", "3.0 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "
2 | \n", "4.7 | \n", "3.2 | \n", "1.3 | \n", "0.2 | \n", "Iris-setosa | \n", "
3 | \n", "4.6 | \n", "3.1 | \n", "1.5 | \n", "0.2 | \n", "Iris-setosa | \n", "
4 | \n", "5.0 | \n", "3.6 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "
\n", " | Description | \n", "Value | \n", "
---|---|---|
0 | \n", "Session id | \n", "123 | \n", "
1 | \n", "Target | \n", "species | \n", "
2 | \n", "Target type | \n", "Multiclass | \n", "
3 | \n", "Target mapping | \n", "Iris-setosa: 0, Iris-versicolor: 1, Iris-virginica: 2 | \n", "
4 | \n", "Original data shape | \n", "(150, 5) | \n", "
5 | \n", "Transformed data shape | \n", "(150, 5) | \n", "
6 | \n", "Transformed train set shape | \n", "(105, 5) | \n", "
7 | \n", "Transformed test set shape | \n", "(45, 5) | \n", "
8 | \n", "Numeric features | \n", "4 | \n", "
9 | \n", "Preprocess | \n", "True | \n", "
10 | \n", "Imputation type | \n", "simple | \n", "
11 | \n", "Numeric imputation | \n", "mean | \n", "
12 | \n", "Categorical imputation | \n", "mode | \n", "
13 | \n", "Fold Generator | \n", "StratifiedKFold | \n", "
14 | \n", "Fold Number | \n", "10 | \n", "
15 | \n", "CPU Jobs | \n", "-1 | \n", "
16 | \n", "Use GPU | \n", "False | \n", "
17 | \n", "Log Experiment | \n", "False | \n", "
18 | \n", "Experiment Name | \n", "clf-default-name | \n", "
19 | \n", "USI | \n", "8d38 | \n", "
\n", " | Description | \n", "Value | \n", "
---|---|---|
0 | \n", "Session id | \n", "123 | \n", "
1 | \n", "Target | \n", "species | \n", "
2 | \n", "Target type | \n", "Multiclass | \n", "
3 | \n", "Target mapping | \n", "Iris-setosa: 0, Iris-versicolor: 1, Iris-virginica: 2 | \n", "
4 | \n", "Original data shape | \n", "(150, 5) | \n", "
5 | \n", "Transformed data shape | \n", "(150, 5) | \n", "
6 | \n", "Transformed train set shape | \n", "(105, 5) | \n", "
7 | \n", "Transformed test set shape | \n", "(45, 5) | \n", "
8 | \n", "Numeric features | \n", "4 | \n", "
9 | \n", "Preprocess | \n", "True | \n", "
10 | \n", "Imputation type | \n", "simple | \n", "
11 | \n", "Numeric imputation | \n", "mean | \n", "
12 | \n", "Categorical imputation | \n", "mode | \n", "
13 | \n", "Fold Generator | \n", "StratifiedKFold | \n", "
14 | \n", "Fold Number | \n", "10 | \n", "
15 | \n", "CPU Jobs | \n", "-1 | \n", "
16 | \n", "Use GPU | \n", "False | \n", "
17 | \n", "Log Experiment | \n", "False | \n", "
18 | \n", "Experiment Name | \n", "clf-default-name | \n", "
19 | \n", "USI | \n", "42d4 | \n", "
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
lr | \n", "Logistic Regression | \n", "0.9718 | \n", "0.9971 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.9190 | \n", "
knn | \n", "K Neighbors Classifier | \n", "0.9718 | \n", "0.9830 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0370 | \n", "
qda | \n", "Quadratic Discriminant Analysis | \n", "0.9718 | \n", "0.9974 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0300 | \n", "
lda | \n", "Linear Discriminant Analysis | \n", "0.9718 | \n", "1.0000 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0330 | \n", "
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9935 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.3150 | \n", "
nb | \n", "Naive Bayes | \n", "0.9445 | \n", "0.9868 | \n", "0.9445 | \n", "0.9525 | \n", "0.9438 | \n", "0.9161 | \n", "0.9207 | \n", "0.0300 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.0880 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.1220 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9355 | \n", "0.9792 | \n", "0.9355 | \n", "0.9416 | \n", "0.9325 | \n", "0.9023 | \n", "0.9083 | \n", "0.1360 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.0710 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.0270 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9909 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.0900 | \n", "
ada | \n", "Ada Boost Classifier | \n", "0.9155 | \n", "0.9947 | \n", "0.9155 | \n", "0.9401 | \n", "0.9097 | \n", "0.8720 | \n", "0.8873 | \n", "0.0580 | \n", "
ridge | \n", "Ridge Classifier | \n", "0.8227 | \n", "0.0000 | \n", "0.8227 | \n", "0.8437 | \n", "0.8186 | \n", "0.7320 | \n", "0.7454 | \n", "0.0220 | \n", "
svm | \n", "SVM - Linear Kernel | \n", "0.7618 | \n", "0.0000 | \n", "0.7618 | \n", "0.6655 | \n", "0.6888 | \n", "0.6333 | \n", "0.7048 | \n", "0.0300 | \n", "
dummy | \n", "Dummy Classifier | \n", "0.2864 | \n", "0.5000 | \n", "0.2864 | \n", "0.0822 | \n", "0.1277 | \n", "0.0000 | \n", "0.0000 | \n", "0.0490 | \n", "
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
lr | \n", "Logistic Regression | \n", "0.9718 | \n", "0.9971 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0430 | \n", "
knn | \n", "K Neighbors Classifier | \n", "0.9718 | \n", "0.9830 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0520 | \n", "
qda | \n", "Quadratic Discriminant Analysis | \n", "0.9718 | \n", "0.9974 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0420 | \n", "
lda | \n", "Linear Discriminant Analysis | \n", "0.9718 | \n", "1.0000 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0550 | \n", "
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9935 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.0550 | \n", "
nb | \n", "Naive Bayes | \n", "0.9445 | \n", "0.9868 | \n", "0.9445 | \n", "0.9525 | \n", "0.9438 | \n", "0.9161 | \n", "0.9207 | \n", "0.0380 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.1430 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.0480 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9355 | \n", "0.9792 | \n", "0.9355 | \n", "0.9416 | \n", "0.9325 | \n", "0.9023 | \n", "0.9083 | \n", "0.1850 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.0600 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.0370 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9909 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1440 | \n", "
ada | \n", "Ada Boost Classifier | \n", "0.9155 | \n", "0.9947 | \n", "0.9155 | \n", "0.9401 | \n", "0.9097 | \n", "0.8720 | \n", "0.8873 | \n", "0.0850 | \n", "
ridge | \n", "Ridge Classifier | \n", "0.8227 | \n", "0.0000 | \n", "0.8227 | \n", "0.8437 | \n", "0.8186 | \n", "0.7320 | \n", "0.7454 | \n", "0.0330 | \n", "
svm | \n", "SVM - Linear Kernel | \n", "0.7618 | \n", "0.0000 | \n", "0.7618 | \n", "0.6655 | \n", "0.6888 | \n", "0.6333 | \n", "0.7048 | \n", "0.0320 | \n", "
dummy | \n", "Dummy Classifier | \n", "0.2864 | \n", "0.5000 | \n", "0.2864 | \n", "0.0822 | \n", "0.1277 | \n", "0.0000 | \n", "0.0000 | \n", "0.0430 | \n", "
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", "                   intercept_scaling=1, l1_ratio=None, max_iter=1000,\n", "                   multi_class='auto', n_jobs=None, penalty='l2',\n", "                   random_state=123, solver='lbfgs', tol=0.0001, verbose=0,\n", "                   warm_start=False)
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "
---|---|---|---|---|---|---|---|---|
0 | \n", "Logistic Regression | \n", "0.9778 | \n", "0.9985 | \n", "0 | \n", "0 | \n", "0 | \n", "0.9667 | \n", "0.9674 | \n", "
\n", " | sepal_length | \n", "sepal_width | \n", "petal_length | \n", "petal_width | \n", "species | \n", "prediction_label | \n", "prediction_score | \n", "
---|---|---|---|---|---|---|---|
105 | \n", "6.3 | \n", "2.5 | \n", "4.9 | \n", "1.5 | \n", "Iris-versicolor | \n", "Iris-versicolor | \n", "0.5204 | \n", "
106 | \n", "7.2 | \n", "3.2 | \n", "6.0 | \n", "1.8 | \n", "Iris-virginica | \n", "Iris-virginica | \n", "0.9503 | \n", "
107 | \n", "5.5 | \n", "2.4 | \n", "3.8 | \n", "1.1 | \n", "Iris-versicolor | \n", "Iris-versicolor | \n", "0.9334 | \n", "
108 | \n", "6.7 | \n", "3.1 | \n", "4.7 | \n", "1.5 | \n", "Iris-versicolor | \n", "Iris-versicolor | \n", "0.7321 | \n", "
109 | \n", "7.7 | \n", "3.8 | \n", "6.7 | \n", "2.2 | \n", "Iris-virginica | \n", "Iris-virginica | \n", "0.9952 | \n", "
\n", " | sepal_length | \n", "sepal_width | \n", "petal_length | \n", "petal_width | \n", "
---|---|---|---|---|
0 | \n", "5.1 | \n", "3.5 | \n", "1.4 | \n", "0.2 | \n", "
1 | \n", "4.9 | \n", "3.0 | \n", "1.4 | \n", "0.2 | \n", "
2 | \n", "4.7 | \n", "3.2 | \n", "1.3 | \n", "0.2 | \n", "
3 | \n", "4.6 | \n", "3.1 | \n", "1.5 | \n", "0.2 | \n", "
4 | \n", "5.0 | \n", "3.6 | \n", "1.4 | \n", "0.2 | \n", "
\n", " | sepal_length | \n", "sepal_width | \n", "petal_length | \n", "petal_width | \n", "prediction_label | \n", "prediction_score | \n", "
---|---|---|---|---|---|---|
0 | \n", "5.1 | \n", "3.5 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "0.9775 | \n", "
1 | \n", "4.9 | \n", "3.0 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "0.9678 | \n", "
2 | \n", "4.7 | \n", "3.2 | \n", "1.3 | \n", "0.2 | \n", "Iris-setosa | \n", "0.9820 | \n", "
3 | \n", "4.6 | \n", "3.1 | \n", "1.5 | \n", "0.2 | \n", "Iris-setosa | \n", "0.9719 | \n", "
4 | \n", "5.0 | \n", "3.6 | \n", "1.4 | \n", "0.2 | \n", "Iris-setosa | \n", "0.9813 | \n", "
Pipeline(memory=FastMemory(location=C:\\Users\\owner\\AppData\\Local\\Temp\\joblib),\n", "         steps=[('label_encoding',\n", "                 TransformerWrapperWithInverse(exclude=None, include=None,\n", "                                               transformer=LabelEncoder())),\n", "                ('numerical_imputer',\n", "                 TransformerWrapper(exclude=None,\n", "                                    include=['sepal_length', 'sepal_width',\n", "                                             'petal_length', 'petal_width'],\n", "                                    transformer=SimpleImputer(add_indicator=F...\n", "                                                              fill_value=None,\n", "                                                              missing_values=nan,\n", "                                                              strategy='most_frequent',\n", "                                                              verbose='deprecated'))),\n", "                ('trained_model',\n", "                 LogisticRegression(C=1.0, class_weight=None, dual=False,\n", "                                    fit_intercept=True, intercept_scaling=1,\n", "                                    l1_ratio=None, max_iter=1000,\n", "                                    multi_class='auto', n_jobs=None,\n", "                                    penalty='l2', random_state=123,\n", "                                    solver='lbfgs', tol=0.0001, verbose=0,\n", "                                    warm_start=False))],\n", "         verbose=False)
TransformerWrapperWithInverse(exclude=None, include=None,\n", " transformer=LabelEncoder())
LabelEncoder()
TransformerWrapper(exclude=None,\n", " include=['sepal_length', 'sepal_width', 'petal_length',\n", " 'petal_width'],\n", " transformer=SimpleImputer(add_indicator=False, copy=True,\n", " fill_value=None,\n", " missing_values=nan,\n", " strategy='mean',\n", " verbose='deprecated'))
SimpleImputer()
TransformerWrapper(exclude=None, include=[],\n", " transformer=SimpleImputer(add_indicator=False, copy=True,\n", " fill_value=None,\n", " missing_values=nan,\n", " strategy='most_frequent',\n", " verbose='deprecated'))
SimpleImputer(strategy='most_frequent')
LogisticRegression(max_iter=1000, random_state=123)
\n", " | Description | \n", "Value | \n", "
---|---|---|
0 | \n", "Session id | \n", "123 | \n", "
1 | \n", "Target | \n", "species | \n", "
2 | \n", "Target type | \n", "Multiclass | \n", "
3 | \n", "Target mapping | \n", "Iris-setosa: 0, Iris-versicolor: 1, Iris-virginica: 2 | \n", "
4 | \n", "Original data shape | \n", "(150, 5) | \n", "
5 | \n", "Transformed data shape | \n", "(150, 5) | \n", "
6 | \n", "Transformed train set shape | \n", "(105, 5) | \n", "
7 | \n", "Transformed test set shape | \n", "(45, 5) | \n", "
8 | \n", "Numeric features | \n", "4 | \n", "
9 | \n", "Preprocess | \n", "True | \n", "
10 | \n", "Imputation type | \n", "simple | \n", "
11 | \n", "Numeric imputation | \n", "mean | \n", "
12 | \n", "Categorical imputation | \n", "mode | \n", "
13 | \n", "Fold Generator | \n", "StratifiedKFold | \n", "
14 | \n", "Fold Number | \n", "10 | \n", "
15 | \n", "CPU Jobs | \n", "-1 | \n", "
16 | \n", "Use GPU | \n", "False | \n", "
17 | \n", "Log Experiment | \n", "False | \n", "
18 | \n", "Experiment Name | \n", "clf-default-name | \n", "
19 | \n", "USI | \n", "35bc | \n", "
\n", " | sepal_length | \n", "sepal_width | \n", "petal_length | \n", "petal_width | \n", "
---|---|---|---|---|
0 | \n", "5.0 | \n", "2.0 | \n", "3.5 | \n", "1.0 | \n", "
1 | \n", "5.4 | \n", "3.9 | \n", "1.3 | \n", "0.4 | \n", "
2 | \n", "5.6 | \n", "3.0 | \n", "4.1 | \n", "1.3 | \n", "
3 | \n", "7.4 | \n", "2.8 | \n", "6.1 | \n", "1.9 | \n", "
4 | \n", "4.6 | \n", "3.4 | \n", "1.4 | \n", "0.3 | \n", "
... | \n", "... | \n", "... | \n", "... | \n", "... | \n", "
100 | \n", "6.6 | \n", "2.9 | \n", "4.6 | \n", "1.3 | \n", "
101 | \n", "4.5 | \n", "2.3 | \n", "1.3 | \n", "0.3 | \n", "
102 | \n", "4.8 | \n", "3.0 | \n", "1.4 | \n", "0.1 | \n", "
103 | \n", "5.4 | \n", "3.4 | \n", "1.7 | \n", "0.2 | \n", "
104 | \n", "6.2 | \n", "3.4 | \n", "5.4 | \n", "2.3 | \n", "
105 rows × 4 columns
\n", "\n", " | Description | \n", "Value | \n", "
---|---|---|
0 | \n", "Session id | \n", "123 | \n", "
1 | \n", "Target | \n", "species | \n", "
2 | \n", "Target type | \n", "Multiclass | \n", "
3 | \n", "Target mapping | \n", "Iris-setosa: 0, Iris-versicolor: 1, Iris-virginica: 2 | \n", "
4 | \n", "Original data shape | \n", "(150, 5) | \n", "
5 | \n", "Transformed data shape | \n", "(150, 5) | \n", "
6 | \n", "Transformed train set shape | \n", "(105, 5) | \n", "
7 | \n", "Transformed test set shape | \n", "(45, 5) | \n", "
8 | \n", "Numeric features | \n", "4 | \n", "
9 | \n", "Preprocess | \n", "True | \n", "
10 | \n", "Imputation type | \n", "simple | \n", "
11 | \n", "Numeric imputation | \n", "mean | \n", "
12 | \n", "Categorical imputation | \n", "mode | \n", "
13 | \n", "Normalize | \n", "True | \n", "
14 | \n", "Normalize method | \n", "minmax | \n", "
15 | \n", "Fold Generator | \n", "StratifiedKFold | \n", "
16 | \n", "Fold Number | \n", "10 | \n", "
17 | \n", "CPU Jobs | \n", "-1 | \n", "
18 | \n", "Use GPU | \n", "False | \n", "
19 | \n", "Log Experiment | \n", "False | \n", "
20 | \n", "Experiment Name | \n", "clf-default-name | \n", "
21 | \n", "USI | \n", "3b39 | \n", "
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
qda | \n", "Quadratic Discriminant Analysis | \n", "0.9718 | \n", "0.9974 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0380 | \n", "
lda | \n", "Linear Discriminant Analysis | \n", "0.9718 | \n", "1.0000 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0440 | \n", "
knn | \n", "K Neighbors Classifier | \n", "0.9636 | \n", "0.9844 | \n", "0.9636 | \n", "0.9709 | \n", "0.9631 | \n", "0.9450 | \n", "0.9494 | \n", "0.0510 | \n", "
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9857 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.0500 | \n", "
nb | \n", "Naive Bayes | \n", "0.9445 | \n", "0.9868 | \n", "0.9445 | \n", "0.9525 | \n", "0.9438 | \n", "0.9161 | \n", "0.9207 | \n", "0.0380 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.1240 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.0390 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.0480 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.0340 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9903 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1210 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9264 | \n", "0.9688 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1500 | \n", "
ada | \n", "Ada Boost Classifier | \n", "0.9155 | \n", "0.9843 | \n", "0.9155 | \n", "0.9401 | \n", "0.9097 | \n", "0.8720 | \n", "0.8873 | \n", "0.0690 | \n", "
lr | \n", "Logistic Regression | \n", "0.9073 | \n", "0.9751 | \n", "0.9073 | \n", "0.9159 | \n", "0.9064 | \n", "0.8597 | \n", "0.8645 | \n", "0.0400 | \n", "
ridge | \n", "Ridge Classifier | \n", "0.8318 | \n", "0.0000 | \n", "0.8318 | \n", "0.8545 | \n", "0.8281 | \n", "0.7459 | \n", "0.7595 | \n", "0.0370 | \n", "
svm | \n", "SVM - Linear Kernel | \n", "0.8100 | \n", "0.0000 | \n", "0.8100 | \n", "0.7831 | \n", "0.7702 | \n", "0.7125 | \n", "0.7527 | \n", "0.0350 | \n", "
dummy | \n", "Dummy Classifier | \n", "0.2864 | \n", "0.5000 | \n", "0.2864 | \n", "0.0822 | \n", "0.1277 | \n", "0.0000 | \n", "0.0000 | \n", "0.0380 | \n", "
\n", " | Name | \n", "Reference | \n", "Turbo | \n", "
---|---|---|---|
ID | \n", "\n", " | \n", " | \n", " |
lr | \n", "Logistic Regression | \n", "sklearn.linear_model._logistic.LogisticRegression | \n", "True | \n", "
knn | \n", "K Neighbors Classifier | \n", "sklearn.neighbors._classification.KNeighborsCl... | \n", "True | \n", "
nb | \n", "Naive Bayes | \n", "sklearn.naive_bayes.GaussianNB | \n", "True | \n", "
dt | \n", "Decision Tree Classifier | \n", "sklearn.tree._classes.DecisionTreeClassifier | \n", "True | \n", "
svm | \n", "SVM - Linear Kernel | \n", "sklearn.linear_model._stochastic_gradient.SGDC... | \n", "True | \n", "
rbfsvm | \n", "SVM - Radial Kernel | \n", "sklearn.svm._classes.SVC | \n", "False | \n", "
gpc | \n", "Gaussian Process Classifier | \n", "sklearn.gaussian_process._gpc.GaussianProcessC... | \n", "False | \n", "
mlp | \n", "MLP Classifier | \n", "sklearn.neural_network._multilayer_perceptron.... | \n", "False | \n", "
ridge | \n", "Ridge Classifier | \n", "sklearn.linear_model._ridge.RidgeClassifier | \n", "True | \n", "
rf | \n", "Random Forest Classifier | \n", "sklearn.ensemble._forest.RandomForestClassifier | \n", "True | \n", "
qda | \n", "Quadratic Discriminant Analysis | \n", "sklearn.discriminant_analysis.QuadraticDiscrim... | \n", "True | \n", "
ada | \n", "Ada Boost Classifier | \n", "sklearn.ensemble._weight_boosting.AdaBoostClas... | \n", "True | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "sklearn.ensemble._gb.GradientBoostingClassifier | \n", "True | \n", "
lda | \n", "Linear Discriminant Analysis | \n", "sklearn.discriminant_analysis.LinearDiscrimina... | \n", "True | \n", "
et | \n", "Extra Trees Classifier | \n", "sklearn.ensemble._forest.ExtraTreesClassifier | \n", "True | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "xgboost.sklearn.XGBClassifier | \n", "True | \n", "
lightgbm | \n", "Light Gradient Boosting Machine | \n", "lightgbm.sklearn.LGBMClassifier | \n", "True | \n", "
catboost | \n", "CatBoost Classifier | \n", "catboost.core.CatBoostClassifier | \n", "True | \n", "
dummy | \n", "Dummy Classifier | \n", "sklearn.dummy.DummyClassifier | \n", "True | \n", "
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9857 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.0520 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.1190 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.0390 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.0580 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.0370 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9903 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1200 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9264 | \n", "0.9688 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1510 | \n", "
LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,\n", "               importance_type='split', learning_rate=0.1, max_depth=-1,\n", "               min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,\n", "               n_estimators=100, n_jobs=-1, num_leaves=31, objective=None,\n", "               random_state=123, reg_alpha=0.0, reg_lambda=0.0, silent='warn',\n", "               subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9857 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.052 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.119 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.039 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.058 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.037 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9903 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.120 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9264 | \n", "0.9688 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.151 | \n", "
\n", " | Model | \n", "Accuracy | \n", "AUC | \n", "Recall | \n", "Prec. | \n", "F1 | \n", "Kappa | \n", "MCC | \n", "TT (Sec) | \n", "
---|---|---|---|---|---|---|---|---|---|
qda | \n", "Quadratic Discriminant Analysis | \n", "0.9718 | \n", "0.9974 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0400 | \n", "
lda | \n", "Linear Discriminant Analysis | \n", "0.9718 | \n", "1.0000 | \n", "0.9718 | \n", "0.9780 | \n", "0.9712 | \n", "0.9573 | \n", "0.9609 | \n", "0.0380 | \n", "
knn | \n", "K Neighbors Classifier | \n", "0.9636 | \n", "0.9844 | \n", "0.9636 | \n", "0.9709 | \n", "0.9631 | \n", "0.9450 | \n", "0.9494 | \n", "0.0490 | \n", "
lightgbm | \n", "Light Gradient Boosting Machine | \n", "0.9536 | \n", "0.9857 | \n", "0.9536 | \n", "0.9634 | \n", "0.9528 | \n", "0.9298 | \n", "0.9356 | \n", "0.0490 | \n", "
nb | \n", "Naive Bayes | \n", "0.9445 | \n", "0.9868 | \n", "0.9445 | \n", "0.9525 | \n", "0.9438 | \n", "0.9161 | \n", "0.9207 | \n", "0.0380 | \n", "
et | \n", "Extra Trees Classifier | \n", "0.9445 | \n", "0.9935 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.1180 | \n", "
catboost | \n", "CatBoost Classifier | \n", "0.9445 | \n", "0.9922 | \n", "0.9445 | \n", "0.9586 | \n", "0.9426 | \n", "0.9161 | \n", "0.9246 | \n", "0.0390 | \n", "
xgboost | \n", "Extreme Gradient Boosting | \n", "0.9355 | \n", "0.9868 | \n", "0.9355 | \n", "0.9440 | \n", "0.9343 | \n", "0.9023 | \n", "0.9077 | \n", "0.0460 | \n", "
dt | \n", "Decision Tree Classifier | \n", "0.9264 | \n", "0.9429 | \n", "0.9264 | \n", "0.9502 | \n", "0.9201 | \n", "0.8886 | \n", "0.9040 | \n", "0.0330 | \n", "
rf | \n", "Random Forest Classifier | \n", "0.9264 | \n", "0.9903 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1170 | \n", "
gbc | \n", "Gradient Boosting Classifier | \n", "0.9264 | \n", "0.9688 | \n", "0.9264 | \n", "0.9343 | \n", "0.9232 | \n", "0.8886 | \n", "0.8956 | \n", "0.1450 | \n", "
ada | \n", "Ada Boost Classifier | \n", "0.9155 | \n", "0.9843 | \n", "0.9155 | \n", "0.9401 | \n", "0.9097 | \n", "0.8720 | \n", "0.8873 | \n", "0.0680 | \n", "
lr | \n", "Logistic Regression | \n", "0.9073 | \n", "0.9751 | \n", "0.9073 | \n", "0.9159 | \n", "0.9064 | \n", "0.8597 | \n", "0.8645 | \n", "0.0420 | \n", "
ridge | \n", "Ridge Classifier | \n", "0.8318 | \n", "0.0000 | \n", "0.8318 | \n", "0.8545 | \n", "0.8281 | \n", "0.7459 | \n", "0.7595 | \n", "0.0310 | \n", "
svm | \n", "SVM - Linear Kernel | \n", "0.8100 | \n", "0.0000 | \n", "0.8100 | \n", "0.7831 | \n", "0.7702 | \n", "0.7125 | \n", "0.7527 | \n", "0.0350 | \n", "
dummy | \n", "Dummy Classifier | \n", "0.2864 | \n", "0.5000 | \n", "0.2864 | \n", "0.0822 | \n", "0.1277 | \n", "0.0000 | \n", "0.0000 | \n", "0.0360 | \n", "
| ID | Name | Reference | Turbo |
|---|---|---|---|
| lr | Logistic Regression | sklearn.linear_model._logistic.LogisticRegression | True |
| knn | K Neighbors Classifier | sklearn.neighbors._classification.KNeighborsCl... | True |
| nb | Naive Bayes | sklearn.naive_bayes.GaussianNB | True |
| dt | Decision Tree Classifier | sklearn.tree._classes.DecisionTreeClassifier | True |
| svm | SVM - Linear Kernel | sklearn.linear_model._stochastic_gradient.SGDC... | True |
| rbfsvm | SVM - Radial Kernel | sklearn.svm._classes.SVC | False |
| gpc | Gaussian Process Classifier | sklearn.gaussian_process._gpc.GaussianProcessC... | False |
| mlp | MLP Classifier | sklearn.neural_network._multilayer_perceptron.... | False |
| ridge | Ridge Classifier | sklearn.linear_model._ridge.RidgeClassifier | True |
| rf | Random Forest Classifier | sklearn.ensemble._forest.RandomForestClassifier | True |
| qda | Quadratic Discriminant Analysis | sklearn.discriminant_analysis.QuadraticDiscrim... | True |
| ada | Ada Boost Classifier | sklearn.ensemble._weight_boosting.AdaBoostClas... | True |
| gbc | Gradient Boosting Classifier | sklearn.ensemble._gb.GradientBoostingClassifier | True |
| lda | Linear Discriminant Analysis | sklearn.discriminant_analysis.LinearDiscrimina... | True |
| et | Extra Trees Classifier | sklearn.ensemble._forest.ExtraTreesClassifier | True |
| xgboost | Extreme Gradient Boosting | xgboost.sklearn.XGBClassifier | True |
| lightgbm | Light Gradient Boosting Machine | lightgbm.sklearn.LGBMClassifier | True |
| catboost | CatBoost Classifier | catboost.core.CatBoostClassifier | True |
| dummy | Dummy Classifier | sklearn.dummy.DummyClassifier | True |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.8182 | 0.9221 | 0.8182 | 0.8182 | 0.8182 | 0.7250 | 0.7250 |
| 2 | 0.9091 | 0.9610 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.6364 | 0.8961 | 0.6364 | 0.6364 | 0.6364 | 0.4500 | 0.4500 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9714 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 0.9000 | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| Std | 0.1076 | 0.0360 | 0.1076 | 0.1079 | 0.1077 | 0.1628 | 0.1628 |
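The columns in the scoring grid above are standard multiclass metrics, averaged across folds. As a hedged illustration (toy labels only, not PyCaret's internals and not the tutorial's data), each column can be reproduced with scikit-learn, using weighted averaging for the per-class metrics:

```python
# Sketch: the scoring-grid columns map onto standard sklearn metrics.
# Toy labels only -- these are NOT the tutorial's cross-validation results.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, matthews_corrcoef)

y_true = [0, 1, 2, 2, 1, 0, 1, 2, 0]
y_pred = [0, 1, 2, 1, 1, 0, 1, 2, 0]  # one mistake: a class-2 sample predicted as 1

scores = {
    "Accuracy": accuracy_score(y_true, y_pred),
    "Recall":   recall_score(y_true, y_pred, average="weighted"),
    "Prec.":    precision_score(y_true, y_pred, average="weighted"),
    "F1":       f1_score(y_true, y_pred, average="weighted"),
    "Kappa":    cohen_kappa_score(y_true, y_pred),
    "MCC":      matthews_corrcoef(y_true, y_pred),
}
print(scores)
```

PyCaret computes these per fold and reports the Mean and Std rows shown in the grid.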
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.8182 | 0.9221 | 0.8182 | 0.8182 | 0.8182 | 0.7250 | 0.7250 |
| 2 | 0.9091 | 0.9610 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.6364 | 0.8961 | 0.6364 | 0.6364 | 0.6364 | 0.4500 | 0.4500 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9714 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 0.9000 | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| Std | 0.1076 | 0.0360 | 0.1076 | 0.1079 | 0.1077 | 0.1628 | 0.1628 |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9143 | 0.9730 | 0.9143 | 0.9158 | 0.9140 | 0.8712 | 0.8722 |
| 1 | 0.8857 | 0.9764 | 0.8857 | 0.8922 | 0.8849 | 0.8284 | 0.8325 |
| 2 | 0.9714 | 0.9988 | 0.9714 | 0.9736 | 0.9713 | 0.9571 | 0.9582 |
| Mean | 0.9238 | 0.9827 | 0.9238 | 0.9272 | 0.9234 | 0.8856 | 0.8877 |
| Std | 0.0356 | 0.0115 | 0.0356 | 0.0342 | 0.0359 | 0.0535 | 0.0525 |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.8182 | 0.9091 | 0.8182 | 0.8182 | 0.8182 | 0.7250 | 0.7250 |
| 2 | 0.9091 | 0.9351 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.8831 | 0.7273 | 0.7333 | 0.7229 | 0.5875 | 0.5950 |
| 4 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 5 | 0.9000 | 0.9714 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 0.9000 | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 0.9000 | 0.9857 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.8973 | 0.9684 | 0.8973 | 0.9108 | 0.8955 | 0.8445 | 0.8525 |
| Std | 0.0753 | 0.0415 | 0.0753 | 0.0758 | 0.0762 | 0.1139 | 0.1130 |
LogisticRegression(C=0.5, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=0.15, max_iter=1000,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=123, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)
| Split | Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|---|
| CV-Train | 0 | 0.9149 | 0.9882 | 0.9149 | 0.9159 | 0.9148 | 0.8723 | 0.8729 |
| CV-Train | 1 | 0.9255 | 0.9892 | 0.9255 | 0.9258 | 0.9255 | 0.8883 | 0.8884 |
| CV-Train | 2 | 0.9255 | 0.9887 | 0.9255 | 0.9279 | 0.9254 | 0.8883 | 0.8896 |
| CV-Train | 3 | 0.9468 | 0.9883 | 0.9468 | 0.9471 | 0.9468 | 0.9202 | 0.9204 |
| CV-Train | 4 | 0.9043 | 0.9855 | 0.9043 | 0.9065 | 0.9040 | 0.8564 | 0.8577 |
| CV-Train | 5 | 0.9158 | 0.9878 | 0.9158 | 0.9168 | 0.9157 | 0.8737 | 0.8743 |
| CV-Train | 6 | 0.9053 | 0.9843 | 0.9053 | 0.9074 | 0.9051 | 0.8579 | 0.8592 |
| CV-Train | 7 | 0.9158 | 0.9863 | 0.9158 | 0.9158 | 0.9158 | 0.8737 | 0.8737 |
| CV-Train | 8 | 0.9158 | 0.9848 | 0.9158 | 0.9168 | 0.9157 | 0.8737 | 0.8743 |
| CV-Train | 9 | 0.9053 | 0.9858 | 0.9053 | 0.9074 | 0.9051 | 0.8579 | 0.8592 |
| CV-Val | 0 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| CV-Val | 1 | 0.8182 | 0.9221 | 0.8182 | 0.8182 | 0.8182 | 0.7250 | 0.7250 |
| CV-Val | 2 | 0.9091 | 0.9610 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| CV-Val | 3 | 0.6364 | 0.8961 | 0.6364 | 0.6364 | 0.6364 | 0.4500 | 0.4500 |
| CV-Val | 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| CV-Val | 5 | 0.9000 | 0.9714 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| CV-Val | 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| CV-Val | 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| CV-Val | 8 | 0.9000 | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| CV-Val | 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| CV-Train | Mean | 0.9175 | 0.9869 | 0.9175 | 0.9187 | 0.9174 | 0.8762 | 0.8770 |
| CV-Train | Std | 0.0122 | 0.0017 | 0.0122 | 0.0117 | 0.0122 | 0.0182 | 0.0180 |
| CV-Val | Mean | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| CV-Val | Std | 0.1076 | 0.0360 | 0.1076 | 0.1079 | 0.1077 | 0.1628 | 0.1628 |
| Train | nan | 0.9143 | 0.9873 | 0.0000 | 0.0000 | 0.0000 | 0.8714 | 0.8715 |
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=1000,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=123, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.8182 | 0.8571 | 0.8182 | 0.8788 | 0.8061 | 0.7250 | 0.7642 |
| 1 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.7857 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9286 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| Std | 0.0893 | 0.0700 | 0.0893 | 0.0552 | 0.1011 | 0.1351 | 0.1119 |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 1 | 0.9091 | 0.9481 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9481 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.8442 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9286 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9445 | 0.9669 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| Std | 0.0838 | 0.0488 | 0.0838 | 0.0513 | 0.0958 | 0.1267 | 0.1046 |
DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',
                       max_depth=None, max_features=None, max_leaf_nodes=None,
                       min_impurity_decrease=0.0, min_samples_leaf=1,
                       min_samples_split=2, min_weight_fraction_leaf=0.0,
                       random_state=123, splitter='best')
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 0.9481 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 2 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.8442 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9571 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9445 | 0.9678 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| Std | 0.0838 | 0.0485 | 0.0838 | 0.0513 | 0.0958 | 0.1267 | 0.1046 |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 1 | 0.9091 | 0.9481 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9481 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.8442 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9286 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9445 | 0.9669 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| Std | 0.0838 | 0.0488 | 0.0838 | 0.0513 | 0.0958 | 0.1267 | 0.1046 |
DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='entropy',
                       max_depth=5, max_features='sqrt', max_leaf_nodes=None,
                       min_impurity_decrease=0.2, min_samples_leaf=5,
                       min_samples_split=5, min_weight_fraction_leaf=0.0,
                       random_state=123, splitter='best')
RandomizedSearchCV(cv=StratifiedKFold(n_splits=10, random_state=None, shuffle=False),
                   error_score=nan,
                   estimator=Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib),
                                      steps=[('label_encoding',
                                              TransformerWrapperWithInverse(exclude=None,
                                                                            include=None,
                                                                            transformer=LabelEncoder())),
                                             ('numerical_imputer',
                                              TransformerWrapper(exclude=None,...
                   'actual_estimator__max_features': [1.0, 'sqrt', 'log2'],
                   'actual_estimator__min_impurity_decrease': [0, 0.0001, 0.001, 0.01,
                                                               0.0002, 0.002, 0.02,
                                                               0.0005, 0.005, 0.05,
                                                               0.1, 0.2, 0.3, 0.4, 0.5],
                   'actual_estimator__min_samples_leaf': [2, 3, 4, 5, 6],
                   'actual_estimator__min_samples_split': [2, 5, 7, 9, 10]},
                   pre_dispatch='2*n_jobs', random_state=123, refit=False,
                   return_train_score=False, scoring='accuracy', verbose=1)
Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib),
         steps=[('label_encoding',
                 TransformerWrapperWithInverse(exclude=None, include=None,
                                               transformer=LabelEncoder())),
                ('numerical_imputer',
                 TransformerWrapper(exclude=None,
                                    include=['sepal_length', 'sepal_width',
                                             'petal_length', 'petal_width'],
                                    transformer=SimpleImputer(add_indicator=F...
                 transformer=MinMaxScaler(clip=False, copy=True,
                                          feature_range=(0, 1)))),
                ('actual_estimator',
                 DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None,
                                        criterion='gini', max_depth=None,
                                        max_features=None, max_leaf_nodes=None,
                                        min_impurity_decrease=0.0,
                                        min_samples_leaf=1, min_samples_split=2,
                                        min_weight_fraction_leaf=0.0,
                                        random_state=123, splitter='best'))],
         verbose=False)
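The object above shows that PyCaret stores the whole preprocessing-plus-model chain as a scikit-learn `Pipeline`. A minimal bare-sklearn sketch of the same idea (impute, scale, then fit the estimator; PyCaret's wrapper classes are omitted, and iris is loaded from sklearn rather than `get_data` purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Same step order as PyCaret's pipeline: impute -> normalize -> estimator
pipe = Pipeline(steps=[
    ("numerical_imputer", SimpleImputer()),  # mean imputation for numeric features
    ("normalize", MinMaxScaler()),           # scale every feature to [0, 1]
    ("actual_estimator", DecisionTreeClassifier(random_state=123)),
])
pipe.fit(X, y)
```

Because the transformers live inside the pipeline, they are refit on each CV training split, which is what keeps preprocessing from leaking information into the validation folds.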
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.7857 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9286 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9355 | 0.9500 | 0.9355 | 0.9551 | 0.9303 | 0.9023 | 0.9149 |
| Std | 0.0822 | 0.0643 | 0.0822 | 0.0506 | 0.0939 | 0.1243 | 0.1027 |
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9870 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.7273 | 0.9481 | 0.7273 | 0.8442 | 0.6826 | 0.5875 | 0.6674 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9857 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9355 | 0.9849 | 0.9355 | 0.9551 | 0.9303 | 0.9023 | 0.9149 |
| Std | 0.0822 | 0.0243 | 0.0822 | 0.0506 | 0.0939 | 0.1243 | 0.1027 |
BaggingClassifier(base_estimator=DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None,
                                                        criterion='gini', max_depth=None,
                                                        max_features=None, max_leaf_nodes=None,
                                                        min_impurity_decrease=0.0,
                                                        min_samples_leaf=1, min_samples_split=2,
                                                        min_weight_fraction_leaf=0.0,
                                                        random_state=123, splitter='best'),
                  bootstrap=True, bootstrap_features=False, max_features=1.0,
                  max_samples=1.0, n_estimators=10, n_jobs=None,
                  oob_score=False, random_state=123, verbose=0, warm_start=False)
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2 | 0.9091 | 0.9286 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3 | 0.6364 | 0.7143 | 0.6364 | 0.6364 | 0.6121 | 0.4500 | 0.4743 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 0.9286 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9264 | 0.9429 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| Std | 0.1062 | 0.0833 | 0.1062 | 0.1052 | 0.1130 | 0.1606 | 0.1532 |
AdaBoostClassifier(algorithm='SAMME.R',
                   base_estimator=DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None,
                                                         criterion='gini', max_depth=None,
                                                         max_features=None, max_leaf_nodes=None,
                                                         min_impurity_decrease=0.0,
                                                         min_samples_leaf=1, min_samples_split=2,
                                                         min_weight_fraction_leaf=0.0,
                                                         random_state=123, splitter='best'),
                   learning_rate=1.0, n_estimators=10, random_state=123)
| Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|---|---|---|---|---|---|---|---|
| 0 | 0.9091 | 0.9740 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 2 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 3 | 0.9091 | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5 | 0.9000 | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9718 | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| Std | 0.0431 | 0.0078 | 0.0431 | 0.0337 | 0.0440 | 0.0653 | 0.0599 |
VotingClassifier(estimators=[('Quadratic Discriminant Analysis',
                              QuadraticDiscriminantAnalysis()),
                             ('Linear Discriminant Analysis',
                              LinearDiscriminantAnalysis()),
                             ('K Neighbors Classifier',
                              KNeighborsClassifier(n_jobs=-1))],
                 flatten_transform=True, n_jobs=-1, verbose=False,
                 voting='soft', weights=None)
| Fold | Accuracy | AUC    | Recall | Prec.  | F1     | Kappa  | MCC    |
|------|----------|--------|--------|--------|--------|--------|--------|
| 0    | 0.9091   | 0.9740 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 2    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 3    | 0.9091   | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 4    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5    | 0.9000   | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9718   | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| Std  | 0.0431   | 0.0078 | 0.0431 | 0.0337 | 0.0440 | 0.0653 | 0.0599 |
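The blended model above combines QDA, LDA and KNN with soft voting (averaged class probabilities). The same ensemble can be sketched in plain scikit-learn; a minimal version assuming the iris data, with PyCaret's preprocessing pipeline omitted and illustrative estimator names:

```python
# Soft-voting ensemble mirroring the VotingClassifier shown above.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
blender = VotingClassifier(
    estimators=[("qda", QuadraticDiscriminantAnalysis()),
                ("lda", LinearDiscriminantAnalysis()),
                ("knn", KNeighborsClassifier(n_jobs=-1))],
    voting="soft",  # average predicted class probabilities
    n_jobs=-1)
scores = cross_val_score(blender, X, y, cv=10)
print(round(float(scores.mean()), 4))
```

Soft voting requires every base estimator to implement `predict_proba`; with `voting="hard"` the ensemble would take a majority vote on class labels instead.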
StackingClassifier(cv=5,
                   estimators=[('Quadratic Discriminant Analysis',
                                QuadraticDiscriminantAnalysis()),
                               ('Linear Discriminant Analysis',
                                LinearDiscriminantAnalysis()),
                               ('K Neighbors Classifier',
                                KNeighborsClassifier(n_jobs=-1))],
                   final_estimator=LogisticRegression(max_iter=1000,
                                                      random_state=123),
                   n_jobs=-1, passthrough=True, stack_method='auto', verbose=0)
| Fold | Accuracy | AUC    | Recall | Prec.  | F1     | Kappa  | MCC    |
|------|----------|--------|--------|--------|--------|--------|--------|
| 0    | 0.9091   | 0.9351 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 1    | 0.9091   | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 2    | 0.9091   | 0.9221 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 3    | 0.9091   | 1.0000 | 0.9091 | 0.9273 | 0.9076 | 0.8625 | 0.8735 |
| 4    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 5    | 0.9000   | 1.0000 | 0.9000 | 0.9250 | 0.8971 | 0.8485 | 0.8616 |
| 6    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 7    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 8    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 9    | 1.0000   | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Mean | 0.9536   | 0.9857 | 0.9536 | 0.9634 | 0.9528 | 0.9298 | 0.9356 |
| Std  | 0.0464   | 0.0287 | 0.0464 | 0.0366 | 0.0473 | 0.0703 | 0.0645 |
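The stacked model above trains a logistic-regression meta-learner on out-of-fold predictions of the same three base estimators. A plain scikit-learn sketch of that construction, assuming the iris data (PyCaret's preprocessing pipeline omitted, estimator names illustrative):

```python
# Stacking ensemble mirroring the StackingClassifier shown above.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
stacker = StackingClassifier(
    estimators=[("qda", QuadraticDiscriminantAnalysis()),
                ("lda", LinearDiscriminantAnalysis()),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000, random_state=123),
    cv=5,              # internal folds that produce out-of-fold meta-features
    passthrough=True)  # meta-learner also sees the original features
scores = cross_val_score(stacker, X, y, cv=10)
print(round(float(scores.mean()), 4))
```

With `passthrough=False` the meta-learner would see only the base estimators' predictions, not the raw features.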
| Index | Model Name | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
|-------|------------|-------|----------|-----|--------|-------|----|-------|-----|
| 0  | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| 1  | K Neighbors Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9636 | 0.9844 | 0.9636 | 0.9709 | 0.9631 | 0.9450 | 0.9494 |
| 2  | Naive Bayes | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9868 | 0.9445 | 0.9525 | 0.9438 | 0.9161 | 0.9207 |
| 3  | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 4  | SVM - Linear Kernel | (TransformerWrapperWithInverse(exclude=None, i... | 0.8100 | 0.0000 | 0.8100 | 0.7831 | 0.7702 | 0.7125 | 0.7527 |
| 5  | Ridge Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.8318 | 0.0000 | 0.8318 | 0.8545 | 0.8281 | 0.7459 | 0.7595 |
| 6  | Random Forest Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9903 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 7  | Quadratic Discriminant Analysis | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 8  | Ada Boost Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9155 | 0.9843 | 0.9155 | 0.9401 | 0.9097 | 0.8720 | 0.8873 |
| 9  | Gradient Boosting Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9688 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 10 | Linear Discriminant Analysis | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 1.0000 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 11 | Extra Trees Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9935 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 12 | Extreme Gradient Boosting | (TransformerWrapperWithInverse(exclude=None, i... | 0.9355 | 0.9868 | 0.9355 | 0.9440 | 0.9343 | 0.9023 | 0.9077 |
| 13 | Light Gradient Boosting Machine | (TransformerWrapperWithInverse(exclude=None, i... | 0.9536 | 0.9857 | 0.9536 | 0.9634 | 0.9528 | 0.9298 | 0.9356 |
| 14 | CatBoost Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9922 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 15 | Dummy Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.2864 | 0.5000 | 0.2864 | 0.0822 | 0.1277 | 0.0000 | 0.0000 |
| 16 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 17 | Random Forest Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9903 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 18 | Extra Trees Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9935 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 19 | Gradient Boosting Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9688 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 20 | Extreme Gradient Boosting | (TransformerWrapperWithInverse(exclude=None, i... | 0.9355 | 0.9868 | 0.9355 | 0.9440 | 0.9343 | 0.9023 | 0.9077 |
| 21 | Light Gradient Boosting Machine | (TransformerWrapperWithInverse(exclude=None, i... | 0.9536 | 0.9857 | 0.9536 | 0.9634 | 0.9528 | 0.9298 | 0.9356 |
| 22 | CatBoost Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9922 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 23 | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| 24 | K Neighbors Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9636 | 0.9844 | 0.9636 | 0.9709 | 0.9631 | 0.9450 | 0.9494 |
| 25 | Naive Bayes | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9868 | 0.9445 | 0.9525 | 0.9438 | 0.9161 | 0.9207 |
| 26 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 27 | SVM - Linear Kernel | (TransformerWrapperWithInverse(exclude=None, i... | 0.8100 | 0.0000 | 0.8100 | 0.7831 | 0.7702 | 0.7125 | 0.7527 |
| 28 | Ridge Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.8318 | 0.0000 | 0.8318 | 0.8545 | 0.8281 | 0.7459 | 0.7595 |
| 29 | Random Forest Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9903 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 30 | Quadratic Discriminant Analysis | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 31 | Ada Boost Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9155 | 0.9843 | 0.9155 | 0.9401 | 0.9097 | 0.8720 | 0.8873 |
| 32 | Gradient Boosting Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9688 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 33 | Linear Discriminant Analysis | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 1.0000 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 34 | Extra Trees Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9935 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 35 | Extreme Gradient Boosting | (TransformerWrapperWithInverse(exclude=None, i... | 0.9355 | 0.9868 | 0.9355 | 0.9440 | 0.9343 | 0.9023 | 0.9077 |
| 36 | Light Gradient Boosting Machine | (TransformerWrapperWithInverse(exclude=None, i... | 0.9536 | 0.9857 | 0.9536 | 0.9634 | 0.9528 | 0.9298 | 0.9356 |
| 37 | CatBoost Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9922 | 0.9445 | 0.9586 | 0.9426 | 0.9161 | 0.9246 |
| 38 | Dummy Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.2864 | 0.5000 | 0.2864 | 0.0822 | 0.1277 | 0.0000 | 0.0000 |
| 39 | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.9073 | 0.9751 | 0.9073 | 0.9159 | 0.9064 | 0.8597 | 0.8645 |
| 40 | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.9238 | 0.9827 | 0.9238 | 0.9272 | 0.9234 | 0.8856 | 0.8877 |
| 41 | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.8973 | 0.9684 | 0.8973 | 0.9108 | 0.8955 | 0.8445 | 0.8525 |
| 42 | Logistic Regression | (TransformerWrapperWithInverse(exclude=None, i... | 0.1076 | 0.0360 | 0.1076 | 0.1079 | 0.1077 | 0.1628 | 0.1628 |
| 43 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 44 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9669 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| 45 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 46 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9678 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| 47 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 48 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9445 | 0.9669 | 0.9445 | 0.9624 | 0.9395 | 0.9161 | 0.9276 |
| 49 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 50 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9355 | 0.9500 | 0.9355 | 0.9551 | 0.9303 | 0.9023 | 0.9149 |
| 51 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9502 | 0.9201 | 0.8886 | 0.9040 |
| 52 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9355 | 0.9849 | 0.9355 | 0.9551 | 0.9303 | 0.9023 | 0.9149 |
| 53 | Decision Tree Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9264 | 0.9429 | 0.9264 | 0.9343 | 0.9232 | 0.8886 | 0.8956 |
| 54 | Voting Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 55 | Stacking Classifier | (TransformerWrapperWithInverse(exclude=None, i... | 0.9718 | 0.9974 | 0.9718 | 0.9780 | 0.9712 | 0.9573 | 0.9609 |
| 56 | Light Gradient Boosting Machine | (TransformerWrapperWithInverse(exclude=None, i... | 0.9536 | 0.9857 | 0.9536 | 0.9634 | 0.9528 | 0.9298 | 0.9356 |
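The leaderboard above is an ordinary pandas DataFrame, so the best run can be pulled out with standard DataFrame operations; a sketch on a hypothetical three-row miniature of the table (the values are copied from the full leaderboard):

```python
import pandas as pd

# Hypothetical miniature of the leaderboard above.
lb = pd.DataFrame({
    "Model Name": ["Logistic Regression",
                   "Quadratic Discriminant Analysis",
                   "Dummy Classifier"],
    "Accuracy": [0.9073, 0.9718, 0.2864],
})
# Sort by the metric of interest and take the top row.
best = lb.sort_values("Accuracy", ascending=False).iloc[0]
print(best["Model Name"])  # → Quadratic Discriminant Analysis
```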
Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib),
         steps=[('label_encoding',
                 TransformerWrapperWithInverse(exclude=None, include=None,
                                               transformer=LabelEncoder())),
                ('numerical_imputer',
                 TransformerWrapper(exclude=None,
                                    include=['sepal_length', 'sepal_width',
                                             'petal_length', 'petal_width'],
                                    transformer=SimpleImputer(add_indicator=F...
                                    transformer=SimpleImputer(add_indicator=False,
                                                              copy=True,
                                                              fill_value=None,
                                                              missing_values=nan,
                                                              strategy='most_frequent',
                                                              verbose='deprecated'))),
                ('normalize',
                 TransformerWrapper(exclude=None, include=None,
                                    transformer=MinMaxScaler(clip=False, copy=True,
                                                             feature_range=(0, 1)))),
                ['trained_model',
                 QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                                               store_covariance=False,
                                               tol=0.0001)]],
         verbose=False)
QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                              store_covariance=False, tol=0.0001)
Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib),
         steps=[('label_encoding',
                 TransformerWrapperWithInverse(exclude=None, include=None,
                                               transformer=LabelEncoder())),
                ('numerical_imputer',
                 TransformerWrapper(exclude=None,
                                    include=['sepal_length', 'sepal_width',
                                             'petal_length', 'petal_width'],
                                    transformer=SimpleImputer(add_indicator=F...
                                    transformer=SimpleImputer(add_indicator=False,
                                                              copy=True,
                                                              fill_value=None,
                                                              missing_values=nan,
                                                              strategy='most_frequent',
                                                              verbose='deprecated'))),
                ('normalize',
                 TransformerWrapper(exclude=None, include=None,
                                    transformer=MinMaxScaler(clip=False, copy=True,
                                                             feature_range=(0, 1)))),
                ('actual_estimator',
                 QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                                               store_covariance=False,
                                               tol=0.0001))],
         verbose=False)
Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib),
         steps=[('label_encoding',
                 TransformerWrapperWithInverse(exclude=None, include=None,
                                               transformer=LabelEncoder())),
                ('numerical_imputer',
                 TransformerWrapper(exclude=None,
                                    include=['sepal_length', 'sepal_width',
                                             'petal_length', 'petal_width'],
                                    transformer=SimpleImputer(add_indicator=F...
                                    transformer=SimpleImputer(add_indicator=False,
                                                              copy=True,
                                                              fill_value=None,
                                                              missing_values=nan,
                                                              strategy='most_frequent',
                                                              verbose='deprecated'))),
                ('normalize',
                 TransformerWrapper(exclude=None, include=None,
                                    transformer=MinMaxScaler(clip=False, copy=True,
                                                             feature_range=(0, 1)))),
                ('trained_model',
                 QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                                               store_covariance=False,
                                               tol=0.0001))],
         verbose=False)
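The saved PyCaret pipeline above chains mean imputation, min-max scaling, and the trained QDA model. A close plain-scikit-learn analogue can be sketched with standard transformers (PyCaret's `TransformerWrapper` classes and target label-encoding step are replaced by their sklearn equivalents, and the iris data is assumed):

```python
# Plain-sklearn analogue of the preprocessing + model pipeline above.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_iris(return_X_y=True)
pipe = Pipeline(steps=[
    ("numerical_imputer", SimpleImputer(strategy="mean")),  # 'Numeric imputation: mean'
    ("normalize", MinMaxScaler(feature_range=(0, 1))),      # 'Normalize method: minmax'
    ("trained_model", QuadraticDiscriminantAnalysis()),
])
pipe.fit(X, y)
print(round(pipe.score(X, y), 4))  # training accuracy
```

Because preprocessing lives inside the pipeline, calling `pipe.predict` on raw new rows applies imputation and scaling automatically, which is exactly why PyCaret serializes the whole pipeline rather than the bare estimator.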
|    | Description                 | Value                                                 |
|----|-----------------------------|-------------------------------------------------------|
| 0  | Session id                  | 123                                                   |
| 1  | Target                      | species                                               |
| 2  | Target type                 | Multiclass                                            |
| 3  | Target mapping              | Iris-setosa: 0, Iris-versicolor: 1, Iris-virginica: 2 |
| 4  | Original data shape         | (150, 5)                                              |
| 5  | Transformed data shape      | (150, 5)                                              |
| 6  | Transformed train set shape | (105, 5)                                              |
| 7  | Transformed test set shape  | (45, 5)                                               |
| 8  | Numeric features            | 4                                                     |
| 9  | Preprocess                  | True                                                  |
| 10 | Imputation type             | simple                                                |
| 11 | Numeric imputation          | mean                                                  |
| 12 | Categorical imputation      | mode                                                  |
| 13 | Normalize                   | True                                                  |
| 14 | Normalize method            | minmax                                                |
| 15 | Fold Generator              | StratifiedKFold                                       |
| 16 | Fold Number                 | 10                                                    |
| 17 | CPU Jobs                    | -1                                                    |
| 18 | Use GPU                     | False                                                 |
| 19 | Log Experiment              | False                                                 |
| 20 | Experiment Name             | clf-default-name                                      |
| 21 | USI                         | 9a69                                                  |
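The 105/45 train and test shapes in the setup summary come from a stratified 70/30 hold-out split (the fifth column in the reported shapes is the target). A scikit-learn sketch of the same split, assuming the iris data:

```python
# Stratified 70/30 hold-out split, mirroring the shapes in the setup summary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=123)
print(X_train.shape, X_test.shape)  # → (105, 4) (45, 4)
```

Stratification keeps the three species in the same 1:1:1 proportion in both partitions, which is what the `StratifiedKFold` fold generator also guarantees during cross-validation.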