{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__mlmachine - Hyperparameter Tuning with Bayesian Optimization__\n",
"
\n",
"Welcome to Example Notebook 4. If you're new to mlmachine, check out [Example Notebook 1](https://github.com/petersontylerd/mlmachine/blob/master/notebooks/mlmachine_part_1.ipynb), [Example Notebook 2](https://github.com/petersontylerd/mlmachine/blob/master/notebooks/mlmachine_part_2.ipynb) and [Example Notebook 3](https://github.com/petersontylerd/mlmachine/blob/master/notebooks/mlmachine_part_3.ipynb).\n",
"
\n",
"Check out the [GitHub repository](https://github.com/petersontylerd/mlmachine).\n",
"
\n",
"\n",
"1. [Bayesian Optimization for Multiple Estimators in One Shot](#Bayesian-Optimization-for-Multiple-Estimators-in-One-Shot)\n",
" 1. [Prepare Data](#Prepare-Data)\n",
" 1. [Feature Importance Summary](#Feature-Importance-Summary)\n",
" 1. [Exhaustively Iterative Feature Selection](#Exhaustively-Iterative-Feature-Selection)\n",
" 1. [Outline Our Feature Space](#Outline-Our-Feature-Space)\n",
" 1. [Run the Bayesian Optimization Job](#Run-the-Bayesian-Optimization-Job)\n",
"1. [Results Analysis](#Results-Analysis)\n",
" 1. [Results Summary](#Results-Summary)\n",
" 1. [Model Optimization Assessment](#Model-Optimization-Assessment)\n",
" 1. [Parameter Selection Assessment](#Parameter-Selection-Assessment)\n",
"1. [Model Reinstantiation](#Model-Reinstantiation)\n",
" 1. [Top Model Identification](#Top-Model-Identification)\n",
" 1. [Putting the Models to Use](#Putting-the-Models-to-Use)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"# Bayesian Optimization for Multiple Estimators in One Shot\n",
"---\n",
"
\n",
"Bayesian optimization is typically described as an advancement beyond exhaustive grid searches, and rightfully so. This hyperparameter tuning strategy succeeds by using prior information to inform future parameter selection for a given estimator. Check out [Will Koehrsen's article on Medium](https://towardsdatascience.com/an-introductory-example-of-bayesian-optimization-in-python-with-hyperopt-aae40fff4ff0) for an excellent overview of the package.\n",
"
\n",
"\n",
"mlmachine uses hyperopt as a foundation for performing Bayesian optimization, and takes the functionality of hyperopt a step further through a simplified workflow that allows for optimization of multiple models in single process execution. In this article, we are going to optimize four classifiers:\n",
"- `LogisticRegression()`\n",
"- `XGBClassifier()`\n",
"- `RandomForestClassifier()`\n",
"- `KNeighborsClassifier()`"
]
},
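{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, hyperopt minimizes an objective function over a defined search space with `fmin()`. As a point of reference, here is a minimal standalone hyperopt sketch (plain hyperopt on a toy objective, not the mlmachine wrapper):\n",
"\n",
"```python\n",
"from hyperopt import Trials, fmin, hp, tpe\n",
"\n",
"# objective: hyperopt minimizes the returned value\n",
"def objective(params):\n",
"    return (params[\"x\"] - 3) ** 2\n",
"\n",
"trials = Trials()\n",
"best = fmin(\n",
"    fn=objective,\n",
"    space={\"x\": hp.uniform(\"x\", -10, 10)},\n",
"    algo=tpe.suggest,\n",
"    max_evals=50,\n",
"    trials=trials,\n",
")\n",
"\n",
"# best is a dict holding the best sampled value of x (close to 3)\n",
"```\n",
"\n",
"mlmachine wraps this loop so that one search space and one call can cover several estimators at once."
]
},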
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Prepare Data\n",
"---\n",
"
\n",
"First, we apply data preprocessing techniques to clean up our data. We'll start by creating two `Machine()` objects - one for the training data and a second for the validation data.\n",
"
"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-03T14:46:48.617162Z",
"start_time": "2020-04-03T14:46:43.248567Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"~/.pyenv/versions/main37/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.\n",
" warnings.warn(msg, category=FutureWarning)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
">>> category label encoding\n",
"\n",
"\t0 --> 0\n",
"\t1 --> 1\n",
"\n"
]
}
],
"source": [
"# import libraries\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"# import mlmachine tools\n",
"import mlmachine as mlm\n",
"from mlmachine.data import titanic\n",
"\n",
"# use titanic() function to create DataFrames for training and validation datasets\n",
"df_train, df_valid = titanic()\n",
"\n",
"# ordinal encoding hierarchy\n",
"ordinal_encodings = {\"Pclass\": [1, 2, 3]}\n",
"\n",
"# instantiate a Machine object for the training data\n",
"mlmachine_titanic_train = mlm.Machine(\n",
" data=df_train,\n",
" target=\"Survived\",\n",
" remove_features=[\"PassengerId\",\"Ticket\",\"Name\",\"Cabin\"],\n",
" identify_as_continuous=[\"Age\",\"Fare\"],\n",
" identify_as_count=[\"Parch\",\"SibSp\"],\n",
" identify_as_nominal=[\"Embarked\"],\n",
" identify_as_ordinal=[\"Pclass\"],\n",
" ordinal_encodings=ordinal_encodings,\n",
" is_classification=True,\n",
")\n",
"\n",
"# instantiate a Machine object for the validation data\n",
"mlmachine_titanic_valid = mlm.Machine(\n",
" data=df_valid,\n",
" remove_features=[\"PassengerId\",\"Ticket\",\"Name\",\"Cabin\"],\n",
" identify_as_continuous=[\"Age\",\"Fare\"],\n",
" identify_as_count=[\"Parch\",\"SibSp\"],\n",
" identify_as_nominal=[\"Embarked\"],\n",
" identify_as_ordinal=[\"Pclass\"],\n",
" ordinal_encodings=ordinal_encodings,\n",
" is_classification=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"
\n",
"Now we process the data by imputing nulls and applying various binning, feature engineering and encoding techniques:\n",
"
"
]
},
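{
"cell_type": "markdown",
"metadata": {},
"source": [
"For intuition, the grouped imputation performed by `GroupbyImputer` below can be sketched in plain pandas (a simplified illustration, not mlmachine's implementation):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"df = pd.DataFrame({\n",
"    \"SibSp\": [0, 0, 1, 1],\n",
"    \"Age\": [22.0, np.nan, 30.0, np.nan],\n",
"})\n",
"\n",
"# fill each null Age with the mean Age of that row's SibSp group\n",
"df[\"Age\"] = df.groupby(\"SibSp\")[\"Age\"].transform(lambda s: s.fillna(s.mean()))\n",
"# Age is now [22.0, 22.0, 30.0, 30.0]\n",
"```"
]
},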
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-03T04:51:07.652640Z",
"start_time": "2020-04-03T04:51:03.525954Z"
}
},
"outputs": [],
"source": [
"# standard libary and settings\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.model_selection import KFold\n",
"from sklearn.preprocessing import (\n",
" OrdinalEncoder,\n",
" OneHotEncoder,\n",
" KBinsDiscretizer,\n",
" RobustScaler,\n",
" PolynomialFeatures,\n",
")\n",
"from sklearn.pipeline import make_pipeline\n",
"\n",
"from category_encoders import WOEEncoder, TargetEncoder, CatBoostEncoder\n",
"\n",
"# import mlmachine tools\n",
"from mlmachine.features.preprocessing import (\n",
" DataFrameSelector,\n",
" PandasTransformer,\n",
" PandasFeatureUnion,\n",
" GroupbyImputer,\n",
" KFoldEncoder,\n",
")\n",
"\n",
"### create imputation PandasFeatureUnion pipeline\n",
"impute_pipe = PandasFeatureUnion([\n",
" (\"age\", make_pipeline(\n",
" DataFrameSelector(include_columns=[\"Age\",\"SibSp\"]),\n",
" GroupbyImputer(null_column=\"Age\", groupby_column=\"SibSp\", strategy=\"mean\")\n",
" )),\n",
" (\"fare\", make_pipeline(\n",
" DataFrameSelector(include_columns=[\"Fare\",\"Pclass\"]),\n",
" GroupbyImputer(null_column=\"Fare\", groupby_column=\"Pclass\", strategy=\"mean\")\n",
" )),\n",
" (\"embarked\", make_pipeline(\n",
" DataFrameSelector(include_columns=[\"Embarked\"]),\n",
" PandasTransformer(SimpleImputer(strategy=\"most_frequent\"))\n",
" )),\n",
" (\"diff\", make_pipeline(\n",
" DataFrameSelector(exclude_columns=[\"Age\",\"Fare\",\"Embarked\"])\n",
" )),\n",
"])\n",
"\n",
"# fit and transform training data, transform validation data\n",
"mlmachine_titanic_machine.training_features = impute_pipe.fit_transform(mlmachine_titanic_machine.training_features)\n",
"mlmachine_titanic_machine.validation_features = impute_pipe.transform(mlmachine_titanic_machine.validation_features)\n",
"\n",
"### create polynomial feature PandasFeatureUnion pipeline\n",
"polynomial_pipe = PandasFeatureUnion([\n",
" (\"polynomial\", make_pipeline(\n",
" DataFrameSelector(include_mlm_dtypes=[\"continuous\"]),\n",
" PandasTransformer(PolynomialFeatures(degree=2, interaction_only=False, include_bias=False)),\n",
" )),\n",
" (\"diff\", make_pipeline(\n",
" DataFrameSelector(exclude_mlm_dtypes=[\"continuous\"]),\n",
" )),\n",
"])\n",
"\n",
"# fit and transform training data, transform validation data\n",
"mlmachine_titanic_machine.training_features = polynomial_pipe.fit_transform(mlmachine_titanic_machine.training_features)\n",
"mlmachine_titanic_machine.validation_features = polynomial_pipe.transform(mlmachine_titanic_machine.validation_features)\n",
"\n",
"# update mlm_dtypes\n",
"mlmachine_titanic_machine.update_dtypes()\n",
"mlmachine_titanic_\n",
"\n",
"### create simple encoding & binning PandasFeatureUnion pipeline\n",
"encode_pipe = PandasFeatureUnion([\n",
" (\"nominal\", make_pipeline(\n",
" DataFrameSelector(include_columns=mlmachine_titanic_machine.training_features.mlm_dtypes[\"nominal\"]),\n",
" PandasTransformer(OneHotEncoder(drop=\"first\")),\n",
" )),\n",
" (\"ordinal\", make_pipeline(\n",
" DataFrameSelector(include_columns=list(ordinal_encodings.keys())),\n",
" PandasTransformer(OrdinalEncoder(categories=list(ordinal_encodings.values()))),\n",
" )),\n",
" (\"bin\", make_pipeline(\n",
" DataFrameSelector(include_columns=mlmachine_titanic_machine.training_features.mlm_dtypes[\"continuous\"]),\n",
" PandasTransformer(KBinsDiscretizer(encode=\"ordinal\")),\n",
" )),\n",
" (\"diff\", make_pipeline(\n",
" DataFrameSelector(exclude_columns=mlmachine_titanic_machine.training_features.mlm_dtypes[\"nominal\"] + list(ordinal_encodings.keys())),\n",
" )),\n",
"])\n",
"\n",
"# fit and transform training data, transform validation data\n",
"mlmachine_titanic_machine.training_features = encode_pipe.fit_transform(mlmachine_titanic_machine.training_features)\n",
"mlmachine_titanic_machine.validation_features = encode_pipe.fit_transform(mlmachine_titanic_machine.validation_features)\n",
"\n",
"# update mlm_dtypes\n",
"mlmachine_titanic_machine.update_dtypes()\n",
"mlmachine_titanic_\n",
"\n",
"### create KFold encoding PandasFeatureUnion pipeline\n",
"target_encode_pipe = PandasFeatureUnion([\n",
" (\"target\", make_pipeline(\n",
" DataFrameSelector(include_mlm_dtypes=[\"category\"]), \n",
" KFoldEncoder(\n",
" target=mlmachine_titanic_machine.training_target,\n",
" cv=KFold(n_splits=5, shuffle=True, random_state=0),\n",
" encoder=TargetEncoder,\n",
" ),\n",
" )),\n",
" (\"woe\", make_pipeline(\n",
" DataFrameSelector(include_mlm_dtypes=[\"category\"]),\n",
" KFoldEncoder(\n",
" target=mlmachine_titanic_machine.training_target,\n",
" cv=KFold(n_splits=5, shuffle=False),\n",
" encoder=WOEEncoder,\n",
" ),\n",
" )),\n",
" (\"catboost\", make_pipeline(\n",
" DataFrameSelector(include_mlm_dtypes=[\"category\"]),\n",
" KFoldEncoder(\n",
" target=mlmachine_titanic_machine.training_target,\n",
" cv=KFold(n_splits=5, shuffle=False),\n",
" encoder=CatBoostEncoder,\n",
" ),\n",
" )),\n",
" (\"diff\", make_pipeline(\n",
" DataFrameSelector(exclude_mlm_dtypes=[\"category\"]),\n",
" )),\n",
"])\n",
"\n",
"# fit and transform training data, transform validation data\n",
"mlmachine_titanic_machine.training_features = target_encode_pipe.fit_transform(mlmachine_titanic_machine.training_features)\n",
"mlmachine_titanic_machine.validation_features = target_encode_pipe.transform(mlmachine_titanic_machine.validation_features)\n",
"\n",
"# update mlm_dtypes\n",
"mlmachine_titanic_machine.update_dtypes()\n",
"mlmachine_titanic_\n",
"\n",
"### scale values\n",
"scale = PandasTransformer(RobustScaler())\n",
"\n",
"mlmachine_titanic_machine.training_features = scale.fit_transform(mlmachine_titanic_machine.training_features)\n",
"mlmachine_titanic_machine.validation_features = scale.transform(mlmachine_titanic_machine.validation_features)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-03T04:51:07.686180Z",
"start_time": "2020-04-03T04:51:07.655312Z"
}
},
"outputs": [
{
"data": {
"text/html": [
"
\n", " | Age | \n", "Age*Fare | \n", "Age*Fare_binned_5 | \n", "Age*Fare_binned_5_catboost_encoded | \n", "Age*Fare_binned_5_target_encoded | \n", "Age*Fare_binned_5_woe_encoded | \n", "Age^2 | \n", "Age^2_binned_5 | \n", "Age^2_binned_5_catboost_encoded | \n", "Age^2_binned_5_target_encoded | \n", "... | \n", "Parch | \n", "Pclass_ordinal_encoded | \n", "Pclass_ordinal_encoded_catboost_encoded | \n", "Pclass_ordinal_encoded_target_encoded | \n", "Pclass_ordinal_encoded_woe_encoded | \n", "Sex_male | \n", "Sex_male_catboost_encoded | \n", "Sex_male_target_encoded | \n", "Sex_male_woe_encoded | \n", "SibSp | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | \n", "-0.622287 | \n", "-0.241862 | \n", "-1.0 | \n", "-0.061922 | \n", "0.257553 | \n", "-0.090433 | \n", "-0.568680 | \n", "-0.5 | \n", "0.196885 | \n", "0.754887 | \n", "... | \n", "0.0 | \n", "0.0 | \n", "-0.089623 | \n", "0.000000 | \n", "-0.167183 | \n", "0.0 | \n", "0.017268 | \n", "0.017703 | \n", "0.000604 | \n", "1.0 | \n", "
1 | \n", "0.608483 | \n", "2.880011 | \n", "1.0 | \n", "1.746016 | \n", "1.330575 | \n", "1.479878 | \n", "0.726867 | \n", "1.0 | \n", "0.660192 | \n", "0.000000 | \n", "... | \n", "0.0 | \n", "-2.0 | \n", "1.760707 | \n", "1.370847 | \n", "1.550231 | \n", "-1.0 | \n", "0.996585 | \n", "0.989747 | \n", "0.991390 | \n", "1.0 | \n", "
2 | \n", "-0.314594 | \n", "-0.184856 | \n", "-0.5 | \n", "-1.034485 | \n", "-0.868374 | \n", "-1.118681 | \n", "-0.309570 | \n", "-0.5 | \n", "0.196885 | \n", "-0.053122 | \n", "... | \n", "0.0 | \n", "0.0 | \n", "-0.089623 | \n", "-0.121216 | \n", "-0.167183 | \n", "-1.0 | \n", "0.996585 | \n", "0.991811 | \n", "0.991390 | \n", "0.0 | \n", "
3 | \n", "0.377713 | \n", "1.838762 | \n", "1.0 | \n", "1.746016 | \n", "1.529026 | \n", "1.479878 | \n", "0.431320 | \n", "0.5 | \n", "-0.550145 | \n", "-0.358207 | \n", "... | \n", "0.0 | \n", "-2.0 | \n", "1.760707 | \n", "1.518257 | \n", "1.550231 | \n", "-1.0 | \n", "0.996585 | \n", "1.018386 | \n", "0.991390 | \n", "1.0 | \n", "
4 | \n", "0.377713 | \n", "-0.092152 | \n", "0.0 | \n", "-0.528664 | \n", "-0.424139 | \n", "-0.541412 | \n", "0.431320 | \n", "0.5 | \n", "-0.550145 | \n", "-0.164268 | \n", "... | \n", "0.0 | \n", "0.0 | \n", "-0.089623 | \n", "-0.092714 | \n", "-0.167183 | \n", "0.0 | \n", "0.017268 | \n", "-0.004054 | \n", "0.000604 | \n", "0.0 | \n", "
5 | \n", "0.100602 | \n", "-0.111967 | \n", "-0.5 | \n", "-1.034485 | \n", "-0.626603 | \n", "-1.118681 | \n", "0.108522 | \n", "0.5 | \n", "-0.550145 | \n", "-0.301361 | \n", "... | \n", "0.0 | \n", "0.0 | \n", "-0.089623 | \n", "-0.035279 | \n", "-0.167183 | \n", "0.0 | \n", "0.017268 | \n", "0.000000 | \n", "0.000604 | \n", "0.0 | \n", "
6 | \n", "1.839252 | \n", "2.992442 | \n", "1.0 | \n", "1.746016 | \n", "1.529026 | \n", "1.479878 | \n", "2.713372 | \n", "1.0 | \n", "0.660192 | \n", "0.442971 | \n", "... | \n", "0.0 | \n", "-2.0 | \n", "1.760707 | \n", "1.518257 | \n", "1.550231 | \n", "0.0 | \n", "0.017268 | \n", "0.017703 | \n", "0.000604 | \n", "0.0 | \n", "
7 | \n", "-2.160748 | \n", "-0.385571 | \n", "-1.0 | \n", "-0.061922 | \n", "0.257553 | \n", "-0.090433 | \n", "-1.216453 | \n", "-1.0 | \n", "1.956874 | \n", "1.754877 | \n", "... | \n", "1.0 | \n", "0.0 | \n", "-0.089623 | \n", "0.000000 | \n", "-0.167183 | \n", "0.0 | \n", "0.017268 | \n", "0.017703 | \n", "0.000604 | \n", "3.0 | \n", "
8 | \n", "-0.237671 | \n", "-0.069069 | \n", "0.0 | \n", "-0.528664 | \n", "-0.325717 | \n", "-0.541412 | \n", "-0.238045 | \n", "-0.5 | \n", "0.196885 | \n", "-0.196302 | \n", "... | \n", "2.0 | \n", "0.0 | \n", "-0.089623 | \n", "-0.035279 | \n", "-0.167183 | \n", "-1.0 | \n", "0.996585 | \n", "0.989747 | \n", "0.991390 | \n", "0.0 | \n", "
9 | \n", "-1.237671 | \n", "0.078365 | \n", "0.0 | \n", "-0.528664 | \n", "-0.358860 | \n", "-0.541412 | \n", "-0.957344 | \n", "-1.0 | \n", "1.956874 | \n", "1.193962 | \n", "... | \n", "0.0 | \n", "-1.0 | \n", "0.909780 | \n", "0.823122 | \n", "0.797100 | \n", "-1.0 | \n", "0.996585 | \n", "0.935964 | \n", "0.991390 | \n", "1.0 | \n", "
10 rows × 43 columns
\n", "\n", " | iteration | \n", "estimator | \n", "scoring | \n", "loss | \n", "mean_score | \n", "std_score | \n", "min_score | \n", "max_score | \n", "train_time | \n", "status | \n", "params | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|
0 | \n", "1 | \n", "LogisticRegression | \n", "accuracy | \n", "0.204218 | \n", "0.795782 | \n", "0.026768 | \n", "0.754190 | \n", "0.825843 | \n", "1.080047 | \n", "ok | \n", "{'C': 0.008174796663349533, 'penalty': 'l2', '... | \n", "
1 | \n", "2 | \n", "LogisticRegression | \n", "accuracy | \n", "0.210922 | \n", "0.789078 | \n", "0.037798 | \n", "0.720670 | \n", "0.820225 | \n", "1.232385 | \n", "ok | \n", "{'C': 0.007456580176063114, 'penalty': 'l2', '... | \n", "
2 | \n", "3 | \n", "LogisticRegression | \n", "accuracy | \n", "0.196384 | \n", "0.803616 | \n", "0.027326 | \n", "0.769663 | \n", "0.848315 | \n", "1.175930 | \n", "ok | \n", "{'C': 0.015197336097355196, 'penalty': 'l2', '... | \n", "
3 | \n", "4 | \n", "LogisticRegression | \n", "accuracy | \n", "0.191890 | \n", "0.808110 | \n", "0.025218 | \n", "0.780899 | \n", "0.842697 | \n", "1.189311 | \n", "ok | \n", "{'C': 0.016970348333069277, 'penalty': 'l2', '... | \n", "
4 | \n", "5 | \n", "LogisticRegression | \n", "accuracy | \n", "0.196391 | \n", "0.803609 | \n", "0.014813 | \n", "0.786517 | \n", "0.825843 | \n", "1.207008 | \n", "ok | \n", "{'C': 0.03346957219746275, 'penalty': 'l2', 'n... | \n", "
5 | \n", "6 | \n", "LogisticRegression | \n", "accuracy | \n", "0.193064 | \n", "0.806936 | \n", "0.020473 | \n", "0.780899 | \n", "0.831461 | \n", "1.180980 | \n", "ok | \n", "{'C': 0.18501222413286775, 'penalty': 'l2', 'n... | \n", "
6 | \n", "7 | \n", "LogisticRegression | \n", "accuracy | \n", "0.292838 | \n", "0.707162 | \n", "0.044509 | \n", "0.625698 | \n", "0.747191 | \n", "1.202759 | \n", "ok | \n", "{'C': 0.0011479966819945002, 'penalty': 'l2', ... | \n", "
7 | \n", "8 | \n", "LogisticRegression | \n", "accuracy | \n", "0.210922 | \n", "0.789078 | \n", "0.037965 | \n", "0.720670 | \n", "0.820225 | \n", "1.166395 | \n", "ok | \n", "{'C': 0.007362723951833776, 'penalty': 'l2', '... | \n", "
8 | \n", "9 | \n", "LogisticRegression | \n", "accuracy | \n", "0.195255 | \n", "0.804745 | \n", "0.023097 | \n", "0.776536 | \n", "0.837079 | \n", "1.171910 | \n", "ok | \n", "{'C': 0.028150535343720546, 'penalty': 'l2', '... | \n", "
9 | \n", "10 | \n", "LogisticRegression | \n", "accuracy | \n", "0.298456 | \n", "0.701544 | \n", "0.042828 | \n", "0.625698 | \n", "0.747191 | \n", "1.179324 | \n", "ok | \n", "{'C': 0.0010719047187090322, 'penalty': 'l2', ... | \n", "
10 | \n", "11 | \n", "LogisticRegression | \n", "accuracy | \n", "0.191903 | \n", "0.808097 | \n", "0.014580 | \n", "0.792135 | \n", "0.831461 | \n", "1.179505 | \n", "ok | \n", "{'C': 0.04917602741481999, 'penalty': 'l2', 'n... | \n", "
11 | \n", "12 | \n", "LogisticRegression | \n", "accuracy | \n", "0.188551 | \n", "0.811449 | \n", "0.010974 | \n", "0.797753 | \n", "0.831461 | \n", "1.170878 | \n", "ok | \n", "{'C': 0.06940715377237823, 'penalty': 'l2', 'n... | \n", "
12 | \n", "13 | \n", "LogisticRegression | \n", "accuracy | \n", "0.241215 | \n", "0.758785 | \n", "0.043890 | \n", "0.681564 | \n", "0.803371 | \n", "1.184253 | \n", "ok | \n", "{'C': 0.0029054318718131663, 'penalty': 'l2', ... | \n", "
13 | \n", "14 | \n", "LogisticRegression | \n", "accuracy | \n", "0.193014 | \n", "0.806986 | \n", "0.023733 | \n", "0.780899 | \n", "0.837079 | \n", "1.156993 | \n", "ok | \n", "{'C': 0.02173685952136848, 'penalty': 'l2', 'n... | \n", "
14 | \n", "15 | \n", "LogisticRegression | \n", "accuracy | \n", "0.193020 | \n", "0.806980 | \n", "0.015832 | \n", "0.787709 | \n", "0.831461 | \n", "1.139732 | \n", "ok | \n", "{'C': 0.042661619723085194, 'penalty': 'l2', '... | \n", "
15 | \n", "16 | \n", "LogisticRegression | \n", "accuracy | \n", "0.240092 | \n", "0.759908 | \n", "0.044651 | \n", "0.681564 | \n", "0.803371 | \n", "1.172876 | \n", "ok | \n", "{'C': 0.003325935337999729, 'penalty': 'l2', '... | \n", "
16 | \n", "17 | \n", "LogisticRegression | \n", "accuracy | \n", "0.195305 | \n", "0.804695 | \n", "0.019433 | \n", "0.780899 | \n", "0.831461 | \n", "1.192614 | \n", "ok | \n", "{'C': 0.15489548435948908, 'penalty': 'l2', 'n... | \n", "
17 | \n", "18 | \n", "LogisticRegression | \n", "accuracy | \n", "0.186310 | \n", "0.813690 | \n", "0.010923 | \n", "0.797753 | \n", "0.831461 | \n", "1.204993 | \n", "ok | \n", "{'C': 0.05931036664011822, 'penalty': 'l2', 'n... | \n", "
18 | \n", "19 | \n", "LogisticRegression | \n", "accuracy | \n", "0.201971 | \n", "0.798029 | \n", "0.028659 | \n", "0.754190 | \n", "0.831461 | \n", "1.141704 | \n", "ok | \n", "{'C': 0.008415834542277328, 'penalty': 'l2', '... | \n", "
19 | \n", "20 | \n", "LogisticRegression | \n", "accuracy | \n", "0.194143 | \n", "0.805857 | \n", "0.015445 | \n", "0.787709 | \n", "0.831461 | \n", "1.147760 | \n", "ok | \n", "{'C': 0.03837255565837678, 'penalty': 'l2', 'n... | \n", "