{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Homepage: https://spkit.github.io\n",
"\n",
"Nikesh Bajaj : http://nikeshbajaj.in"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Classification Trees: Depth & Decision boundaries using SpKit"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: In this notebook, we show how the depth of a decision tree affects the decision boundaries for classification. Using toy examples of simulated 2D datasets, plotting the decision boundaries lets us see when a tree overfits. You will observe that lowering the depth of the tree reduces overfitting: a deeper tree creates a much more complex decision boundary in order to classify every training example correctly and, as a consequence, fails to generalize to test data. The take-away message of this notebook is to choose the hyperparameter ***max_depth*** of a tree carefully before training."
]
},
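{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the effect described above, assuming scikit-learn is available (its `DecisionTreeClassifier` is used here only as a stand-in for illustration, not the SpKit API used later in this notebook): training accuracy rises with `max_depth` while test accuracy can drop."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import make_moons\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"# Simulated 2D toy dataset (two interleaved half-moons with noise)\n",
"X, y = make_moons(n_samples=400, noise=0.3, random_state=0)\n",
"Xtr, Xts, ytr, yts = train_test_split(X, y, test_size=0.5, random_state=0)\n",
"\n",
"# Deeper trees fit the training set better, but may generalize worse\n",
"for depth in [1, 2, 4, 8, None]:\n",
"    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(Xtr, ytr)\n",
"    print(depth, clf.score(Xtr, ytr), clf.score(Xts, yts))"
]
},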
{
"cell_type": "markdown",
"metadata": {
"toc": true
},
"source": [
"