{ "metadata": { "name": "", "signature": "sha256:b1bc69f60c6fde6b2dc02463bb2f2966851a8a977fbb090a2c45eed931b082e4" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Both classification and regression are performed by `Estimator` objects (classifiers and regressors extend this class). For most classifiers and regressors, an `Estimator` looks roughly like the following:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "class Estimator(object):\n", " \n", " def __init__(self, *args, **kwargs):\n", " # Initialization of the object\n", " pass\n", " \n", " \n", " def fit(self, X, y):\n", " \"\"\"Train the Estimator\n", " Arguments:\n", " X(numpy array-like): Training Data\n", " y(numpy array-like): Labels\n", " \"\"\"\n", " # The learning algorithm goes here;\n", " # it does not return a value, but updates the Estimator's state\n", " pass\n", " \n", " def predict(self, X):\n", " \"\"\"Predict labels for the test data\n", " Arguments:\n", " X(numpy array-like): Test Data\n", " Returns:\n", " y(numpy array): Predicted Labels\n", " \"\"\"\n", " # compute the predictions here\n", " \n", " return predictions" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to summarize scikit-learn in three lines:\n", "\n", "    est = Estimator()\n", "    est.fit(X_train, y_train)\n", "    est.predict(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Commoditization of machine learning\n", "\n", "Some people say ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we initialize the estimator; then we fit it by providing the training dataset and its labels. Finally, we predict on the test dataset and get the predictions. Classification produces discrete labels (one of a fixed set of classes) whereas regression produces continuous, real-valued outputs. 
However, the API stays the same for classifiers and regressors in supervised learning. For unsupervised learning, we do not have `predict`, as there is no `target` variable; we have `fit` and `transform` functions instead." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Advantages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- It has a consistent API which is easy to use, while also providing a lot of evaluation, diagnostic and cross-validation methods out of the box (sound familiar? Python has a batteries-included approach as well). \n", "- It uses SciPy data structures under the hood and fits in well with the rest of the scientific computing stack in Python: the SciPy, NumPy, Pandas and Matplotlib packages. Therefore, if you want to visualize the performance of your classifiers (say, using a precision-recall graph or a Receiver Operating Characteristic (ROC) curve), this can be done quickly with the help of Matplotlib. Considering how much time is spent on cleaning and structuring data, this makes the library very convenient to use, as it integrates tightly with the other scientific computing packages.\n", "- It also has some basic Natural Language Processing feature extraction capabilities, such as bag of words, TF-IDF and preprocessing (stop words, custom preprocessing, analyzers). \n", "- If you want to quickly run different benchmarks on toy datasets, it has a datasets module which provides common and useful datasets. You can also build toy datasets from these for your own purposes, to see whether your model performs well before applying it to a real-world dataset. \n", "- For parameter optimization and tuning, it also provides grid search and randomized parameter search. " ] } ], "metadata": {} } ] }
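The `fit`/`predict` and `fit`/`transform` patterns described above can be sketched with concrete scikit-learn estimators. Below is a minimal, runnable example using `LogisticRegression` (supervised) and `StandardScaler` (unsupervised); the tiny toy dataset is made up purely for illustration:

```python
# Minimal sketch of the Estimator API with real scikit-learn estimators.
# The toy data below is invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.5], [2.5]])

# Supervised learning: the three-line fit/predict pattern
est = LogisticRegression()
est.fit(X_train, y_train)
predictions = est.predict(X_test)  # discrete class labels

# Unsupervised learning: no target variable, so fit/transform instead
scaler = StandardScaler()
scaled = scaler.fit_transform(X_train)  # zero-mean, unit-variance features
```

Note that `predict` returns one discrete label per test instance, while `transform` returns a new representation of the input data rather than labels.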