{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section1_2-Combinatorial-Optimization.ipynb)\n", "\n", "# Combinatorial Optimization\n", "\n", "Some optimization problems are ***combinatorial***, in the sense that there are $p$ items that can be ordered or combined in many different ways, some ways being better than others according to a set of specified criteria.\n", "\n", "As an example, consider the deceptively simple traveling salesperson problem (TSP), which attempts to optimize the hypothetical path of a salesperson who is required to visit each of a set of cities once, then return home. Assuming travel is the same distance irrespective of travel direction, there are:\n", "\n", "$$\\frac{(p-1)!}{2}$$\n", "\n", "possible routes. So, 5 cities have 12 possible routes, 10 cities have 181,440 routes, 50 cities have $3 \\times 10^{64}$ routes!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "from mpl_toolkits.basemap import Basemap\n", "\n", "import warnings\n", "warnings.simplefilter('ignore')\n", "\n", "sns.set(context='notebook', style='ticks', palette='viridis')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def parse_latlon(x):\n", " d, m, s = map(float, x.split(':'))\n", " ms = m/60. + s/3600.\n", " if d<0:\n", " return d - ms\n", " return d + ms\n", "\n", "cities = pd.read_csv('https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/brasil_capitals.txt', \n", " names=['city','lat','lon'])[['lat','lon']].applymap(parse_latlon)\n", "\n", "xmin, xmax, ymin, ymax = -70.0, -20.0, -40.0, 10.0\n", "\n", "fig = plt.figure(figsize=(11.7,8.3))\n", "bm = Basemap(projection='merc', \\\n", " llcrnrlon=xmin, llcrnrlat=ymin, \\\n", " urcrnrlon=xmax, urcrnrlat=ymax, \\\n", " lon_0=0.5*(xmax + xmin), lat_0=0.5*(ymax + ymin), \\\n", " resolution='l', area_thresh=1000000)\n", " \n", "bm.drawcoastlines(linewidth=1.5)\n", "bm.bluemarble()\n", "bm.drawcountries(linewidth=2)\n", "bm.drawstates(linewidth=1)\n", "\n", "for i,c in cities.iterrows():\n", " bm.plot(-c['lat'], c['lon'], 'ro', latlon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The difficulty of a problem of size $p$ can be quantified by ***the number of calculations needed to solve the worst-case scenario using the best available algorithm***.\n", "\n", "We can bound the number of operations using the \"Big-O\" notation, which is a relative representation of the complexity of an algorithm. It can be used to compare similar tasks doing similar sorts of operations on similar data. \n", "\n", "So, in general we use the notation:\n", "\n", "$$\\mathcal{O}(h(p))$$\n", "\n", "For example, if $h$ is polynomial in $p$, then the algorithm is polynomial.\n", "\n", "Given what we showed above, the TSP is $\\mathcal{O}(p!)$ or factorial complexity. 
If we were able to solve a 20-city problem in 1 minute using a particular algorithm, then we could solve:\n", "\n", "* $p=21$ in 21 minutes\n", "* $p=25$ in 12.1 years\n", "* $p=30$ in 207 million years\n", "* $p=50$ in $2.4 \\times 10^{40}$ years\n", "\n", "Contrast this with a polynomial problem such as $\\mathcal{O}(p^2)$:\n", "\n", "* $p=21$ in 1.1 minutes\n", "* $p=25$ in 1.57 minutes\n", "* $p=30$ in 2.25 minutes\n", "* $p=50$ in 6.25 minutes\n", "\n", "Clearly, finding an algorithm that reduces solving a particular problem from factorial to polynomial time is advantageous." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![xkcd travelling salesman](http://imgs.xkcd.com/comics/travelling_salesman_problem.png)\n", "\n", "(via [xkcd](http://xkcd.com/399/))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Heuristics\n", "\n", "Unfortunately, there is no known combinatorial optimization algorithm for obtaining an optimal solution to the TSP in polynomial time. Instead, we must turn to ***heuristics***, which have no guarantee of finding a global maximum, but in practice tend to yield *good* results in a reasonable time. Thus, we trade guaranteed global optimality for speed.\n", "\n", "Heuristics have two notable characteristics:\n", "\n", "* **iteration**: candidate solutions are incrementally improved\n", "* **localization**: the search for improved solutions is restricted to a local neighborhood of the current solution\n", "\n", "This ***local search*** approach encompasses several specific techniques, some of which we will explore here. For a given candidate solution vector $\\mathbf{\\theta}^{(t)}$ at iteration $t$, we might change components $\\theta_i$ to propose an updated solution $\\mathbf{\\theta}^{(t+1)}$. Limiting the number of changes keeps $\\mathbf{\\theta}^{(t+1)}$ in the *neighborhood* of $\\mathbf{\\theta}^{(t)}$. We refer to $k$ changes to the candidate solution as a **k-change** and the set of possible new candidates as the *k-neighborhood*.\n", "\n", "A sensible approach for updating a candidate solution is to choose the best candidate from the neighborhood; this is called ***steepest ascent***. The selection of any improved candidate is called an *ascent*. However, choosing the steepest ascent from a neighborhood may not be globally optimal if, for example, it takes us toward a local maximum at the cost of missing a global maximum. An algorithm that uses a steepest ascent strategy in the context of local search is called a *greedy* algorithm.\n", "\n", "In order to attain a global maximum (or a globally-competitive solution), it makes sense to occasionally choose a candidate solution that is not the best in the neighborhood. In other words, to move from one peak to another (higher) peak, one must pass through valleys.\n", "\n", "Since local search algorithms can easily converge to local optima, one *ad hoc* approach to improve exploration of the parameter space is to use ***random starts***. This simply involves running the local search multiple times, each time initializing the starting point randomly over $\\Theta$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Baseball salaries\n", "\n", "Perhaps we wish to develop a predictive model of baseball salaries, using various player statistics as predictors. To keep things simple, we will use linear regression on everyday players (non-pitchers) only. We seek an optimal (parsimonious) subset of the 27 candidate statistics available. 
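\n", "\n", "Before specializing to these data, here is a minimal sketch of the greedy (steepest-ascent) local search idea in the abstract; the `score` and `neighbors` helpers are hypothetical placeholders for a problem-specific objective and 1-neighborhood generator:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def steepest_ascent(theta, score, neighbors, max_iter=100):\n", "    # Move to the best-scoring solution in the current neighborhood,\n", "    # stopping when no neighbor improves on the current solution.\n", "    current, current_score = theta, score(theta)\n", "    for _ in range(max_iter):\n", "        scored = [(score(t), t) for t in neighbors(current)]\n", "        best_score, best = max(scored, key=lambda pair: pair[0])\n", "        if best_score <= current_score:\n", "            break\n", "        current, current_score = best, best_score\n", "    return current, current_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "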
\n", "\n", "A brute-force search of the entire parameter space would involve comparing $2^{27} = 134,217,728$ models, so we will instead use random-starts local search." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "baseball = pd.read_table(\"https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/baseball.dat\", sep='\\s+')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.salary.hist(grid=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since players salaries are strongly skewed, we will log-transform the response variable for the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictors = baseball.copy()\n", "logsalary = predictors.pop('salary').apply(np.log)\n", "nrows, ncols = predictors.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the purposes of this example, we will start the algorithm from 5 different starting positions in the model space $\\Theta$, and will run each algorithm for 15 iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nstarts = 5\n", "iterations = 15" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will generate random starting candidate models by generating random indicators for each of the 27 predictors." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Randomly initialize starting values of runs\n", "runs = np.random.binomial(1, 0.5, ncols*nstarts).reshape((nstarts,ncols)).astype(bool)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a measure of model parsimony, we will use Akaike's Information Criterion (AIC), which is an information-theoretic model selection criterion. It balances model fit (likelihood) with model size by penalizing the addition of parameters. \n", "\n", "$$\\text{AIC} = n \\log(\\text{SSE}/n) + 2k$$\n", "\n", "Thus models that add parameters that do not appreciably inrpove model fit will be down-weighted by AIC. Better models have a lower AIC value for a given dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aic = lambda g, X, y: len(y) * np.log(sum((g.predict(X) - y)**2)/len(y)) + 2*g.rank_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At each iteration of the algorithm, we examine models in the 1-neighborhood of the current model by adding or dropping each predictor in turn, fitting the model and calculating the associated AIC value. We then select the model with the lowest AIC in the neighborhood and set that to the current model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "\n", "runs_aic = np.empty((nstarts, iterations))\n", "\n", "for i in range(nstarts):\n", " \n", " run_current = runs[i]\n", " \n", " for j in range(iterations):\n", " \n", " # Extract current set of predictors\n", " run_vars = predictors[predictors.columns[run_current]]\n", " g = LinearRegression().fit(X=run_vars, y=logsalary)\n", " run_aic = aic(g, run_vars, logsalary)\n", " run_next = run_current\n", " \n", " # Test all models in 1-neighborhood and select lowest AIC\n", " for k in range(ncols):\n", " run_step = run_current.copy()\n", " run_step[k] = not run_current[k]\n", " run_vars = predictors[predictors.columns[run_step]]\n", " g = LinearRegression().fit(X=run_vars, y=logsalary)\n", " step_aic = aic(g, run_vars, logsalary)\n", " if step_aic < run_aic:\n", " run_next = run_step.copy()\n", " run_aic = step_aic\n", " \n", " run_current = run_next.copy()\n", " runs_aic[i,j] = run_aic\n", " \n", " runs[i] = run_current\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plotting the change in AIC, we see that all the runs tend to converge to the same (or equivalent) models." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(runs_aic).T.plot(grid=False)\n", "plt.xlabel('iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This table shows the number of times a given predictor was used across all 5 solutions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series(runs.sum(0), index=predictors.columns).sort_values(ascending=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulated Annealing\n", "\n", "Annealing is a metallurgic process that involves heating a material beyond its critical temperature, maintaining a suitable temperature, and then cooling. If the process is followed according to a schedule appropriate to the material, annealing increases its ductility and reduces the number of defects. ***Simulated annealing*** uses an analogous process to pursue an optimial solution of a function over a large search space, in the sense that at high \"temperatures\" (large probabilities) solutions are proposed from more distant regions of the parameter space relative to the current solution. Similarly, \"cooling\" slowly reduces the number of exploratory proposals in order to converge on a (hopefully) global optimum. Thus, there is an initial period of liberal exploration, which gradually decreases to resemble hill-climbing procedures that we have already seen.\n", "\n", "A generic simulated annealing algorithm proceeds as follows:\n", "\n", "1. Initialize $t=0$, $\\theta^{(t=0)}$, temperature $\\tau^{(0)}$\n", "2. Iterate until convergence:\n", "\n", " a. Select candidate solution $\\theta^{\\prime}$ from neighborhood of $\\theta^{(t)}$ \n", " according to proposal g($\\theta^{(t)})$ \n", " b. Set $\\theta^{(t+1)} = \\theta^{\\prime}$ with probability:\n", " \n", " $$\\alpha = \\min\\left(1, \\exp\\left[\\frac{f(\\theta^{(t)})-f(\\theta^{\\prime})}{\\tau^{(j)}}\\right]\\right)$$\n", " \n", " otherwise, set $\\theta^{(t+1)} = \\theta^{(t)}$ \n", " c. Repeat (a) and (b) for $m_j$ iterations \n", " d. Increment $j$, update $\\tau^{(j)}$ and $m_j$ according to cooling schedule\n", " \n", "This algorithm can be halted once the minimum temperature is reached. 
The temperature $\\tau^{(j)}$ should slowly decrease, while the time spent at each temperature $m_j$ should correspondingly increase.\n", "\n", "Notice that while superior candidates are *always* adopted when proposed, inferior solutions may also be accepted, with a probability related to their quality relative to the current solution. This is what allows for exploration of the parameter space, and the escape from local optima." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Proposals\n", "\n", "A variety of proposal and neighborhood strategies can be effective, depending on the structure of the problem at hand. One constraint on generating proposed solutions is that all pairs of solutions $(\\theta^{(i)}, \\theta^{(j)}) \\in \\Theta, i \\ne j$ be able to ***communicate***. That is, there must exist some finite sequence of solutions that can be generated starting at $\\theta^{(i)}$ such that we eventually reach $\\theta^{(j)}$.\n", "\n", "The simplest example of a proposal strategy that allows for communication among solutions is to sample uniformly from the $k$-neighborhood of the current solution.\n", "\n", "### Cooling and Convergence\n", "\n", "Unlike some metaheuristics, the limiting behavior of simulated annealing is reasonably well understood. Within each temperature regime, the SA algorithm produces a homogeneous Markov chain (since the transition probabilities do not change from step to step), becoming non-homogeneous whenever a cooling event is triggered. If we generate symmetric proposals, such that the probability of proposing $\\theta^{(i)}$ from $\\theta^{(j)}$ is equal to that of proposing $\\theta^{(j)}$ from $\\theta^{(i)}$, then the *stationary* distribution of the Markov chain is:\n", "\n", "$$p_{\\tau}(\\theta) \\propto \\exp[-f(\\theta)/\\tau]$$\n", "\n", "So, in the limit, as we run the Markov chain, it converges to $p_{\\tau}(\\theta)$. During the SA algorithm, we want to run the chain long enough that it is close to its stationary distribution before cooling. Determining how long is \"long enough\" usually takes some experimentation with trial runs.\n", "\n", "In general, we adopt a cooling schedule that sets the temperature at period $j$ according to $\\tau^{(j)} = f_{\\alpha}(\\tau^{(j-1)})$ and the number of iterations in period $j$ according to $m_j = f_\\beta(m_{j-1})$. Some specific examples include:\n", "\n", "* $m_j = 1$ for all $j$, with very slow cooling via $\\tau^{(j)} = \\tau^{(j-1)}/(1 + \\alpha \\tau^{(j-1)})$ for some small chosen $\\alpha$\n", "* $\\tau^{(j)} = \\alpha \\tau^{(j-1)}$ for some $\\alpha \\lt 1$, with $m_j = \\beta m_{j-1}$ for some $\\beta \\gt 1$\n", "\n", "One approach is to choose an initial temperature $\\tau^{(0)}$ so that the acceptance probability is close to 1 for all pairs of $\\theta^{(i)}$ and $\\theta^{(j)}$, which gives any point a reasonable chance of being visited early in the simulation.\n", "\n", "The appropriate choice for $m_j$ is a tradeoff between performance and speed: a large number of steps can produce a more accurate solution, but requires additional computation. Evidence suggests that long runs at high temperatures are not optimal. For most problems, good cooling schedules involve a rapid decrease in temperature early in the simulation."
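, "\n", "\n", "As a quick illustration (the constants here are arbitrary and chosen only for plotting, not tied to the example below), the two schedules above can be generated and compared directly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative cooling schedules (arbitrary constants)\n", "tau0, n_periods = 10., 50\n", "\n", "# Very slow cooling: tau_j = tau_{j-1} / (1 + alpha*tau_{j-1})\n", "slow = [tau0]\n", "for _ in range(n_periods - 1):\n", "    slow.append(slow[-1] / (1 + 0.01*slow[-1]))\n", "\n", "# Geometric cooling: tau_j = alpha * tau_{j-1}\n", "geometric = [tau0 * 0.9**j for j in range(n_periods)]\n", "\n", "plt.plot(slow, label='slow')\n", "plt.plot(geometric, label='geometric')\n", "plt.ylabel('temperature')\n", "plt.legend()"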
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Baseball salaries\n", "\n", "Let's revisit the baseball salaries example, this time using simulated anealling to search for an optimal model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tau_start = 10\n", "cooling_schedule = [tau_start]*60 + [tau_start/2]*120 + [tau_start/10]*240\n", "#cooling_schedule = [tau_start*0.99**i for i in range(400)]\n", "aic_values = []" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initialize the annealing run and temperature schedule:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solution_current = solution_best = np.random.binomial(1, 0.5, ncols).astype(bool)\n", "solution_vars = predictors[predictors.columns[solution_current]]\n", "g = LinearRegression().fit(X=solution_vars, y=logsalary)\n", "aic_best = aic(g, solution_vars, logsalary)\n", "aic_values.append(aic_best)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the simulated annealing run:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for tau in cooling_schedule:\n", " \n", " # Random change 1-neighborhood\n", " flip = np.random.randint(0, ncols)\n", " solution_current[flip] = not solution_current[flip]\n", " solution_vars = predictors[predictors.columns[solution_current]]\n", " g = LinearRegression().fit(X=solution_vars, y=logsalary)\n", " aic_step = aic(g, solution_vars, logsalary)\n", " alpha = min(1, np.exp((aic_values[-1] - aic_step)/tau))\n", "\n", " if ((aic_step < aic_values[-1]) or (np.random.uniform() < alpha)):\n", " # Accept proposed solution\n", " aic_values.append(aic_step)\n", " if aic_step < aic_best:\n", " # Replace previous best with this one\n", " aic_best = aic_step\n", " solution_best = solution_current.copy()\n", " else:\n", " # Revert solution\n", " solution_current[flip] = not solution_current[flip]\n", " aic_values.append(aic_values[-1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(aic_values)\n", "plt.xlim(0, len(aic_values))\n", "plt.xlabel('Iteration')\n", "plt.ylabel('AIC')\n", "print('Best AIC: {0}\\nBest solution: {1}\\nDiscovered at iteration {2}'.format(aic_best, \n", " np.where(solution_best==True),\n", " np.where(aic_values==aic_best)[0][0]))\n", "plt.plot(np.where(aic_values==aic_best)[0][0], aic_best, 'ro')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can compare this solution to one of the local search results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.where(runs[-1]==True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Create a function for simulated annealing that uses the above code, but allows for custom cooling schedules (and other parameters) to be passed. Try running the function with the continuous cooling schedule (e.g. cooling by 10% each iteration) described above and comapare the results to the one above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Genetic Algorithms\n", "\n", "From a metallurgic metaphor, we turn to an ecological one: The genetic algorithm (GA) metaheuristic mimics the process of natural selection. 
In doing so, we construct a **population** of solutions that are analogous to organisms in a natural system. The relative quality of a particular solution represents its fitness, which improves its ability to pass its desirable attributes on to future generations of solutions, generated by combining it with other viable solutions.\n", "\n", "Phenotypes in a GA are candidate solutions, whose components are encoded as a genotype on a chromosome. For convenience, organisms in a GA have a single chromosome. If we take our baseball salary problem as an example, the chromosome might simply consist of a list of indicators for the presence of each available covariate in a particular model:\n", "\n", "    [0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0,\n", "     1, 1, 1, 0, 1, 1, 0, 0]\n", "\n", "This particular chromosome encodes a model with `runs, triples, rbis, sos, freeagent, arbitration, obppererror, runspererror, hitspererror, soserrors, sbsobp` included as covariates. Thus, each locus on the chromosome corresponds to a linear model coefficient, and the two corresponding alleles are the presence and absence of that coefficient.\n", "\n", "*How would you encode the traveling salesperson problem?*\n", "\n", "The key aspect of the natural selection metaphor is the concept of ***fitness***. In nature, individuals with high fitness are those who are able to pass their genes on to the next generation more successfully than other individuals in the population; typically, this relates both to the ability to survive to reproductive age, and then to reproduce successfully. In a genetic algorithm, \"fit\" individuals are solutions of high value, as determined by our objective function of choice; for the baseball salary example we have been using, this is the AIC value of the linear model corresponding to the candidate solution (lower being better). We facilitate this in the algorithm by giving high-value solutions a higher probability of propagating into the next generation than those of low value.\n", "\n", "Let's consider another genotype,\n", "\n", "    [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0,\n", "     0, 1, 1, 1, 1, 0, 1, 0]\n", "\n", "Comparing this with the previous genotype, we say that the ***schema*** for this set of two genotypes is:\n", "\n", "    [0, 0, 1, 0, *, 1, 0, *, 0, 1, 0, 0, 1, 1, *, 0, *, *, 0,\n", "     *, 1, 1, *, 1, *, *, 0]\n", "\n", "where wildcards (`*`) represent loci that differ between the two. If this schema (or aspects of it) confers higher fitness, it will tend to be represented in future populations as the algorithm progresses.\n", "\n", "The key difference between GA and the previous metaheuristics is that here we track more than one candidate solution simultaneously, in the form of our \"population\" of organisms, each of which encodes a solution. This population changes over time as organisms reproduce, via ***crossover*** and ***mutation***." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reproduction and Genetic Change\n", "\n", "In a genetic algorithm, individual solutions \"reproduce\" by combining with other solutions in the population according to some rule that is fitness-based. For example, we may select pairs of parent solutions with probabilities proportional to their fitness, and combine them to generate child solutions, which replace their parents in the next generation.\n", "\n", "
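To make fitness-based selection concrete, the short sketch below computes rank-based selection probabilities for a small, hypothetical vector of AIC values; this is the same scheme implemented by the `calculate_fitness` helper used later in this section:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Hypothetical AIC values for a population of five candidate models (lower is better)\n", "aic_pop = np.array([-400., -380., -415., -390., -370.])\n", "\n", "# Rank-based fitness: the lowest-AIC model receives the highest rank, and the\n", "# resulting probabilities 2*rank/(P*(P+1)) sum to one\n", "P = len(aic_pop)\n", "ranks = (-aic_pop).argsort().argsort() + 1.\n", "probs = 2.*ranks/(P*(P + 1.))\n", "print(probs, probs.sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "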
There are a handful of ***genetic operators*** that are used to create children from parents. The first is crossover, which randomly selects a position on two parent chromosomes, after which all genes beyond that position are swapped between the two chromosomes. For example, let's arbitrarily specify a crossover after position 7 of a pair of 10-gene chromosomes (here indicated by a backtick):\n", "\n", "    0100101`001\n", "\n", "    0001001`111\n", "\n", "This would result in two new genotypes in the offspring of these parents:\n", "\n", "    0100101111\n", "\n", "    0001001001\n", "\n", "Notice that the schema from the two chromosomes (`0*0**01**1`) is preserved in the next generation.\n", "\n", "One way to escape the constraint of schema preservation is to apply a second genetic operator, mutation. Mutation is usually applied after crossover, and simply involves randomly changing an allele at a randomly-chosen locus, according to some specified mutation rate. Note that if mutation is too rare it reduces the rate at which the solution space is explored, and if too frequent it disrupts the inheritance of high-value schema." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pop_size = 20\n", "iterations = 100\n", "mutation_rate = .02" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aic_best = []\n", "best_solution = []\n", "aic_history = []" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we initialize the population of genotypes randomly; fitness will be calculated from the corresponding AIC values at each iteration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(1)\n", "# Initialize genotypes\n", "current_gen = np.random.binomial(1, 0.5, pop_size*ncols).reshape((pop_size, ncols))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def calculate_fitness(aic_values):\n", "    # Rank-based fitness: the lowest AIC receives the highest rank, and the\n", "    # resulting selection probabilities sum to one\n", "    P = len(aic_values)\n", "    aic_rank = (-aic_values).argsort().argsort()+1.\n", "    return 2.*aic_rank/(P*(P+1.))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(iterations):\n", "\n", "    # Get phenotypes\n", "    current_phe = [predictors[predictors.columns[g.astype(bool)]] for g in current_gen]\n", "    # Calculate AIC for each phenotype\n", "    current_aic = np.array([aic(LinearRegression().fit(X=x, y=logsalary), x, logsalary) for x in current_phe])\n", "    # Record the lowest AIC and the corresponding genotype\n", "    aic_best.append(current_aic[np.argmin(current_aic)])\n", "    best_solution.append(current_gen[np.argmin(current_aic)])\n", "\n", "    # Calculate fitness according to AIC rank\n", "    fitness = calculate_fitness(current_aic)\n", "\n", "    # Choose first parents according to fitness\n", "    moms = np.random.choice(range(pop_size), size=int(pop_size/2), p=fitness)\n", "    # Choose second parents randomly\n", "    dads = np.random.choice(range(pop_size), size=int(pop_size/2))\n", "\n", "    next_gen = []\n", "    for x,y in zip(current_gen[moms], current_gen[dads]):\n", "        # Crossover\n", "        cross = np.random.randint(0, ncols)\n", "        child1 = np.r_[x[:cross], y[cross:]]\n", "        child2 = np.r_[y[:cross], x[cross:]]\n", "        # Mutate each child at randomly-chosen loci\n", "        m1 = np.random.binomial(1, mutation_rate, size=ncols).astype(bool)\n", "        child1[m1] = abs(child1[m1]-1)\n", "        m2 = np.random.binomial(1, mutation_rate, size=ncols).astype(bool)\n", "        child2[m2] = abs(child2[m2]-1)\n", "        next_gen += [child1, child2]\n", "\n", "    # Increment generation\n", "    current_gen = np.array(next_gen)\n", "    # Store AIC values\n", "    aic_history.append(current_aic)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {},
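"outputs": [], "source": [ "# A quick check (not part of the original analysis): decode the genotype with the\n", "# lowest AIC found across all generations back into a set of predictor names.\n", "best = best_solution[np.argmin(aic_best)]\n", "print(min(aic_best))\n", "predictors.columns[best.astype(bool)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {},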
"outputs": [], "source": [ "plt.plot(aic_best)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i,x in enumerate(aic_history):\n", " plt.plot(np.ones(len(x))*i, -x, 'r.', alpha=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise: Wine classification\n", "\n", "Thirteen chemical measurements were carried out on each of 178 wines from three regions of Italy. These data are available in the `wine.dat` file in your `data` directory. Using one or more heuristic search methods from this lecture, partition the wines into three groups for which the total of the within-group sum of squares is minimal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wine = pd.read_table('https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/wine.dat', sep='\\s+')\n", "grape = wine.pop('region')\n", "nrows, ncols = wine.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## References\n", "\n", "Chapter 3 of [Givens, Geof H.; Hoeting, Jennifer A. (2012-10-09). Computational Statistics (Wiley Series in Computational Statistics)](http://www.stat.colostate.edu/computationalstatistics/)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 }, "nav_menu": {}, "toc": { "navigate_menu": true, "number_sections": false, "sideBar": false, "threshold": 6, "toc_cell": true, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }