\n",
"\n",
"# The Unfinished Game ... of Risk\n",
"\n",
"[Keith Devlin](https://web.stanford.edu/~kdevlin/)'s [book](https://www.amazon.com/Unfinished-Game-Pascal-Fermat-Seventeenth-Century/dp/0465018963) [*The Unfinished Game*](https://wordplay.blogs.nytimes.com/2015/12/14/devlin-unfinished-game/) describes how Fermat and Pascal discovered the rules of probability that govern games of chance. The question they confronted was: if a gambling game is interrupted while one player is in the lead, how much of the pot should the leader get?\n",
"\n",
"My friends and I faced a similar question when a game of [*Risk*](https://www.ultraboardgames.com/risk/game-rules.php) ran on too long (as they often do). Player **A** was poised to make a sweeping attack on player **D**, whose territories were arranged in such a way that **A** could attack from one territory to the next without ever having to branch off. We wrote down the number of **A**'s massed armies, **72**, and the number of armies in **D**'s successive territories: **22, 8, 2, 2, 2, 7, 1, 1, 3, 1, 2, 3, 5, 1.** What is the probability that **A** can capture all these territories?\n",
"\n",
"![](https://www.ultraboardgames.com/risk/gfx/board.jpg)\n",
"\n",
"______\n",
"\n",
"# Terminology\n",
"\n",
"Let's explain some *Risk* [rules](https://www.ultraboardgames.com/risk/game-rules.php) and terminology:\n",
"\n",
"- A **battle** is when armies from one territory attack an enemy neighboring territory. A roll of dice determines which armies perish:\n",
" - The attacker will roll 3 dice if possible (but no more than the number of armies in the attacking territory minus one).\n",
" - The defender will roll 2 dice if possible (or only one die if they have only one army defending). \n",
" - The **outcome** of the battle is determined by comparing the highest die from each of the two players, with the defender losing an army if the attacker's die is higher, and the attacker losing an army if tied or lower. Then if both sides rolled at least two dice, we do the same comparison with the second highest die on each side. \n",
" - When a battle kills off the last defender in a territory, the attackers **occupy** the territory. They must leave behind one army, but can move the rest in. \n",
"\n",
"- A **campaign** consists of a sequence of battles and occupations. In this notebook we will consider only a **chain campaign**, in which attackers invade successive enemy territories in order, always moving all their remaining armies into each captured territory, never branching, and never changing strategy based on the outcome of a battle. The attackers **win** the campaign if they occupy all the territories. The attackers **lose** if there are any remaining defenders and only one remaining attacker, who by rule cannot attack. \n",
"\n",
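"For example, suppose the attacker rolls **6, 4, 2** and the defender rolls **6, 3**. We compare 6 to 6: a tie, so the attacker loses an army. Then we compare 4 to 3: the attacker's die is higher, so the defender loses an army. A minimal sketch of this comparison rule (the notebook builds the real implementation below):\n",
"\n",
"```python\n",
"def outcome(attack, defend):\n",
"    # Deaths (attacker, defender) from one battle's dice, per the rule above.\n",
"    a_dead = d_dead = 0\n",
"    for a, d in zip(sorted(attack, reverse=True), sorted(defend, reverse=True)):\n",
"        if a > d:\n",
"            d_dead += 1  # attacker's die is higher: defender loses an army\n",
"        else:\n",
"            a_dead += 1  # ties favor the defender\n",
"    return a_dead, d_dead\n",
"\n",
"outcome([6, 4, 2], [6, 3])  # (1, 1)\n",
"```\n",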
"\n",
"\n",
"With that out of the way, we're ready for some Python implementation."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Iterable, Tuple\n",
"from collections import Counter\n",
"from functools import lru_cache\n",
"from dataclasses import dataclass\n",
"import random\n",
"import itertools\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Representing the State of a Campaign\n",
"\n",
"I will represent the **state of a campaign** with the class `State`, and the state of the unfinished game as `start`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"@dataclass(frozen=True)\n",
"class State:\n",
" A: int # Number of attackers\n",
" D: int # Number of defenders in first territory\n",
"    rest: Tuple[int, ...] = () # Tuple of numbers of defenders in subsequent territories \n",
" \n",
"start = State(A=72, D=22, rest=(8, 2, 2, 2, 7, 1, 1, 3, 1, 2, 3, 5, 1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function `update` updates a state to reflect the `Deaths` that happened in a battle. The game is over (`game_over`) when no defenders remain, or when only a single attacker (who can't leave their territory) remains."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"@dataclass(frozen=True)\n",
"class Deaths:\n",
" \"The number of attackers and defenders who die in a battle.\"\n",
" A: int\n",
" D: int\n",
" \n",
"def update(state: State, dead: Deaths) -> State:\n",
"    \"\"\"Update the `state` of a campaign to reflect the `dead` in a battle.\"\"\"\n",
" a = state.A - dead.A # Attackers remaining\n",
" d = state.D - dead.D # First territory defenders remaining\n",
" r = state.rest # Other territories defenders remaining\n",
" return (State(a, d, r) if d # Defenders still in first territory\n",
" else State(a - 1, r[0], r[1:]) if r # First territory captured\n",
" else State(a - 1, 0)) # All territories captured\n",
"\n",
"def game_over(state) -> bool: \n",
" \"\"\"Is the game over?\"\"\"\n",
" return state.D == 0 or state.A <= 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Rolling the Dice and the Outcome of a Single Battle\n",
"\n",
"We'll represent a roll of the dice with a list of integers; for example, the attacker might roll `[2, 6, 1]` with three dice. The function `random_roll(n)` gives a random outcome for rolling `n` dice. The function `battle_deaths`, when given the two lists of dice rolled by attacker and defender, returns the number of attackers and defenders who perish in the battle.\n",
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"die = (1, 2, 3, 4, 5, 6)\n",
"Dice = List[int] # a list of die rolls, like [2, 6, 1]\n",
"\n",
"def random_roll(n) -> Dice:\n",
" \"\"\"Roll n dice randomly.\"\"\"\n",
" return [random.choice(die) for _ in range(n)]\n",
"\n",
"def battle_deaths(A_dice: Dice, D_dice: Dice) -> Deaths:\n",
" \"\"\"How many (attacker, defender) armies perish as the result of these dice?\"\"\"\n",
" dead = Counter('D' if a > d else 'A'\n",
" for a, d in zip(sorted(A_dice, reverse=True), \n",
" sorted(D_dice, reverse=True)))\n",
" return Deaths(dead['A'], dead['D'])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"([6, 1, 6], [6, 4], Deaths(A=1, D=1))"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Example battle\n",
"A_dice = random_roll(3)\n",
"D_dice = random_roll(2)\n",
"A_dice, D_dice, battle_deaths(A_dice, D_dice)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Monte Carlo Simulation of a Campaign\n",
"\n",
"A **simulation** makes random choices at every choice point (here, every dice roll), and reports on the outcome. The function `simulate_campaign` rolls random dice for a battle until the game is over:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def simulate_campaign(state) -> State:\n",
" \"\"\"Simulate a campaign with random rolls, returning the final state.\"\"\"\n",
" while not game_over(state):\n",
" dead = battle_deaths(random_roll(min(3, state.A - 1)), \n",
" random_roll(min(2, state.D)))\n",
" state = update(state, dead)\n",
" return state"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see who wins:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"State(A=5, D=0, rest=())"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"simulate_campaign(start)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That final state says that the attackers won—there are 5 attackers and no defenders left. \n",
"\n",
"But what if we want to know the **probability** that the attackers win? [Monte Carlo simulation](https://en.wikipedia.org/wiki/Monte_Carlo_method) answers that question by repeating a simulation many times and summarizing all the final outcomes. The summary is in the form of a **probability distribution**, a mapping of `{outcome_state: probability}` pairs, which I have implemented as `ProbDist`. \n",
"\n",
"*Note*: I made `ProbDist` a subclass of `Counter`, with the restriction that the values are normalized to sum to 1. I realize that the name `Counter` suggests integer counts, but that's not a requirement, and `Counter` has a nicer API than `dict`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"class ProbDist(Counter): \n",
" \"A Probability Distribution.\"\n",
" def __init__(self, *args):\n",
" \"Normalize total to 1.\"\n",
" Counter.__init__(self, *args)\n",
" total = sum(self.values())\n",
" for x in self:\n",
" self[x] /= total "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ProbDist({4: 0.176, 3: 0.168, 2: 0.163, 1: 0.175, 5: 0.16, 6: 0.158})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ProbDist(random_roll(1000)) # Probability distribution over 1,000 die rolls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The higher-order function `monte_carlo` works with any random simulation function, calling it `k` times, each time passing in a `start` state, and collecting the `k` outcome states in a `ProbDist`:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"def monte_carlo(simulation, start, k=1000) -> ProbDist:\n",
" \"Call `simulation(start)` repeatedly (`k` times) and return a ProbDist of outcomes.\"\n",
" return ProbDist(simulation(start) for _ in range(k))"
]
},
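{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, `monte_carlo` can just as well estimate the distribution of a fair coin (a self-contained sketch that restates `ProbDist` and `monte_carlo`; the `flip` function is purely illustrative):\n",
"\n",
"```python\n",
"import random\n",
"from collections import Counter\n",
"\n",
"class ProbDist(Counter):\n",
"    # A probability distribution: values normalized to sum to 1.\n",
"    def __init__(self, *args):\n",
"        Counter.__init__(self, *args)\n",
"        total = sum(self.values())\n",
"        for x in self:\n",
"            self[x] /= total\n",
"\n",
"def monte_carlo(simulation, start, k=1000):\n",
"    # Call simulation(start) k times; summarize the outcomes as a ProbDist.\n",
"    return ProbDist(simulation(start) for _ in range(k))\n",
"\n",
"def flip(start):\n",
"    # A trivial simulation: one fair coin flip (ignores start).\n",
"    return random.choice(('heads', 'tails'))\n",
"\n",
"monte_carlo(flip, None, 10000)  # both outcomes near probability 0.5\n",
"```"
]
},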
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we simulate the campaign 10 times and see the summary of outcomes:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ProbDist({State(A=16, D=0, rest=()): 0.1,\n",
" State(A=1, D=5, rest=(1,)): 0.1,\n",
" State(A=11, D=0, rest=()): 0.2,\n",
" State(A=1, D=2, rest=(5, 1)): 0.1,\n",
" State(A=17, D=0, rest=()): 0.1,\n",
" State(A=6, D=0, rest=()): 0.2,\n",
" State(A=1, D=0, rest=()): 0.1,\n",
" State(A=28, D=0, rest=()): 0.1})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"monte_carlo(simulate_campaign, start, 10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here we run 1,000 simulations and report how often the attackers win:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.8190000000000004"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def attacker_win_probability(dist: ProbDist) -> float: \n",
" \"\"\"The probability that the attackers win the campaign.\"\"\"\n",
" return sum(dist[s] for s in dist if not s.D)\n",
"\n",
"attacker_win_probability(monte_carlo(simulate_campaign, start, 1000))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Monte Carlo simulation says the attackers win about 82% of the time. How accurate is that result? The standard deviation of the estimated probability of a [binomial variable](https://www.mathsisfun.com/data/binomial-distribution.html) is $\\sqrt{p(1-p)/n}$, where $p$ is the true probability of one of the two outcomes and $n$ is the number of samples. In our case $\\sqrt{0.8(1-0.8)/1000}$ gives a standard deviation of about 1%. So we can be pretty confident that the true percentage is within 3 standard deviations of the estimate; that is, between 79% and 85%.\n",
"\n",
"We could get better accuracy at the cost of increased computing time. To get the standard deviation down by a factor of 10 from 1% to 0.1% would require 100 times more computation (because of the square root in the formula). \n",
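"\n",
"As a quick check of that arithmetic (a sketch using the estimate from the run above):\n",
"\n",
"```python\n",
"from math import sqrt\n",
"\n",
"p, n = 0.82, 1000              # estimated win probability; number of simulations\n",
"sigma = sqrt(p * (1 - p) / n)  # standard error of a binomial proportion\n",
"sigma                          # about 0.012, i.e. roughly 1%\n",
"```\n",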
"\n",
"# Monte Carlo versus Exact Probability Calculation\n",
"\n",
"An alternative to the Monte Carlo approach is to explicitly calculate the exact probability distribution of the final state of the campaign. The differences between the two approaches are:\n",
"\n",
"- The **Monte Carlo approach** deals with a **single current state**, using random dice rolls to decide how that state changes. At the end of a simulated campaign we get a single final state, and we repeat the simulation to get an **estimated** probability distribution over final states.\n",
"\n",
"- The **exact probability calculation approach** deals with a **probability distribution over states**, right from the start. At each dice roll, every possible outcome is considered, and the probability distribution is updated to reflect all the possible outcomes and their probabilities. At the end of the campaign we have an **exact** probability distribution over all possible final states (well, exact at least to the limits of floating point precision; if you need more precision, use `fractions.Fraction`).\n",
"\n",
"In deciding whether to use the exact calculation approach, there are three considerations:\n",
"\n",
"- **Is it worth the effort?** Code for an exact calculation is more complex, and will take more time to write and debug.\n",
"- **Is it possible?** The exact calculation approach only works for **finite games**. \n",
"- **Is it feasible?** Computation may take too long if there are too many possible states of the game.\n",
"\n",
"Let's examine the three considerations:\n",
"\n",
"**Worth the effort?** If I only wanted to know whether the attackers' chance of winning is above or below 50%, the Monte Carlo approach would be the quickest way to answer the question. The code is simple and straightforward. Monte Carlo code deals with the single current state of the simulation, for example: \n",
"\n",
" while not game_over(state):\n",
" \n",
"But in the exact calculation we need to consider every possible state (iterating over them in a loop):\n",
"\n",
" while any(not game_over(state) for state in probdist):\n",
"\n",
"Putting all these loops into the code makes it more complex. (Less added complexity in Python, which has comprehensions, than in other languages where loops must be statements, not expressions.) After the shared basics of representing states and battle outcomes, we need just 10 non-comment lines of code to implement the Monte Carlo method (in `simulate_campaign`, `monte_carlo`, and `random_roll` above), but twice as much code (22 lines) to implement exact calculation (in `battle_deaths_probdist`, `all_rolls`, `campaign_probdist`, and `battle_update_probdist` below).\n",
"\n",
"**Possible?** Imagine a *Risk* rule change where if the attackers roll three 6s and the defenders roll two 6s, then both sides *gain* an army. For the Monte Carlo simulation it would be trivial to add a single `if` statement to handle this rule, and the expected run time of the program would hardly change. But with exact calculation everything has changed, because we now have an **infinite** game: no matter how many moves ahead we look, there will always be some possible states that have not terminated. (To deal with these we would have to make some compromises, such as stopping the calculation when there are still some nonterminal states, as long as they have sufficiently low total probability. Sometimes a mathematical formula can determine the value of an infinite game.)\n",
"\n",
"**Feasible?** Imagine a rule change where before each battle, every army independently has the option to take one of ten possible actions (e.g. to move to a safe neighboring territory). Then with just 80 armies, the very first move has $10^{80}$ possible outcomes, meaning that the probability distribution requires as many states as there are [atoms in the universe](http://norvig.com/atoms.html). (To deal with this we would either stick with the Monte Carlo simulation or use a variant of it, such as [particle filtering](https://en.wikipedia.org/wiki/Particle_filter), in which we maintain a **sample** of several possible states (more than a single state, but fewer than the complete state distribution), or we would find a way to abstract over the possible moves.)\n",
"\n",
"The *Risk* campaign problem as it stands leads to a very efficient exact probability calculation (possible, feasible, and, IMHO, worth it). If there are $n$ total armies to start, then there are fewer than $n^2$ possible states, and the game can last no more than $n$ moves. With $n$ in the range of a few hundred, computation takes less than a second; much faster and more accurate than doing 100,000 simulations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exact Probability Distribution over a Battle\n",
"\n",
"The function `battle_deaths` was defined above to return the specific death counts for a specific dice roll. \n",
"\n",
"Now we'll define `battle_deaths_probdist` to give a probability distribution over all possible battle outcomes, corresponding to all possible dice rolls. The input to `battle_deaths_probdist` is the number of attacking and defending armies for just this one battle (*not* the total number of attacking and defending armies in the current state). Thus, the attackers will always be 3, 2, or 1, and the defenders will always be 2 or 1. I define it this way so that I can **cache** the results and reuse them in subsequent battles, for efficiency."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"@lru_cache()\n",
"def battle_deaths_probdist(battlers: State) -> ProbDist:\n",
" \"\"\"A probability distribution of deaths in a single battle.\n",
" Requires 1 <= battlers.A <= 3 and 1 <= battlers.D <= 2.\"\"\"\n",
" return ProbDist(battle_deaths(A_dice, D_dice)\n",
" for A_dice in all_rolls(battlers.A)\n",
" for D_dice in all_rolls(battlers.D))\n",
"\n",
"def all_rolls(n) -> Iterable[Dice]: \n",
" \"\"\"All possible rolls of `n` dice.\"\"\"\n",
" return tuple(itertools.product(die, repeat=n))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ProbDist({Deaths(A=2, D=0): 0.2925668724279835,\n",
" Deaths(A=1, D=1): 0.3357767489711934,\n",
" Deaths(A=0, D=2): 0.37165637860082307})"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"battle_deaths_probdist(State(3, 2)) # Three attacker dice against two defender dice"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exact Probability Distribution over a Campaign\n",
"\n",
"Our earlier function `simulate_campaign` is mimicked by `campaign_probdist`, except that the former updates a `State` whereas the latter updates a `ProbDist`. We start with certainty: there is a 100% chance that we are initially in the `start` state. But the [fog of war](https://en.wikipedia.org/wiki/Fog_of_war) means that uncertainty soon arises: we don't know how the dice will land, so we don't know the outcome of the very first battle. Subsequent battles add more uncertainty. \n",
"\n",
"`campaign_probdist` calls `battle_update_probdist` to update the probability distribution to account for one battle, in all the possible ways it can turn out. The key line in `battle_update_probdist` is\n",
"\n",
" outcomes[update(state, dead)] += probdist[state] * dead_probdist[dead]\n",
" \n",
"which tells us: for each possible count `dead` of attackers and defenders who die in a battle, update `state` with those death counts, and increment the probability of the resulting outcome by the product of the probability of `state` and the probability of `dead` given `state`."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"def campaign_probdist(start: State) -> ProbDist:\n",
" \"\"\"Probability distribution for all outcomes of a campaign.\"\"\"\n",
" probdist = ProbDist({start: 1.0})\n",
" while any(not game_over(state) for state in probdist):\n",
" probdist = battle_update_probdist(probdist)\n",
" return probdist\n",
"\n",
"def battle_update_probdist(probdist) -> ProbDist:\n",
" \"\"\"For every possible campaign state in the `probdist`, consider the outcomes of a battle. \n",
" Combine these all into one updated `outcomes` ProbDist.\"\"\"\n",
" outcomes = ProbDist()\n",
" for state in probdist:\n",
" if game_over(state): # `state` carries through unchanged to `outcomes`\n",
" outcomes[state] += probdist[state] \n",
" else: # Replace `state` with all possible outcomes from a battle\n",
"            dead_probdist = battle_deaths_probdist(State(min(3, state.A - 1), min(2, state.D)))\n",
" for dead in dead_probdist:\n",
" outcomes[update(state, dead)] += probdist[state] * dead_probdist[dead]\n",
" return outcomes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Finishing the Unfinished Game (Exactly)\n",
"\n",
"What is the exact probability of winning the unfinished game?"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.8105485936352178"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"attacker_win_probability(campaign_probdist(start))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The attackers defeat all the defenders 81.05% of the time. \n",
"\n",
"What are the 20 most common outcomes? "
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[(State(A=12, D=0, rest=()), 0.03824220182706657),\n",
" (State(A=11, D=0, rest=()), 0.0380992150215239),\n",
" (State(A=13, D=0, rest=()), 0.038032667992797725),\n",
" (State(A=10, D=0, rest=()), 0.037618411457469664),\n",
" (State(A=14, D=0, rest=()), 0.03746512036620402),\n",
" (State(A=9, D=0, rest=()), 0.036822601595588304),\n",
" (State(A=15, D=0, rest=()), 0.036544033638847624),\n",
" (State(A=8, D=0, rest=()), 0.035741340182918115),\n",
" (State(A=16, D=0, rest=()), 0.035284184613703425),\n",
" (State(A=7, D=0, rest=()), 0.03440940023374102),\n",
" (State(A=17, D=0, rest=()), 0.03371050842173721),\n",
" (State(A=6, D=0, rest=()), 0.03286517629527088),\n",
" (State(A=18, D=0, rest=()), 0.031857448871758426),\n",
" (State(A=5, D=0, rest=()), 0.031149102741286083),\n",
" (State(A=19, D=0, rest=()), 0.029767807751056283),\n",
" (State(A=4, D=0, rest=()), 0.029302161098148583),\n",
" (State(A=20, D=0, rest=()), 0.027491133463900305),\n",
" (State(A=3, D=0, rest=()), 0.0273645356924103),\n",
" (State(A=1, D=5, rest=(1,)), 0.026621109937678585),\n",
" (State(A=1, D=1, rest=()), 0.025661022694069335)]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"campaign_probdist(start).most_common(20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The most probable outcome is that the attackers win with 12 remaining armies. The top 10 outcomes have anywhere from 7 to 16 attackers remaining. You have to go down to the 19th most common outcome to find one where the defenders win."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Analyzing and Visualizing Campaigns\n",
"\n",
"Let's try to visualize all the possible outcomes. I'll define the **score** of a campaign as the number of attacker armies in the final territory minus the total number of defenders, minus the number of territories that the defenders hold. This score will be positive when the attackers win and negative when they lose. A score cannot be zero. The function `show` plots score probabilities, with the attackers' wins in green and the defenders' wins in blue:\n",
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def show(probdist, epsilon=0.0002):\n",
" \"\"\"Plot and annotate a probability distribution over states.\"\"\"\n",
" states = [s for s in probdist if probdist[s] > epsilon] # Ignore low-probability states\n",
" X = [score(s) for s in states]\n",
" Y = [probdist[s] for s in states]\n",
" μ = sum(score(s) * probdist[s] for s in probdist)\n",
" p = attacker_win_probability(probdist)\n",
" plt.figure(figsize=(10, 5))\n",
" plt.title(f'Attacker wins {p:.2%}. Average score: {μ:.2f}.')\n",
" plt.xlabel('Score (positive when attacker wins)')\n",
" plt.ylabel('Probability of score')\n",
" plt.bar(X, Y, width=0.7, color=['g' if x > 0 else 'b' for x in X])\n",
" \n",
"def score(state): \n",
" return (state.A if not state.D else \n",
" state.A - (state.D + sum(state.rest)) - (len(state.rest) + 1))"
]
},
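{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a self-contained sketch restating `State` and `score`): an attacker win with 12 armies scores +12, while a defender win with 6 defenders spread over 2 territories scores 1 - 6 - 2 = -7. A win always leaves at least one attacker (positive score), and a loss always leaves at least one defender and one territory (negative score), so the score is never zero:\n",
"\n",
"```python\n",
"from dataclasses import dataclass\n",
"from typing import Tuple\n",
"\n",
"@dataclass(frozen=True)\n",
"class State:\n",
"    A: int\n",
"    D: int\n",
"    rest: Tuple[int, ...] = ()\n",
"\n",
"def score(state):\n",
"    return (state.A if not state.D else\n",
"            state.A - (state.D + sum(state.rest)) - (len(state.rest) + 1))\n",
"\n",
"score(State(A=12, D=0))            # attacker win: +12\n",
"score(State(A=1, D=5, rest=(1,)))  # defender win: 1 - 6 - 2 = -7\n",
"```"
]
},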
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAm0AAAFNCAYAAABST1gVAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3deZglVX3/8feHGTZZdRyNAsOg4AJEiY5oEvcV10EFHUIUDIaYhGhWhSSKKOanSdTEiAsKAVEDiNuoKKKIuBBkCCCLYEbEMEAEZBFQ1MHv7486DZc7vdwhc7v7dr9fz1NPV506de6purf7fvucOnVSVUiSJGl222imKyBJkqSpGbRJkiSNAIM2SZKkEWDQJkmSNAIM2iRJkkaAQZskSdIIMGiTRkiSpUkqycIZrsdtSR4yk3WQpPnGoE26l5KcmeSmJJv2pR+X5Mi+tCuTPHN6azg8VbVlVV2xIctMcr8kJyW5oS0fS7J1z/63Jrkoydokb56irCR5R5KftOUfk6RnfyW5vQWftyX5cM++30tybZIfJnlqT/pDk3w7yYJ7cW5vbq+55/oeq3snyQuTXNze328n2XWSvJf0fBZua5+xz7V9D0vy2STXJ7kxyWlJHj59ZyLdzaBNuheSLAWeBBTwohmtzL3QgprZ9vt/JHBf4CHAQ4EHAm/u2b8aeD3whQHKOhjYG3g08CjgBcAf9eV5dAs+t6yqVwO0Fsy3A48B/gx4b0/+9wB/WVV3rs9JtWDxFcCNwAHrc+x6vMaMtrxuaP/X80myC/Ax4DXAtsDngJUTlVtVu419FoCtgP8BPtF2bwusBB5O95n8DvDZ/0v9pHtrtv3RlkbFK4H/BI6j54s4ycHA/sDr23/sn0tyArAE+FxLe33L+4kk/5vkliRnJdmtp5zNk7wzyY/a/m8m2by/Ekle2lrxdm/bT2itCjcnubCvpejMJG9L8i3gZ3TBUW9ZrxprXWjbq5Oc3LN9VZI92nol2bmtH5fkqCRfSHJrknOSPLTtS5J3J7muncd3x+o6jp2Az1TVT6vqFuDTwF3XpKqOr6ovArdO9Kb0OAB4Z1WtqaqrgXcCBw5w3CLg6qq6FvgK7Rol2ael/+cAZfR7EvBg4HXAiiSbtDI3be/TXdcjyeIkP0/ygLb9giQXtHzfTvKonrxXJnlDku8CtydZmOTQJD9o78OlSV7ck39B+0zd0FoRD0lPV3uSbZIc01oZr05y5EStikn2TLIqyU+T/DjJu3r2PbHnM3hVkgN7yv9Ia7H6UZK/H/vHIcmBSb7VPis30oL1JH+Q5HvpWrRPS7LjgNf8OcA3quqbVbUWeAewHfCUAY59MvAA4JMAVfWdqjqmqm6sql8B7wYenmTRgHWRNpyqcnFxWc+FrtXnT4DHAr8CHtiz7zjgyL78VwLP7Ev7A7r/6jcF/gW4oGffUcCZdF80C4DfafmW0rXuLQRe1eqxcztmO+AnwPPo/iF7Vtte3PafSdeCsFs7fuO++jwEuLkd+yDgR3SByti+m4CN2nb1vO5xdK1Ie7ZyPwac2PY9BziPrrUiwCOBB01wTV8AnErX2nZf4Azgz8fJ91HgzVO8P7cAj+/ZXgbc2rNdwDXA/wKfApa29I2A7wPbAy8EzgW2BC4AFt3Lz8oxwMnAxu39eEnPvmOBt/Vs/ynwpbb+GOA64PHtM3BA+xxt2vOZugDYAdi8pe1LFyBuBLwcuH3setO1Ol3azu2+dEFpAQvb/s8AHwS2oAtavgP80QTndDbwira+JfCEtr6ELqjer53vImCPtu8jdC1UW9F9jr8PHNT2HQispWvdXAhsTtdSurp9ZhYCfw98u6cOnwcOnaB+fwac2rO9ALgDeN0A79exwHGT7N8buHam/wa5zM9lxivg4jJqC/BEukDt/m37MuAvevYfxwBBW9/+bdsX6DbtC/fndN13/fmWtnx/PfYF3LPvDcAJff
lPAw5o62cCb5ni3K5qwcIK4Oj2xf0IugBxZU++/qDtwz37ngdc1taf3r6cn0AL+CZ57Qe3QOLXbTkd2GScfIMEbXcCj+jZ3qXVOW37ycAm7bq/F7iYu4OXZ9C1on4d2AN4F3AQ8FTga+2a7j7gZ+U+wE+Bvdv2B4HP9ux/JnBFz/a3gFe29fcDb+0r73LgKT2fqT+Y4vUvAJa39TPoCcLaa4/9A/BA4Be04K/t3w/42gTlngUcQfsd6Ek/DPj0OPkXtPJ37Un7I+DMtn4g8D99x3yRFtS17Y3oWoh3HOC6P4IuYH1qe5/f2D5Thw34fj11gv3bA1cD+w3y/ru4bOjF7lFp/R0AfLmqbmjbH2c971VqXVVvb11ZP6X7Aga4f1s2A34wSRF/AxxVVWt60nYE9m3dUjcnuZkuwHxQT56rpqja1+m+6J7c1s+k61J6StueyP/2rP+MrvWFqjqDLig6CvhxkqPTM7igzyfoArytgK3pzv+jU9R3Ire1MsZsDdxWVdXqdVZV/bKqbqbrttyJrkWHqvpqVT2hqp5C90W/jC4wPYEuuHgr8GEG82K6FqRT2/bHgOcmWdy2zwA2T/L41vW3B123MHTv51/1vZ870AW3Y+7xfiZ5ZU936s3A7nSfJ9pxV01w7I50LWPX9hz7QboWt/EcBDwMuCzJuUle0NJ3YPzP7f3pgqcf9aT9iK51eNxzaXX615763EjXWrsdU6iqy+h+J98LXNte/1JgzWTHAS9pr7POZ729Z18G3ldV/zFVHaRhmFM3r0rD1u4rexmwIMlYoLIpsG2SR1fVhXStF/36034PWE7X2nElXQvbTXRfSjfQdeU8FLhwgqo8G/hSkv+tqk+2tKvoWtr+cJJTGK9uvb5O1y24E/APdN2l+wO/zT1vyh9YVb0HeE+7T+tkuoDzjeNkfTTwJ1V1O0CSDwDfvDevCVzSyvtOT9mXTFZNumt/lyShO+fX0n3pL6iqH7X3/VHrFjGuA+gC2P/piiN0wdF+wHuq6tftvsH9gB8Dn6+qsXv2rqLrOn3bFPUeq++OwIfoWgrPrqo7k1zQc17X0rUUjdmhZ/0qupaw+1d3D9ikquq/gf3aPWkvAU5p93hdRddN3u8GutbpHemCJ+i6Uq8e71x66vS2qvrYVPWZoI6nAKcAJNmW7naEc6c47ADgI2PB/Zgk96UL2FZO8X5IQ2VLm7R+9qbretuVrlVkD7oWmm/QDU6A7su3/xlm/Wlb0X1J/oSuS+YfxnZU1a/p7qt5V5IHt1a53849Hy1yCbAXcFSSsdGrHwVemOQ57ZjNkjw1Se8X9VS+DjyNrptsTTuvvejuTTp/PcoBIMnjWivSxnTdVXfQXb/xnAu8Ot0gjM3pRoDeFbQm2TjJZnR/txa285vo8RsfAf4yyXZJHgz8FV1rGUl2S7JHu0Zb0g1SuBr4Xl8ZrwbOr6oL6N6nzdM9NuJpwJSPO0myHV0A9QLu/qw8mu6m+N6W2Y/T3X+2f1sf8yHgNe36JckWSZ6fZKsJXnILusDn+vb6r6JraRtzMvC6dk22petOB6C6gRdfBt6ZZOskG6V7xMm4N+4n+f0ki9tn9eaWfCddS+Izk7ws3cCIRUn2qG7E7cnA25Js1QLMv2TyltQPAIelDdBpAxn2nSR/fx0f297jxXSthp9rLXAT5d+e7r09vi99a7ou8W9V1aGDvr40FDPdP+viMkoL8CW6UYn96S+j6yJcSHf/1AV0X2afafuX0w0CuJnufrQt6W7KvpWum+iV3PM+sc3pBidcTXdT/VktbSn3vHl8GV1A+Ny2/Xi6wOtGui/vLwBL2r4zgVcPcI7XAv/es70K+GJfnv572o7s2fdUYE1bfwbwXbruyhvovtS3nOB1d6J7NMNPWv2/BOzSs/+49rq9y4Ft35Pouj/H8gb4x1bOjW197H62p9PdG3Y73Y3+n+l9nZbn/nT3uW3dk7Z/e4+vBJ7W0pa0c1syzvkcCpw3TvqD6V
qddu9JW93quUlf3r3ogtmb2/vyCWCrtu9K1h3c8rZWzg109+J9few9p/tsvrtd3x8Cf9HqMXZdtqG7j24N3WfufGDFBO/VR9u1u43uH4i9e/Y9CTiH7t6wq7j7nsr7tuOub+lv4u6BLQcC3xzndV4BXNRT1rE9+74I/O0kn+Nv0v1+3UgbYNH3Xl7Sl/8wuhGn/eUc0D5rt7fzHVuWTFSWi8uwlrFfVknSPJLkucAHqmrHma6LpMHYPSpJ80Drdn5e67bcDjicuwc9SBoBtrRJ0jyQ5D503aWPoHukzBfonlv20xmtmKSBDbWlLcleSS5P92T1dW7gTPdE8JPa/nPSTQ3Uu39JuifI//WgZUqS1lVVP6uqx1XVVlX1gKp6lQGbNFqGFrS1UV1HAc+lG2m3X9adsPcg4Kaq2pnuBtl39O1/N93NputTpiRJ0pwzzJa2PYHVVXVFVf0SOJFuBF2v5dw9vPoU4BkZe5hRsjfdsPreZysNUqYkSdKcM8yH627HPZ9wvYbucQTj5qmqtUluARYl+TndM4SeRfd4hPUpcx33v//9a+nSpetbf0mSpGl33nnn3VBVi/vThxm0ZZy0/lEPE+U5Anh3Vd3WGt7Wp8wuY3Iw3cM5WbJkCatWrZqywpIkSTMtyY/GSx9m0LaGe06Tsj1wzQR51iRZSPdwxxvpWs/2SfKPdBM6/zrJHcB5A5QJQFUdTTfhNcuWLXOIrCRJGmnDDNrOBXZJshPdU91X0M232Gsl3dOmzwb2Ac6o7hkkTxrLkOTNdE86f28L7KYqU5Ikac4ZWtDW7lE7hG7OtgV0049ckuQtwKqqWgkcA5yQZGwKlxX3psxhnYMkSdJsMS8errts2bLynjZJkjQKkpxXVcv6053GSpIkaQQYtEmSJI0AgzZJkqQRYNAmSZI0AgzaJEmSRoBBmyRJ0ggwaJMkSRoBw5wRQZI04nLEulM+1+Fz//me0mxk0CZJ89y9DcwM6KTpZfeoJEnSCDBokyRJGgF2j0rSPGBXpjT6DNokSUPRHygaJEr/N3aPSpIkjQCDNkmSpBFg0CZJkjQCDNokSZJGgAMRJGmOcISoNLcZtEmSpp0BprT+7B6VJEkaAQZtkiRJI8CgTZIkaQQYtEmSJI0AgzZJkqQR4OhRSRohjrqU5q+htrQl2SvJ5UlWJzl0nP2bJjmp7T8nydKWvmeSC9pyYZIX9xxzZZKL2r5Vw6y/JEnSbDG0lrYkC4CjgGcBa4Bzk6ysqkt7sh0E3FRVOydZAbwDeDlwMbCsqtYmeRBwYZLPVdXadtzTquqGYdVdkiRpthlm9+iewOqqugIgyYnAcqA3aFsOvLmtnwK8N0mq6mc9eTYDbPuXpHnCLmBpfMPsHt0OuKpne01LGzdPa0W7BVgEkOTxSS4BLgJe09PKVsCXk5yX5OAh1l+SJGnWGGZL27r/Kq3bYjZhnqo6B9gtySOB45N8saruAH63qq5J8gDg9CSXVdVZ67x4F9AdDLBkyZL/y3lIkiTNuGG2tK0BdujZ3h64ZqI8SRYC2wA39maoqu8BtwO7t+1r2s/rgE/TdcOuo6qOrqplVbVs8eLF/+eTkSRJmknDDNrOBXZJslOSTYAVwMq+PCuBA9r6PsAZVVXtmIUASXYEHg5cmWSLJFu19C2AZ9MNWpAkSZrThtY92kZ+HgKcBiwAjq2qS5K8BVhVVSuBY4ATkqyma2Fb0Q5/InBokl8Bvwb+pKpuSPIQ4NNJxur+8ar60rDOQZIkabYY6sN1q+pU4NS+tDf1rN8B7DvOcScAJ4yTfgXw6A1fU0maPRw9KWk8TmMlSZI0AgzaJEmSRoBBmyRJ0ggwaJMkSRoBQx2IIEnShuQgDc1ntrRJkiSNAIM2SZKkEWDQJkmSNAIM2iRJkkaAAxEkaQZ4Q72k9WVLmyRJ0ggwaJMkSRoBBm2SJEkjwKBNkiRpBBi0SZIkjQCDNkmSpBHgIz8kSXOCj1HRXGdLmyRJ0ggwaJ
(base64-encoded PNG plot output omitted)\n",
"text/plain": [
"