{ "cells": [ { "cell_type": "markdown", "id": "9c7373b1", "metadata": {}, "source": [ "# Agent-Based Model (ABM) for Democratic Backsliding\n", "\n", "Jochen Fromm, 2022" ] }, { "cell_type": "code", "execution_count": 1, "id": "9d789bf1", "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import random" ] }, { "cell_type": "markdown", "id": "c35b5b68", "metadata": {}, "source": [ "### The fitness" ] }, { "cell_type": "markdown", "id": "186090c2", "metadata": {}, "source": [ "A simple 2x2 payoff matrix for a cooperation game, stored as a flat list." ] }, { "cell_type": "code", "execution_count": 2, "id": "82d8dc67", "metadata": {}, "outputs": [], "source": [ "PAYOFF = [2,0,0,2]" ] }, { "cell_type": "markdown", "id": "4b6b02b4", "metadata": {}, "source": [ "### The agents" ] }, { "cell_type": "markdown", "id": "8cdbcc2a", "metadata": {}, "source": [ "Agents have two properties, \"fitness\" and \"strategy\", and interact according to a standard 2x2 payoff matrix from evolutionary game theory. \"strategy\" can take two values: 1 for \"cooperators\" and 0 for \"critics\". In our case these values mean cooperating with the regime or criticizing it (i.e. cooperating with the regime's opponents)."
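, "\n", "\n", "The payoff lookup uses the flat index `strategy*2 + opponent_strategy`; a minimal sketch of the four cases (a hypothetical illustration, not part of the model code):\n", "\n", "```python\n", "PAYOFF = [2, 0, 0, 2]  # [critic/critic, critic/coop, coop/critic, coop/coop]\n", "for s in (0, 1):\n", "    for o in (0, 1):\n", "        # agents only score against a like-minded opponent\n", "        print(s, o, PAYOFF[s*2 + o])\n", "```"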
] }, { "cell_type": "code", "execution_count": 3, "id": "768a1220", "metadata": {}, "outputs": [], "source": [ "COOPERATE = 1\n", "CRITICIZE = 0\n", "\n", "class Agent:\n", "    def __init__(self, s):\n", "        self.fitness = 0\n", "        self.strategy = s\n", "\n", "    def reset(self):\n", "        self.fitness = 0\n", "\n", "    def interact(self, opponent, PAYOFF):\n", "        # own strategy selects the row, the opponent's strategy the column\n", "        self.fitness += PAYOFF[self.strategy*2 + opponent.strategy]" ] }, { "cell_type": "code", "execution_count": 4, "id": "164eba8d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Agent a fitness 2\n", "Agent b fitness 0\n", "Agent a fitness 2\n", "Agent c fitness 0\n", "Agent c fitness 2\n", "Agent d fitness 0\n" ] } ], "source": [ "a = Agent(0)\n", "b = Agent(0)\n", "c = Agent(1)\n", "d = Agent(1)\n", "a.interact(b, PAYOFF)\n", "print(\"Agent a fitness\", a.fitness)\n", "print(\"Agent b fitness\", b.fitness)\n", "a.interact(c, PAYOFF)\n", "print(\"Agent a fitness\", a.fitness)\n", "print(\"Agent c fitness\", c.fitness)\n", "c.interact(d, PAYOFF)\n", "print(\"Agent c fitness\", c.fitness)\n", "print(\"Agent d fitness\", d.fitness)" ] }, { "cell_type": "markdown", "id": "45c4f20a", "metadata": {}, "source": [ "### The Agent-Based Model" ] }, { "cell_type": "markdown", "id": "b0c6104f", "metadata": {}, "source": [ "In each generation, agents whose fitness is at or below the average are eliminated because they are not well adapted to their environment (in our case, for example, because they are imprisoned or emigrate to avoid prison and persecution)." ] }, { "cell_type": "code", "execution_count": 5, "id": "1df63b4e", "metadata": {}, "outputs": [], "source": [ "class AgentBasedModel:\n", "    def __init__(self, cooperators, critics, PAYOFF):\n", "        self.cooperators = cooperators\n", "        self.critics = critics\n", "        self.state = 0\n", "        self.model = []\n", "        self.data = []\n", "        self.PAYOFF = PAYOFF\n", "        for x in range(self.cooperators): self.model.append(Agent(COOPERATE))\n", "        for x in range(self.critics): self.model.append(Agent(CRITICIZE))\n", "\n", "    def play(self):\n", "        total_number = len(self.model)\n", "        i1 = random.randint(0, total_number-1)\n", "        i2 = i1\n", "        # draw a second, distinct agent\n", "        while i1 == i2:\n", "            i2 = random.randint(0, total_number-1)\n", "        agent1 = self.model[i1]\n", "        agent2 = self.model[i2]\n", "        agent1.interact(agent2, self.PAYOFF)\n", "        agent2.interact(agent1, self.PAYOFF)\n", "\n", "    def count_cooperators(self):\n", "        return sum(1 for agent in self.model if agent.strategy == COOPERATE)\n", "\n", "    def average_fitness(self):\n", "        fitness = [agent.fitness for agent in self.model]\n", "        return sum(fitness) / len(self.model)\n", "\n", "    def new_generation(self, generation):\n", "        total_number = len(self.model)\n", "        avg_fitness = self.average_fitness()\n", "        # print(\"avg_fitness\", avg_fitness)\n", "\n", "        # keep only agents whose fitness is above average\n", "        new_model = [Agent(agent.strategy) for agent in self.model if agent.fitness > avg_fitness]\n", "        new_count = len(new_model)\n", "        new_agents = total_number - new_count\n", "        # msg = f\"New generation {generation}. {new_count} agents from {total_number} have replicated. {new_agents} new agents\"\n", "        # print(msg)\n", "\n", "        # refill the population by copying random survivors\n", "        self.model = new_model\n", "        for x in range(new_agents):\n", "            i = random.randint(0, new_count-1)\n", "            strategy = self.model[i].strategy\n", "            self.model.append(Agent(strategy))\n", "\n", "    def run(self, generation_time, timesteps):\n", "        generation_no = 0\n", "        generations = int(timesteps / generation_time) - 1\n", "        # integer buffer for [time, generation, state]; rows are overwritten below\n", "        self.data = np.arange(generations * 3).reshape(generations, 3)\n", "        for t in range(timesteps):\n", "            self.play()\n", "            self.state = self.count_cooperators() / len(self.model)\n", "\n", "            if (t > 0) and (t % generation_time == 0):\n", "                generation_no += 1\n", "                self.new_generation(generation_no)\n", "                # record [t/30 (rescaled time axis), generation number, cooperators in percent]\n", "                self.data[generation_no-1] = np.array([t/30, generation_no, self.state * 100])\n", "                # print(np.array([t, generation_no, self.state * 100]))" ] }, { "cell_type": "code", "execution_count": 6, "id": "4e6cae24", "metadata": {}, "outputs": [], "source": [ "initial_state = 42\n", "generation_time = 100\n", "timesteps = 1000\n", "generations = int(timesteps / generation_time) - 1\n", "model_runs = 20" ] }, { "cell_type": "code", "execution_count": 7, "id": "f7698f3d", "metadata": {}, "outputs": [], "source": [ "PAYOFF = [2,0,0,1]\n", "model_1_data = np.arange(model_runs*generations*3).reshape(model_runs, generations, 3)\n", "\n", "for n in range(model_runs):\n", "    x = AgentBasedModel(initial_state, 100 - initial_state, PAYOFF)\n", "    x.run(generation_time, timesteps)\n", "    model_1_data[n] = x.data" ] }, { "cell_type": "code", "execution_count": 8, "id": "b2aa9826", "metadata": {}, "outputs": [], "source": [ "PAYOFF = [2,0,0,2]\n", "model_2_data = np.arange(model_runs*generations*3).reshape(model_runs, generations, 3)\n", "\n", "for n in range(model_runs):\n", "    x = AgentBasedModel(initial_state, 100 - initial_state, PAYOFF)\n", "    x.run(generation_time, timesteps)\n", "    model_2_data[n] = x.data" ] }, { "cell_type": "code", "execution_count": 9, "id": "dcb78a60", "metadata": {}, "outputs": [],
"source": [ "PAYOFF = [1,0,0,2]\n", "model_3_data = np.arange(model_runs*generations*3).reshape(model_runs, generations, 3)\n", "\n", "for n in range(model_runs):\n", "    x = AgentBasedModel(initial_state, 100 - initial_state, PAYOFF)\n", "    x.run(generation_time, timesteps)\n", "    model_3_data[n] = x.data" ] }, { "cell_type": "code", "execution_count": 10, "id": "d032b8a0", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\"><th></th><th>Time</th><th>Generation</th><th>State</th></tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>0</th><td>3</td><td>1</td><td>42</td></tr>\n", "    <tr><th>1</th><td>6</td><td>2</td><td>20</td></tr>\n", "    <tr><th>2</th><td>10</td><td>3</td><td>0</td></tr>\n", "    <tr><th>3</th><td>13</td><td>4</td><td>0</td></tr>\n", "    <tr><th>4</th><td>16</td><td>5</td><td>0</td></tr>\n", "    <tr><th>5</th><td>20</td><td>6</td><td>0</td></tr>\n", "    <tr><th>6</th><td>23</td><td>7</td><td>0</td></tr>\n", "    <tr><th>7</th><td>26</td><td>8</td><td>0</td></tr>\n", "    <tr><th>8</th><td>30</td><td>9</td><td>0</td></tr>\n", "  </tbody>\n", "</table>\n", "