{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook gives the computational details of the results in the manuscript \"Ruling Out Static Latent Homophily in Citation Networks\" ([arXiv:1605.08185](http://arxiv.org/abs/1605.08185)). We are interested in studying causal structures in citation networks and their algebraic geometry. In particular, we study semidefinite programming relaxations (SDPs) of a latent homophily model.\n", "\n", "We start with importing the necessary libraries. The module [metaknowledge](http://networkslab.org/metaknowledge/) handles the citation database and [Ncpol2sdpa](http://ncpol2sdpa.readthedocs.org/) for generating the SDP. It also needs at least one SDP solver; in the example below, we will use [Mosek](https://mosek.com/). Since metaknowledge requires Python 3, this notebook will not work with a Python 2 kernel." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import metaknowledge as mk\n", "import ncpol2sdpa as nc\n", "import numpy as np\n", "import re" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will need a handful of helper functions. The following are the $F_\\pm$ and $S_\\pm$ counters in Eq. (1):\n", "\n", "$P(A_{1:T}|R_A) = \\alpha_+^{F_+(A)}\\alpha_-^{F_-(A)}(1-\\alpha_-)^{S_-(A)}(1-\\alpha_+)^{S_+(A)} \\alpha_0^{1/2(1+A1)}(1-\\alpha_0)^{1/2(1-A1)}$\n", "\n", "We also define a function to generate chains of ${\\pm}1$, and a class for storing the probabilities of the chains." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def Fp(chain, T):\n", " return sum((1+chain[t])*(1-chain[t+1]*chain[t]) for t in range(T-1))//4\n", "\n", "\n", "def Fn(chain, T):\n", " return sum((1-chain[t])*(1-chain[t+1]*chain[t]) for t in range(T-1))//4\n", "\n", "\n", "def Sp(chain, T):\n", " return sum((1+chain[t])*(1+chain[t+1]*chain[t]) for t in range(T-1))//4\n", "\n", "\n", "def Sn(chain, T):\n", " return sum((1-chain[t])*(1+chain[t+1]*chain[t]) for t in range(T-1))//4\n", "\n", "\n", "def generate_chains(T, At=None, chain=None):\n", " if At is None:\n", " Atl = [1, -1]\n", " else:\n", " Atl = [At]\n", " self_chain = [c for c in chain]\n", " sum_ = []\n", " for Ai in Atl:\n", " if chain is None:\n", " self_chain = [Ai]\n", " else:\n", " self_chain.append(Ai)\n", " for Aj in [1, -1]:\n", " if T > 2:\n", " sum_ += generate_chains(T-1, Aj, self_chain)\n", " else:\n", " sum_.append(self_chain + [Aj])\n", " return sum_\n", "\n", "\n", "class Probability(object):\n", "\n", " def __init__(self, T):\n", " chains = generate_chains(T)\n", " self.combinations = []\n", " for chain1 in chains:\n", " for chain2 in chains:\n", " self.combinations.append([chain1, chain2, 0])\n", "\n", " def __getitem__(self, chains):\n", " for combination in self.combinations:\n", " if combination[0] == chains[0] and combination[1] == chains[1]:\n", " return combination[2]\n", " raise Exception(\"Not found\")\n", "\n", " def __setitem__(self, chains, value):\n", " for combination in self.combinations:\n", " if combination[0] == chains[0] and combination[1] == chains[1]:\n", " combination[2] = value\n", " return\n", " raise Exception(\"Not found\")\n", "\n", " def normalize(self, Z):\n", " for combination in self.combinations:\n", " combination[2] /= Z" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next couple of functions work on the citation network. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "The next couple of functions work on the citation network. The first function attempts to mitigate the arrogance of Thomson Reuters, who cannot be bothered to normalize author names. The second function collects all authors who made a reference in a given time period, together with the IDs of the works they cited. The third one turns the co-author network into a directed graph, orienting each edge from the higher-degree author towards the lower-degree one. The last one collects all references an author made across the epochs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def normalize_name(name):\n", "    # Lowercase, strip punctuation, and collapse spacing so that variant\n", "    # spellings of the same author map to the same key\n", "    w = re.sub(r\"([a-z]{2,}) \", r\"\g<1>X\", re.sub(r\"[\.,]\", \" \", name.lower()))\n", "    return w.replace(\" \", \"\").replace(\"X\", \" \")\n", "\n", "\n", "def get_authors_in_period(RC, year1, year2):\n", "    # Map each normalized author name to the IDs of the references the\n", "    # author cited between year1 and year2\n", "    RC_period = RC.yearSplit(year1, year2)\n", "    authors_period = {}\n", "    for R in RC_period:\n", "        if R.citations is not None:\n", "            reference_IDs = [reference.ID() for reference in R.citations]\n", "            for author in R.authors:\n", "                author = normalize_name(author)\n", "                if author in authors_period:\n", "                    authors_period[author] += reference_IDs\n", "                else:\n", "                    # Copy the list to avoid aliasing between co-authors\n", "                    # of the same record\n", "                    authors_period[author] = list(reference_IDs)\n", "    return authors_period\n", "\n", "\n", "def create_directed_coauthor_graph(RC):\n", "    coauthors_undirected = RC.coAuthNetwork()\n", "    coauthors = coauthors_undirected.to_directed()\n", "    weights = coauthors_undirected.degree()\n", "    # Keep only the direction from the higher-degree author to the\n", "    # lower-degree one\n", "    for edge in coauthors_undirected.edges_iter():\n", "        if weights[edge[0]] > weights[edge[1]]:\n", "            coauthors.remove_edge(edge[1], edge[0])\n", "        else:\n", "            coauthors.remove_edge(edge[0], edge[1])\n", "    return coauthors\n", "\n", "\n", "def get_all_references(author, epochs):\n", "    # Union of all reference IDs the author cited in any epoch\n", "    references = set()\n", "    for epoch in epochs:\n", "        if author in epoch:\n", "            for reference in epoch[author]:\n", "                references.add(reference)\n", "    return references" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code snippet loads the database. Here we use the example set that comes with metaknowledge; the commented-out line would load the full corpus." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "RC = mk.RecordCollection(\"./savedrecs.txt\")\n", "# RC = mk.RecordCollection(\"./InfSci20JournBase.hci\")" ] }
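, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check (added for exposition; the names below are hypothetical, not taken from the corpus), we report how many records were loaded and confirm that the normalizer maps two spellings of the same author to the same key:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The two hypothetical spellings should both normalize to 'smith j'\n", "print(\"Records loaded:\", len(RC))\n", "print(normalize_name(\"Smith, J.\"))\n", "print(normalize_name(\"SMITH J\"))" ] }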
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "years = []\n", "for R in RC:\n", " if isinstance(R.year, int):\n", " years.append(R.year)\n", "histogram = np.histogram(np.array(years), bins=3)\n", "coauthors0 = create_directed_coauthor_graph(RC.yearSplit(histogram[1][0],\n", " histogram[1][1]))\n", "\n", "authors_period0 = get_authors_in_period(RC, histogram[1][0], histogram[1][1])\n", "authors_period1 = get_authors_in_period(RC, histogram[1][1], histogram[1][2])\n", "authors_period2 = get_authors_in_period(RC, histogram[1][2], histogram[1][3])\n", "epochs = [authors_period0, authors_period1, authors_period2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We populate the probability class with length-3 chains and add the statistics from the corpus across the three time periods:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "T = len(epochs)\n", "p = Probability(T)\n", "M = 0\n", "for pair in coauthors0.edges():\n", " coauthor1 = normalize_name(pair[0])\n", " coauthor2 = normalize_name(pair[1])\n", " references = get_all_references(coauthor1, epochs)\n", " references.union(get_all_references(coauthor1, epochs))\n", " for reference in references:\n", " chain1, chain2 = [], []\n", " for epoch in epochs:\n", " if coauthor1 in epoch:\n", " if reference in epoch[coauthor1]:\n", " chain1.append(+1)\n", " else:\n", " chain1.append(-1)\n", " else:\n", " chain1.append(-1)\n", " if coauthor2 in epoch:\n", " if reference in epoch[coauthor2]:\n", " chain2.append(+1)\n", " else:\n", " chain2.append(-1)\n", " else:\n", " chain2.append(-1)\n", " p[(chain1, chain2)] += 1\n", " M += 1\n", "p.normalize(M)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set up the symbolic variables of the SDP relaxation, and set the moment constraints as outlined in Eq. (2), that is, \n", "\n", "$y_j = \\sum_{R_A, R_B} P(R_A, R_B|E)f_j(x),$\n", "\n", "where\n", "\n", "$f_j(x) = \\sum_{A,B}P(A_{1:T}|R_A)P(B_{1:T}|R_B)O_j(A,B).$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A1 = 1\n", "alpha0 = nc.generate_variables(name='alpha_0')[0]\n", "alphap = nc.generate_variables(name='alpha_p')[0]\n", "alpham = nc.generate_variables(name='alpha_m')[0]\n", "beta0 = nc.generate_variables(name='beta_0')[0]\n", "betap = nc.generate_variables(name='beta_p')[0]\n", "betam = nc.generate_variables(name='beta_m')[0]\n", "chains = generate_chains(T)\n", "moments = []\n", "for chain1 in chains:\n", " for chain2 in chains:\n", " if p[(chain1, chain2)] > 0:\n", " P_A = alphap**Fp(chain1, T) * alpham**Fn(chain1, T) * (1-alpham)**Sn(chain1, T) * (1-alphap)**Sp(chain1, T) * \\\n", " alpha0**((1+A1)//2) * (1-alpha0)**((1-A1)//2)\n", " P_B = betap**Fp(chain2, T) * betam**Fn(chain2, T) * (1-betam)**Sn(chain2, T) * (1-betap)**Sp(chain2, T) * \\\n", " beta0**((1+A1)//2) * (1-beta0)**((1-A1)//2)\n", " moments.append(P_A*P_B - p[(chain1, chain2)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also have to make sure that the constraints Eq. (3) are satisfied to ensure that we have a valid probability distribution\"\n", "\n", "$x\\in\\mathbb{R}^6: g_i(x)=x_i(1-x_i)\\geq 0, i=1,\\ldots,6.$\n", "\n", "These constraints are not on the moments but on the actual variables, so matching localizing matrices will be generated in the SDP relaxation." 
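, { "cell_type": "markdown", "metadata": {}, "source": [ "Before building the relaxation, a quick sanity check (added for exposition): the normalized statistics should sum to one, and there is one moment equality for every pair of chains observed in the corpus." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The normalized counts form a probability distribution over chain pairs;\n", "# each entry of `moments` must vanish at a feasible parameter point\n", "print(\"Total probability mass:\", sum(c[2] for c in p.combinations))\n", "print(\"Number of moment equalities:\", len(moments))" ] }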
, { "cell_type": "markdown", "metadata": {}, "source": [ "We also have to make sure that the constraints in Eq. (3) are satisfied, so that we have a valid probability distribution:\n", "\n", "$x\in\mathbb{R}^6: g_i(x)=x_i(1-x_i)\geq 0, \quad i=1,\ldots,6.$\n", "\n", "These constraints are not on the moments but on the actual variables, so matching localizing matrices will be generated in the SDP relaxation. In the code we impose the equivalent pairs of linear constraints $x_i\geq 0$ and $1-x_i\geq 0$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# The twelve linear constraints 0 <= x_i <= 1 on the six parameters\n", "inequalities = [alpha0, alphap, alpham, beta0, betap, betam,\n", "                1-alpha0, 1-alphap, 1-alpham, 1-beta0, 1-betap, 1-betam]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we generate a level-3 relaxation and solve it as a feasibility problem." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sdp = nc.SdpRelaxation([alpha0, alphap, alpham, beta0, betap, betam])\n", "sdp.get_relaxation(3, inequalities=inequalities, momentequalities=moments)\n", "sdp.solve(solver=\"mosek\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the example set that comes with metaknowledge, the status will be optimal, that is, the relaxation is feasible, and we cannot say anything about static latent homophily. Executing the notebook with the actual database used in the paper gives an infeasible status, which allows us to reject the hypothesis." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "print(sdp.primal, sdp.dual, sdp.status)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }