{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Example Scenario 1: Generating embeddings for ConceptNet nodes\n", "\n", "*Alice wishes to import the English subset of ConceptNet in KGTK format. Then, she would extract a subset of ConceptNet where two concepts are connected with a precise semantic relation, like `Causes` or `UsedFor` (as opposed to weaker relations like `/r/RelatedTo`). Text embeddings would be computed for all nodes in this subset, and saved in a file called `emb.txt`.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note on the expected running time:** Running this notebook takes around half an hour on a Macbook Pro laptop with MacOS Catalina 10.15, a 2.3 GHz 8-Core Intel Core i9 processor, 2TB SSD disk, and 64 GB 2667 MHz DDR4 memory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation\n", "\n", "To run this notebook, Alice would need the ConceptNet graph file. We will work with the latest ConceptNet, v5.7.0. Presumably, this file is not present on Alice's laptop, so we need to download and unpack it first (note: mac users might need to install `wget` first: `brew install wget`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "wget https://s3.amazonaws.com/conceptnet/downloads/2019/edges/conceptnet-assertions-5.7.0.csv.gz" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "gunzip conceptnet-assertions-5.7.0.csv.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation in KGTK" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will select the relevant edges from ConceptNet and sort them (note that we extract three more relations which will be used to extract labels, descriptions, and inheritance by our embedding generator below).\n", "\n", "Then we compute text embeddings. For demonstration purposes, we will compute embeddings based on the first 30k edges." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████| 19571/19571 [29:14<00:00, 11.16it/s]\n" ] } ], "source": [ "%%bash\n", "kgtk import-conceptnet --english_only -i conceptnet-assertions-5.7.0.csv / \\\n", " filter -p \" ; /r/Causes,/r/UsedFor,/r/Synonym,/r/DefinedAs,/r/IsA ; \" / sort -c 1,2,3 \\\n", " | head -30000 |\n", " kgtk text-embedding --debug --embedding-projector-metadata-path none \\\n", " --embedding-projector-metadata-path none \\\n", " --label-properties \"/r/Synonym\" \\\n", " --isa-properties \"/r/IsA\" \\\n", " --description-properties \"/r/DefinedAs\" \\\n", " --property-value \"/r/Causes\" \"/r/UsedFor\" \\\n", " --has-properties \"\" \\\n", " -f kgtk_format \\\n", " --output-data-format kgtk_format \\\n", " --use-cache \\\n", " --model bert-large-nli-cls-token \\\n", " > emb.txt " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the result, by printing the first 500 characters of the file:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "node\tproperty\tvalue\n", "/c/en/astragalus_glycyphyllos/n/wn/plant\ttext_embedding\t-0.38157165,-0.021805033,0.7940887,-1.5922968,0.52496123,-0.16233969,-0.19431037,1.0408834,0.8114325,0.3559178,0.61059636,-0.24603112,0.5337883,0.4534494,-0.29937816,0.090129025,-0.30235052,-0.6983496,-1.171757,0.9471463,0.9576315,0.6795303,-1.1980538,0.65520096,-0.59407276,0.28939876,-0.6164435,-0.2264376,1.5879735,0.31625852,-0.42459768,-0.43198207,0.22300366,-0.2425214,-0.5070722,-0.08494526,-0.6393699,0.18749073,0.48" ] } ], "source": [ "!head -c500 emb.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Playing with the embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What can we do with the embeddings now that we have computed them? For applications like query answering or entity resolution, we need a representation where similar concepts have similar embeddings. Let's perform a small trial on whether this is the case for our ConceptNet embeddings. \n", "\n", "We will use the customary metric *cosine similarity* to measure vector similarities. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's first load all embeddings into a key-value dictionary:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Load the KGTK embedding file (tab-separated columns: node, property, value)\n", "# into a dictionary mapping each node to its embedding vector of floats.\n", "embeddings = {}\n", "with open('emb.txt', 'r') as f:\n", "    header = next(f)  # skip the header row\n", "    for line in f:\n", "        node1, label, embedding = line.rstrip().split('\\t')\n", "        embeddings[node1] = [float(x) for x in embedding.split(',')]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We would expect the embeddings for zero (node `/c/en/0`) and for a plant (node `/c/en/astragalus_alpinus/n/wn/plant`) to be fairly different, leading to a low cosine similarity:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Similarity between zero and the astragalus alpinus plant: 0.437055\n" ] } ], "source": [ "emb_0 = embeddings['/c/en/0']\n", "emb_alpinus = embeddings['/c/en/astragalus_alpinus/n/wn/plant']\n", "# cosine_similarity returns a 1x1 matrix; take the single scalar value\n", "sim = cosine_similarity([emb_0], [emb_alpinus])[0][0]\n", "\n", "print(\"Similarity between zero and the astragalus alpinus plant: %f\" % sim)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, we would expect the similarity between the embeddings of two countries to be much higher:" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Similarity between America and Argentina: 0.772047\n" ] } ], "source": [ "emb_america = embeddings['/c/en/america']\n", "emb_argentina = embeddings['/c/en/argentina']\n", "sim = cosine_similarity([emb_america], [emb_argentina])[0][0]\n", "\n", "print(\"Similarity between America and Argentina: %f\" % sim)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Feel free to experiment further with the embedding similarity. Keep in mind that in this notebook we computed embeddings only for the first 30,000 edges of ConceptNet; if you want to investigate the entire ConceptNet, you first need to compute the embeddings on the full KG." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" } }, "nbformat": 4, "nbformat_minor": 4 }