{ "cells": [ { "cell_type": "markdown", "id": "69d9d6cc", "metadata": { "id": "69d9d6cc" }, "source": [ "# CS5481 - Tutorial 7\n", "## Information Retrieval\n", "\n", "Welcome to CS5481 tutorial 7. In this tutorial, you will learn how to use classic information retrieval methods in practice.\n", "\n", "Information retrieval in data science refers to the process of retrieving relevant information or data from a collection or database to satisfy a user's query. It involves techniques and methods for searching, retrieving, and presenting information in a way that is meaningful and useful to users.\n", "\n", "## preparation\n", "- Python\n", "- Python Libraries\n", " - numpy" ] }, { "cell_type": "markdown", "id": "aa182e09", "metadata": { "id": "aa182e09" }, "source": [ "# Context\n", "1. Boolean Retrieval\n", "\n", "2. Vector Space Model\n", "3. BM25 Model" ] }, { "cell_type": "markdown", "id": "470f48aa", "metadata": { "id": "470f48aa" }, "source": [ "# Boolean Retrieval\n" ] }, { "cell_type": "markdown", "id": "p7hiv2ZkW1_z", "metadata": { "id": "p7hiv2ZkW1_z" }, "source": [ "Boolean retrieval is a classic information retrieval technique that allows users to retrieve relevant documents or data by using Boolean operators and logical expressions.\n", "\n", "In Boolean retrieval, a document or data item is represented as a set of terms or keywords. Users can construct queries by combining these terms using Boolean operators such as AND, OR, and NOT. The operators enable users to specify the relationships between terms and refine their search criteria.\n", "\n", "The basic Boolean operators are:\n", "\n", "* AND: Retrieves documents or data items that contain all the specified terms.\n", "* OR: Retrieves documents or data items that contain at least one of the specified terms.\n", "* NOT: Excludes documents or data items that contain the specified term.\n", "\n", "By combining these operators, users can create complex Boolean expressions to precisely /prəˈsaɪs.li/ define their search requirements. The result of a Boolean retrieval is a set of documents or data items that match the specified criteria." 
] }, { "cell_type": "markdown", "id": "a605a10f", "metadata": { "id": "a605a10f" }, "source": [ "Let's consider the following documents:" ] }, { "cell_type": "code", "execution_count": 1, "id": "583961dc", "metadata": { "id": "583961dc" }, "outputs": [], "source": [ "docs = [\"The top surface of the Model A's car-like exterior is a mesh so that air can pass through to eight propellers inside the body which provide lift.\",\n", " \"But flying any distance using these alone, without the assistance of wings, would require prohibitive amounts of power.\",\n", " \"Alef's proposed solution is novel - for longer flights the Model A transforms into a biplane.\",\n", " \"It's an ingenious idea, but is it a practical one?\",\n", " \"The mesh, as visualised, might also cause significant aerodynamic drag, he adds.\"]" ] }, { "cell_type": "markdown", "id": "f097ca88", "metadata": { "id": "f097ca88" }, "source": [ "Q1: construct a vocabulary table for these documents" ] }, { "cell_type": "code", "execution_count": 2, "id": "70297385", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "70297385", "outputId": "245f2b48-81a0-4ac8-eb69-bc098b0e536f", "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "['',\n", " 'Model',\n", " 'air',\n", " 'through',\n", " 'lift',\n", " 'without',\n", " 'would',\n", " 'these',\n", " 'exterior',\n", " 'require',\n", " 'into',\n", " 'provide',\n", " 'flights',\n", " 'is',\n", " 'for',\n", " \"It's\",\n", " 'distance',\n", " 'an',\n", " 'car-like',\n", " 'pass',\n", " 'ingenious',\n", " 'but',\n", " 'as',\n", " 'drag',\n", " 'using',\n", " 'eight',\n", " 'propellers',\n", " 'that',\n", " 'biplane',\n", " 'flying',\n", " 'novel',\n", " 'the',\n", " 'proposed',\n", " \"A's\",\n", " 'visualised',\n", " 'inside',\n", " 'surface',\n", " 'he',\n", " 'which',\n", " 'power',\n", " 'one',\n", " 'amounts',\n", " 'But',\n", " 'adds',\n", " 'A',\n", " 'wings',\n", " 'to',\n", " 'a',\n", " 'might',\n", " 'solution',\n", " \"Alef's\",\n", " 'it',\n", " 'The',\n", " 'can',\n", " 'top',\n", " 'so',\n", " 'cause',\n", " 'mesh',\n", " 'of',\n", " 'prohibitive',\n", " 'practical',\n", " 'body',\n", " 'also',\n", " 'significant',\n", " 'assistance',\n", " 'aerodynamic',\n", " 'idea',\n", " 'any',\n", " 'longer',\n", " 'transforms',\n", " 'alone']" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "vocabs = [v.strip(\"-\").strip(\".\").strip(\"?\").strip(\",\") for doc in docs for v in doc.strip().split()]\n", "# remove repeated vocabs\n", "vocabs = list(set(vocabs))\n", "vocabs" ] }, { "cell_type": "markdown", "id": "RUd4qSEHYenK", "metadata": { "id": "RUd4qSEHYenK" }, "source": [ "The given code snippet uses list comprehension to extract individual words from a nested list of documents (docs). It removes common punctuation marks from each word and creates a list of unique vocabulary words (vocabs) by converting it to a set and then back to a list." ] }, { "cell_type": "markdown", "id": "848101cb", "metadata": { "id": "848101cb" }, "source": [ "Q2. 
{ "cell_type": "code", "execution_count": 3, "id": "465b6d05-9731-475e-b147-9f64547c4d14", "metadata": {}, "outputs": [], "source": [ "# Recap: what is a one-hot vector?\n", "vocab = [\"hello\", \"how\", \"are\", \"you\", \"world\"]\n", "doc1 = \"hello world\"\n", "doc2 = \"how are you\"\n", "v_doc1 = [1,0,0,0,1]\n", "v_doc2 = [0,1,1,1,0]" ] }, { "cell_type": "code", "execution_count": 4, "id": "1c405505", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "1c405505", "outputId": "3a10bfab-d6db-44c0-e67e-2825ed36dd49" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(5, 71)\n" ] }, { "data": { "text/plain": [ "array([[0., 1., 1., 1., 1., 0., 0., 0., 1., 0., 0., 1., 0., 1., 0., 0.,\n", "        0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0., 1.,\n", "        0., 1., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1.,\n", "        0., 0., 0., 0., 1., 1., 1., 1., 0., 1., 1., 0., 0., 1., 0., 0.,\n", "        0., 0., 0., 0., 0., 0., 0.],\n", "       [0., 0., 0., 0., 0., 1., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.,\n", "        1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 1.,\n", "        0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 1., 0., 0., 1., 0., 0.,\n", "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.,\n", "        1., 0., 0., 1., 0., 0., 1.],\n", "       [1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 1., 1., 0.,\n", "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 1.,\n", "        1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,\n", "        0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", "        0., 0., 0., 0., 1., 1., 0.],\n", "       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1.,\n", "        0., 1., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", "        0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.,\n", "        0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,\n", "        0., 0., 1., 0., 0., 0., 0.],\n", "       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", "        0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,\n", "        0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,\n", "        1., 0., 0., 0., 1., 0., 0., 0., 1., 1., 0., 0., 0., 0., 1., 1.,\n", "        0., 1., 0., 0., 0., 0., 0.]])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "one_hot_docs = np.zeros((len(docs), len(vocabs)))\n", "print(one_hot_docs.shape)  # doc_num x vocab_len\n", "for i, doc in enumerate(docs):\n", "    for v in doc.strip().split():\n", "        # strip punctuation the same way the vocabulary was built\n", "        v = v.strip(\"-\").strip(\".\").strip(\"?\").strip(\",\")\n", "        one_hot_docs[i][vocabs.index(v)] = 1\n", "one_hot_docs" ] }, { "cell_type": "markdown", "id": "dA-s04tnY240", "metadata": { "id": "dA-s04tnY240" }, "source": [ "The code snippet creates a matrix one_hot_docs of shape (number of documents, vocabulary size), initialized with zeros. It then iterates over each document in the docs list and each word in the document, strips leading and trailing punctuation from the word, finds its index in the vocabs list, and sets the corresponding entry in the one_hot_docs matrix to 1. Finally, it returns the resulting one-hot encoded matrix one_hot_docs." ] }, { "cell_type": "markdown", "id": "1344af31", "metadata": { "id": "1344af31" }, "source": [ "Q3. Retrieve docs satisfying the following requirements with the Boolean model\n", "1. Model OR power\n", "2. Model AND air" ] },
{ "cell_type": "markdown", "id": "K-2ZPs28vgA9", "metadata": { "id": "K-2ZPs28vgA9" }, "source": [ "1. We create a one-hot encoded vector **retri_1** where the entries corresponding to the indices of \"Model\" and \"power\" are set to 1. We then perform **element-wise multiplication** between the one-hot encoded documents (one_hot_docs) and the retrieval vector (retri_1), followed by **summing** the values along the rows. The resulting retri_1_results contains, for each document, the number of query terms (\"Model\" or \"power\") it contains. Finally, we print the documents where **the sum is greater than or equal to 1**, indicating that the document satisfies the Boolean condition of having either \"Model\" **OR** \"power\" present." ] }, { "cell_type": "code", "execution_count": null, "id": "5d410fa8", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "5d410fa8", "outputId": "433e3fc4-c830-43b2-8335-e5866529e9c1" }, "outputs": [], "source": [ "Model_id = vocabs.index(\"Model\")\n", "power_id = vocabs.index(\"power\")\n", "\n", "retri_1 = np.zeros(len(vocabs))\n", "retri_1[Model_id] = 1\n", "retri_1[power_id] = 1\n", "\n", "retri_1_results = np.sum(one_hot_docs * retri_1, axis=1)\n", "print(\"Retrieved Docs for \\\"Model OR power\\\": \")\n", "for index, doc in enumerate(docs):\n", "    if retri_1_results[index] >= 1:\n", "        print(index, docs[index])\n" ] }, { "cell_type": "markdown", "id": "bd82QBciaG-4", "metadata": { "id": "bd82QBciaG-4" }, "source": [ "2. We create a one-hot encoded vector retri_2 where the entries corresponding to the indices of \"Model\" and \"air\" are set to 1. We then perform element-wise multiplication between the one-hot encoded documents (one_hot_docs) and the retrieval vector (retri_2), followed by summing the values along the rows. The resulting retri_2_results contains, for each document, the number of query terms (\"Model\" or \"air\") it contains. Finally, we print the documents where the sum is greater than or equal to 2, indicating that the document satisfies the Boolean condition of having both \"Model\" AND \"air\" present.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "6waO59lRuuvP", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6waO59lRuuvP", "outputId": "ac4dadd1-2ac7-4f5d-b579-32c76e256c89" }, "outputs": [], "source": [ "Model_id = vocabs.index(\"Model\")\n", "air_id = vocabs.index(\"air\")\n", "\n", "retri_2 = np.zeros(len(vocabs))\n", "retri_2[Model_id] = 1\n", "retri_2[air_id] = 1\n", "\n", "retri_2_results = np.sum(one_hot_docs * retri_2, axis=1)\n", "print(\"\\nRetrieved Docs for \\\"Model AND air\\\": \")\n", "for index, doc in enumerate(docs):\n", "    if retri_2_results[index] >= 2:\n", "        print(index, docs[index])" ] }, { "cell_type": "markdown", "id": "0efd5910", "metadata": { "id": "0efd5910" }, "source": [ "# Vector Space Model\n", "\n", "Here, we mainly use the TF-IDF method to retrieve documents." ] }, { "cell_type": "markdown", "id": "bmZQIZVD1G5T", "metadata": { "id": "bmZQIZVD1G5T" }, "source": [ "TF-IDF is calculated by multiplying two components: term frequency (TF) and inverse document frequency (IDF). TF measures the frequency of a term within a document, indicating its importance within that specific document. IDF, on the other hand, quantifies the rarity or uniqueness of a term across the entire document collection. It assigns higher weights to terms that appear less frequently in the collection, as they are considered more informative.\n", "\n", "The TF-IDF score for a term in a document is obtained by multiplying the term's frequency (TF) in the document by the inverse document frequency (IDF) of the term across the corpus. The higher the TF-IDF score, the more significant the term is to the document. This method helps in identifying important and relevant terms for document retrieval, as well as in ranking documents by their relevance to a query."
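, "\n", "Concretely, with $N$ the number of documents and $df_t$ the number of documents containing term $t$, a common formulation is the one below (the code in this section additionally adds 1 to the IDF denominator for smoothing):\n", "\n", "$$\mathrm{tf}(t, d) = \frac{\text{count of } t \text{ in } d}{\text{length of } d}, \qquad \mathrm{idf}(t) = \log\frac{N}{df_t}, \qquad \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$$"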
] }, { "cell_type": "markdown", "id": "a27b9178", "metadata": { "id": "a27b9178" }, "source": [ "Q1. Represent documents with the TF-IDF model" ] }, { "cell_type": "code", "execution_count": null, "id": "eee59a81", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "eee59a81", "outputId": "e3332809-dad2-4413-bde9-175dce0385cd" }, "outputs": [], "source": [ "import numpy as np\n", "\n", "corpus = [\n", "    'this is the first document',\n", "    'this is the second document',\n", "    'and the third one',\n", "    'is this the first document'\n", "]\n", "\n", "# tokenize words\n", "word_list = []\n", "for i in range(len(corpus)):\n", "    word_list.append(corpus[i].split(' '))\n", "word_list" ] }, { "cell_type": "code", "execution_count": null, "id": "2dab5a54", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "2dab5a54", "outputId": "6f9cd8da-c261-4c92-8f0a-06c193ac11c5" }, "outputs": [], "source": [ "# count each word's frequency across the corpus; positions in the key list will serve as word ids\n", "from collections import Counter\n", "dictionary = Counter([v for item in word_list for v in item])\n", "dictionary" ] }, { "cell_type": "code", "execution_count": null, "id": "0ff74283", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0ff74283", "outputId": "db58bc57-277c-4ddd-ec60-cd108e5c1880" }, "outputs": [], "source": [ "# represent each document by its normalized word frequencies (term frequency)\n", "words = list(dictionary.keys())\n", "print(words)\n", "tf = np.zeros((len(word_list), len(dictionary)))\n", "for i, doc in enumerate(word_list):\n", "    for v in doc:\n", "        # word freq in the current document, normalized by document length\n", "        tf[i, words.index(v)] += 1 / len(doc)\n", "tf" ] }, { "cell_type": "code", "execution_count": null, "id": "88b71990", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "88b71990", "outputId": "8c85e62d-67ff-4f97-b219-d48fc4fad01d" }, "outputs": [], "source": [ "# compute each word's inverse document frequency\n", "print(words)\n", "idf = np.zeros((len(word_list), len(dictionary)))\n", "for i, doc in enumerate(word_list):\n", "    for v in doc:\n", "        idf[i, words.index(v)] = 1\n", "\n", "# the number of documents containing each word, plus 1 to avoid division by zero\n", "idf = idf.sum(0) + 1\n", "idf = np.log(len(word_list) / idf)\n", "idf" ] }, { "cell_type": "markdown", "id": "x7Uny0yPLuuX", "metadata": { "id": "x7Uny0yPLuuX" }, "source": [ "The IDF (inverse document frequency) formula varies slightly across implementations. For example, in some cases a constant value such as 1 is added to the denominator to prevent division by zero. There are also smoothing techniques where both the numerator and the denominator are incremented by 1. These variations handle specific corner cases and keep the IDF calculation numerically robust."
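, "\n", "As a brief sketch of two such variants (the second, smoothed form mirrors scikit-learn's TfidfVectorizer with smooth_idf=True; both lines are illustrative rather than part of this notebook's pipeline):\n", "\n", "```python\n", "import numpy as np\n", "\n", "N, df = 4, np.array([3, 4, 1])                   # e.g. document frequencies of three words\n", "idf_variant_1 = np.log(N / (df + 1))             # variant used above: +1 in the denominator\n", "idf_variant_2 = np.log((1 + N) / (1 + df)) + 1   # add-one smoothing on both sides\n", "```"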
] }, { "cell_type": "markdown", "id": "fkLT2OOPHyhS", "metadata": { "id": "fkLT2OOPHyhS" }, "source": [ "Here, we add 1 to the denominator to avoid division by zero (i.e., when the word is not present in any document). If a word is more common, its denominator will be larger, resulting in a smaller and closer-to-zero inverse document frequency. Then we obtained the result through log operation." ] }, { "cell_type": "code", "execution_count": null, "id": "7b40cdb0", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "7b40cdb0", "outputId": "755293d9-5e2b-4520-b0d3-d0cec148751e" }, "outputs": [], "source": [ "tfidf = tf * idf\n", "tfidf" ] }, { "cell_type": "markdown", "id": "20a255e2", "metadata": { "id": "20a255e2" }, "source": [ "Q2. Compute similarity between the following document and the above documents based on tfidf with dot product" ] }, { "cell_type": "code", "execution_count": null, "id": "b371ff35", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "b371ff35", "outputId": "d2bfc114-285c-43c1-82f7-f96531f32adc" }, "outputs": [], "source": [ "query_doc = \"this is second document\"\n", "tf_query = np.zeros(len(dictionary))\n", "for v in query_doc.split():\n", " tf_query[words.index(v)] += 1 / len(query_doc.split())\n", "print(\"tf_query: \", tf_query)\n", "idf_query = np.zeros(len(dictionary))\n", "for v in query_doc.split():\n", " idf_query[words.index(v)] = idf[words.index(v)]\n", "print(\"idf_query: \", idf_query)\n", "tfidf_query = tf_query * idf_query\n", "print(\"tfidf_query: \", tfidf_query)" ] }, { "cell_type": "markdown", "id": "F0ogbZ9IRcIi", "metadata": { "id": "F0ogbZ9IRcIi" }, "source": [ "Once we have the TF-IDF vectors representing the documents, we can use them for document retrieval by vectorizing the search query and calculating the distances between the search vector and the document vectors. This allows us to determine which documents are closer or more similar to the search vector.\n", "\n", "To do this, we vectorize the search query using the same TF-IDF representation as the documents and then calculate the distance between the search vector and each document vector. For cosine similarity, we compute the dot product between the search vector and each document vector." ] }, { "cell_type": "code", "execution_count": null, "id": "c85c760e", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "c85c760e", "outputId": "8257fe31-6a65-468d-92cb-3fcce8033a54" }, "outputs": [], "source": [ "# compute cosine similary\n", "similarity = tfidf_query * tfidf\n", "similarity.sum(1)" ] }, { "cell_type": "markdown", "id": "AsrVJPf7MeLa", "metadata": { "id": "AsrVJPf7MeLa" }, "source": [ "Q3. Why is the IDF (inverse document frequency) in the TF-IDF algorithm calculated using a logarithm?\n", "\n", "A3.1 First of all, for those terms with particularly high term frequencies that appear in almost every document (such as \"the\" and \"is\"), the number of documents in the collection containing these terms is approximately equal to the total number of documents, i.e., N/n = 1 (where N/n is always greater than 1). These terms would have a high weight, even though they lack discriminative power, which is not in line with our expectations. By using IDF (with the logarithm function), when log(1) = 0, the weight of these terms calculated by TF-IDF becomes 0, which aligns with our expectations.\n", "\n", "A3.2 Another reason is that using logarithm can prevent weight explosion. 
{ "cell_type": "markdown", "id": "AsrVJPf7MeLa", "metadata": { "id": "AsrVJPf7MeLa" }, "source": [ "Q3. Why is the IDF (inverse document frequency) in the TF-IDF algorithm calculated using a logarithm?\n", "\n", "A3.1 First of all, for terms with particularly high frequencies that appear in almost every document (such as \"the\" and \"is\"), the number of documents containing the term is approximately equal to the total number of documents, i.e., $N/n \\approx 1$ (note that $N/n$ is always at least 1). Without the logarithm, these terms would receive high weights even though they lack discriminative power, which is not in line with our expectations. With the logarithm, since $\\log(1) = 0$, the TF-IDF weight of such terms becomes 0, which aligns with our expectations.\n", "\n", "A3.2 Another reason is that the logarithm prevents weights from exploding. If certain words appear in only one or a few documents (such as typographical errors), the IDF without a logarithm would be very large (due to a very small denominator), dominating their weights. The logarithm mitigates this effect." ] }, { "cell_type": "markdown", "id": "6fefabb9", "metadata": { "id": "6fefabb9" }, "source": [ "# BM25 Model\n", "\n", "The BM25 (Best Matching 25) model is a ranking function commonly used in information retrieval and search engines. It is an improvement over the earlier TF-IDF model. BM25 takes into account factors such as term frequency, document length, and document frequency to calculate a relevance score between a search query and a document.\n", "\n", "$\\mathrm{Score}(Q, d) = \\sum_{i=1}^{n} W_i \\cdot R(q_i, d)$, where $Q$ is the query, $d$ is a document, $n$ is the number of words in $Q$, and $q_i$ is the $i$-th word in the query. $W_i$ is the weight of this word, and $R(q_i, d)$ is the relevance score between word $q_i$ and document $d$.\n", "\n", "In the BM25 model,\n", "$W_i = \\log\\frac{N - df_i + 0.5}{df_i + 0.5}$, where $N$ is the number of documents in the collection and $df_i$ is the number of documents containing word $q_i$.\n", "As with IDF, the more documents contain a term $q_i$, the less significant or distinctive $q_i$ is, and the smaller its weight. This IDF-style weight therefore captures how informative $q_i$ is for matching documents.\n", "\n", "$R(q_i, d) = \\frac{f_i \\cdot (k_1+1)}{f_i + K} \\cdot \\frac{qf_i \\cdot (k_2+1)}{qf_i + k_2}$, with\n", "$K = k_1 \\cdot (1 - b + b \\cdot \\frac{dl}{avg\\_dl})$, where $k_1$, $k_2$, $b$ are tunable parameters of the BM25 model. $k_1$ controls the impact of term frequency in the document on the relevance score, $k_2$ determines the effect of term frequency in the query, and $b$ controls the impact of document length normalization.\n", "$dl$ is the length of document $d$, and $avg\\_dl$ is the average length of all documents.\n", "Longer documents tend to have higher term frequencies simply because they contain more words, which can bias relevance ranking.\n", "By incorporating document length, the BM25 model normalizes the impact of document length on the relevance score.\n", "\n", "The first factor, $\\frac{f_i (k_1+1)}{f_i + K}$, captures the impact of term frequency in the document, where $f_i$ is the frequency of word $q_i$ in document $d$.\n", "The second factor, $\\frac{qf_i (k_2+1)}{qf_i + k_2}$, captures the impact of term frequency in the query, where $qf_i$ is the frequency of word $q_i$ in the query." ] },
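{ "cell_type": "markdown", "id": "bm25-weight-note", "metadata": {}, "source": [ "As a quick numerical check of the weight formula (a sketch with a hypothetical collection of $N = 4$ documents, using the natural logarithm as in the code below): a rare word with $df_i = 1$ gets $W_i = \\log\\frac{4 - 1 + 0.5}{1 + 0.5} \\approx 0.85$, while a word occurring in every document ($df_i = 4$) gets $W_i = \\log\\frac{0.5}{4.5} \\approx -2.20$. Note that this form of the weight can go negative for very common words." ] },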
{ "cell_type": "markdown", "id": "5ffe1b65", "metadata": { "id": "5ffe1b65" }, "source": [ "Q1. Construct a BM25 model" ] }, { "cell_type": "code", "execution_count": null, "id": "60d2303c", "metadata": { "id": "60d2303c" }, "outputs": [], "source": [ "import numpy as np\n", "from collections import Counter\n", "\n", "\n", "class BM25_Model(object):\n", "    def __init__(self, documents_list, k1=2, k2=1, b=0.5):\n", "        # documents, where each document is a list of words\n", "        self.documents_list = documents_list\n", "        self.documents_number = len(documents_list)\n", "        # average document length\n", "        self.avg_documents_len = sum([len(document) for document in documents_list]) / self.documents_number\n", "        # each word's frequency in each document\n", "        self.f = []\n", "        # each word's weight W_i\n", "        self.idf = {}\n", "        self.k1 = k1\n", "        self.k2 = k2\n", "        self.b = b\n", "        # obtain f and idf from the input documents\n", "        self.init()\n", "\n", "    def init(self):\n", "        # df: the number of documents containing each word\n", "        df = {}\n", "        for document in self.documents_list:\n", "            # word frequency in the current document\n", "            temp = {}\n", "            for word in document:\n", "                temp[word] = temp.get(word, 0) + 1\n", "            # save word frequency\n", "            self.f.append(temp)\n", "            # update the number of documents containing each word\n", "            for key in temp.keys():\n", "                df[key] = df.get(key, 0) + 1\n", "        # compute each word's idf-style weight\n", "        for key, value in df.items():\n", "            self.idf[key] = np.log((self.documents_number - value + 0.5) / (value + 0.5))\n", "\n", "    # compute the similarity score between the query and the index-th document\n", "    def get_score(self, index, query):\n", "        score = 0.0\n", "        document_len = len(self.documents_list[index])\n", "        qf = Counter(query)\n", "        for q in query:\n", "            if q not in self.f[index]:\n", "                continue\n", "            score += self.idf[q] * (self.f[index][q] * (self.k1 + 1) / (\n", "                    self.f[index][q] + self.k1 * (1 - self.b + self.b * document_len / self.avg_documents_len))) * (\n", "                             qf[q] * (self.k2 + 1) / (qf[q] + self.k2))\n", "\n", "        return score\n", "\n", "    # compute similarity scores between the query and all documents\n", "    def get_documents_score(self, query):\n", "        score_list = []\n", "        for i in range(self.documents_number):\n", "            score_list.append(self.get_score(i, query))\n", "        return score_list" ] }, { "cell_type": "markdown", "id": "8b6fbc2d", "metadata": { "id": "8b6fbc2d" }, "source": [ "# Practice" ] }, { "cell_type": "markdown", "id": "92d7abd6", "metadata": { "id": "92d7abd6" }, "source": [ "Q1. Use the BM25 model above to compute the similarity between a given query and the documents" ] },
{ "cell_type": "code", "execution_count": null, "id": "8215a372", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8215a372", "outputId": "9e31e45f-3c51-4bd8-ad11-a79160c42b77" }, "outputs": [], "source": [ "query = \"This is second documents\"\n", "corpus = [\n", "    'this is the first document',\n", "    'this is the second second document',\n", "    'and the third one',\n", "    'is this the first document'\n", "]\n", "\n", "\n", "# tokenize words\n", "word_list = []\n", "for i in range(len(corpus)):\n", "    word_list.append(corpus[i].split(' '))\n", "\n", "# note: matching is exact, so the tokens \"This\" and \"documents\" match nothing in the corpus;\n", "# only \"is\" and \"second\" contribute to the scores\n", "query = query.split(' ')\n", "\n", "BM25 = BM25_Model(word_list)\n", "scores = BM25.get_documents_score(query)\n", "\n", "scores" ] }, { "cell_type": "code", "execution_count": null, "id": "508836fc", "metadata": { "id": "508836fc" }, "outputs": [], "source": [] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "good", "language": "python", "name": "good" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }