{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Creating Text-Fabric dataset (from GBI trees XML nodes)\n", "\n", "Version: 0.4 (July 24, 2023 - major updates; changing feature names; updated documentation)\n", "\n", "## Table of content \n", "* 1 - Introduction\n", "* 2 - Read GBI XML data and store in pickle\n", " * 2.1 - Import various libraries\n", " * 2.2 - Initialize global data\n", " * 2.3 - Function to add parent info to each node in XML tree\n", " * 2.4 - Process the XML data and store in pickle file\n", "* 3 - Nestle1904GBI Text-Fabric production from pickle input\n", " * 3.1 - Load libraries and initialize some data\n", " * 3.2 - Running the Text-Fabric walker function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1 - Introduction\n", "##### [Back to TOC](#TOC)\n", "\n", "The source data for the conversion are the XML node files representing the macula-greek version of Eberhard Nestle's 1904 Greek New Testament (British Foreign Bible Society, 1904). The starting dataset is formatted according to Syntax diagram markup by the Global Bible Initiative (GBI). The most recent source data can be found on github https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes. Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n", "\n", "The production of the Text-Fabric files consist of two major parts. The first part is the creation of pickle files. The second part is the actual Text-Fabric creation process. Both parts are independent, allowing to start from part 2 by using the pickle files created in part 1 as input. \n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2 - Read GBI XML data and store in pickle \n", "##### [Back to TOC](#TOC)\n", "\n", "This script extracts all information from the GBI tree XML data file, organizes it into a Pandas DataFrame, and saves the result per book in a pickle file. Please note that pickling in Python refers to the process of serializing an object into a disk file or buffer. See also the [Python3 documentation](https://docs.python.org/3/library/pickle.html).\n", "\n", "Within the context of this script, the term 'Leaf' refers to nodes that contain the Greek word as data. These nodes are also referred to as 'terminal nodes' since they do not have any children, similar to leaves on a tree. Additionally, Parent1 represents the parent of the leaf, Parent2 represents the parent of Parent1, and so on. 
For a visual representation, please refer to the following diagram.\n", "\n", "\n", "For a full description of the structure of the source data, see the document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## 2.1 - Import various libraries\n", "##### [Back to TOC](#TOC) " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T02:58:14.739227Z", "start_time": "2022-10-28T02:57:38.766097Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import sys\n", "import os\n", "import time\n", "import pickle\n", "\n", "import re # used for regular expressions\n", "from os import listdir\n", "from os.path import isfile, join\n", "import xml.etree.ElementTree as ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2 - Initialize global data\n", "##### [Back to TOC](#TOC)\n", "\n", "The following global data initializes the script that gathers the XML data and stores it in the pickle files.\n", "\n", "IMPORTANT: To ensure proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, InputDir, and OutputDir to match the location of the data and the operating system you are using. In this Jupyter notebook, Windows is the operating system used." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "InputDir = BaseDir+'inputfiles\\\\'\n", "OutputDir = BaseDir+'outputfiles\\\\'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [2]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3 - Function to add parent info to each node in XML tree\n", "##### [Back to TOC](#TOC) \n", "\n", "In order to traverse from the 'leaves' up to the root of the tree, each node needs to be augmented with information pointing to its parent. 
The terminating nodes of an XML tree are called \"leaf nodes\" or \"leaves.\" These nodes do not have any child elements and are located at the end of a branch in the XML tree. Leaf nodes contain the actual data or content within an XML document. In contrast, non-leaf nodes are called \"internal nodes,\" which have one or more child elements.\n", "\n", "(Attribution: the concept of the following functions is taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def addParentInfo(et):\n", "    # Recursively store a reference to the parent element in each child's attrib\n", "    for child in et:\n", "        child.attrib['parent'] = et\n", "        addParentInfo(child)\n", "\n", "def getParent(et):\n", "    # Return the parent element stored by addParentInfo (None for the root node)\n", "    if 'parent' in et.attrib:\n", "        return et.attrib['parent']\n", "    else:\n", "        return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.4 - Process the XML data and store in pickle file\n", "##### [Back to TOC](#TOC)\n", "This code processes the books in the correct order. First, it parses the XML and adds parent information to each node. Then, it loops through the nodes and checks whether each one is a 'leaf' node, i.e., a node containing a single word. If it is a 'leaf' node, the following steps are performed:\n", "\n", "* Adds computed data to the 'leaf' nodes in memory.\n", "* Traverses from the 'leaf' node up to the root and adds information from the parent, grandparent, and so on, to the 'leaf' node.\n", "* Once it reaches the root, it stops and stores all the gathered information in a dataframe that will be added to the full_dataframe.\n", "* After processing all the nodes for a specific book, the full_dataframe is exported to a pickle file specific to that book.\n", "\n", "Note that this script takes a long time to execute (due to the large number of iterations). However, once the XML data is converted to PKL, there is no need to rerun it (unless the source XML data is updated)."
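,
    "\n",
    "Since each pickle file is simply a serialized DataFrame, a quick sanity check afterwards could look like this (a sketch; it assumes the OutputDir defined in section 2.2 and an already generated file):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Load a generated pickle file back into a DataFrame for inspection\n",
    "df = pd.read_pickle(OutputDir + '01-matthew.pkl')\n",
    "print(df.shape)             # one row per word, one column per harvested attribute\n",
    "print(list(df.columns)[:8]) # a few of the harvested attribute names\n",
    "```"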
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Processing Matthew at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\01-matthew.xml\n", "......................................................................................................................................................................................\n", "Found 18299 items in 91.10916662216187 seconds\n", "\n", "Processing Mark at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\02-mark.xml\n", "................................................................................................................\n", "Found 11277 items in 50.291404247283936 seconds\n", "\n", "Processing Luke at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\03-luke.xml\n", "..................................................................................................................................................................................................\n", "Found 19456 items in 133.3894076347351 seconds\n", "\n", "Processing John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\04-john.xml\n", "............................................................................................................................................................\n", "Found 15643 items in 64.38849639892578 seconds\n", "\n", "Processing Acts at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\05-acts.xml\n", ".......................................................................................................................................................................................\n", "Found 18393 items in 108.86283874511719 seconds\n", "\n", "Processing Romans at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\06-romans.xml\n", ".......................................................................\n", "Found 7100 items in 39.84243655204773 seconds\n", "\n", "Processing I_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\07-1corinthians.xml\n", "....................................................................\n", "Found 6820 items in 30.45336675643921 seconds\n", "\n", "Processing II_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\08-2corinthians.xml\n", "............................................\n", "Found 4469 items in 23.716757774353027 seconds\n", "\n", "Processing Galatians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\09-galatians.xml\n", "......................\n", "Found 2228 items in 10.996569156646729 seconds\n", "\n", "Processing Ephesians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\10-ephesians.xml\n", "........................\n", "Found 2419 items in 16.31870675086975 seconds\n", "\n", "Processing Philippians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\11-philippians.xml\n", "................\n", "Found 1630 items in 7.621110439300537 seconds\n", "\n", "Processing Colossians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\12-colossians.xml\n", "...............\n", "Found 1575 items in 10.663908243179321 seconds\n", "\n", "Processing I_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\13-1thessalonians.xml\n", "..............\n", 
"Found 1473 items in 8.290420293807983 seconds\n", "\n", "Processing II_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\14-2thessalonians.xml\n", "........\n", "Found 822 items in 4.760505676269531 seconds\n", "\n", "Processing I_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\15-1timothy.xml\n", "...............\n", "Found 1588 items in 10.483261823654175 seconds\n", "\n", "Processing II_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\16-2timothy.xml\n", "............\n", "Found 1237 items in 9.861332178115845 seconds\n", "\n", "Processing Titus at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\17-titus.xml\n", "......\n", "Found 658 items in 3.542095899581909 seconds\n", "\n", "Processing Philemon at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\18-philemon.xml\n", "...\n", "Found 335 items in 1.1049859523773193 seconds\n", "\n", "Processing Hebrews at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\19-hebrews.xml\n", ".................................................\n", "Found 4955 items in 24.637736558914185 seconds\n", "\n", "Processing James at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\20-james.xml\n", ".................\n", "Found 1739 items in 7.296755313873291 seconds\n", "\n", "Processing I_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\21-1peter.xml\n", "................\n", "Found 1676 items in 10.295158624649048 seconds\n", "\n", "Processing II_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\22-2peter.xml\n", "..........\n", "Found 1098 items in 5.295553684234619 seconds\n", "\n", "Processing I_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\23-1john.xml\n", ".....................\n", "Found 2136 items in 6.607006549835205 seconds\n", "\n", "Processing II_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\24-2john.xml\n", "..\n", "Found 245 items in 0.9022383689880371 seconds\n", "\n", "Processing III_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\25-3john.xml\n", "..\n", "Found 219 items in 0.6504268646240234 seconds\n", "\n", "Processing Jude at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\26-jude.xml\n", "....\n", "Found 457 items in 2.1085281372070312 seconds\n", "\n", "Processing Revelation at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\27-revelation.xml\n", "..................................................................................................\n", "Found 9832 items in 47.65871715545654 seconds\n", "\n" ] } ], "source": [ "# Set some globals\n", "monad=1 # Smallest meaningful unit of text (in this corpus: a single word)\n", "\n", "# Process all the books (files) in order\n", "for bo, bookinfo in bo2book.items():\n", " CollectedItems=0\n", " full_df=pd.DataFrame({})\n", " book_long=bookinfo[0]\n", " booknum=bookinfo[1]\n", " book_short=bookinfo[2]\n", " InputFile = os.path.join(InputDir, f'{bo}.xml')\n", " OutputFile = os.path.join(OutputDir, f'{bo}.pkl')\n", " print(f'Processing {book_long} at {InputFile}')\n", " DataFrameList = []\n", "\n", " # Send the loaded XML document to the parsing routine\n", " tree = ET.parse(InputFile)\n", " \n", " # Now add all the parent info to the nodes in the XML tree [this step is important!]\n", " 
addParentInfo(tree.getroot())\n", "    start_time = time.time()\n", "    \n", "    # Walk over all the leaves and harvest the data\n", "    for elem in tree.iter():\n", "        if not list(elem):\n", "            # If no child elements exist, this must be a leaf/terminal node\n", "            \n", "            # Show progress on screen by printing a dot for each 100 words processed\n", "            CollectedItems+=1\n", "            if (CollectedItems%100==0): print (\".\",end='')\n", "            \n", "            # Leafref will contain a list with book, chapter, verse and word number\n", "            Leafref = re.sub(r'[!: ]',\" \", elem.attrib.get('ref')).split()\n", "            \n", "            # Push value for monad to element tree \n", "            elem.set('monad', monad)\n", "            monad+=1\n", "            \n", "            # Add some important computed data to the leaf\n", "            elem.set('LeafName', elem.tag)\n", "            elem.set('word', elem.text)\n", "            elem.set('book_long', book_long)\n", "            elem.set('booknum', int(booknum))\n", "            elem.set('book_short', book_short)\n", "            elem.set('chapter', int(Leafref[1]))\n", "            elem.set('verse', int(Leafref[2]))\n", "            \n", "            # The following code traces the parents up the tree and stores the discovered attributes.\n", "            parentnode=getParent(elem)\n", "            index=0\n", "            while (parentnode):\n", "                index+=1\n", "                elem.set('Parent{}Name'.format(index), parentnode.tag)\n", "                elem.set('Parent{}Type'.format(index), parentnode.attrib.get('Type'))\n", "                elem.set('Parent{}Cat'.format(index), parentnode.attrib.get('Cat'))\n", "                elem.set('Parent{}Start'.format(index), parentnode.attrib.get('Start'))\n", "                elem.set('Parent{}End'.format(index), parentnode.attrib.get('End'))\n", "                elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('Rule'))\n", "                elem.set('Parent{}Head'.format(index), parentnode.attrib.get('Head'))\n", "                elem.set('Parent{}NodeId'.format(index),parentnode.attrib.get('nodeId'))\n", "                elem.set('Parent{}ClType'.format(index),parentnode.attrib.get('ClType'))\n", "                elem.set('Parent{}HasDet'.format(index),parentnode.attrib.get('HasDet'))\n", "                currentnode=parentnode\n", "                parentnode=getParent(currentnode) \n", "            elem.set('parents', int(index))\n", "            \n", "            # This pushes all the attributes of this leaf into a one-row DataFrame\n", "            DataFrameChunk=pd.DataFrame(elem.attrib, index=[monad])\n", "            DataFrameList.append(DataFrameChunk)\n", "    \n", "    # Store the resulting DataFrame per book into a pickle file for further processing\n", "    full_df = pd.concat(DataFrameList)\n", "    \n", "    with open(OutputFile, 'wb') as output:\n", "        pickle.dump(full_df, output)\n", "    print(\"\\nFound \",CollectedItems, \" items in %s seconds\\n\" % (time.time() - start_time)) \n", "    " ] }, { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "# 3 - Nestle1904GBI Text-Fabric production from pickle input \n", "##### [Back to TOC](#TOC)\n", "\n", "This script creates the Text-Fabric files by calling the Text-Fabric walker function.\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The pickle files created by the script in section 2.4 are stored at the GitHub location [/resources/pickle](https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/pickle)."
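,
    "\n",
    "For orientation before diving into the full director below: the walker API revolves around a director function that fires node/feature/terminate events. A minimal, self-contained sketch of that pattern (hypothetical two-word corpus and a scratch output directory 'minimal_tf'; the real director does the same once per book, chapter, verse, sentence, clause, phrase and word):\n",
    "\n",
    "```python\n",
    "from tf.fabric import Fabric\n",
    "from tf.convert.walker import CV\n",
    "\n",
    "TF = Fabric(locations='minimal_tf')  # scratch directory for the .tf files\n",
    "cv = CV(TF)\n",
    "\n",
    "def director(cv):\n",
    "    # Open nodes top-down, attach features, close them bottom-up\n",
    "    book = cv.node('book');       cv.feature(book, book='Matthew')\n",
    "    chapter = cv.node('chapter'); cv.feature(chapter, chapter=1)\n",
    "    verse = cv.node('verse');     cv.feature(verse, verse=1)\n",
    "    for w in ('Βίβλος', 'γενέσεως'):\n",
    "        word = cv.slot()          # slots are the word nodes\n",
    "        cv.feature(word, word=w, after=' ')\n",
    "        cv.terminate(word)\n",
    "    cv.terminate(verse); cv.terminate(chapter); cv.terminate(book)\n",
    "\n",
    "good = cv.walk(\n",
    "    director, slotType='word',\n",
    "    otext={'fmt:text-orig-full': '{word}{after}',\n",
    "           'sectionTypes': 'book,chapter,verse',\n",
    "           'sectionFeatures': 'book,chapter,verse'},\n",
    "    generic={}, intFeatures={'chapter', 'verse'},\n",
    "    featureMeta={'word': {'description': 'word as it appears in the text'},\n",
    "                 'after': {'description': 'character after the word'},\n",
    "                 'book': {'description': 'book name'},\n",
    "                 'chapter': {'description': 'chapter number'},\n",
    "                 'verse': {'description': 'verse number'}},\n",
    ")\n",
    "```"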
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.1 - Load libraries and initialize some data\n", "##### [Back to TOC](#TOC) \n", "\n", "The following global data initializes the Text-Fabric conversion script.\n", "\n", "IMPORTANT: To ensure the proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, source_dir, and output_dir to match the location of the data and the operating system you are using. This Jupyter notebook was run on the Windows operating system." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T03:01:34.810259Z", "start_time": "2022-10-28T03:01:25.745112Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import re\n", "import gc\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "from tf.parameters import VERSION\n", "from datetime import date\n", "import pickle\n", "\n", "\n", "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "source_dir = BaseDir+'outputfiles\\\\'   # the input for the walker is the output of the XML-to-pickle conversion (section 2)\n", "output_dir = BaseDir+'outputfilesTF\\\\' # the Text-Fabric output files\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [2]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.2 - Running the Text-Fabric walker function\n", "##### [Back to TOC](#TOC)\n", "\n", "Text-Fabric API info can be found at https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "From a high-level perspective, the director function first performs the following tasks:\n", "* Initializes all required data\n", "* Makes sure the input files (books) are read in the right order\n", "\n", "Next, in file-by-file (i.e., book-by-book) order, the XML terminal nodes within a specific book are sorted according to the order within the corpus. 
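In pandas terms the restore-order step is roughly (a sketch; the actual code in the director locates the column by position):\n\n```python\n# Sort the pickled DataFrame back into corpus order using its xml:id column\nsort_col = '{http://www.w3.org/XML/1998/namespace}id'\ndf = df_unsorted.sort_values(by=sort_col)\n```\n\n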
Then, for each terminal node (word), the following actions are performed:\n", "* Determine the book, chapter, verse and sentence the word belongs to and create related Text-Fabric nodes when required.\n", "* Determine syntactical information related to the word under processing and create relevant Text-Fabric phrase and clause nodes to store syntactical information.\n", "* Determine boundaries of sentences, clauses and phrases based upon punctuation and create sentence nodes and other nodes where required.\n", "* Determine various word-related attributes (e.g., orthographic, lexical, or morphological) and create a word node and store the attributes as features assigned to the word node.\n", "\n", "Explanatory notes about the data interpretation logic are incorporated within the Python code of the director function." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.4.10\n", "0 features found and 0 ignored\n", " 0.00s Not all of the warp features otype and oslots are present in\n", "~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " 0.00s Only the Feature and Edge APIs will be enabled\n", " 0.00s Warp feature \"otext\" not found. Working without Text-API\n", "\n", " 0.00s Importing data from walking through the source ...\n", " | 0.00s Preparing metadata... \n", " | SECTION TYPES: book, chapter, verse\n", " | SECTION FEATURES: book, chapter, verse\n", " | STRUCTURE TYPES: book, chapter, verse\n", " | STRUCTURE FEATURES: book, chapter, verse\n", " | TEXT FEATURES:\n", " | | text-orig-full after, word\n", " | 0.00s OK\n", " | 0.00s Following director... \n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\01-matthew.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\02-mark.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\03-luke.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\04-john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\05-acts.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\06-romans.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\07-1corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\08-2corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\09-galatians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\10-ephesians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\11-philippians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\12-colossians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\13-1thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\14-2thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\15-1timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\16-2timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\17-titus.pkl...\n", "\tloading 
C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\18-philemon.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\19-hebrews.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\20-james.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\21-1peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\22-2peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\23-1john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\24-2john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\25-3john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\26-jude.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\27-revelation.pkl...\n", " | 35s \"edge\" actions: 0\n", " | 35s \"feature\" actions: 451007\n", " | 35s \"node\" actions: 102748\n", " | 35s \"resume\" actions: 0\n", " | 35s \"slot\" actions: 137779\n", " | 35s \"terminate\" actions: 240527\n", " | 27 x \"book\" node \n", " | 260 x \"chapter\" node \n", " | 16124 x \"clause\" node \n", " | 72674 x \"phrase\" node \n", " | 5720 x \"sentence\" node \n", " | 7943 x \"verse\" node \n", " | 137779 x \"word\" node = slot type\n", " | 240527 nodes of all types\n", " | 35s OK\n", " | 0.00s checking for nodes and edges ... \n", " | 0.00s OK\n", " | 0.00s checking (section) features ... \n", " | 0.18s OK\n", " | 0.00s reordering nodes ...\n", " | 0.04s Sorting 27 nodes of type \"book\"\n", " | 0.05s Sorting 260 nodes of type \"chapter\"\n", " | 0.06s Sorting 16124 nodes of type \"clause\"\n", " | 0.09s Sorting 72674 nodes of type \"phrase\"\n", " | 0.17s Sorting 5720 nodes of type \"sentence\"\n", " | 0.19s Sorting 7943 nodes of type \"verse\"\n", " | 0.21s Max node = 240527\n", " | 0.21s OK\n", " | 0.00s reassigning feature values ...\n", " | | 0.00s node feature \"after\" with 137779 nodes\n", " | | 0.03s node feature \"book\" with 162187 nodes\n", " | | 0.08s node feature \"booknum\" with 162187 nodes\n", " | | 0.12s node feature \"bookshort\" with 162187 nodes\n", " | | 0.16s node feature \"case\" with 137779 nodes\n", " | | 0.19s node feature \"chapter\" with 162160 nodes\n", " | | 0.24s node feature \"clause\" with 153930 nodes\n", " | | 0.27s node feature \"clauserule\" with 16124 nodes\n", " | | 0.28s node feature \"clausetype\" with 3846 nodes\n", " | | 0.28s node feature \"degree\" with 137779 nodes\n", " | | 0.32s node feature \"formaltag\" with 137779 nodes\n", " | | 0.36s node feature \"functionaltag\" with 137779 nodes\n", " | | 0.39s node feature \"gloss\" with 137779 nodes\n", " | | 0.43s node feature \"gn\" with 137779 nodes\n", " | | 0.46s node feature \"lemma\" with 137779 nodes\n", " | | 0.50s node feature \"lex_dom\" with 137779 nodes\n", " | | 0.53s node feature \"ln\" with 137779 nodes\n", " | | 0.57s node feature \"monad\" with 137779 nodes\n", " | | 0.60s node feature \"mood\" with 137779 nodes\n", " | | 0.64s node feature \"nodeID\" with 137779 nodes\n", " | | 0.67s node feature \"normalized\" with 137779 nodes\n", " | | 0.70s node feature \"nu\" with 137779 nodes\n", " | | 0.74s node feature \"number\" with 137779 nodes\n", " | | 0.78s node feature \"person\" with 137779 nodes\n", " | | 0.81s node feature \"phrase\" with 
210453 nodes\n", " | | 0.87s node feature \"phrasefunction\" with 72674 nodes\n", " | | 0.90s node feature \"phrasefunctionlong\" with 72674 nodes\n", " | | 0.92s node feature \"phrasetype\" with 72674 nodes\n", " | | 0.95s node feature \"reference\" with 137779 nodes\n", " | | 0.98s node feature \"sentence\" with 143553 nodes\n", " | | 1.01s node feature \"sp\" with 137779 nodes\n", " | | 1.05s node feature \"splong\" with 137779 nodes\n", " | | 1.08s node feature \"strongs\" with 137779 nodes\n", " | | 1.12s node feature \"subj_ref\" with 137779 nodes\n", " | | 1.15s node feature \"tense\" with 137779 nodes\n", " | | 1.19s node feature \"type\" with 137779 nodes\n", " | | 1.22s node feature \"verse\" with 161900 nodes\n", " | | 1.27s node feature \"voice\" with 137779 nodes\n", " | | 1.30s node feature \"word\" with 137779 nodes\n", " | 1.40s OK\n", " 0.00s Exporting 40 node and 1 edge and 1 config features to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF:\n", " 0.00s VALIDATING oslots feature\n", " 0.01s VALIDATING oslots feature\n", " 0.01s maxSlot= 137779\n", " 0.02s maxNode= 240527\n", " 0.03s OK: oslots is valid\n", " | 0.13s T after to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.15s T book to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T booknum to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.15s T bookshort to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T case to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T chapter to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T clause to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.02s T clauserule to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.01s T clausetype to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T degree to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T formaltag to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T functionaltag to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T gloss to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T gn to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.16s T lemma to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T lex_dom to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T ln to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T monad to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T mood to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T nodeID to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.16s T normalized to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T nu to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T number to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.04s T otype to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.12s T person to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.26s T phrase to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.18s T phrasefunction to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.09s T phrasefunctionlong to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.09s T phrasetype to 
~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.15s T reference to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T sentence to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T sp to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T splong to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T strongs to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T subj_ref to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T tense to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.13s T type to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.15s T verse to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.14s T voice to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.15s T word to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.43s T oslots to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " | 0.00s M otext to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", " 5.77s Exported 40 node features and 1 edge features and 1 config features to ~/my_new_Jupyter_folder/test_of_xml_etree/outputfilesTF\n", "done\n" ] } ], "source": [ "TF = Fabric(locations=output_dir, silent=False)\n", "cv = CV(TF)\n", "\n", "###############################################\n", "# Common helper functions #\n", "###############################################\n", "\n", "# The following sanitizer function is required to prevent passing float data to the walker function\n", "def sanitize(value):\n", "    # Missing attributes surface as float NaN in the DataFrame; map them to an empty string\n", "    if isinstance(value, float): return ''\n", "    else: return value\n", " \n", "###############################################\n", "# The director routine #\n", "###############################################\n", "\n", "def director(cv):\n", " \n", "    ###############################################\n", "    # Initial setup of data etc. 
#\n", "    ###############################################\n", "    \n", "    NoneType = type(None) # needed as tool to validate certain data\n", "    IndexDict = {} # init an empty dictionary\n", "\n", "    for bo,bookinfo in bo2book.items():\n", "    \n", "        ###############################################\n", "        # start of section executed for each book     #\n", "        ###############################################\n", "    \n", "        # load all data into a dataframe and process books in order (note that bookinfo is a list)\n", "        Book=bookinfo[0] \n", "        BookNum=int(bookinfo[1])\n", "        BookShort=bookinfo[2]\n", "        BookLoc = os.path.join(source_dir, f'{bo}.pkl') \n", "        \n", "        # Read data from PKL file and report progress\n", "        print(f'\\tloading {BookLoc}...')\n", "        with open(BookLoc, 'rb') as PklFile:\n", "            df_unsorted = pickle.load(PklFile)\n", "        \n", "        # Fill dictionary of column names for this book\n", "        ItemsInRow=1\n", "        for itemname in df_unsorted.columns.to_list():\n", "            IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", "            # This is to identify the column containing the key to sort upon\n", "            if itemname==\"{http://www.w3.org/XML/1998/namespace}id\": SortKey=ItemsInRow-1\n", "            ItemsInRow+=1\n", "        \n", "        # Sort the nodes\n", "        df=df_unsorted.sort_values(by=df_unsorted.columns[SortKey])\n", "        del df_unsorted\n", "        \n", "        # Reset/load the following initial variables (we are at the start of a new book) \n", "        phrasefunction = prev_phrasefunction = phrasefunctionlong = prev_phrasefunctionlong='TBD'\n", "        this_clausetype = this_clauserule = phrasetype=\"unknown\"\n", "        prev_chapter = prev_verse = prev_sentence = prev_clause = prev_phrase = 1 \n", "        sentence_track = clause_track = phrase_track = 1\n", "        sentence_done = clause_done = phrase_done = verse_done = chapter_done = book_done = False \n", "        wrdnum = 0 # start at 0 \n", "\n", "        # Create a set of nodes at the start of a new book\n", "        \n", "        ThisBookPointer = cv.node('book')\n", "        cv.feature(ThisBookPointer, book=Book, booknum=BookNum, bookshort=BookShort)\n", "        \n", "        ThisChapterPointer = cv.node('chapter')\n", "        cv.feature(ThisChapterPointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=1)\n", "        \n", "        ThisVersePointer = cv.node('verse')\n", "        cv.feature(ThisVersePointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=1, verse=1)\n", "        \n", "        ThisSentencePointer = cv.node('sentence')\n", "        cv.feature(ThisSentencePointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=1, verse=1, sentence=1)\n", "        \n", "        ThisClausePointer = cv.node('clause')\n", "        cv.feature(ThisClausePointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=1, verse=1, sentence=1, clause=1)\n", "        \n", "        ThisPhrasePointer = cv.node('phrase')\n", "        cv.feature(ThisPhrasePointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=1, verse=1, sentence=1, clause=1, phrase=1)\n", "        \n", "        ###############################################\n", "        # Iterate through words and construct objects #\n", "        ###############################################\n", "        \n", "        for row in df.itertuples():\n", "            wrdnum += 1\n", "            \n", "            # Get the number of parent nodes for this word\n", "            parents = row[IndexDict.get(\"i_parents\")]\n", "            \n", "            # Get chapter and verse for this word from the data\n", "            chapter = row[IndexDict.get(\"i_chapter\")]\n", "            verse = row[IndexDict.get(\"i_verse\")]\n", "            \n", "            # Get clause rule and type info of parent clause\n", "            for i in range(1,parents-1):\n", "                item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", "                if 
row[item]==\"CL\":\n", "                    clauseparent=i\n", "                    this_clauserule=row[IndexDict.get(\"i_Parent{}Rule\".format(i))] \n", "                    this_clausetype=row[IndexDict.get(\"i_Parent{}ClType\".format(i))] \n", "                    break\n", "            cv.feature(ThisClausePointer, clause=clause_track, clauserule=this_clauserule, clausetype=this_clausetype, book=Book, booknum=BookNum, bookshort=BookShort, chapter=chapter, verse=verse)\n", "            \n", "\n", "            # Get phrase type info\n", "            prev_phrasetype=phrasetype\n", "            for i in range(1,parents-1):\n", "                item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", "                if row[item]==\"np\":\n", "                    _item =\"i_Parent{}Rule\".format(i)\n", "                    phrasetype=row[IndexDict.get(_item)]\n", "                    break\n", "            functionaltag=row[IndexDict.get('i_FunctionalTag')]\n", "\n", "            \n", "            # Determine syntactic categories of clause parts. See also the description in \n", "            # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" pages 5 and 6\n", "            # (section 2.4 Syntactic Categories at Clause Level)\n", "            prev_phrasefunction=phrasefunction\n", "            for i in range(1,clauseparent): \n", "                phrasefunction = row[IndexDict.get(\"i_Parent{}Cat\".format(i))] \n", "                if phrasefunction==\"ADV\": \n", "                    phrasefunctionlong='Adverbial function'\n", "                    break\n", "                elif phrasefunction==\"IO\":\n", "                    phrasefunctionlong='Indirect Object function'\n", "                    break\n", "                elif phrasefunction==\"O\":\n", "                    phrasefunctionlong='Object function'\n", "                    break\n", "                elif phrasefunction==\"O2\":\n", "                    phrasefunctionlong='Second Object function'\n", "                    break\n", "                elif phrasefunction==\"S\":\n", "                    phrasefunctionlong='Subject function'\n", "                    break\n", "                elif phrasefunction=='P':\n", "                    phrasefunctionlong='Predicate function'\n", "                    break\n", "                elif phrasefunction==\"V\":\n", "                    phrasefunctionlong='Verbal function'\n", "                    break\n", "                elif phrasefunction==\"VC\":\n", "                    phrasefunctionlong='Verbal Copula function'\n", "                    break\n", "            if prev_phrasefunction!=phrasefunction and wrdnum!=1:\n", "                phrase_done = True\n", "\n", "\n", "            # Determine syntactic categories at word level. See also the description in \n", "            # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" pages 6 and 7\n", "            # (2.2. 
Syntactic Categories at Word Level: Part of Speech Labels)\n", "            sp=sanitize(row[IndexDict.get(\"i_Cat\")])\n", "            if sp=='adj': splong='adjective'\n", "            elif sp=='adv': splong='adverb'\n", "            elif sp=='conj': splong='conjunction'\n", "            elif sp=='det': splong='determiner' \n", "            elif sp=='intj': splong='interjection' \n", "            elif sp=='noun': splong='noun' \n", "            elif sp=='num': splong='numeral' \n", "            elif sp=='prep': splong='preposition' \n", "            elif sp=='ptcl': splong='particle' \n", "            elif sp=='pron': splong='pronoun' \n", "            elif sp=='verb': splong='verb' \n", "            \n", "            \n", "            '''\n", "            Determine if conditions are met to trigger some action;\n", "            the action will be executed when the next word is processed\n", "            ''' \n", "            \n", "            # Detect chapter boundary\n", "            if prev_chapter != chapter:\n", "                chapter_done = True\n", "                verse_done=True\n", "                sentence_done = True\n", "                clause_done = True\n", "                phrase_done = True\n", "            \n", "            # Detect verse boundary\n", "            if prev_verse != verse:\n", "                verse_done=True\n", "            \n", "            \n", "            '''\n", "            Handle TF events and determine what actions need to be done if the proper condition is met.\n", "            ''' \n", "\n", "            # Act upon end of phrase (close)\n", "            if phrase_done or clause_done or sentence_done:\n", "                cv.feature(ThisPhrasePointer, phrase=phrase_track, phrasetype=prev_phrasetype, phrasefunction=prev_phrasefunction, phrasefunctionlong=prev_phrasefunctionlong)\n", "                cv.terminate(ThisPhrasePointer)\n", "                prev_phrasefunction=phrasefunction\n", "                prev_phrasefunctionlong=phrasefunctionlong\n", "            \n", "            # Act upon end of clause (close) \n", "            if clause_done:\n", "                cv.terminate(ThisClausePointer)\n", "            \n", "            # Act upon end of sentence (close)\n", "            if sentence_done:\n", "                cv.terminate(ThisSentencePointer)\n", "            \n", "            # Act upon end of verse (close)\n", "            if verse_done:\n", "                cv.terminate(ThisVersePointer)\n", "                prev_verse = verse \n", "\n", "            # Act upon end of chapter (close)\n", "            if chapter_done:\n", "                cv.terminate(ThisChapterPointer)\n", "                prev_chapter = chapter\n", "\n", "            \n", "            # Start of chapter (create new)\n", "            if chapter_done:\n", "                ThisChapterPointer = cv.node('chapter')\n", "                cv.feature(ThisChapterPointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=chapter)\n", "                chapter_done = False\n", "            \n", "            # Start of verse (create new)\n", "            if verse_done:\n", "                ThisVersePointer = cv.node('verse')\n", "                cv.feature(ThisVersePointer, book=Book, booknum=BookNum, bookshort=BookShort, chapter=chapter, verse=verse)\n", "                verse_done = False \n", "            \n", "            # Start of sentence (create new)\n", "            if sentence_done:\n", "                ThisSentencePointer= cv.node('sentence')\n", "                cv.feature(ThisSentencePointer, sentence=sentence_track)\n", "                sentence_track += 1\n", "                sentence_done = False\n", "            \n", "            # Start of clause (create new) \n", "            if clause_done:\n", "                ThisClausePointer = cv.node('clause')\n", "                cv.feature(ThisClausePointer, clause=clause_track, clauserule=this_clauserule,clausetype=this_clausetype)\n", "                clause_track += 1\n", "                clause_done = False\n", "                phrase_done = True \n", "            \n", "            # Start of phrase (create new)\n", "            if phrase_done:\n", "                ThisPhrasePointer = cv.node('phrase')\n", "                cv.feature(ThisPhrasePointer, phrase=phrase_track, phrasefunction=phrasefunction, phrasefunctionlong=phrasefunctionlong)\n", "                prev_phrase = phrase_track\n", "                prev_phrasefunction=phrasefunction\n", "                prev_phrasefunctionlong=phrasefunctionlong\n", "                phrase_track += 1\n", "                phrase_done = False\n", "            \n", "            \n", "            # Detect boundaries of sentences, clauses and phrases \n", "            
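# A trailing '.' closes the sentence (and with it the clause and phrase);\n", "            # a trailing ';' or ',' closes only the clause and phrase.\n", "            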
text=row[IndexDict.get(\"i_Unicode\")]\n", "            if text[-1:] == \".\" : \n", "                sentence_done = True\n", "                clause_done = True\n", "                phrase_done = True\n", "            if text[-1:] == \";\" or text[-1:] == \",\":\n", "                clause_done = True\n", "                phrase_done = True \n", "            \n", "            \n", "            '''\n", "            -- create word nodes --\n", "            ''' \n", "            \n", "            # Get the word details and detect the presence of punctuation\n", "            word=row[IndexDict.get(\"i_Unicode\")]\n", "            match = re.search(r\"([\\.·—,;])$\", word)\n", "            if match: \n", "                # The group(0) method is used to retrieve the matched punctuation sign\n", "                after=match.group(0)+' '\n", "                # Remove the punctuation from the end of the word\n", "                word=word[:-1]\n", "            \n", "            else: \n", "                after=' '\n", "            \n", "            # Some attributes are not present inside some (small) books. The following is to prevent exceptions.\n", "            degree='' \n", "            if 'i_Degree' in IndexDict: \n", "                degree=sanitize(row[IndexDict.get(\"i_Degree\")]) \n", "            subjref=''\n", "            if 'i_SubjRef' in IndexDict:\n", "                subjref=sanitize(row[IndexDict.get(\"i_SubjRef\")]) \n", "            \n", "\n", "            # Create the word node \n", "            ThisWordPointer = cv.slot()\n", "            cv.feature(ThisWordPointer, \n", "                       after=after,\n", "                       word=word,\n", "                       monad=row[IndexDict.get(\"i_monad\")],\n", "                       book=Book,\n", "                       booknum=BookNum,\n", "                       bookshort=BookShort,\n", "                       chapter=chapter,\n", "                       sp=sp,\n", "                       splong=splong,\n", "                       verse=verse,\n", "                       sentence=sentence_track,\n", "                       clause=clause_track,\n", "                       phrase=phrase_track,\n", "                       normalized=sanitize(row[IndexDict.get(\"i_NormalizedForm\")]),\n", "                       formaltag=sanitize(row[IndexDict.get(\"i_FormalTag\")]),\n", "                       functionaltag=functionaltag,\n", "                       strongs=sanitize(row[IndexDict.get(\"i_StrongNumber\")]),\n", "                       lex_dom=sanitize(row[IndexDict.get(\"i_LexDomain\")]),\n", "                       ln=sanitize(row[IndexDict.get(\"i_LN\")]),\n", "                       gloss=sanitize(row[IndexDict.get(\"i_Gloss\")]),\n", "                       gn=sanitize(row[IndexDict.get(\"i_Gender\")]),\n", "                       nu=sanitize(row[IndexDict.get(\"i_Number\")]),\n", "                       case=sanitize(row[IndexDict.get(\"i_Case\")]),\n", "                       lemma=sanitize(row[IndexDict.get(\"i_UnicodeLemma\")]),\n", "                       person=sanitize(row[IndexDict.get(\"i_Person\")]),\n", "                       mood=sanitize(row[IndexDict.get(\"i_Mood\")]),\n", "                       tense=sanitize(row[IndexDict.get(\"i_Tense\")]),\n", "                       number=sanitize(row[IndexDict.get(\"i_Number\")]),\n", "                       voice=sanitize(row[IndexDict.get(\"i_Voice\")]),\n", "                       degree=degree,\n", "                       type=sanitize(row[IndexDict.get(\"i_Type\")]),\n", "                       reference=sanitize(row[IndexDict.get(\"i_Ref\")]), # the capital R is critical here!\n", "                       subj_ref=subjref,\n", "                       nodeID=row[1] # this is a fixed position\n", "                      )\n", "            cv.terminate(ThisWordPointer)\n", "\n", "            \n", "        '''\n", "        Wrap up the book. 
At the end of the book we need to close all nodes in the proper order.\n", "        ''' \n", "        \n", "        # Close all nodes (phrase, clause, sentence, verse, chapter and book)\n", "        cv.feature(ThisPhrasePointer, phrase=phrase_track, phrasetype=prev_phrasetype,phrasefunction=prev_phrasefunction,phrasefunctionlong=prev_phrasefunctionlong)\n", "        cv.terminate(ThisPhrasePointer)\n", "        cv.feature(ThisClausePointer, clause=clause_track, clauserule=this_clauserule, clausetype=this_clausetype)\n", "        cv.terminate(ThisClausePointer)\n", "        cv.terminate(ThisSentencePointer)\n", "        cv.terminate(ThisVersePointer)\n", "        cv.terminate(ThisChapterPointer)\n", "        cv.terminate(ThisBookPointer)\n", "        \n", "        # Clear dataframe for this book \n", "        del df\n", "        # Clear the index dictionary\n", "        IndexDict.clear()\n", "        gc.collect()\n", "        \n", "        ###############################################\n", "        # End of section executed for each book       #\n", "        ###############################################\n", "        \n", "    ###############################################\n", "    # End of director function                    #\n", "    ###############################################\n", "    \n", "###############################################\n", "# Output definitions #\n", "###############################################\n", " \n", "slotType = 'word'  # the smallest unit of text in this corpus is the word\n", "otext = { # dictionary of config data for sections and text formats\n", "    'fmt:text-orig-full':'{word}{after}',\n", "    'sectionTypes':'book,chapter,verse',\n", "    'sectionFeatures':'book,chapter,verse',\n", "    'structureFeatures': 'book,chapter,verse',\n", "    'structureTypes': 'book,chapter,verse',\n", "    }\n", "\n", "# Configure metadata\n", "generic = { # dictionary of metadata which will be included in all feature files\n", "    'textFabricVersion': '{}'.format(VERSION), # imported from tf.parameters\n", "    'xmlSourceLocation': 'https://github.com/tonyjurg/Nestle1904GBI/tree/main/resources/sourcedata/apr_6_2023',\n", "    'xmlSourceDate': 'April 6, 2023',\n", "    'author': 'Evangelists and apostles',\n", "    'availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',\n", "    'converters': 'Tony Jurg',\n", "    'converterSource': 'https://github.com/tonyjurg/Nestle1904GBI/tree/main/resources/converter',\n", "    'converterVersion': '0.4',\n", "    'dataSource': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n", "    'editors': 'Eberhard Nestle (1904)',\n", "    'sourceDescription': 'Greek New Testament (British Foreign Bible Society, 1904)',\n", "    'sourceFormat': 'XML (GBI tree node data)',\n", "    'title': 'Greek New Testament (Nestle 1904 GBI)'\n", "    }\n", "\n", "intFeatures = { # set of integer valued feature names\n", "    'booknum',\n", "    'chapter',\n", "    'verse',\n", "    'sentence',\n", "    'clause',\n", "    'phrase',\n", "    'monad'\n", "    }\n", "\n", "# note: 'description' should start with a lowercase letter. 
Using an uppercase first letter leaves this info out of the node overview\n", "featureMeta = { # per feature dicts with metadata\n", "    'after': {'description': 'Character after the word (space or punctuation)'},\n", "    'book': {'description': 'Book name (fully spelled out)'},\n", "    'booknum': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n", "    'bookshort': {'description': 'Book name (abbreviated)'},\n", "    'chapter': {'description': 'Chapter number inside book'},\n", "    'verse': {'description': 'Verse number inside chapter'},\n", "    'sentence': {'description': 'Sentence number (counted per book)'},\n", "    'clause': {'description': 'Clause number (counted per book)'},\n", "    'clauserule': {'description': 'Clause rule'},\n", "    'clausetype': {'description': 'Clause type'},\n", "    'phrase' : {'description': 'Phrase number (counted per book)'},\n", "    'phrasetype' : {'description': 'Phrase type information'},\n", "    'phrasefunction' : {'description': 'Phrase function (abbreviated)'},\n", "    'phrasefunctionlong' : {'description': 'Phrase function (long description)'},\n", "    'monad': {'description': 'Sequence number of the smallest meaningful unit of text (single word)'},\n", "    'word': {'description': 'Word as it appears in the text'},\n", "    'sp': {'description': 'Part of speech (abbreviated)'},\n", "    'splong': {'description': 'Part of speech (long description)'}, \n", "    'normalized': {'description': 'Surface word stripped of punctuation'},\n", "    'lemma': {'description': 'Lexeme (lemma)'},\n", "    'formaltag': {'description': 'Formal tag (Sandborg-Petersen morphology)'},\n", "    'functionaltag': {'description': 'Functional tag (Sandborg-Petersen morphology)'},\n", "    # see also discussion on relation between lex_dom and ln @ https://github.com/Clear-Bible/macula-greek/issues/29\n", "    'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG'},\n", "    'ln': {'description': 'Louw-Nida lexical classification'},\n", "    'strongs': {'description': \"Strong's number\"},\n", "    'gloss': {'description': 'English gloss'},\n", "    'gn': {'description': 'Grammatical gender (Masculine, Feminine, Neuter)'},\n", "    'nu': {'description': 'Grammatical number (Singular, Plural)'},\n", "    'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n", "    'person': {'description': 'Grammatical person of the verb (first, second, third)'},\n", "    'mood': {'description': 'Grammatical mood of the verb (e.g. indicative, imperative)'},\n", "    'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},\n", "    'number': {'description': 'Grammatical number of the verb'},\n", "    'voice': {'description': 'Grammatical voice of the verb'},\n", "    'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},\n", "    'type': {'description': 'Grammatical type of noun or pronoun (e.g. 
Common, Personal)'},\n", "    'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},\n", "    'subj_ref': {'description': 'Subject reference (to nodeID in XML source data)'},\n", "    'nodeID': {'description': 'Node ID (as in the XML source data)'}\n", "    }\n", "\n", "'''\n", "    -- The main function --\n", "''' \n", "\n", "\n", "good = cv.walk(\n", "    director,\n", "    slotType,\n", "    otext=otext,\n", "    generic=generic,\n", "    intFeatures=intFeatures,\n", "    featureMeta=featureMeta,\n", "    warn=True,\n", "    force=True\n", ")\n", "\n", "if good:\n", "    print(\"done\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.5" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "321.391px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }