{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Creating Text-Fabric dataset (from GBI trees XML nodes)\n", "The source data for the conversion are the XML node files representing the macula-greek version of Eberhard Nestle's 1904 Greek New Testament (British Foreign Bible Society, 1904). The starting dataset is formatted according to Syntax diagram markup by the Global Bible Initiative (GBI). The most recent source data can be found on github https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes. Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n", "\n", "The production of the Text-Fabric files consist of two major steps. First the creation of pickle files (part 1). Secondly the actual Text-Fabric creation process (part 2). Both steps are independent, allowing to start from part 2 by using the pickle files as input. \n", "\n", "Please be advised that this Text-Fabric version is a test version (proof of concept) and may requires further finetuning, especialy with regards of nomenclature and presentation of (sub)phrases and clauses." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of content \n", "* [Part 1: Read GBI XML data and store in pickle](#first-bullet)\n", "* [Part 2: Sort the nodes](#second-bullet)\n", "* [Part 3: Nestle1904GBI production from pickle input](#third-bullet)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Read GBI XML data and store in pickle \n", "##### [Back to TOC](#TOC)\n", "\n", "This script harvests all information from the GBI tree XML data file, puts it into a Panda DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer).\n", "\n", "In the context of this script, 'Leaf' refers to those node containing the Greek word as data, which happen to be the nodes without any child (hence the analogy with the leaves on the tree). These 'leafs' can also be refered to as 'terminal nodes'. Futher, Parent1 is the leaf's parent, Parent2 is Parent1's parent, etc.\n", "\n", "For a full description of the source data see document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Step 1: import various libraries" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T02:58:14.739227Z", "start_time": "2022-10-28T02:57:38.766097Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import sys\n", "import os\n", "import time\n", "import pickle\n", "\n", "import re #regular expressions\n", "from os import listdir\n", "from os.path import isfile, join\n", "import xml.etree.ElementTree as ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: initialize global data\n", "\n", "IMPORTANT: In case you want to build the Text-Fabric files on your own system, you need to change BaseDir, InputDir and OutputDir to match location of the datalocation and the OS used." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "InputDir = BaseDir+'inputfiles\\\\'\n", "OutputDir = BaseDir+'outputfiles\\\\'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3: define Function to add parent info to each node of the XML tree\n", "\n", "In order to traverse from the 'leafs' (terminating nodes) upto the root of the tree, it is required to add information to each node pointing to the parent of each node.\n", "\n", "(concept taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def addParentInfo(et):\n", " for child in et:\n", " child.attrib['parent'] = et\n", " addParentInfo(child)\n", "\n", "def getParent(et):\n", " if 'parent' in et.attrib:\n", " return et.attrib['parent']\n", " else:\n", " return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4: read and process the XML data and store panda dataframe in pickle" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# set some globals\n", "monad=1\n", "CollectedItems= 0\n", "\n", "# process books in order\n", "for bo, bookinfo in bo2book.items():\n", " CollectedItems=0\n", " full_df=pd.DataFrame({})\n", " book_long=bookinfo[0]\n", " booknum=bookinfo[1]\n", " book_short=bookinfo[2]\n", " InputFile = os.path.join(InputDir, f'{bo}.xml')\n", " OutputFile = os.path.join(OutputDir, f'{bo}.pkl')\n", " print(f'Processing {book_long} at {InputFile}')\n", "\n", " # send xml document to parsing process\n", " tree = ET.parse(InputFile)\n", " # Now add all the parent info to the nodes in the xtree [important!]\n", " addParentInfo(tree.getroot())\n", " start_time = time.time()\n", " \n", " # walk over all the leaves and harvest the data\n", " for elem in tree.iter():\n", " if not list(elem):\n", " # if no child elements, this is a leaf/terminal node\n", " \n", " # show progress on screen\n", " 
CollectedItems+=1\n", " if (CollectedItems%100==0): print (\".\",end='')\n", " \n", " #Leafref will contain list with book, chapter verse and wordnumber\n", " Leafref = re.sub(r'[!: ]',\" \", elem.attrib.get('ref')).split()\n", " \n", " #push value for monad to element tree \n", " elem.set('monad', monad)\n", " monad+=1\n", " \n", " # add some important computed data to the leaf\n", " elem.set('LeafName', elem.tag)\n", " elem.set('word', elem.text)\n", " elem.set('book_long', book_long)\n", " elem.set('booknum', int(booknum))\n", " elem.set('book_short', book_short)\n", " elem.set('chapter', int(Leafref[1]))\n", " elem.set('verse', int(Leafref[2]))\n", " \n", " # following code will trace down parents upto the tree and store found attributes\n", " parentnode=getParent(elem)\n", " index=0\n", " while (parentnode):\n", " index+=1\n", " elem.set('Parent{}Name'.format(index), parentnode.tag)\n", " elem.set('Parent{}Type'.format(index), parentnode.attrib.get('Type'))\n", " elem.set('Parent{}Cat'.format(index), parentnode.attrib.get('Cat'))\n", " elem.set('Parent{}Start'.format(index), parentnode.attrib.get('Start'))\n", " elem.set('Parent{}End'.format(index), parentnode.attrib.get('End'))\n", " elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('Rule'))\n", " elem.set('Parent{}Head'.format(index), parentnode.attrib.get('Head'))\n", " elem.set('Parent{}NodeId'.format(index),parentnode.attrib.get('nodeId'))\n", " elem.set('Parent{}ClType'.format(index),parentnode.attrib.get('ClType'))\n", " elem.set('Parent{}HasDet'.format(index),parentnode.attrib.get('HasDet'))\n", " currentnode=parentnode\n", " parentnode=getParent(currentnode) \n", " elem.set('parents', int(index))\n", " \n", " #this will push all elements found in the tree into a DataFrame\n", " df=pd.DataFrame(elem.attrib, index={monad})\n", " full_df=pd.concat([full_df,df])\n", " \n", " #store the resulting DataFrame per book into a pickle file for further processing\n", " df = df.convert_dtypes(convert_string=True)\n", " output = open(r\"{}\".format(OutputFile), 'wb')\n", " pickle.dump(full_df, output)\n", " output.close()\n", " print(\"\\nFound \",CollectedItems, \" items in %s seconds\\n\" % (time.time() - start_time)) \n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Sort the nodes \n", "##### [Back to TOC](#TOC)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The node data is not the same as the word order in the running text. This part is to sort the dataframes accordingly." 
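] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The toy example below (made-up values, not part of the pipeline) illustrates the sort that the next cell applies per book. Note that sort_values returns a new DataFrame, so the result has to be assigned:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "# rows arrive in node order, not in running-text order\n", "toy = pd.DataFrame({'nodeId': ['n003', 'n001', 'n002'], 'word': ['c', 'a', 'b']})\n", "toy = toy.sort_values(by=['nodeId'])  # not in-place: the result must be assigned\n", "print(toy)"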
] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\01-matthew.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\02-mark.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\03-luke.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\04-john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\05-acts.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\06-romans.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\07-1corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\08-2corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\09-galatians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\10-ephesians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\11-philippians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\12-colossians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\13-1thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\14-2thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\15-1timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\16-2timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\17-titus.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\18-philemon.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\19-hebrews.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\20-james.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\21-1peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\22-2peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\23-1john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\24-2john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\25-3john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\26-jude.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\27-revelation.pkl...\n" ] } ], "source": [ "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "source_dir = BaseDir+'outputfiles\\\\' #the input files (with 'wordjumps')\n", "output_dir = BaseDir+'outputfiles_sorted\\\\' #the output files (words in order of running text)\n", "\n", "\n", "for bo in bo2book:\n", " '''\n", " load all data into a dataframe\n", " process books in order (bookinfo is a list!)\n", " ''' \n", " InputFile = os.path.join(source_dir, f'{bo}.pkl')\n", " OutputFile = os.path.join(output_dir, 
f'{bo}.pkl')\n", "    \n", "    print(f'\\tloading {InputFile}...')\n", "    pkl_file = open(InputFile, 'rb')\n", "    df = pickle.load(pkl_file)\n", "    pkl_file.close()\n", "    \n", "    # fill dictionary of column names for this book \n", "    IndexDict = {} # init an empty dictionary\n", "    ItemsInRow=1\n", "    for itemname in df.columns.to_list():\n", "        IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", "        ItemsInRow+=1\n", "    \n", "    # sort by nodeId (note: sort_values returns a new DataFrame, so assign the result)\n", "    df = df.sort_values(by=['nodeId'])\n", "    #store the resulting DataFrame per book into a pickle file for further processing\n", "    #df = df.convert_dtypes(convert_string=True) DO NOT DO THIS! IT MUTILATES THE DATA...\n", "    output = open(r\"{}\".format(OutputFile), 'wb')\n", "    pickle.dump(df, output)\n", "    output.close()\n", "    \n", "    " ] }, { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "## Part 3: Nestle1904GBI Text-Fabric production from pickle input \n", "##### [Back to TOC](#TOC)\n", "\n", "This script creates the Text-Fabric files by recursively calling the TF walker function.\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The pickle files created in the previous parts are also stored at the GitHub location https://github.com/tonyjurg/Nestle1904GBI/tree/main/resources/picklefiles" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: Load libraries and initialize some data\n", "\n", "Change BaseDir, source_dir and output_dir to match the location of the data and the conventions of the OS used." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T03:01:34.810259Z", "start_time": "2022-10-28T03:01:25.745112Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import re\n", "import gc\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "from tf.parameters import VERSION\n", "from datetime import date\n", "import pickle\n", "\n", "\n", "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "source_dir = BaseDir+'outputfiles_sorted\\\\' # the input for the walker is the sorted pickle output of Part 2 \n", "#output_dir = BaseDir+'outputfilesTF\\\\' #the TextFabric files\n", "output_dir = 'C:\\\\text-fabric-data\\\\github\\\\tonyjurg\\\\Nestle1904GBI\\\\tf'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [2]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", "           '02-mark': ['Mark', '2', 'Mark'],\n", "           '03-luke': ['Luke', '3', 'Luke'],\n", "           '04-john': ['John', '4', 'John'],\n", "           '05-acts': ['Acts', '5', 'Acts'],\n", "           '06-romans': ['Romans', '6', 'Rom'],\n", "           '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", "           '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", "           '09-galatians': ['Galatians', '9', 'Gal'],\n", "           '10-ephesians': ['Ephesians', '10', 'Eph'],\n", "           '11-philippians': ['Philippians', '11', 'Phil'],\n", "           '12-colossians': ['Colossians', '12', 'Col'],\n", "           '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", "           '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", "           '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", "           '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", "           '17-titus': ['Titus', '17', 'Titus'],\n", "           '18-philemon': ['Philemon', '18', 'Phlm'],\n", "           '19-hebrews': ['Hebrews', '19', 'Heb'],\n", "           '20-james': ['James', '20', 'Jas'],\n", "           '21-1peter': ['I_Peter', '21', '1Pet'],\n", "           '22-2peter': ['II_Peter', '22', '2Pet'],\n", "           '23-1john': ['I_John', '23', '1John'],\n", "           '24-2john': ['II_John', '24', '2John'],\n", "           '25-3john': ['III_John', '25', '3John'], \n", "           '26-jude': ['Jude', '26', 'Jude'],\n", "           '27-revelation': ['Revelation', '27', 'Rev']}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: Running the Text-Fabric walker function\n", "\n", "Text-Fabric API info can be found at https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "Explanatory notes regarding the logic of interpreting the data are included in the Python code of the director function." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.4.10\n", "0 features found and 0 ignored\n", "  0.00s Not all of the warp features otype and oslots are present in\n", "C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", "  0.00s Only the Feature and Edge APIs will be enabled\n", "  0.00s Warp feature \"otext\" not found. Working without Text-API\n", "\n", "  0.00s Importing data from walking through the source ...\n", "   |     0.00s Preparing metadata... \n", "   |   SECTION   TYPES:    book, chapter, verse\n", "   |   SECTION   FEATURES: book, chapter, verse\n", "   |   STRUCTURE TYPES:    book, chapter, verse\n", "   |   STRUCTURE FEATURES: book, chapter, verse\n", "   |   TEXT      FEATURES:\n", "   |      |   text-orig-full       word\n", "   |     0.00s OK\n", "   |     0.00s Following director... \n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\01-matthew.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\02-mark.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\03-luke.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\04-john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\05-acts.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\06-romans.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\07-1corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\08-2corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\09-galatians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\10-ephesians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\11-philippians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\12-colossians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\13-1thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\14-2thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\15-1timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\16-2timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\17-titus.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\18-philemon.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\19-hebrews.pkl...\n", "\tloading 
C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\20-james.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\21-1peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\22-2peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\23-1john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\24-2john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\25-3john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\26-jude.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles_sorted\\27-revelation.pkl...\n", " | 39s \"edge\" actions: 0\n", " | 39s \"feature\" actions: 379313\n", " | 39s \"node\" actions: 103647\n", " | 39s \"resume\" actions: 0\n", " | 39s \"slot\" actions: 137779\n", " | 39s \"terminate\" actions: 241426\n", " | 27 x \"book\" node \n", " | 260 x \"chapter\" node \n", " | 16124 x \"clause\" node \n", " | 73572 x \"phrase\" node \n", " | 5720 x \"sentence\" node \n", " | 7944 x \"verse\" node \n", " | 137779 x \"word\" node = slot type\n", " | 241426 nodes of all types\n", " | 39s OK\n", " | 0.02s Removing unlinked nodes ... \n", " | | 0.00s 25 unlinked \"phrase\" nodes: [1, 10018, 27166, 46044, 49656] ...\n", " | | 0.00s 25 unlinked nodes\n", " | | 0.00s Leaving 241401 nodes\n", " | 0.00s checking for nodes and edges ... \n", " | 0.00s OK\n", " | 0.00s checking (section) features ... \n", " | 0.22s OK\n", " | 0.00s reordering nodes ...\n", " | 0.03s Sorting 27 nodes of type \"book\"\n", " | 0.04s Sorting 260 nodes of type \"chapter\"\n", " | 0.05s Sorting 16124 nodes of type \"clause\"\n", " | 0.08s Sorting 73547 nodes of type \"phrase\"\n", " | 0.17s Sorting 5720 nodes of type \"sentence\"\n", " | 0.19s Sorting 7944 nodes of type \"verse\"\n", " | 0.21s Max node = 241401\n", " | 0.21s OK\n", " | 0.00s reassigning feature values ...\n", " | | 0.50s node feature \"book\" with 27 nodes\n", " | | 0.50s node feature \"book_long\" with 137779 nodes\n", " | | 0.53s node feature \"book_short\" with 137806 nodes\n", " | | 0.57s node feature \"booknum\" with 137806 nodes\n", " | | 0.61s node feature \"case\" with 137779 nodes\n", " | | 0.64s node feature \"chapter\" with 138039 nodes\n", " | | 0.68s node feature \"clause\" with 153903 nodes\n", " | | 0.71s node feature \"clauserule\" with 16124 nodes\n", " | | 0.72s node feature \"clausetype\" with 3603 nodes\n", " | | 0.72s node feature \"degree\" with 137779 nodes\n", " | | 0.76s node feature \"formaltag\" with 137779 nodes\n", " | | 0.79s node feature \"functionaltag\" with 137779 nodes\n", " | | 0.83s node feature \"gloss_EN\" with 137779 nodes\n", " | | 0.86s node feature \"gn\" with 137779 nodes\n", " | | 0.90s node feature \"lemma\" with 137779 nodes\n", " | | 0.93s node feature \"lex_dom\" with 137779 nodes\n", " | | 0.97s node feature \"ln\" with 137779 nodes\n", " | | 1.01s node feature \"monad\" with 137779 nodes\n", " | | 1.04s node feature \"mood\" with 137779 nodes\n", " | | 1.07s node feature \"nodeID\" with 137779 nodes\n", " | | 1.11s node feature \"normalized\" with 137779 nodes\n", " | | 1.14s node feature \"nu\" with 137779 nodes\n", " | | 1.18s node feature \"number\" with 137779 nodes\n", " | | 1.22s node feature \"orig_order\" with 137779 
nodes\n", " | | 1.25s node feature \"person\" with 137779 nodes\n", " | | 1.28s node feature \"phrase\" with 211326 nodes\n", " | | 1.34s node feature \"phrasefunction\" with 73547 nodes\n", " | | 1.37s node feature \"phrasefunction_long\" with 73547 nodes\n", " | | 1.39s node feature \"phrasetype\" with 73547 nodes\n", " | | 1.42s node feature \"reference\" with 137779 nodes\n", " | | 1.45s node feature \"sentence\" with 143499 nodes\n", " | | 1.49s node feature \"sp\" with 137779 nodes\n", " | | 1.52s node feature \"sp_full\" with 137779 nodes\n", " | | 1.56s node feature \"strongs\" with 137779 nodes\n", " | | 1.59s node feature \"subj_ref\" with 137779 nodes\n", " | | 1.63s node feature \"tense\" with 137779 nodes\n", " | | 1.67s node feature \"type\" with 137779 nodes\n", " | | 1.70s node feature \"verse\" with 145723 nodes\n", " | | 1.73s node feature \"voice\" with 137779 nodes\n", " | | 1.77s node feature \"word\" with 137779 nodes\n", " | 1.38s OK\n", " 0.00s Exporting 41 node and 1 edge and 1 config features to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf:\n", " 0.00s VALIDATING oslots feature\n", " 0.02s VALIDATING oslots feature\n", " 0.02s maxSlot= 137779\n", " 0.02s maxNode= 241401\n", " 0.03s OK: oslots is valid\n", " | 0.00s T book to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T book_long to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T book_short to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T booknum to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T case to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T chapter to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.15s T clause to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " | 0.02s T clauserule to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.01s T clausetype to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.15s T degree to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T formaltag to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T functionaltag to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T gloss_EN to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T gn to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.15s T lemma to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T lex_dom to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T ln to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T monad to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T mood to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T nodeID to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.15s T normalized to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T nu to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T number to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T orig_order to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.05s T otype to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T person to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.20s T phrase to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.07s T phrasefunction to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", 
" | 0.07s T phrasefunction_long to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.08s T phrasetype to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T reference to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T sentence to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T sp to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T sp_full to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T strongs to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T subj_ref to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T tense to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.13s T type to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T verse to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.14s T voice to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.16s T word to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.27s T oslots to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " | 0.00s M otext to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", " 5.39s Exported 41 node features and 1 edge features and 1 config features to C:/text-fabric-data/github/tonyjurg/Nestle1904GBI/tf\n", "done\n" ] } ], "source": [ "TF = Fabric(locations=output_dir, silent=False)\n", "cv = CV(TF)\n", "version = \"0.2\"\n", "\n", "# the following is required to prevent passing float data to the walker function\n", "def sanitize(input):\n", " if isinstance(input, float): return ''\n", " else: return (input)\n", " \n", "\n", "def director(cv):\n", " \n", " NoneType = type(None) # needed as tool to validate certain data\n", " prev_book = \"Matthew\" # start at first book\n", " IndexDict = {} # init an empty dictionary\n", "\n", " for bo,bookinfo in bo2book.items():\n", " \n", " # load all data into a dataframe and process books in order (bookinfo is a list!)\n", " book=bookinfo[0] \n", " booknum=int(bookinfo[1])\n", " book_short=bookinfo[2]\n", " book_loc = os.path.join(source_dir, f'{bo}.pkl') \n", " \n", " # Report progress or reading data\n", " print(f'\\tloading {book_loc}...')\n", " pkl_file = open(book_loc, 'rb')\n", " df = pickle.load(pkl_file)\n", " pkl_file.close()\n", " \n", " # reset/load the following initial variables (we are at the start of a new book) \n", " phrasefunction = prev_phrasefunction = phrasefunction_long = prev_phrasefunction_long='TBD'\n", " this_clausetype = this_clauserule = phrasetype=\"unknown\"\n", " prev_chapter = prev_verse = prev_sentence = prev_clause = prev_phrase = int(1) \n", " sentence_track = clause_track = phrase_track = 1\n", " sentence_done = clause_done = phrase_done = verse_done = chapter_done = book_done = False \n", " wrdnum = 0 # start at 0\n", "\n", " # build a dictionary of column names for this book\n", " ItemsInRow=1\n", " for itemname in df.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", " ItemsInRow+=1\n", " \n", " \n", " # Create a set of nodes at the start a new book\n", " book_done = chapter_done = verse_done = phrase_done = clause_done = sentence_done = False\n", " this_book = cv.node('book')\n", " cv.feature(this_book, book=book, booknum=booknum, book_short=book_short)\n", " this_chapter = cv.node('chapter')\n", " cv.feature(this_chapter, chapter=1)\n", " this_verse = cv.node('verse')\n", " cv.feature(this_verse, verse=1)\n", " this_sentence = cv.node('sentence')\n", " 
cv.feature(this_sentence, sentence=1)\n", " this_clause = cv.node('clause')\n", " this_phrase = cv.node('phrase')\n", " \n", " \n", " '''\n", " Walks through the texts and triggers\n", " slot and node creation events.\n", " '''\n", " \n", " # iterate through words and construct objects\n", " for row in df.itertuples():\n", " wrdnum += 1\n", " \n", " # get number of parent nodes for this word\n", " parents = row[IndexDict.get(\"i_parents\")]\n", " \n", " # get chapter and verse for this word from the data\n", " chapter = row[IndexDict.get(\"i_chapter\")]\n", " verse = row[IndexDict.get(\"i_verse\")]\n", " \n", " # get clause rule and type info of parent clause\n", " for i in range(1,parents-1):\n", " item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", " if row[item]==\"CL\":\n", " clauseparent=i\n", " this_clauserule=row[IndexDict.get(\"i_Parent{}Rule\".format(i))] \n", " this_clausetype=row[IndexDict.get(\"i_Parent{}ClType\".format(i))] \n", " break\n", " cv.feature(this_clause, clause=clause_track, clauserule=this_clauserule, clausetype=this_clausetype)\n", " \n", "\n", " # get phrase type info\n", " prev_phrasetype=phrasetype\n", " for i in range(1,parents-1):\n", " item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", " if row[item]==\"np\":\n", " _item =\"i_Parent{}Rule\".format(i)\n", " phrasetype=row[IndexDict.get(_item)]\n", " break\n", " functionaltag=row[IndexDict.get('i_FunctionalTag')]\n", "\n", " \n", " # determine syntactic categories of clause parts. See also the description in \n", " # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 5&6\n", " # (section 2.4 Syntactic Categories at Clause Level)\n", " phrase_done = False\n", " for i in range(1,clauseparent): \n", " phrasefunction = row[IndexDict.get(\"i_Parent{}Cat\".format(i))] \n", " if phrasefunction==\"ADV\":\n", " phrasefunction_long='Adverbial function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"IO\":\n", " phrasefunction_long='Indirect Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"O\":\n", " phrasefunction_long='Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"O2\":\n", " phrasefunction_long='Second Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"S\":\n", " phrasefunction_long='Subject function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction=='P':\n", " phrasefunction_long='Predicate function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"V\":\n", " phrasefunction_long='Verbal function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"VC\":\n", " phrasefunction_long='Verbal Copula function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", "\n", "\n", " # determine syntactic categories at word level. See also the description in \n", " # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 6&7\n", " # (2.2. 
Syntactic Categories at Word Level: Part of Speech Labels)\n", "            sp=sanitize(row[IndexDict.get(\"i_Cat\")])\n", "            if sp=='adj':\n", "                sp_full='adjective'\n", "            elif sp=='adv':\n", "                sp_full='adverb'\n", "            elif sp=='conj':\n", "                sp_full='conjunction'\n", "            elif sp=='det':\n", "                sp_full='determiner' \n", "            elif sp=='intj':\n", "                sp_full='interjection' \n", "            elif sp=='noun':\n", "                sp_full='noun' \n", "            elif sp=='num':\n", "                sp_full='numeral' \n", "            elif sp=='prep':\n", "                sp_full='preposition' \n", "            elif sp=='ptcl':\n", "                sp_full='particle' \n", "            elif sp=='pron':\n", "                sp_full='pronoun' \n", "            elif sp=='verb':\n", "                sp_full='verb' \n", "            else:\n", "                sp_full=sp # fallback for a category not listed above\n", "            \n", "            \n", "            '''\n", "            determine if conditions are met to trigger some action \n", "            (the action is executed when the next word is processed)\n", "            ''' \n", "            \n", "            # detect chapter boundary\n", "            if prev_chapter != chapter:\n", "                chapter_done = True\n", "                verse_done=True\n", "                sentence_done = True\n", "                clause_done = True\n", "                phrase_done = True\n", "            \n", "            # detect verse boundary\n", "            if prev_verse != verse:\n", "                verse_done=True\n", "            \n", "            \n", "            \n", "            '''\n", "            -- handle TF events --\n", "            Determine what actions need to be done if proper condition is met.\n", "            ''' \n", "\n", "            # act upon end of phrase (close)\n", "            if phrase_done or clause_done:\n", "                cv.feature(this_phrase, phrase=phrase_track, phrasetype=prev_phrasetype, phrasefunction=prev_phrasefunction, phrasefunction_long=prev_phrasefunction_long)\n", "                cv.terminate(this_phrase)\n", "                prev_phrasefunction=phrasefunction\n", "                prev_phrasefunction_long=phrasefunction_long\n", "            \n", "            # act upon end of clause (close) \n", "            if clause_done:\n", "                cv.terminate(this_clause)\n", "            \n", "            # act upon end of sentence (close)\n", "            if sentence_done:\n", "                cv.terminate(this_sentence)\n", "            \n", "            # act upon end of verse (close)\n", "            if verse_done:\n", "\n", "                cv.terminate(this_verse)\n", "                prev_verse = verse \n", "\n", "            # act upon end of chapter (close)\n", "            if chapter_done:\n", "\n", "                cv.terminate(this_chapter)\n", "                prev_chapter = chapter\n", "\n", "            \n", "            # start of chapter (create new)\n", "            if chapter_done:\n", "                this_chapter = cv.node('chapter')\n", "                cv.feature(this_chapter, chapter=chapter)\n", "                chapter_done = False\n", "            \n", "            # start of verse (create new)\n", "            if verse_done:\n", "                this_verse = cv.node('verse')\n", "                cv.feature(this_verse, verse=verse)\n", "                verse_done = False \n", "            \n", "            # start of sentence (create new)\n", "            if sentence_done:\n", "                this_sentence= cv.node('sentence')\n", "                cv.feature(this_sentence, sentence=sentence_track)\n", "                sentence_track += 1\n", "                sentence_done = False\n", "\n", "            \n", "            # start of clause (create new) \n", "            if clause_done:\n", "                this_clause = cv.node('clause')\n", "                cv.feature(this_clause, clause=clause_track, clauserule=this_clauserule,clausetype=this_clausetype)\n", "                clause_track += 1\n", "                clause_done = False\n", "                phrase_done = True \n", "\n", "            \n", "            # start of phrase (create new)\n", "            if phrase_done:\n", "                this_phrase = cv.node('phrase')\n", "                prev_phrase = phrase_track\n", "                prev_phrasefunction=phrasefunction\n", "                prev_phrasefunction_long=phrasefunction_long\n", "                phrase_track += 1\n", "                phrase_done = False\n", "            \n", "            \n", "            # Detect boundaries of sentences, clauses and phrases \n", "            text=row[IndexDict.get(\"i_Unicode\")]\n", "            if text[-1:] == \".\" : \n", "                sentence_done = True\n", "                clause_done = True\n", "                phrase_done = True\n", "            if text[-1:] == \";\" or text[-1:] == \",\":\n", "                clause_done = True\n", "                phrase_done = True \n", "            \n", 
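"            # note on the checks above: in the Greek source a trailing '.' marks the end of a sentence,\n", "            # while a trailing ';' (which serves as the question mark in Greek) or ',' closes the clause and phrase\n",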
" \n", " '''\n", " -- create word nodes --\n", " ''' \n", " \n", " # some attributes are not present inside some (small) books. The following is to prevent exceptions.\n", " degree='' \n", " if 'i_Degree' in IndexDict: \n", " degree=sanitize(row[IndexDict.get(\"i_Degree\")]) \n", " subjref=''\n", " if 'i_SubjRef' in IndexDict:\n", " subjref=sanitize(row[IndexDict.get(\"i_SubjRef\")]) \n", " \n", "\n", " # make word object\n", " this_word = cv.slot()\n", " cv.feature(this_word, \n", " word=row[IndexDict.get(\"i_Unicode\")],\n", " monad=row[IndexDict.get(\"i_monad\")],\n", " orig_order=row[IndexDict.get(\"i_monad\")],\n", " book_long=row[IndexDict.get(\"i_book_long\")],\n", " booknum=booknum,\n", " book_short=row[IndexDict.get(\"i_book_short\")],\n", " chapter=chapter,\n", " sp=sp,\n", " sp_full=sp_full,\n", " verse=verse,\n", " sentence=sentence_track,\n", " clause=clause_track,\n", " phrase=phrase_track,\n", " normalized=sanitize(row[IndexDict.get(\"i_NormalizedForm\")]),\n", " formaltag=sanitize(row[IndexDict.get(\"i_FormalTag\")]),\n", " functionaltag=functionaltag,\n", " strongs=sanitize(row[IndexDict.get(\"i_StrongNumber\")]),\n", " lex_dom=sanitize(row[IndexDict.get(\"i_LexDomain\")]),\n", " ln=sanitize(row[IndexDict.get(\"i_LN\")]),\n", " gloss_EN=sanitize(row[IndexDict.get(\"i_Gloss\")]),\n", " gn=sanitize(row[IndexDict.get(\"i_Gender\")]),\n", " nu=sanitize(row[IndexDict.get(\"i_Number\")]),\n", " case=sanitize(row[IndexDict.get(\"i_Case\")]),\n", " lemma=sanitize(row[IndexDict.get(\"i_UnicodeLemma\")]),\n", " person=sanitize(row[IndexDict.get(\"i_Person\")]),\n", " mood=sanitize(row[IndexDict.get(\"i_Mood\")]),\n", " tense=sanitize(row[IndexDict.get(\"i_Tense\")]),\n", " number=sanitize(row[IndexDict.get(\"i_Number\")]),\n", " voice=sanitize(row[IndexDict.get(\"i_Voice\")]),\n", " degree=degree,\n", " type=sanitize(row[IndexDict.get(\"i_Type\")]),\n", " reference=sanitize(row[IndexDict.get(\"i_Ref\")]), # the capital R is critical here!\n", " subj_ref=subjref,\n", " nodeID=row[1] #this is a fixed position.\n", " )\n", " cv.terminate(this_word)\n", "\n", " \n", " '''\n", " -- wrap up the book --\n", " ''' \n", " \n", " # close all nodes (phrase, clause, sentence, verse, chapter and book)\n", " cv.feature(this_phrase, phrase=phrase_track, phrasetype=prev_phrasetype,phrasefunction=prev_phrasefunction,phrasefunction_long=prev_phrasefunction_long)\n", " cv.terminate(this_phrase)\n", " cv.feature(this_clause, clause=clause_track, clauserule=this_clauserule, clausetype=this_clausetype)\n", " cv.terminate(this_clause)\n", " cv.feature(this_sentence, sentence=prev_sentence)\n", " cv.terminate(this_sentence)\n", " cv.feature(this_verse, verse=prev_verse)\n", " cv.terminate(this_verse)\n", " cv.feature(this_chapter, chapter=prev_chapter)\n", " cv.terminate(this_chapter)\n", " cv.feature(this_book, book=prev_book)\n", " cv.terminate(this_book)\n", " \n", " # clear dataframe for this book \n", " del df\n", " # clear the index dictionary\n", " IndexDict.clear()\n", " gc.collect()\n", " \n", " \n", "'''\n", "-- output definitions --\n", "''' \n", " \n", "slotType = 'word' # or whatever you choose\n", "otext = { # dictionary of config data for sections and text formats\n", " 'fmt:text-orig-full':'{word}',\n", " 'sectionTypes':'book,chapter,verse',\n", " 'sectionFeatures':'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " }\n", "\n", "# configure metadata\n", "generic = { # dictionary of metadata meant for all features\n", " 
'Name': 'Greek New Testament (Nestle 1904) based upon GBI tree node data',\n", "    'Version': '{}'.format(version),\n", "    'Editors': 'Eberhard Nestle',\n", "    'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n", "    'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)', \n", "    'Converter_author': 'Tony Jurg, ReMa student Vrije Universiteit Amsterdam, Netherlands', \n", "    'Converter_execution': 'Tony Jurg, ReMa student Vrije Universiteit Amsterdam, Netherlands', \n", "    'Converter_source': 'https://github.com/tonyjurg/Nestle1904GBI/tree/main/resources/converter',\n", "    'Text-Fabric version': '{}'.format(VERSION) #imported from tf.parameters\n", "    }\n", "\n", "intFeatures = { # set of integer valued feature names\n", "    'booknum',\n", "    'chapter',\n", "    'verse',\n", "    'sentence',\n", "    'clause',\n", "    'phrase',\n", "    'orig_order',\n", "    'monad'\n", "    }\n", "\n", "featureMeta = { # per feature dicts with metadata\n", "    'book': {'description': 'Book'},\n", "    'book_long': {'description': 'Book name (fully spelled out)'},\n", "    'booknum': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n", "    'book_short': {'description': 'Book name (abbreviated)'},\n", "    'chapter': {'description': 'Chapter number inside book'},\n", "    'verse': {'description': 'Verse number inside chapter'},\n", "    'sentence': {'description': 'Sentence number (counted per book)'},\n", "    'clause': {'description': 'Clause number (counted per book)'},\n", "    'clauserule': {'description': 'Clause rule'},\n", "    'clausetype': {'description': 'Clause type'},\n", "    'phrase' : {'description': 'Phrase number (counted per book)'},\n", "    'phrasetype' : {'description': 'Phrase type information'},\n", "    'phrasefunction' : {'description': 'Phrase function (abbreviated)'},\n", "    'phrasefunction_long' : {'description': 'Phrase function (long description)'},\n", "    'orig_order': {'description': 'Word order within corpus'},\n", "    'monad':{'description': 'Monad'},\n", "    'word': {'description': 'Word as it appears in the text'},\n", "    'sp': {'description': 'Part of Speech (abbreviated)'},\n", "    'sp_full': {'description': 'Part of Speech (long description)'}, \n", "    'normalized': {'description': 'Surface word stripped of punctuation'},\n", "    'lemma': {'description': 'Lexeme (lemma)'},\n", "    'formaltag': {'description': 'Formal tag (Sandborg-Petersen morphology)'},\n", "    'functionaltag': {'description': 'Functional tag (Sandborg-Petersen morphology)'},\n", "    # see also discussion on relation between lex_dom and ln @ https://github.com/Clear-Bible/macula-greek/issues/29\n", "    'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},\n", "    'ln': {'description': 'Louw-Nida lexical classification (not present everywhere?)'},\n", "    'strongs': {'description': 'Strongs number'},\n", "    'gloss_EN': {'description': 'English gloss'},\n", "    'gn': {'description': 'Grammatical gender (Masculine, Feminine, Neuter)'},\n", "    'nu': {'description': 'Grammatical number (Singular, Plural)'},\n", "    'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n", "    'person': {'description': 'Grammatical person of the verb (first, second, third)'},\n", "    'mood': {'description': 'Grammatical mood of the verb (e.g. Indicative, Imperative)'},\n", "    'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},\n", "    'number': {'description': 'Grammatical number of the verb'},\n", "    'voice': {'description': 'Grammatical voice of the verb'},\n", "    'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},\n", "    'type': {'description': 'Grammatical type of noun or pronoun (e.g. Common, Personal)'},\n", "    'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},\n", "    'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processed)'},\n", "    'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processed)'}\n", "    }\n", "\n", "'''\n", " -- the main function --\n", "''' \n", "\n", "\n", "good = cv.walk(\n", "    director,\n", "    slotType,\n", "    otext=otext,\n", "    generic=generic,\n", "    intFeatures=intFeatures,\n", "    featureMeta=featureMeta,\n", "    warn=True,\n", "    force=False\n", ")\n", "\n", "if good:\n", "    print (\"done\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "321.391px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }