{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Creating Text-Fabric from LowFat XML trees\n", "\n", "Version: 0.1.7 (May 3, 2023)\n", "\n", "## Table of contents \n", "* [1. Introduction](#first-bullet)\n", "* [2. Read LowFat XML data and store in pickle](#second-bullet)\n", "* [3. Sort the nodes](#third-bullet)\n", "* [4. Production Text-Fabric from pickle input](#fourth-bullet)\n", "* [5. Basic testing of the Text-Fabric data](#fift-bullet)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "\n", "## 1. Introduction \n", "##### [Back to TOC](#TOC)\n", "\n", "The source data for the conversion are the LowFat XML tree files representing the macula-greek version of the Nestle 1904 Greek New Testament. The most recent source data can be found on GitHub: https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat. Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n", "\n", "The production of the Text-Fabric files consists of two steps: first the creation of pickle files (part 1), secondly the actual Text-Fabric creation process (part 2). Both steps are independent, allowing one to start from part 2 by using the pickle files as input. \n", "\n", "\n", "\n", "Be advised that this Text-Fabric version is a test version (proof of concept) and requires further fine-tuning, especially with regard to the nomenclature and presentation of (sub)phrases and clauses." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## 2. Read LowFat XML data and store in pickle \n", "##### [Back to TOC](#TOC)\n", "\n", "This script harvests all information from the LowFat tree data (XML nodes), puts it into a Pandas DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer). \n", "\n", "In the context of this script, 'leaf' refers to those nodes containing the Greek word as data, which happen to be the nodes without any children (hence the analogy with the leaves of a tree). These 'leaves' can also be referred to as 'terminal nodes'. Further, Parent1 is the leaf's parent, Parent2 is Parent1's parent, etc.\n", "\n", "For a full description of the source data, see the document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Step 1: import various libraries" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T02:58:14.739227Z", "start_time": "2022-10-28T02:57:38.766097Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import sys\n", "import os\n", "import time\n", "import pickle\n", "\n", "import re # regular expressions\n", "from os import listdir\n", "from os.path import isfile, join\n", "import xml.etree.ElementTree as ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: initialize global data\n", "\n", "Change BaseDir, XmlDir and PklDir to match the location of the data and the operating system used."
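 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the notebook is run on another system, the directories can also be built in a more portable way. The next cell is only a sketch of such an alternative (not part of the original conversion; the base path is an assumption and should be adjusted to your own data location); the cell after it keeps the original Windows-style paths." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Portable alternative (sketch only): build the directory names with os.path.join\n", "import os\n", "\n", "BaseDir = os.path.join(os.path.expanduser('~'), 'Read_from_lowfat', 'data')\n", "XmlDir = os.path.join(BaseDir, 'xml')\n", "PklDir = os.path.join(BaseDir, 'pkl')\n", "XlsxDir = os.path.join(BaseDir, 'xlsx')\n", "\n", "# the conversion expects the output directories to exist already\n", "for directory in (PklDir, XlsxDir):\n", "    os.makedirs(directory, exist_ok=True)"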
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\Read_from_lowfat\\\\data\\\\'\n", "XmlDir = BaseDir+'xml\\\\'\n", "PklDir = BaseDir+'pkl\\\\'\n", "XlsxDir = BaseDir+'xlsx\\\\'\n", "# note: create output directory prior running this part\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}\n", "\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt']}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### step 3: define Function to add parent info to each node of the XML tree\n", "\n", "In order to traverse from the 'leafs' (terminating nodes) upto the root of the tree, it is required to add information to each node pointing to the parent of each node.\n", "\n", "(concept taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def addParentInfo(et):\n", " for child in et:\n", " child.attrib['parent'] = et\n", " addParentInfo(child)\n", "\n", "def getParent(et):\n", " if 'parent' in et.attrib:\n", " return et.attrib['parent']\n", " else:\n", " return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4: read and process the XML data and store panda dataframe in pickle" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "scrolled": true, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Processing Matthew at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\01-matthew.xml\n", "......................................................................................................................................................................................\n", "Found 18299 items in 337.3681836128235 seconds\n", "\n", "Processing Mark at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\02-mark.xml\n", "................................................................................................................\n", "Found 11277 items in 144.04719877243042 seconds\n", "\n", "Processing Luke at 
C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\03-luke.xml\n", "..................................................................................................................................................................................................\n", "Found 19456 items in 1501.197922706604 seconds\n", "\n", "Processing John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\04-john.xml\n", "............................................................................................................................................................\n", "Found 15643 items in 237.1071105003357 seconds\n", "\n", "Processing Acts at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\05-acts.xml\n", ".......................................................................................................................................................................................\n", "Found 18393 items in 384.3644151687622 seconds\n", "\n", "Processing Romans at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\06-romans.xml\n", ".......................................................................\n", "Found 7100 items in 71.03568935394287 seconds\n", "\n", "Processing I_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\07-1corinthians.xml\n", "....................................................................\n", "Found 6820 items in 58.47511959075928 seconds\n", "\n", "Processing II_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\08-2corinthians.xml\n", "............................................\n", "Found 4469 items in 31.848721027374268 seconds\n", "\n", "Processing Galatians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\09-galatians.xml\n", "......................\n", "Found 2228 items in 13.850211143493652 seconds\n", "\n", "Processing Ephesians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\10-ephesians.xml\n", "........................\n", "Found 2419 items in 17.529520511627197 seconds\n", "\n", "Processing Philippians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\11-philippians.xml\n", "................\n", "Found 1630 items in 9.271572589874268 seconds\n", "\n", "Processing Colossians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\12-colossians.xml\n", "...............\n", "Found 1575 items in 10.389309883117676 seconds\n", "\n", "Processing I_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\13-1thessalonians.xml\n", "..............\n", "Found 1473 items in 8.413437604904175 seconds\n", "\n", "Processing II_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\14-2thessalonians.xml\n", "........\n", "Found 822 items in 4.284915447235107 seconds\n", "\n", "Processing I_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\15-1timothy.xml\n", "...............\n", "Found 1588 items in 10.419771671295166 seconds\n", "\n", "Processing II_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\16-2timothy.xml\n", "............\n", "Found 1237 items in 7.126454591751099 seconds\n", "\n", "Processing Titus at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\17-titus.xml\n", "......\n", "Found 658 items in 3.1472580432891846 seconds\n", "\n", "Processing Philemon at 
C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\18-philemon.xml\n", "...\n", "Found 335 items in 1.3175146579742432 seconds\n", "\n", "Processing Hebrews at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\19-hebrews.xml\n", ".................................................\n", "Found 4955 items in 44.31139326095581 seconds\n", "\n", "Processing James at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\20-james.xml\n", ".................\n", "Found 1739 items in 8.570415496826172 seconds\n", "\n", "Processing I_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\21-1peter.xml\n", "................\n", "Found 1676 items in 10.489561557769775 seconds\n", "\n", "Processing II_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\22-2peter.xml\n", "..........\n", "Found 1098 items in 6.005697250366211 seconds\n", "\n", "Processing I_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\23-1john.xml\n", ".....................\n", "Found 2136 items in 10.843079566955566 seconds\n", "\n", "Processing II_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\24-2john.xml\n", "..\n", "Found 245 items in 0.9535031318664551 seconds\n", "\n", "Processing III_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\25-3john.xml\n", "..\n", "Found 219 items in 1.0913233757019043 seconds\n", "\n", "Processing Jude at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\26-jude.xml\n", "....\n", "Found 457 items in 1.8929190635681152 seconds\n", "\n", "Processing Revelation at C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\xml\\27-revelation.xml\n", "..................................................................................................\n", "Found 9832 items in 125.92533278465271 seconds\n", "\n" ] } ], "source": [ "# set some globals\n", "monad=1\n", "CollectedItems= 0\n", "\n", "# process books in order\n", "for bo, bookinfo in bo2book.items():\n", " CollectedItems=0\n", " SentenceNumber=0\n", " WordGroupNumber=0\n", " full_df=pd.DataFrame({})\n", " book_long=bookinfo[0]\n", " booknum=bookinfo[1]\n", " book_short=bookinfo[2]\n", " InputFile = os.path.join(XmlDir, f'{bo}.xml')\n", " OutputFile = os.path.join(PklDir, f'{bo}.pkl')\n", " print(f'Processing {book_long} at {InputFile}')\n", "\n", " # send xml document to parsing process\n", " tree = ET.parse(InputFile)\n", " # Now add all the parent info to the nodes in the xtree [important!]\n", " addParentInfo(tree.getroot())\n", " start_time = time.time()\n", " \n", " # walk over all the XML data\n", " for elem in tree.iter():\n", " if elem.tag == 'sentence':\n", " # add running number to 'sentence' tags\n", " SentenceNumber+=1\n", " elem.set('SN', SentenceNumber)\n", " if elem.tag == 'wg':\n", " # add running number to 'wg' tags\n", " WordGroupNumber+=1\n", " elem.set('WGN', WordGroupNumber)\n", " if elem.tag == 'w':\n", " # all nodes containing words are tagged with 'w'\n", " \n", " # show progress on screen\n", " CollectedItems+=1\n", " if (CollectedItems%100==0): print (\".\",end='')\n", " \n", " #Leafref will contain list with book, chapter verse and wordnumber\n", " Leafref = re.sub(r'[!: ]',\" \", elem.attrib.get('ref')).split()\n", " \n", " #push value for monad to element tree \n", " elem.set('monad', monad)\n", " monad+=1\n", " \n", " # add some important computed data to the leaf\n", " elem.set('LeafName', 
elem.tag)\n", " elem.set('word', elem.text)\n", " elem.set('book_long', book_long)\n", " elem.set('booknum', int(booknum))\n", " elem.set('book_short', book_short)\n", " elem.set('chapter', int(Leafref[1]))\n", " elem.set('verse', int(Leafref[2]))\n", " \n", " # the following code traces the parents up the tree and stores the attributes found\n", " parentnode=getParent(elem)\n", " index=0\n", " while (parentnode):\n", " index+=1\n", " elem.set('Parent{}Name'.format(index), parentnode.tag)\n", " elem.set('Parent{}Type'.format(index), parentnode.attrib.get('type'))\n", " elem.set('Parent{}Appos'.format(index), parentnode.attrib.get('appositioncontainer'))\n", " elem.set('Parent{}Class'.format(index), parentnode.attrib.get('class'))\n", " elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('rule'))\n", " elem.set('Parent{}Role'.format(index), parentnode.attrib.get('role'))\n", " elem.set('Parent{}Cltype'.format(index), parentnode.attrib.get('cltype'))\n", " elem.set('Parent{}Unit'.format(index), parentnode.attrib.get('unit'))\n", " elem.set('Parent{}Junction'.format(index), parentnode.attrib.get('junction'))\n", " elem.set('Parent{}SN'.format(index), parentnode.attrib.get('SN'))\n", " elem.set('Parent{}WGN'.format(index), parentnode.attrib.get('WGN'))\n", " currentnode=parentnode\n", " parentnode=getParent(currentnode) \n", " elem.set('parents', int(index))\n", " \n", " # this pushes the attributes of this element into the DataFrame as one row\n", " df=pd.DataFrame(elem.attrib, index={monad})\n", " full_df=pd.concat([full_df,df])\n", " \n", " # store the resulting DataFrame per book into a pickle file for further processing\n", " full_df = full_df.convert_dtypes(convert_string=True)\n", " \n", " # rename the xml:id column and sort by id\n", " sortkey='{http://www.w3.org/XML/1998/namespace}id'\n", " full_df.rename(columns={sortkey: 'id'}, inplace=True)\n", " full_df = full_df.sort_values(by=['id'])\n", "\n", " output = open(r\"{}\".format(OutputFile), 'wb')\n", " pickle.dump(full_df, output)\n", " output.close()\n", " print(\"\\nFound \",CollectedItems, \" items in %s seconds\\n\" % (time.time() - start_time)) \n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# just dump some things to test the result\n", "\n", "\n", "for bo in bo2book:\n", " '''\n", " load all data into a dataframe\n", " process books in order (bookinfo is a list!)\n", " ''' \n", " InputFile = os.path.join(PklDir, f'{bo}.pkl')\n", " \n", " print(f'\\tloading {InputFile}...')\n", " pkl_file = open(InputFile, 'rb')\n", " df = pickle.load(pkl_file)\n", " pkl_file.close()\n", " \n", " # not sure if this is needed\n", " # fill dictionary of column names for this book \n", " IndexDict = {} # init an empty dictionary\n", " ItemsInRow=1\n", " for itemname in df.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", " print (itemname)\n", " ItemsInRow+=1\n", " " ] }, { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "## 3. Nestle1904 Text-Fabric production from pickle input \n", "##### [Back to TOC](#TOC)\n", "\n", "This script creates the Text-Fabric files by recursively calling the TF walker function.\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The pickle files created by step 1 are stored on GitHub (location T.B.D.)."
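 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For orientation: a walker conversion boils down to a director function that opens nodes with cv.node(), creates word slots with cv.slot(), attaches features with cv.feature() and closes nodes with cv.terminate(); cv.walk() then turns those actions into a Text-Fabric dataset. The next cell is only a minimal sketch of that pattern (with made-up feature values and the walk call left commented out); the real director follows in step 2 below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal shape of a walker conversion (sketch only)\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "\n", "def tiny_director(cv):\n", "    book = cv.node('book')                   # open a non-slot node\n", "    cv.feature(book, book='Example')         # attach features to it\n", "    w = cv.slot()                            # create a slot ('word') node\n", "    cv.feature(w, word='Βίβλος', after=' ')  # attach features to the slot\n", "    cv.terminate(w)                          # close nodes again, innermost first\n", "    cv.terminate(book)\n", "\n", "# TF = Fabric(locations=BaseDir)\n", "# cv = CV(TF)\n", "# good = cv.walk(tiny_director, 'word', otext=otext, generic=generic, featureMeta=featureMeta)"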
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: Load libraries and initialize some data\n", "\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T03:01:34.810259Z", "start_time": "2022-10-28T03:01:25.745112Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import re\n", "import gc\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "from tf.parameters import VERSION\n", "from datetime import date\n", "import pickle\n", "\n", "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\Read_from_lowfat\\\\data\\\\'\n", "XmlDir = BaseDir+'xml\\\\'\n", "PklDir = BaseDir+'pkl\\\\'\n", "XlsxDir = BaseDir+'xlsx\\\\'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}\n", "\n", "bo2book_ = {'26-jude': ['Jude', '26', 'Jude']}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optional: export to Excel for investigation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# test: sorting the data\n", "import openpyxl\n", "import pickle\n", "\n", "#if True:\n", "for bo in bo2book:\n", " '''\n", " load all data into a dataframe\n", " process books in order (bookinfo is a list!)\n", " ''' \n", " InputFile = os.path.join(PklDir, f'{bo}.pkl')\n", " #InputFile = os.path.join(PklDir, '01-matthew.pkl')\n", " \n", " print(f'\\tloading {InputFile}...')\n", " pkl_file = open(InputFile, 'rb')\n", " df = pickle.load(pkl_file)\n", " pkl_file.close()\n", " \n", " # not sure if this is needed\n", " # fill dictionary of column names for this book \n", " IndexDict = {} # init an empty dictionary\n", " ItemsInRow=1\n", " for itemname in df.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", " ItemsInRow+=1\n", " #print(itemname)\n", " \n", " # sort by id\n", " #print(df)\n", " df_sorted=df.sort_values(by=['id'])\n", " df_sorted.to_excel(os.path.join(XlsxDir, f'{bo}.xlsx'), index=False)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2 Running the TF walker function\n", "\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The 
logic of interpreting the data is included in the director function." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.4.5\n", "52 features found and 0 ignored\n", " 0.00s Importing data from walking through the source ...\n", " | 0.00s Preparing metadata... \n", " | SECTION TYPES: book, chapter, verse\n", " | SECTION FEATURES: book, chapter, verse\n", " | STRUCTURE TYPES: book, chapter, verse\n", " | STRUCTURE FEATURES: book, chapter, verse\n", " | TEXT FEATURES:\n", " | | text-orig-full after, word\n", " | 0.00s OK\n", " | 0.00s Following director... \n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\01-matthew.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\02-mark.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\03-luke.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\04-john.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\05-acts.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\06-romans.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\07-1corinthians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\08-2corinthians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\09-galatians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\10-ephesians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\11-philippians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\12-colossians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\13-1thessalonians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\14-2thessalonians.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\15-1timothy.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\16-2timothy.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\17-titus.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\18-philemon.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\19-hebrews.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\20-james.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\21-1peter.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\22-2peter.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\23-1john.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\24-2john.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\25-3john.pkl...\n", "\tWe are loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\26-jude.pkl...\n", "\tWe are 
loading C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\pkl\\27-revelation.pkl...\n", " | 45s \"edge\" actions: 0\n", " | 45s \"feature\" actions: 267471\n", " | 45s \"node\" actions: 129692\n", " | 45s \"resume\" actions: 9629\n", " | 45s \"slot\" actions: 137779\n", " | 45s \"terminate\" actions: 277227\n", " | 27 x \"book\" node \n", " | 260 x \"chapter\" node \n", " | 8011 x \"sentence\" node \n", " | 7943 x \"verse\" node \n", " | 113451 x \"wg\" node \n", " | 137779 x \"word\" node = slot type\n", " | 267471 nodes of all types\n", " | 45s OK\n", " | 0.00s checking for nodes and edges ... \n", " | 0.00s OK\n", " | 0.00s checking (section) features ... \n", " | 0.22s OK\n", " | 0.00s reordering nodes ...\n", " | 0.03s Sorting 27 nodes of type \"book\"\n", " | 0.05s Sorting 260 nodes of type \"chapter\"\n", " | 0.06s Sorting 8011 nodes of type \"sentence\"\n", " | 0.08s Sorting 7943 nodes of type \"verse\"\n", " | 0.10s Sorting 113451 nodes of type \"wg\"\n", " | 0.22s Max node = 267471\n", " | 0.22s OK\n", " | 0.00s reassigning feature values ...\n", " | | 0.00s node feature \"after\" with 137779 nodes\n", " | | 0.04s node feature \"appos\" with 113451 nodes\n", " | | 0.08s node feature \"book\" with 27 nodes\n", " | | 0.08s node feature \"book_long\" with 137779 nodes\n", " | | 0.13s node feature \"booknumber\" with 137806 nodes\n", " | | 0.18s node feature \"bookshort\" with 137806 nodes\n", " | | 0.22s node feature \"case\" with 137779 nodes\n", " | | 0.27s node feature \"chapter\" with 153939 nodes\n", " | | 0.32s node feature \"clausetype\" with 113451 nodes\n", " | | 0.35s node feature \"degree\" with 137779 nodes\n", " | | 0.40s node feature \"gloss\" with 137779 nodes\n", " | | 0.44s node feature \"gn\" with 137779 nodes\n", " | | 0.49s node feature \"id\" with 137779 nodes\n", " | | 0.53s node feature \"junction\" with 113451 nodes\n", " | | 0.57s node feature \"lemma\" with 137779 nodes\n", " | | 0.62s node feature \"lex_dom\" with 137779 nodes\n", " | | 0.67s node feature \"ln\" with 137779 nodes\n", " | | 0.71s node feature \"monad\" with 137779 nodes\n", " | | 0.75s node feature \"mood\" with 137779 nodes\n", " | | 0.80s node feature \"morph\" with 137779 nodes\n", " | | 0.84s node feature \"nodeID\" with 137779 nodes\n", " | | 0.89s node feature \"normalized\" with 137779 nodes\n", " | | 0.93s node feature \"nu\" with 137779 nodes\n", " | | 0.98s node feature \"number\" with 137779 nodes\n", " | | 1.03s node feature \"orig_order\" with 137779 nodes\n", " | | 1.07s node feature \"person\" with 137779 nodes\n", " | | 1.12s node feature \"ref\" with 137779 nodes\n", " | | 1.16s node feature \"reference\" with 137779 nodes\n", " | | 1.22s node feature \"roleclausedistance\" with 137779 nodes\n", " | | 1.26s node feature \"rule\" with 113451 nodes\n", " | | 1.31s node feature \"sentence\" with 137779 nodes\n", " | | 1.35s node feature \"sp\" with 137779 nodes\n", " | | 1.39s node feature \"sp_full\" with 137779 nodes\n", " | | 1.44s node feature \"strongs\" with 137779 nodes\n", " | | 1.48s node feature \"subj_ref\" with 137779 nodes\n", " | | 1.53s node feature \"tense\" with 137779 nodes\n", " | | 1.57s node feature \"type\" with 137779 nodes\n", " | | 1.62s node feature \"unicode\" with 137779 nodes\n", " | | 1.66s node feature \"verse\" with 153733 nodes\n", " | | 1.71s node feature \"voice\" with 137779 nodes\n", " | | 1.75s node feature \"wgclass\" with 113451 nodes\n", " | | 1.80s node feature \"wglevel\" with 113451 nodes\n", " | | 1.84s node feature 
\"wgrole\" with 113451 nodes\n", " | | 1.87s node feature \"wgrolelong\" with 113451 nodes\n", " | | 1.91s node feature \"wgtype\" with 113451 nodes\n", " | | 1.96s node feature \"word\" with 137779 nodes\n", " | | 2.00s node feature \"wordgroup\" with 113451 nodes\n", " | | 2.04s node feature \"wordlevel\" with 137779 nodes\n", " | | 2.08s node feature \"wordrole\" with 137779 nodes\n", " | | 2.13s node feature \"wordrolelong\" with 137779 nodes\n", " | 2.26s OK\n", " 0.00s Exporting 51 node and 1 edge and 1 config features to ~/my_new_Jupyter_folder/Read_from_lowfat/data:\n", " 0.00s VALIDATING oslots feature\n", " 0.02s VALIDATING oslots feature\n", " 0.02s maxSlot= 137779\n", " 0.02s maxNode= 267471\n", " 0.04s OK: oslots is valid\n", " | 0.13s T after to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T appos to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.00s T book to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.15s T book_long to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T booknumber to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T bookshort to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T case to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.15s T chapter to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.12s T clausetype to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T degree to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T gloss to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T gn to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T id to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T junction to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.16s T lemma to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T lex_dom to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T ln to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T monad to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T mood to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T morph to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.15s T nodeID to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.16s T normalized to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T nu to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T number to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T orig_order to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.07s T otype to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T person to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T ref to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T reference to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.12s T roleclausedistance to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.12s T rule to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T sentence to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T sp to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T sp_full to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T strongs to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T subj_ref to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T tense to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T type to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.16s T unicode to 
~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T verse to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T voice to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T wgclass to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.10s T wglevel to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T wgrole to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T wgrolelong to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T wgtype to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.16s T word to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.11s T wordgroup to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.13s T wordlevel to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T wordrole to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.14s T wordrolelong to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.36s T oslots to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " | 0.00s M otext to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", " 7.07s Exported 51 node features and 1 edge features and 1 config features to ~/my_new_Jupyter_folder/Read_from_lowfat/data\n", "done\n" ] } ], "source": [ "TF = Fabric(locations=BaseDir, silent=False)\n", "cv = CV(TF)\n", "version = \"0.1.7 (added role info to each word)\"\n", "\n", "###############################################\n", "# Common helper functions #\n", "###############################################\n", "\n", "#Function to prevent errors during conversion due to missing data\n", "def sanitize(input):\n", " if isinstance(input, float): return ''\n", " if isinstance(input, type(None)): return ''\n", " else: return (input)\n", "\n", "\n", "# Function to expand the syntactic categories of words or wordgroup\n", "# See also \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" \n", "# page 5&6 (section 2.4 Syntactic Categories at Clause Level)\n", "def ExpandRole(input):\n", " if input==\"adv\": return 'Adverbial'\n", " if input==\"io\": return 'Indirect Object'\n", " if input==\"o\": return 'Object'\n", " if input==\"o2\": return 'Second Object'\n", " if input==\"s\": return 'Subject'\n", " if input==\"p\": return 'Predicate'\n", " if input==\"v\": return 'Verbal'\n", " if input==\"vc\": return 'Verbal Copula'\n", " if input=='aux': return 'Auxiliar'\n", " return ''\n", "\n", "# Function to expantion of Part of Speech labels. See also the description in \n", "# \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 6&7\n", "# (2.2. Syntactic Categories at Word Level: Part of Speech Labels)\n", "def ExpandSP(input):\n", " if input=='adj': return 'adjective'\n", " if input=='conj': return 'conjunction'\n", " if input=='det': return 'determiner' \n", " if input=='intj': return 'interjection' \n", " if input=='noun': return 'noun' \n", " if input=='num': return 'numeral' \n", " if input=='prep': return 'preposition' \n", " if input=='ptcl': return 'particle' \n", " if input=='pron': return 'pronoun' \n", " if input=='verb': return 'verb' \n", " return ''\n", "\n", "###############################################\n", "# The director routine #\n", "###############################################\n", "\n", "def director(cv):\n", " \n", " ###############################################\n", " # Innitial setup of data etc. 
#\n", " ###############################################\n", " NoneType = type(None) # needed as tool to validate certain data\n", " IndexDict = {} # init an empty dictionary\n", " WordGroupDict={} # init a dummy dictionary\n", " PrevWordGroupSet = WordGroupSet = []\n", " PrevWordGroupList = WordGroupList = []\n", " RootWordGroup = 0\n", " WordNumber=FoundWords=WordGroupTrack=0\n", " # The following is required to recover succesfully from an abnormal condition\n", " # in the LowFat tree data where a element is labeled as \n", " # this number is arbitrary but should be high enough not to clash with 'real' WG numbers\n", " DummyWGN=200000 \n", " \n", " for bo,bookinfo in bo2book.items(): \n", " \n", " ###############################################\n", " # start of section executed for each book #\n", " ###############################################\n", " \n", " # note: bookinfo is a list! Split the data\n", " Book = bookinfo[0] \n", " BookNumber = int(bookinfo[1])\n", " BookShort = bookinfo[2]\n", " BookLoc = os.path.join(PklDir, f'{bo}.pkl') \n", " \n", " \n", " # load data for this book into a dataframe. \n", " # make sure wordorder is correct\n", " print(f'\\tWe are loading {BookLoc}...')\n", " pkl_file = open(BookLoc, 'rb')\n", " df_unsorted = pickle.load(pkl_file)\n", " pkl_file.close()\n", " df=df_unsorted.sort_values(by=['id'])\n", " \n", " \n", " # set up nodes for new book\n", " ThisBookPointer = cv.node('book')\n", " cv.feature(ThisBookPointer, book=Book, booknumber=BookNumber, bookshort=BookShort)\n", " \n", " ThisChapterPointer = cv.node('chapter')\n", " cv.feature(ThisChapterPointer, chapter=1)\n", " PreviousChapter=1\n", " \n", " ThisVersePointer = cv.node('verse')\n", " cv.feature(ThisVersePointer, verse=1)\n", " PreviousVerse=1\n", " \n", " ThisSentencePointer = cv.node('sentence')\n", " cv.feature(ThisSentencePointer, verse=1)\n", " PreviousSentence=1 \n", "\n", "\n", " '''\n", " fill dictionary of column names for this book \n", " sort to ensure proper wordorder\n", " '''\n", " ItemsInRow=1\n", " for itemname in df.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", " ItemsInRow+=1\n", " df.sort_values(by=['id'])\n", " \n", "\n", " ###############################################\n", " # Iterate through words and construct objects #\n", " ###############################################\n", " \n", " for row in df.itertuples():\n", " WordNumber += 1\n", " FoundWords +=1\n", " \n", " # Detect and act upon changes in sentences, verse and chapter \n", " # the order of terminating and creating the nodes is critical: \n", " # close verse - close chapter - open chapter - open verse \n", " NumberOfParents = row[IndexDict.get(\"i_parents\")]\n", " ThisSentence=int(row[IndexDict.get(\"i_Parent{}SN\".format(NumberOfParents-1))])\n", " ThisVerse = sanitize(row[IndexDict.get(\"i_verse\")])\n", " ThisChapter = sanitize(row[IndexDict.get(\"i_chapter\")])\n", " if (ThisSentence!=PreviousSentence):\n", " #cv.feature(ThisSentencePointer, statdata?)\n", " cv.terminate(ThisSentencePointer)\n", " \n", " if (ThisVerse!=PreviousVerse):\n", " #cv.feature(ThisVersePointer, statdata?)\n", " cv.terminate(ThisVersePointer)\n", "\n", " if (ThisChapter!=PreviousChapter):\n", " #cv.feature(ThisChapterPointer, statdata?)\n", " cv.terminate(ThisChapterPointer)\n", " PreviousChapter = ThisChapter\n", " ThisChapterPointer = cv.node('chapter')\n", " cv.feature(ThisChapterPointer, chapter=ThisChapter)\n", " \n", " if (ThisVerse!=PreviousVerse):\n", " PreviousVerse = ThisVerse 
\n", " ThisVersePointer = cv.node('verse')\n", " cv.feature(ThisVersePointer, verse=ThisVerse, chapter=ThisChapter)\n", " \n", " if (ThisSentence!=PreviousSentence):\n", " PreviousSentence=ThisSentence\n", " ThisSentencePointer = cv.node('sentence')\n", " cv.feature(ThisSentencePointer, verse=ThisVerse, chapter=ThisChapter) \n", "\n", " \n", " ###############################################\n", " # analyze and process tags #\n", " ###############################################\n", " \n", " # get number of parent nodes (this differs per word)\n", " PrevWordGroupList=WordGroupList\n", " WordGroupList=[] # stores current active WordGroup numbers\n", "\n", " for i in range(NumberOfParents-2,0,-1): # important: reversed itteration!\n", " _WGN=row[IndexDict.get(\"i_Parent{}WGN\".format(i))]\n", " if isinstance(_WGN, type(None)): \n", " # handling conditions where XML data has e.g. Acts 26:12\n", " # to recover, we need to create a dummy WG with a sufficient high WGN so it can never match any real WGN. \n", " WGN=DummyWGN\n", " else:\n", " WGN=int(_WGN)\n", " if WGN!='':\n", " WordGroupList.append(WGN)\n", " WordGroupDict[(WGN,0)]=WGN\n", " WordGroupDict[(WGN,1)]=sanitize(row[IndexDict.get(\"i_Parent{}Rule\".format(i))])\n", " WordGroupDict[(WGN,2)]=sanitize(row[IndexDict.get(\"i_Parent{}Cltype\".format(i))])\n", " WordGroupDict[(WGN,3)]=sanitize(row[IndexDict.get(\"i_Parent{}Junction\".format(i))])\n", " WordGroupDict[(WGN,6)]=sanitize(row[IndexDict.get(\"i_Parent{}Class\".format(i))])\n", " WordGroupDict[(WGN,7)]=sanitize(row[IndexDict.get(\"i_Parent{}Role\".format(i))])\n", " WordGroupDict[(WGN,8)]=sanitize(row[IndexDict.get(\"i_Parent{}Type\".format(i))])\n", " WordGroupDict[(WGN,9)]=sanitize(row[IndexDict.get(\"i_Parent{}Appos\".format(i))]) \n", " WordGroupDict[(WGN,10)]=NumberOfParents-1-i # = number of parent wordgroups \n", " if not PrevWordGroupList==WordGroupList:\n", " if RootWordGroup != WordGroupList[0]:\n", " RootWordGroup = WordGroupList[0]\n", " SuspendableWordGoupList = []\n", " # we have a new sentence. rebuild suspendable wordgroup list\n", " # some cleaning of data may be added here to save on memmory... 
\n", " #for k in range(6): del WordGroupDict[item,k]\n", " for item in reversed(PrevWordGroupList):\n", " if (item not in WordGroupList):\n", " # CLOSE/SUSPEND CASE\n", " SuspendableWordGoupList.append(item)\n", " cv.terminate(WordGroupDict[item,4])\n", " for item in WordGroupList:\n", " if (item not in PrevWordGroupList):\n", " if (item in SuspendableWordGoupList):\n", " # RESUME CASE\n", " #print ('\\n resume: '+str(item),end=' ')\n", " cv.resume(WordGroupDict[(item,4)])\n", " else:\n", " # CREATE CASE\n", " #print ('\\n create: '+str(item),end=' ')\n", " WordGroupDict[(item,4)]=cv.node('wg')\n", " WordGroupDict[(item,5)]=WordGroupTrack\n", " WordGroupTrack += 1\n", " cv.feature(WordGroupDict[(item,4)], wordgroup=WordGroupDict[(item,0)], junction=WordGroupDict[(item,3)], \n", " clausetype=WordGroupDict[(item,2)], rule=WordGroupDict[(item,1)], wgclass=WordGroupDict[(item,6)], \n", " wgrole=WordGroupDict[(item,7)],wgrolelong=ExpandRole(WordGroupDict[(item,7)]),\n", " wgtype=WordGroupDict[(item,8)],appos=WordGroupDict[(item,9)],wglevel=WordGroupDict[(item,10)])\n", "\n", " \n", " \n", " # These roles are performed either by a WG or just a single word.\n", " Role=row[IndexDict.get(\"i_role\")]\n", " ValidRoles=[\"adv\",\"io\",\"o\",\"o2\",\"s\",\"p\",\"v\",\"vc\",\"aux\"]\n", " DistanceToRoleClause=0\n", " if isinstance (Role,str) and Role in ValidRoles: \n", " # the role is assigned to this word (uniquely)\n", " WordRole=Role\n", " WordRoleLong=ExpandRole(WordRole)\n", " else:\n", " # the role details need to be taken from an uptree wordgroup \n", " WordRole=WordRoleLong=''\n", " for i in range(1,NumberOfParents-1):\n", " Role = row[IndexDict.get(\"i_Parent{}Role\".format(i))]\n", " if isinstance (Role,str) and Role in ValidRoles: \n", " WordRole=Role\n", " WordRoleLong=ExpandRole(WordRole)\n", " DistanceToRoleClause=i\n", " break\n", " \n", "\n", " ###############################################\n", " # analyze and process tags #\n", " ###############################################\n", " \n", " # determine syntactic categories at word level. \n", " PartOfSpeech=sanitize(row[IndexDict.get(\"i_class\")])\n", " PartOfSpeechFull=ExpandSP(PartOfSpeech)\n", " \n", " # some attributes are not present inside some (small) books. 
The following is to prevent exceptions.\n", " degree='' \n", " if 'i_degree' in IndexDict: degree=sanitize(row[IndexDict.get(\"i_degree\")]) \n", " subjref=''\n", " if 'i_subjref' in IndexDict: subjref=sanitize(row[IndexDict.get(\"i_subjref\")]) \n", "\n", " \n", " # create the word slots\n", " this_word = cv.slot()\n", " cv.feature(this_word, \n", " after= sanitize(row[IndexDict.get(\"i_after\")]),\n", " id= sanitize(row[IndexDict.get(\"i_id\")]),\n", " unicode= sanitize(row[IndexDict.get(\"i_unicode\")]),\n", " word= sanitize(row[IndexDict.get(\"i_word\")]),\n", " monad= sanitize(row[IndexDict.get(\"i_monad\")]),\n", " orig_order= FoundWords,\n", " book_long= sanitize(row[IndexDict.get(\"i_book_long\")]),\n", " booknumber= BookNumber,\n", " bookshort= sanitize(row[IndexDict.get(\"i_book_short\")]),\n", " chapter= ThisChapter,\n", " ref= sanitize(row[IndexDict.get(\"i_ref\")]),\n", " sp= PartOfSpeech,\n", " sp_full= PartOfSpeechFull,\n", " verse= ThisVerse,\n", " sentence= ThisSentence,\n", " normalized= sanitize(row[IndexDict.get(\"i_normalized\")]),\n", " morph= sanitize(row[IndexDict.get(\"i_morph\")]),\n", " strongs= sanitize(row[IndexDict.get(\"i_strong\")]),\n", " lex_dom= sanitize(row[IndexDict.get(\"i_domain\")]),\n", " ln= sanitize(row[IndexDict.get(\"i_ln\")]),\n", " gloss= sanitize(row[IndexDict.get(\"i_gloss\")]),\n", " gn= sanitize(row[IndexDict.get(\"i_gender\")]),\n", " nu= sanitize(row[IndexDict.get(\"i_number\")]),\n", " case= sanitize(row[IndexDict.get(\"i_case\")]),\n", " lemma= sanitize(row[IndexDict.get(\"i_lemma\")]),\n", " person= sanitize(row[IndexDict.get(\"i_person\")]),\n", " mood= sanitize(row[IndexDict.get(\"i_mood\")]),\n", " tense= sanitize(row[IndexDict.get(\"i_tense\")]),\n", " number= sanitize(row[IndexDict.get(\"i_number\")]),\n", " voice= sanitize(row[IndexDict.get(\"i_voice\")]),\n", " degree= degree,\n", " type= sanitize(row[IndexDict.get(\"i_type\")]),\n", " reference= sanitize(row[IndexDict.get(\"i_ref\")]), \n", " subj_ref= subjref,\n", " nodeID= sanitize(row[1]), #this is a fixed position in dataframe\n", " wordrole= WordRole,\n", " wordrolelong= WordRoleLong,\n", " wordlevel= NumberOfParents-1,\n", " roleclausedistance = DistanceToRoleClause\n", " )\n", " cv.terminate(this_word)\n", "\n", " \n", " '''\n", " wrap up the book. 
At the end of the book we need to close all nodes in proper order.\n", " ''' \n", " for item in WordGroupList:\n", " #cv.feature(WordGroupDict[(item,4)], add some stats?)\n", " cv.terminate(WordGroupDict[item,4])\n", " #cv.feature(ThisSentencePointer, statdata?)\n", " cv.terminate(ThisSentencePointer)\n", " #cv.feature(ThisVersePointer, statdata?)\n", " cv.terminate(ThisVersePointer)\n", " #cv.feature(ThisChapterPonter, statdata?)\n", " cv.terminate(ThisChapterPointer) \n", " #cv.feature(ThisBookPointer, statdata?)\n", " cv.terminate(ThisBookPointer)\n", "\n", " # clear dataframe for this book, clear the index dictionary\n", " del df\n", " IndexDict.clear()\n", " gc.collect()\n", " \n", " ###############################################\n", " # end of section executed for each book #\n", " ###############################################\n", "\n", " ###############################################\n", " # end of director function #\n", " ###############################################\n", " \n", "###############################################\n", "# Output definitions #\n", "###############################################\n", " \n", "slotType = 'word' \n", "otext = { # dictionary of config data for sections and text formats\n", " 'fmt:text-orig-full':'{word}{after}',\n", " 'sectionTypes':'book,chapter,verse',\n", " 'sectionFeatures':'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " }\n", "\n", "# configure metadata\n", "generic = { # dictionary of metadata meant for all features\n", " 'Name': 'Greek New Testament (NA1904)',\n", " 'Version': '1904',\n", " 'Editors': 'Nestle',\n", " 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat',\n", " 'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)', \n", " 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', \n", " 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', \n", " 'Convertor_source': 'https://github.com/tonyjurg/n1904_lft',\n", " 'Converter_version': '{}'.format(version),\n", " 'TextFabric version': '{}'.format(VERSION) #imported from tf.parameters\n", " }\n", "\n", "# set of integer valued feature names\n", "intFeatures = { \n", " 'booknumber',\n", " 'chapter',\n", " 'verse',\n", " 'sentence',\n", " 'wordgroup',\n", " 'orig_order',\n", " 'monad',\n", " 'wglevel'\n", " }\n", "\n", "# per feature dicts with metadata\n", "featureMeta = { \n", " 'after': {'description': 'Characters (eg. punctuations) following the word'},\n", " 'id': {'description': 'id of the word'},\n", " 'book': {'description': 'Book'},\n", " 'book_long': {'description': 'Book name (fully spelled out)'},\n", " 'booknumber': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n", " 'bookshort': {'description': 'Book name (abbreviated)'},\n", " 'chapter': {'description': 'Chapter number inside book'},\n", " 'verse': {'description': 'Verse number inside chapter'},\n", " 'sentence': {'description': 'Sentence number (counted per chapter)'},\n", " 'type': {'description': 'Wordgroup type information (verb, verbless, elided, minor, etc.)'},\n", " 'rule': {'description': 'Wordgroup rule information '},\n", " 'orig_order': {'description': 'Word order within corpus (per book)'},\n", " 'monad': {'description': 'Monad (currently: order of words in XML tree file!)'},\n", " 'word': {'description': 'Word as it appears in the text (excl. 
punctuations)'},\n", " 'unicode': {'description': 'Word as it appears in the text in Unicode (incl. punctuation)'},\n", " 'ref': {'description': 'ref Id'},\n", " 'sp': {'description': 'Part of Speech (abbreviated)'},\n", " 'sp_full': {'description': 'Part of Speech (long description)'}, \n", " 'normalized': {'description': 'Surface word stripped of punctuation'},\n", " 'lemma': {'description': 'Lexeme (lemma)'},\n", " 'morph': {'description': 'Morphological tag (Sandborg-Petersen morphology)'},\n", " # see also discussion on relation between lex_dom and ln \n", " # @ https://github.com/Clear-Bible/macula-greek/issues/29\n", " 'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},\n", " 'ln': {'description': 'Louw-Nida lexical classification (not present everywhere?)'},\n", " 'strongs': {'description': 'Strongs number'},\n", " 'gloss': {'description': 'English gloss'},\n", " 'gn': {'description': 'Grammatical gender (Masculine, Feminine, Neuter)'},\n", " 'nu': {'description': 'Grammatical number (Singular, Plural)'},\n", " 'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n", " 'person': {'description': 'Grammatical person of the verb (first, second, third)'},\n", " 'mood': {'description': 'Grammatical mood of the verb (passive, etc)'},\n", " 'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},\n", " 'number': {'description': 'Grammatical number of the verb'},\n", " 'voice': {'description': 'Grammatical voice of the verb'},\n", " 'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},\n", " 'type': {'description': 'Grammatical type of noun or pronoun (e.g. Common, Personal)'},\n", " 'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},\n", " 'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processed)'},\n", " 'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processed)'},\n", " 'junction': {'description': 'Junction data related to a wordgroup'},\n", " 'wordgroup': {'description': 'Wordgroup number (counted per book)'},\n", " 'wgclass': {'description': 'Class of the wordgroup'},\n", " 'wgrole': {'description': 'Role of the wordgroup (abbreviated)'},\n", " 'wgrolelong': {'description': 'Role of the wordgroup (full)'},\n", " 'wordrole': {'description': 'Role of the word (abbreviated)'},\n", " 'wordrolelong':{'description': 'Role of the word (full)'},\n", " 'wgtype': {'description': 'Wordgroup type details'},\n", " 'clausetype': {'description': 'Clause type details'},\n", " 'appos': {'description': 'Apposition details'},\n", " 'wglevel': {'description': 'Number of parent wordgroups for a wordgroup'},\n", " 'wordlevel': {'description': 'Number of parent wordgroups for a word'},\n", " 'roleclausedistance': {'description': 'Distance to the wordgroup defining the role of this word'}\n", " }\n", "\n", "\n", "###############################################\n", "# the main function #\n", "###############################################\n", "\n", "good = cv.walk(\n", " director,\n", " slotType,\n", " otext=otext,\n", " generic=generic,\n", " intFeatures=intFeatures,\n", " featureMeta=featureMeta,\n", " warn=True,\n", " force=True\n", ")\n", "\n", "if good:\n", " print (\"done\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [], "toc": true }, "source": [ "## 5. Basic testing of the Text-Fabric data \n", "##### [Back to 
TOC](#TOC)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Step 1 load the TF data\n", "\n", "The TF will be loaded from github repository https://github.com/tonyjurg/n1904_lft" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "ExecuteTime": { "end_time": "2022-10-21T02:32:54.197994Z", "start_time": "2022-10-21T02:32:53.217806Z" } }, "outputs": [], "source": [ "# First, I have to laod different modules that I use for analyzing the data and for plotting:\n", "import sys, os, collections\n", "import pandas as pd\n", "import numpy as np\n", "import re\n", "\n", "\n", "from tf.fabric import Fabric\n", "from tf.app import use\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell loads the TextFabric files from github repository. " ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "ExecuteTime": { "end_time": "2022-10-21T02:32:55.906200Z", "start_time": "2022-10-21T02:32:55.012231Z" } }, "outputs": [ { "data": { "text/markdown": [ "**Locating corpus resources ...**" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "app: ~/text-fabric-data/github/tonyjurg/n1904_lft/app" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "data: ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ " | 0.29s T otype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 2.65s T oslots from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.73s T word from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.00s T book from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.57s T chapter from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.59s T after from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.57s T verse from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | | 0.07s C __levels__ from otype, oslots, otext\n", " | | 1.80s C __order__ from otype, oslots, __levels__\n", " | | 0.08s C __rank__ from otype, __order__\n", " | | 4.72s C __levUp__ from otype, oslots, __rank__\n", " | | 2.34s C __levDown__ from otype, __levUp__, __rank__\n", " | | 0.06s C __characters__ from otext\n", " | | 1.23s C __boundary__ from otype, oslots, __rank__\n", " | | 0.05s C __sections__ from otype, oslots, otext, __levUp__, __levels__, book, chapter, verse\n", " | | 0.26s C __structure__ from otype, oslots, otext, __rank__, __levUp__, book, chapter, verse\n", " | 0.43s T appos from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.59s T book_long from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.49s T booknumber from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.58s T bookshort from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.55s T case from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.40s T clausetype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.49s T degree from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.66s T gloss from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.56s T gn from 
~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.77s T id from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.44s T junction from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.67s T lemma from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.63s T lex_dom from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.65s T ln from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.55s T monad from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.53s T mood from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.62s T morph from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.77s T nodeID from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.69s T normalized from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.58s T nu from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.57s T number from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.54s T orig_order from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.53s T person from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.76s T ref from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.58s T roleclausedistance from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.50s T rule from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.51s T sentence from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.62s T sp from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.60s T sp_full from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.66s T strongs from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.53s T subj_ref from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.53s T tense from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.55s T type from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.75s T unicode from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.53s T voice from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.47s T wgclass from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.43s T wglevel from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.44s T wgrole from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.47s T wgrolelong from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.43s T wgtype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.46s T wordgroup from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.60s T wordlevel from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.59s T wordrole from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n", " | 0.68s T wordrolelong from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.7\n" ] }, { "data": { "text/html": [ "\n", " Text-Fabric: Text-Fabric API 11.4.5, tonyjurg/n1904_lft/app v3, Search Reference
\n", " Data: tonyjurg - n1904_lft 0.1.7, Character table, Feature docs
\n", "
Node types\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
Name# of nodes# slots/node% coverage
book275102.93100
chapter260529.92100
verse794317.35100
sentence801117.20100
wg1134517.58624
word1377791.00100
\n", " Sets: no custom sets
\n", " Features:
\n", "
Nestle 1904\n", "
\n", "\n", "
\n", "
\n", "after\n", "
\n", "
str
\n", "\n", " Characters (eg. punctuations) following the word\n", "\n", "
\n", "\n", "
\n", "
\n", "appos\n", "
\n", "
str
\n", "\n", " Apposition details\n", "\n", "
\n", "\n", "
\n", "
\n", "book\n", "
\n", "
str
\n", "\n", " Book\n", "\n", "
\n", "\n", "
\n", "
\n", "book_long\n", "
\n", "
str
\n", "\n", " Book name (fully spelled out)\n", "\n", "
\n", "\n", "
\n", "
\n", "booknumber\n", "
\n", "
int
\n", "\n", " NT book number (Matthew=1, Mark=2, ..., Revelation=27)\n", "\n", "
\n", "\n", "
\n", "
\n", "bookshort\n", "
\n", "
str
\n", "\n", " Book name (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "case\n", "
\n", "
str
\n", "\n", " Gramatical case (Nominative, Genitive, Dative, Accusative, Vocative)\n", "\n", "
\n", "\n", "
\n", "
\n", "chapter\n", "
\n", "
int
\n", "\n", " Chapter number inside book\n", "\n", "
\n", "\n", "
\n", "
\n", "clausetype\n", "
\n", "
str
\n", "\n", " Clause type details\n", "\n", "
\n", "\n", "
\n", "
\n", "degree\n", "
\n", "
str
\n", "\n", " Degree (e.g. Comparitative, Superlative)\n", "\n", "
\n", "\n", "
\n", "
\n", "gloss\n", "
\n", "
str
\n", "\n", " English gloss\n", "\n", "
\n", "\n", "
\n", "
\n", "gn\n", "
\n", "
str
\n", "\n", " Gramatical gender (Masculine, Feminine, Neuter)\n", "\n", "
\n", "\n", "
\n", "
\n", "id\n", "
\n", "
str
\n", "\n", " id of the word\n", "\n", "
\n", "\n", "
\n", "
\n", "junction\n", "
\n", "
str
\n", "\n", " Junction data related to a wordgroup\n", "\n", "
\n", "\n", "
\n", "
\n", "lemma\n", "
\n", "
str
\n", "\n", " Lexeme (lemma)\n", "\n", "
\n", "\n", "
\n", "
\n", "lex_dom\n", "
\n", "
str
\n", "\n", " Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "ln\n", "
\n", "
str
\n", "\n", " Lauw-Nida lexical classification (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "monad\n", "
\n", "
int
\n", "\n", " Monad (currently: order of words in XML tree file!)\n", "\n", "
\n", "\n", "
\n", "
\n", "mood\n", "
\n", "
str
\n", "\n", " Gramatical mood of the verb (passive, etc)\n", "\n", "
\n", "\n", "
\n", "
\n", "morph\n", "
\n", "
str
\n", "\n", " Morphological tag (Sandborg-Petersen morphology)\n", "\n", "
\n", "\n", "
\n", "
\n", "nodeID\n", "
\n", "
str
\n", "\n", " Node ID (as in the XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "normalized\n", "
\n", "
str
\n", "\n", " Surface word stripped of punctations\n", "\n", "
\n", "\n", "
\n", "
\n", "nu\n", "
\n", "
str
\n", "\n", " Gramatical number (Singular, Plural)\n", "\n", "
\n", "\n", "
\n", "
\n", "number\n", "
\n", "
str
\n", "\n", " Gramatical number of the verb\n", "\n", "
\n", "\n", "
\n", "
\n", "orig_order\n", "
\n", "
int
\n", "\n", " Word order within corpus (per book)\n", "\n", "
\n", "\n", "
\n", "
\n", "otype\n", "
\n", "
str
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "person\n", "
\n", "
str
\n", "\n", " Gramatical person of the verb (first, second, third)\n", "\n", "
\n", "\n", "
\n", "
\n", "ref\n", "
\n", "
str
\n", "\n", " ref Id\n", "\n", "
\n", "\n", "
\n", "
\n", "roleclausedistance\n", "
\n", "
str
\n", "\n", " distance to wordgroup defining the role of this word\n", "\n", "
\n", "\n", "
\n", "
\n", "rule\n", "
\n", "
str
\n", "\n", " Wordgroup rule information \n", "\n", "
\n", "\n", "
\n", "
\n", "sentence\n", "
\n", "
int
\n", "\n", " Sentence number (counted per chapter)\n", "\n", "
\n", "\n", "
\n", "
\n", "sp\n", "
\n", "
str
\n", "\n", " Part of Speech (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "sp_full\n", "
\n", "
str
\n", "\n", " Part of Speech (long description)\n", "\n", "
\n", "\n", "
\n", "
\n", "strongs\n", "
\n", "
str
\n", "\n", " Strongs number\n", "\n", "
\n", "\n", "
\n", "
\n", "subj_ref\n", "
\n", "
str
\n", "\n", " Subject reference (to nodeID in XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "tense\n", "
\n", "
str
\n", "\n", " Gramatical tense of the verb (e.g. Present, Aorist)\n", "\n", "
\n", "\n", "
\n", "
\n", "type\n", "
\n", "
str
\n", "\n", " Gramatical type of noun or pronoun (e.g. Common, Personal)\n", "\n", "
\n", "\n", "
\n", "
\n", "unicode\n", "
\n", "
str
\n", "\n", " Word as it arears in the text in Unicode (incl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "verse\n", "
\n", "
int
\n", "\n", " Verse number inside chapter\n", "\n", "
\n", "\n", "
\n", "
\n", "voice\n", "
\n", "
str
\n", "\n", " Gramatical voice of the verb\n", "\n", "
\n", "\n", "
\n", "
\n", "wgclass\n", "
\n", "
str
\n", "\n", " Class of the wordgroup ()\n", "\n", "
\n", "\n", "
\n", "
\n", "wglevel\n", "
\n", "
int
\n", "\n", " number of parent wordgroups for a wordgroup\n", "\n", "
\n", "\n", "
\n", "
\n", "wgrole\n", "
\n", "
str
\n", "\n", " Role of the wordgroup (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgrolelong\n", "
\n", "
str
\n", "\n", " Role of the wordgroup (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgtype\n", "
\n", "
str
\n", "\n", " Wordgroup type details\n", "\n", "
\n", "\n", "
\n", "
\n", "word\n", "
\n", "
str
\n", "\n", " Word as it appears in the text (excl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordgroup\n", "
\n", "
int
\n", "\n", " Wordgroup number (counted per book)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordlevel\n", "
\n", "
str
\n", "\n", " number of parent wordgroups for a word\n", "\n", "
\n", "\n", "
\n", "
\n", "wordrole\n", "
\n", "
str
\n", "\n", " Role of the word (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordrolelong\n", "
\n", "
str
\n", "\n", " Role of the word (full)\n", "\n", "
\n", "\n", "
\n", "
\n", "oslots\n", "
\n", "
none
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Text-Fabric API: names N F E L T S C TF directly usable

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Loading-the-New-Testament-Text-Fabric (add a specific version, eg. 0.1.2)\n", "NA = use (\"tonyjurg/n1904_lft\", version=\"0.1.7\", hoist=globals())" ] }, { "cell_type": "code", "execution_count": 128, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'word': 0, 'wg': 1, 'sentence': 2, 'verse': 3, 'chapter': 4, 'book': 5}" ] }, "execution_count": 128, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N.otypeRank\n" ] }, { "cell_type": "code", "execution_count": 168, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'text-orig-full': 'word'}" ] }, "execution_count": 168, "metadata": {}, "output_type": "execute_result" } ], "source": [ "T.formats" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2 Perform some basic display \n", "\n", "note: the implementation with regards how phrases need to be displayed (esp. with regards to conjunctions) is still to be done." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0.01s 25 results\n" ] }, { "data": { "text/html": [ "

verse 1" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 1
sentence
wg
wglevel=1
wg P2CL Verbless
wglevel=2
wg NPofNP Predicate
wglevel=3
Βίβλος
roleclausedistance=1sp=nounwordlevel=4wordrole=p
wg NPofNP
wglevel=4
γενέσεως
roleclausedistance=2sp=nounwordlevel=5wordrole=p
wg Np-Appos
wglevel=5
wg Np-Appos
wglevel=6
wg Np-Appos
wglevel=7
Ἰησοῦ
roleclausedistance=5sp=nounwordlevel=8wordrole=p
Χριστοῦ
roleclausedistance=5sp=nounwordlevel=8wordrole=p
wg NPofNP apposition
wglevel=7
υἱοῦ
roleclausedistance=5sp=nounwordlevel=8wordrole=p
Δαυεὶδ
roleclausedistance=5sp=nounwordlevel=8wordrole=p
wg NPofNP apposition
wglevel=6
υἱοῦ
roleclausedistance=4sp=nounwordlevel=7wordrole=p
Ἀβραάμ.
roleclausedistance=4sp=nounwordlevel=7wordrole=p
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 2" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 2
sentence
wg
wglevel=1
wg Conj13CL
wglevel=2
wg S-V-O coordinate
wglevel=3
Ἀβραὰμ
roleclausedistance=0sp=nounwordlevel=4wordrole=s
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=4wordrole=v
wg DetNP Object
wglevel=4
τὸν
roleclausedistance=1sp=detwordlevel=5wordrole=o
Ἰσαάκ,
roleclausedistance=1sp=nounwordlevel=5wordrole=o
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἰσαὰκ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἰακώβ,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἰακὼβ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg NpaNp Object
wglevel=5
wg DetNP
wglevel=6
τὸν
roleclausedistance=2sp=detwordlevel=7wordrole=o
Ἰούδαν
roleclausedistance=2sp=nounwordlevel=7wordrole=o
wg
wglevel=6
καὶ
roleclausedistance=2sp=conjwordlevel=7wordrole=o
wg DetNP
wglevel=7
τοὺς
roleclausedistance=3sp=detwordlevel=8wordrole=o
wg NPofNP
wglevel=8
ἀδελφοὺς
roleclausedistance=4sp=nounwordlevel=9wordrole=o
αὐτοῦ,
roleclausedistance=4sp=pronwordlevel=9wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 3" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 3
sentence
wg
wglevel=1
wg Conj13CL
wglevel=2
wg
wglevel=3
wg S-V-O-ADV coordinate
wglevel=4
Ἰούδας
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O-ADV coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg NpaNp Object
wglevel=5
wg DetNP
wglevel=6
τὸν
roleclausedistance=2sp=detwordlevel=7wordrole=o
Φαρὲς
roleclausedistance=2sp=nounwordlevel=7wordrole=o
wg
wglevel=6
καὶ
roleclausedistance=2sp=conjwordlevel=7wordrole=o
wg DetNP
wglevel=7
τὸν
roleclausedistance=3sp=detwordlevel=8wordrole=o
Ζαρὰ
roleclausedistance=3sp=nounwordlevel=8wordrole=o
wg PrepNp Adverbial
wglevel=5
ἐκ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
wg DetNP
wglevel=6
τῆς
roleclausedistance=2sp=detwordlevel=7wordrole=adv
Θάμαρ,
roleclausedistance=2sp=nounwordlevel=7wordrole=adv
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Φαρὲς
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἐσρώμ,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἐσρὼμ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἀράμ,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 4" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 4
sentence
wg
wglevel=1
wg Conj13CL
wglevel=2
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἀρὰμ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἀμιναδάβ,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἀμιναδὰβ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ναασσών,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ναασσὼν
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Σαλμών,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 5" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 5
sentence
wg
wglevel=1
wg Conj13CL
wglevel=2
wg
wglevel=3
wg S-V-O-ADV coordinate
wglevel=4
Σαλμὼν
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O-ADV coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Βόες
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg PrepNp Adverbial
wglevel=5
ἐκ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
wg DetNP
wglevel=6
τῆς
roleclausedistance=2sp=detwordlevel=7wordrole=adv
Ῥαχάβ,
roleclausedistance=2sp=nounwordlevel=7wordrole=adv
wg
wglevel=3
wg S-V-O-ADV coordinate
wglevel=4
Βόες
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O-ADV coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἰωβὴδ
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg PrepNp Adverbial
wglevel=5
ἐκ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
wg DetNP
wglevel=6
τῆς
roleclausedistance=2sp=detwordlevel=7wordrole=adv
Ῥούθ,
roleclausedistance=2sp=nounwordlevel=7wordrole=adv
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἰωβὴδ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Ἰεσσαί,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 6" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 6
sentence
wg
wglevel=1
wg Conj13CL
wglevel=2
wg
wglevel=3
wg S-V-O coordinate
wglevel=4
Ἰεσσαὶ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=4
wg S-V-O coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg Np-Appos Object
wglevel=5
wg DetNP
wglevel=6
τὸν
roleclausedistance=2sp=detwordlevel=7wordrole=o
Δαυεὶδ
roleclausedistance=2sp=nounwordlevel=7wordrole=o
wg DetNP apposition
wglevel=6
τὸν
roleclausedistance=2sp=detwordlevel=7wordrole=o
βασιλέα.
roleclausedistance=2sp=nounwordlevel=7wordrole=o
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg S-V-O-ADV coordinate
wglevel=4
Δαυεὶδ
roleclausedistance=0sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=3
wg Conj14CL
wglevel=3
wg S-V-O-ADV coordinate
wglevel=4
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Σολομῶνα
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg PrepNp Adverbial
wglevel=5
ἐκ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
wg NPofNP
wglevel=6
τῆς
roleclausedistance=2sp=detwordlevel=7wordrole=adv
wg DetNP
wglevel=7
τοῦ
roleclausedistance=3sp=detwordlevel=8wordrole=adv
Οὐρίου,
roleclausedistance=3sp=nounwordlevel=8wordrole=adv
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 7" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 7
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Σολομὼν
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ῥοβοάμ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ῥοβοὰμ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀβιά,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀβιὰ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀσάφ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 8" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 8
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀσὰφ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἰωσαφάτ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἰωσαφὰτ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἰωράμ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἰωρὰμ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ὀζείαν,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 9" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 9
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ὀζείας
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἰωαθάμ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἰωαθὰμ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἄχαζ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἄχαζ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἐζεκίαν,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 10" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 10
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἐζεκίας
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Μανασσῆ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Μανασσῆς
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀμώς,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀμὼς
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἰωσείαν,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 11" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 11
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj14CL
wglevel=3
wg
wglevel=4
wg S-V-O-ADV coordinate
wglevel=5
Ἰωσείας
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O-ADV coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg NpaNp Object
wglevel=6
wg DetNP
wglevel=7
τὸν
roleclausedistance=2sp=detwordlevel=8wordrole=o
Ἰεχονίαν
roleclausedistance=2sp=nounwordlevel=8wordrole=o
wg
wglevel=7
καὶ
roleclausedistance=2sp=conjwordlevel=8wordrole=o
wg DetNP
wglevel=8
τοὺς
roleclausedistance=3sp=detwordlevel=9wordrole=o
wg NPofNP
wglevel=9
ἀδελφοὺς
roleclausedistance=4sp=nounwordlevel=10wordrole=o
αὐτοῦ
roleclausedistance=4sp=pronwordlevel=10wordrole=o
wg PrepNp Adverbial
wglevel=6
ἐπὶ
roleclausedistance=1sp=prepwordlevel=7wordrole=adv
wg DetNP
wglevel=7
τῆς
roleclausedistance=2sp=detwordlevel=8wordrole=adv
wg NPofNP
wglevel=8
μετοικεσίας
roleclausedistance=3sp=nounwordlevel=9wordrole=adv
Βαβυλῶνος.
roleclausedistance=3sp=nounwordlevel=9wordrole=adv
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 12" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 12
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj12CL
wglevel=3
wg ADV-S-V-O coordinate
wglevel=4
wg PrepNp Adverbial
wglevel=5
Μετὰ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
δὲ
roleclausedistance=0sp=conjwordlevel=3
wg Conj12CL
wglevel=3
wg ADV-S-V-O coordinate
wglevel=4
wg PrepNp Adverbial
wglevel=5
wg DetNP
wglevel=6
τὴν
roleclausedistance=2sp=detwordlevel=7wordrole=adv
wg NPofNP
wglevel=7
μετοικεσίαν
roleclausedistance=3sp=nounwordlevel=8wordrole=adv
Βαβυλῶνος
roleclausedistance=3sp=nounwordlevel=8wordrole=adv
Ἰεχονίας
roleclausedistance=0sp=nounwordlevel=5wordrole=s
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg DetNP Object
wglevel=5
τὸν
roleclausedistance=1sp=detwordlevel=6wordrole=o
Σαλαθιήλ,
roleclausedistance=1sp=nounwordlevel=6wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Σαλαθιὴλ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ζοροβαβέλ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 13" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 13
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj12CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ζοροβαβὲλ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀβιούδ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀβιοὺδ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἐλιακείμ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἐλιακεὶμ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀζώρ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 14" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 14
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj12CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀζὼρ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Σαδώκ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Σαδὼκ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἀχείμ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἀχεὶμ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἐλιούδ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 15" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 15
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj12CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἐλιοὺδ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἐλεάζαρ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἐλεάζαρ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Μαθθάν,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Μαθθὰν
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg DetNP Object
wglevel=6
τὸν
roleclausedistance=1sp=detwordlevel=7wordrole=o
Ἰακώβ,
roleclausedistance=1sp=nounwordlevel=7wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 16" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 16
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj12CL
wglevel=3
wg
wglevel=4
wg S-V-O coordinate
wglevel=5
Ἰακὼβ
roleclausedistance=0sp=nounwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=5
wg S-V-O coordinate
wglevel=5
ἐγέννησεν
roleclausedistance=0sp=verbwordlevel=6wordrole=v
wg Np-Appos Object
wglevel=6
wg DetNP
wglevel=7
τὸν
roleclausedistance=2sp=detwordlevel=8wordrole=o
Ἰωσὴφ
roleclausedistance=2sp=nounwordlevel=8wordrole=o
wg DetNP apposition
wglevel=7
τὸν
roleclausedistance=2sp=detwordlevel=8wordrole=o
wg NPofNP
wglevel=8
ἄνδρα
roleclausedistance=3sp=nounwordlevel=9wordrole=o
wg NP-CL
wglevel=9
Μαρίας,
roleclausedistance=4sp=nounwordlevel=10wordrole=o
wg ADV-V-S apposition
wglevel=10
wg PrepNp Adverbial
wglevel=11
ἐξ
roleclausedistance=1sp=prepwordlevel=12wordrole=adv
ἧς
roleclausedistance=1sp=pronwordlevel=12wordrole=adv
ἐγεννήθη
roleclausedistance=0sp=verbwordlevel=11wordrole=v
wg Np-Appos Subject
wglevel=11
Ἰησοῦς
roleclausedistance=1sp=nounwordlevel=12wordrole=s
wg DetCL apposition
wglevel=12
roleclausedistance=2sp=detwordlevel=13wordrole=s
wg VC-P
wglevel=13
λεγόμενος
roleclausedistance=0sp=verbwordlevel=14wordrole=vc
Χριστός.
roleclausedistance=0sp=nounwordlevel=14wordrole=p
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 17" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 17
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg Conj3CL
wglevel=3
wg S-P Verbless coordinate
wglevel=4
wg All-NP Subject
wglevel=5
Πᾶσαι
roleclausedistance=1sp=adjwordlevel=6wordrole=s
οὖν
roleclausedistance=0sp=conjwordlevel=3
wg Conj3CL
wglevel=3
wg S-P Verbless coordinate
wglevel=4
wg All-NP Subject
wglevel=5
wg DetNP
wglevel=6
αἱ
roleclausedistance=2sp=detwordlevel=7wordrole=s
wg NpPp
wglevel=7
γενεαὶ
roleclausedistance=3sp=nounwordlevel=8wordrole=s
wg 2Pp
wglevel=8
wg PrepNp
wglevel=9
ἀπὸ
roleclausedistance=5sp=prepwordlevel=10wordrole=s
Ἀβραὰμ
roleclausedistance=5sp=nounwordlevel=10wordrole=s
wg PrepNp
wglevel=9
ἕως
roleclausedistance=5sp=prepwordlevel=10wordrole=s
Δαυεὶδ
roleclausedistance=5sp=nounwordlevel=10wordrole=s
wg NpAdjp Predicate
wglevel=5
γενεαὶ
roleclausedistance=1sp=nounwordlevel=6wordrole=p
δεκατέσσαρες,
roleclausedistance=1sp=adjwordlevel=6wordrole=p
wg
wglevel=4
καὶ
roleclausedistance=0sp=conjwordlevel=5
wg S-P Verbless coordinate
wglevel=5
wg 2Pp Subject
wglevel=6
wg PrepNp
wglevel=7
ἀπὸ
roleclausedistance=2sp=prepwordlevel=8wordrole=s
Δαυεὶδ
roleclausedistance=2sp=nounwordlevel=8wordrole=s
wg PrepNp
wglevel=7
ἕως
roleclausedistance=2sp=prepwordlevel=8wordrole=s
wg DetNP
wglevel=8
τῆς
roleclausedistance=3sp=detwordlevel=9wordrole=s
wg NPofNP
wglevel=9
μετοικεσίας
roleclausedistance=4sp=nounwordlevel=10wordrole=s
Βαβυλῶνος
roleclausedistance=4sp=nounwordlevel=10wordrole=s
wg NpAdjp Predicate
wglevel=6
γενεαὶ
roleclausedistance=1sp=nounwordlevel=7wordrole=p
δεκατέσσαρες,
roleclausedistance=1sp=adjwordlevel=7wordrole=p
wg
wglevel=4
καὶ
roleclausedistance=0sp=conjwordlevel=5
wg S-P Verbless coordinate
wglevel=5
wg 2Pp Subject
wglevel=6
wg PrepNp
wglevel=7
ἀπὸ
roleclausedistance=2sp=prepwordlevel=8wordrole=s
wg DetNP
wglevel=8
τῆς
roleclausedistance=3sp=detwordlevel=9wordrole=s
wg NPofNP
wglevel=9
μετοικεσίας
roleclausedistance=4sp=nounwordlevel=10wordrole=s
Βαβυλῶνος
roleclausedistance=4sp=nounwordlevel=10wordrole=s
wg PrepNp
wglevel=7
ἕως
roleclausedistance=2sp=prepwordlevel=8wordrole=s
wg DetNP
wglevel=8
τοῦ
roleclausedistance=3sp=detwordlevel=9wordrole=s
Χριστοῦ
roleclausedistance=3sp=nounwordlevel=9wordrole=s
wg NpAdjp Predicate
wglevel=6
γενεαὶ
roleclausedistance=1sp=nounwordlevel=7wordrole=p
δεκατέσσαρες.
roleclausedistance=1sp=adjwordlevel=7wordrole=p
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 18" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 18
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg S-P-VC
wglevel=3
wg ofNPNP Subject
wglevel=4
wg DetNP
wglevel=5
Τοῦ
roleclausedistance=2sp=detwordlevel=6wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=3
wg S-P-VC
wglevel=3
wg ofNPNP Subject
wglevel=4
wg DetNP
wglevel=5
wg Np-Appos
wglevel=6
Ἰησοῦ
roleclausedistance=3sp=nounwordlevel=7wordrole=s
Χριστοῦ
roleclausedistance=3sp=nounwordlevel=7wordrole=s
wg DetNP
wglevel=5
roleclausedistance=2sp=detwordlevel=6wordrole=s
γένεσις
roleclausedistance=2sp=nounwordlevel=6wordrole=s
οὕτως
roleclausedistance=0sp=advwordlevel=4wordrole=p
ἦν.
roleclausedistance=0sp=verbwordlevel=4wordrole=vc
sentence
wg
wglevel=1
wg ClCl2
wglevel=2
wg V-S-IO
wglevel=3
μνηστευθείσης
roleclausedistance=0sp=verbwordlevel=4wordrole=v
wg Np-Appos Subject
wglevel=4
wg DetNP
wglevel=5
τῆς
roleclausedistance=2sp=detwordlevel=6wordrole=s
wg NPofNP
wglevel=6
μητρὸς
roleclausedistance=3sp=nounwordlevel=7wordrole=s
αὐτοῦ
roleclausedistance=3sp=pronwordlevel=7wordrole=s
Μαρίας
roleclausedistance=1sp=nounwordlevel=5wordrole=s
wg DetNP Indirect Object
wglevel=4
τῷ
roleclausedistance=1sp=detwordlevel=5wordrole=io
Ἰωσήφ,
roleclausedistance=1sp=nounwordlevel=5wordrole=io
wg ADV-V-O
wglevel=3
wg AdvpCL Adverbial subordinate
wglevel=4
πρὶν
roleclausedistance=1sp=advwordlevel=5wordrole=adv
wg sub-CL
wglevel=5
roleclausedistance=2sp=conjwordlevel=6wordrole=adv
wg V-S subordinate
wglevel=6
συνελθεῖν
roleclausedistance=0sp=verbwordlevel=7wordrole=v
αὐτοὺς
roleclausedistance=0sp=pronwordlevel=7wordrole=s
εὑρέθη
roleclausedistance=0sp=verbwordlevel=4wordrole=v
wg ADV-V-ADV Object subordinate
wglevel=4
wg PrepNp Adverbial
wglevel=5
ἐν
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
γαστρὶ
roleclausedistance=1sp=nounwordlevel=6wordrole=adv
ἔχουσα
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg PrepNp Adverbial
wglevel=5
ἐκ
roleclausedistance=1sp=prepwordlevel=6wordrole=adv
wg NpAdjp
wglevel=6
Πνεύματος
roleclausedistance=2sp=nounwordlevel=7wordrole=adv
Ἁγίου.
roleclausedistance=2sp=adjwordlevel=7wordrole=adv
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 19" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 19
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg S-ADV-V-O
wglevel=3
wg Np-Appos Subject
wglevel=4
Ἰωσὴφ
roleclausedistance=1sp=nounwordlevel=5wordrole=s
δὲ
roleclausedistance=0sp=conjwordlevel=3
wg S-ADV-V-O
wglevel=3
wg Np-Appos Subject
wglevel=4
wg DetNP apposition
wglevel=5
roleclausedistance=2sp=detwordlevel=6wordrole=s
wg NPofNP
wglevel=6
ἀνὴρ
roleclausedistance=3sp=nounwordlevel=7wordrole=s
αὐτῆς,
roleclausedistance=3sp=pronwordlevel=7wordrole=s
wg CLaCL Adverbial subordinate
wglevel=4
wg P-VC coordinate
wglevel=5
δίκαιος
roleclausedistance=0sp=adjwordlevel=6wordrole=p
ὢν
roleclausedistance=0sp=verbwordlevel=6wordrole=vc
wg
wglevel=5
καὶ
roleclausedistance=2sp=conjwordlevel=6wordrole=adv
wg ADV-V-O coordinate
wglevel=6
μὴ
roleclausedistance=0sp=advwordlevel=7wordrole=adv
θέλων
roleclausedistance=0sp=verbwordlevel=7wordrole=v
wg O-V Object subordinate
wglevel=7
αὐτὴν
roleclausedistance=0sp=pronwordlevel=8wordrole=o
δειγματίσαι,
roleclausedistance=0sp=verbwordlevel=8wordrole=v
ἐβουλήθη
roleclausedistance=0sp=verbwordlevel=4wordrole=v
wg ADV-V-O Object subordinate
wglevel=4
λάθρᾳ
roleclausedistance=0sp=advwordlevel=5wordrole=adv
ἀπολῦσαι
roleclausedistance=0sp=verbwordlevel=5wordrole=v
αὐτήν.
roleclausedistance=0sp=pronwordlevel=5wordrole=o
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 20" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 20
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg
wglevel=3
wg O-S-V
wglevel=4
ταῦτα
roleclausedistance=0sp=pronwordlevel=5wordrole=o
δὲ
roleclausedistance=0sp=conjwordlevel=3
wg
wglevel=3
wg O-S-V
wglevel=4
αὐτοῦ
roleclausedistance=0sp=pronwordlevel=5wordrole=s
ἐνθυμηθέντος
roleclausedistance=0sp=verbwordlevel=5wordrole=v
ἰδοὺ
roleclausedistance=0sp=verbwordlevel=4wordrole=aux
wg NPofNP Subject
wglevel=4
ἄγγελος
roleclausedistance=1sp=nounwordlevel=5wordrole=s
Κυρίου
roleclausedistance=1sp=nounwordlevel=5wordrole=s
wg PrepNp Adverbial
wglevel=4
κατ’
roleclausedistance=1sp=prepwordlevel=5wordrole=adv
ὄναρ
roleclausedistance=1sp=nounwordlevel=5wordrole=adv
ἐφάνη
roleclausedistance=0sp=verbwordlevel=4wordrole=v
αὐτῷ
roleclausedistance=0sp=pronwordlevel=4wordrole=io
wg Adverbial
wglevel=4
λέγων
roleclausedistance=0sp=verbwordlevel=5wordrole=v
wg CLaCL Object
wglevel=5
wg
wglevel=6
wg Np2CL Minor
wglevel=7
wg Np-Appos Auxiliar
wglevel=8
Ἰωσὴφ
roleclausedistance=1sp=nounwordlevel=9wordrole=aux
wg NPofNP apposition
wglevel=9
υἱὸς
roleclausedistance=2sp=nounwordlevel=10wordrole=aux
Δαυείδ,
roleclausedistance=2sp=nounwordlevel=10wordrole=aux
μὴ
roleclausedistance=0sp=advwordlevel=7wordrole=adv
φοβηθῇς
roleclausedistance=0sp=verbwordlevel=7wordrole=v
wg V-O-O2 Object subordinate
wglevel=7
παραλαβεῖν
roleclausedistance=0sp=verbwordlevel=8wordrole=v
Μαρίαν
roleclausedistance=0sp=nounwordlevel=8wordrole=o
wg DetNP Second Object
wglevel=8
τὴν
roleclausedistance=1sp=detwordlevel=9wordrole=o2
wg NPofNP
wglevel=9
γυναῖκά
roleclausedistance=2sp=nounwordlevel=10wordrole=o2
σου,
roleclausedistance=2sp=pronwordlevel=10wordrole=o2
wg sub-CL Adverbial
wglevel=7
wg S-P-VC subordinate
wglevel=8
wg DetCL Subject
wglevel=9
τὸ
roleclausedistance=1sp=detwordlevel=10wordrole=s
γὰρ
roleclausedistance=1sp=conjwordlevel=8wordrole=adv
wg S-P-VC subordinate
wglevel=8
wg DetCL Subject
wglevel=9
wg ADV-V
wglevel=10
wg PrepNp Adverbial
wglevel=11
ἐν
roleclausedistance=1sp=prepwordlevel=12wordrole=adv
αὐτῇ
roleclausedistance=1sp=pronwordlevel=12wordrole=adv
γεννηθὲν
roleclausedistance=0sp=verbwordlevel=11wordrole=v
wg PrepNp Predicate
wglevel=9
ἐκ
roleclausedistance=1sp=prepwordlevel=10wordrole=p
wg NpAdjp
wglevel=10
Πνεύματός
roleclausedistance=2sp=nounwordlevel=11wordrole=p
ἐστιν
roleclausedistance=0sp=verbwordlevel=9wordrole=vc
wg PrepNp Predicate
wglevel=9
wg NpAdjp
wglevel=10
Ἁγίου·
roleclausedistance=2sp=adjwordlevel=11wordrole=p
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 21" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

verse 21
sentence
wg
wglevel=1
wg Conj-CL
wglevel=2
wg
wglevel=3
wg Adverbial
wglevel=4
wg CLaCL Object
wglevel=5
wg
wglevel=6
wg CLaCL coordinate
wglevel=7
wg V-O coordinate
wglevel=8
τέξεται
roleclausedistance=0sp=verbwordlevel=9wordrole=v
δὲ
roleclausedistance=2sp=conjwordlevel=7wordrole=o
wg CLaCL coordinate
wglevel=7
wg V-O coordinate
wglevel=8
υἱὸν
roleclausedistance=0sp=nounwordlevel=9wordrole=o
wg
wglevel=8
καὶ
roleclausedistance=4sp=conjwordlevel=9wordrole=o
wg
wglevel=9
καλέσεις
roleclausedistance=0sp=verbwordlevel=10wordrole=v
wg DetNP Object
wglevel=10
τὸ
roleclausedistance=1sp=detwordlevel=11wordrole=o
wg NPofNP
wglevel=11
ὄνομα
roleclausedistance=2sp=nounwordlevel=12wordrole=o
αὐτοῦ
roleclausedistance=2sp=pronwordlevel=12wordrole=o
Ἰησοῦν·
roleclausedistance=0sp=nounwordlevel=10wordrole=o2
wg sub-CL Adverbial
wglevel=10
wg S-V-O-ADV subordinate
wglevel=11
αὐτὸς
roleclausedistance=0sp=pronwordlevel=12wordrole=s
γὰρ
roleclausedistance=1sp=conjwordlevel=11wordrole=adv
wg S-V-O-ADV subordinate
wglevel=11
σώσει
roleclausedistance=0sp=verbwordlevel=12wordrole=v
wg DetNP Object
wglevel=12
τὸν
roleclausedistance=1sp=detwordlevel=13wordrole=o
wg NPofNP
wglevel=13
λαὸν
roleclausedistance=2sp=nounwordlevel=14wordrole=o
αὐτοῦ
roleclausedistance=2sp=pronwordlevel=14wordrole=o
wg PrepNp Adverbial
wglevel=12
ἀπὸ
roleclausedistance=1sp=prepwordlevel=13wordrole=adv
wg DetNP
wglevel=13
τῶν
roleclausedistance=2sp=detwordlevel=14wordrole=adv
wg NPofNP
wglevel=14
ἁμαρτιῶν
roleclausedistance=3sp=nounwordlevel=15wordrole=adv
αὐτῶν.
roleclausedistance=3sp=pronwordlevel=15wordrole=adv
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "Search0 = '''\n", "book book=Matthew\n", " chapter chapter=1\n", " \n", " verse \n", "'''\n", "Search0 = NA.search(Search0)\n", "NA.show(Search0, start=1, end=21, condensed=True, extraFeatures={'sp','gloss','wordrole', 'wglevel', 'wordlevel','roleclausedistance'}, suppress={'chapter'}, withNodes=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3 dump some structure information" ] }, { "cell_type": "code", "execution_count": 132, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A heading is a tuple of pairs (node type, feature value)\n", "\tof node types and features that have been configured as structural elements\n", "These 3 structural elements have been configured\n", "\tnode type book with heading feature book\n", "\tnode type chapter with heading feature chapter\n", "\tnode type verse with heading feature verse\n", "You can get them as a tuple with T.headings.\n", "\n", "Structure API:\n", "\tT.structure(node=None) gives the structure below node, or everything if node is None\n", "\tT.structurePretty(node=None) prints the structure below node, or everything if node is None\n", "\tT.top() gives all top-level nodes\n", "\tT.up(node) gives the (immediate) parent node\n", "\tT.down(node) gives the (immediate) children nodes\n", "\tT.headingFromNode(node) gives the heading of a node\n", "\tT.nodeFromHeading(heading) gives the node of a heading\n", "\tT.ndFromHd complete mapping from headings to nodes\n", "\tT.hdFromNd complete mapping from nodes to headings\n", "\tT.hdMult are all headings with their nodes that occur multiple times\n", "\n", "There are 8230 structural elements in the dataset.\n", "\n" ] } ], "source": [ "T.structureInfo()" ] }, { "cell_type": "code", "execution_count": 133, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',\n", " 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',\n", " 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',\n", " 'Converter_version': '0.1.6 (moved all phrases and claused to wordgroup nodes)',\n", " 'Convertor_source': 'https://github.com/tonyjurg/n1904_lft',\n", " 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat',\n", " 'Editors': 'Nestle',\n", " 'Name': 'Greek New Testament (NA1904)',\n", " 'TextFabric version': '11.2.3',\n", " 'Version': '1904',\n", " 'fmt:text-orig-full': '{word}{after}',\n", " 'sectionFeatures': 'book,chapter,verse',\n", " 'sectionTypes': 'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " 'writtenBy': 'Text-Fabric',\n", " 'dateWritten': '2023-05-02T15:20:37Z'}" ] }, "execution_count": 133, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TF.features['otext'].metaData\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Running text fabric browser \n", "##### [Back to TOC](#TOC)" ] }, { "cell_type": "code", "execution_count": 134, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "^C\n" ] } ], "source": [ "!text-fabric app " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!text-fabric app -k" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0.30s 37683 results\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ " 0.31s ERROR in table(): unknown display option \"multiFeature=True\"\n" ] }, { "data": { "text/plain": [ "''" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "search1 = '''\n", "book book=Matthew\n", " word wordrole=p\n", " wg wgrole=p\n", "\n", "'''\n", "Search1=NA.search(search1)\n", "NA.table(Search1,start=1, end=20, condensed=True, MultiFeature=True)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "ename": "SyntaxError", "evalue": "Missing parentheses in call to 'print'. Did you mean print(VERSION)? (3831010425.py, line 1)", "output_type": "error", "traceback": [ "\u001b[1;36m Input \u001b[1;32mIn [28]\u001b[1;36m\u001b[0m\n\u001b[1;33m print VERSION\u001b[0m\n\u001b[1;37m ^\u001b[0m\n\u001b[1;31mSyntaxError\u001b[0m\u001b[1;31m:\u001b[0m Missing parentheses in call to 'print'. Did you mean print(VERSION)?\n" ] } ], "source": [] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: text-fabric in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (11.2.3)\n", "Collecting text-fabric\n", " Downloading text_fabric-11.4.5-py3-none-any.whl (9.6 MB)\n", "Requirement already satisfied: wheel in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from text-fabric) (0.37.1)\n", "Collecting pyarrow\n", " Downloading pyarrow-12.0.0-cp39-cp39-win_amd64.whl (21.5 MB)\n", "Requirement already satisfied: pyyaml>=5.3 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from text-fabric) (6.0)\n", "Requirement already satisfied: pandas in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from text-fabric) (1.4.2)\n", "Collecting markdown>=3.4.1\n", " Using cached Markdown-3.4.3-py3-none-any.whl (93 kB)\n", "Requirement already satisfied: lxml in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from text-fabric) (4.8.0)\n", "Requirement already satisfied: ipython in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from text-fabric) (8.2.0)\n", "Requirement already satisfied: importlib-metadata>=4.4 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from markdown>=3.4.1->text-fabric) (4.11.3)\n", "Requirement already satisfied: zipp>=0.5 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from importlib-metadata>=4.4->markdown>=3.4.1->text-fabric) (3.7.0)\n", "Requirement already satisfied: traitlets>=5 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (5.1.1)\n", "Requirement already satisfied: matplotlib-inline in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.1.2)\n", "Requirement already satisfied: pickleshare in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.7.5)\n", "Requirement already satisfied: decorator in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (5.1.1)\n", "Requirement already satisfied: stack-data in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.2.0)\n", "Requirement already satisfied: jedi>=0.16 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.18.1)\n", "Requirement already satisfied: setuptools>=18.5 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (61.2.0)\n", "Requirement already satisfied: backcall in 
c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.2.0)\n", "Requirement already satisfied: pygments>=2.4.0 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (2.11.2)\n", "Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (3.0.20)\n", "Requirement already satisfied: colorama in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from ipython->text-fabric) (0.4.4)\n", "Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from jedi>=0.16->ipython->text-fabric) (0.8.3)\n", "Requirement already satisfied: wcwidth in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython->text-fabric) (0.2.5)\n", "Requirement already satisfied: pytz>=2020.1 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from pandas->text-fabric) (2021.3)\n", "Requirement already satisfied: python-dateutil>=2.8.1 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from pandas->text-fabric) (2.8.2)\n", "Requirement already satisfied: numpy>=1.18.5 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from pandas->text-fabric) (1.21.5)\n", "Requirement already satisfied: six>=1.5 in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from python-dateutil>=2.8.1->pandas->text-fabric) (1.16.0)\n", "Requirement already satisfied: pure-eval in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from stack-data->ipython->text-fabric) (0.2.2)\n", "Requirement already satisfied: executing in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from stack-data->ipython->text-fabric) (0.8.3)\n", "Requirement already satisfied: asttokens in c:\\users\\tonyj\\anaconda3\\lib\\site-packages (from stack-data->ipython->text-fabric) (2.0.5)\n", "Installing collected packages: pyarrow, markdown, text-fabric\n", " Attempting uninstall: markdown\n", " Found existing installation: Markdown 3.3.4\n", " Uninstalling Markdown-3.3.4:\n", " Successfully uninstalled Markdown-3.3.4\n", " Attempting uninstall: text-fabric\n", " Found existing installation: text-fabric 11.2.3\n", " Uninstalling text-fabric-11.2.3:\n", " Successfully uninstalled text-fabric-11.2.3\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\\\Users\\\\tonyj\\\\AppData\\\\Local\\\\Temp\\\\pip-uninstall-8_911h_s\\\\text-fabric.exe'\n", "Consider using the `--user` option or check the permissions.\n", "\n" ] } ], "source": [ "!pip install --upgrade text-fabric" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The text-fabric package is imported as 'tf'; print the installed version.\n", "from tf.parameters import VERSION\n", "print(VERSION)" ] },
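{ "cell_type": "markdown", "metadata": {}, "source": [ "As a final basic check, the next cell counts the nodes per node type. It is a minimal sketch that assumes the generic Text-Fabric API (F.otype.all and F.otype.s), made available above by hoist=globals(); the counts should match the node type table shown when the corpus was loaded." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch (assumes the generic Text-Fabric API hoisted above):\n", "# count the nodes per node type as a sanity check on the loaded corpus.\n", "for nodeType in F.otype.all:\n", "    print(f'{nodeType:<10} {len(F.otype.s(nodeType)):>8} nodes')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": 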
"ipython3", "version": "3.9.12" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "321.391px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }